vendor/0000755000175000017500000000000014172417313012650 5ustar mwhudsonmwhudsonvendor/glob/0000775000175000017500000000000014160055207013572 5ustar mwhudsonmwhudsonvendor/glob/.cargo-checksum.json0000664000175000017500000000013114160055207017431 0ustar mwhudsonmwhudson{"files":{},"package":"9b919933a397b79c37e33b77bb2aa3dc8eb6e165ad809e58ff75bc7db2e34574"}vendor/glob/LICENSE-APACHE0000664000175000017500000002513714160055207015526 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/glob/Cargo.toml0000664000175000017500000000164514160055207015530 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "glob" version = "0.3.0" authors = ["The Rust Project Developers"] description = "Support for matching file paths against Unix shell style patterns.\n" homepage = "https://github.com/rust-lang/glob" documentation = "https://docs.rs/glob/0.3.0" categories = ["filesystem"] license = "MIT/Apache-2.0" repository = "https://github.com/rust-lang/glob" [dev-dependencies.tempdir] version = "0.3" vendor/glob/src/0000775000175000017500000000000014160055207014361 5ustar mwhudsonmwhudsonvendor/glob/src/lib.rs0000664000175000017500000014160314160055207015502 0ustar mwhudsonmwhudson// Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Support for matching file paths against Unix shell style patterns. //! //! The `glob` and `glob_with` functions allow querying the filesystem for all //! files that match a particular pattern (similar to the libc `glob` function). //! The methods on the `Pattern` type provide functionality for checking if //! individual paths match a particular pattern (similar to the libc `fnmatch` //! function). //! //! For consistency across platforms, and for Windows support, this module //! is implemented entirely in Rust rather than deferring to the libc //! `glob`/`fnmatch` functions. //! //! # Examples //! //! To print all jpg files in `/media/` and all of its subdirectories. //! //! ```rust,no_run //! use glob::glob; //! //! for entry in glob("/media/**/*.jpg").expect("Failed to read glob pattern") { //! match entry { //! Ok(path) => println!("{:?}", path.display()), //! Err(e) => println!("{:?}", e), //! } //! } //! ``` //! //! To print all files containing the letter "a", case insensitive, in a `local` //! directory relative to the current working directory. This ignores errors //! instead of printing them. //! //! ```rust,no_run //! use glob::glob_with; //! use glob::MatchOptions; //! //! let options = MatchOptions { //! case_sensitive: false, //! require_literal_separator: false, //! require_literal_leading_dot: false, //! }; //! for entry in glob_with("local/*a*", options).unwrap() { //! if let Ok(path) = entry { //! println!("{:?}", path.display()) //! } //! } //! 
``` #![doc( html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png", html_favicon_url = "https://www.rust-lang.org/favicon.ico", html_root_url = "https://docs.rs/glob/0.3.0" )] #![deny(missing_docs)] #![cfg_attr(all(test, windows), feature(std_misc))] use std::cmp; use std::error::Error; use std::fmt; use std::fs; use std::io; use std::path::{self, Component, Path, PathBuf}; use std::str::FromStr; use CharSpecifier::{CharRange, SingleChar}; use MatchResult::{EntirePatternDoesntMatch, Match, SubPatternDoesntMatch}; use PatternToken::AnyExcept; use PatternToken::{AnyChar, AnyRecursiveSequence, AnySequence, AnyWithin, Char}; /// An iterator that yields `Path`s from the filesystem that match a particular /// pattern. /// /// Note that it yields `GlobResult` in order to report any `IoErrors` that may /// arise during iteration. If a directory matches but is unreadable, /// thereby preventing its contents from being checked for matches, a /// `GlobError` is returned to express this. /// /// See the `glob` function for more details. pub struct Paths { dir_patterns: Vec, require_dir: bool, options: MatchOptions, todo: Vec>, scope: Option, } /// Return an iterator that produces all the `Path`s that match the given /// pattern using default match options, which may be absolute or relative to /// the current working directory. /// /// This may return an error if the pattern is invalid. /// /// This method uses the default match options and is equivalent to calling /// `glob_with(pattern, MatchOptions::new())`. Use `glob_with` directly if you /// want to use non-default match options. /// /// When iterating, each result is a `GlobResult` which expresses the /// possibility that there was an `IoError` when attempting to read the contents /// of the matched path. In other words, each item returned by the iterator /// will either be an `Ok(Path)` if the path matched, or an `Err(GlobError)` if /// the path (partially) matched _but_ its contents could not be read in order /// to determine if its contents matched. /// /// See the `Paths` documentation for more information. /// /// # Examples /// /// Consider a directory `/media/pictures` containing only the files /// `kittens.jpg`, `puppies.jpg` and `hamsters.gif`: /// /// ```rust,no_run /// use glob::glob; /// /// for entry in glob("/media/pictures/*.jpg").unwrap() { /// match entry { /// Ok(path) => println!("{:?}", path.display()), /// /// // if the path matched but was unreadable, /// // thereby preventing its contents from matching /// Err(e) => println!("{:?}", e), /// } /// } /// ``` /// /// The above code will print: /// /// ```ignore /// /media/pictures/kittens.jpg /// /media/pictures/puppies.jpg /// ``` /// /// If you want to ignore unreadable paths, you can use something like /// `filter_map`: /// /// ```rust /// use glob::glob; /// use std::result::Result; /// /// for path in glob("/media/pictures/*.jpg").unwrap().filter_map(Result::ok) { /// println!("{}", path.display()); /// } /// ``` /// Paths are yielded in alphabetical order. pub fn glob(pattern: &str) -> Result { glob_with(pattern, MatchOptions::new()) } /// Return an iterator that produces all the `Path`s that match the given /// pattern using the specified match options, which may be absolute or relative /// to the current working directory. /// /// This may return an error if the pattern is invalid. /// /// This function accepts Unix shell style patterns as described by /// `Pattern::new(..)`. 
The options given are passed through unchanged to /// `Pattern::matches_with(..)` with the exception that /// `require_literal_separator` is always set to `true` regardless of the value /// passed to this function. /// /// Paths are yielded in alphabetical order. pub fn glob_with(pattern: &str, options: MatchOptions) -> Result { #[cfg(windows)] fn check_windows_verbatim(p: &Path) -> bool { use std::path::Prefix; match p.components().next() { Some(Component::Prefix(ref p)) => p.kind().is_verbatim(), _ => false, } } #[cfg(not(windows))] fn check_windows_verbatim(_: &Path) -> bool { false } #[cfg(windows)] fn to_scope(p: &Path) -> PathBuf { // FIXME handle volume relative paths here p.to_path_buf() } #[cfg(not(windows))] fn to_scope(p: &Path) -> PathBuf { p.to_path_buf() } // make sure that the pattern is valid first, else early return with error if let Err(err) = Pattern::new(pattern) { return Err(err); } let mut components = Path::new(pattern).components().peekable(); loop { match components.peek() { Some(&Component::Prefix(..)) | Some(&Component::RootDir) => { components.next(); } _ => break, } } let rest = components.map(|s| s.as_os_str()).collect::(); let normalized_pattern = Path::new(pattern).iter().collect::(); let root_len = normalized_pattern.to_str().unwrap().len() - rest.to_str().unwrap().len(); let root = if root_len > 0 { Some(Path::new(&pattern[..root_len])) } else { None }; if root_len > 0 && check_windows_verbatim(root.unwrap()) { // FIXME: How do we want to handle verbatim paths? I'm inclined to // return nothing, since we can't very well find all UNC shares with a // 1-letter server name. return Ok(Paths { dir_patterns: Vec::new(), require_dir: false, options, todo: Vec::new(), scope: None, }); } let scope = root.map_or_else(|| PathBuf::from("."), to_scope); let mut dir_patterns = Vec::new(); let components = pattern[cmp::min(root_len, pattern.len())..].split_terminator(path::is_separator); for component in components { dir_patterns.push(Pattern::new(component)?); } if root_len == pattern.len() { dir_patterns.push(Pattern { original: "".to_string(), tokens: Vec::new(), is_recursive: false, }); } let last_is_separator = pattern.chars().next_back().map(path::is_separator); let require_dir = last_is_separator == Some(true); let todo = Vec::new(); Ok(Paths { dir_patterns, require_dir, options, todo, scope: Some(scope), }) } /// A glob iteration error. /// /// This is typically returned when a particular path cannot be read /// to determine if its contents match the glob pattern. This is possible /// if the program lacks the appropriate permissions, for example. #[derive(Debug)] pub struct GlobError { path: PathBuf, error: io::Error, } impl GlobError { /// The Path that the error corresponds to. pub fn path(&self) -> &Path { &self.path } /// The error in question. pub fn error(&self) -> &io::Error { &self.error } /// Consumes self, returning the _raw_ underlying `io::Error` pub fn into_error(self) -> io::Error { self.error } } impl Error for GlobError { fn description(&self) -> &str { self.error.description() } fn cause(&self) -> Option<&Error> { Some(&self.error) } } impl fmt::Display for GlobError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, "attempting to read `{}` resulted in an error: {}", self.path.display(), self.error ) } } fn is_dir(p: &Path) -> bool { fs::metadata(p).map(|m| m.is_dir()).unwrap_or(false) } /// An alias for a glob iteration result. 
/// /// This represents either a matched path or a glob iteration error, /// such as failing to read a particular directory's contents. pub type GlobResult = Result; impl Iterator for Paths { type Item = GlobResult; fn next(&mut self) -> Option { // the todo buffer hasn't been initialized yet, so it's done at this // point rather than in glob() so that the errors are unified that is, // failing to fill the buffer is an iteration error construction of the // iterator (i.e. glob()) only fails if it fails to compile the Pattern if let Some(scope) = self.scope.take() { if !self.dir_patterns.is_empty() { // Shouldn't happen, but we're using -1 as a special index. assert!(self.dir_patterns.len() < !0 as usize); fill_todo(&mut self.todo, &self.dir_patterns, 0, &scope, self.options); } } loop { if self.dir_patterns.is_empty() || self.todo.is_empty() { return None; } let (path, mut idx) = match self.todo.pop().unwrap() { Ok(pair) => pair, Err(e) => return Some(Err(e)), }; // idx -1: was already checked by fill_todo, maybe path was '.' or // '..' that we can't match here because of normalization. if idx == !0 as usize { if self.require_dir && !is_dir(&path) { continue; } return Some(Ok(path)); } if self.dir_patterns[idx].is_recursive { let mut next = idx; // collapse consecutive recursive patterns while (next + 1) < self.dir_patterns.len() && self.dir_patterns[next + 1].is_recursive { next += 1; } if is_dir(&path) { // the path is a directory, so it's a match // push this directory's contents fill_todo( &mut self.todo, &self.dir_patterns, next, &path, self.options, ); if next == self.dir_patterns.len() - 1 { // pattern ends in recursive pattern, so return this // directory as a result return Some(Ok(path)); } else { // advanced to the next pattern for this path idx = next + 1; } } else if next == self.dir_patterns.len() - 1 { // not a directory and it's the last pattern, meaning no // match continue; } else { // advanced to the next pattern for this path idx = next + 1; } } // not recursive, so match normally if self.dir_patterns[idx].matches_with( { match path.file_name().and_then(|s| s.to_str()) { // FIXME (#9639): How do we handle non-utf8 filenames? // Ignore them for now; ideally we'd still match them // against a * None => continue, Some(x) => x, } }, self.options, ) { if idx == self.dir_patterns.len() - 1 { // it is not possible for a pattern to match a directory // *AND* its children so we don't need to check the // children if !self.require_dir || is_dir(&path) { return Some(Ok(path)); } } else { fill_todo( &mut self.todo, &self.dir_patterns, idx + 1, &path, self.options, ); } } } } } /// A pattern parsing error. #[derive(Debug)] #[allow(missing_copy_implementations)] pub struct PatternError { /// The approximate character index of where the error occurred. pub pos: usize, /// A message describing the error. pub msg: &'static str, } impl Error for PatternError { fn description(&self) -> &str { self.msg } } impl fmt::Display for PatternError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, "Pattern syntax error near position {}: {}", self.pos, self.msg ) } } /// A compiled Unix shell style pattern. /// /// - `?` matches any single character. /// /// - `*` matches any (possibly empty) sequence of characters. /// /// - `**` matches the current directory and arbitrary subdirectories. This /// sequence **must** form a single path component, so both `**a` and `b**` /// are invalid and will result in an error. 
A sequence of more than two /// consecutive `*` characters is also invalid. /// /// - `[...]` matches any character inside the brackets. Character sequences /// can also specify ranges of characters, as ordered by Unicode, so e.g. /// `[0-9]` specifies any character between 0 and 9 inclusive. An unclosed /// bracket is invalid. /// /// - `[!...]` is the negation of `[...]`, i.e. it matches any characters /// **not** in the brackets. /// /// - The metacharacters `?`, `*`, `[`, `]` can be matched by using brackets /// (e.g. `[?]`). When a `]` occurs immediately following `[` or `[!` then it /// is interpreted as being part of, rather then ending, the character set, so /// `]` and NOT `]` can be matched by `[]]` and `[!]]` respectively. The `-` /// character can be specified inside a character sequence pattern by placing /// it at the start or the end, e.g. `[abc-]`. #[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Debug)] pub struct Pattern { original: String, tokens: Vec, is_recursive: bool, } /// Show the original glob pattern. impl fmt::Display for Pattern { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { self.original.fmt(f) } } impl FromStr for Pattern { type Err = PatternError; fn from_str(s: &str) -> Result { Self::new(s) } } #[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)] enum PatternToken { Char(char), AnyChar, AnySequence, AnyRecursiveSequence, AnyWithin(Vec), AnyExcept(Vec), } #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)] enum CharSpecifier { SingleChar(char), CharRange(char, char), } #[derive(Copy, Clone, PartialEq)] enum MatchResult { Match, SubPatternDoesntMatch, EntirePatternDoesntMatch, } const ERROR_WILDCARDS: &str = "wildcards are either regular `*` or recursive `**`"; const ERROR_RECURSIVE_WILDCARDS: &str = "recursive wildcards must form a single path \ component"; const ERROR_INVALID_RANGE: &str = "invalid range pattern"; impl Pattern { /// This function compiles Unix shell style patterns. /// /// An invalid glob pattern will yield a `PatternError`. pub fn new(pattern: &str) -> Result { let chars = pattern.chars().collect::>(); let mut tokens = Vec::new(); let mut is_recursive = false; let mut i = 0; while i < chars.len() { match chars[i] { '?' => { tokens.push(AnyChar); i += 1; } '*' => { let old = i; while i < chars.len() && chars[i] == '*' { i += 1; } let count = i - old; if count > 2 { return Err(PatternError { pos: old + 2, msg: ERROR_WILDCARDS, }); } else if count == 2 { // ** can only be an entire path component // i.e. a/**/b is valid, but a**/b or a/**b is not // invalid matches are treated literally let is_valid = if i == 2 || path::is_separator(chars[i - count - 1]) { // it ends in a '/' if i < chars.len() && path::is_separator(chars[i]) { i += 1; true // or the pattern ends here // this enables the existing globbing mechanism } else if i == chars.len() { true // `**` ends in non-separator } else { return Err(PatternError { pos: i, msg: ERROR_RECURSIVE_WILDCARDS, }); } // `**` begins with non-separator } else { return Err(PatternError { pos: old - 1, msg: ERROR_RECURSIVE_WILDCARDS, }); }; let tokens_len = tokens.len(); if is_valid { // collapse consecutive AnyRecursiveSequence to a // single one if !(tokens_len > 1 && tokens[tokens_len - 1] == AnyRecursiveSequence) { is_recursive = true; tokens.push(AnyRecursiveSequence); } } } else { tokens.push(AnySequence); } } '[' => { if i + 4 <= chars.len() && chars[i + 1] == '!' 
{ match chars[i + 3..].iter().position(|x| *x == ']') { None => (), Some(j) => { let chars = &chars[i + 2..i + 3 + j]; let cs = parse_char_specifiers(chars); tokens.push(AnyExcept(cs)); i += j + 4; continue; } } } else if i + 3 <= chars.len() && chars[i + 1] != '!' { match chars[i + 2..].iter().position(|x| *x == ']') { None => (), Some(j) => { let cs = parse_char_specifiers(&chars[i + 1..i + 2 + j]); tokens.push(AnyWithin(cs)); i += j + 3; continue; } } } // if we get here then this is not a valid range pattern return Err(PatternError { pos: i, msg: ERROR_INVALID_RANGE, }); } c => { tokens.push(Char(c)); i += 1; } } } Ok(Self { tokens, original: pattern.to_string(), is_recursive, }) } /// Escape metacharacters within the given string by surrounding them in /// brackets. The resulting string will, when compiled into a `Pattern`, /// match the input string and nothing else. pub fn escape(s: &str) -> String { let mut escaped = String::new(); for c in s.chars() { match c { // note that ! does not need escaping because it is only special // inside brackets '?' | '*' | '[' | ']' => { escaped.push('['); escaped.push(c); escaped.push(']'); } c => { escaped.push(c); } } } escaped } /// Return if the given `str` matches this `Pattern` using the default /// match options (i.e. `MatchOptions::new()`). /// /// # Examples /// /// ```rust /// use glob::Pattern; /// /// assert!(Pattern::new("c?t").unwrap().matches("cat")); /// assert!(Pattern::new("k[!e]tteh").unwrap().matches("kitteh")); /// assert!(Pattern::new("d*g").unwrap().matches("doog")); /// ``` pub fn matches(&self, str: &str) -> bool { self.matches_with(str, MatchOptions::new()) } /// Return if the given `Path`, when converted to a `str`, matches this /// `Pattern` using the default match options (i.e. `MatchOptions::new()`). pub fn matches_path(&self, path: &Path) -> bool { // FIXME (#9639): This needs to handle non-utf8 paths path.to_str().map_or(false, |s| self.matches(s)) } /// Return if the given `str` matches this `Pattern` using the specified /// match options. pub fn matches_with(&self, str: &str, options: MatchOptions) -> bool { self.matches_from(true, str.chars(), 0, options) == Match } /// Return if the given `Path`, when converted to a `str`, matches this /// `Pattern` using the specified match options. pub fn matches_path_with(&self, path: &Path, options: MatchOptions) -> bool { // FIXME (#9639): This needs to handle non-utf8 paths path.to_str() .map_or(false, |s| self.matches_with(s, options)) } /// Access the original glob pattern. pub fn as_str(&self) -> &str { &self.original } fn matches_from( &self, mut follows_separator: bool, mut file: std::str::Chars, i: usize, options: MatchOptions, ) -> MatchResult { for (ti, token) in self.tokens[i..].iter().enumerate() { match *token { AnySequence | AnyRecursiveSequence => { // ** must be at the start. debug_assert!(match *token { AnyRecursiveSequence => follows_separator, _ => true, }); // Empty match match self.matches_from(follows_separator, file.clone(), i + ti + 1, options) { SubPatternDoesntMatch => (), // keep trying m => return m, }; while let Some(c) = file.next() { if follows_separator && options.require_literal_leading_dot && c == '.' 
{ return SubPatternDoesntMatch; } follows_separator = path::is_separator(c); match *token { AnyRecursiveSequence if !follows_separator => continue, AnySequence if options.require_literal_separator && follows_separator => { return SubPatternDoesntMatch } _ => (), } match self.matches_from( follows_separator, file.clone(), i + ti + 1, options, ) { SubPatternDoesntMatch => (), // keep trying m => return m, } } } _ => { let c = match file.next() { Some(c) => c, None => return EntirePatternDoesntMatch, }; let is_sep = path::is_separator(c); if !match *token { AnyChar | AnyWithin(..) | AnyExcept(..) if (options.require_literal_separator && is_sep) || (follows_separator && options.require_literal_leading_dot && c == '.') => { false } AnyChar => true, AnyWithin(ref specifiers) => in_char_specifiers(&specifiers, c, options), AnyExcept(ref specifiers) => !in_char_specifiers(&specifiers, c, options), Char(c2) => chars_eq(c, c2, options.case_sensitive), AnySequence | AnyRecursiveSequence => unreachable!(), } { return SubPatternDoesntMatch; } follows_separator = is_sep; } } } // Iter is fused. if file.next().is_none() { Match } else { SubPatternDoesntMatch } } } // Fills `todo` with paths under `path` to be matched by `patterns[idx]`, // special-casing patterns to match `.` and `..`, and avoiding `readdir()` // calls when there are no metacharacters in the pattern. fn fill_todo( todo: &mut Vec>, patterns: &[Pattern], idx: usize, path: &Path, options: MatchOptions, ) { // convert a pattern that's just many Char(_) to a string fn pattern_as_str(pattern: &Pattern) -> Option { let mut s = String::new(); for token in &pattern.tokens { match *token { Char(c) => s.push(c), _ => return None, } } Some(s) } let add = |todo: &mut Vec<_>, next_path: PathBuf| { if idx + 1 == patterns.len() { // We know it's good, so don't make the iterator match this path // against the pattern again. In particular, it can't match // . or .. globs since these never show up as path components. todo.push(Ok((next_path, !0 as usize))); } else { fill_todo(todo, patterns, idx + 1, &next_path, options); } }; let pattern = &patterns[idx]; let is_dir = is_dir(path); let curdir = path == Path::new("."); match pattern_as_str(pattern) { Some(s) => { // This pattern component doesn't have any metacharacters, so we // don't need to read the current directory to know where to // continue. So instead of passing control back to the iterator, // we can just check for that one entry and potentially recurse // right away. let special = "." == s || ".." == s; let next_path = if curdir { PathBuf::from(s) } else { path.join(&s) }; if (special && is_dir) || (!special && fs::metadata(&next_path).is_ok()) { add(todo, next_path); } } None if is_dir => { let dirs = fs::read_dir(path).and_then(|d| { d.map(|e| { e.map(|e| { if curdir { PathBuf::from(e.path().file_name().unwrap()) } else { e.path() } }) }) .collect::, _>>() }); match dirs { Ok(mut children) => { children.sort_by(|p1, p2| p2.file_name().cmp(&p1.file_name())); todo.extend(children.into_iter().map(|x| Ok((x, idx)))); // Matching the special directory entries . and .. that // refer to the current and parent directory respectively // requires that the pattern has a leading dot, even if the // `MatchOptions` field `require_literal_leading_dot` is not // set. 
if !pattern.tokens.is_empty() && pattern.tokens[0] == Char('.') { for &special in &[".", ".."] { if pattern.matches_with(special, options) { add(todo, path.join(special)); } } } } Err(e) => { todo.push(Err(GlobError { path: path.to_path_buf(), error: e, })); } } } None => { // not a directory, nothing more to find } } } fn parse_char_specifiers(s: &[char]) -> Vec { let mut cs = Vec::new(); let mut i = 0; while i < s.len() { if i + 3 <= s.len() && s[i + 1] == '-' { cs.push(CharRange(s[i], s[i + 2])); i += 3; } else { cs.push(SingleChar(s[i])); i += 1; } } cs } fn in_char_specifiers(specifiers: &[CharSpecifier], c: char, options: MatchOptions) -> bool { for &specifier in specifiers.iter() { match specifier { SingleChar(sc) => { if chars_eq(c, sc, options.case_sensitive) { return true; } } CharRange(start, end) => { // FIXME: work with non-ascii chars properly (issue #1347) if !options.case_sensitive && c.is_ascii() && start.is_ascii() && end.is_ascii() { let start = start.to_ascii_lowercase(); let end = end.to_ascii_lowercase(); let start_up = start.to_uppercase().next().unwrap(); let end_up = end.to_uppercase().next().unwrap(); // only allow case insensitive matching when // both start and end are within a-z or A-Z if start != start_up && end != end_up { let c = c.to_ascii_lowercase(); if c >= start && c <= end { return true; } } } if c >= start && c <= end { return true; } } } } false } /// A helper function to determine if two chars are (possibly case-insensitively) equal. fn chars_eq(a: char, b: char, case_sensitive: bool) -> bool { if cfg!(windows) && path::is_separator(a) && path::is_separator(b) { true } else if !case_sensitive && a.is_ascii() && b.is_ascii() { // FIXME: work with non-ascii chars properly (issue #9084) a.to_ascii_lowercase() == b.to_ascii_lowercase() } else { a == b } } /// Configuration options to modify the behaviour of `Pattern::matches_with(..)`. #[allow(missing_copy_implementations)] #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Default)] pub struct MatchOptions { /// Whether or not patterns should be matched in a case-sensitive manner. /// This currently only considers upper/lower case relationships between /// ASCII characters, but in future this might be extended to work with /// Unicode. pub case_sensitive: bool, /// Whether or not path-component separator characters (e.g. `/` on /// Posix) must be matched by a literal `/`, rather than by `*` or `?` or /// `[...]`. pub require_literal_separator: bool, /// Whether or not paths that contain components that start with a `.` /// will require that `.` appears literally in the pattern; `*`, `?`, `**`, /// or `[...]` will not match. This is useful because such files are /// conventionally considered hidden on Unix systems and it might be /// desirable to skip them when listing files. pub require_literal_leading_dot: bool, } impl MatchOptions { /// Constructs a new `MatchOptions` with default field values. This is used /// when calling functions that do not take an explicit `MatchOptions` /// parameter. 
/// /// This function always returns this value: /// /// ```rust,ignore /// MatchOptions { /// case_sensitive: true, /// require_literal_separator: false, /// require_literal_leading_dot: false /// } /// ``` pub fn new() -> Self { Self { case_sensitive: true, require_literal_separator: false, require_literal_leading_dot: false, } } } #[cfg(test)] mod test { use super::{glob, MatchOptions, Pattern}; use std::path::Path; #[test] fn test_pattern_from_str() { assert!("a*b".parse::().unwrap().matches("a_b")); assert!("a/**b".parse::().unwrap_err().pos == 4); } #[test] fn test_wildcard_errors() { assert!(Pattern::new("a/**b").unwrap_err().pos == 4); assert!(Pattern::new("a/bc**").unwrap_err().pos == 3); assert!(Pattern::new("a/*****").unwrap_err().pos == 4); assert!(Pattern::new("a/b**c**d").unwrap_err().pos == 2); assert!(Pattern::new("a**b").unwrap_err().pos == 0); } #[test] fn test_unclosed_bracket_errors() { assert!(Pattern::new("abc[def").unwrap_err().pos == 3); assert!(Pattern::new("abc[!def").unwrap_err().pos == 3); assert!(Pattern::new("abc[").unwrap_err().pos == 3); assert!(Pattern::new("abc[!").unwrap_err().pos == 3); assert!(Pattern::new("abc[d").unwrap_err().pos == 3); assert!(Pattern::new("abc[!d").unwrap_err().pos == 3); assert!(Pattern::new("abc[]").unwrap_err().pos == 3); assert!(Pattern::new("abc[!]").unwrap_err().pos == 3); } #[test] fn test_glob_errors() { assert!(glob("a/**b").err().unwrap().pos == 4); assert!(glob("abc[def").err().unwrap().pos == 3); } // this test assumes that there is a /root directory and that // the user running this test is not root or otherwise doesn't // have permission to read its contents #[cfg(all(unix, not(target_os = "macos")))] #[test] fn test_iteration_errors() { use std::io; let mut iter = glob("/root/*").unwrap(); // GlobErrors shouldn't halt iteration let next = iter.next(); assert!(next.is_some()); let err = next.unwrap(); assert!(err.is_err()); let err = err.err().unwrap(); assert!(err.path() == Path::new("/root")); assert!(err.error().kind() == io::ErrorKind::PermissionDenied); } #[test] fn test_absolute_pattern() { assert!(glob("/").unwrap().next().is_some()); assert!(glob("//").unwrap().next().is_some()); // assume that the filesystem is not empty! 
assert!(glob("/*").unwrap().next().is_some()); #[cfg(not(windows))] fn win() {} #[cfg(windows)] fn win() { use std::env::current_dir; use std::ffi::AsOsStr; // check windows absolute paths with host/device components let root_with_device = current_dir() .ok() .and_then(|p| p.prefix().map(|p| p.join("*"))) .unwrap(); // FIXME (#9639): This needs to handle non-utf8 paths assert!(glob(root_with_device.as_os_str().to_str().unwrap()) .unwrap() .next() .is_some()); } win() } #[test] fn test_wildcards() { assert!(Pattern::new("a*b").unwrap().matches("a_b")); assert!(Pattern::new("a*b*c").unwrap().matches("abc")); assert!(!Pattern::new("a*b*c").unwrap().matches("abcd")); assert!(Pattern::new("a*b*c").unwrap().matches("a_b_c")); assert!(Pattern::new("a*b*c").unwrap().matches("a___b___c")); assert!(Pattern::new("abc*abc*abc") .unwrap() .matches("abcabcabcabcabcabcabc")); assert!(!Pattern::new("abc*abc*abc") .unwrap() .matches("abcabcabcabcabcabcabca")); assert!(Pattern::new("a*a*a*a*a*a*a*a*a") .unwrap() .matches("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")); assert!(Pattern::new("a*b[xyz]c*d").unwrap().matches("abxcdbxcddd")); } #[test] fn test_recursive_wildcards() { let pat = Pattern::new("some/**/needle.txt").unwrap(); assert!(pat.matches("some/needle.txt")); assert!(pat.matches("some/one/needle.txt")); assert!(pat.matches("some/one/two/needle.txt")); assert!(pat.matches("some/other/needle.txt")); assert!(!pat.matches("some/other/notthis.txt")); // a single ** should be valid, for globs // Should accept anything let pat = Pattern::new("**").unwrap(); assert!(pat.is_recursive); assert!(pat.matches("abcde")); assert!(pat.matches("")); assert!(pat.matches(".asdf")); assert!(pat.matches("/x/.asdf")); // collapse consecutive wildcards let pat = Pattern::new("some/**/**/needle.txt").unwrap(); assert!(pat.matches("some/needle.txt")); assert!(pat.matches("some/one/needle.txt")); assert!(pat.matches("some/one/two/needle.txt")); assert!(pat.matches("some/other/needle.txt")); assert!(!pat.matches("some/other/notthis.txt")); // ** can begin the pattern let pat = Pattern::new("**/test").unwrap(); assert!(pat.matches("one/two/test")); assert!(pat.matches("one/test")); assert!(pat.matches("test")); // /** can begin the pattern let pat = Pattern::new("/**/test").unwrap(); assert!(pat.matches("/one/two/test")); assert!(pat.matches("/one/test")); assert!(pat.matches("/test")); assert!(!pat.matches("/one/notthis")); assert!(!pat.matches("/notthis")); // Only start sub-patterns on start of path segment. 
let pat = Pattern::new("**/.*").unwrap(); assert!(pat.matches(".abc")); assert!(pat.matches("abc/.abc")); assert!(!pat.matches("ab.c")); assert!(!pat.matches("abc/ab.c")); } #[test] fn test_lots_of_files() { // this is a good test because it touches lots of differently named files glob("/*/*/*/*").unwrap().skip(10000).next(); } #[test] fn test_range_pattern() { let pat = Pattern::new("a[0-9]b").unwrap(); for i in 0..10 { assert!(pat.matches(&format!("a{}b", i))); } assert!(!pat.matches("a_b")); let pat = Pattern::new("a[!0-9]b").unwrap(); for i in 0..10 { assert!(!pat.matches(&format!("a{}b", i))); } assert!(pat.matches("a_b")); let pats = ["[a-z123]", "[1a-z23]", "[123a-z]"]; for &p in pats.iter() { let pat = Pattern::new(p).unwrap(); for c in "abcdefghijklmnopqrstuvwxyz".chars() { assert!(pat.matches(&c.to_string())); } for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ".chars() { let options = MatchOptions { case_sensitive: false, ..MatchOptions::new() }; assert!(pat.matches_with(&c.to_string(), options)); } assert!(pat.matches("1")); assert!(pat.matches("2")); assert!(pat.matches("3")); } let pats = ["[abc-]", "[-abc]", "[a-c-]"]; for &p in pats.iter() { let pat = Pattern::new(p).unwrap(); assert!(pat.matches("a")); assert!(pat.matches("b")); assert!(pat.matches("c")); assert!(pat.matches("-")); assert!(!pat.matches("d")); } let pat = Pattern::new("[2-1]").unwrap(); assert!(!pat.matches("1")); assert!(!pat.matches("2")); assert!(Pattern::new("[-]").unwrap().matches("-")); assert!(!Pattern::new("[!-]").unwrap().matches("-")); } #[test] fn test_pattern_matches() { let txt_pat = Pattern::new("*hello.txt").unwrap(); assert!(txt_pat.matches("hello.txt")); assert!(txt_pat.matches("gareth_says_hello.txt")); assert!(txt_pat.matches("some/path/to/hello.txt")); assert!(txt_pat.matches("some\\path\\to\\hello.txt")); assert!(txt_pat.matches("/an/absolute/path/to/hello.txt")); assert!(!txt_pat.matches("hello.txt-and-then-some")); assert!(!txt_pat.matches("goodbye.txt")); let dir_pat = Pattern::new("*some/path/to/hello.txt").unwrap(); assert!(dir_pat.matches("some/path/to/hello.txt")); assert!(dir_pat.matches("a/bigger/some/path/to/hello.txt")); assert!(!dir_pat.matches("some/path/to/hello.txt-and-then-some")); assert!(!dir_pat.matches("some/other/path/to/hello.txt")); } #[test] fn test_pattern_escape() { let s = "_[_]_?_*_!_"; assert_eq!(Pattern::escape(s), "_[[]_[]]_[?]_[*]_!_".to_string()); assert!(Pattern::new(&Pattern::escape(s)).unwrap().matches(s)); } #[test] fn test_pattern_matches_case_insensitive() { let pat = Pattern::new("aBcDeFg").unwrap(); let options = MatchOptions { case_sensitive: false, require_literal_separator: false, require_literal_leading_dot: false, }; assert!(pat.matches_with("aBcDeFg", options)); assert!(pat.matches_with("abcdefg", options)); assert!(pat.matches_with("ABCDEFG", options)); assert!(pat.matches_with("AbCdEfG", options)); } #[test] fn test_pattern_matches_case_insensitive_range() { let pat_within = Pattern::new("[a]").unwrap(); let pat_except = Pattern::new("[!a]").unwrap(); let options_case_insensitive = MatchOptions { case_sensitive: false, require_literal_separator: false, require_literal_leading_dot: false, }; let options_case_sensitive = MatchOptions { case_sensitive: true, require_literal_separator: false, require_literal_leading_dot: false, }; assert!(pat_within.matches_with("a", options_case_insensitive)); assert!(pat_within.matches_with("A", options_case_insensitive)); assert!(!pat_within.matches_with("A", options_case_sensitive)); 
assert!(!pat_except.matches_with("a", options_case_insensitive)); assert!(!pat_except.matches_with("A", options_case_insensitive)); assert!(pat_except.matches_with("A", options_case_sensitive)); } #[test] fn test_pattern_matches_require_literal_separator() { let options_require_literal = MatchOptions { case_sensitive: true, require_literal_separator: true, require_literal_leading_dot: false, }; let options_not_require_literal = MatchOptions { case_sensitive: true, require_literal_separator: false, require_literal_leading_dot: false, }; assert!(Pattern::new("abc/def") .unwrap() .matches_with("abc/def", options_require_literal)); assert!(!Pattern::new("abc?def") .unwrap() .matches_with("abc/def", options_require_literal)); assert!(!Pattern::new("abc*def") .unwrap() .matches_with("abc/def", options_require_literal)); assert!(!Pattern::new("abc[/]def") .unwrap() .matches_with("abc/def", options_require_literal)); assert!(Pattern::new("abc/def") .unwrap() .matches_with("abc/def", options_not_require_literal)); assert!(Pattern::new("abc?def") .unwrap() .matches_with("abc/def", options_not_require_literal)); assert!(Pattern::new("abc*def") .unwrap() .matches_with("abc/def", options_not_require_literal)); assert!(Pattern::new("abc[/]def") .unwrap() .matches_with("abc/def", options_not_require_literal)); } #[test] fn test_pattern_matches_require_literal_leading_dot() { let options_require_literal_leading_dot = MatchOptions { case_sensitive: true, require_literal_separator: false, require_literal_leading_dot: true, }; let options_not_require_literal_leading_dot = MatchOptions { case_sensitive: true, require_literal_separator: false, require_literal_leading_dot: false, }; let f = |options| { Pattern::new("*.txt") .unwrap() .matches_with(".hello.txt", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(!f(options_require_literal_leading_dot)); let f = |options| { Pattern::new(".*.*") .unwrap() .matches_with(".hello.txt", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(f(options_require_literal_leading_dot)); let f = |options| { Pattern::new("aaa/bbb/*") .unwrap() .matches_with("aaa/bbb/.ccc", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(!f(options_require_literal_leading_dot)); let f = |options| { Pattern::new("aaa/bbb/*") .unwrap() .matches_with("aaa/bbb/c.c.c.", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(f(options_require_literal_leading_dot)); let f = |options| { Pattern::new("aaa/bbb/.*") .unwrap() .matches_with("aaa/bbb/.ccc", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(f(options_require_literal_leading_dot)); let f = |options| { Pattern::new("aaa/?bbb") .unwrap() .matches_with("aaa/.bbb", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(!f(options_require_literal_leading_dot)); let f = |options| { Pattern::new("aaa/[.]bbb") .unwrap() .matches_with("aaa/.bbb", options) }; assert!(f(options_not_require_literal_leading_dot)); assert!(!f(options_require_literal_leading_dot)); let f = |options| Pattern::new("**/*").unwrap().matches_with(".bbb", options); assert!(f(options_not_require_literal_leading_dot)); assert!(!f(options_require_literal_leading_dot)); } #[test] fn test_matches_path() { // on windows, (Path::new("a/b").as_str().unwrap() == "a\\b"), so this // tests that / and \ are considered equivalent on windows assert!(Pattern::new("a/b").unwrap().matches_path(&Path::new("a/b"))); } #[test] fn test_path_join() { let pattern = 
Path::new("one").join(&Path::new("**/*.rs")); assert!(Pattern::new(pattern.to_str().unwrap()).is_ok()); } } vendor/glob/tests/0000775000175000017500000000000014160055207014734 5ustar mwhudsonmwhudsonvendor/glob/tests/glob-std.rs0000664000175000017500000002644414160055207017027 0ustar mwhudsonmwhudson// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-windows TempDir may cause IoError on windows: #10462 #![cfg_attr(test, deny(warnings))] extern crate glob; extern crate tempdir; use glob::glob; use std::env; use std::fs; use std::path::PathBuf; use tempdir::TempDir; #[test] fn main() { fn mk_file(path: &str, directory: bool) { if directory { fs::create_dir(path).unwrap(); } else { fs::File::create(path).unwrap(); } } fn glob_vec(pattern: &str) -> Vec { glob(pattern).unwrap().map(|r| r.unwrap()).collect() } let root = TempDir::new("glob-tests"); let root = root.ok().expect("Should have created a temp directory"); assert!(env::set_current_dir(root.path()).is_ok()); mk_file("aaa", true); mk_file("aaa/apple", true); mk_file("aaa/orange", true); mk_file("aaa/tomato", true); mk_file("aaa/tomato/tomato.txt", false); mk_file("aaa/tomato/tomoto.txt", false); mk_file("bbb", true); mk_file("bbb/specials", true); mk_file("bbb/specials/!", false); // windows does not allow `*` or `?` characters to exist in filenames if env::consts::FAMILY != "windows" { mk_file("bbb/specials/*", false); mk_file("bbb/specials/?", false); } mk_file("bbb/specials/[", false); mk_file("bbb/specials/]", false); mk_file("ccc", true); mk_file("xyz", true); mk_file("xyz/x", false); mk_file("xyz/y", false); mk_file("xyz/z", false); mk_file("r", true); mk_file("r/current_dir.md", false); mk_file("r/one", true); mk_file("r/one/a.md", false); mk_file("r/one/another", true); mk_file("r/one/another/a.md", false); mk_file("r/one/another/deep", true); mk_file("r/one/another/deep/spelunking.md", false); mk_file("r/another", true); mk_file("r/another/a.md", false); mk_file("r/two", true); mk_file("r/two/b.md", false); mk_file("r/three", true); mk_file("r/three/c.md", false); // all recursive entities assert_eq!( glob_vec("r/**"), vec!( PathBuf::from("r/another"), PathBuf::from("r/one"), PathBuf::from("r/one/another"), PathBuf::from("r/one/another/deep"), PathBuf::from("r/three"), PathBuf::from("r/two") ) ); // collapse consecutive recursive patterns assert_eq!( glob_vec("r/**/**"), vec!( PathBuf::from("r/another"), PathBuf::from("r/one"), PathBuf::from("r/one/another"), PathBuf::from("r/one/another/deep"), PathBuf::from("r/three"), PathBuf::from("r/two") ) ); assert_eq!( glob_vec("r/**/*"), vec!( PathBuf::from("r/another"), PathBuf::from("r/another/a.md"), PathBuf::from("r/current_dir.md"), PathBuf::from("r/one"), PathBuf::from("r/one/a.md"), PathBuf::from("r/one/another"), PathBuf::from("r/one/another/a.md"), PathBuf::from("r/one/another/deep"), PathBuf::from("r/one/another/deep/spelunking.md"), PathBuf::from("r/three"), PathBuf::from("r/three/c.md"), PathBuf::from("r/two"), PathBuf::from("r/two/b.md") ) ); // followed by a wildcard assert_eq!( glob_vec("r/**/*.md"), vec!( PathBuf::from("r/another/a.md"), PathBuf::from("r/current_dir.md"), PathBuf::from("r/one/a.md"), PathBuf::from("r/one/another/a.md"), 
PathBuf::from("r/one/another/deep/spelunking.md"), PathBuf::from("r/three/c.md"), PathBuf::from("r/two/b.md") ) ); // followed by a precise pattern assert_eq!( glob_vec("r/one/**/a.md"), vec!( PathBuf::from("r/one/a.md"), PathBuf::from("r/one/another/a.md") ) ); // followed by another recursive pattern // collapses consecutive recursives into one assert_eq!( glob_vec("r/one/**/**/a.md"), vec!( PathBuf::from("r/one/a.md"), PathBuf::from("r/one/another/a.md") ) ); // followed by two precise patterns assert_eq!( glob_vec("r/**/another/a.md"), vec!( PathBuf::from("r/another/a.md"), PathBuf::from("r/one/another/a.md") ) ); assert_eq!(glob_vec(""), Vec::::new()); assert_eq!(glob_vec("."), vec!(PathBuf::from("."))); assert_eq!(glob_vec(".."), vec!(PathBuf::from(".."))); assert_eq!(glob_vec("aaa"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aaa/"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("a"), Vec::::new()); assert_eq!(glob_vec("aa"), Vec::::new()); assert_eq!(glob_vec("aaaa"), Vec::::new()); assert_eq!(glob_vec("aaa/apple"), vec!(PathBuf::from("aaa/apple"))); assert_eq!(glob_vec("aaa/apple/nope"), Vec::::new()); // windows should support both / and \ as directory separators if env::consts::FAMILY == "windows" { assert_eq!(glob_vec("aaa\\apple"), vec!(PathBuf::from("aaa/apple"))); } assert_eq!( glob_vec("???/"), vec!( PathBuf::from("aaa"), PathBuf::from("bbb"), PathBuf::from("ccc"), PathBuf::from("xyz") ) ); assert_eq!( glob_vec("aaa/tomato/tom?to.txt"), vec!( PathBuf::from("aaa/tomato/tomato.txt"), PathBuf::from("aaa/tomato/tomoto.txt") ) ); assert_eq!( glob_vec("xyz/?"), vec!( PathBuf::from("xyz/x"), PathBuf::from("xyz/y"), PathBuf::from("xyz/z") ) ); assert_eq!(glob_vec("a*"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("*a*"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("a*a"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aaa*"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("*aaa"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("*aaa*"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("*a*a*a*"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aaa*/"), vec!(PathBuf::from("aaa"))); assert_eq!( glob_vec("aaa/*"), vec!( PathBuf::from("aaa/apple"), PathBuf::from("aaa/orange"), PathBuf::from("aaa/tomato") ) ); assert_eq!( glob_vec("aaa/*a*"), vec!( PathBuf::from("aaa/apple"), PathBuf::from("aaa/orange"), PathBuf::from("aaa/tomato") ) ); assert_eq!( glob_vec("*/*/*.txt"), vec!( PathBuf::from("aaa/tomato/tomato.txt"), PathBuf::from("aaa/tomato/tomoto.txt") ) ); assert_eq!( glob_vec("*/*/t[aob]m?to[.]t[!y]t"), vec!( PathBuf::from("aaa/tomato/tomato.txt"), PathBuf::from("aaa/tomato/tomoto.txt") ) ); assert_eq!(glob_vec("./aaa"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("./*"), glob_vec("*")); assert_eq!(glob_vec("*/..").pop().unwrap(), PathBuf::from("xyz/..")); assert_eq!(glob_vec("aaa/../bbb"), vec!(PathBuf::from("aaa/../bbb"))); assert_eq!(glob_vec("nonexistent/../bbb"), Vec::::new()); assert_eq!(glob_vec("aaa/tomato/tomato.txt/.."), Vec::::new()); assert_eq!(glob_vec("aaa/tomato/tomato.txt/"), Vec::::new()); assert_eq!(glob_vec("aa[a]"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aa[abc]"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("a[bca]a"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aa[b]"), Vec::::new()); assert_eq!(glob_vec("aa[xyz]"), Vec::::new()); assert_eq!(glob_vec("aa[]]"), Vec::::new()); assert_eq!(glob_vec("aa[!b]"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aa[!bcd]"), vec!(PathBuf::from("aaa"))); 
assert_eq!(glob_vec("a[!bcd]a"), vec!(PathBuf::from("aaa"))); assert_eq!(glob_vec("aa[!a]"), Vec::::new()); assert_eq!(glob_vec("aa[!abc]"), Vec::::new()); assert_eq!( glob_vec("bbb/specials/[[]"), vec!(PathBuf::from("bbb/specials/[")) ); assert_eq!( glob_vec("bbb/specials/!"), vec!(PathBuf::from("bbb/specials/!")) ); assert_eq!( glob_vec("bbb/specials/[]]"), vec!(PathBuf::from("bbb/specials/]")) ); if env::consts::FAMILY != "windows" { assert_eq!( glob_vec("bbb/specials/[*]"), vec!(PathBuf::from("bbb/specials/*")) ); assert_eq!( glob_vec("bbb/specials/[?]"), vec!(PathBuf::from("bbb/specials/?")) ); } if env::consts::FAMILY == "windows" { assert_eq!( glob_vec("bbb/specials/[![]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/]") ) ); assert_eq!( glob_vec("bbb/specials/[!]]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/[") ) ); assert_eq!( glob_vec("bbb/specials/[!!]"), vec!( PathBuf::from("bbb/specials/["), PathBuf::from("bbb/specials/]") ) ); } else { assert_eq!( glob_vec("bbb/specials/[![]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/*"), PathBuf::from("bbb/specials/?"), PathBuf::from("bbb/specials/]") ) ); assert_eq!( glob_vec("bbb/specials/[!]]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/*"), PathBuf::from("bbb/specials/?"), PathBuf::from("bbb/specials/[") ) ); assert_eq!( glob_vec("bbb/specials/[!!]"), vec!( PathBuf::from("bbb/specials/*"), PathBuf::from("bbb/specials/?"), PathBuf::from("bbb/specials/["), PathBuf::from("bbb/specials/]") ) ); assert_eq!( glob_vec("bbb/specials/[!*]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/?"), PathBuf::from("bbb/specials/["), PathBuf::from("bbb/specials/]") ) ); assert_eq!( glob_vec("bbb/specials/[!?]"), vec!( PathBuf::from("bbb/specials/!"), PathBuf::from("bbb/specials/*"), PathBuf::from("bbb/specials/["), PathBuf::from("bbb/specials/]") ) ); } } vendor/glob/LICENSE-MIT0000664000175000017500000000205714160055207015232 0ustar mwhudsonmwhudsonCopyright (c) 2014 The Rust Project Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/glob/README.md0000664000175000017500000000132414160055207015051 0ustar mwhudsonmwhudsonglob ==== Support for matching file paths against Unix shell style patterns. 
[![Build Status](https://travis-ci.org/rust-lang-nursery/glob.svg?branch=master)](https://travis-ci.org/rust-lang-nursery/glob) [Documentation](https://doc.rust-lang.org/glob) ## Usage To use `glob`, add this to your `Cargo.toml`: ```toml [dependencies] glob = "0.3.0" ``` And add this to your crate root: ```rust extern crate glob; ``` ## Examples Print all jpg files in /media/ and all of its subdirectories. ```rust use glob::glob; for entry in glob("/media/**/*.jpg").expect("Failed to read glob pattern") { match entry { Ok(path) => println!("{:?}", path.display()), Err(e) => println!("{:?}", e), } } ``` vendor/openssl/0000775000175000017500000000000014172417313014335 5ustar mwhudsonmwhudsonvendor/openssl/.cargo-checksum.json0000664000175000017500000000013114172417313020174 0ustar mwhudsonmwhudson{"files":{},"package":"0c7ae222234c30df141154f159066c5093ff73b63204dcda7121eb082fc56a95"}vendor/openssl/LICENSE0000664000175000017500000000115214160055207015336 0ustar mwhudsonmwhudsonCopyright 2011-2017 Google Inc. 2013 Jack Lloyd 2013-2014 Steven Fackler Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/openssl/Cargo.toml0000664000175000017500000000227214172417313016270 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" name = "openssl" version = "0.10.38" authors = ["Steven Fackler "] description = "OpenSSL bindings" readme = "README.md" keywords = ["crypto", "tls", "ssl", "dtls"] categories = ["cryptography", "api-bindings"] license = "Apache-2.0" repository = "https://github.com/sfackler/rust-openssl" [dependencies.bitflags] version = "1.0" [dependencies.cfg-if] version = "1.0" [dependencies.ffi] version = "0.9.69" package = "openssl-sys" [dependencies.foreign-types] version = "0.3.1" [dependencies.libc] version = "0.2" [dependencies.once_cell] version = "1.5.2" [dev-dependencies.hex] version = "0.3" [dev-dependencies.tempdir] version = "0.3" [features] v101 = [] v102 = [] v110 = [] v111 = [] vendor/openssl/CHANGELOG.md0000664000175000017500000005521114172417313016152 0ustar mwhudsonmwhudson# Change Log ## [Unreleased] ## [v0.10.38] - 2021-10-31 ### Added * Added `Pkey::ec_gen`. ## [v0.10.37] - 2021-10-27 ### Fixed * Fixed linkage against OpenSSL distributions built with `no-chacha`. ### Added * Added `BigNumRef::to_vec_padded`. * Added `X509Name::from_der` and `X509NameRef::to_der`. * Added `BigNum::new_secure`, `BigNumReef::set_const_time`, `BigNumref::is_const_time`, and `BigNumRef::is_secure`. ## [v0.10.36] - 2021-08-17 ### Added * Added `Asn1Object::as_slice`. 
* Added `PKeyRef::{raw_public_key, raw_private_key, private_key_to_pkcs8_passphrase}` and `PKey::{private_key_from_raw_bytes, public_key_from_raw_bytes}`. * Added `Cipher::{seed_cbc, seed_cfb128, seed_ecb, seed_ofb}`. ## [v0.10.35] - 2021-06-18 ### Fixed * Fixed a memory leak in `Deriver`. ### Added * Added support for OpenSSL 3.x.x. * Added `SslStream::peek`. ## [v0.10.34] - 2021-04-28 ### Added * Added `Dh::set_private_key` and `DhRef::private_key`. * Added `EcPointRef::affine_coordinates`. * Added `TryFrom` implementations to convert between `PKey` and specific key types. * Added `X509StoreBuilderRef::set_flags`. ## [v0.10.33] - 2021-03-13 ### Fixed * `Dh::generate_params` now uses `DH_generate_params_ex` rather than the deprecated `DH_generated_params` function. ### Added * Added `Asn1Type`. * Added `CmsContentInfoRef::decrypt_without_cert_check`. * Added `EcPointRef::{is_infinity, is_on_curve}`. * Added `Encrypter::set_rsa_oaep_label`. * Added `MessageDigest::sm3`. * Added `Pkcs7Ref::signers`. * Added `Cipher::nid`. * Added `X509Ref::authority_info` and `AccessDescription::{method, location}`. * Added `X509NameBuilder::{append_entry_by_text_with_type, append_entry_by_nid_with_type}`. ## [v0.10.32] - 2020-12-24 ### Fixed * Fixed `Ssl::new` to take a `&SslContextRef` rather than `&SslContext`. ### Added * Added the `encrypt` module to support asymmetric encryption and decryption with `PKey`s. * Added `MessageDigest::from_name`. * Added `ConnectConfiguration::into_ssl`. * Added the ability to create unconnected `SslStream`s directly from an `Ssl` and transport stream without performing any part of the handshake with `SslStream::new`. * Added `SslStream::{read_early_data, write_early_data, connect, accept, do_handshake, stateless}`. * Implemented `ToOwned` for `SslContextRef`. * Added `SslRef::{set_connect_state, set_accept_state}`. ### Deprecated * Deprecated `SslStream::from_raw_parts` in favor of `Ssl::from_ptr` and `SslStream::new`. * Deprecated `SslStreamBuilder` in favor of methods on `Ssl` and `SslStream`. ## [v0.10.31] - 2020-12-09 ### Added * Added `Asn1Object::from_str`. * Added `Dh::from_pgq`, `DhRef::prime_p`, `DhRef::prime_q`, `DhRef::generator`, `DhRef::generate_params`, `DhRef::generate_key`, `DhRef::public_key`, and `DhRef::compute_key`. * Added `Pkcs7::from_der` and `Pkcs7Ref::to_der`. * Added `Id::X25519`, `Id::X448`, `PKey::generate_x25519`, and `PKey::generate_x448`. * Added `SrtpProfileId::SRTP_AEAD_AES_128_GCM` and `SrtpProfileId::SRTP_AEAD_AES_256_GCM`. * Added `SslContextBuilder::verify_param` and `SslContextBuilder::verify_param_mut`. * Added `X509Ref::subject_name_hash` and `X509Ref::version`. * Added `X509StoreBuilderRef::add_lookup`, and the `X509Lookup` type. * Added `X509VerifyFlags`, `X509VerifyParamRef::set_flags`, `X509VerifyParamRef::clear_flags` `X509VerifyParamRef::get_flags`. ## [v0.10.30] - 2020-06-25 ### Fixed * `DsaRef::private_key_to_pem` can no longer be called without a private key. ### Changed * Improved the `Debug` implementations of many types. ### Added * Added `is_empty` implementations for `Asn1StringRef` and `Asn1BitStringRef`. * Added `EcPointRef::{to_pem, to_dir}` and `EcKeyRef::{public_key_from_pem, public_key_from_der}`. * Added `Default` implementations for many types. * Added `Debug` implementations for many types. * Added `SslStream::from_raw_parts`. * Added `SslRef::set_mtu`. * Added `Cipher::{aes_128_ocb, aes_192_ocb, aes_256_ocb}`. ### Deprecated * Deprecated `SslStreamBuilder::set_dtls_mtu_size` in favor of `SslRef::set_mtu`. 
## [v0.10.29] - 2020-04-07 ### Fixed * Fixed a memory leak in `X509Builder::append_extension`. ### Added * Added `SslConnector::into_context` and `SslConnector::context`. * Added `SslAcceptor::into_context` and `SslAcceptor::context`. * Added `SslMethod::tls_client` and `SslMethod::tls_server`. * Added `SslContextBuilder::set_cert_store`. * Added `SslContextRef::verify_mode` and `SslRef::verify_mode`. * Added `SslRef::is_init_finished`. * Added `X509Object`. * Added `X509StoreRef::objects`. ## [v0.10.28] - 2020-02-04 ### Fixed * Fixed the mutability of `Signer::sign_oneshot` and `Verifier::verify_oneshot`. This is unfortunately a breaking change, but a necessary soundness fix. ## [v0.10.27] - 2020-01-29 ### Added * Added `MessageDigest::null`. * Added `PKey::private_key_from_pkcs8`. * Added `SslOptions::NO_RENEGOTIATION`. * Added `SslStreamBuilder::set_dtls_mtu_size`. ## [v0.10.26] - 2019-11-22 ### Fixed * Fixed improper handling of the IV buffer in `envelope::{Seal, Unseal}`. ### Added * Added `Asn1TimeRef::{diff, compare}`. * Added `Asn1Time::from_unix`. * Added `PartialEq` and `PartialOrd` implementations for `Asn1Time` and `Asn1TimeRef`. * Added `base64::{encode_block, decode_block}`. * Added `EcGroupRef::order_bits`. * Added `Clone` implementations for `Sha1`, `Sha224`, `Sha256`, `Sha384`, and `Sha512`. * Added `SslContextBuilder::{set_sigalgs_list, set_groups_list}`. ## [v0.10.25] - 2019-10-02 ### Fixed * Fixed a memory leak in `EcdsaSig::from_private_components` when using OpenSSL 1.0.x. ### Added * Added support for Ed25519 and Ed448 keys. * Implemented `ToOwned` for `PKeyRef` and `Clone` for `PKey`. ## [v0.10.24] - 2019-07-19 ### Fixed * Worked around an OpenSSL 1.0.x bug triggered by code calling `SSL_set_app_data`. ### Added * Added `aes::{wrap_key, unwrap_key}`. * Added `CmsContentInfoRef::to_pem` and `CmsContentInfo::from_pem`. * Added `DsaRef::private_key_to_pem`. * Added `EcGroupRef::{cofactor, generator}`. * Added `EcPointRef::to_owned`. * Added a `Debug` implementation for `EcKey`. * Added `SslAcceptor::{mozilla_intermediate_v5, mozilla_modern_v5}`. * Added `Cipher::{aes_128_ofb, aes_192_ecb, aes_192_cbc, aes_192_ctr, aes_192_cfb1, aes_192_cfb128, aes_192_cfb8, aes_192_gcm, aes_192_ccm, aes_192_ofb, aes_256_ofb}`. ## [v0.10.23] - 2019-05-18 ### Fixed * Fixed session callbacks when an `Ssl`'s context is replaced. ### Added * Added `SslContextBuilder::add_client_ca`. ## [v0.10.22] - 2019-05-08 ### Added * Added support for the LibreSSL 2.9.x series. ## [v0.10.21] - 2019-04-30 ### Fixed * Fixed overly conservatifve buffer size checks in `Crypter` when using stream ciphers. ### Added * Added bindings to envelope encryption APIs. * Added `PkeyRef::size`. ## [v0.10.20] - 2019-03-20 ### Added * Added `CmsContentInfo::from_der` and `CmsContentInfo::encrypt`. * Added `X509Ref::verify` and `X509ReqRef::verify`. * Implemented `PartialEq` and `Eq` for `MessageDigest`. * Added `MessageDigest::type_` and `EcGroupRef::curve_name`. ## [v0.10.19] - 2019-03-01 ### Added * The openssl-sys build script now logs the values of environment variables. * Added `ERR_PACK` to openssl-sys. * The `ERR_*` functions in openssl-sys are const functions when building against newer Rust versions. * Implemented `Clone` for `Dsa`. * Added `SslContextRef::add_session` and `SslContextRef::remove_session`. * Added `SslSessionRef::time`, `SslSessionRef::timeout`, and `SslSessionRef::protocol_version`. * Added `SslContextBuilder::set_session_cache_size` and `SslContextRef::session_cache_size`. 
## [v0.10.18] - 2019-02-22 ### Fixed * Fixed the return type of `ssl::cipher_name`. ## [v0.10.17] - 2019-02-22 ### Added * Implemented `AsRef` and `AsRef<[u8]>` for `OpenSslString`. * Added `Asn1Integer::from_bn`. * Added `RsaRef::check_key`. * Added `Asn1Time::from_str` and `Asn1Time::from_str_x509`. * Added `Rsa::generate_with_e`. * Added `Cipher::des_ede3_cfb64`. * Added `SslCipherRef::standard_name` and `ssl::cipher_name`. ## [v0.10.16] - 2018-12-16 ### Added * Added SHA3 and SHAKE to `MessageDigest`. * Added `rand::keep_random_devices_open`. * Added support for LibreSSL 2.9.0. ## [v0.10.15] - 2018-10-22 ### Added * Implemented `DoubleEndedIterator` for stack iterators. ## [v0.10.14] - 2018-10-18 ### Fixed * Made some accidentally exposed internal functions private. ### Added * Added support for LibreSSL 2.8. ### Changed * The OpenSSL version used with the `vendored` feature has been upgraded from 1.1.0 to 1.1.1. ## [v0.10.13] - 2018-10-14 ### Fixed * Fixed a double-free in the `SslContextBuilder::set_get_session_callback` API. ### Added * Added `SslContextBuilder::set_client_hello_callback`. * Added support for LibreSSL 2.8.1. * Added `EcdsaSig::from_der` and `EcdsaSig::to_der`. * Added PKCS#7 support. ## [v0.10.12] - 2018-09-13 ### Fixed * Fixed handling of SNI callbacks during renegotiation. ### Added * Added `SslRef::get_shutdown` and `SslRef::set_shutdown`. * Added support for SRTP in DTLS sessions. * Added support for LibreSSL 2.8.0. ## [v0.10.11] - 2018-08-04 ### Added * The new `vendored` cargo feature will cause openssl-sys to compile and statically link to a vendored copy of OpenSSL. * Added `SslContextBuilder::set_psk_server_callback`. * Added `DsaRef::pub_key` and `DsaRef::priv_key`. * Added `Dsa::from_private_components` and `Dsa::from_public_components`. * Added `X509NameRef::entries`. ### Deprecated * `SslContextBuilder::set_psk_callback` has been renamed to `SslContextBuilder::set_psk_client_callback` and deprecated. ## [v0.10.10] - 2018-06-06 ### Added * Added `SslRef::set_alpn_protos`. * Added `SslContextBuilder::set_ciphersuites`. ## [v0.10.9] - 2018-06-01 ### Fixed * Fixed a use-after-free in `CmsContentInfo::sign`. * `SslRef::servername` now returns `None` rather than panicking on a non-UTF8 name. ### Added * Added `MessageDigest::from_nid`. * Added `Nid::signature_algorithms`, `Nid::long_name`, and `Nid::short_name`. * Added early data and early keying material export support for TLS 1.3. * Added `SslRef::verified_chain`. * Added `SslRef::servername_raw` which returns a `&[u8]` rather than `&str`. * Added `SslRef::finished` and `SslRef::peer_finished`. * Added `X509Ref::digest` to replace `X509Ref::fingerprint`. * `X509StoreBuilder` and `X509Store` now implement `Sync` and `Send`. ### Deprecated * `X509Ref::fingerprint` has been deprecated in favor of `X509Ref::digest`. ## [v0.10.8] - 2018-05-20 ### Fixed * `openssl-sys` will now detect Homebrew-installed OpenSSL when installed to a non-default directory. * The `X509_V_ERR_INVALID_CALL`, `X509_V_ERR_STORE_LOOKUP`, and `X509_V_ERR_PROXY_SUBJECT_NAME_VIOLATION` constants in `openssl-sys` are now only present when building against 1.1.0g and up rather than 1.1.0. * `SslContextBuilder::max_proto_version` and `SslContextBuilder::min_proto_version` are only present when building against 1.1.0g and up rather than 1.1.0. ### Added * Added `CmsContentInfo::sign`. * Added `Clone` and `ToOwned` implementations to `Rsa` and `RsaRef` respectively. 
* The `min_proto_version` and `max_proto_version` methods are available when linking against LibreSSL 2.6.1 and up in addition to OpenSSL. * `X509VerifyParam` is available when linking against LibreSSL 2.6.1 and up in addition to OpenSSL. * ALPN support is available when linking against LibreSSL 2.6.1 and up in addition to OpenSSL. * `Stack` and `StackRef` are now `Sync` and `Send`. ## [v0.10.7] - 2018-04-30 ### Added * Added `X509Req::public_key` and `X509Req::extensions`. * Added `RsaPrivateKeyBuilder` to allow control over initialization of optional components of an RSA private key. * Added DER encode/decode support to `SslSession`. * openssl-sys now provides the `DEP_OPENSSL_VERSION_NUMBER` and `DEP_OPENSSL_LIBRESSL_VERSION_NUMBER` environment variables to downstream build scripts which contains the hex-encoded version number of the OpenSSL or LibreSSL distribution being built against. The other variables are deprecated. ## [v0.10.6] - 2018-03-05 ### Added * Added `SslOptions::ENABLE_MIDDLEBOX_COMPAT`. * Added more `Sync` and `Send` implementations. * Added `PKeyRef::id`. * Added `Padding::PKCS1_PSS`. * Added `Signer::set_rsa_pss_saltlen`, `Signer::set_rsa_mgf1_md`, `Signer::set_rsa_pss_saltlen`, and `Signer::set_rsa_mgf1_md` * Added `X509StoreContextRef::verify` to directly verify certificates. * Added low level ECDSA support. * Added support for TLSv1.3 custom extensions. (OpenSSL 1.1.1 only) * Added AES-CCM support. * Added `EcKey::from_private_components`. * Added CMAC support. * Added support for LibreSSL 2.7. * Added `X509Ref::serial_number`. * Added `Asn1IntegerRef::to_bn`. * Added support for TLSv1.3 stateless handshakes. (OpenSSL 1.1.1 only) ### Changed * The Cargo features previously used to gate access to version-specific OpenSSL APIs have been removed. Those APIs will be available automatically when building against an appropriate OpenSSL version. * Fixed `PKey::private_key_from_der` to return a `PKey` rather than a `PKey`. This is technically a breaking change but the function was pretty useless previously. ### Deprecated * `X509CheckFlags::FLAG_NO_WILDCARDS` has been renamed to `X509CheckFlags::NO_WILDCARDS` and the old name deprecated. ## [v0.10.5] - 2018-02-28 ### Fixed * `ErrorStack`'s `Display` implementation no longer writes an empty string if it contains no errors. ### Added * Added `SslRef::version2`. * Added `Cipher::des_ede3_cbc`. * Added `SslRef::export_keying_material`. * Added the ability to push an `Error` or `ErrorStack` back onto OpenSSL's error stack. Various callback bindings use this to propagate errors properly. * Added `SslContextBuilder::set_cookie_generate_cb` and `SslContextBuilder::set_cookie_verify_cb`. * Added `SslContextBuilder::set_max_proto_version`, `SslContextBuilder::set_min_proto_version`, `SslContextBuilder::max_proto_version`, and `SslContextBuilder::min_proto_version`. ### Changed * Updated `SslConnector`'s default cipher list to match Python's. ### Deprecated * `SslRef::version` has been deprecated. Use `SslRef::version_str` instead. ## [v0.10.4] - 2018-02-18 ### Added * Added OpenSSL 1.1.1 support. * Added `Rsa::public_key_from_pem_pkcs1`. * Added `SslOptions::NO_TLSV1_3`. (OpenSSL 1.1.1 only) * Added `SslVersion`. * Added `SslSessionCacheMode` and `SslContextBuilder::set_session_cache_mode`. * Added `SslContextBuilder::set_new_session_callback`, `SslContextBuilder::set_remove_session_callback`, and `SslContextBuilder::set_get_session_callback`. * Added `SslContextBuilder::set_keylog_callback`. 
(OpenSSL 1.1.1 only) * Added `SslRef::client_random` and `SslRef::server_random`. (OpenSSL 1.1.0+ only) ### Fixed * The `SslAcceptorBuilder::mozilla_modern` constructor now disables TLSv1.0 and TLSv1.1 in accordance with Mozilla's recommendations. ## [v0.10.3] - 2018-02-12 ### Added * OpenSSL is now automatically detected on FreeBSD systems. * Added `GeneralName` accessors for `rfc822Name` and `uri` variants. * Added DES-EDE3 support. ### Fixed * Fixed a memory leak in `X509StoreBuilder::add_cert`. ## [v0.10.2] - 2018-01-11 ### Added * Added `ConnectConfiguration::set_use_server_name_indication` and `ConnectConfiguration::set_verify_hostname` for use in contexts where you don't have ownership of the `ConnectConfiguration`. ## [v0.10.1] - 2018-01-10 ### Added * Added a `From for ssl::Error` implementation. ## [v0.10.0] - 2018-01-10 ### Compatibility * openssl 0.10 still uses openssl-sys 0.9, so openssl 0.9 and 0.10 can coexist without issue. ### Added * The `ssl::select_next_proto` function can be used to easily implement the ALPN selection callback in a "standard" way. * FIPS mode support is available in the `fips` module. * Accessors for the Issuer and Issuer Alternative Name fields of X509 certificates have been added. * The `X509VerifyResult` can now be set in the certificate verification callback via `X509StoreContextRef::set_error`. ### Changed * All constants have been moved to associated constants of their type. For example, `bn::MSB_ONE` is now `bn::MsbOption::ONE`. * Asymmetric key types are now parameterized over what they contain. In OpenSSL, the same type is used for key parameters, public keys, and private keys. Unfortunately, some APIs simply assume that certain components are present and will segfault trying to use things that aren't there. The `pkey` module contains new tag types named `Params`, `Public`, and `Private`, and the `Dh`, `Dsa`, `EcKey`, `Rsa`, and `PKey` have a type parameter set to one of those values. This allows the `Signer` constructor to indicate that it requires a private key at compile time for example. Previously, `Signer` would simply segfault if provided a key without private components. * ALPN support has been changed to more directly model OpenSSL's own APIs. Instead of a single method used for both the server and client sides which performed everything automatically, the `SslContextBuilder::set_alpn_protos` and `SslContextBuilder::set_alpn_select_callback` handle the client and server sides respectively. * `SslConnector::danger_connect_without_providing_domain_for_certificate_verification_and_server_name_indication` has been removed in favor of new methods which provide more control. The `ConnectConfiguration::use_server_name_indication` method controls the use of Server Name Indication (SNI), and the `ConnectConfiguration::verify_hostname` method controls the use of hostname verification. These can be controlled independently, and if both are disabled, the domain argument to `ConnectConfiguration::connect` is ignored. * Shared secret derivation is now handled by the new `derive::Deriver` type rather than `pkey::PKeyContext`, which has been removed. * `ssl::Error` is now no longer an enum, and provides more direct access to the relevant state. * `SslConnectorBuilder::new` has been moved and renamed to `SslConnector::builder`. * `SslAcceptorBuilder::mozilla_intermediate` and `SslAcceptorBuilder::mozilla_modern` have been moved to `SslAcceptor` and no longer take the private key and certificate chain. 
Install those manually after creating the builder. * `X509VerifyError` is now `X509VerifyResult` and can now have the "ok" value in addition to error values. * `x509::X509FileType` is now `ssl::SslFiletype`. * Asymmetric key serialization and deserialization methods now document the formats that they correspond to, and some have been renamed to better indicate that. ### Removed * All deprecated APIs have been removed. * NPN support has been removed. It has been supersceded by ALPN, and is hopefully no longer being used in practice. If you still depend on it, please file an issue! * `SslRef::compression` has been removed. * Some `ssl::SslOptions` flags have been removed as they no longer do anything. ## Older Look at the [release tags] for information about older releases. [Unreleased]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.38...master [v0.10.38]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.37...openssl-v0.10.38 [v0.10.37]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.36...openssl-v0.10.37 [v0.10.36]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.35...openssl-v0.10.36 [v0.10.35]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.34...openssl-v0.10.35 [v0.10.34]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.33...openssl-v0.10.34 [v0.10.33]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.32...openssl-v0.10.33 [v0.10.32]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.31...openssl-v0.10.32 [v0.10.31]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.30...openssl-v0.10.31 [v0.10.30]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.29...openssl-v0.10.30 [v0.10.29]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.28...openssl-v0.10.29 [v0.10.28]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.27...openssl-v0.10.28 [v0.10.27]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.26...openssl-v0.10.27 [v0.10.26]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.25...openssl-v0.10.26 [v0.10.25]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.24...openssl-v0.10.25 [v0.10.24]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.23...openssl-v0.10.24 [v0.10.23]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.22...openssl-v0.10.23 [v0.10.22]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.21...openssl-v0.10.22 [v0.10.21]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.20...openssl-v0.10.21 [v0.10.20]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.19...openssl-v0.10.20 [v0.10.19]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.18...openssl-v0.10.19 [v0.10.18]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.17...openssl-v0.10.18 [v0.10.17]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.16...openssl-v0.10.17 [v0.10.16]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.15...openssl-v0.10.16 [v0.10.15]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.14...openssl-v0.10.15 [v0.10.14]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.13...openssl-v0.10.14 [v0.10.13]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.12...openssl-v0.10.13 [v0.10.12]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.11...openssl-v0.10.12 [v0.10.11]: 
https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.10...openssl-v0.10.11 [v0.10.10]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.9...openssl-v0.10.10 [v0.10.9]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.8...openssl-v0.10.9 [v0.10.8]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.7...openssl-v0.10.8 [v0.10.7]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.6...openssl-v0.10.7 [v0.10.6]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.5...openssl-v0.10.6 [v0.10.5]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.4...openssl-v0.10.5 [v0.10.4]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.3...openssl-v0.10.4 [v0.10.3]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.2...openssl-v0.10.3 [v0.10.2]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.1...openssl-v0.10.2 [v0.10.1]: https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.0...openssl-v0.10.1 [v0.10.0]: https://github.com/sfackler/rust-openssl/compare/v0.9.23...openssl-v0.10.0 [release tags]: https://github.com/sfackler/rust-openssl/releases vendor/openssl/build.rs0000664000175000017500000000440014172417313016000 0ustar mwhudsonmwhudson#![allow(clippy::inconsistent_digit_grouping, clippy::unusual_byte_groupings)] use std::env; fn main() { if env::var("DEP_OPENSSL_LIBRESSL").is_ok() { println!("cargo:rustc-cfg=libressl"); } if let Ok(v) = env::var("DEP_OPENSSL_LIBRESSL_VERSION") { println!("cargo:rustc-cfg=libressl{}", v); } if let Ok(vars) = env::var("DEP_OPENSSL_CONF") { for var in vars.split(',') { println!("cargo:rustc-cfg=osslconf=\"{}\"", var); } } if let Ok(version) = env::var("DEP_OPENSSL_VERSION_NUMBER") { let version = u64::from_str_radix(&version, 16).unwrap(); if version >= 0x1_00_01_00_0 { println!("cargo:rustc-cfg=ossl101"); } if version >= 0x1_00_02_00_0 { println!("cargo:rustc-cfg=ossl102"); } if version >= 0x1_01_00_00_0 { println!("cargo:rustc-cfg=ossl110"); } if version >= 0x1_01_00_07_0 { println!("cargo:rustc-cfg=ossl110g"); } if version >= 0x1_01_01_00_0 { println!("cargo:rustc-cfg=ossl111"); } if version >= 0x3_00_00_00_0 { println!("cargo:rustc-cfg=ossl300"); } } if let Ok(version) = env::var("DEP_OPENSSL_LIBRESSL_VERSION_NUMBER") { let version = u64::from_str_radix(&version, 16).unwrap(); if version >= 0x2_06_01_00_0 { println!("cargo:rustc-cfg=libressl261"); } if version >= 0x2_07_00_00_0 { println!("cargo:rustc-cfg=libressl270"); } if version >= 0x2_07_01_00_0 { println!("cargo:rustc-cfg=libressl271"); } if version >= 0x2_07_03_00_0 { println!("cargo:rustc-cfg=libressl273"); } if version >= 0x2_08_00_00_0 { println!("cargo:rustc-cfg=libressl280"); } if version >= 0x2_09_01_00_0 { println!("cargo:rustc-cfg=libressl291"); } if version >= 0x3_02_01_00_0 { println!("cargo:rustc-cfg=libressl321"); } if version >= 0x3_03_02_00_0 { println!("cargo:rustc-cfg=libressl332"); } if version >= 0x3_04_00_00_0 { println!("cargo:rustc-cfg=libressl340"); } } } vendor/openssl/debian/0000775000175000017500000000000014160055207015554 5ustar mwhudsonmwhudsonvendor/openssl/debian/patches/0000775000175000017500000000000014160055207017203 5ustar mwhudsonmwhudsonvendor/openssl/debian/patches/disable-vendor.patch0000664000175000017500000000016214160055207023121 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -50,4 +50,3 @@ v102 = [] v110 = [] v111 = [] -vendored = ["ffi/vendored"] 
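A note on the build script above: it maps the `DEP_OPENSSL_VERSION_NUMBER` and `DEP_OPENSSL_LIBRESSL_VERSION_NUMBER` environment variables onto `cargo:rustc-cfg` flags such as `ossl111`, `ossl300`, and `libressl261`. The following is a minimal sketch of how such flags are consumed once the crate is compiled; the `tls13_supported` helper is purely hypothetical and used only for illustration, it is not part of the vendored crate.

```rust
// Minimal sketch (assumption: built inside a crate whose build.rs prints
// `cargo:rustc-cfg=ossl111` when the detected OpenSSL version is >= 1.1.1,
// as the build script above does). `tls13_supported` is a hypothetical
// helper for illustration only.

#[cfg(ossl111)]
fn tls13_supported() -> bool {
    // Compiled in when the `ossl111` cfg flag was emitted by build.rs.
    true
}

#[cfg(not(ossl111))]
fn tls13_supported() -> bool {
    // Fallback when building against an older OpenSSL.
    false
}

fn main() {
    println!("OpenSSL 1.1.1-era APIs available: {}", tls13_supported());
}
```

The vendored sources later in this archive gate code the same way, for example `#[cfg(ossl300)]` in `src/error.rs` and `#[cfg(ossl102)]` / `#[cfg(ossl111)]` in `src/asn1.rs`.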
vendor/openssl/debian/patches/series0000664000175000017500000000002514160055207020415 0ustar mwhudsonmwhudsondisable-vendor.patch vendor/openssl/src/0000775000175000017500000000000014172417313015124 5ustar mwhudsonmwhudsonvendor/openssl/src/error.rs0000664000175000017500000002241514160055207016624 0ustar mwhudsonmwhudson//! Errors returned by OpenSSL library. //! //! OpenSSL errors are stored in an `ErrorStack`. Most methods in the crate //! returns a `Result` type. //! //! # Examples //! //! ``` //! use openssl::error::ErrorStack; //! use openssl::bn::BigNum; //! //! let an_error = BigNum::from_dec_str("Cannot parse letters"); //! match an_error { //! Ok(_) => (), //! Err(e) => println!("Parsing Error: {:?}", e), //! } //! ``` use cfg_if::cfg_if; use libc::{c_char, c_int, c_ulong}; use std::borrow::Cow; use std::error; use std::ffi::CStr; use std::fmt; use std::io; use std::ptr; use std::str; /// Collection of [`Error`]s from OpenSSL. /// /// [`Error`]: struct.Error.html #[derive(Debug, Clone)] pub struct ErrorStack(Vec); impl ErrorStack { /// Returns the contents of the OpenSSL error stack. pub fn get() -> ErrorStack { let mut vec = vec![]; while let Some(err) = Error::get() { vec.push(err); } ErrorStack(vec) } /// Pushes the errors back onto the OpenSSL error stack. pub fn put(&self) { for error in self.errors() { error.put(); } } } impl ErrorStack { /// Returns the errors in the stack. pub fn errors(&self) -> &[Error] { &self.0 } } impl fmt::Display for ErrorStack { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { if self.0.is_empty() { return fmt.write_str("OpenSSL error"); } let mut first = true; for err in &self.0 { if !first { fmt.write_str(", ")?; } write!(fmt, "{}", err)?; first = false; } Ok(()) } } impl error::Error for ErrorStack {} impl From for io::Error { fn from(e: ErrorStack) -> io::Error { io::Error::new(io::ErrorKind::Other, e) } } impl From for fmt::Error { fn from(_: ErrorStack) -> fmt::Error { fmt::Error } } /// An error reported from OpenSSL. #[derive(Clone)] pub struct Error { code: c_ulong, file: *const c_char, line: c_int, func: *const c_char, data: Option>, } unsafe impl Sync for Error {} unsafe impl Send for Error {} impl Error { /// Returns the first error on the OpenSSL error stack. pub fn get() -> Option { unsafe { ffi::init(); let mut file = ptr::null(); let mut line = 0; let mut func = ptr::null(); let mut data = ptr::null(); let mut flags = 0; match ERR_get_error_all(&mut file, &mut line, &mut func, &mut data, &mut flags) { 0 => None, code => { // The memory referenced by data is only valid until that slot is overwritten // in the error stack, so we'll need to copy it off if it's dynamic let data = if flags & ffi::ERR_TXT_STRING != 0 { let bytes = CStr::from_ptr(data as *const _).to_bytes(); let data = str::from_utf8(bytes).unwrap(); let data = if flags & ffi::ERR_TXT_MALLOCED != 0 { Cow::Owned(data.to_string()) } else { Cow::Borrowed(data) }; Some(data) } else { None }; Some(Error { code, file, line, func, data, }) } } } } /// Pushes the error back onto the OpenSSL error stack. 
pub fn put(&self) { self.put_error(); unsafe { let data = match self.data { Some(Cow::Borrowed(data)) => Some((data.as_ptr() as *mut c_char, 0)), Some(Cow::Owned(ref data)) => { let ptr = ffi::CRYPTO_malloc( (data.len() + 1) as _, concat!(file!(), "\0").as_ptr() as _, line!() as _, ) as *mut c_char; if ptr.is_null() { None } else { ptr::copy_nonoverlapping(data.as_ptr(), ptr as *mut u8, data.len()); *ptr.add(data.len()) = 0; Some((ptr, ffi::ERR_TXT_MALLOCED)) } } None => None, }; if let Some((ptr, flags)) = data { ffi::ERR_set_error_data(ptr, flags | ffi::ERR_TXT_STRING); } } } #[cfg(ossl300)] fn put_error(&self) { unsafe { ffi::ERR_new(); ffi::ERR_set_debug(self.file, self.line, self.func); ffi::ERR_set_error( ffi::ERR_GET_LIB(self.code), ffi::ERR_GET_REASON(self.code), ptr::null(), ); } } #[cfg(not(ossl300))] fn put_error(&self) { unsafe { ffi::ERR_put_error( ffi::ERR_GET_LIB(self.code), ffi::ERR_GET_FUNC(self.code), ffi::ERR_GET_REASON(self.code), self.file, self.line, ); } } /// Returns the raw OpenSSL error code for this error. pub fn code(&self) -> c_ulong { self.code } /// Returns the name of the library reporting the error, if available. pub fn library(&self) -> Option<&'static str> { unsafe { let cstr = ffi::ERR_lib_error_string(self.code); if cstr.is_null() { return None; } let bytes = CStr::from_ptr(cstr as *const _).to_bytes(); Some(str::from_utf8(bytes).unwrap()) } } /// Returns the name of the function reporting the error. pub fn function(&self) -> Option<&'static str> { unsafe { if self.func.is_null() { return None; } let bytes = CStr::from_ptr(self.func).to_bytes(); Some(str::from_utf8(bytes).unwrap()) } } /// Returns the reason for the error. pub fn reason(&self) -> Option<&'static str> { unsafe { let cstr = ffi::ERR_reason_error_string(self.code); if cstr.is_null() { return None; } let bytes = CStr::from_ptr(cstr as *const _).to_bytes(); Some(str::from_utf8(bytes).unwrap()) } } /// Returns the name of the source file which encountered the error. pub fn file(&self) -> &'static str { unsafe { assert!(!self.file.is_null()); let bytes = CStr::from_ptr(self.file as *const _).to_bytes(); str::from_utf8(bytes).unwrap() } } /// Returns the line in the source file which encountered the error. pub fn line(&self) -> u32 { self.line as u32 } /// Returns additional data describing the error. 
#[allow(clippy::option_as_ref_deref)] pub fn data(&self) -> Option<&str> { self.data.as_ref().map(|s| &**s) } } impl fmt::Debug for Error { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { let mut builder = fmt.debug_struct("Error"); builder.field("code", &self.code()); if let Some(library) = self.library() { builder.field("library", &library); } if let Some(function) = self.function() { builder.field("function", &function); } if let Some(reason) = self.reason() { builder.field("reason", &reason); } builder.field("file", &self.file()); builder.field("line", &self.line()); if let Some(data) = self.data() { builder.field("data", &data); } builder.finish() } } impl fmt::Display for Error { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { write!(fmt, "error:{:08X}", self.code())?; match self.library() { Some(l) => write!(fmt, ":{}", l)?, None => write!(fmt, ":lib({})", ffi::ERR_GET_LIB(self.code()))?, } match self.function() { Some(f) => write!(fmt, ":{}", f)?, None => write!(fmt, ":func({})", ffi::ERR_GET_FUNC(self.code()))?, } match self.reason() { Some(r) => write!(fmt, ":{}", r)?, None => write!(fmt, ":reason({})", ffi::ERR_GET_REASON(self.code()))?, } write!( fmt, ":{}:{}:{}", self.file(), self.line(), self.data().unwrap_or("") ) } } impl error::Error for Error {} cfg_if! { if #[cfg(ossl300)] { use ffi::ERR_get_error_all; } else { #[allow(bad_style)] unsafe extern "C" fn ERR_get_error_all( file: *mut *const c_char, line: *mut c_int, func: *mut *const c_char, data: *mut *const c_char, flags: *mut c_int, ) -> c_ulong { let code = ffi::ERR_get_error_line_data(file, line, data, flags); *func = ffi::ERR_func_error_string(code); code } } } vendor/openssl/src/asn1.rs0000664000175000017500000006026014160055207016335 0ustar mwhudsonmwhudson#![deny(missing_docs)] //! Defines the format of certificiates //! //! This module is used by [`x509`] and other certificate building functions //! to describe time, strings, and objects. //! //! Abstract Syntax Notation One is an interface description language. //! The specification comes from [X.208] by OSI, and rewritten in X.680. //! ASN.1 describes properties of an object with a type set. Those types //! can be atomic, structured, choice, and other (CHOICE and ANY). These //! types are expressed as a number and the assignment operator ::= gives //! the type a name. //! //! The implementation here provides a subset of the ASN.1 types that OpenSSL //! uses, especially in the properties of a certificate used in HTTPS. //! //! [X.208]: https://www.itu.int/rec/T-REC-X.208-198811-W/en //! [`x509`]: ../x509/struct.X509Builder.html //! //! ## Examples //! //! ``` //! use openssl::asn1::Asn1Time; //! let tomorrow = Asn1Time::days_from_now(1); //! ``` use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::{c_char, c_int, c_long, time_t}; #[cfg(ossl102)] use std::cmp::Ordering; use std::ffi::CString; use std::fmt; use std::ptr; use std::slice; use std::str; use crate::bio::MemBio; use crate::bn::{BigNum, BigNumRef}; use crate::error::ErrorStack; use crate::nid::Nid; use crate::string::OpensslString; use crate::{cvt, cvt_p}; foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_GENERALIZEDTIME; fn drop = ffi::ASN1_GENERALIZEDTIME_free; /// Non-UTC representation of time /// /// If a time can be represented by UTCTime, UTCTime is used /// otherwise, ASN1_GENERALIZEDTIME is used. This would be, for /// example outside the year range of 1950-2049. 
/// /// [ASN1_GENERALIZEDTIME_set] documentation from OpenSSL provides /// further details of implementation. Note: these docs are from the master /// branch as documentation on the 1.1.0 branch did not include this page. /// /// [ASN1_GENERALIZEDTIME_set]: https://www.openssl.org/docs/manmaster/man3/ASN1_GENERALIZEDTIME_set.html pub struct Asn1GeneralizedTime; /// Reference to a [`Asn1GeneralizedTime`] /// /// [`Asn1GeneralizedTime`]: struct.Asn1GeneralizedTime.html pub struct Asn1GeneralizedTimeRef; } impl fmt::Display for Asn1GeneralizedTimeRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { unsafe { let mem_bio = match MemBio::new() { Err(_) => return f.write_str("error"), Ok(m) => m, }; let print_result = cvt(ffi::ASN1_GENERALIZEDTIME_print( mem_bio.as_ptr(), self.as_ptr(), )); match print_result { Err(_) => f.write_str("error"), Ok(_) => f.write_str(str::from_utf8_unchecked(mem_bio.get_buf())), } } } } /// The type of an ASN.1 value. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct Asn1Type(c_int); #[allow(missing_docs)] // no need to document the constants impl Asn1Type { pub const EOC: Asn1Type = Asn1Type(ffi::V_ASN1_EOC); pub const BOOLEAN: Asn1Type = Asn1Type(ffi::V_ASN1_BOOLEAN); pub const INTEGER: Asn1Type = Asn1Type(ffi::V_ASN1_INTEGER); pub const BIT_STRING: Asn1Type = Asn1Type(ffi::V_ASN1_BIT_STRING); pub const OCTET_STRING: Asn1Type = Asn1Type(ffi::V_ASN1_OCTET_STRING); pub const NULL: Asn1Type = Asn1Type(ffi::V_ASN1_NULL); pub const OBJECT: Asn1Type = Asn1Type(ffi::V_ASN1_OBJECT); pub const OBJECT_DESCRIPTOR: Asn1Type = Asn1Type(ffi::V_ASN1_OBJECT_DESCRIPTOR); pub const EXTERNAL: Asn1Type = Asn1Type(ffi::V_ASN1_EXTERNAL); pub const REAL: Asn1Type = Asn1Type(ffi::V_ASN1_REAL); pub const ENUMERATED: Asn1Type = Asn1Type(ffi::V_ASN1_ENUMERATED); pub const UTF8STRING: Asn1Type = Asn1Type(ffi::V_ASN1_UTF8STRING); pub const SEQUENCE: Asn1Type = Asn1Type(ffi::V_ASN1_SEQUENCE); pub const SET: Asn1Type = Asn1Type(ffi::V_ASN1_SET); pub const NUMERICSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_NUMERICSTRING); pub const PRINTABLESTRING: Asn1Type = Asn1Type(ffi::V_ASN1_PRINTABLESTRING); pub const T61STRING: Asn1Type = Asn1Type(ffi::V_ASN1_T61STRING); pub const TELETEXSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_TELETEXSTRING); pub const VIDEOTEXSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_VIDEOTEXSTRING); pub const IA5STRING: Asn1Type = Asn1Type(ffi::V_ASN1_IA5STRING); pub const UTCTIME: Asn1Type = Asn1Type(ffi::V_ASN1_UTCTIME); pub const GENERALIZEDTIME: Asn1Type = Asn1Type(ffi::V_ASN1_GENERALIZEDTIME); pub const GRAPHICSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_GRAPHICSTRING); pub const ISO64STRING: Asn1Type = Asn1Type(ffi::V_ASN1_ISO64STRING); pub const VISIBLESTRING: Asn1Type = Asn1Type(ffi::V_ASN1_VISIBLESTRING); pub const GENERALSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_GENERALSTRING); pub const UNIVERSALSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_UNIVERSALSTRING); pub const BMPSTRING: Asn1Type = Asn1Type(ffi::V_ASN1_BMPSTRING); /// Constructs an `Asn1Type` from a raw OpenSSL value. pub fn from_raw(value: c_int) -> Self { Asn1Type(value) } /// Returns the raw OpenSSL value represented by this type. pub fn as_raw(&self) -> c_int { self.0 } } /// Difference between two ASN1 times. /// /// This `struct` is created by the [`diff`] method on [`Asn1TimeRef`]. See its /// documentation for more. 
/// /// [`diff`]: struct.Asn1TimeRef.html#method.diff /// [`Asn1TimeRef`]: struct.Asn1TimeRef.html #[derive(Debug, Clone, PartialEq, Eq, Hash)] #[cfg(ossl102)] pub struct TimeDiff { /// Difference in days pub days: c_int, /// Difference in seconds. /// /// This is always less than the number of seconds in a day. pub secs: c_int, } foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_TIME; fn drop = ffi::ASN1_TIME_free; /// Time storage and comparison /// /// Asn1Time should be used to store and share time information /// using certificates. If Asn1Time is set using a string, it must /// be in either YYMMDDHHMMSSZ, YYYYMMDDHHMMSSZ, or another ASN.1 format. /// /// [ASN_TIME_set] documentation at OpenSSL explains the ASN.1 implementation /// used by OpenSSL. /// /// [ASN_TIME_set]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_TIME_set.html pub struct Asn1Time; /// Reference to an [`Asn1Time`] /// /// [`Asn1Time`]: struct.Asn1Time.html pub struct Asn1TimeRef; } impl Asn1TimeRef { /// Find difference between two times /// /// This corresponds to [`ASN1_TIME_diff`]. /// /// [`ASN1_TIME_diff`]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_TIME_diff.html #[cfg(ossl102)] pub fn diff(&self, compare: &Self) -> Result { let mut days = 0; let mut secs = 0; let other = compare.as_ptr(); let err = unsafe { ffi::ASN1_TIME_diff(&mut days, &mut secs, self.as_ptr(), other) }; match err { 0 => Err(ErrorStack::get()), _ => Ok(TimeDiff { days, secs }), } } /// Compare two times /// /// This corresponds to [`ASN1_TIME_compare`] but is implemented using [`diff`] so that it is /// also supported on older versions of OpenSSL. /// /// [`ASN1_TIME_compare`]: https://www.openssl.org/docs/man1.1.1/man3/ASN1_TIME_compare.html /// [`diff`]: struct.Asn1TimeRef.html#method.diff #[cfg(ossl102)] pub fn compare(&self, other: &Self) -> Result { let d = self.diff(other)?; if d.days > 0 || d.secs > 0 { return Ok(Ordering::Less); } if d.days < 0 || d.secs < 0 { return Ok(Ordering::Greater); } Ok(Ordering::Equal) } } #[cfg(ossl102)] impl PartialEq for Asn1TimeRef { fn eq(&self, other: &Asn1TimeRef) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl PartialEq for Asn1TimeRef { fn eq(&self, other: &Asn1Time) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl<'a> PartialEq for &'a Asn1TimeRef { fn eq(&self, other: &Asn1Time) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl PartialOrd for Asn1TimeRef { fn partial_cmp(&self, other: &Asn1TimeRef) -> Option { self.compare(other).ok() } } #[cfg(ossl102)] impl PartialOrd for Asn1TimeRef { fn partial_cmp(&self, other: &Asn1Time) -> Option { self.compare(other).ok() } } #[cfg(ossl102)] impl<'a> PartialOrd for &'a Asn1TimeRef { fn partial_cmp(&self, other: &Asn1Time) -> Option { self.compare(other).ok() } } impl fmt::Display for Asn1TimeRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { unsafe { let mem_bio = match MemBio::new() { Err(_) => return f.write_str("error"), Ok(m) => m, }; let print_result = cvt(ffi::ASN1_TIME_print(mem_bio.as_ptr(), self.as_ptr())); match print_result { Err(_) => f.write_str("error"), Ok(_) => f.write_str(str::from_utf8_unchecked(mem_bio.get_buf())), } } } } impl fmt::Debug for Asn1TimeRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(&self.to_string()) } } impl Asn1Time { fn new() -> Result { ffi::init(); unsafe { let handle = 
cvt_p(ffi::ASN1_TIME_new())?; Ok(Asn1Time::from_ptr(handle)) } } fn from_period(period: c_long) -> Result { ffi::init(); unsafe { let handle = cvt_p(ffi::X509_gmtime_adj(ptr::null_mut(), period))?; Ok(Asn1Time::from_ptr(handle)) } } /// Creates a new time on specified interval in days from now pub fn days_from_now(days: u32) -> Result { Asn1Time::from_period(days as c_long * 60 * 60 * 24) } /// Creates a new time from the specified `time_t` value pub fn from_unix(time: time_t) -> Result { ffi::init(); unsafe { let handle = cvt_p(ffi::ASN1_TIME_set(ptr::null_mut(), time))?; Ok(Asn1Time::from_ptr(handle)) } } /// Creates a new time corresponding to the specified ASN1 time string. /// /// This corresponds to [`ASN1_TIME_set_string`]. /// /// [`ASN1_TIME_set_string`]: https://www.openssl.org/docs/manmaster/man3/ASN1_TIME_set_string.html #[allow(clippy::should_implement_trait)] pub fn from_str(s: &str) -> Result { unsafe { let s = CString::new(s).unwrap(); let time = Asn1Time::new()?; cvt(ffi::ASN1_TIME_set_string(time.as_ptr(), s.as_ptr()))?; Ok(time) } } /// Creates a new time corresponding to the specified X509 time string. /// /// This corresponds to [`ASN1_TIME_set_string_X509`]. /// /// Requires OpenSSL 1.1.1 or newer. /// /// [`ASN1_TIME_set_string_X509`]: https://www.openssl.org/docs/manmaster/man3/ASN1_TIME_set_string.html #[cfg(ossl111)] pub fn from_str_x509(s: &str) -> Result { unsafe { let s = CString::new(s).unwrap(); let time = Asn1Time::new()?; cvt(ffi::ASN1_TIME_set_string_X509(time.as_ptr(), s.as_ptr()))?; Ok(time) } } } #[cfg(ossl102)] impl PartialEq for Asn1Time { fn eq(&self, other: &Asn1Time) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl PartialEq for Asn1Time { fn eq(&self, other: &Asn1TimeRef) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl<'a> PartialEq<&'a Asn1TimeRef> for Asn1Time { fn eq(&self, other: &&'a Asn1TimeRef) -> bool { self.diff(other) .map(|t| t.days == 0 && t.secs == 0) .unwrap_or(false) } } #[cfg(ossl102)] impl PartialOrd for Asn1Time { fn partial_cmp(&self, other: &Asn1Time) -> Option { self.compare(other).ok() } } #[cfg(ossl102)] impl PartialOrd for Asn1Time { fn partial_cmp(&self, other: &Asn1TimeRef) -> Option { self.compare(other).ok() } } #[cfg(ossl102)] impl<'a> PartialOrd<&'a Asn1TimeRef> for Asn1Time { fn partial_cmp(&self, other: &&'a Asn1TimeRef) -> Option { self.compare(other).ok() } } foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_STRING; fn drop = ffi::ASN1_STRING_free; /// Primary ASN.1 type used by OpenSSL /// /// Almost all ASN.1 types in OpenSSL are represented by ASN1_STRING /// structures. This implementation uses [ASN1_STRING-to_UTF8] to preserve /// compatibility with Rust's String. /// /// [ASN1_STRING-to_UTF8]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_STRING_to_UTF8.html pub struct Asn1String; /// Reference to [`Asn1String`] /// /// [`Asn1String`]: struct.Asn1String.html pub struct Asn1StringRef; } impl Asn1StringRef { /// Converts the ASN.1 underlying format to UTF8 /// /// ASN.1 strings may utilize UTF-16, ASCII, BMP, or UTF8. This is important to /// consume the string in a meaningful way without knowing the underlying /// format. 
pub fn as_utf8(&self) -> Result { unsafe { let mut ptr = ptr::null_mut(); let len = ffi::ASN1_STRING_to_UTF8(&mut ptr, self.as_ptr()); if len < 0 { return Err(ErrorStack::get()); } Ok(OpensslString::from_ptr(ptr as *mut c_char)) } } /// Return the string as an array of bytes. /// /// The bytes do not directly correspond to UTF-8 encoding. To interact with /// strings in rust, it is preferable to use [`as_utf8`] /// /// [`as_utf8`]: struct.Asn1String.html#method.as_utf8 pub fn as_slice(&self) -> &[u8] { unsafe { slice::from_raw_parts(ASN1_STRING_get0_data(self.as_ptr()), self.len()) } } /// Returns the number of bytes in the string. pub fn len(&self) -> usize { unsafe { ffi::ASN1_STRING_length(self.as_ptr()) as usize } } /// Determines if the string is empty. pub fn is_empty(&self) -> bool { self.len() == 0 } } impl fmt::Debug for Asn1StringRef { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { match self.as_utf8() { Ok(openssl_string) => openssl_string.fmt(fmt), Err(_) => fmt.write_str("error"), } } } foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_INTEGER; fn drop = ffi::ASN1_INTEGER_free; /// Numeric representation /// /// Integers in ASN.1 may include BigNum, int64 or uint64. BigNum implementation /// can be found within [`bn`] module. /// /// OpenSSL documentation includes [`ASN1_INTEGER_set`]. /// /// [`bn`]: ../bn/index.html /// [`ASN1_INTEGER_set`]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_INTEGER_set.html pub struct Asn1Integer; /// Reference to [`Asn1Integer`] /// /// [`Asn1Integer`]: struct.Asn1Integer.html pub struct Asn1IntegerRef; } impl Asn1Integer { /// Converts a bignum to an `Asn1Integer`. /// /// Corresponds to [`BN_to_ASN1_INTEGER`]. Also see /// [`BigNumRef::to_asn1_integer`]. /// /// [`BN_to_ASN1_INTEGER`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_to_ASN1_INTEGER.html /// [`BigNumRef::to_asn1_integer`]: ../bn/struct.BigNumRef.html#method.to_asn1_integer pub fn from_bn(bn: &BigNumRef) -> Result { bn.to_asn1_integer() } } impl Asn1IntegerRef { #[allow(missing_docs)] #[deprecated(since = "0.10.6", note = "use to_bn instead")] pub fn get(&self) -> i64 { unsafe { ffi::ASN1_INTEGER_get(self.as_ptr()) as i64 } } /// Converts the integer to a `BigNum`. /// /// This corresponds to [`ASN1_INTEGER_to_BN`]. /// /// [`ASN1_INTEGER_to_BN`]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_INTEGER_get.html pub fn to_bn(&self) -> Result { unsafe { cvt_p(ffi::ASN1_INTEGER_to_BN(self.as_ptr(), ptr::null_mut())) .map(|p| BigNum::from_ptr(p)) } } /// Sets the ASN.1 value to the value of a signed 32-bit integer, for larger numbers /// see [`bn`]. /// /// OpenSSL documentation at [`ASN1_INTEGER_set`] /// /// [`bn`]: ../bn/struct.BigNumRef.html#method.to_asn1_integer /// [`ASN1_INTEGER_set`]: https://www.openssl.org/docs/man1.1.0/crypto/ASN1_INTEGER_set.html pub fn set(&mut self, value: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::ASN1_INTEGER_set(self.as_ptr(), value as c_long)).map(|_| ()) } } } foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_BIT_STRING; fn drop = ffi::ASN1_BIT_STRING_free; /// Sequence of bytes /// /// Asn1BitString is used in [`x509`] certificates for the signature. /// The bit string acts as a collection of bytes. /// /// [`x509`]: ../x509/struct.X509.html#method.signature pub struct Asn1BitString; /// Reference to [`Asn1BitString`] /// /// [`Asn1BitString`]: struct.Asn1BitString.html pub struct Asn1BitStringRef; } impl Asn1BitStringRef { /// Returns the Asn1BitString as a slice. 
pub fn as_slice(&self) -> &[u8] { unsafe { slice::from_raw_parts(ASN1_STRING_get0_data(self.as_ptr() as *mut _), self.len()) } } /// Returns the number of bytes in the string. pub fn len(&self) -> usize { unsafe { ffi::ASN1_STRING_length(self.as_ptr() as *const _) as usize } } /// Determines if the string is empty. pub fn is_empty(&self) -> bool { self.len() == 0 } } foreign_type_and_impl_send_sync! { type CType = ffi::ASN1_OBJECT; fn drop = ffi::ASN1_OBJECT_free; /// Object Identifier /// /// Represents an ASN.1 Object. Typically, NIDs, or numeric identifiers /// are stored as a table within the [`Nid`] module. These constants are /// used to determine attributes of a certificate, such as mapping the /// attribute "CommonName" to "CN" which is represented as the OID of 13. /// This attribute is a constant in the [`nid::COMMONNAME`]. /// /// OpenSSL documentation at [`OBJ_nid2obj`] /// /// [`Nid`]: ../nid/index.html /// [`nid::COMMONNAME`]: ../nid/constant.COMMONNAME.html /// [`OBJ_nid2obj`]: https://www.openssl.org/docs/man1.1.0/crypto/OBJ_obj2nid.html pub struct Asn1Object; /// Reference to [`Asn1Object`] /// /// [`Asn1Object`]: struct.Asn1Object.html pub struct Asn1ObjectRef; } impl Asn1Object { /// Constructs an ASN.1 Object Identifier from a string representation of /// the OID. /// /// This corresponds to [`OBJ_txt2obj`]. /// /// [`OBJ_txt2obj`]: https://www.openssl.org/docs/man1.1.0/man3/OBJ_txt2obj.html #[allow(clippy::should_implement_trait)] pub fn from_str(txt: &str) -> Result { unsafe { ffi::init(); let txt = CString::new(txt).unwrap(); let obj: *mut ffi::ASN1_OBJECT = cvt_p(ffi::OBJ_txt2obj(txt.as_ptr() as *const _, 0))?; Ok(Asn1Object::from_ptr(obj)) } } /// Return the OID as an DER encoded array of bytes. This is the ASN.1 /// value, not including tag or length. /// /// This corresponds to [`OBJ_get0_data`]. /// /// Requires OpenSSL 1.1.1 or newer. /// /// [`OBJ_get0_data`]: https://www.openssl.org/docs/man1.1.0/man3/OBJ_get0_data.html #[cfg(ossl111)] pub fn as_slice(&self) -> &[u8] { unsafe { let len = ffi::OBJ_length(self.as_ptr()); slice::from_raw_parts(ffi::OBJ_get0_data(self.as_ptr()), len) } } } impl Asn1ObjectRef { /// Returns the NID associated with this OID. pub fn nid(&self) -> Nid { unsafe { Nid::from_raw(ffi::OBJ_obj2nid(self.as_ptr())) } } } impl fmt::Display for Asn1ObjectRef { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { unsafe { let mut buf = [0; 80]; let len = ffi::OBJ_obj2txt( buf.as_mut_ptr() as *mut _, buf.len() as c_int, self.as_ptr(), 0, ); match str::from_utf8(&buf[..len as usize]) { Err(_) => fmt.write_str("error"), Ok(s) => fmt.write_str(s), } } } } impl fmt::Debug for Asn1ObjectRef { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.write_str(self.to_string().as_str()) } } cfg_if! { if #[cfg(any(ossl110, libressl273))] { use ffi::ASN1_STRING_get0_data; } else { #[allow(bad_style)] unsafe fn ASN1_STRING_get0_data(s: *mut ffi::ASN1_STRING) -> *const ::libc::c_uchar { ffi::ASN1_STRING_data(s) } } } #[cfg(test)] mod tests { use super::*; use crate::bn::BigNum; use crate::nid::Nid; /// Tests conversion between BigNum and Asn1Integer. 
#[test] fn bn_cvt() { fn roundtrip(bn: BigNum) { let large = Asn1Integer::from_bn(&bn).unwrap(); assert_eq!(large.to_bn().unwrap(), bn); } roundtrip(BigNum::from_dec_str("1000000000000000000000000000000000").unwrap()); roundtrip(-BigNum::from_dec_str("1000000000000000000000000000000000").unwrap()); roundtrip(BigNum::from_u32(1234).unwrap()); roundtrip(-BigNum::from_u32(1234).unwrap()); } #[test] fn time_from_str() { Asn1Time::from_str("99991231235959Z").unwrap(); #[cfg(ossl111)] Asn1Time::from_str_x509("99991231235959Z").unwrap(); } #[test] fn time_from_unix() { let t = Asn1Time::from_unix(0).unwrap(); assert_eq!("Jan 1 00:00:00 1970 GMT", t.to_string()); } #[test] #[cfg(ossl102)] fn time_eq() { let a = Asn1Time::from_str("99991231235959Z").unwrap(); let b = Asn1Time::from_str("99991231235959Z").unwrap(); let c = Asn1Time::from_str("99991231235958Z").unwrap(); let a_ref = a.as_ref(); let b_ref = b.as_ref(); let c_ref = c.as_ref(); assert!(a == b); assert!(a != c); assert!(a == b_ref); assert!(a != c_ref); assert!(b_ref == a); assert!(c_ref != a); assert!(a_ref == b_ref); assert!(a_ref != c_ref); } #[test] #[cfg(ossl102)] fn time_ord() { let a = Asn1Time::from_str("99991231235959Z").unwrap(); let b = Asn1Time::from_str("99991231235959Z").unwrap(); let c = Asn1Time::from_str("99991231235958Z").unwrap(); let a_ref = a.as_ref(); let b_ref = b.as_ref(); let c_ref = c.as_ref(); assert!(a >= b); assert!(a > c); assert!(b <= a); assert!(c < a); assert!(a_ref >= b); assert!(a_ref > c); assert!(b_ref <= a); assert!(c_ref < a); assert!(a >= b_ref); assert!(a > c_ref); assert!(b <= a_ref); assert!(c < a_ref); assert!(a_ref >= b_ref); assert!(a_ref > c_ref); assert!(b_ref <= a_ref); assert!(c_ref < a_ref); } #[test] fn object_from_str() { let object = Asn1Object::from_str("2.16.840.1.101.3.4.2.1").unwrap(); assert_eq!(object.nid(), Nid::SHA256); } #[test] fn object_from_str_with_invalid_input() { Asn1Object::from_str("NOT AN OID") .map(|object| object.to_string()) .expect_err("parsing invalid OID should fail"); } #[test] #[cfg(ossl111)] fn object_to_slice() { let object = Asn1Object::from_str("2.16.840.1.101.3.4.2.1").unwrap(); assert_eq!( object.as_slice(), &[0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01], ); } } vendor/openssl/src/aes.rs0000664000175000017500000002401314160055207016237 0ustar mwhudsonmwhudson//! Low level AES IGE and key wrapping functionality //! //! AES ECB, CBC, XTS, CTR, CFB, GCM and other conventional symmetric encryption //! modes are found in [`symm`]. This is the implementation of AES IGE and key wrapping //! //! Advanced Encryption Standard (AES) provides symmetric key cipher that //! the same key is used to encrypt and decrypt data. This implementation //! uses 128, 192, or 256 bit keys. This module provides functions to //! create a new key with [`new_encrypt`] and perform an encryption/decryption //! using that key with [`aes_ige`]. //! //! [`new_encrypt`]: struct.AesKey.html#method.new_encrypt //! [`aes_ige`]: fn.aes_ige.html //! //! The [`symm`] module should be used in preference to this module in most cases. //! The IGE block cypher is a non-traditional cipher mode. More traditional AES //! encryption methods are found in the [`Crypter`] and [`Cipher`] structs. //! //! [`symm`]: ../symm/index.html //! [`Crypter`]: ../symm/struct.Crypter.html //! [`Cipher`]: ../symm/struct.Cipher.html //! //! # Examples //! //! ## AES IGE //! ```rust //! use openssl::aes::{AesKey, aes_ige}; //! use openssl::symm::Mode; //! //! 
let key = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; //! let plaintext = b"\x12\x34\x56\x78\x90\x12\x34\x56\x12\x34\x56\x78\x90\x12\x34\x56"; //! let mut iv = *b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F\ //! \x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1A\x1B\x1C\x1D\x1E\x1F"; //! //! let key = AesKey::new_encrypt(key).unwrap(); //! let mut output = [0u8; 16]; //! aes_ige(plaintext, &mut output, &key, &mut iv, Mode::Encrypt); //! assert_eq!(output, *b"\xa6\xad\x97\x4d\x5c\xea\x1d\x36\xd2\xf3\x67\x98\x09\x07\xed\x32"); //! ``` //! //! ## Key wrapping //! ```rust //! use openssl::aes::{AesKey, unwrap_key, wrap_key}; //! //! let kek = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; //! let key_to_wrap = b"\x00\x11\x22\x33\x44\x55\x66\x77\x88\x99\xAA\xBB\xCC\xDD\xEE\xFF"; //! //! let enc_key = AesKey::new_encrypt(kek).unwrap(); //! let mut ciphertext = [0u8; 24]; //! wrap_key(&enc_key, None, &mut ciphertext, &key_to_wrap[..]).unwrap(); //! let dec_key = AesKey::new_decrypt(kek).unwrap(); //! let mut orig_key = [0u8; 16]; //! unwrap_key(&dec_key, None, &mut orig_key, &ciphertext[..]).unwrap(); //! //! assert_eq!(&orig_key[..], &key_to_wrap[..]); //! ``` //! use libc::{c_int, c_uint}; use std::mem::MaybeUninit; use std::ptr; use crate::symm::Mode; /// Provides Error handling for parsing keys. #[derive(Debug)] pub struct KeyError(()); /// The key used to encrypt or decrypt cipher blocks. pub struct AesKey(ffi::AES_KEY); impl AesKey { /// Prepares a key for encryption. /// /// # Failure /// /// Returns an error if the key is not 128, 192, or 256 bits. pub fn new_encrypt(key: &[u8]) -> Result { unsafe { assert!(key.len() <= c_int::max_value() as usize / 8); let mut aes_key = MaybeUninit::uninit(); let r = ffi::AES_set_encrypt_key( key.as_ptr() as *const _, key.len() as c_int * 8, aes_key.as_mut_ptr(), ); if r == 0 { Ok(AesKey(aes_key.assume_init())) } else { Err(KeyError(())) } } } /// Prepares a key for decryption. /// /// # Failure /// /// Returns an error if the key is not 128, 192, or 256 bits. pub fn new_decrypt(key: &[u8]) -> Result { unsafe { assert!(key.len() <= c_int::max_value() as usize / 8); let mut aes_key = MaybeUninit::uninit(); let r = ffi::AES_set_decrypt_key( key.as_ptr() as *const _, key.len() as c_int * 8, aes_key.as_mut_ptr(), ); if r == 0 { Ok(AesKey(aes_key.assume_init())) } else { Err(KeyError(())) } } } } /// Performs AES IGE encryption or decryption /// /// AES IGE (Infinite Garble Extension) is a form of AES block cipher utilized in /// OpenSSL. Infinite Garble refers to propagating forward errors. IGE, like other /// block ciphers implemented for AES requires an initialization vector. The IGE mode /// allows a stream of blocks to be encrypted or decrypted without having the entire /// plaintext available. For more information, visit [AES IGE Encryption]. /// /// This block cipher uses 16 byte blocks. The rust implementation will panic /// if the input or output does not meet this 16-byte boundary. Attention must /// be made in this low level implementation to pad the value to the 128-bit boundary. /// /// [AES IGE Encryption]: http://www.links.org/files/openssl-ige.pdf /// /// # Panics /// /// Panics if `in_` is not the same length as `out`, if that length is not a multiple of 16, or if /// `iv` is not at least 32 bytes. 
pub fn aes_ige(in_: &[u8], out: &mut [u8], key: &AesKey, iv: &mut [u8], mode: Mode) { unsafe { assert!(in_.len() == out.len()); assert!(in_.len() % ffi::AES_BLOCK_SIZE as usize == 0); assert!(iv.len() >= ffi::AES_BLOCK_SIZE as usize * 2); let mode = match mode { Mode::Encrypt => ffi::AES_ENCRYPT, Mode::Decrypt => ffi::AES_DECRYPT, }; ffi::AES_ige_encrypt( in_.as_ptr() as *const _, out.as_mut_ptr() as *mut _, in_.len(), &key.0, iv.as_mut_ptr() as *mut _, mode, ); } } /// Wrap a key, according to [RFC 3394](https://tools.ietf.org/html/rfc3394) /// /// * `key`: The key-encrypting-key to use. Must be a encrypting key /// * `iv`: The IV to use. You must use the same IV for both wrapping and unwrapping /// * `out`: The output buffer to store the ciphertext /// * `in_`: The input buffer, storing the key to be wrapped /// /// Returns the number of bytes written into `out` /// /// # Panics /// /// Panics if either `out` or `in_` do not have sizes that are a multiple of 8, or if /// `out` is not 8 bytes longer than `in_` pub fn wrap_key( key: &AesKey, iv: Option<[u8; 8]>, out: &mut [u8], in_: &[u8], ) -> Result { unsafe { assert!(out.len() >= in_.len() + 8); // Ciphertext is 64 bits longer (see 2.2.1) let written = ffi::AES_wrap_key( &key.0 as *const _ as *mut _, // this is safe, the implementation only uses the key as a const pointer. iv.as_ref() .map_or(ptr::null(), |iv| iv.as_ptr() as *const _), out.as_ptr() as *mut _, in_.as_ptr() as *const _, in_.len() as c_uint, ); if written <= 0 { Err(KeyError(())) } else { Ok(written as usize) } } } /// Unwrap a key, according to [RFC 3394](https://tools.ietf.org/html/rfc3394) /// /// * `key`: The key-encrypting-key to decrypt the wrapped key. Must be a decrypting key /// * `iv`: The same IV used for wrapping the key /// * `out`: The buffer to write the unwrapped key to /// * `in_`: The input ciphertext /// /// Returns the number of bytes written into `out` /// /// # Panics /// /// Panics if either `out` or `in_` do not have sizes that are a multiple of 8, or /// if `in_` is not 8 bytes longer than `out` pub fn unwrap_key( key: &AesKey, iv: Option<[u8; 8]>, out: &mut [u8], in_: &[u8], ) -> Result { unsafe { assert!(out.len() + 8 <= in_.len()); let written = ffi::AES_unwrap_key( &key.0 as *const _ as *mut _, // this is safe, the implementation only uses the key as a const pointer. 
iv.as_ref() .map_or(ptr::null(), |iv| iv.as_ptr() as *const _), out.as_ptr() as *mut _, in_.as_ptr() as *const _, in_.len() as c_uint, ); if written <= 0 { Err(KeyError(())) } else { Ok(written as usize) } } } #[cfg(test)] mod test { use hex::FromHex; use super::*; use crate::symm::Mode; // From https://www.mgp25.com/AESIGE/ #[test] fn ige_vector_1() { let raw_key = "000102030405060708090A0B0C0D0E0F"; let raw_iv = "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F"; let raw_pt = "0000000000000000000000000000000000000000000000000000000000000000"; let raw_ct = "1A8519A6557BE652E9DA8E43DA4EF4453CF456B4CA488AA383C79C98B34797CB"; let key = AesKey::new_encrypt(&Vec::from_hex(raw_key).unwrap()).unwrap(); let mut iv = Vec::from_hex(raw_iv).unwrap(); let pt = Vec::from_hex(raw_pt).unwrap(); let ct = Vec::from_hex(raw_ct).unwrap(); let mut ct_actual = vec![0; ct.len()]; aes_ige(&pt, &mut ct_actual, &key, &mut iv, Mode::Encrypt); assert_eq!(ct_actual, ct); let key = AesKey::new_decrypt(&Vec::from_hex(raw_key).unwrap()).unwrap(); let mut iv = Vec::from_hex(raw_iv).unwrap(); let mut pt_actual = vec![0; pt.len()]; aes_ige(&ct, &mut pt_actual, &key, &mut iv, Mode::Decrypt); assert_eq!(pt_actual, pt); } // from the RFC https://tools.ietf.org/html/rfc3394#section-2.2.3 #[test] fn test_wrap_unwrap() { let raw_key = Vec::from_hex("000102030405060708090A0B0C0D0E0F").unwrap(); let key_data = Vec::from_hex("00112233445566778899AABBCCDDEEFF").unwrap(); let expected_ciphertext = Vec::from_hex("1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5").unwrap(); let enc_key = AesKey::new_encrypt(&raw_key).unwrap(); let mut wrapped = [0; 24]; assert_eq!( wrap_key(&enc_key, None, &mut wrapped, &key_data).unwrap(), 24 ); assert_eq!(&wrapped[..], &expected_ciphertext[..]); let dec_key = AesKey::new_decrypt(&raw_key).unwrap(); let mut unwrapped = [0; 16]; assert_eq!( unwrap_key(&dec_key, None, &mut unwrapped, &wrapped).unwrap(), 16 ); assert_eq!(&unwrapped[..], &key_data[..]); } } vendor/openssl/src/cms.rs0000664000175000017500000002674714160055207016271 0ustar mwhudsonmwhudson//! SMIME implementation using CMS //! //! CMS (PKCS#7) is an encryption standard. It allows signing and encrypting data using //! X.509 certificates. The OpenSSL implementation of CMS is used in email encryption //! generated from a `Vec` of bytes. This `Vec` follows the smime protocol standards. //! Data accepted by this module will be smime type `enveloped-data`. use bitflags::bitflags; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_uint; use std::ptr; use crate::bio::{MemBio, MemBioSlice}; use crate::error::ErrorStack; use crate::pkey::{HasPrivate, PKeyRef}; use crate::stack::StackRef; use crate::symm::Cipher; use crate::x509::{X509Ref, X509}; use crate::{cvt, cvt_p}; bitflags! 
{ pub struct CMSOptions : c_uint { const TEXT = ffi::CMS_TEXT; const CMS_NOCERTS = ffi::CMS_NOCERTS; const NO_CONTENT_VERIFY = ffi::CMS_NO_CONTENT_VERIFY; const NO_ATTR_VERIFY = ffi::CMS_NO_ATTR_VERIFY; const NOSIGS = ffi::CMS_NOSIGS; const NOINTERN = ffi::CMS_NOINTERN; const NO_SIGNER_CERT_VERIFY = ffi::CMS_NO_SIGNER_CERT_VERIFY; const NOVERIFY = ffi::CMS_NOVERIFY; const DETACHED = ffi::CMS_DETACHED; const BINARY = ffi::CMS_BINARY; const NOATTR = ffi::CMS_NOATTR; const NOSMIMECAP = ffi::CMS_NOSMIMECAP; const NOOLDMIMETYPE = ffi::CMS_NOOLDMIMETYPE; const CRLFEOL = ffi::CMS_CRLFEOL; const STREAM = ffi::CMS_STREAM; const NOCRL = ffi::CMS_NOCRL; const PARTIAL = ffi::CMS_PARTIAL; const REUSE_DIGEST = ffi::CMS_REUSE_DIGEST; const USE_KEYID = ffi::CMS_USE_KEYID; const DEBUG_DECRYPT = ffi::CMS_DEBUG_DECRYPT; #[cfg(all(not(libressl), not(ossl101)))] const KEY_PARAM = ffi::CMS_KEY_PARAM; #[cfg(all(not(libressl), not(ossl101), not(ossl102)))] const ASCIICRLF = ffi::CMS_ASCIICRLF; } } foreign_type_and_impl_send_sync! { type CType = ffi::CMS_ContentInfo; fn drop = ffi::CMS_ContentInfo_free; /// High level CMS wrapper /// /// CMS supports nesting various types of data, including signatures, certificates, /// encrypted data, smime messages (encrypted email), and data digest. The ContentInfo /// content type is the encapsulation of all those content types. [`RFC 5652`] describes /// CMS and OpenSSL follows this RFC's implementation. /// /// [`RFC 5652`]: https://tools.ietf.org/html/rfc5652#page-6 pub struct CmsContentInfo; /// Reference to [`CMSContentInfo`] /// /// [`CMSContentInfo`]:struct.CmsContentInfo.html pub struct CmsContentInfoRef; } impl CmsContentInfoRef { /// Given the sender's private key, `pkey` and the recipient's certificiate, `cert`, /// decrypt the data in `self`. /// /// OpenSSL documentation at [`CMS_decrypt`] /// /// [`CMS_decrypt`]: https://www.openssl.org/docs/man1.1.0/crypto/CMS_decrypt.html pub fn decrypt(&self, pkey: &PKeyRef, cert: &X509) -> Result, ErrorStack> where T: HasPrivate, { unsafe { let pkey = pkey.as_ptr(); let cert = cert.as_ptr(); let out = MemBio::new()?; cvt(ffi::CMS_decrypt( self.as_ptr(), pkey, cert, ptr::null_mut(), out.as_ptr(), 0, ))?; Ok(out.get_buf().to_owned()) } } /// Given the sender's private key, `pkey`, /// decrypt the data in `self` without validating the recipient certificate. /// /// *Warning*: Not checking the recipient certificate may leave you vulnerable to Bleichenbacher's attack on PKCS#1 v1.5 RSA padding. /// See [`CMS_decrypt`] for more information. /// /// [`CMS_decrypt`]: https://www.openssl.org/docs/man1.1.0/crypto/CMS_decrypt.html // FIXME merge into decrypt pub fn decrypt_without_cert_check(&self, pkey: &PKeyRef) -> Result, ErrorStack> where T: HasPrivate, { unsafe { let pkey = pkey.as_ptr(); let out = MemBio::new()?; cvt(ffi::CMS_decrypt( self.as_ptr(), pkey, ptr::null_mut(), ptr::null_mut(), out.as_ptr(), 0, ))?; Ok(out.get_buf().to_owned()) } } to_der! { /// Serializes this CmsContentInfo using DER. /// /// OpenSSL documentation at [`i2d_CMS_ContentInfo`] /// /// [`i2d_CMS_ContentInfo`]: https://www.openssl.org/docs/man1.0.2/crypto/i2d_CMS_ContentInfo.html to_der, ffi::i2d_CMS_ContentInfo } to_pem! { /// Serializes this CmsContentInfo using DER. /// /// OpenSSL documentation at [`PEM_write_bio_CMS`] /// /// [`PEM_write_bio_CMS`]: https://www.openssl.org/docs/man1.1.0/man3/PEM_write_bio_CMS.html to_pem, ffi::PEM_write_bio_CMS } } impl CmsContentInfo { /// Parses a smime formatted `vec` of bytes into a `CmsContentInfo`. 
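///
/// A minimal usage sketch (the `message.p7m` path is hypothetical and error
/// handling is elided):
///
/// ```no_run
/// use openssl::cms::CmsContentInfo;
///
/// // Read an S/MIME `enveloped-data` message from disk (placeholder path).
/// let smime = std::fs::read("message.p7m").unwrap();
/// let cms = CmsContentInfo::smime_read_cms(&smime).unwrap();
/// ```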
/// /// OpenSSL documentation at [`SMIME_read_CMS`] /// /// [`SMIME_read_CMS`]: https://www.openssl.org/docs/man1.0.2/crypto/SMIME_read_CMS.html pub fn smime_read_cms(smime: &[u8]) -> Result { unsafe { let bio = MemBioSlice::new(smime)?; let cms = cvt_p(ffi::SMIME_read_CMS(bio.as_ptr(), ptr::null_mut()))?; Ok(CmsContentInfo::from_ptr(cms)) } } from_der! { /// Deserializes a DER-encoded ContentInfo structure. /// /// This corresponds to [`d2i_CMS_ContentInfo`]. /// /// [`d2i_CMS_ContentInfo`]: https://www.openssl.org/docs/manmaster/man3/d2i_X509.html from_der, CmsContentInfo, ffi::d2i_CMS_ContentInfo } from_pem! { /// Deserializes a PEM-encoded ContentInfo structure. /// /// This corresponds to [`PEM_read_bio_CMS`]. /// /// [`PEM_read_bio_CMS`]: https://www.openssl.org/docs/man1.1.0/man3/PEM_read_bio_CMS.html from_pem, CmsContentInfo, ffi::PEM_read_bio_CMS } /// Given a signing cert `signcert`, private key `pkey`, a certificate stack `certs`, /// data `data` and flags `flags`, create a CmsContentInfo struct. /// /// All arguments are optional. /// /// OpenSSL documentation at [`CMS_sign`] /// /// [`CMS_sign`]: https://www.openssl.org/docs/manmaster/man3/CMS_sign.html pub fn sign( signcert: Option<&X509Ref>, pkey: Option<&PKeyRef>, certs: Option<&StackRef>, data: Option<&[u8]>, flags: CMSOptions, ) -> Result where T: HasPrivate, { unsafe { let signcert = signcert.map_or(ptr::null_mut(), |p| p.as_ptr()); let pkey = pkey.map_or(ptr::null_mut(), |p| p.as_ptr()); let data_bio = match data { Some(data) => Some(MemBioSlice::new(data)?), None => None, }; let data_bio_ptr = data_bio.as_ref().map_or(ptr::null_mut(), |p| p.as_ptr()); let certs = certs.map_or(ptr::null_mut(), |p| p.as_ptr()); let cms = cvt_p(ffi::CMS_sign( signcert, pkey, certs, data_bio_ptr, flags.bits(), ))?; Ok(CmsContentInfo::from_ptr(cms)) } } /// Given a certificate stack `certs`, data `data`, cipher `cipher` and flags `flags`, /// create a CmsContentInfo struct. 
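///
/// A condensed sketch of encrypting to a single recipient (the certificate
/// path is a placeholder; the `cms_encrypt_decrypt` test below shows a full
/// round trip):
///
/// ```no_run
/// use openssl::cms::{CMSOptions, CmsContentInfo};
/// use openssl::stack::Stack;
/// use openssl::symm::Cipher;
/// use openssl::x509::X509;
///
/// // Hypothetical DER-encoded recipient certificate.
/// let recipient = X509::from_der(&std::fs::read("recipient.der").unwrap()).unwrap();
/// let mut certs = Stack::new().unwrap();
/// certs.push(recipient).unwrap();
///
/// let cms = CmsContentInfo::encrypt(
///     &certs,
///     b"My Message",
///     Cipher::aes_128_cbc(),
///     CMSOptions::empty(),
/// )
/// .unwrap();
/// let der = cms.to_der().unwrap();
/// ```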
/// /// OpenSSL documentation at [`CMS_encrypt`] /// /// [`CMS_encrypt`]: https://www.openssl.org/docs/manmaster/man3/CMS_encrypt.html pub fn encrypt( certs: &StackRef, data: &[u8], cipher: Cipher, flags: CMSOptions, ) -> Result { unsafe { let data_bio = MemBioSlice::new(data)?; let cms = cvt_p(ffi::CMS_encrypt( certs.as_ptr(), data_bio.as_ptr(), cipher.as_ptr(), flags.bits(), ))?; Ok(CmsContentInfo::from_ptr(cms)) } } } #[cfg(test)] mod test { use super::*; use crate::pkcs12::Pkcs12; use crate::stack::Stack; use crate::x509::X509; #[test] #[cfg_attr(ossl300, ignore)] // 3.0.0 can't load RC2-40-CBC fn cms_encrypt_decrypt() { // load cert with public key only let pub_cert_bytes = include_bytes!("../test/cms_pubkey.der"); let pub_cert = X509::from_der(pub_cert_bytes).expect("failed to load pub cert"); // load cert with private key let priv_cert_bytes = include_bytes!("../test/cms.p12"); let priv_cert = Pkcs12::from_der(priv_cert_bytes).expect("failed to load priv cert"); let priv_cert = priv_cert .parse("mypass") .expect("failed to parse priv cert"); // encrypt cms message using public key cert let input = String::from("My Message"); let mut cert_stack = Stack::new().expect("failed to create stack"); cert_stack .push(pub_cert) .expect("failed to add pub cert to stack"); let encrypt = CmsContentInfo::encrypt( &cert_stack, input.as_bytes(), Cipher::des_ede3_cbc(), CMSOptions::empty(), ) .expect("failed create encrypted cms"); // decrypt cms message using private key cert (DER) { let encrypted_der = encrypt.to_der().expect("failed to create der from cms"); let decrypt = CmsContentInfo::from_der(&encrypted_der).expect("failed read cms from der"); let decrypt_with_cert_check = decrypt .decrypt(&priv_cert.pkey, &priv_cert.cert) .expect("failed to decrypt cms"); let decrypt_with_cert_check = String::from_utf8(decrypt_with_cert_check) .expect("failed to create string from cms content"); let decrypt_without_cert_check = decrypt .decrypt_without_cert_check(&priv_cert.pkey) .expect("failed to decrypt cms"); let decrypt_without_cert_check = String::from_utf8(decrypt_without_cert_check) .expect("failed to create string from cms content"); assert_eq!(input, decrypt_with_cert_check); assert_eq!(input, decrypt_without_cert_check); } // decrypt cms message using private key cert (PEM) { let encrypted_pem = encrypt.to_pem().expect("failed to create pem from cms"); let decrypt = CmsContentInfo::from_pem(&encrypted_pem).expect("failed read cms from pem"); let decrypt_with_cert_check = decrypt .decrypt(&priv_cert.pkey, &priv_cert.cert) .expect("failed to decrypt cms"); let decrypt_with_cert_check = String::from_utf8(decrypt_with_cert_check) .expect("failed to create string from cms content"); let decrypt_without_cert_check = decrypt .decrypt_without_cert_check(&priv_cert.pkey) .expect("failed to decrypt cms"); let decrypt_without_cert_check = String::from_utf8(decrypt_without_cert_check) .expect("failed to create string from cms content"); assert_eq!(input, decrypt_with_cert_check); assert_eq!(input, decrypt_without_cert_check); } } } vendor/openssl/src/pkcs5.rs0000664000175000017500000002255414160055207016524 0ustar mwhudsonmwhudsonuse libc::c_int; use std::ptr; use crate::cvt; use crate::error::ErrorStack; use crate::hash::MessageDigest; use crate::symm::Cipher; #[derive(Clone, Eq, PartialEq, Hash, Debug)] pub struct KeyIvPair { pub key: Vec, pub iv: Option>, } /// Derives a key and an IV from various parameters. /// /// If specified, `salt` must be 8 bytes in length. 
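///
/// A minimal sketch of deriving an AES-256-CBC key and IV (the password and
/// salt below are placeholders; compare the `bytes_to_key` test at the bottom
/// of this file):
///
/// ```no_run
/// use openssl::hash::MessageDigest;
/// use openssl::pkcs5::bytes_to_key;
/// use openssl::symm::Cipher;
///
/// let pair = bytes_to_key(
///     Cipher::aes_256_cbc(),
///     MessageDigest::sha1(),
///     b"placeholder password",
///     Some(&b"saltsalt"[..]), // exactly 8 bytes
///     1,
/// )
/// .unwrap();
/// assert_eq!(pair.key.len(), 32);
/// assert_eq!(pair.iv.as_ref().map(|iv| iv.len()), Some(16));
/// ```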
/// /// If the total key and IV length is less than 16 bytes and MD5 is used then /// the algorithm is compatible with the key derivation algorithm from PKCS#5 /// v1.5 or PBKDF1 from PKCS#5 v2.0. /// /// New applications should not use this and instead use /// `pbkdf2_hmac` or another more modern key derivation algorithm. #[allow(clippy::useless_conversion)] pub fn bytes_to_key( cipher: Cipher, digest: MessageDigest, data: &[u8], salt: Option<&[u8]>, count: i32, ) -> Result { unsafe { assert!(data.len() <= c_int::max_value() as usize); let salt_ptr = match salt { Some(salt) => { assert_eq!(salt.len(), ffi::PKCS5_SALT_LEN as usize); salt.as_ptr() } None => ptr::null(), }; ffi::init(); let mut iv = cipher.iv_len().map(|l| vec![0; l]); let cipher = cipher.as_ptr(); let digest = digest.as_ptr(); let len = cvt(ffi::EVP_BytesToKey( cipher, digest, salt_ptr, ptr::null(), data.len() as c_int, count.into(), ptr::null_mut(), ptr::null_mut(), ))?; let mut key = vec![0; len as usize]; let iv_ptr = iv .as_mut() .map(|v| v.as_mut_ptr()) .unwrap_or(ptr::null_mut()); cvt(ffi::EVP_BytesToKey( cipher, digest, salt_ptr, data.as_ptr(), data.len() as c_int, count as c_int, key.as_mut_ptr(), iv_ptr, ))?; Ok(KeyIvPair { key, iv }) } } /// Derives a key from a password and salt using the PBKDF2-HMAC algorithm with a digest function. pub fn pbkdf2_hmac( pass: &[u8], salt: &[u8], iter: usize, hash: MessageDigest, key: &mut [u8], ) -> Result<(), ErrorStack> { unsafe { assert!(pass.len() <= c_int::max_value() as usize); assert!(salt.len() <= c_int::max_value() as usize); assert!(key.len() <= c_int::max_value() as usize); ffi::init(); cvt(ffi::PKCS5_PBKDF2_HMAC( pass.as_ptr() as *const _, pass.len() as c_int, salt.as_ptr(), salt.len() as c_int, iter as c_int, hash.as_ptr(), key.len() as c_int, key.as_mut_ptr(), )) .map(|_| ()) } } /// Derives a key from a password and salt using the scrypt algorithm. /// /// Requires OpenSSL 1.1.0 or newer. #[cfg(any(ossl110))] pub fn scrypt( pass: &[u8], salt: &[u8], n: u64, r: u64, p: u64, maxmem: u64, key: &mut [u8], ) -> Result<(), ErrorStack> { unsafe { ffi::init(); cvt(ffi::EVP_PBE_scrypt( pass.as_ptr() as *const _, pass.len(), salt.as_ptr() as *const _, salt.len(), n, r, p, maxmem, key.as_mut_ptr() as *mut _, key.len(), )) .map(|_| ()) } } #[cfg(test)] mod tests { use crate::hash::MessageDigest; use crate::symm::Cipher; // Test vectors from // https://git.lysator.liu.se/nettle/nettle/blob/nettle_3.1.1_release_20150424/testsuite/pbkdf2-test.c #[test] fn pbkdf2_hmac_sha256() { let mut buf = [0; 16]; super::pbkdf2_hmac(b"passwd", b"salt", 1, MessageDigest::sha256(), &mut buf).unwrap(); assert_eq!( buf, &[ 0x55_u8, 0xac_u8, 0x04_u8, 0x6e_u8, 0x56_u8, 0xe3_u8, 0x08_u8, 0x9f_u8, 0xec_u8, 0x16_u8, 0x91_u8, 0xc2_u8, 0x25_u8, 0x44_u8, 0xb6_u8, 0x05_u8, ][..] ); super::pbkdf2_hmac( b"Password", b"NaCl", 80000, MessageDigest::sha256(), &mut buf, ) .unwrap(); assert_eq!( buf, &[ 0x4d_u8, 0xdc_u8, 0xd8_u8, 0xf6_u8, 0x0b_u8, 0x98_u8, 0xbe_u8, 0x21_u8, 0x83_u8, 0x0c_u8, 0xee_u8, 0x5e_u8, 0xf2_u8, 0x27_u8, 0x01_u8, 0xf9_u8, ][..] 
); } // Test vectors from // https://git.lysator.liu.se/nettle/nettle/blob/nettle_3.1.1_release_20150424/testsuite/pbkdf2-test.c #[test] fn pbkdf2_hmac_sha512() { let mut buf = [0; 64]; super::pbkdf2_hmac(b"password", b"NaCL", 1, MessageDigest::sha512(), &mut buf).unwrap(); assert_eq!( &buf[..], &[ 0x73_u8, 0xde_u8, 0xcf_u8, 0xa5_u8, 0x8a_u8, 0xa2_u8, 0xe8_u8, 0x4f_u8, 0x94_u8, 0x77_u8, 0x1a_u8, 0x75_u8, 0x73_u8, 0x6b_u8, 0xb8_u8, 0x8b_u8, 0xd3_u8, 0xc7_u8, 0xb3_u8, 0x82_u8, 0x70_u8, 0xcf_u8, 0xb5_u8, 0x0c_u8, 0xb3_u8, 0x90_u8, 0xed_u8, 0x78_u8, 0xb3_u8, 0x05_u8, 0x65_u8, 0x6a_u8, 0xf8_u8, 0x14_u8, 0x8e_u8, 0x52_u8, 0x45_u8, 0x2b_u8, 0x22_u8, 0x16_u8, 0xb2_u8, 0xb8_u8, 0x09_u8, 0x8b_u8, 0x76_u8, 0x1f_u8, 0xc6_u8, 0x33_u8, 0x60_u8, 0x60_u8, 0xa0_u8, 0x9f_u8, 0x76_u8, 0x41_u8, 0x5e_u8, 0x9f_u8, 0x71_u8, 0xea_u8, 0x47_u8, 0xf9_u8, 0xe9_u8, 0x06_u8, 0x43_u8, 0x06_u8, ][..] ); super::pbkdf2_hmac( b"pass\0word", b"sa\0lt", 1, MessageDigest::sha512(), &mut buf, ) .unwrap(); assert_eq!( &buf[..], &[ 0x71_u8, 0xa0_u8, 0xec_u8, 0x84_u8, 0x2a_u8, 0xbd_u8, 0x5c_u8, 0x67_u8, 0x8b_u8, 0xcf_u8, 0xd1_u8, 0x45_u8, 0xf0_u8, 0x9d_u8, 0x83_u8, 0x52_u8, 0x2f_u8, 0x93_u8, 0x36_u8, 0x15_u8, 0x60_u8, 0x56_u8, 0x3c_u8, 0x4d_u8, 0x0d_u8, 0x63_u8, 0xb8_u8, 0x83_u8, 0x29_u8, 0x87_u8, 0x10_u8, 0x90_u8, 0xe7_u8, 0x66_u8, 0x04_u8, 0xa4_u8, 0x9a_u8, 0xf0_u8, 0x8f_u8, 0xe7_u8, 0xc9_u8, 0xf5_u8, 0x71_u8, 0x56_u8, 0xc8_u8, 0x79_u8, 0x09_u8, 0x96_u8, 0xb2_u8, 0x0f_u8, 0x06_u8, 0xbc_u8, 0x53_u8, 0x5e_u8, 0x5a_u8, 0xb5_u8, 0x44_u8, 0x0d_u8, 0xf7_u8, 0xe8_u8, 0x78_u8, 0x29_u8, 0x6f_u8, 0xa7_u8, ][..] ); super::pbkdf2_hmac( b"passwordPASSWORDpassword", b"salt\0\0\0", 50, MessageDigest::sha512(), &mut buf, ) .unwrap(); assert_eq!( &buf[..], &[ 0x01_u8, 0x68_u8, 0x71_u8, 0xa4_u8, 0xc4_u8, 0xb7_u8, 0x5f_u8, 0x96_u8, 0x85_u8, 0x7f_u8, 0xd2_u8, 0xb9_u8, 0xf8_u8, 0xca_u8, 0x28_u8, 0x02_u8, 0x3b_u8, 0x30_u8, 0xee_u8, 0x2a_u8, 0x39_u8, 0xf5_u8, 0xad_u8, 0xca_u8, 0xc8_u8, 0xc9_u8, 0x37_u8, 0x5f_u8, 0x9b_u8, 0xda_u8, 0x1c_u8, 0xcd_u8, 0x1b_u8, 0x6f_u8, 0x0b_u8, 0x2f_u8, 0xc3_u8, 0xad_u8, 0xda_u8, 0x50_u8, 0x54_u8, 0x12_u8, 0xe7_u8, 0x9d_u8, 0x89_u8, 0x00_u8, 0x56_u8, 0xc6_u8, 0x2e_u8, 0x52_u8, 0x4c_u8, 0x7d_u8, 0x51_u8, 0x15_u8, 0x4b_u8, 0x1a_u8, 0x85_u8, 0x34_u8, 0x57_u8, 0x5b_u8, 0xd0_u8, 0x2d_u8, 0xee_u8, 0x39_u8, ][..] 
); } #[test] fn bytes_to_key() { let salt = [16_u8, 34_u8, 19_u8, 23_u8, 141_u8, 4_u8, 207_u8, 221_u8]; let data = [ 143_u8, 210_u8, 75_u8, 63_u8, 214_u8, 179_u8, 155_u8, 241_u8, 242_u8, 31_u8, 154_u8, 56_u8, 198_u8, 145_u8, 192_u8, 64_u8, 2_u8, 245_u8, 167_u8, 220_u8, 55_u8, 119_u8, 233_u8, 136_u8, 139_u8, 27_u8, 71_u8, 242_u8, 119_u8, 175_u8, 65_u8, 207_u8, ]; let expected_key = vec![ 249_u8, 115_u8, 114_u8, 97_u8, 32_u8, 213_u8, 165_u8, 146_u8, 58_u8, 87_u8, 234_u8, 3_u8, 43_u8, 250_u8, 97_u8, 114_u8, 26_u8, 98_u8, 245_u8, 246_u8, 238_u8, 177_u8, 229_u8, 161_u8, 183_u8, 224_u8, 174_u8, 3_u8, 6_u8, 244_u8, 236_u8, 255_u8, ]; let expected_iv = vec![ 4_u8, 223_u8, 153_u8, 219_u8, 28_u8, 142_u8, 234_u8, 68_u8, 227_u8, 69_u8, 98_u8, 107_u8, 208_u8, 14_u8, 236_u8, 60_u8, ]; assert_eq!( super::bytes_to_key( Cipher::aes_256_cbc(), MessageDigest::sha1(), &data, Some(&salt), 1, ) .unwrap(), super::KeyIvPair { key: expected_key, iv: Some(expected_iv), } ); } #[test] #[cfg(any(ossl110))] fn scrypt() { let pass = "pleaseletmein"; let salt = "SodiumChloride"; let expected = "7023bdcb3afd7348461c06cd81fd38ebfda8fbba904f8e3ea9b543f6545da1f2d5432955613\ f0fcf62d49705242a9af9e61e85dc0d651e40dfcf017b45575887"; let mut actual = [0; 64]; super::scrypt( pass.as_bytes(), salt.as_bytes(), 16384, 8, 1, 0, &mut actual, ) .unwrap(); assert_eq!(hex::encode(&actual[..]), expected); } } vendor/openssl/src/ecdsa.rs0000664000175000017500000002011514160055207016545 0ustar mwhudsonmwhudson//! Low level Elliptic Curve Digital Signature Algorithm (ECDSA) functions. use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::mem; use std::ptr; use crate::bn::{BigNum, BigNumRef}; use crate::ec::EcKeyRef; use crate::error::ErrorStack; use crate::pkey::{HasPrivate, HasPublic}; use crate::util::ForeignTypeRefExt; use crate::{cvt_n, cvt_p}; foreign_type_and_impl_send_sync! { type CType = ffi::ECDSA_SIG; fn drop = ffi::ECDSA_SIG_free; /// A low level interface to ECDSA /// /// OpenSSL documentation at [`ECDSA_sign`] /// /// [`ECDSA_sign`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_sign.html pub struct EcdsaSig; /// Reference to [`EcdsaSig`] /// /// [`EcdsaSig`]: struct.EcdsaSig.html pub struct EcdsaSigRef; } impl EcdsaSig { /// Computes a digital signature of the hash value `data` using the private EC key eckey. /// /// OpenSSL documentation at [`ECDSA_do_sign`] /// /// [`ECDSA_do_sign`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_do_sign.html pub fn sign(data: &[u8], eckey: &EcKeyRef) -> Result where T: HasPrivate, { unsafe { assert!(data.len() <= c_int::max_value() as usize); let sig = cvt_p(ffi::ECDSA_do_sign( data.as_ptr(), data.len() as c_int, eckey.as_ptr(), ))?; Ok(EcdsaSig::from_ptr(sig)) } } /// Returns a new `EcdsaSig` by setting the `r` and `s` values associated with a /// ECDSA signature. /// /// OpenSSL documentation at [`ECDSA_SIG_set0`] /// /// [`ECDSA_SIG_set0`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_SIG_set0.html pub fn from_private_components(r: BigNum, s: BigNum) -> Result { unsafe { let sig = cvt_p(ffi::ECDSA_SIG_new())?; ECDSA_SIG_set0(sig, r.as_ptr(), s.as_ptr()); mem::forget((r, s)); Ok(EcdsaSig::from_ptr(sig)) } } from_der! { /// Decodes a DER-encoded ECDSA signature. /// /// This corresponds to [`d2i_ECDSA_SIG`]. /// /// [`d2i_ECDSA_SIG`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_ECDSA_SIG.html from_der, EcdsaSig, ffi::d2i_ECDSA_SIG } } impl EcdsaSigRef { to_der! 
{ /// Serializes the ECDSA signature into a DER-encoded ECDSASignature structure. /// /// This corresponds to [`i2d_ECDSA_SIG`]. /// /// [`i2d_ECDSA_SIG`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_ECDSA_SIG.html to_der, ffi::i2d_ECDSA_SIG } /// Verifies if the signature is a valid ECDSA signature using the given public key. /// /// OpenSSL documentation at [`ECDSA_do_verify`] /// /// [`ECDSA_do_verify`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_do_verify.html pub fn verify(&self, data: &[u8], eckey: &EcKeyRef) -> Result where T: HasPublic, { unsafe { assert!(data.len() <= c_int::max_value() as usize); cvt_n(ffi::ECDSA_do_verify( data.as_ptr(), data.len() as c_int, self.as_ptr(), eckey.as_ptr(), )) .map(|x| x == 1) } } /// Returns internal component: `r` of an `EcdsaSig`. (See X9.62 or FIPS 186-2) /// /// OpenSSL documentation at [`ECDSA_SIG_get0`] /// /// [`ECDSA_SIG_get0`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_SIG_get0.html pub fn r(&self) -> &BigNumRef { unsafe { let mut r = ptr::null(); ECDSA_SIG_get0(self.as_ptr(), &mut r, ptr::null_mut()); BigNumRef::from_const_ptr(r) } } /// Returns internal components: `s` of an `EcdsaSig`. (See X9.62 or FIPS 186-2) /// /// OpenSSL documentation at [`ECDSA_SIG_get0`] /// /// [`ECDSA_SIG_get0`]: https://www.openssl.org/docs/man1.1.0/crypto/ECDSA_SIG_get0.html pub fn s(&self) -> &BigNumRef { unsafe { let mut s = ptr::null(); ECDSA_SIG_get0(self.as_ptr(), ptr::null_mut(), &mut s); BigNumRef::from_const_ptr(s) } } } cfg_if! { if #[cfg(any(ossl110, libressl273))] { use ffi::{ECDSA_SIG_set0, ECDSA_SIG_get0}; } else { #[allow(bad_style)] unsafe fn ECDSA_SIG_set0( sig: *mut ffi::ECDSA_SIG, r: *mut ffi::BIGNUM, s: *mut ffi::BIGNUM, ) -> c_int { if r.is_null() || s.is_null() { return 0; } ffi::BN_clear_free((*sig).r); ffi::BN_clear_free((*sig).s); (*sig).r = r; (*sig).s = s; 1 } #[allow(bad_style)] unsafe fn ECDSA_SIG_get0( sig: *const ffi::ECDSA_SIG, pr: *mut *const ffi::BIGNUM, ps: *mut *const ffi::BIGNUM) { if !pr.is_null() { (*pr) = (*sig).r; } if !ps.is_null() { (*ps) = (*sig).s; } } } } #[cfg(test)] mod test { use super::*; use crate::ec::EcGroup; use crate::ec::EcKey; use crate::nid::Nid; use crate::pkey::{Private, Public}; fn get_public_key(group: &EcGroup, x: &EcKey) -> Result, ErrorStack> { EcKey::from_public_key(group, x.public_key()) } #[test] #[cfg_attr(osslconf = "OPENSSL_NO_EC2M", ignore)] fn sign_and_verify() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME192V1).unwrap(); let private_key = EcKey::generate(&group).unwrap(); let public_key = get_public_key(&group, &private_key).unwrap(); let private_key2 = EcKey::generate(&group).unwrap(); let public_key2 = get_public_key(&group, &private_key2).unwrap(); let data = String::from("hello"); let res = EcdsaSig::sign(data.as_bytes(), &private_key).unwrap(); // Signature can be verified using the correct data & correct public key let verification = res.verify(data.as_bytes(), &public_key).unwrap(); assert!(verification); // Signature will not be verified using the incorrect data but the correct public key let verification2 = res .verify(String::from("hello2").as_bytes(), &public_key) .unwrap(); assert!(!verification2); // Signature will not be verified using the correct data but the incorrect public key let verification3 = res.verify(data.as_bytes(), &public_key2).unwrap(); assert!(!verification3); } #[test] #[cfg_attr(osslconf = "OPENSSL_NO_EC2M", ignore)] fn check_private_components() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME192V1).unwrap(); let 
private_key = EcKey::generate(&group).unwrap(); let public_key = get_public_key(&group, &private_key).unwrap(); let data = String::from("hello"); let res = EcdsaSig::sign(data.as_bytes(), &private_key).unwrap(); let verification = res.verify(data.as_bytes(), &public_key).unwrap(); assert!(verification); let r = res.r().to_owned().unwrap(); let s = res.s().to_owned().unwrap(); let res2 = EcdsaSig::from_private_components(r, s).unwrap(); let verification2 = res2.verify(data.as_bytes(), &public_key).unwrap(); assert!(verification2); } #[test] #[cfg_attr(osslconf = "OPENSSL_NO_EC2M", ignore)] fn serialize_deserialize() { let group = EcGroup::from_curve_name(Nid::SECP256K1).unwrap(); let private_key = EcKey::generate(&group).unwrap(); let public_key = get_public_key(&group, &private_key).unwrap(); let data = String::from("hello"); let res = EcdsaSig::sign(data.as_bytes(), &private_key).unwrap(); let der = res.to_der().unwrap(); let sig = EcdsaSig::from_der(&der).unwrap(); let verification = sig.verify(data.as_bytes(), &public_key).unwrap(); assert!(verification); } } vendor/openssl/src/sign.rs0000664000175000017500000007113514160055207016436 0ustar mwhudsonmwhudson//! Message signatures. //! //! The `Signer` allows for the computation of cryptographic signatures of //! data given a private key. The `Verifier` can then be used with the //! corresponding public key to verify the integrity and authenticity of that //! data given the signature. //! //! # Examples //! //! Sign and verify data given an RSA keypair: //! //! ```rust //! use openssl::sign::{Signer, Verifier}; //! use openssl::rsa::Rsa; //! use openssl::pkey::PKey; //! use openssl::hash::MessageDigest; //! //! // Generate a keypair //! let keypair = Rsa::generate(2048).unwrap(); //! let keypair = PKey::from_rsa(keypair).unwrap(); //! //! let data = b"hello, world!"; //! let data2 = b"hola, mundo!"; //! //! // Sign the data //! let mut signer = Signer::new(MessageDigest::sha256(), &keypair).unwrap(); //! signer.update(data).unwrap(); //! signer.update(data2).unwrap(); //! let signature = signer.sign_to_vec().unwrap(); //! //! // Verify the data //! let mut verifier = Verifier::new(MessageDigest::sha256(), &keypair).unwrap(); //! verifier.update(data).unwrap(); //! verifier.update(data2).unwrap(); //! assert!(verifier.verify(&signature).unwrap()); //! ``` //! //! Compute an HMAC: //! //! ```rust //! use openssl::hash::MessageDigest; //! use openssl::memcmp; //! use openssl::pkey::PKey; //! use openssl::sign::Signer; //! //! // Create a PKey //! let key = PKey::hmac(b"my secret").unwrap(); //! //! let data = b"hello, world!"; //! let data2 = b"hola, mundo!"; //! //! // Compute the HMAC //! let mut signer = Signer::new(MessageDigest::sha256(), &key).unwrap(); //! signer.update(data).unwrap(); //! signer.update(data2).unwrap(); //! let hmac = signer.sign_to_vec().unwrap(); //! //! // `Verifier` cannot be used with HMACs; use the `memcmp::eq` function instead //! // //! // Do not simply check for equality with `==`! //! # let target = hmac.clone(); //! assert!(memcmp::eq(&hmac, &target)); //! ``` use cfg_if::cfg_if; use foreign_types::ForeignTypeRef; use libc::c_int; use std::io::{self, Write}; use std::marker::PhantomData; use std::ptr; use crate::error::ErrorStack; use crate::hash::MessageDigest; use crate::pkey::{HasPrivate, HasPublic, PKeyRef}; use crate::rsa::Padding; use crate::{cvt, cvt_p}; cfg_if! 
{ if #[cfg(ossl110)] { use ffi::{EVP_MD_CTX_free, EVP_MD_CTX_new}; } else { use ffi::{EVP_MD_CTX_create as EVP_MD_CTX_new, EVP_MD_CTX_destroy as EVP_MD_CTX_free}; } } /// Salt lengths that must be used with `set_rsa_pss_saltlen`. pub struct RsaPssSaltlen(c_int); impl RsaPssSaltlen { /// Returns the integer representation of `RsaPssSaltlen`. fn as_raw(&self) -> c_int { self.0 } /// Sets the salt length to the given value. pub fn custom(val: c_int) -> RsaPssSaltlen { RsaPssSaltlen(val) } /// The salt length is set to the digest length. /// Corresponds to the special value `-1`. pub const DIGEST_LENGTH: RsaPssSaltlen = RsaPssSaltlen(-1); /// The salt length is set to the maximum permissible value. /// Corresponds to the special value `-2`. pub const MAXIMUM_LENGTH: RsaPssSaltlen = RsaPssSaltlen(-2); } /// A type which computes cryptographic signatures of data. pub struct Signer<'a> { md_ctx: *mut ffi::EVP_MD_CTX, pctx: *mut ffi::EVP_PKEY_CTX, _p: PhantomData<&'a ()>, } unsafe impl<'a> Sync for Signer<'a> {} unsafe impl<'a> Send for Signer<'a> {} impl<'a> Drop for Signer<'a> { fn drop(&mut self) { // pkey_ctx is owned by the md_ctx, so no need to explicitly free it. unsafe { EVP_MD_CTX_free(self.md_ctx); } } } #[allow(clippy::len_without_is_empty)] impl<'a> Signer<'a> { /// Creates a new `Signer`. /// /// This cannot be used with Ed25519 or Ed448 keys. Please refer to /// `new_without_digest`. /// /// OpenSSL documentation at [`EVP_DigestSignInit`]. /// /// [`EVP_DigestSignInit`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestSignInit.html pub fn new(type_: MessageDigest, pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPrivate, { Self::new_intern(Some(type_), pkey) } /// Creates a new `Signer` without a digest. /// /// This is the only way to create a `Verifier` for Ed25519 or Ed448 keys. /// It can also be used to create a CMAC. /// /// OpenSSL documentation at [`EVP_DigestSignInit`]. /// /// [`EVP_DigestSignInit`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestSignInit.html pub fn new_without_digest(pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPrivate, { Self::new_intern(None, pkey) } fn new_intern( type_: Option, pkey: &'a PKeyRef, ) -> Result, ErrorStack> where T: HasPrivate, { unsafe { ffi::init(); let ctx = cvt_p(EVP_MD_CTX_new())?; let mut pctx: *mut ffi::EVP_PKEY_CTX = ptr::null_mut(); let r = ffi::EVP_DigestSignInit( ctx, &mut pctx, type_.map(|t| t.as_ptr()).unwrap_or(ptr::null()), ptr::null_mut(), pkey.as_ptr(), ); if r != 1 { EVP_MD_CTX_free(ctx); return Err(ErrorStack::get()); } assert!(!pctx.is_null()); Ok(Signer { md_ctx: ctx, pctx, _p: PhantomData, }) } } /// Returns the RSA padding mode in use. /// /// This is only useful for RSA keys. /// /// This corresponds to `EVP_PKEY_CTX_get_rsa_padding`. pub fn rsa_padding(&self) -> Result { unsafe { let mut pad = 0; cvt(ffi::EVP_PKEY_CTX_get_rsa_padding(self.pctx, &mut pad)) .map(|_| Padding::from_raw(pad)) } } /// Sets the RSA padding mode. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_padding`]. /// /// [`EVP_PKEY_CTX_set_rsa_padding`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_padding.html pub fn set_rsa_padding(&mut self, padding: Padding) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_padding( self.pctx, padding.as_raw(), )) .map(|_| ()) } } /// Sets the RSA PSS salt length. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_pss_saltlen`]. 
/// /// [`EVP_PKEY_CTX_set_rsa_pss_saltlen`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_pss_saltlen.html pub fn set_rsa_pss_saltlen(&mut self, len: RsaPssSaltlen) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_pss_saltlen( self.pctx, len.as_raw(), )) .map(|_| ()) } } /// Sets the RSA MGF1 algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_mgf1_md`]. /// /// [`EVP_PKEY_CTX_set_rsa_mgf1_md`]: https://www.openssl.org/docs/manmaster/man7/RSA-PSS.html pub fn set_rsa_mgf1_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_mgf1_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Feeds more data into the `Signer`. /// /// Please note that PureEdDSA (Ed25519 and Ed448 keys) do not support streaming. /// Use `sign_oneshot` instead. /// /// OpenSSL documentation at [`EVP_DigestUpdate`]. /// /// [`EVP_DigestUpdate`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestInit.html pub fn update(&mut self, buf: &[u8]) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_DigestUpdate( self.md_ctx, buf.as_ptr() as *const _, buf.len(), )) .map(|_| ()) } } /// Computes an upper bound on the signature length. /// /// The actual signature may be shorter than this value. Check the return value of /// `sign` to get the exact length. /// /// OpenSSL documentation at [`EVP_DigestSignFinal`]. /// /// [`EVP_DigestSignFinal`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_DigestSignFinal.html pub fn len(&self) -> Result { self.len_intern() } #[cfg(not(ossl111))] fn len_intern(&self) -> Result { unsafe { let mut len = 0; cvt(ffi::EVP_DigestSignFinal( self.md_ctx, ptr::null_mut(), &mut len, ))?; Ok(len) } } #[cfg(ossl111)] fn len_intern(&self) -> Result { unsafe { let mut len = 0; cvt(ffi::EVP_DigestSign( self.md_ctx, ptr::null_mut(), &mut len, ptr::null(), 0, ))?; Ok(len) } } /// Writes the signature into the provided buffer, returning the number of bytes written. /// /// This method will fail if the buffer is not large enough for the signature. Use the `len` /// method to get an upper bound on the required size. /// /// OpenSSL documentation at [`EVP_DigestSignFinal`]. /// /// [`EVP_DigestSignFinal`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_DigestSignFinal.html pub fn sign(&self, buf: &mut [u8]) -> Result { unsafe { let mut len = buf.len(); cvt(ffi::EVP_DigestSignFinal( self.md_ctx, buf.as_mut_ptr() as *mut _, &mut len, ))?; Ok(len) } } /// Returns the signature. /// /// This is a simple convenience wrapper over `len` and `sign`. pub fn sign_to_vec(&self) -> Result, ErrorStack> { let mut buf = vec![0; self.len()?]; let len = self.sign(&mut buf)?; // The advertised length is not always equal to the real length for things like DSA buf.truncate(len); Ok(buf) } /// Signs the data in data_buf and writes the signature into the buffer sig_buf, returning the /// number of bytes written. /// /// For PureEdDSA (Ed25519 and Ed448 keys) this is the only way to sign data. /// /// This method will fail if the buffer is not large enough for the signature. Use the `len` /// method to get an upper bound on the required size. /// /// OpenSSL documentation at [`EVP_DigestSign`]. 
/// /// [`EVP_DigestSign`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_DigestSign.html #[cfg(ossl111)] pub fn sign_oneshot( &mut self, sig_buf: &mut [u8], data_buf: &[u8], ) -> Result { unsafe { let mut sig_len = sig_buf.len(); cvt(ffi::EVP_DigestSign( self.md_ctx, sig_buf.as_mut_ptr() as *mut _, &mut sig_len, data_buf.as_ptr() as *const _, data_buf.len(), ))?; Ok(sig_len) } } /// Returns the signature. /// /// This is a simple convenience wrapper over `len` and `sign_oneshot`. #[cfg(ossl111)] pub fn sign_oneshot_to_vec(&mut self, data_buf: &[u8]) -> Result, ErrorStack> { let mut sig_buf = vec![0; self.len()?]; let len = self.sign_oneshot(&mut sig_buf, data_buf)?; // The advertised length is not always equal to the real length for things like DSA sig_buf.truncate(len); Ok(sig_buf) } } impl<'a> Write for Signer<'a> { fn write(&mut self, buf: &[u8]) -> io::Result { self.update(buf)?; Ok(buf.len()) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } pub struct Verifier<'a> { md_ctx: *mut ffi::EVP_MD_CTX, pctx: *mut ffi::EVP_PKEY_CTX, pkey_pd: PhantomData<&'a ()>, } unsafe impl<'a> Sync for Verifier<'a> {} unsafe impl<'a> Send for Verifier<'a> {} impl<'a> Drop for Verifier<'a> { fn drop(&mut self) { // pkey_ctx is owned by the md_ctx, so no need to explicitly free it. unsafe { EVP_MD_CTX_free(self.md_ctx); } } } /// A type which verifies cryptographic signatures of data. impl<'a> Verifier<'a> { /// Creates a new `Verifier`. /// /// This cannot be used with Ed25519 or Ed448 keys. Please refer to /// `new_without_digest`. /// /// OpenSSL documentation at [`EVP_DigestVerifyInit`]. /// /// [`EVP_DigestVerifyInit`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestVerifyInit.html pub fn new(type_: MessageDigest, pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPublic, { Verifier::new_intern(Some(type_), pkey) } /// Creates a new `Verifier` without a digest. /// /// This is the only way to create a `Verifier` for Ed25519 or Ed448 keys. /// /// OpenSSL documentation at [`EVP_DigestVerifyInit`]. /// /// [`EVP_DigestVerifyInit`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestVerifyInit.html pub fn new_without_digest(pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPublic, { Verifier::new_intern(None, pkey) } fn new_intern( type_: Option, pkey: &'a PKeyRef, ) -> Result, ErrorStack> where T: HasPublic, { unsafe { ffi::init(); let ctx = cvt_p(EVP_MD_CTX_new())?; let mut pctx: *mut ffi::EVP_PKEY_CTX = ptr::null_mut(); let r = ffi::EVP_DigestVerifyInit( ctx, &mut pctx, type_.map(|t| t.as_ptr()).unwrap_or(ptr::null()), ptr::null_mut(), pkey.as_ptr(), ); if r != 1 { EVP_MD_CTX_free(ctx); return Err(ErrorStack::get()); } assert!(!pctx.is_null()); Ok(Verifier { md_ctx: ctx, pctx, pkey_pd: PhantomData, }) } } /// Returns the RSA padding mode in use. /// /// This is only useful for RSA keys. /// /// This corresponds to `EVP_PKEY_CTX_get_rsa_padding`. pub fn rsa_padding(&self) -> Result { unsafe { let mut pad = 0; cvt(ffi::EVP_PKEY_CTX_get_rsa_padding(self.pctx, &mut pad)) .map(|_| Padding::from_raw(pad)) } } /// Sets the RSA padding mode. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_padding`]. /// /// [`EVP_PKEY_CTX_set_rsa_padding`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_padding.html pub fn set_rsa_padding(&mut self, padding: Padding) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_padding( self.pctx, padding.as_raw(), )) .map(|_| ()) } } /// Sets the RSA PSS salt length. 
/// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_pss_saltlen`]. /// /// [`EVP_PKEY_CTX_set_rsa_pss_saltlen`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_pss_saltlen.html pub fn set_rsa_pss_saltlen(&mut self, len: RsaPssSaltlen) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_pss_saltlen( self.pctx, len.as_raw(), )) .map(|_| ()) } } /// Sets the RSA MGF1 algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_mgf1_md`]. /// /// [`EVP_PKEY_CTX_set_rsa_mgf1_md`]: https://www.openssl.org/docs/manmaster/man7/RSA-PSS.html pub fn set_rsa_mgf1_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_mgf1_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Feeds more data into the `Verifier`. /// /// Please note that PureEdDSA (Ed25519 and Ed448 keys) do not support streaming. /// Use `verify_oneshot` instead. /// /// OpenSSL documentation at [`EVP_DigestUpdate`]. /// /// [`EVP_DigestUpdate`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestInit.html pub fn update(&mut self, buf: &[u8]) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_DigestUpdate( self.md_ctx, buf.as_ptr() as *const _, buf.len(), )) .map(|_| ()) } } /// Determines if the data fed into the `Verifier` matches the provided signature. /// /// OpenSSL documentation at [`EVP_DigestVerifyFinal`]. /// /// [`EVP_DigestVerifyFinal`]: https://www.openssl.org/docs/manmaster/man3/EVP_DigestVerifyFinal.html pub fn verify(&self, signature: &[u8]) -> Result { unsafe { let r = EVP_DigestVerifyFinal(self.md_ctx, signature.as_ptr() as *mut _, signature.len()); match r { 1 => Ok(true), 0 => { ErrorStack::get(); // discard error stack Ok(false) } _ => Err(ErrorStack::get()), } } } /// Determines if the data given in buf matches the provided signature. /// /// OpenSSL documentation at [`EVP_DigestVerify`]. 
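///
/// A hedged sketch pairing with `Signer::sign_oneshot` (assumes an Ed25519
/// key; requires OpenSSL 1.1.1):
///
/// ```no_run
/// use openssl::pkey::PKey;
/// use openssl::sign::{Signer, Verifier};
///
/// let key = PKey::generate_ed25519().unwrap();
/// let mut signer = Signer::new_without_digest(&key).unwrap();
/// let signature = signer.sign_oneshot_to_vec(b"hello, world!").unwrap();
///
/// let mut verifier = Verifier::new_without_digest(&key).unwrap();
/// assert!(verifier.verify_oneshot(&signature, b"hello, world!").unwrap());
/// ```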
/// /// [`EVP_DigestVerify`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_DigestVerify.html #[cfg(ossl111)] pub fn verify_oneshot(&mut self, signature: &[u8], buf: &[u8]) -> Result { unsafe { let r = ffi::EVP_DigestVerify( self.md_ctx, signature.as_ptr() as *const _, signature.len(), buf.as_ptr() as *const _, buf.len(), ); match r { 1 => Ok(true), 0 => { ErrorStack::get(); Ok(false) } _ => Err(ErrorStack::get()), } } } } impl<'a> Write for Verifier<'a> { fn write(&mut self, buf: &[u8]) -> io::Result { self.update(buf)?; Ok(buf.len()) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } #[cfg(not(ossl101))] use ffi::EVP_DigestVerifyFinal; #[cfg(ossl101)] #[allow(bad_style)] unsafe fn EVP_DigestVerifyFinal( ctx: *mut ffi::EVP_MD_CTX, sigret: *const ::libc::c_uchar, siglen: ::libc::size_t, ) -> ::libc::c_int { ffi::EVP_DigestVerifyFinal(ctx, sigret as *mut _, siglen) } #[cfg(test)] mod test { use hex::{self, FromHex}; use std::iter; use crate::ec::{EcGroup, EcKey}; use crate::hash::MessageDigest; use crate::nid::Nid; use crate::pkey::PKey; use crate::rsa::{Padding, Rsa}; #[cfg(ossl111)] use crate::sign::RsaPssSaltlen; use crate::sign::{Signer, Verifier}; const INPUT: &str = "65794a68624763694f694a53557a49314e694a392e65794a7063334d694f694a71623255694c41304b49434a6c\ 654841694f6a457a4d4441344d546b7a4f44417344516f67496d6830644841364c79396c654746746347786c4c\ 6d4e76625339706331397962323930496a7030636e566c6651"; const SIGNATURE: &str = "702e218943e88fd11eb5d82dbf7845f34106ae1b81fff7731116add1717d83656d420afd3c96eedd73a2663e51\ 66687b000b87226e0187ed1073f945e582adfcef16d85a798ee8c66ddb3db8975b17d09402beedd5d9d9700710\ 8db28160d5f8040ca7445762b81fbe7ff9d92e0ae76f24f25b33bbe6f44ae61eb1040acb20044d3ef9128ed401\ 30795bd4bd3b41eecad066ab651981fde48df77f372dc38b9fafdd3befb18b5da3cc3c2eb02f9e3a41d612caad\ 15911273a05f23b9e838faaf849d698429ef5a1e88798236c3d40e604522a544c8f27a7a2db80663d16cf7caea\ 56de405cb2215a45b2c25566b55ac1a748a070dfc8a32a469543d019eefb47"; #[test] fn rsa_sign() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let mut signer = Signer::new(MessageDigest::sha256(), &pkey).unwrap(); assert_eq!(signer.rsa_padding().unwrap(), Padding::PKCS1); signer.set_rsa_padding(Padding::PKCS1).unwrap(); signer.update(&Vec::from_hex(INPUT).unwrap()).unwrap(); let result = signer.sign_to_vec().unwrap(); assert_eq!(hex::encode(result), SIGNATURE); } #[test] fn rsa_verify_ok() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let mut verifier = Verifier::new(MessageDigest::sha256(), &pkey).unwrap(); assert_eq!(verifier.rsa_padding().unwrap(), Padding::PKCS1); verifier.update(&Vec::from_hex(INPUT).unwrap()).unwrap(); assert!(verifier.verify(&Vec::from_hex(SIGNATURE).unwrap()).unwrap()); } #[test] fn rsa_verify_invalid() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let mut verifier = Verifier::new(MessageDigest::sha256(), &pkey).unwrap(); verifier.update(&Vec::from_hex(INPUT).unwrap()).unwrap(); verifier.update(b"foobar").unwrap(); assert!(!verifier.verify(&Vec::from_hex(SIGNATURE).unwrap()).unwrap()); } fn test_hmac(ty: MessageDigest, tests: &[(Vec, Vec, Vec)]) { for &(ref key, ref data, ref res) in tests.iter() { let pkey = PKey::hmac(key).unwrap(); let mut signer = Signer::new(ty, 
&pkey).unwrap(); signer.update(data).unwrap(); assert_eq!(signer.sign_to_vec().unwrap(), *res); } } #[test] fn hmac_md5() { // test vectors from RFC 2202 let tests: [(Vec, Vec, Vec); 7] = [ ( iter::repeat(0x0b_u8).take(16).collect(), b"Hi There".to_vec(), Vec::from_hex("9294727a3638bb1c13f48ef8158bfc9d").unwrap(), ), ( b"Jefe".to_vec(), b"what do ya want for nothing?".to_vec(), Vec::from_hex("750c783e6ab0b503eaa86e310a5db738").unwrap(), ), ( iter::repeat(0xaa_u8).take(16).collect(), iter::repeat(0xdd_u8).take(50).collect(), Vec::from_hex("56be34521d144c88dbb8c733f0e8b3f6").unwrap(), ), ( Vec::from_hex("0102030405060708090a0b0c0d0e0f10111213141516171819").unwrap(), iter::repeat(0xcd_u8).take(50).collect(), Vec::from_hex("697eaf0aca3a3aea3a75164746ffaa79").unwrap(), ), ( iter::repeat(0x0c_u8).take(16).collect(), b"Test With Truncation".to_vec(), Vec::from_hex("56461ef2342edc00f9bab995690efd4c").unwrap(), ), ( iter::repeat(0xaa_u8).take(80).collect(), b"Test Using Larger Than Block-Size Key - Hash Key First".to_vec(), Vec::from_hex("6b1ab7fe4bd7bf8f0b62e6ce61b9d0cd").unwrap(), ), ( iter::repeat(0xaa_u8).take(80).collect(), b"Test Using Larger Than Block-Size Key \ and Larger Than One Block-Size Data" .to_vec(), Vec::from_hex("6f630fad67cda0ee1fb1f562db3aa53e").unwrap(), ), ]; test_hmac(MessageDigest::md5(), &tests); } #[test] fn hmac_sha1() { // test vectors from RFC 2202 let tests: [(Vec, Vec, Vec); 7] = [ ( iter::repeat(0x0b_u8).take(20).collect(), b"Hi There".to_vec(), Vec::from_hex("b617318655057264e28bc0b6fb378c8ef146be00").unwrap(), ), ( b"Jefe".to_vec(), b"what do ya want for nothing?".to_vec(), Vec::from_hex("effcdf6ae5eb2fa2d27416d5f184df9c259a7c79").unwrap(), ), ( iter::repeat(0xaa_u8).take(20).collect(), iter::repeat(0xdd_u8).take(50).collect(), Vec::from_hex("125d7342b9ac11cd91a39af48aa17b4f63f175d3").unwrap(), ), ( Vec::from_hex("0102030405060708090a0b0c0d0e0f10111213141516171819").unwrap(), iter::repeat(0xcd_u8).take(50).collect(), Vec::from_hex("4c9007f4026250c6bc8414f9bf50c86c2d7235da").unwrap(), ), ( iter::repeat(0x0c_u8).take(20).collect(), b"Test With Truncation".to_vec(), Vec::from_hex("4c1a03424b55e07fe7f27be1d58bb9324a9a5a04").unwrap(), ), ( iter::repeat(0xaa_u8).take(80).collect(), b"Test Using Larger Than Block-Size Key - Hash Key First".to_vec(), Vec::from_hex("aa4ae5e15272d00e95705637ce8a3b55ed402112").unwrap(), ), ( iter::repeat(0xaa_u8).take(80).collect(), b"Test Using Larger Than Block-Size Key \ and Larger Than One Block-Size Data" .to_vec(), Vec::from_hex("e8e99d0f45237d786d6bbaa7965c7808bbff1a91").unwrap(), ), ]; test_hmac(MessageDigest::sha1(), &tests); } #[test] #[cfg(ossl110)] #[cfg_attr(ossl300, ignore)] // https://github.com/openssl/openssl/issues/11671 fn test_cmac() { let cipher = crate::symm::Cipher::aes_128_cbc(); let key = Vec::from_hex("9294727a3638bb1c13f48ef8158bfc9d").unwrap(); let pkey = PKey::cmac(&cipher, &key).unwrap(); let mut signer = Signer::new_without_digest(&pkey).unwrap(); let data = b"Hi There"; signer.update(data as &[u8]).unwrap(); let expected = vec![ 136, 101, 61, 167, 61, 30, 248, 234, 124, 166, 196, 157, 203, 52, 171, 19, ]; assert_eq!(signer.sign_to_vec().unwrap(), expected); } #[test] fn ec() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let key = PKey::from_ec_key(key).unwrap(); let mut signer = Signer::new(MessageDigest::sha256(), &key).unwrap(); signer.update(b"hello world").unwrap(); let signature = signer.sign_to_vec().unwrap(); let mut verifier = 
Verifier::new(MessageDigest::sha256(), &key).unwrap(); verifier.update(b"hello world").unwrap(); assert!(verifier.verify(&signature).unwrap()); } #[test] #[cfg(ossl111)] fn eddsa() { let key = PKey::generate_ed25519().unwrap(); let mut signer = Signer::new_without_digest(&key).unwrap(); let signature = signer.sign_oneshot_to_vec(b"hello world").unwrap(); let mut verifier = Verifier::new_without_digest(&key).unwrap(); assert!(verifier.verify_oneshot(&signature, b"hello world").unwrap()); } #[test] #[cfg(ossl111)] fn rsa_sign_verify() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let mut signer = Signer::new(MessageDigest::sha256(), &pkey).unwrap(); signer.set_rsa_padding(Padding::PKCS1_PSS).unwrap(); assert_eq!(signer.rsa_padding().unwrap(), Padding::PKCS1_PSS); signer .set_rsa_pss_saltlen(RsaPssSaltlen::DIGEST_LENGTH) .unwrap(); signer.set_rsa_mgf1_md(MessageDigest::sha256()).unwrap(); signer.update(&Vec::from_hex(INPUT).unwrap()).unwrap(); let signature = signer.sign_to_vec().unwrap(); let mut verifier = Verifier::new(MessageDigest::sha256(), &pkey).unwrap(); verifier.set_rsa_padding(Padding::PKCS1_PSS).unwrap(); verifier .set_rsa_pss_saltlen(RsaPssSaltlen::DIGEST_LENGTH) .unwrap(); verifier.set_rsa_mgf1_md(MessageDigest::sha256()).unwrap(); verifier.update(&Vec::from_hex(INPUT).unwrap()).unwrap(); assert!(verifier.verify(&signature).unwrap()); } } vendor/openssl/src/ec.rs0000664000175000017500000011552614160055207016070 0ustar mwhudsonmwhudson//! Elliptic Curve //! //! Cryptology relies on the difficulty of solving mathematical problems, such as the factor //! of large integers composed of two large prime numbers and the discrete logarithm of a //! random eliptic curve. This module provides low-level features of the latter. //! Elliptic Curve protocols can provide the same security with smaller keys. //! //! There are 2 forms of elliptic curves, `Fp` and `F2^m`. These curves use irreducible //! trinomial or pentanomial . Being a generic interface to a wide range of algorithms, //! the cuves are generally referenced by [`EcGroup`]. There are many built in groups //! found in [`Nid`]. //! //! OpenSSL Wiki explains the fields and curves in detail at [Eliptic Curve Cryptography]. //! //! [`EcGroup`]: struct.EcGroup.html //! [`Nid`]: ../nid/struct.Nid.html //! [Eliptic Curve Cryptography]: https://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::fmt; use std::ptr; use crate::bn::{BigNumContextRef, BigNumRef}; use crate::error::ErrorStack; use crate::nid::Nid; use crate::pkey::{HasParams, HasPrivate, HasPublic, Params, Private, Public}; use crate::util::ForeignTypeRefExt; use crate::{cvt, cvt_n, cvt_p, init}; /// Compressed or Uncompressed conversion /// /// Conversion from the binary value of the point on the curve is performed in one of /// compressed, uncompressed, or hybrid conversions. The default is compressed, except /// for binary curves. /// /// Further documentation is available in the [X9.62] standard. /// /// [X9.62]: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.202.2977&rep=rep1&type=pdf #[derive(Copy, Clone)] pub struct PointConversionForm(ffi::point_conversion_form_t); impl PointConversionForm { /// Compressed conversion from point value. 
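///
/// A hedged sketch of serializing a public key point in compressed form
/// (`BigNumContext` comes from the `bn` module and is assumed here):
///
/// ```no_run
/// use openssl::bn::BigNumContext;
/// use openssl::ec::{EcGroup, EcKey, PointConversionForm};
/// use openssl::nid::Nid;
///
/// let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap();
/// let key = EcKey::generate(&group).unwrap();
/// let mut ctx = BigNumContext::new().unwrap();
/// let bytes = key
///     .public_key()
///     .to_bytes(&group, PointConversionForm::COMPRESSED, &mut ctx)
///     .unwrap();
/// assert_eq!(bytes.len(), 33); // 1 tag byte + 32-byte x coordinate
/// ```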
pub const COMPRESSED: PointConversionForm = PointConversionForm(ffi::point_conversion_form_t::POINT_CONVERSION_COMPRESSED); /// Uncompressed conversion from point value. pub const UNCOMPRESSED: PointConversionForm = PointConversionForm(ffi::point_conversion_form_t::POINT_CONVERSION_UNCOMPRESSED); /// Performs both compressed and uncompressed conversions. pub const HYBRID: PointConversionForm = PointConversionForm(ffi::point_conversion_form_t::POINT_CONVERSION_HYBRID); } /// Named Curve or Explicit /// /// This type acts as a boolean as to whether the `EcGroup` is named or explicit. #[derive(Copy, Clone)] pub struct Asn1Flag(c_int); impl Asn1Flag { /// Curve defined using polynomial parameters /// /// Most applications use a named EC_GROUP curve, however, support /// is included to explicitly define the curve used to calculate keys /// This information would need to be known by both endpoint to make communication /// effective. /// /// OPENSSL_EC_EXPLICIT_CURVE, but that was only added in 1.1. /// Man page documents that 0 can be used in older versions. /// /// OpenSSL documentation at [`EC_GROUP`] /// /// [`EC_GROUP`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_seed_len.html pub const EXPLICIT_CURVE: Asn1Flag = Asn1Flag(0); /// Standard Curves /// /// Curves that make up the typical encryption use cases. The collection of curves /// are well known but extensible. /// /// OpenSSL documentation at [`EC_GROUP`] /// /// [`EC_GROUP`]: https://www.openssl.org/docs/manmaster/man3/EC_GROUP_order_bits.html pub const NAMED_CURVE: Asn1Flag = Asn1Flag(ffi::OPENSSL_EC_NAMED_CURVE); } foreign_type_and_impl_send_sync! { type CType = ffi::EC_GROUP; fn drop = ffi::EC_GROUP_free; /// Describes the curve /// /// A curve can be of the named curve type. These curves can be discovered /// using openssl binary `openssl ecparam -list_curves`. Other operations /// are available in the [wiki]. These named curves are available in the /// [`Nid`] module. /// /// Curves can also be generated using prime field parameters or a binary field. /// /// Prime fields use the formula `y^2 mod p = x^3 + ax + b mod p`. Binary /// fields use the formula `y^2 + xy = x^3 + ax^2 + b`. Named curves have /// assured security. To prevent accidental vulnerabilities, they should /// be preferred. /// /// [wiki]: https://wiki.openssl.org/index.php/Command_Line_Elliptic_Curve_Operations /// [`Nid`]: ../nid/index.html pub struct EcGroup; /// Reference to [`EcGroup`] /// /// [`EcGroup`]: struct.EcGroup.html pub struct EcGroupRef; } impl EcGroup { /// Returns the group of a standard named curve. /// /// OpenSSL documentation at [`EC_GROUP_new`]. /// /// [`EC_GROUP_new`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_new.html pub fn from_curve_name(nid: Nid) -> Result { unsafe { init(); cvt_p(ffi::EC_GROUP_new_by_curve_name(nid.as_raw())).map(EcGroup) } } } impl EcGroupRef { /// Places the components of a curve over a prime field in the provided `BigNum`s. /// The components make up the formula `y^2 mod p = x^3 + ax + b mod p`. 
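///
/// A minimal sketch (the scratch `BigNum`s and `BigNumContext` come from the
/// `bn` module and are assumed here):
///
/// ```no_run
/// use openssl::bn::{BigNum, BigNumContext};
/// use openssl::ec::EcGroup;
/// use openssl::nid::Nid;
///
/// let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap();
/// let mut p = BigNum::new().unwrap();
/// let mut a = BigNum::new().unwrap();
/// let mut b = BigNum::new().unwrap();
/// let mut ctx = BigNumContext::new().unwrap();
/// group.components_gfp(&mut p, &mut a, &mut b, &mut ctx).unwrap();
/// // p, a, and b now hold the prime modulus and the curve coefficients.
/// ```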
/// /// OpenSSL documentation available at [`EC_GROUP_get_curve_GFp`] /// /// [`EC_GROUP_get_curve_GFp`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_curve_GFp.html pub fn components_gfp( &self, p: &mut BigNumRef, a: &mut BigNumRef, b: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_GROUP_get_curve_GFp( self.as_ptr(), p.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the components of a curve over a binary field in the provided `BigNum`s. /// The components make up the formula `y^2 + xy = x^3 + ax^2 + b`. /// /// In this form `p` relates to the irreducible polynomial. Each bit represents /// a term in the polynomial. It will be set to 3 `1`s or 5 `1`s depending on /// using a trinomial or pentanomial. /// /// OpenSSL documentation at [`EC_GROUP_get_curve_GF2m`]. /// /// [`EC_GROUP_get_curve_GF2m`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_curve_GF2m.html #[cfg(not(osslconf = "OPENSSL_NO_EC2M"))] pub fn components_gf2m( &self, p: &mut BigNumRef, a: &mut BigNumRef, b: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_GROUP_get_curve_GF2m( self.as_ptr(), p.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the cofactor of the group in the provided `BigNum`. /// /// OpenSSL documentation at [`EC_GROUP_get_cofactor`] /// /// [`EC_GROUP_get_cofactor`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_cofactor.html pub fn cofactor( &self, cofactor: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_GROUP_get_cofactor( self.as_ptr(), cofactor.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Returns the degree of the curve. /// /// OpenSSL documentation at [`EC_GROUP_get_degree`] /// /// [`EC_GROUP_get_degree`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_degree.html pub fn degree(&self) -> u32 { unsafe { ffi::EC_GROUP_get_degree(self.as_ptr()) as u32 } } /// Returns the number of bits in the group order. /// /// OpenSSL documentation at [`EC_GROUP_order_bits`] /// /// [`EC_GROUP_order_bits`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_order_bits.html #[cfg(ossl110)] pub fn order_bits(&self) -> u32 { unsafe { ffi::EC_GROUP_order_bits(self.as_ptr()) as u32 } } /// Returns the generator for the given curve as a [`EcPoint`]. /// /// OpenSSL documentation at [`EC_GROUP_get0_generator`] /// /// [`EC_GROUP_get0_generator`]: https://www.openssl.org/docs/man1.1.0/man3/EC_GROUP_get0_generator.html pub fn generator(&self) -> &EcPointRef { unsafe { let ptr = ffi::EC_GROUP_get0_generator(self.as_ptr()); EcPointRef::from_const_ptr(ptr) } } /// Places the order of the curve in the provided `BigNum`. /// /// OpenSSL documentation at [`EC_GROUP_get_order`] /// /// [`EC_GROUP_get_order`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_order.html pub fn order( &self, order: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_GROUP_get_order( self.as_ptr(), order.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Sets the flag determining if the group corresponds to a named curve or must be explicitly /// parameterized. /// /// This defaults to `EXPLICIT_CURVE` in OpenSSL 1.0.1 and 1.0.2, but `NAMED_CURVE` in OpenSSL /// 1.1.0. pub fn set_asn1_flag(&mut self, flag: Asn1Flag) { unsafe { ffi::EC_GROUP_set_asn1_flag(self.as_ptr(), flag.0); } } /// Returns the name of the curve, if a name is associated. 
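// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Querying the basic properties of a named group with the accessors above:
// order, cofactor, degree and the NID the group was created from. The
// function name is illustrative.
fn ec_group_properties_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::bn::{BigNum, BigNumContext};
    use openssl::ec::EcGroup;
    use openssl::nid::Nid;

    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
    let mut ctx = BigNumContext::new()?;

    let mut order = BigNum::new()?;
    group.order(&mut order, &mut ctx)?;

    let mut cofactor = BigNum::new()?;
    group.cofactor(&mut cofactor, &mut ctx)?;

    // P-256 is a 256-bit, prime-order curve with cofactor 1.
    assert_eq!(group.degree(), 256);
    assert_eq!(cofactor, BigNum::from_u32(1)?);
    assert!(group.curve_name() == Some(Nid::X9_62_PRIME256V1));
    Ok(())
}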
/// /// OpenSSL documentation at [`EC_GROUP_get_curve_name`] /// /// [`EC_GROUP_get_curve_name`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_GROUP_get_curve_name.html pub fn curve_name(&self) -> Option { let nid = unsafe { ffi::EC_GROUP_get_curve_name(self.as_ptr()) }; if nid > 0 { Some(Nid::from_raw(nid)) } else { None } } } foreign_type_and_impl_send_sync! { type CType = ffi::EC_POINT; fn drop = ffi::EC_POINT_free; /// Represents a point on the curve /// /// OpenSSL documentation at [`EC_POINT_new`] /// /// [`EC_POINT_new`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_new.html pub struct EcPoint; /// Reference to [`EcPoint`] /// /// [`EcPoint`]: struct.EcPoint.html pub struct EcPointRef; } impl EcPointRef { /// Computes `a + b`, storing the result in `self`. /// /// OpenSSL documentation at [`EC_POINT_add`] /// /// [`EC_POINT_add`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_add.html pub fn add( &mut self, group: &EcGroupRef, a: &EcPointRef, b: &EcPointRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_add( group.as_ptr(), self.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Computes `q * m`, storing the result in `self`. /// /// OpenSSL documentation at [`EC_POINT_mul`] /// /// [`EC_POINT_mul`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_mul.html pub fn mul( &mut self, group: &EcGroupRef, q: &EcPointRef, m: &BigNumRef, // FIXME should be &mut ctx: &BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_mul( group.as_ptr(), self.as_ptr(), ptr::null(), q.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Computes `generator * n`, storing the result in `self`. pub fn mul_generator( &mut self, group: &EcGroupRef, n: &BigNumRef, // FIXME should be &mut ctx: &BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_mul( group.as_ptr(), self.as_ptr(), n.as_ptr(), ptr::null(), ptr::null(), ctx.as_ptr(), )) .map(|_| ()) } } /// Computes `generator * n + q * m`, storing the result in `self`. pub fn mul_full( &mut self, group: &EcGroupRef, n: &BigNumRef, q: &EcPointRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_mul( group.as_ptr(), self.as_ptr(), n.as_ptr(), q.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Inverts `self`. /// /// OpenSSL documentation at [`EC_POINT_invert`] /// /// [`EC_POINT_invert`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_invert.html pub fn invert(&mut self, group: &EcGroupRef, ctx: &BigNumContextRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_invert( group.as_ptr(), self.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Serializes the point to a binary representation. /// /// OpenSSL documentation at [`EC_POINT_point2oct`] /// /// [`EC_POINT_point2oct`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_point2oct.html pub fn to_bytes( &self, group: &EcGroupRef, form: PointConversionForm, ctx: &mut BigNumContextRef, ) -> Result, ErrorStack> { unsafe { let len = ffi::EC_POINT_point2oct( group.as_ptr(), self.as_ptr(), form.0, ptr::null_mut(), 0, ctx.as_ptr(), ); if len == 0 { return Err(ErrorStack::get()); } let mut buf = vec![0; len]; let len = ffi::EC_POINT_point2oct( group.as_ptr(), self.as_ptr(), form.0, buf.as_mut_ptr(), len, ctx.as_ptr(), ); if len == 0 { Err(ErrorStack::get()) } else { Ok(buf) } } } /// Creates a new point on the specified curve with the same value. 
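// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Recomputes a public key as `generator * private_key` with `mul_generator`
// and round-trips the point through its compressed octet encoding with
// `to_bytes`/`from_bytes`. The function name is illustrative.
fn ec_point_roundtrip_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::bn::BigNumContext;
    use openssl::ec::{EcGroup, EcKey, EcPoint, PointConversionForm};
    use openssl::nid::Nid;

    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
    let key = EcKey::generate(&group)?;
    let mut ctx = BigNumContext::new()?;

    // generator * private scalar == public point
    let mut public = EcPoint::new(&group)?;
    public.mul_generator(&group, key.private_key(), &ctx)?;
    assert!(public.eq(&group, key.public_key(), &mut ctx)?);

    // Compressed encoding roughly halves the size of the uncompressed form.
    let bytes = public.to_bytes(&group, PointConversionForm::COMPRESSED, &mut ctx)?;
    let decoded = EcPoint::from_bytes(&group, &bytes, &mut ctx)?;
    assert!(decoded.eq(&group, &public, &mut ctx)?);
    Ok(())
}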
/// /// OpenSSL documentation at [`EC_POINT_dup`] /// /// [`EC_POINT_dup`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_dup.html pub fn to_owned(&self, group: &EcGroupRef) -> Result { unsafe { cvt_p(ffi::EC_POINT_dup(self.as_ptr(), group.as_ptr())).map(EcPoint) } } /// Determines if this point is equal to another. /// /// OpenSSL doucmentation at [`EC_POINT_cmp`] /// /// [`EC_POINT_cmp`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_cmp.html pub fn eq( &self, group: &EcGroupRef, other: &EcPointRef, ctx: &mut BigNumContextRef, ) -> Result { unsafe { let res = cvt_n(ffi::EC_POINT_cmp( group.as_ptr(), self.as_ptr(), other.as_ptr(), ctx.as_ptr(), ))?; Ok(res == 0) } } /// Place affine coordinates of a curve over a prime field in the provided /// `x` and `y` `BigNum`s /// /// OpenSSL documentation at [`EC_POINT_get_affine_coordinates`] /// /// [`EC_POINT_get_affine_coordinates`]: https://www.openssl.org/docs/man1.1.1/man3/EC_POINT_get_affine_coordinates.html #[cfg(ossl111)] pub fn affine_coordinates( &self, group: &EcGroupRef, x: &mut BigNumRef, y: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_get_affine_coordinates( group.as_ptr(), self.as_ptr(), x.as_ptr(), y.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Place affine coordinates of a curve over a prime field in the provided /// `x` and `y` `BigNum`s /// /// OpenSSL documentation at [`EC_POINT_get_affine_coordinates_GFp`] /// /// [`EC_POINT_get_affine_coordinates_GFp`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_get_affine_coordinates_GFp.html pub fn affine_coordinates_gfp( &self, group: &EcGroupRef, x: &mut BigNumRef, y: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_get_affine_coordinates_GFp( group.as_ptr(), self.as_ptr(), x.as_ptr(), y.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Place affine coordinates of a curve over a binary field in the provided /// `x` and `y` `BigNum`s /// /// OpenSSL documentation at [`EC_POINT_get_affine_coordinates_GF2m`] /// /// [`EC_POINT_get_affine_coordinates_GF2m`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_get_affine_coordinates_GF2m.html #[cfg(not(osslconf = "OPENSSL_NO_EC2M"))] pub fn affine_coordinates_gf2m( &self, group: &EcGroupRef, x: &mut BigNumRef, y: &mut BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_POINT_get_affine_coordinates_GF2m( group.as_ptr(), self.as_ptr(), x.as_ptr(), y.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Checks if point is infinity /// /// OpenSSL documentation at [`EC_POINT_is_at_infinity`] /// /// [`EC_POINT_is_at_infinity`]: https://www.openssl.org/docs/man1.1.0/man3/EC_POINT_is_at_infinity.html pub fn is_infinity(&self, group: &EcGroupRef) -> bool { unsafe { let res = ffi::EC_POINT_is_at_infinity(group.as_ptr(), self.as_ptr()); res == 1 } } /// Checks if point is on a given curve /// /// OpenSSL documentation at [`EC_POINT_is_on_curve`] /// /// [`EC_POINT_is_on_curve`]: https://www.openssl.org/docs/man1.1.0/man3/EC_POINT_is_on_curve.html pub fn is_on_curve( &self, group: &EcGroupRef, ctx: &mut BigNumContextRef, ) -> Result { unsafe { let res = cvt_n(ffi::EC_POINT_is_on_curve( group.as_ptr(), self.as_ptr(), ctx.as_ptr(), ))?; Ok(res == 1) } } } impl EcPoint { /// Creates a new point on the specified curve. 
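// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Extracts the affine x/y coordinates of a public point with
// `affine_coordinates_gfp` and rebuilds the key from them, mirroring the
// `get_affine_coordinates_gfp` test further down. The function name is
// illustrative.
fn ec_affine_coordinates_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::bn::{BigNum, BigNumContext};
    use openssl::ec::{EcGroup, EcKey};
    use openssl::nid::Nid;

    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
    let key = EcKey::generate(&group)?;
    let mut ctx = BigNumContext::new()?;

    let (mut x, mut y) = (BigNum::new()?, BigNum::new()?);
    key.public_key()
        .affine_coordinates_gfp(&group, &mut x, &mut y, &mut ctx)?;

    // The affine coordinates fully determine the public key.
    let rebuilt = EcKey::from_public_key_affine_coordinates(&group, &x, &y)?;
    assert!(rebuilt.public_key().eq(&group, key.public_key(), &mut ctx)?);
    Ok(())
}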
/// /// OpenSSL documentation at [`EC_POINT_new`] /// /// [`EC_POINT_new`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_new.html pub fn new(group: &EcGroupRef) -> Result { unsafe { cvt_p(ffi::EC_POINT_new(group.as_ptr())).map(EcPoint) } } /// Creates point from a binary representation /// /// OpenSSL documentation at [`EC_POINT_oct2point`] /// /// [`EC_POINT_oct2point`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_POINT_oct2point.html pub fn from_bytes( group: &EcGroupRef, buf: &[u8], ctx: &mut BigNumContextRef, ) -> Result { let point = EcPoint::new(group)?; unsafe { cvt(ffi::EC_POINT_oct2point( group.as_ptr(), point.as_ptr(), buf.as_ptr(), buf.len(), ctx.as_ptr(), ))?; } Ok(point) } } generic_foreign_type_and_impl_send_sync! { type CType = ffi::EC_KEY; fn drop = ffi::EC_KEY_free; /// Public and optional Private key on the given curve /// /// OpenSSL documentation at [`EC_KEY_new`] /// /// [`EC_KEY_new`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_new.html pub struct EcKey; /// Reference to [`EcKey`] /// /// [`EcKey`]: struct.EcKey.html pub struct EcKeyRef; } impl EcKeyRef where T: HasPrivate, { private_key_to_pem! { /// Serializes the private key to a PEM-encoded ECPrivateKey structure. /// /// The output will have a header of `-----BEGIN EC PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_ECPrivateKey`]. /// /// [`PEM_write_bio_ECPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_ECPrivateKey.html private_key_to_pem, /// Serializes the private key to a PEM-encoded encrypted ECPrivateKey structure. /// /// The output will have a header of `-----BEGIN EC PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_ECPrivateKey`]. /// /// [`PEM_write_bio_ECPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_ECPrivateKey.html private_key_to_pem_passphrase, ffi::PEM_write_bio_ECPrivateKey } to_der! { /// Serializes the private key into a DER-encoded ECPrivateKey structure. /// /// This corresponds to [`i2d_ECPrivateKey`]. /// /// [`i2d_ECPrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_ECPrivate_key.html private_key_to_der, ffi::i2d_ECPrivateKey } /// Return [`EcPoint`] associated with the private key /// /// OpenSSL documentation at [`EC_KEY_get0_private_key`] /// /// [`EC_KEY_get0_private_key`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_get0_private_key.html pub fn private_key(&self) -> &BigNumRef { unsafe { let ptr = ffi::EC_KEY_get0_private_key(self.as_ptr()); BigNumRef::from_const_ptr(ptr) } } } impl EcKeyRef where T: HasPublic, { /// Returns the public key. /// /// OpenSSL documentation at [`EC_KEY_get0_public_key`] /// /// [`EC_KEY_get0_public_key`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_get0_public_key.html pub fn public_key(&self) -> &EcPointRef { unsafe { let ptr = ffi::EC_KEY_get0_public_key(self.as_ptr()); EcPointRef::from_const_ptr(ptr) } } to_pem! { /// Serialies the public key into a PEM-encoded SubjectPublicKeyInfo structure. /// /// The output will have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_write_bio_EC_PUBKEY`]. /// /// [`PEM_write_bio_EC_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_EC_PUBKEY.html public_key_to_pem, ffi::PEM_write_bio_EC_PUBKEY } to_der! { /// Serializes the public key into a DER-encoded SubjectPublicKeyInfo structure. /// /// This corresponds to [`i2d_EC_PUBKEY`]. 
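// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Serializes a generated key with the PEM helpers above and parses both forms
// back: the SEC1 ECPrivateKey (`-----BEGIN EC PRIVATE KEY-----`) and the
// SubjectPublicKeyInfo (`-----BEGIN PUBLIC KEY-----`). The function name is
// illustrative.
fn ec_key_pem_roundtrip_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::ec::{EcGroup, EcKey};
    use openssl::nid::Nid;

    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
    let key = EcKey::generate(&group)?;

    let private_pem = key.private_key_to_pem()?;
    let restored_private = EcKey::private_key_from_pem(&private_pem)?;

    let public_pem = key.public_key_to_pem()?;
    let restored_public = EcKey::public_key_from_pem(&public_pem)?;

    assert!(key.private_key() == restored_private.private_key());
    restored_public.check_key()?;
    Ok(())
}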
/// /// [`i2d_EC_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_EC_PUBKEY.html public_key_to_der, ffi::i2d_EC_PUBKEY } } impl EcKeyRef where T: HasParams, { /// Return [`EcGroup`] of the `EcKey` /// /// OpenSSL documentation at [`EC_KEY_get0_group`] /// /// [`EC_KEY_get0_group`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_get0_group.html pub fn group(&self) -> &EcGroupRef { unsafe { let ptr = ffi::EC_KEY_get0_group(self.as_ptr()); EcGroupRef::from_const_ptr(ptr) } } /// Checks the key for validity. /// /// OpenSSL documentation at [`EC_KEY_check_key`] /// /// [`EC_KEY_check_key`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_check_key.html pub fn check_key(&self) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EC_KEY_check_key(self.as_ptr())).map(|_| ()) } } } impl ToOwned for EcKeyRef { type Owned = EcKey; fn to_owned(&self) -> EcKey { unsafe { let r = ffi::EC_KEY_up_ref(self.as_ptr()); assert!(r == 1); EcKey::from_ptr(self.as_ptr()) } } } impl EcKey { /// Constructs an `EcKey` corresponding to a known curve. /// /// It will not have an associated public or private key. This kind of key is primarily useful /// to be provided to the `set_tmp_ecdh` methods on `Ssl` and `SslContextBuilder`. /// /// OpenSSL documentation at [`EC_KEY_new_by_curve_name`] /// /// [`EC_KEY_new_by_curve_name`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_new_by_curve_name.html pub fn from_curve_name(nid: Nid) -> Result, ErrorStack> { unsafe { init(); cvt_p(ffi::EC_KEY_new_by_curve_name(nid.as_raw())).map(|p| EcKey::from_ptr(p)) } } /// Constructs an `EcKey` corresponding to a curve. /// /// This corresponds to [`EC_KEY_set_group`]. /// /// [`EC_KEY_set_group`]: https://www.openssl.org/docs/man1.1.0/crypto/EC_KEY_new.html pub fn from_group(group: &EcGroupRef) -> Result, ErrorStack> { unsafe { cvt_p(ffi::EC_KEY_new()) .map(|p| EcKey::from_ptr(p)) .and_then(|key| { cvt(ffi::EC_KEY_set_group(key.as_ptr(), group.as_ptr())).map(|_| key) }) } } } impl EcKey { /// Constructs an `EcKey` from the specified group with the associated `EcPoint`, public_key. /// /// This will only have the associated public_key. /// /// # Example /// /// ```no_run /// use openssl::bn::BigNumContext; /// use openssl::ec::*; /// use openssl::nid::Nid; /// use openssl::pkey::PKey; /// /// // get bytes from somewhere, i.e. this will not produce a valid key /// let public_key: Vec = vec![]; /// /// // create an EcKey from the binary form of a EcPoint /// let group = EcGroup::from_curve_name(Nid::SECP256K1).unwrap(); /// let mut ctx = BigNumContext::new().unwrap(); /// let point = EcPoint::from_bytes(&group, &public_key, &mut ctx).unwrap(); /// let key = EcKey::from_public_key(&group, &point); /// ``` pub fn from_public_key( group: &EcGroupRef, public_key: &EcPointRef, ) -> Result, ErrorStack> { unsafe { cvt_p(ffi::EC_KEY_new()) .map(|p| EcKey::from_ptr(p)) .and_then(|key| { cvt(ffi::EC_KEY_set_group(key.as_ptr(), group.as_ptr())).map(|_| key) }) .and_then(|key| { cvt(ffi::EC_KEY_set_public_key( key.as_ptr(), public_key.as_ptr(), )) .map(|_| key) }) } } /// Constructs a public key from its affine coordinates. pub fn from_public_key_affine_coordinates( group: &EcGroupRef, x: &BigNumRef, y: &BigNumRef, ) -> Result, ErrorStack> { unsafe { cvt_p(ffi::EC_KEY_new()) .map(|p| EcKey::from_ptr(p)) .and_then(|key| { cvt(ffi::EC_KEY_set_group(key.as_ptr(), group.as_ptr())).map(|_| key) }) .and_then(|key| { cvt(ffi::EC_KEY_set_public_key_affine_coordinates( key.as_ptr(), x.as_ptr(), y.as_ptr(), )) .map(|_| key) }) } } from_pem! 
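// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// A parameters-only key from `EcKey::from_curve_name` carries no public or
// private component; as the doc above notes, it exists mainly to feed the
// `set_tmp_ecdh` methods. The exact `set_tmp_ecdh` signature is assumed here
// and the function name is illustrative.
fn tmp_ecdh_params_sketch() -> Result<(), Box<dyn std::error::Error>> {
    use openssl::ec::EcKey;
    use openssl::nid::Nid;
    use openssl::ssl::{SslContextBuilder, SslMethod};

    let curve = EcKey::from_curve_name(Nid::X9_62_PRIME256V1)?;

    let mut builder = SslContextBuilder::new(SslMethod::tls())?;
    builder.set_tmp_ecdh(&curve)?;
    let _ctx = builder.build();
    Ok(())
}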
{ /// Decodes a PEM-encoded SubjectPublicKeyInfo structure containing a EC key. /// /// The input should have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_read_bio_EC_PUBKEY`]. /// /// [`PEM_read_bio_EC_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_EC_PUBKEY.html public_key_from_pem, EcKey, ffi::PEM_read_bio_EC_PUBKEY } from_der! { /// Decodes a DER-encoded SubjectPublicKeyInfo structure containing a EC key. /// /// This corresponds to [`d2i_EC_PUBKEY`]. /// /// [`d2i_EC_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_EC_PUBKEY.html public_key_from_der, EcKey, ffi::d2i_EC_PUBKEY } } impl EcKey { /// Generates a new public/private key pair on the specified curve. pub fn generate(group: &EcGroupRef) -> Result, ErrorStack> { unsafe { cvt_p(ffi::EC_KEY_new()) .map(|p| EcKey::from_ptr(p)) .and_then(|key| { cvt(ffi::EC_KEY_set_group(key.as_ptr(), group.as_ptr())).map(|_| key) }) .and_then(|key| cvt(ffi::EC_KEY_generate_key(key.as_ptr())).map(|_| key)) } } /// Constructs an public/private key pair given a curve, a private key and a public key point. pub fn from_private_components( group: &EcGroupRef, private_number: &BigNumRef, public_key: &EcPointRef, ) -> Result, ErrorStack> { unsafe { cvt_p(ffi::EC_KEY_new()) .map(|p| EcKey::from_ptr(p)) .and_then(|key| { cvt(ffi::EC_KEY_set_group(key.as_ptr(), group.as_ptr())).map(|_| key) }) .and_then(|key| { cvt(ffi::EC_KEY_set_private_key( key.as_ptr(), private_number.as_ptr(), )) .map(|_| key) }) .and_then(|key| { cvt(ffi::EC_KEY_set_public_key( key.as_ptr(), public_key.as_ptr(), )) .map(|_| key) }) } } private_key_from_pem! { /// Deserializes a private key from a PEM-encoded ECPrivateKey structure. /// /// The input should have a header of `-----BEGIN EC PRIVATE KEY-----`. /// /// This corresponds to `PEM_read_bio_ECPrivateKey`. private_key_from_pem, /// Deserializes a private key from a PEM-encoded encrypted ECPrivateKey structure. /// /// The input should have a header of `-----BEGIN EC PRIVATE KEY-----`. /// /// This corresponds to `PEM_read_bio_ECPrivateKey`. private_key_from_pem_passphrase, /// Deserializes a private key from a PEM-encoded encrypted ECPrivateKey structure. /// /// The callback should fill the password into the provided buffer and return its length. /// /// The input should have a header of `-----BEGIN EC PRIVATE KEY-----`. /// /// This corresponds to `PEM_read_bio_ECPrivateKey`. private_key_from_pem_callback, EcKey, ffi::PEM_read_bio_ECPrivateKey } from_der! { /// Decodes a DER-encoded elliptic curve private key structure. /// /// This corresponds to [`d2i_ECPrivateKey`]. 
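// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Writes a private key as a passphrase-protected ECPrivateKey PEM with
// `private_key_to_pem_passphrase` and reads it back with
// `private_key_from_pem_passphrase`. The cipher choice, the passphrase and the
// function name are assumptions made for the example.
fn ec_key_encrypted_pem_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::ec::{EcGroup, EcKey};
    use openssl::nid::Nid;
    use openssl::symm::Cipher;

    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
    let key = EcKey::generate(&group)?;

    let pem = key.private_key_to_pem_passphrase(Cipher::aes_256_cbc(), b"correct horse")?;
    let restored = EcKey::private_key_from_pem_passphrase(&pem, b"correct horse")?;
    assert!(key.private_key() == restored.private_key());
    Ok(())
}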
/// /// [`d2i_ECPrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_ECPrivate_key.html private_key_from_der, EcKey, ffi::d2i_ECPrivateKey } } impl Clone for EcKey { fn clone(&self) -> EcKey { (**self).to_owned() } } impl fmt::Debug for EcKey { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "EcKey") } } #[cfg(test)] mod test { use hex::FromHex; use super::*; use crate::bn::{BigNum, BigNumContext}; use crate::nid::Nid; #[test] fn key_new_by_curve_name() { EcKey::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); } #[test] fn generate() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); EcKey::generate(&group).unwrap(); } #[test] fn cofactor() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let mut cofactor = BigNum::new().unwrap(); group.cofactor(&mut cofactor, &mut ctx).unwrap(); let one = BigNum::from_u32(1).unwrap(); assert_eq!(cofactor, one); } #[test] #[allow(clippy::redundant_clone)] fn dup() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); drop(key.clone()); } #[test] fn point_new() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); EcPoint::new(&group).unwrap(); } #[test] fn point_bytes() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let point = key.public_key(); let mut ctx = BigNumContext::new().unwrap(); let bytes = point .to_bytes(&group, PointConversionForm::COMPRESSED, &mut ctx) .unwrap(); let point2 = EcPoint::from_bytes(&group, &bytes, &mut ctx).unwrap(); assert!(point.eq(&group, &point2, &mut ctx).unwrap()); } #[test] fn point_owned() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let point = key.public_key(); let owned = point.to_owned(&group).unwrap(); let mut ctx = BigNumContext::new().unwrap(); assert!(owned.eq(&group, point, &mut ctx).unwrap()); } #[test] fn mul_generator() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let mut public_key = EcPoint::new(&group).unwrap(); public_key .mul_generator(&group, key.private_key(), &ctx) .unwrap(); assert!(public_key.eq(&group, key.public_key(), &mut ctx).unwrap()); } #[test] fn generator() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let gen = group.generator(); let one = BigNum::from_u32(1).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let mut ecp = EcPoint::new(&group).unwrap(); ecp.mul_generator(&group, &one, &ctx).unwrap(); assert!(ecp.eq(&group, gen, &mut ctx).unwrap()); } #[test] fn key_from_public_key() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let bytes = key .public_key() .to_bytes(&group, PointConversionForm::COMPRESSED, &mut ctx) .unwrap(); drop(key); let public_key = EcPoint::from_bytes(&group, &bytes, &mut ctx).unwrap(); let ec_key = EcKey::from_public_key(&group, &public_key).unwrap(); assert!(ec_key.check_key().is_ok()); } #[test] fn key_from_private_components() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let key = EcKey::generate(&group).unwrap(); let dup_key = EcKey::from_private_components(&group, key.private_key(), key.public_key()).unwrap(); dup_key.check_key().unwrap(); 
assert!(key.private_key() == dup_key.private_key()); } #[test] fn key_from_affine_coordinates() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let x = Vec::from_hex("30a0424cd21c2944838a2d75c92b37e76ea20d9f00893a3b4eee8a3c0aafec3e") .unwrap(); let y = Vec::from_hex("e04b65e92456d9888b52b379bdfbd51ee869ef1f0fc65b6659695b6cce081723") .unwrap(); let xbn = BigNum::from_slice(&x).unwrap(); let ybn = BigNum::from_slice(&y).unwrap(); let ec_key = EcKey::from_public_key_affine_coordinates(&group, &xbn, &ybn).unwrap(); assert!(ec_key.check_key().is_ok()); } #[cfg(ossl111)] #[test] fn get_affine_coordinates() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let x = Vec::from_hex("30a0424cd21c2944838a2d75c92b37e76ea20d9f00893a3b4eee8a3c0aafec3e") .unwrap(); let y = Vec::from_hex("e04b65e92456d9888b52b379bdfbd51ee869ef1f0fc65b6659695b6cce081723") .unwrap(); let xbn = BigNum::from_slice(&x).unwrap(); let ybn = BigNum::from_slice(&y).unwrap(); let ec_key = EcKey::from_public_key_affine_coordinates(&group, &xbn, &ybn).unwrap(); let mut xbn2 = BigNum::new().unwrap(); let mut ybn2 = BigNum::new().unwrap(); let mut ctx = BigNumContext::new().unwrap(); let ec_key_pk = ec_key.public_key(); ec_key_pk .affine_coordinates(&group, &mut xbn2, &mut ybn2, &mut ctx) .unwrap(); assert_eq!(xbn2, xbn); assert_eq!(ybn2, ybn); } #[test] fn get_affine_coordinates_gfp() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let x = Vec::from_hex("30a0424cd21c2944838a2d75c92b37e76ea20d9f00893a3b4eee8a3c0aafec3e") .unwrap(); let y = Vec::from_hex("e04b65e92456d9888b52b379bdfbd51ee869ef1f0fc65b6659695b6cce081723") .unwrap(); let xbn = BigNum::from_slice(&x).unwrap(); let ybn = BigNum::from_slice(&y).unwrap(); let ec_key = EcKey::from_public_key_affine_coordinates(&group, &xbn, &ybn).unwrap(); let mut xbn2 = BigNum::new().unwrap(); let mut ybn2 = BigNum::new().unwrap(); let mut ctx = BigNumContext::new().unwrap(); let ec_key_pk = ec_key.public_key(); ec_key_pk .affine_coordinates_gfp(&group, &mut xbn2, &mut ybn2, &mut ctx) .unwrap(); assert_eq!(xbn2, xbn); assert_eq!(ybn2, ybn); } #[test] fn is_infinity() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let g = group.generator(); assert!(!g.is_infinity(&group)); let mut order = BigNum::new().unwrap(); group.order(&mut order, &mut ctx).unwrap(); let mut inf = EcPoint::new(&group).unwrap(); inf.mul_generator(&group, &order, &ctx).unwrap(); assert!(inf.is_infinity(&group)); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_EC2M"))] fn is_on_curve() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let mut ctx = BigNumContext::new().unwrap(); let g = group.generator(); assert!(g.is_on_curve(&group, &mut ctx).unwrap()); let group2 = EcGroup::from_curve_name(Nid::X9_62_PRIME239V3).unwrap(); assert!(!g.is_on_curve(&group2, &mut ctx).unwrap()); } } vendor/openssl/src/encrypt.rs0000664000175000017500000004410014160055207017152 0ustar mwhudsonmwhudson//! Message encryption. //! //! The [`Encrypter`] allows for encryption of data given a public key. The [`Decrypter`] can be //! used with the corresponding private key to decrypt the data. //! //! # Examples //! //! Encrypt and decrypt data given an RSA keypair: //! //! ```rust //! use openssl::encrypt::{Encrypter, Decrypter}; //! use openssl::rsa::{Rsa, Padding}; //! use openssl::pkey::PKey; //! //! // Generate a keypair //! let keypair = Rsa::generate(2048).unwrap(); //! 
let keypair = PKey::from_rsa(keypair).unwrap(); //! //! let data = b"hello, world!"; //! //! // Encrypt the data with RSA PKCS1 //! let mut encrypter = Encrypter::new(&keypair).unwrap(); //! encrypter.set_rsa_padding(Padding::PKCS1).unwrap(); //! // Create an output buffer //! let buffer_len = encrypter.encrypt_len(data).unwrap(); //! let mut encrypted = vec![0; buffer_len]; //! // Encrypt and truncate the buffer //! let encrypted_len = encrypter.encrypt(data, &mut encrypted).unwrap(); //! encrypted.truncate(encrypted_len); //! //! // Decrypt the data //! let mut decrypter = Decrypter::new(&keypair).unwrap(); //! decrypter.set_rsa_padding(Padding::PKCS1).unwrap(); //! // Create an output buffer //! let buffer_len = decrypter.decrypt_len(&encrypted).unwrap(); //! let mut decrypted = vec![0; buffer_len]; //! // Encrypt and truncate the buffer //! let decrypted_len = decrypter.decrypt(&encrypted, &mut decrypted).unwrap(); //! decrypted.truncate(decrypted_len); //! assert_eq!(&*decrypted, data); //! ``` #[cfg(any(ossl102, libressl310))] use libc::{c_int, c_void}; use std::{marker::PhantomData, ptr}; use crate::error::ErrorStack; use crate::hash::MessageDigest; use crate::pkey::{HasPrivate, HasPublic, PKeyRef}; use crate::rsa::Padding; use crate::{cvt, cvt_p}; use foreign_types::ForeignTypeRef; /// A type which encrypts data. pub struct Encrypter<'a> { pctx: *mut ffi::EVP_PKEY_CTX, _p: PhantomData<&'a ()>, } unsafe impl<'a> Sync for Encrypter<'a> {} unsafe impl<'a> Send for Encrypter<'a> {} impl<'a> Drop for Encrypter<'a> { fn drop(&mut self) { unsafe { ffi::EVP_PKEY_CTX_free(self.pctx); } } } impl<'a> Encrypter<'a> { /// Creates a new `Encrypter`. /// /// OpenSSL documentation at [`EVP_PKEY_encrypt_init`]. /// /// [`EVP_PKEY_encrypt_init`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_encrypt_init.html pub fn new(pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPublic, { unsafe { ffi::init(); let pctx = cvt_p(ffi::EVP_PKEY_CTX_new(pkey.as_ptr(), ptr::null_mut()))?; let r = ffi::EVP_PKEY_encrypt_init(pctx); if r != 1 { ffi::EVP_PKEY_CTX_free(pctx); return Err(ErrorStack::get()); } Ok(Encrypter { pctx, _p: PhantomData, }) } } /// Returns the RSA padding mode in use. /// /// This is only useful for RSA keys. /// /// This corresponds to `EVP_PKEY_CTX_get_rsa_padding`. pub fn rsa_padding(&self) -> Result { unsafe { let mut pad = 0; cvt(ffi::EVP_PKEY_CTX_get_rsa_padding(self.pctx, &mut pad)) .map(|_| Padding::from_raw(pad)) } } /// Sets the RSA padding mode. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_padding`]. /// /// [`EVP_PKEY_CTX_set_rsa_padding`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_padding.html pub fn set_rsa_padding(&mut self, padding: Padding) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_padding( self.pctx, padding.as_raw(), )) .map(|_| ()) } } /// Sets the RSA MGF1 algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_mgf1_md`]. /// /// [`EVP_PKEY_CTX_set_rsa_mgf1_md`]: https://www.openssl.org/docs/manmaster/man7/RSA-PSS.html pub fn set_rsa_mgf1_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_mgf1_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Sets the RSA OAEP algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_oaep_md`]. 
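// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// RSA-OAEP with SHA-256 for both the OAEP digest and MGF1, mirroring the
// `rsa_encrypt_decrypt_with_sha256` test at the end of this module. Requires
// OpenSSL 1.0.2+ or LibreSSL 3.1.0+; the throwaway key and the function name
// are assumptions made for the example.
fn rsa_oaep_sha256_sketch() -> Result<(), openssl::error::ErrorStack> {
    use openssl::encrypt::{Decrypter, Encrypter};
    use openssl::hash::MessageDigest;
    use openssl::pkey::PKey;
    use openssl::rsa::{Padding, Rsa};

    let pkey = PKey::from_rsa(Rsa::generate(2048)?)?;
    let data = b"top secret";

    let mut encrypter = Encrypter::new(&pkey)?;
    encrypter.set_rsa_padding(Padding::PKCS1_OAEP)?;
    encrypter.set_rsa_oaep_md(MessageDigest::sha256())?;
    encrypter.set_rsa_mgf1_md(MessageDigest::sha256())?;
    let mut ciphertext = vec![0; encrypter.encrypt_len(data)?];
    let len = encrypter.encrypt(data, &mut ciphertext)?;
    ciphertext.truncate(len);

    let mut decrypter = Decrypter::new(&pkey)?;
    decrypter.set_rsa_padding(Padding::PKCS1_OAEP)?;
    decrypter.set_rsa_oaep_md(MessageDigest::sha256())?;
    decrypter.set_rsa_mgf1_md(MessageDigest::sha256())?;
    let mut plaintext = vec![0; decrypter.decrypt_len(&ciphertext)?];
    let len = decrypter.decrypt(&ciphertext, &mut plaintext)?;
    plaintext.truncate(len);

    assert_eq!(&plaintext[..], &data[..]);
    Ok(())
}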
/// /// [`EVP_PKEY_CTX_set_rsa_oaep_md`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_CTX_set_rsa_oaep_md.html #[cfg(any(ossl102, libressl310))] pub fn set_rsa_oaep_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_oaep_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Sets the RSA OAEP label. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set0_rsa_oaep_label`]. /// /// [`EVP_PKEY_CTX_set0_rsa_oaep_label`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_CTX_set0_rsa_oaep_label.html #[cfg(any(ossl102, libressl310))] pub fn set_rsa_oaep_label(&mut self, label: &[u8]) -> Result<(), ErrorStack> { unsafe { let p = cvt_p(ffi::CRYPTO_malloc( label.len() as _, concat!(file!(), "\0").as_ptr() as *const _, line!() as c_int, ))?; ptr::copy_nonoverlapping(label.as_ptr(), p as *mut u8, label.len()); cvt(ffi::EVP_PKEY_CTX_set0_rsa_oaep_label( self.pctx, p as *mut c_void, label.len() as c_int, )) .map(|_| ()) .map_err(|e| { #[cfg(not(ossl110))] ::ffi::CRYPTO_free(p as *mut c_void); #[cfg(ossl110)] ::ffi::CRYPTO_free( p as *mut c_void, concat!(file!(), "\0").as_ptr() as *const _, line!() as c_int, ); e }) } } /// Performs public key encryption. /// /// In order to know the size needed for the output buffer, use [`encrypt_len`](Encrypter::encrypt_len). /// Note that the length of the output buffer can be greater of the length of the encoded data. /// ``` /// # use openssl::{ /// # encrypt::Encrypter, /// # pkey::PKey, /// # rsa::{Rsa, Padding}, /// # }; /// # /// # let key = include_bytes!("../test/rsa.pem"); /// # let private_key = Rsa::private_key_from_pem(key).unwrap(); /// # let pkey = PKey::from_rsa(private_key).unwrap(); /// # let input = b"hello world".to_vec(); /// # /// let mut encrypter = Encrypter::new(&pkey).unwrap(); /// encrypter.set_rsa_padding(Padding::PKCS1).unwrap(); /// /// // Get the length of the output buffer /// let buffer_len = encrypter.encrypt_len(&input).unwrap(); /// let mut encoded = vec![0u8; buffer_len]; /// /// // Encode the data and get its length /// let encoded_len = encrypter.encrypt(&input, &mut encoded).unwrap(); /// /// // Use only the part of the buffer with the encoded data /// let encoded = &encoded[..encoded_len]; /// ``` /// /// This corresponds to [`EVP_PKEY_encrypt`]. /// /// [`EVP_PKEY_encrypt`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_encrypt.html pub fn encrypt(&self, from: &[u8], to: &mut [u8]) -> Result { let mut written = to.len(); unsafe { cvt(ffi::EVP_PKEY_encrypt( self.pctx, to.as_mut_ptr(), &mut written, from.as_ptr(), from.len(), ))?; } Ok(written) } /// Gets the size of the buffer needed to encrypt the input data. /// /// This corresponds to [`EVP_PKEY_encrypt`] called with a null pointer as output argument. /// /// [`EVP_PKEY_encrypt`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_encrypt.html pub fn encrypt_len(&self, from: &[u8]) -> Result { let mut written = 0; unsafe { cvt(ffi::EVP_PKEY_encrypt( self.pctx, ptr::null_mut(), &mut written, from.as_ptr(), from.len(), ))?; } Ok(written) } } /// A type which decrypts data. pub struct Decrypter<'a> { pctx: *mut ffi::EVP_PKEY_CTX, _p: PhantomData<&'a ()>, } unsafe impl<'a> Sync for Decrypter<'a> {} unsafe impl<'a> Send for Decrypter<'a> {} impl<'a> Drop for Decrypter<'a> { fn drop(&mut self) { unsafe { ffi::EVP_PKEY_CTX_free(self.pctx); } } } impl<'a> Decrypter<'a> { /// Creates a new `Decrypter`. /// /// OpenSSL documentation at [`EVP_PKEY_decrypt_init`]. 
/// /// [`EVP_PKEY_decrypt_init`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_decrypt_init.html pub fn new(pkey: &'a PKeyRef) -> Result, ErrorStack> where T: HasPrivate, { unsafe { ffi::init(); let pctx = cvt_p(ffi::EVP_PKEY_CTX_new(pkey.as_ptr(), ptr::null_mut()))?; let r = ffi::EVP_PKEY_decrypt_init(pctx); if r != 1 { ffi::EVP_PKEY_CTX_free(pctx); return Err(ErrorStack::get()); } Ok(Decrypter { pctx, _p: PhantomData, }) } } /// Returns the RSA padding mode in use. /// /// This is only useful for RSA keys. /// /// This corresponds to `EVP_PKEY_CTX_get_rsa_padding`. pub fn rsa_padding(&self) -> Result { unsafe { let mut pad = 0; cvt(ffi::EVP_PKEY_CTX_get_rsa_padding(self.pctx, &mut pad)) .map(|_| Padding::from_raw(pad)) } } /// Sets the RSA padding mode. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_padding`]. /// /// [`EVP_PKEY_CTX_set_rsa_padding`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_rsa_padding.html pub fn set_rsa_padding(&mut self, padding: Padding) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_padding( self.pctx, padding.as_raw(), )) .map(|_| ()) } } /// Sets the RSA MGF1 algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_mgf1_md`]. /// /// [`EVP_PKEY_CTX_set_rsa_mgf1_md`]: https://www.openssl.org/docs/manmaster/man7/RSA-PSS.html pub fn set_rsa_mgf1_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_mgf1_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Sets the RSA OAEP algorithm. /// /// This is only useful for RSA keys. /// /// This corresponds to [`EVP_PKEY_CTX_set_rsa_oaep_md`]. /// /// [`EVP_PKEY_CTX_set_rsa_oaep_md`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_CTX_set_rsa_oaep_md.html #[cfg(any(ossl102, libressl310))] pub fn set_rsa_oaep_md(&mut self, md: MessageDigest) -> Result<(), ErrorStack> { unsafe { cvt(ffi::EVP_PKEY_CTX_set_rsa_oaep_md( self.pctx, md.as_ptr() as *mut _, )) .map(|_| ()) } } /// Performs public key decryption. /// /// In order to know the size needed for the output buffer, use [`decrypt_len`](Decrypter::decrypt_len). /// Note that the length of the output buffer can be greater of the length of the decoded data. 
/// ``` /// # use openssl::{ /// # encrypt::Decrypter, /// # pkey::PKey, /// # rsa::{Rsa, Padding}, /// # }; /// # /// # const INPUT: &[u8] = b"\ /// # \x26\xa1\xc1\x13\xc5\x7f\xb4\x9f\xa0\xb4\xde\x61\x5e\x2e\xc6\xfb\x76\x5c\xd1\x2b\x5f\ /// # \x1d\x36\x60\xfa\xf8\xe8\xb3\x21\xf4\x9c\x70\xbc\x03\xea\xea\xac\xce\x4b\xb3\xf6\x45\ /// # \xcc\xb3\x80\x9e\xa8\xf7\xc3\x5d\x06\x12\x7a\xa3\x0c\x30\x67\xf1\xe7\x94\x6c\xf6\x26\ /// # \xac\x28\x17\x59\x69\xe1\xdc\xed\x7e\xc0\xe9\x62\x57\x49\xce\xdd\x13\x07\xde\x18\x03\ /// # \x0f\x9d\x61\x65\xb9\x23\x8c\x78\x4b\xad\x23\x49\x75\x47\x64\xa0\xa0\xa2\x90\xc1\x49\ /// # \x1b\x05\x24\xc2\xe9\x2c\x0d\x49\x78\x72\x61\x72\xed\x8b\x6f\x8a\xe8\xca\x05\x5c\x58\ /// # \xd6\x95\xd6\x7b\xe3\x2d\x0d\xaa\x3e\x6d\x3c\x9a\x1c\x1d\xb4\x6c\x42\x9d\x9a\x82\x55\ /// # \xd9\xde\xc8\x08\x7b\x17\xac\xd7\xaf\x86\x7b\x69\x9e\x3c\xf4\x5e\x1c\x39\x52\x6d\x62\ /// # \x50\x51\xbd\xa6\xc8\x4e\xe9\x34\xf0\x37\x0d\xa9\xa9\x77\xe6\xf5\xc2\x47\x2d\xa8\xee\ /// # \x3f\x69\x78\xff\xa9\xdc\x70\x22\x20\x9a\x5c\x9b\x70\x15\x90\xd3\xb4\x0e\x54\x9e\x48\ /// # \xed\xb6\x2c\x88\xfc\xb4\xa9\x37\x10\xfa\x71\xb2\xec\x75\xe7\xe7\x0e\xf4\x60\x2c\x7b\ /// # \x58\xaf\xa0\x53\xbd\x24\xf1\x12\xe3\x2e\x99\x25\x0a\x54\x54\x9d\xa1\xdb\xca\x41\x85\ /// # \xf4\x62\x78\x64"; /// # /// # let key = include_bytes!("../test/rsa.pem"); /// # let private_key = Rsa::private_key_from_pem(key).unwrap(); /// # let pkey = PKey::from_rsa(private_key).unwrap(); /// # let input = INPUT.to_vec(); /// # /// let mut decrypter = Decrypter::new(&pkey).unwrap(); /// decrypter.set_rsa_padding(Padding::PKCS1).unwrap(); /// /// // Get the length of the output buffer /// let buffer_len = decrypter.decrypt_len(&input).unwrap(); /// let mut decoded = vec![0u8; buffer_len]; /// /// // Decrypt the data and get its length /// let decoded_len = decrypter.decrypt(&input, &mut decoded).unwrap(); /// /// // Use only the part of the buffer with the decrypted data /// let decoded = &decoded[..decoded_len]; /// ``` /// /// This corresponds to [`EVP_PKEY_decrypt`]. /// /// [`EVP_PKEY_decrypt`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_decrypt.html pub fn decrypt(&self, from: &[u8], to: &mut [u8]) -> Result { let mut written = to.len(); unsafe { cvt(ffi::EVP_PKEY_decrypt( self.pctx, to.as_mut_ptr(), &mut written, from.as_ptr(), from.len(), ))?; } Ok(written) } /// Gets the size of the buffer needed to decrypt the input data. /// /// This corresponds to [`EVP_PKEY_decrypt`] called with a null pointer as output argument. 
/// /// [`EVP_PKEY_decrypt`]: https://www.openssl.org/docs/manmaster/man3/EVP_PKEY_decrypt.html pub fn decrypt_len(&self, from: &[u8]) -> Result { let mut written = 0; unsafe { cvt(ffi::EVP_PKEY_decrypt( self.pctx, ptr::null_mut(), &mut written, from.as_ptr(), from.len(), ))?; } Ok(written) } } #[cfg(test)] mod test { use hex::FromHex; use crate::encrypt::{Decrypter, Encrypter}; #[cfg(any(ossl102, libressl310))] use crate::hash::MessageDigest; use crate::pkey::PKey; use crate::rsa::{Padding, Rsa}; const INPUT: &str = "65794a68624763694f694a53557a49314e694a392e65794a7063334d694f694a71623255694c41304b49434a6c\ 654841694f6a457a4d4441344d546b7a4f44417344516f67496d6830644841364c79396c654746746347786c4c\ 6d4e76625339706331397962323930496a7030636e566c6651"; #[test] fn rsa_encrypt_decrypt() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let mut encrypter = Encrypter::new(&pkey).unwrap(); encrypter.set_rsa_padding(Padding::PKCS1).unwrap(); let input = Vec::from_hex(INPUT).unwrap(); let buffer_len = encrypter.encrypt_len(&input).unwrap(); let mut encoded = vec![0u8; buffer_len]; let encoded_len = encrypter.encrypt(&input, &mut encoded).unwrap(); let encoded = &encoded[..encoded_len]; let mut decrypter = Decrypter::new(&pkey).unwrap(); decrypter.set_rsa_padding(Padding::PKCS1).unwrap(); let buffer_len = decrypter.decrypt_len(encoded).unwrap(); let mut decoded = vec![0u8; buffer_len]; let decoded_len = decrypter.decrypt(encoded, &mut decoded).unwrap(); let decoded = &decoded[..decoded_len]; assert_eq!(decoded, &*input); } #[test] #[cfg(any(ossl102, libressl310))] fn rsa_encrypt_decrypt_with_sha256() { let key = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(key).unwrap(); let pkey = PKey::from_rsa(private_key).unwrap(); let md = MessageDigest::sha256(); let mut encrypter = Encrypter::new(&pkey).unwrap(); encrypter.set_rsa_padding(Padding::PKCS1_OAEP).unwrap(); encrypter.set_rsa_oaep_md(md).unwrap(); encrypter.set_rsa_mgf1_md(md).unwrap(); let input = Vec::from_hex(INPUT).unwrap(); let buffer_len = encrypter.encrypt_len(&input).unwrap(); let mut encoded = vec![0u8; buffer_len]; let encoded_len = encrypter.encrypt(&input, &mut encoded).unwrap(); let encoded = &encoded[..encoded_len]; let mut decrypter = Decrypter::new(&pkey).unwrap(); decrypter.set_rsa_padding(Padding::PKCS1_OAEP).unwrap(); decrypter.set_rsa_oaep_md(md).unwrap(); decrypter.set_rsa_mgf1_md(md).unwrap(); let buffer_len = decrypter.decrypt_len(encoded).unwrap(); let mut decoded = vec![0u8; buffer_len]; let decoded_len = decrypter.decrypt(encoded, &mut decoded).unwrap(); let decoded = &decoded[..decoded_len]; assert_eq!(decoded, &*input); } } vendor/openssl/src/stack.rs0000664000175000017500000002274514160055207016606 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef, Opaque}; use libc::c_int; use std::borrow::Borrow; use std::convert::AsRef; use std::fmt; use std::iter; use std::marker::PhantomData; use std::mem; use std::ops::{Deref, DerefMut, Index, IndexMut, Range}; use crate::error::ErrorStack; use crate::util::ForeignTypeExt; use crate::{cvt, cvt_p}; cfg_if! 
{ if #[cfg(ossl110)] { use ffi::{ OPENSSL_sk_pop, OPENSSL_sk_free, OPENSSL_sk_num, OPENSSL_sk_value, OPENSSL_STACK, OPENSSL_sk_new_null, OPENSSL_sk_push, }; } else { use ffi::{ sk_pop as OPENSSL_sk_pop, sk_free as OPENSSL_sk_free, sk_num as OPENSSL_sk_num, sk_value as OPENSSL_sk_value, _STACK as OPENSSL_STACK, sk_new_null as OPENSSL_sk_new_null, sk_push as OPENSSL_sk_push, }; } } /// Trait implemented by types which can be placed in a stack. /// /// It should not be implemented for any type outside of this crate. pub trait Stackable: ForeignType { /// The C stack type for this element. /// /// Generally called `stack_st_{ELEMENT_TYPE}`, normally hidden by the /// `STACK_OF(ELEMENT_TYPE)` macro in the OpenSSL API. type StackType; } /// An owned stack of `T`. pub struct Stack(*mut T::StackType); unsafe impl Send for Stack {} unsafe impl Sync for Stack {} impl fmt::Debug for Stack where T: Stackable, T::Ref: fmt::Debug, { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.debug_list().entries(self).finish() } } impl Drop for Stack { fn drop(&mut self) { unsafe { while self.pop().is_some() {} OPENSSL_sk_free(self.0 as *mut _); } } } impl Stack { pub fn new() -> Result, ErrorStack> { unsafe { ffi::init(); let ptr = cvt_p(OPENSSL_sk_new_null())?; Ok(Stack(ptr as *mut _)) } } } impl iter::IntoIterator for Stack { type IntoIter = IntoIter; type Item = T; fn into_iter(self) -> IntoIter { let it = IntoIter { stack: self.0, idxs: 0..self.len() as c_int, }; mem::forget(self); it } } impl AsRef> for Stack { fn as_ref(&self) -> &StackRef { &*self } } impl Borrow> for Stack { fn borrow(&self) -> &StackRef { &*self } } impl ForeignType for Stack { type CType = T::StackType; type Ref = StackRef; #[inline] unsafe fn from_ptr(ptr: *mut T::StackType) -> Stack { assert!( !ptr.is_null(), "Must not instantiate a Stack from a null-ptr - use Stack::new() in \ that case" ); Stack(ptr) } #[inline] fn as_ptr(&self) -> *mut T::StackType { self.0 } } impl Deref for Stack { type Target = StackRef; fn deref(&self) -> &StackRef { unsafe { StackRef::from_ptr(self.0) } } } impl DerefMut for Stack { fn deref_mut(&mut self) -> &mut StackRef { unsafe { StackRef::from_ptr_mut(self.0) } } } pub struct IntoIter { stack: *mut T::StackType, idxs: Range, } impl Drop for IntoIter { fn drop(&mut self) { unsafe { // https://github.com/rust-lang/rust-clippy/issues/7510 #[allow(clippy::while_let_on_iterator)] while let Some(_) = self.next() {} OPENSSL_sk_free(self.stack as *mut _); } } } impl Iterator for IntoIter { type Item = T; fn next(&mut self) -> Option { unsafe { self.idxs .next() .map(|i| T::from_ptr(OPENSSL_sk_value(self.stack as *mut _, i) as *mut _)) } } fn size_hint(&self) -> (usize, Option) { self.idxs.size_hint() } } impl DoubleEndedIterator for IntoIter { fn next_back(&mut self) -> Option { unsafe { self.idxs .next_back() .map(|i| T::from_ptr(OPENSSL_sk_value(self.stack as *mut _, i) as *mut _)) } } } impl ExactSizeIterator for IntoIter {} pub struct StackRef(Opaque, PhantomData); unsafe impl Send for StackRef {} unsafe impl Sync for StackRef {} impl ForeignTypeRef for StackRef { type CType = T::StackType; } impl StackRef { fn as_stack(&self) -> *mut OPENSSL_STACK { self.as_ptr() as *mut _ } /// Returns the number of items in the stack. pub fn len(&self) -> usize { unsafe { OPENSSL_sk_num(self.as_stack()) as usize } } /// Determines if the stack is empty. 
pub fn is_empty(&self) -> bool { self.len() == 0 } pub fn iter(&self) -> Iter<'_, T> { Iter { stack: self, idxs: 0..self.len() as c_int, } } pub fn iter_mut(&mut self) -> IterMut<'_, T> { IterMut { idxs: 0..self.len() as c_int, stack: self, } } /// Returns a reference to the element at the given index in the /// stack or `None` if the index is out of bounds pub fn get(&self, idx: usize) -> Option<&T::Ref> { unsafe { if idx >= self.len() { return None; } Some(T::Ref::from_ptr(self._get(idx))) } } /// Returns a mutable reference to the element at the given index in the /// stack or `None` if the index is out of bounds pub fn get_mut(&mut self, idx: usize) -> Option<&mut T::Ref> { unsafe { if idx >= self.len() { return None; } Some(T::Ref::from_ptr_mut(self._get(idx))) } } /// Pushes a value onto the top of the stack. pub fn push(&mut self, data: T) -> Result<(), ErrorStack> { unsafe { cvt(OPENSSL_sk_push(self.as_stack(), data.as_ptr() as *mut _))?; mem::forget(data); Ok(()) } } /// Removes the last element from the stack and returns it. pub fn pop(&mut self) -> Option { unsafe { let ptr = OPENSSL_sk_pop(self.as_stack()); T::from_ptr_opt(ptr as *mut _) } } unsafe fn _get(&self, idx: usize) -> *mut T::CType { OPENSSL_sk_value(self.as_stack(), idx as c_int) as *mut _ } } impl Index for StackRef { type Output = T::Ref; fn index(&self, index: usize) -> &T::Ref { self.get(index).unwrap() } } impl IndexMut for StackRef { fn index_mut(&mut self, index: usize) -> &mut T::Ref { self.get_mut(index).unwrap() } } impl<'a, T: Stackable> iter::IntoIterator for &'a StackRef { type Item = &'a T::Ref; type IntoIter = Iter<'a, T>; fn into_iter(self) -> Iter<'a, T> { self.iter() } } impl<'a, T: Stackable> iter::IntoIterator for &'a mut StackRef { type Item = &'a mut T::Ref; type IntoIter = IterMut<'a, T>; fn into_iter(self) -> IterMut<'a, T> { self.iter_mut() } } impl<'a, T: Stackable> iter::IntoIterator for &'a Stack { type Item = &'a T::Ref; type IntoIter = Iter<'a, T>; fn into_iter(self) -> Iter<'a, T> { self.iter() } } impl<'a, T: Stackable> iter::IntoIterator for &'a mut Stack { type Item = &'a mut T::Ref; type IntoIter = IterMut<'a, T>; fn into_iter(self) -> IterMut<'a, T> { self.iter_mut() } } /// An iterator over the stack's contents. pub struct Iter<'a, T: Stackable> { stack: &'a StackRef, idxs: Range, } impl<'a, T: Stackable> Iterator for Iter<'a, T> { type Item = &'a T::Ref; fn next(&mut self) -> Option<&'a T::Ref> { unsafe { self.idxs .next() .map(|i| T::Ref::from_ptr(OPENSSL_sk_value(self.stack.as_stack(), i) as *mut _)) } } fn size_hint(&self) -> (usize, Option) { self.idxs.size_hint() } } impl<'a, T: Stackable> DoubleEndedIterator for Iter<'a, T> { fn next_back(&mut self) -> Option<&'a T::Ref> { unsafe { self.idxs .next_back() .map(|i| T::Ref::from_ptr(OPENSSL_sk_value(self.stack.as_stack(), i) as *mut _)) } } } impl<'a, T: Stackable> ExactSizeIterator for Iter<'a, T> {} /// A mutable iterator over the stack's contents. 
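// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// `X509` implements `Stackable`, so certificates can be collected in a
// `Stack<X509>` just like OpenSSL's own `STACK_OF(X509)`. The caller supplies
// the PEM bytes; the function name is illustrative.
fn stack_usage_sketch(cert_pem: &[u8]) -> Result<(), openssl::error::ErrorStack> {
    use openssl::stack::Stack;
    use openssl::x509::X509;

    let cert = X509::from_pem(cert_pem)?;

    let mut stack = Stack::new()?;
    assert!(stack.is_empty());
    stack.push(cert)?;
    assert_eq!(stack.len(), 1);

    // Borrowing iteration yields `&X509Ref` items ...
    for c in &stack {
        let _subject = c.subject_name();
    }

    // ... while `pop` hands back owned values from the top of the stack.
    while let Some(_owned) = stack.pop() {}
    assert!(stack.is_empty());
    Ok(())
}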
pub struct IterMut<'a, T: Stackable> { stack: &'a mut StackRef, idxs: Range, } impl<'a, T: Stackable> Iterator for IterMut<'a, T> { type Item = &'a mut T::Ref; fn next(&mut self) -> Option<&'a mut T::Ref> { unsafe { self.idxs .next() .map(|i| T::Ref::from_ptr_mut(OPENSSL_sk_value(self.stack.as_stack(), i) as *mut _)) } } fn size_hint(&self) -> (usize, Option) { self.idxs.size_hint() } } impl<'a, T: Stackable> DoubleEndedIterator for IterMut<'a, T> { fn next_back(&mut self) -> Option<&'a mut T::Ref> { unsafe { self.idxs .next_back() .map(|i| T::Ref::from_ptr_mut(OPENSSL_sk_value(self.stack.as_stack(), i) as *mut _)) } } } impl<'a, T: Stackable> ExactSizeIterator for IterMut<'a, T> {} vendor/openssl/src/fips.rs0000664000175000017500000000121414160055207016426 0ustar mwhudsonmwhudson//! FIPS 140-2 support. //! //! See [OpenSSL's documentation] for details. //! //! [OpenSSL's documentation]: https://www.openssl.org/docs/fips/UserGuide-2.0.pdf use crate::cvt; use crate::error::ErrorStack; /// Moves the library into or out of the FIPS 140-2 mode of operation. /// /// This corresponds to `FIPS_mode_set`. pub fn enable(enabled: bool) -> Result<(), ErrorStack> { ffi::init(); unsafe { cvt(ffi::FIPS_mode_set(enabled as _)).map(|_| ()) } } /// Determines if the library is running in the FIPS 140-2 mode of operation. /// /// This corresponds to `FIPS_mode`. pub fn enabled() -> bool { unsafe { ffi::FIPS_mode() != 0 } } vendor/openssl/src/memcmp.rs0000664000175000017500000000452214160055207016750 0ustar mwhudsonmwhudson//! Utilities to safely compare cryptographic values. //! //! Extra care must be taken when comparing values in //! cryptographic code. If done incorrectly, it can lead //! to a [timing attack](https://en.wikipedia.org/wiki/Timing_attack). //! By analyzing the time taken to execute parts of a cryptographic //! algorithm, and attacker can attempt to compromise the //! cryptosystem. //! //! The utilities in this module are designed to be resistant //! to this type of attack. //! //! # Examples //! //! To perform a constant-time comparison of two arrays of the same length but different //! values: //! //! ``` //! use openssl::memcmp::eq; //! //! // We want to compare `a` to `b` and `c`, without giving //! // away through timing analysis that `c` is more similar to `a` //! // than `b`. //! let a = [0, 0, 0]; //! let b = [1, 1, 1]; //! let c = [0, 0, 1]; //! //! // These statements will execute in the same amount of time. //! assert!(!eq(&a, &b)); //! assert!(!eq(&a, &c)); //! ``` use libc::size_t; /// Returns `true` iff `a` and `b` contain the same bytes. /// /// This operation takes an amount of time dependent on the length of the two /// arrays given, but is independent of the contents of a and b. /// /// # Panics /// /// This function will panic the current task if `a` and `b` do not have the same /// length. /// /// # Examples /// /// To perform a constant-time comparison of two arrays of the same length but different /// values: /// /// ``` /// use openssl::memcmp::eq; /// /// // We want to compare `a` to `b` and `c`, without giving /// // away through timing analysis that `c` is more similar to `a` /// // than `b`. /// let a = [0, 0, 0]; /// let b = [1, 1, 1]; /// let c = [0, 0, 1]; /// /// // These statements will execute in the same amount of time. 
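// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// The classic use of `memcmp::eq`: comparing a freshly computed HMAC-SHA256
// tag against a received one without leaking where they differ. `PKey::hmac`,
// the hard-coded key bytes and the function name are assumptions made for the
// example.
fn constant_time_mac_check_sketch(
    expected_tag: &[u8],
) -> Result<bool, openssl::error::ErrorStack> {
    use openssl::hash::MessageDigest;
    use openssl::memcmp;
    use openssl::pkey::PKey;
    use openssl::sign::Signer;

    let key = PKey::hmac(b"0123456789abcdef")?;
    let mut signer = Signer::new(MessageDigest::sha256(), &key)?;
    signer.update(b"the signed message")?;
    let tag = signer.sign_to_vec()?;

    // `memcmp::eq` panics on mismatched lengths, so check that first; the
    // length of a MAC is not secret.
    Ok(expected_tag.len() == tag.len() && memcmp::eq(&tag, expected_tag))
}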
/// assert!(!eq(&a, &b)); /// assert!(!eq(&a, &c)); /// ``` pub fn eq(a: &[u8], b: &[u8]) -> bool { assert!(a.len() == b.len()); let ret = unsafe { ffi::CRYPTO_memcmp( a.as_ptr() as *const _, b.as_ptr() as *const _, a.len() as size_t, ) }; ret == 0 } #[cfg(test)] mod tests { use super::eq; #[test] fn test_eq() { assert!(eq(&[], &[])); assert!(eq(&[1], &[1])); assert!(!eq(&[1, 2, 3], &[1, 2, 4])); } #[test] #[should_panic] fn test_diff_lens() { eq(&[], &[1]); } } vendor/openssl/src/ssl/0000775000175000017500000000000014160055207015722 5ustar mwhudsonmwhudsonvendor/openssl/src/ssl/error.rs0000664000175000017500000001317414160055207017427 0ustar mwhudsonmwhudsonuse libc::c_int; use std::error; use std::error::Error as StdError; use std::fmt; use std::io; use crate::error::ErrorStack; use crate::ssl::MidHandshakeSslStream; use crate::x509::X509VerifyResult; /// An error code returned from SSL functions. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct ErrorCode(c_int); impl ErrorCode { /// The SSL session has been closed. pub const ZERO_RETURN: ErrorCode = ErrorCode(ffi::SSL_ERROR_ZERO_RETURN); /// An attempt to read data from the underlying socket returned `WouldBlock`. /// /// Wait for read readiness and retry the operation. pub const WANT_READ: ErrorCode = ErrorCode(ffi::SSL_ERROR_WANT_READ); /// An attempt to write data to the underlying socket returned `WouldBlock`. /// /// Wait for write readiness and retry the operation. pub const WANT_WRITE: ErrorCode = ErrorCode(ffi::SSL_ERROR_WANT_WRITE); /// A non-recoverable IO error occurred. pub const SYSCALL: ErrorCode = ErrorCode(ffi::SSL_ERROR_SYSCALL); /// An error occurred in the SSL library. pub const SSL: ErrorCode = ErrorCode(ffi::SSL_ERROR_SSL); /// The client hello callback indicated that it needed to be retried. /// /// Requires OpenSSL 1.1.1 or newer. #[cfg(ossl111)] pub const WANT_CLIENT_HELLO_CB: ErrorCode = ErrorCode(ffi::SSL_ERROR_WANT_CLIENT_HELLO_CB); pub fn from_raw(raw: c_int) -> ErrorCode { ErrorCode(raw) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } #[derive(Debug)] pub(crate) enum InnerError { Io(io::Error), Ssl(ErrorStack), } /// An SSL error. 
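// --- Illustrative sketch (editor addition, not part of the vendored source) ---
// Classifying a nonblocking SSL error by its code: `WANT_READ`/`WANT_WRITE`
// mean "retry once the socket is ready again", everything else is treated as
// fatal here. The helper name is illustrative.
fn is_retriable_sketch(err: &openssl::ssl::Error) -> bool {
    use openssl::ssl::ErrorCode;

    match err.code() {
        ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => true,
        // `SYSCALL` with no inner `io::Error` reports an unexpected EOF, and
        // `SSL`/`ZERO_RETURN` are not retriable either.
        _ => false,
    }
}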
#[derive(Debug)] pub struct Error { pub(crate) code: ErrorCode, pub(crate) cause: Option, } impl Error { pub fn code(&self) -> ErrorCode { self.code } pub fn io_error(&self) -> Option<&io::Error> { match self.cause { Some(InnerError::Io(ref e)) => Some(e), _ => None, } } pub fn into_io_error(self) -> Result { match self.cause { Some(InnerError::Io(e)) => Ok(e), _ => Err(self), } } pub fn ssl_error(&self) -> Option<&ErrorStack> { match self.cause { Some(InnerError::Ssl(ref e)) => Some(e), _ => None, } } } impl From for Error { fn from(e: ErrorStack) -> Error { Error { code: ErrorCode::SSL, cause: Some(InnerError::Ssl(e)), } } } impl fmt::Display for Error { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { match self.code { ErrorCode::ZERO_RETURN => fmt.write_str("the SSL session has been shut down"), ErrorCode::WANT_READ => match self.io_error() { Some(_) => fmt.write_str("a nonblocking read call would have blocked"), None => fmt.write_str("the operation should be retried"), }, ErrorCode::WANT_WRITE => match self.io_error() { Some(_) => fmt.write_str("a nonblocking write call would have blocked"), None => fmt.write_str("the operation should be retried"), }, ErrorCode::SYSCALL => match self.io_error() { Some(err) => write!(fmt, "{}", err), None => fmt.write_str("unexpected EOF"), }, ErrorCode::SSL => match self.ssl_error() { Some(e) => write!(fmt, "{}", e), None => fmt.write_str("OpenSSL error"), }, ErrorCode(code) => write!(fmt, "unknown error code {}", code), } } } impl error::Error for Error { fn source(&self) -> Option<&(dyn error::Error + 'static)> { match self.cause { Some(InnerError::Io(ref e)) => Some(e), Some(InnerError::Ssl(ref e)) => Some(e), None => None, } } } /// An error or intermediate state after a TLS handshake attempt. // FIXME overhaul #[derive(Debug)] pub enum HandshakeError { /// Setup failed. SetupFailure(ErrorStack), /// The handshake failed. Failure(MidHandshakeSslStream), /// The handshake encountered a `WouldBlock` error midway through. /// /// This error will never be returned for blocking streams. WouldBlock(MidHandshakeSslStream), } impl StdError for HandshakeError { fn source(&self) -> Option<&(dyn StdError + 'static)> { match *self { HandshakeError::SetupFailure(ref e) => Some(e), HandshakeError::Failure(ref s) | HandshakeError::WouldBlock(ref s) => Some(s.error()), } } } impl fmt::Display for HandshakeError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match *self { HandshakeError::SetupFailure(ref e) => write!(f, "stream setup failed: {}", e)?, HandshakeError::Failure(ref s) => { write!(f, "the handshake failed: {}", s.error())?; let verify = s.ssl().verify_result(); if verify != X509VerifyResult::OK { write!(f, ": {}", verify)?; } } HandshakeError::WouldBlock(ref s) => { write!(f, "the handshake was interrupted: {}", s.error())?; let verify = s.ssl().verify_result(); if verify != X509VerifyResult::OK { write!(f, ": {}", verify)?; } } } Ok(()) } } impl From for HandshakeError { fn from(e: ErrorStack) -> HandshakeError { HandshakeError::SetupFailure(e) } } vendor/openssl/src/ssl/mod.rs0000664000175000017500000045717114160055207017066 0ustar mwhudsonmwhudson//! SSL/TLS support. //! //! `SslConnector` and `SslAcceptor` should be used in most cases - they handle //! configuration of the OpenSSL primitives for you. //! //! # Examples //! //! To connect as a client to a remote server: //! //! ```no_run //! use openssl::ssl::{SslMethod, SslConnector}; //! use std::io::{Read, Write}; //! use std::net::TcpStream; //! //! 
let connector = SslConnector::builder(SslMethod::tls()).unwrap().build(); //! //! let stream = TcpStream::connect("google.com:443").unwrap(); //! let mut stream = connector.connect("google.com", stream).unwrap(); //! //! stream.write_all(b"GET / HTTP/1.0\r\n\r\n").unwrap(); //! let mut res = vec![]; //! stream.read_to_end(&mut res).unwrap(); //! println!("{}", String::from_utf8_lossy(&res)); //! ``` //! //! To accept connections as a server from remote clients: //! //! ```no_run //! use openssl::ssl::{SslMethod, SslAcceptor, SslStream, SslFiletype}; //! use std::net::{TcpListener, TcpStream}; //! use std::sync::Arc; //! use std::thread; //! //! //! let mut acceptor = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap(); //! acceptor.set_private_key_file("key.pem", SslFiletype::PEM).unwrap(); //! acceptor.set_certificate_chain_file("certs.pem").unwrap(); //! acceptor.check_private_key().unwrap(); //! let acceptor = Arc::new(acceptor.build()); //! //! let listener = TcpListener::bind("0.0.0.0:8443").unwrap(); //! //! fn handle_client(stream: SslStream) { //! // ... //! } //! //! for stream in listener.incoming() { //! match stream { //! Ok(stream) => { //! let acceptor = acceptor.clone(); //! thread::spawn(move || { //! let stream = acceptor.accept(stream).unwrap(); //! handle_client(stream); //! }); //! } //! Err(e) => { /* connection failed */ } //! } //! } //! ``` use bitflags::bitflags; use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef, Opaque}; use libc::{c_char, c_int, c_long, c_uchar, c_uint, c_void}; use once_cell::sync::{Lazy, OnceCell}; use std::any::TypeId; use std::cmp; use std::collections::HashMap; use std::ffi::{CStr, CString}; use std::fmt; use std::io; use std::io::prelude::*; use std::marker::PhantomData; use std::mem::{self, ManuallyDrop}; use std::ops::{Deref, DerefMut}; use std::panic::resume_unwind; use std::path::Path; use std::ptr; use std::slice; use std::str; use std::sync::{Arc, Mutex}; use crate::dh::{Dh, DhRef}; #[cfg(all(ossl101, not(ossl110)))] use crate::ec::EcKey; use crate::ec::EcKeyRef; use crate::error::ErrorStack; use crate::ex_data::Index; #[cfg(ossl111)] use crate::hash::MessageDigest; #[cfg(ossl110)] use crate::nid::Nid; use crate::pkey::{HasPrivate, PKeyRef, Params, Private}; use crate::srtp::{SrtpProtectionProfile, SrtpProtectionProfileRef}; use crate::ssl::bio::BioMethod; use crate::ssl::callbacks::*; use crate::ssl::error::InnerError; use crate::stack::{Stack, StackRef}; use crate::util::{ForeignTypeExt, ForeignTypeRefExt}; use crate::x509::store::{X509Store, X509StoreBuilderRef, X509StoreRef}; #[cfg(any(ossl102, libressl261))] use crate::x509::verify::X509VerifyParamRef; use crate::x509::{X509Name, X509Ref, X509StoreContextRef, X509VerifyResult, X509}; use crate::{cvt, cvt_n, cvt_p, init}; pub use crate::ssl::connector::{ ConnectConfiguration, SslAcceptor, SslAcceptorBuilder, SslConnector, SslConnectorBuilder, }; pub use crate::ssl::error::{Error, ErrorCode, HandshakeError}; mod bio; mod callbacks; mod connector; mod error; #[cfg(test)] mod test; /// Returns the OpenSSL name of a cipher corresponding to an RFC-standard cipher name. /// /// If the cipher has no corresponding OpenSSL name, the string `(NONE)` is returned. /// /// Requires OpenSSL 1.1.1 or newer. 
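///
/// A small usage sketch; the RFC-style cipher name below is only illustrative and
/// assumes an OpenSSL 1.1.1+ build:
///
/// ```no_run
/// use openssl::ssl::cipher_name;
///
/// let name = cipher_name("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256");
/// // Typically prints the OpenSSL spelling, e.g. "ECDHE-RSA-AES128-GCM-SHA256".
/// println!("{}", name);
/// ```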
/// /// This corresponds to [`OPENSSL_cipher_name`] /// /// [`OPENSSL_cipher_name`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html #[cfg(ossl111)] pub fn cipher_name(std_name: &str) -> &'static str { unsafe { ffi::init(); let s = CString::new(std_name).unwrap(); let ptr = ffi::OPENSSL_cipher_name(s.as_ptr()); CStr::from_ptr(ptr).to_str().unwrap() } } cfg_if! { if #[cfg(ossl300)] { type SslOptionsRepr = u64; } else { type SslOptionsRepr = libc::c_ulong; } } bitflags! { /// Options controlling the behavior of an `SslContext`. pub struct SslOptions: SslOptionsRepr { /// Disables a countermeasure against an SSLv3/TLSv1.0 vulnerability affecting CBC ciphers. const DONT_INSERT_EMPTY_FRAGMENTS = ffi::SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS; /// A "reasonable default" set of options which enables compatibility flags. const ALL = ffi::SSL_OP_ALL; /// Do not query the MTU. /// /// Only affects DTLS connections. const NO_QUERY_MTU = ffi::SSL_OP_NO_QUERY_MTU; /// Enables Cookie Exchange as described in [RFC 4347 Section 4.2.1]. /// /// Only affects DTLS connections. /// /// [RFC 4347 Section 4.2.1]: https://tools.ietf.org/html/rfc4347#section-4.2.1 const COOKIE_EXCHANGE = ffi::SSL_OP_COOKIE_EXCHANGE; /// Disables the use of session tickets for session resumption. const NO_TICKET = ffi::SSL_OP_NO_TICKET; /// Always start a new session when performing a renegotiation on the server side. const NO_SESSION_RESUMPTION_ON_RENEGOTIATION = ffi::SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION; /// Disables the use of TLS compression. const NO_COMPRESSION = ffi::SSL_OP_NO_COMPRESSION; /// Allow legacy insecure renegotiation with servers or clients that do not support secure /// renegotiation. const ALLOW_UNSAFE_LEGACY_RENEGOTIATION = ffi::SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION; /// Creates a new key for each session when using ECDHE. /// /// This is always enabled in OpenSSL 1.1.0. const SINGLE_ECDH_USE = ffi::SSL_OP_SINGLE_ECDH_USE; /// Creates a new key for each session when using DHE. /// /// This is always enabled in OpenSSL 1.1.0. const SINGLE_DH_USE = ffi::SSL_OP_SINGLE_DH_USE; /// Use the server's preferences rather than the client's when selecting a cipher. /// /// This has no effect on the client side. const CIPHER_SERVER_PREFERENCE = ffi::SSL_OP_CIPHER_SERVER_PREFERENCE; /// Disables version rollback attach detection. const TLS_ROLLBACK_BUG = ffi::SSL_OP_TLS_ROLLBACK_BUG; /// Disables the use of SSLv2. const NO_SSLV2 = ffi::SSL_OP_NO_SSLv2; /// Disables the use of SSLv3. const NO_SSLV3 = ffi::SSL_OP_NO_SSLv3; /// Disables the use of TLSv1.0. const NO_TLSV1 = ffi::SSL_OP_NO_TLSv1; /// Disables the use of TLSv1.1. const NO_TLSV1_1 = ffi::SSL_OP_NO_TLSv1_1; /// Disables the use of TLSv1.2. const NO_TLSV1_2 = ffi::SSL_OP_NO_TLSv1_2; /// Disables the use of TLSv1.3. /// /// Requires OpenSSL 1.1.1 or newer. #[cfg(ossl111)] const NO_TLSV1_3 = ffi::SSL_OP_NO_TLSv1_3; /// Disables the use of DTLSv1.0 /// /// Requires OpenSSL 1.0.2 or LibreSSL 3.3.2 or newer. #[cfg(any(ossl102, ossl110, libressl332))] const NO_DTLSV1 = ffi::SSL_OP_NO_DTLSv1; /// Disables the use of DTLSv1.2. /// /// Requires OpenSSL 1.0.2 or LibreSSL 3.3.2 or newer. #[cfg(any(ossl102, ossl110, libressl332))] const NO_DTLSV1_2 = ffi::SSL_OP_NO_DTLSv1_2; /// Disables the use of all (D)TLS protocol versions. /// /// This can be used as a mask when whitelisting protocol versions. /// /// Requires OpenSSL 1.0.2 or newer. 
/// /// # Examples /// /// Only support TLSv1.2: /// /// ```rust /// use openssl::ssl::SslOptions; /// /// let options = SslOptions::NO_SSL_MASK & !SslOptions::NO_TLSV1_2; /// ``` #[cfg(any(ossl102, ossl110))] const NO_SSL_MASK = ffi::SSL_OP_NO_SSL_MASK; /// Disallow all renegotiation in TLSv1.2 and earlier. /// /// Requires OpenSSL 1.1.0h or newer. #[cfg(ossl110h)] const NO_RENEGOTIATION = ffi::SSL_OP_NO_RENEGOTIATION; /// Enable TLSv1.3 Compatibility mode. /// /// Requires OpenSSL 1.1.1 or newer. This is on by default in 1.1.1, but a future version /// may have this disabled by default. #[cfg(ossl111)] const ENABLE_MIDDLEBOX_COMPAT = ffi::SSL_OP_ENABLE_MIDDLEBOX_COMPAT; } } bitflags! { /// Options controlling the behavior of an `SslContext`. pub struct SslMode: c_long { /// Enables "short writes". /// /// Normally, a write in OpenSSL will always write out all of the requested data, even if it /// requires more than one TLS record or write to the underlying stream. This option will /// cause a write to return after writing a single TLS record instead. const ENABLE_PARTIAL_WRITE = ffi::SSL_MODE_ENABLE_PARTIAL_WRITE; /// Disables a check that the data buffer has not moved between calls when operating in a /// nonblocking context. const ACCEPT_MOVING_WRITE_BUFFER = ffi::SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER; /// Enables automatic retries after TLS session events such as renegotiations or heartbeats. /// /// By default, OpenSSL will return a `WantRead` error after a renegotiation or heartbeat. /// This option will cause OpenSSL to automatically continue processing the requested /// operation instead. /// /// Note that `SslStream::read` and `SslStream::write` will automatically retry regardless /// of the state of this option. It only affects `SslStream::ssl_read` and /// `SslStream::ssl_write`. const AUTO_RETRY = ffi::SSL_MODE_AUTO_RETRY; /// Disables automatic chain building when verifying a peer's certificate. /// /// TLS peers are responsible for sending the entire certificate chain from the leaf to a /// trusted root, but some will incorrectly not do so. OpenSSL will try to build the chain /// out of certificates it knows of, and this option will disable that behavior. const NO_AUTO_CHAIN = ffi::SSL_MODE_NO_AUTO_CHAIN; /// Release memory buffers when the session does not need them. /// /// This saves ~34 KiB of memory for idle streams. const RELEASE_BUFFERS = ffi::SSL_MODE_RELEASE_BUFFERS; /// Sends the fake `TLS_FALLBACK_SCSV` cipher suite in the ClientHello message of a /// handshake. /// /// This should only be enabled if a client has failed to connect to a server which /// attempted to downgrade the protocol version of the session. /// /// Do not use this unless you know what you're doing! #[cfg(not(libressl))] const SEND_FALLBACK_SCSV = ffi::SSL_MODE_SEND_FALLBACK_SCSV; } } /// A type specifying the kind of protocol an `SslContext` will speak. #[derive(Copy, Clone)] pub struct SslMethod(*const ffi::SSL_METHOD); impl SslMethod { /// Support all versions of the TLS protocol. /// /// This corresponds to `TLS_method` on OpenSSL 1.1.0 and `SSLv23_method` /// on OpenSSL 1.0.x. pub fn tls() -> SslMethod { unsafe { SslMethod(TLS_method()) } } /// Support all versions of the DTLS protocol. /// /// This corresponds to `DTLS_method` on OpenSSL 1.1.0 and `DTLSv1_method` /// on OpenSSL 1.0.x. pub fn dtls() -> SslMethod { unsafe { SslMethod(DTLS_method()) } } /// Support all versions of the TLS protocol, explicitly as a client. 
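///
/// A minimal sketch of building a client-only context with this method:
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod};
///
/// let ctx = SslContext::builder(SslMethod::tls_client()).unwrap().build();
/// ```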
/// /// This corresponds to `TLS_client_method` on OpenSSL 1.1.0 and /// `SSLv23_client_method` on OpenSSL 1.0.x. pub fn tls_client() -> SslMethod { unsafe { SslMethod(TLS_client_method()) } } /// Support all versions of the TLS protocol, explicitly as a server. /// /// This corresponds to `TLS_server_method` on OpenSSL 1.1.0 and /// `SSLv23_server_method` on OpenSSL 1.0.x. pub fn tls_server() -> SslMethod { unsafe { SslMethod(TLS_server_method()) } } /// Constructs an `SslMethod` from a pointer to the underlying OpenSSL value. /// /// # Safety /// /// The caller must ensure the pointer is valid. pub unsafe fn from_ptr(ptr: *const ffi::SSL_METHOD) -> SslMethod { SslMethod(ptr) } /// Returns a pointer to the underlying OpenSSL value. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_ptr(&self) -> *const ffi::SSL_METHOD { self.0 } } unsafe impl Sync for SslMethod {} unsafe impl Send for SslMethod {} bitflags! { /// Options controlling the behavior of certificate verification. pub struct SslVerifyMode: i32 { /// Verifies that the peer's certificate is trusted. /// /// On the server side, this will cause OpenSSL to request a certificate from the client. const PEER = ffi::SSL_VERIFY_PEER; /// Disables verification of the peer's certificate. /// /// On the server side, this will cause OpenSSL to not request a certificate from the /// client. On the client side, the certificate will be checked for validity, but the /// negotiation will continue regardless of the result of that check. const NONE = ffi::SSL_VERIFY_NONE; /// On the server side, abort the handshake if the client did not send a certificate. /// /// This should be paired with `SSL_VERIFY_PEER`. It has no effect on the client side. const FAIL_IF_NO_PEER_CERT = ffi::SSL_VERIFY_FAIL_IF_NO_PEER_CERT; } } bitflags! { /// Options controlling the behavior of session caching. pub struct SslSessionCacheMode: c_long { /// No session caching for the client or server takes place. const OFF = ffi::SSL_SESS_CACHE_OFF; /// Enable session caching on the client side. /// /// OpenSSL has no way of identifying the proper session to reuse automatically, so the /// application is responsible for setting it explicitly via [`SslRef::set_session`]. /// /// [`SslRef::set_session`]: struct.SslRef.html#method.set_session const CLIENT = ffi::SSL_SESS_CACHE_CLIENT; /// Enable session caching on the server side. /// /// This is the default mode. const SERVER = ffi::SSL_SESS_CACHE_SERVER; /// Enable session caching on both the client and server side. const BOTH = ffi::SSL_SESS_CACHE_BOTH; /// Disable automatic removal of expired sessions from the session cache. const NO_AUTO_CLEAR = ffi::SSL_SESS_CACHE_NO_AUTO_CLEAR; /// Disable use of the internal session cache for session lookups. const NO_INTERNAL_LOOKUP = ffi::SSL_SESS_CACHE_NO_INTERNAL_LOOKUP; /// Disable use of the internal session cache for session storage. const NO_INTERNAL_STORE = ffi::SSL_SESS_CACHE_NO_INTERNAL_STORE; /// Disable use of the internal session cache for storage and lookup. const NO_INTERNAL = ffi::SSL_SESS_CACHE_NO_INTERNAL; } } #[cfg(ossl111)] bitflags! { /// Which messages and under which conditions an extension should be added or expected. 
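///
/// Flags are combined with the usual bit operators; a small sketch (the particular
/// combination here is only illustrative):
///
/// ```no_run
/// use openssl::ssl::ExtensionContext;
///
/// let ctx = ExtensionContext::CLIENT_HELLO | ExtensionContext::TLS1_3_ENCRYPTED_EXTENSIONS;
/// assert!(ctx.contains(ExtensionContext::CLIENT_HELLO));
/// ```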
pub struct ExtensionContext: c_uint { /// This extension is only allowed in TLS const TLS_ONLY = ffi::SSL_EXT_TLS_ONLY; /// This extension is only allowed in DTLS const DTLS_ONLY = ffi::SSL_EXT_DTLS_ONLY; /// Some extensions may be allowed in DTLS but we don't implement them for it const TLS_IMPLEMENTATION_ONLY = ffi::SSL_EXT_TLS_IMPLEMENTATION_ONLY; /// Most extensions are not defined for SSLv3 but EXT_TYPE_renegotiate is const SSL3_ALLOWED = ffi::SSL_EXT_SSL3_ALLOWED; /// Extension is only defined for TLS1.2 and below const TLS1_2_AND_BELOW_ONLY = ffi::SSL_EXT_TLS1_2_AND_BELOW_ONLY; /// Extension is only defined for TLS1.3 and above const TLS1_3_ONLY = ffi::SSL_EXT_TLS1_3_ONLY; /// Ignore this extension during parsing if we are resuming const IGNORE_ON_RESUMPTION = ffi::SSL_EXT_IGNORE_ON_RESUMPTION; const CLIENT_HELLO = ffi::SSL_EXT_CLIENT_HELLO; /// Really means TLS1.2 or below const TLS1_2_SERVER_HELLO = ffi::SSL_EXT_TLS1_2_SERVER_HELLO; const TLS1_3_SERVER_HELLO = ffi::SSL_EXT_TLS1_3_SERVER_HELLO; const TLS1_3_ENCRYPTED_EXTENSIONS = ffi::SSL_EXT_TLS1_3_ENCRYPTED_EXTENSIONS; const TLS1_3_HELLO_RETRY_REQUEST = ffi::SSL_EXT_TLS1_3_HELLO_RETRY_REQUEST; const TLS1_3_CERTIFICATE = ffi::SSL_EXT_TLS1_3_CERTIFICATE; const TLS1_3_NEW_SESSION_TICKET = ffi::SSL_EXT_TLS1_3_NEW_SESSION_TICKET; const TLS1_3_CERTIFICATE_REQUEST = ffi::SSL_EXT_TLS1_3_CERTIFICATE_REQUEST; } } /// An identifier of the format of a certificate or key file. #[derive(Copy, Clone)] pub struct SslFiletype(c_int); impl SslFiletype { /// The PEM format. /// /// This corresponds to `SSL_FILETYPE_PEM`. pub const PEM: SslFiletype = SslFiletype(ffi::SSL_FILETYPE_PEM); /// The ASN1 format. /// /// This corresponds to `SSL_FILETYPE_ASN1`. pub const ASN1: SslFiletype = SslFiletype(ffi::SSL_FILETYPE_ASN1); /// Constructs an `SslFiletype` from a raw OpenSSL value. pub fn from_raw(raw: c_int) -> SslFiletype { SslFiletype(raw) } /// Returns the raw OpenSSL value represented by this type. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } /// An identifier of a certificate status type. #[derive(Copy, Clone)] pub struct StatusType(c_int); impl StatusType { /// An OSCP status. pub const OCSP: StatusType = StatusType(ffi::TLSEXT_STATUSTYPE_ocsp); /// Constructs a `StatusType` from a raw OpenSSL value. pub fn from_raw(raw: c_int) -> StatusType { StatusType(raw) } /// Returns the raw OpenSSL value represented by this type. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } /// An identifier of a session name type. #[derive(Copy, Clone)] pub struct NameType(c_int); impl NameType { /// A host name. pub const HOST_NAME: NameType = NameType(ffi::TLSEXT_NAMETYPE_host_name); /// Constructs a `StatusType` from a raw OpenSSL value. pub fn from_raw(raw: c_int) -> StatusType { StatusType(raw) } /// Returns the raw OpenSSL value represented by this type. 
#[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } static INDEXES: Lazy>> = Lazy::new(|| Mutex::new(HashMap::new())); static SSL_INDEXES: Lazy>> = Lazy::new(|| Mutex::new(HashMap::new())); static SESSION_CTX_INDEX: OnceCell> = OnceCell::new(); fn try_get_session_ctx_index() -> Result<&'static Index, ErrorStack> { SESSION_CTX_INDEX.get_or_try_init(Ssl::new_ex_index) } unsafe extern "C" fn free_data_box( _parent: *mut c_void, ptr: *mut c_void, _ad: *mut ffi::CRYPTO_EX_DATA, _idx: c_int, _argl: c_long, _argp: *mut c_void, ) { if !ptr.is_null() { Box::::from_raw(ptr as *mut T); } } /// An error returned from the SNI callback. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct SniError(c_int); impl SniError { /// Abort the handshake with a fatal alert. pub const ALERT_FATAL: SniError = SniError(ffi::SSL_TLSEXT_ERR_ALERT_FATAL); /// Send a warning alert to the client and continue the handshake. pub const ALERT_WARNING: SniError = SniError(ffi::SSL_TLSEXT_ERR_ALERT_WARNING); pub const NOACK: SniError = SniError(ffi::SSL_TLSEXT_ERR_NOACK); } /// An SSL/TLS alert. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct SslAlert(c_int); impl SslAlert { /// Alert 112 - `unrecognized_name`. pub const UNRECOGNIZED_NAME: SslAlert = SslAlert(ffi::SSL_AD_UNRECOGNIZED_NAME); pub const ILLEGAL_PARAMETER: SslAlert = SslAlert(ffi::SSL_AD_ILLEGAL_PARAMETER); pub const DECODE_ERROR: SslAlert = SslAlert(ffi::SSL_AD_DECODE_ERROR); } /// An error returned from an ALPN selection callback. /// /// Requires OpenSSL 1.0.2 or LibreSSL 2.6.1 or newer. #[cfg(any(ossl102, libressl261))] #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct AlpnError(c_int); #[cfg(any(ossl102, libressl261))] impl AlpnError { /// Terminate the handshake with a fatal alert. /// /// Requires OpenSSL 1.1.0 or newer. #[cfg(any(ossl110))] pub const ALERT_FATAL: AlpnError = AlpnError(ffi::SSL_TLSEXT_ERR_ALERT_FATAL); /// Do not select a protocol, but continue the handshake. pub const NOACK: AlpnError = AlpnError(ffi::SSL_TLSEXT_ERR_NOACK); } /// The result of a client hello callback. /// /// Requires OpenSSL 1.1.1 or newer. #[cfg(ossl111)] #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct ClientHelloResponse(c_int); #[cfg(ossl111)] impl ClientHelloResponse { /// Continue the handshake. pub const SUCCESS: ClientHelloResponse = ClientHelloResponse(ffi::SSL_CLIENT_HELLO_SUCCESS); /// Return from the handshake with an `ErrorCode::WANT_CLIENT_HELLO_CB` error. pub const RETRY: ClientHelloResponse = ClientHelloResponse(ffi::SSL_CLIENT_HELLO_RETRY); } /// An SSL/TLS protocol version. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct SslVersion(c_int); impl SslVersion { /// SSLv3 pub const SSL3: SslVersion = SslVersion(ffi::SSL3_VERSION); /// TLSv1.0 pub const TLS1: SslVersion = SslVersion(ffi::TLS1_VERSION); /// TLSv1.1 pub const TLS1_1: SslVersion = SslVersion(ffi::TLS1_1_VERSION); /// TLSv1.2 pub const TLS1_2: SslVersion = SslVersion(ffi::TLS1_2_VERSION); /// TLSv1.3 /// /// Requires OpenSSL 1.1.1 or newer. #[cfg(ossl111)] pub const TLS1_3: SslVersion = SslVersion(ffi::TLS1_3_VERSION); } /// A standard implementation of protocol selection for Application Layer Protocol Negotiation /// (ALPN). /// /// `server` should contain the server's list of supported protocols and `client` the client's. They /// must both be in the ALPN wire format. See the documentation for /// [`SslContextBuilder::set_alpn_protos`] for details. 
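///
/// For example, a sketch with two hypothetical protocol lists:
///
/// ```
/// use openssl::ssl::select_next_proto;
///
/// // Each protocol name is prefixed by its length.
/// let server = b"\x02h2\x08http/1.1";
/// let client = b"\x08http/1.1";
/// assert_eq!(select_next_proto(server, client), Some(&b"http/1.1"[..]));
/// ```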
/// /// It will select the first protocol supported by the server which is also supported by the client. /// /// This corresponds to [`SSL_select_next_proto`]. /// /// [`SslContextBuilder::set_alpn_protos`]: struct.SslContextBuilder.html#method.set_alpn_protos /// [`SSL_select_next_proto`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_alpn_protos.html pub fn select_next_proto<'a>(server: &[u8], client: &'a [u8]) -> Option<&'a [u8]> { unsafe { let mut out = ptr::null_mut(); let mut outlen = 0; let r = ffi::SSL_select_next_proto( &mut out, &mut outlen, server.as_ptr(), server.len() as c_uint, client.as_ptr(), client.len() as c_uint, ); if r == ffi::OPENSSL_NPN_NEGOTIATED { Some(slice::from_raw_parts(out as *const u8, outlen as usize)) } else { None } } } /// A builder for `SslContext`s. pub struct SslContextBuilder(SslContext); impl SslContextBuilder { /// Creates a new `SslContextBuilder`. /// /// This corresponds to [`SSL_CTX_new`]. /// /// [`SSL_CTX_new`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_new.html pub fn new(method: SslMethod) -> Result { unsafe { init(); let ctx = cvt_p(ffi::SSL_CTX_new(method.as_ptr()))?; Ok(SslContextBuilder::from_ptr(ctx)) } } /// Creates an `SslContextBuilder` from a pointer to a raw OpenSSL value. /// /// # Safety /// /// The caller must ensure that the pointer is valid and uniquely owned by the builder. pub unsafe fn from_ptr(ctx: *mut ffi::SSL_CTX) -> SslContextBuilder { SslContextBuilder(SslContext::from_ptr(ctx)) } /// Returns a pointer to the raw OpenSSL value. pub fn as_ptr(&self) -> *mut ffi::SSL_CTX { self.0.as_ptr() } /// Configures the certificate verification method for new connections. /// /// This corresponds to [`SSL_CTX_set_verify`]. /// /// [`SSL_CTX_set_verify`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_verify.html pub fn set_verify(&mut self, mode: SslVerifyMode) { unsafe { ffi::SSL_CTX_set_verify(self.as_ptr(), mode.bits as c_int, None); } } /// Configures the certificate verification method for new connections and /// registers a verification callback. /// /// The callback is passed a boolean indicating if OpenSSL's internal verification succeeded as /// well as a reference to the `X509StoreContext` which can be used to examine the certificate /// chain. It should return a boolean indicating if verification succeeded. /// /// This corresponds to [`SSL_CTX_set_verify`]. /// /// [`SSL_CTX_set_verify`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_verify.html pub fn set_verify_callback(&mut self, mode: SslVerifyMode, verify: F) where F: Fn(bool, &mut X509StoreContextRef) -> bool + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), verify); ffi::SSL_CTX_set_verify(self.as_ptr(), mode.bits as c_int, Some(raw_verify::)); } } /// Configures the server name indication (SNI) callback for new connections. /// /// SNI is used to allow a single server to handle requests for multiple domains, each of which /// has its own certificate chain and configuration. /// /// Obtain the server name with the `servername` method and then set the corresponding context /// with `set_ssl_context` /// /// This corresponds to [`SSL_CTX_set_tlsext_servername_callback`]. /// /// [`SSL_CTX_set_tlsext_servername_callback`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_tlsext_servername_callback.html // FIXME tlsext prefix? 
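/// A sketch of a typical SNI dispatch callback. The host name and the second,
/// per-domain context are placeholders assumed to be configured elsewhere, and the
/// `servername`/`set_ssl_context` helpers used below come from `SslRef`:
///
/// ```no_run
/// use openssl::ssl::{NameType, SniError, SslContext, SslMethod};
///
/// let other_ctx = SslContext::builder(SslMethod::tls()).unwrap().build();
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_servername_callback(move |ssl, _alert| {
///     if ssl.servername(NameType::HOST_NAME) == Some("example.com") {
///         ssl.set_ssl_context(&other_ctx).map_err(|_| SniError::ALERT_FATAL)?;
///     }
///     Ok(())
/// });
/// ```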
pub fn set_servername_callback(&mut self, callback: F) where F: Fn(&mut SslRef, &mut SslAlert) -> Result<(), SniError> + 'static + Sync + Send, { unsafe { // The SNI callback is somewhat unique in that the callback associated with the original // context associated with an SSL can be used even if the SSL's context has been swapped // out. When that happens, we wouldn't be able to look up the callback's state in the // context's ex data. Instead, pass the pointer directly as the servername arg. It's // still stored in ex data to manage the lifetime. let arg = self.set_ex_data_inner(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_tlsext_servername_arg(self.as_ptr(), arg); let f: extern "C" fn(_, _, _) -> _ = raw_sni::; let f: extern "C" fn() = mem::transmute(f); ffi::SSL_CTX_set_tlsext_servername_callback(self.as_ptr(), Some(f)); } } /// Sets the certificate verification depth. /// /// If the peer's certificate chain is longer than this value, verification will fail. /// /// This corresponds to [`SSL_CTX_set_verify_depth`]. /// /// [`SSL_CTX_set_verify_depth`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_verify_depth.html pub fn set_verify_depth(&mut self, depth: u32) { unsafe { ffi::SSL_CTX_set_verify_depth(self.as_ptr(), depth as c_int); } } /// Sets a custom certificate store for verifying peer certificates. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_CTX_set0_verify_cert_store`]. /// /// [`SSL_CTX_set0_verify_cert_store`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set0_verify_cert_store.html #[cfg(any(ossl102, ossl110))] pub fn set_verify_cert_store(&mut self, cert_store: X509Store) -> Result<(), ErrorStack> { unsafe { let ptr = cert_store.as_ptr(); cvt(ffi::SSL_CTX_set0_verify_cert_store(self.as_ptr(), ptr) as c_int)?; mem::forget(cert_store); Ok(()) } } /// Replaces the context's certificate store. /// /// This corresponds to [`SSL_CTX_set_cert_store`]. /// /// [`SSL_CTX_set_cert_store`]: https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_set_cert_store.html pub fn set_cert_store(&mut self, cert_store: X509Store) { unsafe { ffi::SSL_CTX_set_cert_store(self.as_ptr(), cert_store.as_ptr()); mem::forget(cert_store); } } /// Controls read ahead behavior. /// /// If enabled, OpenSSL will read as much data as is available from the underlying stream, /// instead of a single record at a time. /// /// It has no effect when used with DTLS. /// /// This corresponds to [`SSL_CTX_set_read_ahead`]. /// /// [`SSL_CTX_set_read_ahead`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_read_ahead.html pub fn set_read_ahead(&mut self, read_ahead: bool) { unsafe { ffi::SSL_CTX_set_read_ahead(self.as_ptr(), read_ahead as c_long); } } /// Sets the mode used by the context, returning the previous mode. /// /// This corresponds to [`SSL_CTX_set_mode`]. /// /// [`SSL_CTX_set_mode`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_mode.html pub fn set_mode(&mut self, mode: SslMode) -> SslMode { unsafe { let bits = ffi::SSL_CTX_set_mode(self.as_ptr(), mode.bits()); SslMode { bits } } } /// Sets the parameters to be used during ephemeral Diffie-Hellman key exchange. /// /// This corresponds to [`SSL_CTX_set_tmp_dh`]. 
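///
/// A minimal sketch using one of the crate's built-in parameter sets
/// (`Dh::get_2048_256` is assumed to be available, i.e. OpenSSL 1.0.2+):
///
/// ```no_run
/// use openssl::dh::Dh;
/// use openssl::ssl::{SslContext, SslMethod};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// let dh = Dh::get_2048_256().unwrap();
/// builder.set_tmp_dh(&dh).unwrap();
/// ```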
/// /// [`SSL_CTX_set_tmp_dh`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_tmp_dh.html pub fn set_tmp_dh(&mut self, dh: &DhRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_tmp_dh(self.as_ptr(), dh.as_ptr()) as c_int).map(|_| ()) } } /// Sets the callback which will generate parameters to be used during ephemeral Diffie-Hellman /// key exchange. /// /// The callback is provided with a reference to the `Ssl` for the session, as well as a boolean /// indicating if the selected cipher is export-grade, and the key length. The export and key /// length options are archaic and should be ignored in almost all cases. /// /// This corresponds to [`SSL_CTX_set_tmp_dh_callback`]. /// /// [`SSL_CTX_set_tmp_dh_callback`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_tmp_dh.html pub fn set_tmp_dh_callback(&mut self, callback: F) where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_tmp_dh_callback(self.as_ptr(), raw_tmp_dh::); } } /// Sets the parameters to be used during ephemeral elliptic curve Diffie-Hellman key exchange. /// /// This corresponds to `SSL_CTX_set_tmp_ecdh`. pub fn set_tmp_ecdh(&mut self, key: &EcKeyRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_tmp_ecdh(self.as_ptr(), key.as_ptr()) as c_int).map(|_| ()) } } /// Sets the callback which will generate parameters to be used during ephemeral elliptic curve /// Diffie-Hellman key exchange. /// /// The callback is provided with a reference to the `Ssl` for the session, as well as a boolean /// indicating if the selected cipher is export-grade, and the key length. The export and key /// length options are archaic and should be ignored in almost all cases. /// /// Requires OpenSSL 1.0.1 or 1.0.2. /// /// This corresponds to `SSL_CTX_set_tmp_ecdh_callback`. #[cfg(all(ossl101, not(ossl110)))] pub fn set_tmp_ecdh_callback(&mut self, callback: F) where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_tmp_ecdh_callback(self.as_ptr(), raw_tmp_ecdh::); } } /// Use the default locations of trusted certificates for verification. /// /// These locations are read from the `SSL_CERT_FILE` and `SSL_CERT_DIR` environment variables /// if present, or defaults specified at OpenSSL build time otherwise. /// /// This corresponds to [`SSL_CTX_set_default_verify_paths`]. /// /// [`SSL_CTX_set_default_verify_paths`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_default_verify_paths.html pub fn set_default_verify_paths(&mut self) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_default_verify_paths(self.as_ptr())).map(|_| ()) } } /// Loads trusted root certificates from a file. /// /// The file should contain a sequence of PEM-formatted CA certificates. /// /// This corresponds to [`SSL_CTX_load_verify_locations`]. /// /// [`SSL_CTX_load_verify_locations`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_load_verify_locations.html pub fn set_ca_file>(&mut self, file: P) -> Result<(), ErrorStack> { let file = CString::new(file.as_ref().as_os_str().to_str().unwrap()).unwrap(); unsafe { cvt(ffi::SSL_CTX_load_verify_locations( self.as_ptr(), file.as_ptr() as *const _, ptr::null(), )) .map(|_| ()) } } /// Sets the list of CA names sent to the client. 
/// /// The CA certificates must still be added to the trust root - they are not automatically set /// as trusted by this method. /// /// This corresponds to [`SSL_CTX_set_client_CA_list`]. /// /// [`SSL_CTX_set_client_CA_list`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_client_CA_list.html pub fn set_client_ca_list(&mut self, list: Stack) { unsafe { ffi::SSL_CTX_set_client_CA_list(self.as_ptr(), list.as_ptr()); mem::forget(list); } } /// Add the provided CA certificate to the list sent by the server to the client when /// requesting client-side TLS authentication. /// /// This corresponds to [`SSL_CTX_add_client_CA`]. /// /// [`SSL_CTX_add_client_CA`]: https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_set_client_CA_list.html #[cfg(not(libressl))] pub fn add_client_ca(&mut self, cacert: &X509Ref) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_add_client_CA(self.as_ptr(), cacert.as_ptr())).map(|_| ()) } } /// Set the context identifier for sessions. /// /// This value identifies the server's session cache to clients, telling them when they're /// able to reuse sessions. It should be set to a unique value per server, unless multiple /// servers share a session cache. /// /// This value should be set when using client certificates, or each request will fail its /// handshake and need to be restarted. /// /// This corresponds to [`SSL_CTX_set_session_id_context`]. /// /// [`SSL_CTX_set_session_id_context`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_session_id_context.html pub fn set_session_id_context(&mut self, sid_ctx: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(sid_ctx.len() <= c_uint::max_value() as usize); cvt(ffi::SSL_CTX_set_session_id_context( self.as_ptr(), sid_ctx.as_ptr(), sid_ctx.len() as c_uint, )) .map(|_| ()) } } /// Loads a leaf certificate from a file. /// /// Only a single certificate will be loaded - use `add_extra_chain_cert` to add the remainder /// of the certificate chain, or `set_certificate_chain_file` to load the entire chain from a /// single file. /// /// This corresponds to [`SSL_CTX_use_certificate_file`]. /// /// [`SSL_CTX_use_certificate_file`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_use_certificate_file.html pub fn set_certificate_file>( &mut self, file: P, file_type: SslFiletype, ) -> Result<(), ErrorStack> { let file = CString::new(file.as_ref().as_os_str().to_str().unwrap()).unwrap(); unsafe { cvt(ffi::SSL_CTX_use_certificate_file( self.as_ptr(), file.as_ptr() as *const _, file_type.as_raw(), )) .map(|_| ()) } } /// Loads a certificate chain from a file. /// /// The file should contain a sequence of PEM-formatted certificates, the first being the leaf /// certificate, and the remainder forming the chain of certificates up to and including the /// trusted root certificate. /// /// This corresponds to [`SSL_CTX_use_certificate_chain_file`]. /// /// [`SSL_CTX_use_certificate_chain_file`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_use_certificate_file.html pub fn set_certificate_chain_file>( &mut self, file: P, ) -> Result<(), ErrorStack> { let file = CString::new(file.as_ref().as_os_str().to_str().unwrap()).unwrap(); unsafe { cvt(ffi::SSL_CTX_use_certificate_chain_file( self.as_ptr(), file.as_ptr() as *const _, )) .map(|_| ()) } } /// Sets the leaf certificate. /// /// Use `add_extra_chain_cert` to add the remainder of the certificate chain. /// /// This corresponds to [`SSL_CTX_use_certificate`]. 
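///
/// A sketch of loading a PEM-encoded certificate and installing it; `cert.pem` is a
/// placeholder path and `X509::from_pem` comes from the crate's `x509` module:
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod};
/// use openssl::x509::X509;
///
/// let pem = std::fs::read("cert.pem").unwrap();
/// let cert = X509::from_pem(&pem).unwrap();
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_certificate(&cert).unwrap();
/// ```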
/// /// [`SSL_CTX_use_certificate`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_use_certificate_file.html pub fn set_certificate(&mut self, cert: &X509Ref) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_use_certificate(self.as_ptr(), cert.as_ptr())).map(|_| ()) } } /// Appends a certificate to the certificate chain. /// /// This chain should contain all certificates necessary to go from the certificate specified by /// `set_certificate` to a trusted root. /// /// This corresponds to [`SSL_CTX_add_extra_chain_cert`]. /// /// [`SSL_CTX_add_extra_chain_cert`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_add_extra_chain_cert.html pub fn add_extra_chain_cert(&mut self, cert: X509) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_add_extra_chain_cert(self.as_ptr(), cert.as_ptr()) as c_int)?; mem::forget(cert); Ok(()) } } /// Loads the private key from a file. /// /// This corresponds to [`SSL_CTX_use_PrivateKey_file`]. /// /// [`SSL_CTX_use_PrivateKey_file`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_use_PrivateKey_file.html pub fn set_private_key_file>( &mut self, file: P, file_type: SslFiletype, ) -> Result<(), ErrorStack> { let file = CString::new(file.as_ref().as_os_str().to_str().unwrap()).unwrap(); unsafe { cvt(ffi::SSL_CTX_use_PrivateKey_file( self.as_ptr(), file.as_ptr() as *const _, file_type.as_raw(), )) .map(|_| ()) } } /// Sets the private key. /// /// This corresponds to [`SSL_CTX_use_PrivateKey`]. /// /// [`SSL_CTX_use_PrivateKey`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_use_PrivateKey_file.html pub fn set_private_key(&mut self, key: &PKeyRef) -> Result<(), ErrorStack> where T: HasPrivate, { unsafe { cvt(ffi::SSL_CTX_use_PrivateKey(self.as_ptr(), key.as_ptr())).map(|_| ()) } } /// Sets the list of supported ciphers for protocols before TLSv1.3. /// /// The `set_ciphersuites` method controls the cipher suites for TLSv1.3. /// /// See [`ciphers`] for details on the format. /// /// This corresponds to [`SSL_CTX_set_cipher_list`]. /// /// [`ciphers`]: https://www.openssl.org/docs/man1.1.0/apps/ciphers.html /// [`SSL_CTX_set_cipher_list`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_cipher_list.html pub fn set_cipher_list(&mut self, cipher_list: &str) -> Result<(), ErrorStack> { let cipher_list = CString::new(cipher_list).unwrap(); unsafe { cvt(ffi::SSL_CTX_set_cipher_list( self.as_ptr(), cipher_list.as_ptr() as *const _, )) .map(|_| ()) } } /// Sets the list of supported ciphers for the TLSv1.3 protocol. /// /// The `set_cipher_list` method controls the cipher suites for protocols before TLSv1.3. /// /// The format consists of TLSv1.3 ciphersuite names separated by `:` characters in order of /// preference. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_set_ciphersuites`]. /// /// [`SSL_CTX_set_ciphersuites`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_ciphersuites.html #[cfg(ossl111)] pub fn set_ciphersuites(&mut self, cipher_list: &str) -> Result<(), ErrorStack> { let cipher_list = CString::new(cipher_list).unwrap(); unsafe { cvt(ffi::SSL_CTX_set_ciphersuites( self.as_ptr(), cipher_list.as_ptr() as *const _, )) .map(|_| ()) } } /// Enables ECDHE key exchange with an automatically chosen curve list. /// /// Requires OpenSSL 1.0.2. /// /// This corresponds to [`SSL_CTX_set_ecdh_auto`]. 
/// /// [`SSL_CTX_set_ecdh_auto`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_ecdh_auto.html #[cfg(any(libressl, all(ossl102, not(ossl110))))] pub fn set_ecdh_auto(&mut self, onoff: bool) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_ecdh_auto(self.as_ptr(), onoff as c_int)).map(|_| ()) } } /// Sets the options used by the context, returning the old set. /// /// This corresponds to [`SSL_CTX_set_options`]. /// /// # Note /// /// This *enables* the specified options, but does not disable unspecified options. Use /// `clear_options` for that. /// /// [`SSL_CTX_set_options`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_options.html pub fn set_options(&mut self, option: SslOptions) -> SslOptions { let bits = unsafe { ffi::SSL_CTX_set_options(self.as_ptr(), option.bits()) }; SslOptions { bits } } /// Returns the options used by the context. /// /// This corresponds to [`SSL_CTX_get_options`]. /// /// [`SSL_CTX_get_options`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_options.html pub fn options(&self) -> SslOptions { let bits = unsafe { ffi::SSL_CTX_get_options(self.as_ptr()) }; SslOptions { bits } } /// Clears the options used by the context, returning the old set. /// /// This corresponds to [`SSL_CTX_clear_options`]. /// /// [`SSL_CTX_clear_options`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_options.html pub fn clear_options(&mut self, option: SslOptions) -> SslOptions { let bits = unsafe { ffi::SSL_CTX_clear_options(self.as_ptr(), option.bits()) }; SslOptions { bits } } /// Sets the minimum supported protocol version. /// /// A value of `None` will enable protocol versions down the the lowest version supported by /// OpenSSL. /// /// This corresponds to [`SSL_CTX_set_min_proto_version`]. /// /// Requires OpenSSL 1.1.0 or LibreSSL 2.6.1 or newer. /// /// [`SSL_CTX_set_min_proto_version`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_set_min_proto_version.html #[cfg(any(ossl110, libressl261))] pub fn set_min_proto_version(&mut self, version: Option) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_min_proto_version( self.as_ptr(), version.map_or(0, |v| v.0 as _), )) .map(|_| ()) } } /// Sets the maximum supported protocol version. /// /// A value of `None` will enable protocol versions down the the highest version supported by /// OpenSSL. /// /// This corresponds to [`SSL_CTX_set_max_proto_version`]. /// /// Requires OpenSSL 1.1.0 or or LibreSSL 2.6.1 or newer. /// /// [`SSL_CTX_set_max_proto_version`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_set_min_proto_version.html #[cfg(any(ossl110, libressl261))] pub fn set_max_proto_version(&mut self, version: Option) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_set_max_proto_version( self.as_ptr(), version.map_or(0, |v| v.0 as _), )) .map(|_| ()) } } /// Gets the minimum supported protocol version. /// /// A value of `None` indicates that all versions down the the lowest version supported by /// OpenSSL are enabled. /// /// This corresponds to [`SSL_CTX_get_min_proto_version`]. /// /// Requires OpenSSL 1.1.0g or LibreSSL 2.7.0 or newer. /// /// [`SSL_CTX_get_min_proto_version`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_set_min_proto_version.html #[cfg(any(ossl110g, libressl270))] pub fn min_proto_version(&mut self) -> Option { unsafe { let r = ffi::SSL_CTX_get_min_proto_version(self.as_ptr()); if r == 0 { None } else { Some(SslVersion(r)) } } } /// Gets the maximum supported protocol version. 
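///
/// For reference, a sketch of constraining the negotiated range with the
/// corresponding setters (assumes OpenSSL 1.1.0+ or LibreSSL 2.6.1+):
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod, SslVersion};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_min_proto_version(Some(SslVersion::TLS1_2)).unwrap();
/// // `None` removes the upper bound.
/// builder.set_max_proto_version(None).unwrap();
/// ```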
/// /// A value of `None` indicates that all versions down the the highest version supported by /// OpenSSL are enabled. /// /// This corresponds to [`SSL_CTX_get_max_proto_version`]. /// /// Requires OpenSSL 1.1.0g or LibreSSL 2.7.0 or newer. /// /// [`SSL_CTX_get_max_proto_version`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_set_min_proto_version.html #[cfg(any(ossl110g, libressl270))] pub fn max_proto_version(&mut self) -> Option { unsafe { let r = ffi::SSL_CTX_get_max_proto_version(self.as_ptr()); if r == 0 { None } else { Some(SslVersion(r)) } } } /// Sets the protocols to sent to the server for Application Layer Protocol Negotiation (ALPN). /// /// The input must be in ALPN "wire format". It consists of a sequence of supported protocol /// names prefixed by their byte length. For example, the protocol list consisting of `spdy/1` /// and `http/1.1` is encoded as `b"\x06spdy/1\x08http/1.1"`. The protocols are ordered by /// preference. /// /// This corresponds to [`SSL_CTX_set_alpn_protos`]. /// /// Requires OpenSSL 1.0.2 or LibreSSL 2.6.1 or newer. /// /// [`SSL_CTX_set_alpn_protos`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_alpn_protos.html #[cfg(any(ossl102, libressl261))] pub fn set_alpn_protos(&mut self, protocols: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(protocols.len() <= c_uint::max_value() as usize); let r = ffi::SSL_CTX_set_alpn_protos( self.as_ptr(), protocols.as_ptr(), protocols.len() as c_uint, ); // fun fact, SSL_CTX_set_alpn_protos has a reversed return code D: if r == 0 { Ok(()) } else { Err(ErrorStack::get()) } } } /// Enables the DTLS extension "use_srtp" as defined in RFC5764. /// /// This corresponds to [`SSL_CTX_set_tlsext_use_srtp`]. /// /// [`SSL_CTX_set_tlsext_use_srtp`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_use_srtp.html pub fn set_tlsext_use_srtp(&mut self, protocols: &str) -> Result<(), ErrorStack> { unsafe { let cstr = CString::new(protocols).unwrap(); let r = ffi::SSL_CTX_set_tlsext_use_srtp(self.as_ptr(), cstr.as_ptr()); // fun fact, set_tlsext_use_srtp has a reversed return code D: if r == 0 { Ok(()) } else { Err(ErrorStack::get()) } } } /// Sets the callback used by a server to select a protocol for Application Layer Protocol /// Negotiation (ALPN). /// /// The callback is provided with the client's protocol list in ALPN wire format. See the /// documentation for [`SslContextBuilder::set_alpn_protos`] for details. It should return one /// of those protocols on success. The [`select_next_proto`] function implements the standard /// protocol selection algorithm. /// /// This corresponds to [`SSL_CTX_set_alpn_select_cb`]. /// /// Requires OpenSSL 1.0.2 or LibreSSL 2.6.1 or newer. /// /// [`SslContextBuilder::set_alpn_protos`]: struct.SslContextBuilder.html#method.set_alpn_protos /// [`select_next_proto`]: fn.select_next_proto.html /// [`SSL_CTX_set_alpn_select_cb`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_alpn_protos.html #[cfg(any(ossl102, libressl261))] pub fn set_alpn_select_callback(&mut self, callback: F) where F: for<'a> Fn(&mut SslRef, &'a [u8]) -> Result<&'a [u8], AlpnError> + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_alpn_select_cb( self.as_ptr(), callbacks::raw_alpn_select::, ptr::null_mut(), ); } } /// Checks for consistency between the private key and certificate. /// /// This corresponds to [`SSL_CTX_check_private_key`]. 
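///
/// A sketch of the usual server setup sequence; the file names are placeholders:
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslFiletype, SslMethod};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_certificate_chain_file("fullchain.pem").unwrap();
/// builder.set_private_key_file("key.pem", SslFiletype::PEM).unwrap();
/// // Fails if the key does not match the loaded certificate.
/// builder.check_private_key().unwrap();
/// ```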
/// /// [`SSL_CTX_check_private_key`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_check_private_key.html pub fn check_private_key(&self) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_CTX_check_private_key(self.as_ptr())).map(|_| ()) } } /// Returns a shared reference to the context's certificate store. /// /// This corresponds to [`SSL_CTX_get_cert_store`]. /// /// [`SSL_CTX_get_cert_store`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_get_cert_store.html pub fn cert_store(&self) -> &X509StoreBuilderRef { unsafe { X509StoreBuilderRef::from_ptr(ffi::SSL_CTX_get_cert_store(self.as_ptr())) } } /// Returns a mutable reference to the context's certificate store. /// /// This corresponds to [`SSL_CTX_get_cert_store`]. /// /// [`SSL_CTX_get_cert_store`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_get_cert_store.html pub fn cert_store_mut(&mut self) -> &mut X509StoreBuilderRef { unsafe { X509StoreBuilderRef::from_ptr_mut(ffi::SSL_CTX_get_cert_store(self.as_ptr())) } } /// Returns a reference to the X509 verification configuration. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_CTX_get0_param`]. /// /// [`SSL_CTX_get0_param`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_get0_param.html #[cfg(any(ossl102, libressl261))] pub fn verify_param(&self) -> &X509VerifyParamRef { unsafe { X509VerifyParamRef::from_ptr(ffi::SSL_CTX_get0_param(self.as_ptr())) } } /// Returns a mutable reference to the X509 verification configuration. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_CTX_get0_param`]. /// /// [`SSL_CTX_get0_param`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_get0_param.html #[cfg(any(ossl102, libressl261))] pub fn verify_param_mut(&mut self) -> &mut X509VerifyParamRef { unsafe { X509VerifyParamRef::from_ptr_mut(ffi::SSL_CTX_get0_param(self.as_ptr())) } } /// Sets the callback dealing with OCSP stapling. /// /// On the client side, this callback is responsible for validating the OCSP status response /// returned by the server. The status may be retrieved with the `SslRef::ocsp_status` method. /// A response of `Ok(true)` indicates that the OCSP status is valid, and a response of /// `Ok(false)` indicates that the OCSP status is invalid and the handshake should be /// terminated. /// /// On the server side, this callback is resopnsible for setting the OCSP status response to be /// returned to clients. The status may be set with the `SslRef::set_ocsp_status` method. A /// response of `Ok(true)` indicates that the OCSP status should be returned to the client, and /// `Ok(false)` indicates that the status should not be returned to the client. /// /// This corresponds to [`SSL_CTX_set_tlsext_status_cb`]. /// /// [`SSL_CTX_set_tlsext_status_cb`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_tlsext_status_cb.html pub fn set_status_callback(&mut self, callback: F) -> Result<(), ErrorStack> where F: Fn(&mut SslRef) -> Result + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); cvt( ffi::SSL_CTX_set_tlsext_status_cb(self.as_ptr(), Some(raw_tlsext_status::)) as c_int, ) .map(|_| ()) } } /// Sets the callback for providing an identity and pre-shared key for a TLS-PSK client. /// /// The callback will be called with the SSL context, an identity hint if one was provided /// by the server, a mutable slice for each of the identity and pre-shared key bytes. The /// identity must be written as a null-terminated C string. 
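///
/// A sketch of supplying a fixed identity and key; both values are placeholders and
/// the callback assumes the destination slices are large enough for them:
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_psk_client_callback(|_ssl, _hint, identity, psk| {
///     // The identity must be a NUL-terminated C string.
///     let id = b"client1\0";
///     identity[..id.len()].copy_from_slice(id);
///     // Write the pre-shared key and return the number of bytes written.
///     let key = b"not-a-real-key";
///     psk[..key.len()].copy_from_slice(key);
///     Ok(key.len())
/// });
/// ```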
/// /// This corresponds to [`SSL_CTX_set_psk_client_callback`]. /// /// [`SSL_CTX_set_psk_client_callback`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_psk_client_callback.html #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] pub fn set_psk_client_callback(&mut self, callback: F) where F: Fn(&mut SslRef, Option<&[u8]>, &mut [u8], &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_psk_client_callback(self.as_ptr(), Some(raw_client_psk::)); } } #[deprecated(since = "0.10.10", note = "renamed to `set_psk_client_callback`")] #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] pub fn set_psk_callback(&mut self, callback: F) where F: Fn(&mut SslRef, Option<&[u8]>, &mut [u8], &mut [u8]) -> Result + 'static + Sync + Send, { self.set_psk_client_callback(callback) } /// Sets the callback for providing an identity and pre-shared key for a TLS-PSK server. /// /// The callback will be called with the SSL context, an identity provided by the client, /// and, a mutable slice for the pre-shared key bytes. The callback returns the number of /// bytes in the pre-shared key. /// /// This corresponds to [`SSL_CTX_set_psk_server_callback`]. /// /// [`SSL_CTX_set_psk_server_callback`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_psk_server_callback.html #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] pub fn set_psk_server_callback(&mut self, callback: F) where F: Fn(&mut SslRef, Option<&[u8]>, &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_psk_server_callback(self.as_ptr(), Some(raw_server_psk::)); } } /// Sets the callback which is called when new sessions are negotiated. /// /// This can be used by clients to implement session caching. While in TLSv1.2 the session is /// available to access via [`SslRef::session`] immediately after the handshake completes, this /// is not the case for TLSv1.3. There, a session is not generally available immediately, and /// the server may provide multiple session tokens to the client over a single session. The new /// session callback is a portable way to deal with both cases. /// /// Note that session caching must be enabled for the callback to be invoked, and it defaults /// off for clients. [`set_session_cache_mode`] controls that behavior. /// /// This corresponds to [`SSL_CTX_sess_set_new_cb`]. /// /// [`SslRef::session`]: struct.SslRef.html#method.session /// [`set_session_cache_mode`]: #method.set_session_cache_mode /// [`SSL_CTX_sess_set_new_cb`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_sess_set_new_cb.html pub fn set_new_session_callback(&mut self, callback: F) where F: Fn(&mut SslRef, SslSession) + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_sess_set_new_cb(self.as_ptr(), Some(callbacks::raw_new_session::)); } } /// Sets the callback which is called when sessions are removed from the context. /// /// Sessions can be removed because they have timed out or because they are considered faulty. /// /// This corresponds to [`SSL_CTX_sess_set_remove_cb`]. 
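///
/// A sketch wiring this up together with the new-session hook for a client-side
/// session cache; the cache storage itself is left out:
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod, SslSessionCacheMode};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_session_cache_mode(SslSessionCacheMode::CLIENT);
/// builder.set_new_session_callback(|_ssl, session| {
///     // Store `session` so it can later be resumed with `Ssl::set_session`.
///     let _ = session;
/// });
/// builder.set_remove_session_callback(|_ctx, session| {
///     // Evict the corresponding cache entry.
///     let _ = session;
/// });
/// ```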
/// /// [`SSL_CTX_sess_set_remove_cb`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_sess_set_new_cb.html pub fn set_remove_session_callback(&mut self, callback: F) where F: Fn(&SslContextRef, &SslSessionRef) + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_sess_set_remove_cb( self.as_ptr(), Some(callbacks::raw_remove_session::), ); } } /// Sets the callback which is called when a client proposed to resume a session but it was not /// found in the internal cache. /// /// The callback is passed a reference to the session ID provided by the client. It should /// return the session corresponding to that ID if available. This is only used for servers, not /// clients. /// /// This corresponds to [`SSL_CTX_sess_set_get_cb`]. /// /// # Safety /// /// The returned `SslSession` must not be associated with a different `SslContext`. /// /// [`SSL_CTX_sess_set_get_cb`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_sess_set_new_cb.html pub unsafe fn set_get_session_callback(&mut self, callback: F) where F: Fn(&mut SslRef, &[u8]) -> Option + 'static + Sync + Send, { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_sess_set_get_cb(self.as_ptr(), Some(callbacks::raw_get_session::)); } /// Sets the TLS key logging callback. /// /// The callback is invoked whenever TLS key material is generated, and is passed a line of NSS /// SSLKEYLOGFILE-formatted text. This can be used by tools like Wireshark to decrypt message /// traffic. The line does not contain a trailing newline. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_set_keylog_callback`]. /// /// [`SSL_CTX_set_keylog_callback`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_keylog_callback.html #[cfg(ossl111)] pub fn set_keylog_callback(&mut self, callback: F) where F: Fn(&SslRef, &str) + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_keylog_callback(self.as_ptr(), Some(callbacks::raw_keylog::)); } } /// Sets the session caching mode use for connections made with the context. /// /// Returns the previous session caching mode. /// /// This corresponds to [`SSL_CTX_set_session_cache_mode`]. /// /// [`SSL_CTX_set_session_cache_mode`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_get_session_cache_mode.html pub fn set_session_cache_mode(&mut self, mode: SslSessionCacheMode) -> SslSessionCacheMode { unsafe { let bits = ffi::SSL_CTX_set_session_cache_mode(self.as_ptr(), mode.bits()); SslSessionCacheMode { bits } } } /// Sets the callback for generating an application cookie for TLS1.3 /// stateless handshakes. /// /// The callback will be called with the SSL context and a slice into which the cookie /// should be written. The callback should return the number of bytes written. /// /// This corresponds to `SSL_CTX_set_stateless_cookie_generate_cb`. #[cfg(ossl111)] pub fn set_stateless_cookie_generate_cb(&mut self, callback: F) where F: Fn(&mut SslRef, &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_stateless_cookie_generate_cb( self.as_ptr(), Some(raw_stateless_cookie_generate::), ); } } /// Sets the callback for verifying an application cookie for TLS1.3 /// stateless handshakes. /// /// The callback will be called with the SSL context and the cookie supplied by the /// client. It should return true if and only if the cookie is valid. 
/// /// Note that the OpenSSL implementation independently verifies the integrity of /// application cookies using an HMAC before invoking the supplied callback. /// /// This corresponds to `SSL_CTX_set_stateless_cookie_verify_cb`. #[cfg(ossl111)] pub fn set_stateless_cookie_verify_cb(&mut self, callback: F) where F: Fn(&mut SslRef, &[u8]) -> bool + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_stateless_cookie_verify_cb( self.as_ptr(), Some(raw_stateless_cookie_verify::), ) } } /// Sets the callback for generating a DTLSv1 cookie /// /// The callback will be called with the SSL context and a slice into which the cookie /// should be written. The callback should return the number of bytes written. /// /// This corresponds to `SSL_CTX_set_cookie_generate_cb`. pub fn set_cookie_generate_cb(&mut self, callback: F) where F: Fn(&mut SslRef, &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_cookie_generate_cb(self.as_ptr(), Some(raw_cookie_generate::)); } } /// Sets the callback for verifying a DTLSv1 cookie /// /// The callback will be called with the SSL context and the cookie supplied by the /// client. It should return true if and only if the cookie is valid. /// /// This corresponds to `SSL_CTX_set_cookie_verify_cb`. pub fn set_cookie_verify_cb(&mut self, callback: F) where F: Fn(&mut SslRef, &[u8]) -> bool + 'static + Sync + Send, { unsafe { self.set_ex_data(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_cookie_verify_cb(self.as_ptr(), Some(raw_cookie_verify::)); } } /// Sets the extra data at the specified index. /// /// This can be used to provide data to callbacks registered with the context. Use the /// `SslContext::new_ex_index` method to create an `Index`. /// /// This corresponds to [`SSL_CTX_set_ex_data`]. /// /// [`SSL_CTX_set_ex_data`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_ex_data.html pub fn set_ex_data(&mut self, index: Index, data: T) { self.set_ex_data_inner(index, data); } fn set_ex_data_inner(&mut self, index: Index, data: T) -> *mut c_void { unsafe { let data = Box::into_raw(Box::new(data)) as *mut c_void; ffi::SSL_CTX_set_ex_data(self.as_ptr(), index.as_raw(), data); data } } /// Adds a custom extension for a TLS/DTLS client or server for all supported protocol versions. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_add_custom_ext`]. /// /// [`SSL_CTX_add_custom_ext`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_add_custom_ext.html #[cfg(ossl111)] pub fn add_custom_ext( &mut self, ext_type: u16, context: ExtensionContext, add_cb: AddFn, parse_cb: ParseFn, ) -> Result<(), ErrorStack> where AddFn: Fn( &mut SslRef, ExtensionContext, Option<(usize, &X509Ref)>, ) -> Result, SslAlert> + 'static + Sync + Send, T: AsRef<[u8]> + 'static + Sync + Send, ParseFn: Fn( &mut SslRef, ExtensionContext, &[u8], Option<(usize, &X509Ref)>, ) -> Result<(), SslAlert> + 'static + Sync + Send, { let ret = unsafe { self.set_ex_data(SslContext::cached_ex_index::(), add_cb); self.set_ex_data(SslContext::cached_ex_index::(), parse_cb); ffi::SSL_CTX_add_custom_ext( self.as_ptr(), ext_type as c_uint, context.bits(), Some(raw_custom_ext_add::), Some(raw_custom_ext_free::), ptr::null_mut(), Some(raw_custom_ext_parse::), ptr::null_mut(), ) }; if ret == 1 { Ok(()) } else { Err(ErrorStack::get()) } } /// Sets the maximum amount of early data that will be accepted on incoming connections. 
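///
/// A sketch of allowing 16 KiB of TLS 1.3 early data (assumes OpenSSL 1.1.1+):
///
/// ```no_run
/// use openssl::ssl::{SslContext, SslMethod};
///
/// let mut builder = SslContext::builder(SslMethod::tls()).unwrap();
/// builder.set_max_early_data(16 * 1024).unwrap();
/// ```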
/// /// Defaults to 0. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_set_max_early_data`]. /// /// [`SSL_CTX_set_max_early_data`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_max_early_data.html #[cfg(ossl111)] pub fn set_max_early_data(&mut self, bytes: u32) -> Result<(), ErrorStack> { if unsafe { ffi::SSL_CTX_set_max_early_data(self.as_ptr(), bytes) } == 1 { Ok(()) } else { Err(ErrorStack::get()) } } /// Sets a callback which will be invoked just after the client's hello message is received. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_set_client_hello_cb`]. /// /// [`SSL_CTX_set_client_hello_cb`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn set_client_hello_callback(&mut self, callback: F) where F: Fn(&mut SslRef, &mut SslAlert) -> Result + 'static + Sync + Send, { unsafe { let ptr = self.set_ex_data_inner(SslContext::cached_ex_index::(), callback); ffi::SSL_CTX_set_client_hello_cb( self.as_ptr(), Some(callbacks::raw_client_hello::), ptr, ); } } /// Sets the context's session cache size limit, returning the previous limit. /// /// A value of 0 means that the cache size is unbounded. /// /// This corresponds to [`SSL_CTX_sess_get_cache_size`]. /// /// [`SSL_CTX_sess_get_cache_size`]: https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_sess_set_cache_size.html #[allow(clippy::useless_conversion)] pub fn set_session_cache_size(&mut self, size: i32) -> i64 { unsafe { ffi::SSL_CTX_sess_set_cache_size(self.as_ptr(), size.into()).into() } } /// Sets the context's supported signature algorithms. /// /// This corresponds to [`SSL_CTX_set1_sigalgs_list`]. /// /// Requires OpenSSL 1.0.2 or newer. /// /// [`SSL_CTX_set1_sigalgs_list`]: https://www.openssl.org/docs/man1.1.0/man3/SSL_CTX_set1_sigalgs_list.html #[cfg(ossl102)] pub fn set_sigalgs_list(&mut self, sigalgs: &str) -> Result<(), ErrorStack> { let sigalgs = CString::new(sigalgs).unwrap(); unsafe { cvt(ffi::SSL_CTX_set1_sigalgs_list(self.as_ptr(), sigalgs.as_ptr()) as c_int) .map(|_| ()) } } /// Sets the context's supported elliptic curve groups. /// /// This corresponds to [`SSL_CTX_set1_groups_list`]. /// /// Requires OpenSSL 1.1.1 or newer. /// /// [`SSL_CTX_set1_groups_list`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set1_groups_list.html #[cfg(ossl111)] pub fn set_groups_list(&mut self, groups: &str) -> Result<(), ErrorStack> { let groups = CString::new(groups).unwrap(); unsafe { cvt(ffi::SSL_CTX_set1_groups_list(self.as_ptr(), groups.as_ptr()) as c_int).map(|_| ()) } } /// Consumes the builder, returning a new `SslContext`. pub fn build(self) -> SslContext { self.0 } } foreign_type_and_impl_send_sync! { type CType = ffi::SSL_CTX; fn drop = ffi::SSL_CTX_free; /// A context object for TLS streams. /// /// Applications commonly configure a single `SslContext` that is shared by all of its /// `SslStreams`. 
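// Hedged builder sketch tying together the setters above: restrict the key-exchange
// groups and signature algorithms and opt in to 0-RTT. The specific lists are
// illustrative, not recommendations; `set_groups_list` and `set_max_early_data` need
// OpenSSL 1.1.1 at build time, `set_sigalgs_list` needs 1.0.2.
use openssl::error::ErrorStack;
use openssl::ssl::{SslContext, SslContextBuilder, SslMethod};

fn build_restricted_ctx() -> Result<SslContext, ErrorStack> {
    let mut builder = SslContextBuilder::new(SslMethod::tls())?;
    builder.set_groups_list("X25519:P-256")?;
    builder.set_sigalgs_list("ECDSA+SHA256:RSA+SHA256")?;
    builder.set_max_early_data(16 * 1024)?; // accept up to 16 KiB of 0-RTT data
    Ok(builder.build())
}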
pub struct SslContext; /// Reference to [`SslContext`] /// /// [`SslContext`]: struct.SslContext.html pub struct SslContextRef; } impl Clone for SslContext { fn clone(&self) -> Self { (**self).to_owned() } } impl ToOwned for SslContextRef { type Owned = SslContext; fn to_owned(&self) -> Self::Owned { unsafe { SSL_CTX_up_ref(self.as_ptr()); SslContext::from_ptr(self.as_ptr()) } } } // TODO: add useful info here impl fmt::Debug for SslContext { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { write!(fmt, "SslContext") } } impl SslContext { /// Creates a new builder object for an `SslContext`. pub fn builder(method: SslMethod) -> Result { SslContextBuilder::new(method) } /// Returns a new extra data index. /// /// Each invocation of this function is guaranteed to return a distinct index. These can be used /// to store data in the context that can be retrieved later by callbacks, for example. /// /// This corresponds to [`SSL_CTX_get_ex_new_index`]. /// /// [`SSL_CTX_get_ex_new_index`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_get_ex_new_index.html pub fn new_ex_index() -> Result, ErrorStack> where T: 'static + Sync + Send, { unsafe { ffi::init(); let idx = cvt_n(get_new_idx(free_data_box::))?; Ok(Index::from_raw(idx)) } } // FIXME should return a result? fn cached_ex_index() -> Index where T: 'static + Sync + Send, { unsafe { let idx = *INDEXES .lock() .unwrap_or_else(|e| e.into_inner()) .entry(TypeId::of::()) .or_insert_with(|| SslContext::new_ex_index::().unwrap().as_raw()); Index::from_raw(idx) } } } impl SslContextRef { /// Returns the certificate associated with this `SslContext`, if present. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_CTX_get0_certificate`]. /// /// [`SSL_CTX_get0_certificate`]: https://www.openssl.org/docs/man1.1.0/ssl/ssl.html #[cfg(any(ossl102, ossl110))] pub fn certificate(&self) -> Option<&X509Ref> { unsafe { let ptr = ffi::SSL_CTX_get0_certificate(self.as_ptr()); X509Ref::from_const_ptr_opt(ptr) } } /// Returns the private key associated with this `SslContext`, if present. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_CTX_get0_privatekey`]. /// /// [`SSL_CTX_get0_privatekey`]: https://www.openssl.org/docs/man1.1.0/ssl/ssl.html #[cfg(any(ossl102, ossl110))] pub fn private_key(&self) -> Option<&PKeyRef> { unsafe { let ptr = ffi::SSL_CTX_get0_privatekey(self.as_ptr()); PKeyRef::from_const_ptr_opt(ptr) } } /// Returns a shared reference to the certificate store used for verification. /// /// This corresponds to [`SSL_CTX_get_cert_store`]. /// /// [`SSL_CTX_get_cert_store`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_get_cert_store.html pub fn cert_store(&self) -> &X509StoreRef { unsafe { X509StoreRef::from_ptr(ffi::SSL_CTX_get_cert_store(self.as_ptr())) } } /// Returns a shared reference to the stack of certificates making up the chain from the leaf. /// /// This corresponds to `SSL_CTX_get_extra_chain_certs`. pub fn extra_chain_certs(&self) -> &StackRef { unsafe { let mut chain = ptr::null_mut(); ffi::SSL_CTX_get_extra_chain_certs(self.as_ptr(), &mut chain); StackRef::from_const_ptr_opt(chain).expect("extra chain certs must not be null") } } /// Returns a reference to the extra data at the specified index. /// /// This corresponds to [`SSL_CTX_get_ex_data`]. 
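// Sketch of the extra-data mechanism described above: reserve a process-wide index,
// attach a value through the builder, and read it back from the built context (for
// example from inside a callback). `AppTag` and the stored string are hypothetical.
use openssl::error::ErrorStack;
use openssl::ssl::{SslContext, SslContextBuilder, SslMethod};

#[derive(Debug)]
struct AppTag(&'static str);

fn ex_data_round_trip() -> Result<(), ErrorStack> {
    let index = SslContext::new_ex_index::<AppTag>()?;
    let mut builder = SslContextBuilder::new(SslMethod::tls())?;
    builder.set_ex_data(index, AppTag("shared-config"));
    let ctx = builder.build();
    assert_eq!(ctx.ex_data(index).map(|tag| tag.0), Some("shared-config"));
    Ok(())
}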
/// /// [`SSL_CTX_get_ex_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_CTX_get_ex_data.html pub fn ex_data(&self, index: Index) -> Option<&T> { unsafe { let data = ffi::SSL_CTX_get_ex_data(self.as_ptr(), index.as_raw()); if data.is_null() { None } else { Some(&*(data as *const T)) } } } /// Gets the maximum amount of early data that will be accepted on incoming connections. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CTX_get_max_early_data`]. /// /// [`SSL_CTX_get_max_early_data`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_get_max_early_data.html #[cfg(ossl111)] pub fn max_early_data(&self) -> u32 { unsafe { ffi::SSL_CTX_get_max_early_data(self.as_ptr()) } } /// Adds a session to the context's cache. /// /// Returns `true` if the session was successfully added to the cache, and `false` if it was already present. /// /// This corresponds to [`SSL_CTX_add_session`]. /// /// # Safety /// /// The caller of this method is responsible for ensuring that the session has never been used with another /// `SslContext` than this one. /// /// [`SSL_CTX_add_session`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_remove_session.html pub unsafe fn add_session(&self, session: &SslSessionRef) -> bool { ffi::SSL_CTX_add_session(self.as_ptr(), session.as_ptr()) != 0 } /// Removes a session from the context's cache and marks it as non-resumable. /// /// Returns `true` if the session was successfully found and removed, and `false` otherwise. /// /// This corresponds to [`SSL_CTX_remove_session`]. /// /// # Safety /// /// The caller of this method is responsible for ensuring that the session has never been used with another /// `SslContext` than this one. /// /// [`SSL_CTX_remove_session`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_remove_session.html pub unsafe fn remove_session(&self, session: &SslSessionRef) -> bool { ffi::SSL_CTX_remove_session(self.as_ptr(), session.as_ptr()) != 0 } /// Returns the context's session cache size limit. /// /// A value of 0 means that the cache size is unbounded. /// /// This corresponds to [`SSL_CTX_sess_get_cache_size`]. /// /// [`SSL_CTX_sess_get_cache_size`]: https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_sess_set_cache_size.html #[allow(clippy::useless_conversion)] pub fn session_cache_size(&self) -> i64 { unsafe { ffi::SSL_CTX_sess_get_cache_size(self.as_ptr()).into() } } /// Returns the verify mode that was set on this context from [`SslContextBuilder::set_verify`]. /// /// This corresponds to [`SSL_CTX_get_verify_mode`]. /// /// [`SslContextBuilder::set_verify`]: struct.SslContextBuilder.html#method.set_verify /// [`SSL_CTX_get_verify_mode`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_get_verify_mode.html pub fn verify_mode(&self) -> SslVerifyMode { let mode = unsafe { ffi::SSL_CTX_get_verify_mode(self.as_ptr()) }; SslVerifyMode::from_bits(mode).expect("SSL_CTX_get_verify_mode returned invalid mode") } } /// Information about the state of a cipher. pub struct CipherBits { /// The number of secret bits used for the cipher. pub secret: i32, /// The number of bits processed by the chosen algorithm. pub algorithm: i32, } /// Information about a cipher. 
pub struct SslCipher(*mut ffi::SSL_CIPHER); impl ForeignType for SslCipher { type CType = ffi::SSL_CIPHER; type Ref = SslCipherRef; #[inline] unsafe fn from_ptr(ptr: *mut ffi::SSL_CIPHER) -> SslCipher { SslCipher(ptr) } #[inline] fn as_ptr(&self) -> *mut ffi::SSL_CIPHER { self.0 } } impl Deref for SslCipher { type Target = SslCipherRef; fn deref(&self) -> &SslCipherRef { unsafe { SslCipherRef::from_ptr(self.0) } } } impl DerefMut for SslCipher { fn deref_mut(&mut self) -> &mut SslCipherRef { unsafe { SslCipherRef::from_ptr_mut(self.0) } } } /// Reference to an [`SslCipher`]. /// /// [`SslCipher`]: struct.SslCipher.html pub struct SslCipherRef(Opaque); impl ForeignTypeRef for SslCipherRef { type CType = ffi::SSL_CIPHER; } impl SslCipherRef { /// Returns the name of the cipher. /// /// This corresponds to [`SSL_CIPHER_get_name`]. /// /// [`SSL_CIPHER_get_name`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html pub fn name(&self) -> &'static str { unsafe { let ptr = ffi::SSL_CIPHER_get_name(self.as_ptr()); CStr::from_ptr(ptr).to_str().unwrap() } } /// Returns the RFC-standard name of the cipher, if one exists. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CIPHER_standard_name`]. /// /// [`SSL_CIPHER_standard_name`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html #[cfg(ossl111)] pub fn standard_name(&self) -> Option<&'static str> { unsafe { let ptr = ffi::SSL_CIPHER_standard_name(self.as_ptr()); if ptr.is_null() { None } else { Some(CStr::from_ptr(ptr).to_str().unwrap()) } } } /// Returns the SSL/TLS protocol version that first defined the cipher. /// /// This corresponds to [`SSL_CIPHER_get_version`]. /// /// [`SSL_CIPHER_get_version`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html pub fn version(&self) -> &'static str { let version = unsafe { let ptr = ffi::SSL_CIPHER_get_version(self.as_ptr()); CStr::from_ptr(ptr as *const _) }; str::from_utf8(version.to_bytes()).unwrap() } /// Returns the number of bits used for the cipher. /// /// This corresponds to [`SSL_CIPHER_get_bits`]. /// /// [`SSL_CIPHER_get_bits`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html #[allow(clippy::useless_conversion)] pub fn bits(&self) -> CipherBits { unsafe { let mut algo_bits = 0; let secret_bits = ffi::SSL_CIPHER_get_bits(self.as_ptr(), &mut algo_bits); CipherBits { secret: secret_bits.into(), algorithm: algo_bits.into(), } } } /// Returns a textual description of the cipher. /// /// This corresponds to [`SSL_CIPHER_description`]. /// /// [`SSL_CIPHER_description`]: https://www.openssl.org/docs/manmaster/man3/SSL_CIPHER_get_name.html pub fn description(&self) -> String { unsafe { // SSL_CIPHER_description requires a buffer of at least 128 bytes. let mut buf = [0; 128]; let ptr = ffi::SSL_CIPHER_description(self.as_ptr(), buf.as_mut_ptr(), 128); String::from_utf8(CStr::from_ptr(ptr as *const _).to_bytes().to_vec()).unwrap() } } /// Returns the handshake digest of the cipher. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_CIPHER_get_handshake_digest`]. /// /// [`SSL_CIPHER_get_handshake_digest`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CIPHER_get_handshake_digest.html #[cfg(ossl111)] pub fn handshake_digest(&self) -> Option { unsafe { let ptr = ffi::SSL_CIPHER_get_handshake_digest(self.as_ptr()); if ptr.is_null() { None } else { Some(MessageDigest::from_ptr(ptr)) } } } /// Returns the NID corresponding to the cipher. /// /// Requires OpenSSL 1.1.0 or newer. 
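// Illustrative helper (hypothetical name) that formats the negotiated cipher of an
// established connection using the `SslCipherRef` accessors documented above.
use openssl::ssl::SslStream;

fn describe_cipher<S>(stream: &SslStream<S>) -> Option<String> {
    let cipher = stream.ssl().current_cipher()?;
    Some(format!(
        "{} ({}, {} secret bits)",
        cipher.name(),
        cipher.version(),
        cipher.bits().secret
    ))
}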
/// /// This corresponds to [`SSL_CIPHER_get_cipher_nid`]. /// /// [`SSL_CIPHER_get_cipher_nid`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_CIPHER_get_cipher_nid.html #[cfg(any(ossl110))] pub fn cipher_nid(&self) -> Option { let n = unsafe { ffi::SSL_CIPHER_get_cipher_nid(self.as_ptr()) }; if n == 0 { None } else { Some(Nid::from_raw(n)) } } } foreign_type_and_impl_send_sync! { type CType = ffi::SSL_SESSION; fn drop = ffi::SSL_SESSION_free; /// An encoded SSL session. /// /// These can be cached to share sessions across connections. pub struct SslSession; /// Reference to [`SslSession`]. /// /// [`SslSession`]: struct.SslSession.html pub struct SslSessionRef; } impl Clone for SslSession { fn clone(&self) -> SslSession { SslSessionRef::to_owned(self) } } impl SslSession { from_der! { /// Deserializes a DER-encoded session structure. /// /// This corresponds to [`d2i_SSL_SESSION`]. /// /// [`d2i_SSL_SESSION`]: https://www.openssl.org/docs/man1.0.2/ssl/d2i_SSL_SESSION.html from_der, SslSession, ffi::d2i_SSL_SESSION } } impl ToOwned for SslSessionRef { type Owned = SslSession; fn to_owned(&self) -> SslSession { unsafe { SSL_SESSION_up_ref(self.as_ptr()); SslSession(self.as_ptr()) } } } impl SslSessionRef { /// Returns the SSL session ID. /// /// This corresponds to [`SSL_SESSION_get_id`]. /// /// [`SSL_SESSION_get_id`]: https://www.openssl.org/docs/manmaster/man3/SSL_SESSION_get_id.html pub fn id(&self) -> &[u8] { unsafe { let mut len = 0; let p = ffi::SSL_SESSION_get_id(self.as_ptr(), &mut len); slice::from_raw_parts(p as *const u8, len as usize) } } /// Returns the length of the master key. /// /// This corresponds to [`SSL_SESSION_get_master_key`]. /// /// [`SSL_SESSION_get_master_key`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_SESSION_get_master_key.html pub fn master_key_len(&self) -> usize { unsafe { SSL_SESSION_get_master_key(self.as_ptr(), ptr::null_mut(), 0) } } /// Copies the master key into the provided buffer. /// /// Returns the number of bytes written, or the size of the master key if the buffer is empty. /// /// This corresponds to [`SSL_SESSION_get_master_key`]. /// /// [`SSL_SESSION_get_master_key`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_SESSION_get_master_key.html pub fn master_key(&self, buf: &mut [u8]) -> usize { unsafe { SSL_SESSION_get_master_key(self.as_ptr(), buf.as_mut_ptr(), buf.len()) } } /// Gets the maximum amount of early data that can be sent on this session. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_SESSION_get_max_early_data`]. /// /// [`SSL_SESSION_get_max_early_data`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_SESSION_get_max_early_data.html #[cfg(ossl111)] pub fn max_early_data(&self) -> u32 { unsafe { ffi::SSL_SESSION_get_max_early_data(self.as_ptr()) } } /// Returns the time at which the session was established, in seconds since the Unix epoch. /// /// This corresponds to [`SSL_SESSION_get_time`]. /// /// [`SSL_SESSION_get_time`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_SESSION_get_time.html #[allow(clippy::useless_conversion)] pub fn time(&self) -> i64 { unsafe { ffi::SSL_SESSION_get_time(self.as_ptr()).into() } } /// Returns the sessions timeout, in seconds. /// /// A session older than this time should not be used for session resumption. /// /// This corresponds to [`SSL_SESSION_get_timeout`]. 
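// Session-caching sketch using the DER round trip described above: serialize an
// established session for external storage, then rebuild it later so it can be fed
// back via `Ssl::set_session` (which is unsafe because the session must only be
// reused with the same context it came from).
use openssl::error::ErrorStack;
use openssl::ssl::{SslSession, SslSessionRef};

fn cache_round_trip(session: &SslSessionRef) -> Result<SslSession, ErrorStack> {
    let der = session.to_der()?;
    SslSession::from_der(&der)
}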
/// /// [`SSL_SESSION_get_timeout`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_SESSION_get_time.html #[allow(clippy::useless_conversion)] pub fn timeout(&self) -> i64 { unsafe { ffi::SSL_SESSION_get_timeout(self.as_ptr()).into() } } /// Returns the session's TLS protocol version. /// /// Requires OpenSSL 1.1.0 or newer. /// /// This corresponds to [`SSL_SESSION_get_protocol_version`]. /// /// [`SSL_SESSION_get_protocol_version`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_SESSION_get_protocol_version.html #[cfg(ossl110)] pub fn protocol_version(&self) -> SslVersion { unsafe { let version = ffi::SSL_SESSION_get_protocol_version(self.as_ptr()); SslVersion(version) } } to_der! { /// Serializes the session into a DER-encoded structure. /// /// This corresponds to [`i2d_SSL_SESSION`]. /// /// [`i2d_SSL_SESSION`]: https://www.openssl.org/docs/man1.0.2/ssl/i2d_SSL_SESSION.html to_der, ffi::i2d_SSL_SESSION } } foreign_type_and_impl_send_sync! { type CType = ffi::SSL; fn drop = ffi::SSL_free; /// The state of an SSL/TLS session. /// /// `Ssl` objects are created from an [`SslContext`], which provides configuration defaults. /// These defaults can be overridden on a per-`Ssl` basis, however. /// /// [`SslContext`]: struct.SslContext.html pub struct Ssl; /// Reference to an [`Ssl`]. /// /// [`Ssl`]: struct.Ssl.html pub struct SslRef; } impl fmt::Debug for Ssl { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Debug::fmt(&**self, fmt) } } impl Ssl { /// Returns a new extra data index. /// /// Each invocation of this function is guaranteed to return a distinct index. These can be used /// to store data in the context that can be retrieved later by callbacks, for example. /// /// This corresponds to [`SSL_get_ex_new_index`]. /// /// [`SSL_get_ex_new_index`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_get_ex_new_index.html pub fn new_ex_index() -> Result, ErrorStack> where T: 'static + Sync + Send, { unsafe { ffi::init(); let idx = cvt_n(get_new_ssl_idx(free_data_box::))?; Ok(Index::from_raw(idx)) } } // FIXME should return a result? fn cached_ex_index() -> Index where T: 'static + Sync + Send, { unsafe { let idx = *SSL_INDEXES .lock() .unwrap_or_else(|e| e.into_inner()) .entry(TypeId::of::()) .or_insert_with(|| Ssl::new_ex_index::().unwrap().as_raw()); Index::from_raw(idx) } } /// Creates a new `Ssl`. /// /// This corresponds to [`SSL_new`]. /// /// [`SSL_new`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_new.html // FIXME should take &SslContextRef pub fn new(ctx: &SslContextRef) -> Result { let session_ctx_index = try_get_session_ctx_index()?; unsafe { let ptr = cvt_p(ffi::SSL_new(ctx.as_ptr()))?; let mut ssl = Ssl::from_ptr(ptr); ssl.set_ex_data(*session_ctx_index, ctx.to_owned()); Ok(ssl) } } /// Initiates a client-side TLS handshake. /// /// This corresponds to [`SSL_connect`]. /// /// # Warning /// /// OpenSSL's default configuration is insecure. It is highly recommended to use /// `SslConnector` rather than `Ssl` directly, as it manages that configuration. /// /// [`SSL_connect`]: https://www.openssl.org/docs/manmaster/man3/SSL_connect.html #[allow(deprecated)] pub fn connect(self, stream: S) -> Result, HandshakeError> where S: Read + Write, { SslStreamBuilder::new(self, stream).connect() } /// Initiates a server-side TLS handshake. /// /// This corresponds to [`SSL_accept`]. /// /// # Warning /// /// OpenSSL's default configuration is insecure. It is highly recommended to use /// `SslAcceptor` rather than `Ssl` directly, as it manages that configuration. 
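// Minimal client-handshake sketch built directly on `Ssl::new`; the host and port are
// placeholders, and as the warning above notes, real applications should prefer
// `SslConnector`, which applies safer defaults and hostname verification.
use openssl::ssl::{Ssl, SslContextRef};
use std::error::Error;
use std::net::TcpStream;

fn connect_raw(ctx: &SslContextRef) -> Result<(), Box<dyn Error>> {
    let tcp = TcpStream::connect("example.com:443")?;
    let mut ssl = Ssl::new(ctx)?;
    ssl.set_hostname("example.com")?; // SNI, see `SslRef::set_hostname` below
    let mut stream = ssl.connect(tcp)?; // performs the TLS handshake
    stream.shutdown()?;
    Ok(())
}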
/// /// [`SSL_accept`]: https://www.openssl.org/docs/manmaster/man3/SSL_accept.html #[allow(deprecated)] pub fn accept(self, stream: S) -> Result, HandshakeError> where S: Read + Write, { SslStreamBuilder::new(self, stream).accept() } } impl fmt::Debug for SslRef { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.debug_struct("Ssl") .field("state", &self.state_string_long()) .field("verify_result", &self.verify_result()) .finish() } } impl SslRef { fn get_raw_rbio(&self) -> *mut ffi::BIO { unsafe { ffi::SSL_get_rbio(self.as_ptr()) } } fn read(&mut self, buf: &mut [u8]) -> c_int { let len = cmp::min(c_int::max_value() as usize, buf.len()) as c_int; unsafe { ffi::SSL_read(self.as_ptr(), buf.as_ptr() as *mut c_void, len) } } fn peek(&mut self, buf: &mut [u8]) -> c_int { let len = cmp::min(c_int::max_value() as usize, buf.len()) as c_int; unsafe { ffi::SSL_peek(self.as_ptr(), buf.as_ptr() as *mut c_void, len) } } fn write(&mut self, buf: &[u8]) -> c_int { let len = cmp::min(c_int::max_value() as usize, buf.len()) as c_int; unsafe { ffi::SSL_write(self.as_ptr(), buf.as_ptr() as *const c_void, len) } } fn get_error(&self, ret: c_int) -> ErrorCode { unsafe { ErrorCode::from_raw(ffi::SSL_get_error(self.as_ptr(), ret)) } } /// Configure as an outgoing stream from a client. /// /// This corresponds to [`SSL_set_connect_state`]. /// /// [`SSL_set_connect_state`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_connect_state.html pub fn set_connect_state(&mut self) { unsafe { ffi::SSL_set_connect_state(self.as_ptr()) } } /// Configure as an incoming stream to a server. /// /// This corresponds to [`SSL_set_accept_state`]. /// /// [`SSL_set_accept_state`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_accept_state.html pub fn set_accept_state(&mut self) { unsafe { ffi::SSL_set_accept_state(self.as_ptr()) } } /// Like [`SslContextBuilder::set_verify`]. /// /// This corresponds to [`SSL_set_verify`]. /// /// [`SslContextBuilder::set_verify`]: struct.SslContextBuilder.html#method.set_verify /// [`SSL_set_verify`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_verify.html pub fn set_verify(&mut self, mode: SslVerifyMode) { unsafe { ffi::SSL_set_verify(self.as_ptr(), mode.bits as c_int, None) } } /// Returns the verify mode that was set using `set_verify`. /// /// This corresponds to [`SSL_get_verify_mode`]. /// /// [`SSL_get_verify_mode`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_get_verify_mode.html pub fn verify_mode(&self) -> SslVerifyMode { let mode = unsafe { ffi::SSL_get_verify_mode(self.as_ptr()) }; SslVerifyMode::from_bits(mode).expect("SSL_get_verify_mode returned invalid mode") } /// Like [`SslContextBuilder::set_verify_callback`]. /// /// This corresponds to [`SSL_set_verify`]. /// /// [`SslContextBuilder::set_verify_callback`]: struct.SslContextBuilder.html#method.set_verify_callback /// [`SSL_set_verify`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_verify.html pub fn set_verify_callback(&mut self, mode: SslVerifyMode, verify: F) where F: Fn(bool, &mut X509StoreContextRef) -> bool + 'static + Sync + Send, { unsafe { // this needs to be in an Arc since the callback can register a new callback! self.set_ex_data(Ssl::cached_ex_index(), Arc::new(verify)); ffi::SSL_set_verify(self.as_ptr(), mode.bits as c_int, Some(ssl_raw_verify::)); } } /// Like [`SslContextBuilder::set_tmp_dh`]. /// /// This corresponds to [`SSL_set_tmp_dh`]. 
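// Per-connection verification sketch mirroring `set_verify_callback` above: keep the
// default verifier's decision but log failures. Returning `preverify_ok` unchanged
// preserves normal chain validation; only the logging is added here.
use openssl::ssl::{SslRef, SslVerifyMode};

fn install_verify_logging(ssl: &mut SslRef) {
    ssl.set_verify_callback(SslVerifyMode::PEER, |preverify_ok, x509_ctx| {
        if !preverify_ok {
            println!("certificate rejected: {}", x509_ctx.error().error_string());
        }
        preverify_ok
    });
}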
/// /// [`SslContextBuilder::set_tmp_dh`]: struct.SslContextBuilder.html#method.set_tmp_dh /// [`SSL_set_tmp_dh`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tmp_dh.html pub fn set_tmp_dh(&mut self, dh: &DhRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_set_tmp_dh(self.as_ptr(), dh.as_ptr()) as c_int).map(|_| ()) } } /// Like [`SslContextBuilder::set_tmp_dh_callback`]. /// /// This corresponds to [`SSL_set_tmp_dh_callback`]. /// /// [`SslContextBuilder::set_tmp_dh_callback`]: struct.SslContextBuilder.html#method.set_tmp_dh_callback /// [`SSL_set_tmp_dh_callback`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tmp_dh.html pub fn set_tmp_dh_callback(&mut self, callback: F) where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { unsafe { // this needs to be in an Arc since the callback can register a new callback! self.set_ex_data(Ssl::cached_ex_index(), Arc::new(callback)); ffi::SSL_set_tmp_dh_callback(self.as_ptr(), raw_tmp_dh_ssl::); } } /// Like [`SslContextBuilder::set_tmp_ecdh`]. /// /// This corresponds to `SSL_set_tmp_ecdh`. /// /// [`SslContextBuilder::set_tmp_ecdh`]: struct.SslContextBuilder.html#method.set_tmp_ecdh pub fn set_tmp_ecdh(&mut self, key: &EcKeyRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_set_tmp_ecdh(self.as_ptr(), key.as_ptr()) as c_int).map(|_| ()) } } /// Like [`SslContextBuilder::set_tmp_ecdh_callback`]. /// /// Requires OpenSSL 1.0.1 or 1.0.2. /// /// This corresponds to `SSL_set_tmp_ecdh_callback`. /// /// [`SslContextBuilder::set_tmp_ecdh_callback`]: struct.SslContextBuilder.html#method.set_tmp_ecdh_callback #[cfg(any(all(ossl101, not(ossl110))))] pub fn set_tmp_ecdh_callback(&mut self, callback: F) where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { unsafe { // this needs to be in an Arc since the callback can register a new callback! self.set_ex_data(Ssl::cached_ex_index(), Arc::new(callback)); ffi::SSL_set_tmp_ecdh_callback(self.as_ptr(), raw_tmp_ecdh_ssl::); } } /// Like [`SslContextBuilder::set_ecdh_auto`]. /// /// Requires OpenSSL 1.0.2. /// /// This corresponds to [`SSL_set_ecdh_auto`]. /// /// [`SslContextBuilder::set_tmp_ecdh`]: struct.SslContextBuilder.html#method.set_tmp_ecdh /// [`SSL_set_ecdh_auto`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_ecdh_auto.html #[cfg(all(ossl102, not(ossl110)))] pub fn set_ecdh_auto(&mut self, onoff: bool) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_set_ecdh_auto(self.as_ptr(), onoff as c_int)).map(|_| ()) } } /// Like [`SslContextBuilder::set_alpn_protos`]. /// /// Requires OpenSSL 1.0.2 or LibreSSL 2.6.1 or newer. /// /// This corresponds to [`SSL_set_alpn_protos`]. /// /// [`SslContextBuilder::set_alpn_protos`]: struct.SslContextBuilder.html#method.set_alpn_protos /// [`SSL_set_alpn_protos`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_set_alpn_protos.html #[cfg(any(ossl102, libressl261))] pub fn set_alpn_protos(&mut self, protocols: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(protocols.len() <= c_uint::max_value() as usize); let r = ffi::SSL_set_alpn_protos( self.as_ptr(), protocols.as_ptr(), protocols.len() as c_uint, ); // fun fact, SSL_set_alpn_protos has a reversed return code D: if r == 0 { Ok(()) } else { Err(ErrorStack::get()) } } } /// Returns the current cipher if the session is active. /// /// This corresponds to [`SSL_get_current_cipher`]. 
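// The protocol list passed to `set_alpn_protos` uses OpenSSL's length-prefixed wire
// format: each protocol name is preceded by a single length byte. A sketch offering
// HTTP/2 and HTTP/1.1 on one connection (requires OpenSSL 1.0.2 / LibreSSL 2.6.1+):
use openssl::error::ErrorStack;
use openssl::ssl::SslRef;

fn offer_h2_and_h1(ssl: &mut SslRef) -> Result<(), ErrorStack> {
    ssl.set_alpn_protos(b"\x02h2\x08http/1.1")
}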
/// /// [`SSL_get_current_cipher`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_current_cipher.html pub fn current_cipher(&self) -> Option<&SslCipherRef> { unsafe { let ptr = ffi::SSL_get_current_cipher(self.as_ptr()); SslCipherRef::from_const_ptr_opt(ptr) } } /// Returns a short string describing the state of the session. /// /// This corresponds to [`SSL_state_string`]. /// /// [`SSL_state_string`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_state_string.html pub fn state_string(&self) -> &'static str { let state = unsafe { let ptr = ffi::SSL_state_string(self.as_ptr()); CStr::from_ptr(ptr as *const _) }; str::from_utf8(state.to_bytes()).unwrap() } /// Returns a longer string describing the state of the session. /// /// This corresponds to [`SSL_state_string_long`]. /// /// [`SSL_state_string_long`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_state_string_long.html pub fn state_string_long(&self) -> &'static str { let state = unsafe { let ptr = ffi::SSL_state_string_long(self.as_ptr()); CStr::from_ptr(ptr as *const _) }; str::from_utf8(state.to_bytes()).unwrap() } /// Sets the host name to be sent to the server for Server Name Indication (SNI). /// /// It has no effect for a server-side connection. /// /// This corresponds to [`SSL_set_tlsext_host_name`]. /// /// [`SSL_set_tlsext_host_name`]: https://www.openssl.org/docs/manmaster/man3/SSL_get_servername_type.html pub fn set_hostname(&mut self, hostname: &str) -> Result<(), ErrorStack> { let cstr = CString::new(hostname).unwrap(); unsafe { cvt(ffi::SSL_set_tlsext_host_name(self.as_ptr(), cstr.as_ptr() as *mut _) as c_int) .map(|_| ()) } } /// Returns the peer's certificate, if present. /// /// This corresponds to [`SSL_get_peer_certificate`]. /// /// [`SSL_get_peer_certificate`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_peer_certificate.html pub fn peer_certificate(&self) -> Option { unsafe { let ptr = SSL_get1_peer_certificate(self.as_ptr()); X509::from_ptr_opt(ptr) } } /// Returns the certificate chain of the peer, if present. /// /// On the client side, the chain includes the leaf certificate, but on the server side it does /// not. Fun! /// /// This corresponds to [`SSL_get_peer_cert_chain`]. /// /// [`SSL_get_peer_cert_chain`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_peer_cert_chain.html pub fn peer_cert_chain(&self) -> Option<&StackRef> { unsafe { let ptr = ffi::SSL_get_peer_cert_chain(self.as_ptr()); StackRef::from_const_ptr_opt(ptr) } } /// Returns the verified certificate chain of the peer, including the leaf certificate. /// /// If verification was not successful (i.e. [`verify_result`] does not return /// [`X509VerifyResult::OK`]), this chain may be incomplete or invalid. /// /// Requires OpenSSL 1.1.0 or newer. /// /// This corresponds to [`SSL_get0_verified_chain`]. /// /// [`verify_result`]: #method.verify_result /// [`X509VerifyResult::OK`]: ../x509/struct.X509VerifyResult.html#associatedconstant.OK /// [`SSL_get0_verified_chain`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get0_verified_chain.html #[cfg(ossl110)] pub fn verified_chain(&self) -> Option<&StackRef> { unsafe { let ptr = ffi::SSL_get0_verified_chain(self.as_ptr()); StackRef::from_const_ptr_opt(ptr) } } /// Like [`SslContext::certificate`]. /// /// This corresponds to `SSL_get_certificate`. 
/// /// [`SslContext::certificate`]: struct.SslContext.html#method.certificate pub fn certificate(&self) -> Option<&X509Ref> { unsafe { let ptr = ffi::SSL_get_certificate(self.as_ptr()); X509Ref::from_const_ptr_opt(ptr) } } /// Like [`SslContext::private_key`]. /// /// This corresponds to `SSL_get_privatekey`. /// /// [`SslContext::private_key`]: struct.SslContext.html#method.private_key pub fn private_key(&self) -> Option<&PKeyRef> { unsafe { let ptr = ffi::SSL_get_privatekey(self.as_ptr()); PKeyRef::from_const_ptr_opt(ptr) } } #[deprecated(since = "0.10.5", note = "renamed to `version_str`")] pub fn version(&self) -> &str { self.version_str() } /// Returns the protocol version of the session. /// /// This corresponds to [`SSL_version`]. /// /// [`SSL_version`]: https://www.openssl.org/docs/manmaster/man3/SSL_version.html pub fn version2(&self) -> Option { unsafe { let r = ffi::SSL_version(self.as_ptr()); if r == 0 { None } else { Some(SslVersion(r)) } } } /// Returns a string describing the protocol version of the session. /// /// This corresponds to [`SSL_get_version`]. /// /// [`SSL_get_version`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_version.html pub fn version_str(&self) -> &'static str { let version = unsafe { let ptr = ffi::SSL_get_version(self.as_ptr()); CStr::from_ptr(ptr as *const _) }; str::from_utf8(version.to_bytes()).unwrap() } /// Returns the protocol selected via Application Layer Protocol Negotiation (ALPN). /// /// The protocol's name is returned is an opaque sequence of bytes. It is up to the client /// to interpret it. /// /// Requires OpenSSL 1.0.2 or LibreSSL 2.6.1 or newer. /// /// This corresponds to [`SSL_get0_alpn_selected`]. /// /// [`SSL_get0_alpn_selected`]: https://www.openssl.org/docs/manmaster/man3/SSL_get0_next_proto_negotiated.html #[cfg(any(ossl102, libressl261))] pub fn selected_alpn_protocol(&self) -> Option<&[u8]> { unsafe { let mut data: *const c_uchar = ptr::null(); let mut len: c_uint = 0; // Get the negotiated protocol from the SSL instance. // `data` will point at a `c_uchar` array; `len` will contain the length of this array. ffi::SSL_get0_alpn_selected(self.as_ptr(), &mut data, &mut len); if data.is_null() { None } else { Some(slice::from_raw_parts(data, len as usize)) } } } /// Enables the DTLS extension "use_srtp" as defined in RFC5764. /// /// This corresponds to [`SSL_set_tlsext_use_srtp`]. /// /// [`SSL_set_tlsext_use_srtp`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_use_srtp.html pub fn set_tlsext_use_srtp(&mut self, protocols: &str) -> Result<(), ErrorStack> { unsafe { let cstr = CString::new(protocols).unwrap(); let r = ffi::SSL_set_tlsext_use_srtp(self.as_ptr(), cstr.as_ptr()); // fun fact, set_tlsext_use_srtp has a reversed return code D: if r == 0 { Ok(()) } else { Err(ErrorStack::get()) } } } /// Gets all SRTP profiles that are enabled for handshake via set_tlsext_use_srtp /// /// DTLS extension "use_srtp" as defined in RFC5764 has to be enabled. /// /// This corresponds to [`SSL_get_srtp_profiles`]. /// /// [`SSL_get_srtp_profiles`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_use_srtp.html pub fn srtp_profiles(&self) -> Option<&StackRef> { unsafe { let chain = ffi::SSL_get_srtp_profiles(self.as_ptr()); StackRef::from_const_ptr_opt(chain) } } /// Gets the SRTP profile selected by handshake. /// /// DTLS extension "use_srtp" as defined in RFC5764 has to be enabled. /// /// This corresponds to [`SSL_get_selected_srtp_profile`]. 
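// After the handshake, the negotiated ALPN protocol is exposed as the opaque byte
// string described above; a sketch of branching on it:
use openssl::ssl::SslRef;

fn negotiated_http2(ssl: &SslRef) -> bool {
    ssl.selected_alpn_protocol() == Some(&b"h2"[..])
}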
/// /// [`SSL_get_selected_srtp_profile`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_use_srtp.html pub fn selected_srtp_profile(&self) -> Option<&SrtpProtectionProfileRef> { unsafe { let profile = ffi::SSL_get_selected_srtp_profile(self.as_ptr()); SrtpProtectionProfileRef::from_const_ptr_opt(profile) } } /// Returns the number of bytes remaining in the currently processed TLS record. /// /// If this is greater than 0, the next call to `read` will not call down to the underlying /// stream. /// /// This corresponds to [`SSL_pending`]. /// /// [`SSL_pending`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_pending.html pub fn pending(&self) -> usize { unsafe { ffi::SSL_pending(self.as_ptr()) as usize } } /// Returns the servername sent by the client via Server Name Indication (SNI). /// /// It is only useful on the server side. /// /// This corresponds to [`SSL_get_servername`]. /// /// # Note /// /// While the SNI specification requires that servernames be valid domain names (and therefore /// ASCII), OpenSSL does not enforce this restriction. If the servername provided by the client /// is not valid UTF-8, this function will return `None`. The `servername_raw` method returns /// the raw bytes and does not have this restriction. /// /// [`SSL_get_servername`]: https://www.openssl.org/docs/manmaster/man3/SSL_get_servername.html // FIXME maybe rethink in 0.11? pub fn servername(&self, type_: NameType) -> Option<&str> { self.servername_raw(type_) .and_then(|b| str::from_utf8(b).ok()) } /// Returns the servername sent by the client via Server Name Indication (SNI). /// /// It is only useful on the server side. /// /// This corresponds to [`SSL_get_servername`]. /// /// # Note /// /// Unlike `servername`, this method does not require the name be valid UTF-8. /// /// [`SSL_get_servername`]: https://www.openssl.org/docs/manmaster/man3/SSL_get_servername.html pub fn servername_raw(&self, type_: NameType) -> Option<&[u8]> { unsafe { let name = ffi::SSL_get_servername(self.as_ptr(), type_.0); if name.is_null() { None } else { Some(CStr::from_ptr(name as *const _).to_bytes()) } } } /// Changes the context corresponding to the current connection. /// /// It is most commonly used in the Server Name Indication (SNI) callback. /// /// This corresponds to `SSL_set_SSL_CTX`. pub fn set_ssl_context(&mut self, ctx: &SslContextRef) -> Result<(), ErrorStack> { unsafe { cvt_p(ffi::SSL_set_SSL_CTX(self.as_ptr(), ctx.as_ptr())).map(|_| ()) } } /// Returns the context corresponding to the current connection. /// /// This corresponds to [`SSL_get_SSL_CTX`]. /// /// [`SSL_get_SSL_CTX`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_get_SSL_CTX.html pub fn ssl_context(&self) -> &SslContextRef { unsafe { let ssl_ctx = ffi::SSL_get_SSL_CTX(self.as_ptr()); SslContextRef::from_ptr(ssl_ctx) } } /// Returns a mutable reference to the X509 verification configuration. /// /// Requires OpenSSL 1.0.2 or newer. /// /// This corresponds to [`SSL_get0_param`]. /// /// [`SSL_get0_param`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_get0_param.html #[cfg(any(ossl102, libressl261))] pub fn param_mut(&mut self) -> &mut X509VerifyParamRef { unsafe { X509VerifyParamRef::from_ptr_mut(ffi::SSL_get0_param(self.as_ptr())) } } /// Returns the certificate verification result. /// /// This corresponds to [`SSL_get_verify_result`]. 
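// Hostname-verification sketch built on `param_mut` above (OpenSSL 1.0.2+). Setting
// the expected host on the X509 verify parameters is broadly what `SslConnector`
// arranges for its users; the flag choice here is an illustrative assumption.
use openssl::error::ErrorStack;
use openssl::ssl::SslRef;
use openssl::x509::verify::X509CheckFlags;

fn enable_hostname_check(ssl: &mut SslRef, host: &str) -> Result<(), ErrorStack> {
    let param = ssl.param_mut();
    param.set_hostflags(X509CheckFlags::NO_PARTIAL_WILDCARDS);
    param.set_host(host)
}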
/// /// [`SSL_get_verify_result`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_get_verify_result.html pub fn verify_result(&self) -> X509VerifyResult { unsafe { X509VerifyResult::from_raw(ffi::SSL_get_verify_result(self.as_ptr()) as c_int) } } /// Returns a shared reference to the SSL session. /// /// This corresponds to [`SSL_get_session`]. /// /// [`SSL_get_session`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_session.html pub fn session(&self) -> Option<&SslSessionRef> { unsafe { let p = ffi::SSL_get_session(self.as_ptr()); SslSessionRef::from_const_ptr_opt(p) } } /// Copies the client_random value sent by the client in the TLS handshake into a buffer. /// /// Returns the number of bytes copied, or if the buffer is empty, the size of the client_random /// value. /// /// Requires OpenSSL 1.1.0 or newer. /// /// This corresponds to [`SSL_get_client_random`]. /// /// [`SSL_get_client_random`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_client_random.html #[cfg(any(ossl110))] pub fn client_random(&self, buf: &mut [u8]) -> usize { unsafe { ffi::SSL_get_client_random(self.as_ptr(), buf.as_mut_ptr() as *mut c_uchar, buf.len()) } } /// Copies the server_random value sent by the server in the TLS handshake into a buffer. /// /// Returns the number of bytes copied, or if the buffer is empty, the size of the server_random /// value. /// /// Requires OpenSSL 1.1.0 or newer. /// /// This corresponds to [`SSL_get_server_random`]. /// /// [`SSL_get_server_random`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_get_client_random.html #[cfg(any(ossl110))] pub fn server_random(&self, buf: &mut [u8]) -> usize { unsafe { ffi::SSL_get_server_random(self.as_ptr(), buf.as_mut_ptr() as *mut c_uchar, buf.len()) } } /// Derives keying material for application use in accordance to RFC 5705. /// /// This corresponds to [`SSL_export_keying_material`]. /// /// [`SSL_export_keying_material`]: https://www.openssl.org/docs/manmaster/man3/SSL_export_keying_material.html pub fn export_keying_material( &self, out: &mut [u8], label: &str, context: Option<&[u8]>, ) -> Result<(), ErrorStack> { unsafe { let (context, contextlen, use_context) = match context { Some(context) => (context.as_ptr() as *const c_uchar, context.len(), 1), None => (ptr::null(), 0, 0), }; cvt(ffi::SSL_export_keying_material( self.as_ptr(), out.as_mut_ptr() as *mut c_uchar, out.len(), label.as_ptr() as *const c_char, label.len(), context, contextlen, use_context, )) .map(|_| ()) } } /// Derives keying material for application use in accordance to RFC 5705. /// /// This function is only usable with TLSv1.3, wherein there is no distinction between an empty context and no /// context. Therefore, unlike `export_keying_material`, `context` must always be supplied. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_export_keying_material_early`]. /// /// [`SSL_export_keying_material_early`]: https://www.openssl.org/docs/manmaster/man3/SSL_export_keying_material_early.html #[cfg(ossl111)] pub fn export_keying_material_early( &self, out: &mut [u8], label: &str, context: &[u8], ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_export_keying_material_early( self.as_ptr(), out.as_mut_ptr() as *mut c_uchar, out.len(), label.as_ptr() as *const c_char, label.len(), context.as_ptr() as *const c_uchar, context.len(), )) .map(|_| ()) } } /// Sets the session to be used. /// /// This should be called before the handshake to attempt to reuse a previously established /// session. 
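// RFC 5705 exporter sketch matching `export_keying_material` above; the label,
// context, and output length are illustrative.
use openssl::error::ErrorStack;
use openssl::ssl::SslRef;

fn export_app_key(ssl: &SslRef) -> Result<[u8; 32], ErrorStack> {
    let mut out = [0u8; 32];
    ssl.export_keying_material(&mut out, "EXPORTER-example-app", Some(b"context"))?;
    Ok(out)
}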
If the server is not willing to reuse the session, a new one will be transparently /// negotiated. /// /// This corresponds to [`SSL_set_session`]. /// /// # Safety /// /// The caller of this method is responsible for ensuring that the session is associated /// with the same `SslContext` as this `Ssl`. /// /// [`SSL_set_session`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_session.html pub unsafe fn set_session(&mut self, session: &SslSessionRef) -> Result<(), ErrorStack> { cvt(ffi::SSL_set_session(self.as_ptr(), session.as_ptr())).map(|_| ()) } /// Determines if the session provided to `set_session` was successfully reused. /// /// This corresponds to [`SSL_session_reused`]. /// /// [`SSL_session_reused`]: https://www.openssl.org/docs/man1.1.0/ssl/SSL_session_reused.html pub fn session_reused(&self) -> bool { unsafe { ffi::SSL_session_reused(self.as_ptr()) != 0 } } /// Sets the status response a client wishes the server to reply with. /// /// This corresponds to [`SSL_set_tlsext_status_type`]. /// /// [`SSL_set_tlsext_status_type`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html pub fn set_status_type(&mut self, type_: StatusType) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_set_tlsext_status_type(self.as_ptr(), type_.as_raw()) as c_int).map(|_| ()) } } /// Returns the server's OCSP response, if present. /// /// This corresponds to [`SSL_get_tlsext_status_ocsp_resp`]. /// /// [`SSL_get_tlsext_status_ocsp_resp`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html pub fn ocsp_status(&self) -> Option<&[u8]> { unsafe { let mut p = ptr::null_mut(); let len = ffi::SSL_get_tlsext_status_ocsp_resp(self.as_ptr(), &mut p); if len < 0 { None } else { Some(slice::from_raw_parts(p as *const u8, len as usize)) } } } /// Sets the OCSP response to be returned to the client. /// /// This corresponds to [`SSL_set_tlsext_status_ocsp_resp`]. /// /// [`SSL_set_tlsext_status_ocsp_resp`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html pub fn set_ocsp_status(&mut self, response: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(response.len() <= c_int::max_value() as usize); let p = cvt_p(ffi::CRYPTO_malloc( response.len() as _, concat!(file!(), "\0").as_ptr() as *const _, line!() as c_int, ))?; ptr::copy_nonoverlapping(response.as_ptr(), p as *mut u8, response.len()); cvt(ffi::SSL_set_tlsext_status_ocsp_resp( self.as_ptr(), p as *mut c_uchar, response.len() as c_long, ) as c_int) .map(|_| ()) } } /// Determines if this `Ssl` is configured for server-side or client-side use. /// /// This corresponds to [`SSL_is_server`]. /// /// [`SSL_is_server`]: https://www.openssl.org/docs/manmaster/man3/SSL_is_server.html pub fn is_server(&self) -> bool { unsafe { SSL_is_server(self.as_ptr()) != 0 } } /// Sets the extra data at the specified index. /// /// This can be used to provide data to callbacks registered with the context. Use the /// `Ssl::new_ex_index` method to create an `Index`. /// /// This corresponds to [`SSL_set_ex_data`]. /// /// [`SSL_set_ex_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_ex_data.html pub fn set_ex_data(&mut self, index: Index, data: T) { unsafe { let data = Box::new(data); ffi::SSL_set_ex_data( self.as_ptr(), index.as_raw(), Box::into_raw(data) as *mut c_void, ); } } /// Returns a reference to the extra data at the specified index. /// /// This corresponds to [`SSL_get_ex_data`]. 
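// Client-side OCSP stapling sketch using the status-request APIs above: ask the
// server to staple a response before the handshake starts, then pull the raw
// DER-encoded response out afterwards (validating it is left to the caller).
use openssl::error::ErrorStack;
use openssl::ssl::{SslRef, StatusType};

fn request_stapled_ocsp(ssl: &mut SslRef) -> Result<(), ErrorStack> {
    ssl.set_status_type(StatusType::OCSP)
}

fn stapled_response(ssl: &SslRef) -> Option<Vec<u8>> {
    ssl.ocsp_status().map(|resp| resp.to_vec())
}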
/// /// [`SSL_get_ex_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_ex_data.html pub fn ex_data(&self, index: Index) -> Option<&T> { unsafe { let data = ffi::SSL_get_ex_data(self.as_ptr(), index.as_raw()); if data.is_null() { None } else { Some(&*(data as *const T)) } } } /// Returns a mutable reference to the extra data at the specified index. /// /// This corresponds to [`SSL_get_ex_data`]. /// /// [`SSL_get_ex_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_ex_data.html pub fn ex_data_mut(&mut self, index: Index) -> Option<&mut T> { unsafe { let data = ffi::SSL_get_ex_data(self.as_ptr(), index.as_raw()); if data.is_null() { None } else { Some(&mut *(data as *mut T)) } } } /// Sets the maximum amount of early data that will be accepted on this connection. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_set_max_early_data`]. /// /// [`SSL_set_max_early_data`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_set_max_early_data.html #[cfg(ossl111)] pub fn set_max_early_data(&mut self, bytes: u32) -> Result<(), ErrorStack> { if unsafe { ffi::SSL_set_max_early_data(self.as_ptr(), bytes) } == 1 { Ok(()) } else { Err(ErrorStack::get()) } } /// Gets the maximum amount of early data that can be sent on this connection. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_get_max_early_data`]. /// /// [`SSL_get_max_early_data`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_get_max_early_data.html #[cfg(ossl111)] pub fn max_early_data(&self) -> u32 { unsafe { ffi::SSL_get_max_early_data(self.as_ptr()) } } /// Copies the contents of the last Finished message sent to the peer into the provided buffer. /// /// The total size of the message is returned, so this can be used to determine the size of the /// buffer required. /// /// This corresponds to `SSL_get_finished`. pub fn finished(&self, buf: &mut [u8]) -> usize { unsafe { ffi::SSL_get_finished(self.as_ptr(), buf.as_mut_ptr() as *mut c_void, buf.len()) } } /// Copies the contents of the last Finished message received from the peer into the provided /// buffer. /// /// The total size of the message is returned, so this can be used to determine the size of the /// buffer required. /// /// This corresponds to `SSL_get_peer_finished`. pub fn peer_finished(&self, buf: &mut [u8]) -> usize { unsafe { ffi::SSL_get_peer_finished(self.as_ptr(), buf.as_mut_ptr() as *mut c_void, buf.len()) } } /// Determines if the initial handshake has been completed. /// /// This corresponds to [`SSL_is_init_finished`]. /// /// [`SSL_is_init_finished`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_is_init_finished.html #[cfg(ossl110)] pub fn is_init_finished(&self) -> bool { unsafe { ffi::SSL_is_init_finished(self.as_ptr()) != 0 } } /// Determines if the client's hello message is in the SSLv2 format. /// /// This can only be used inside of the client hello callback. Otherwise, `false` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_isv2`]. /// /// [`SSL_client_hello_isv2`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_isv2(&self) -> bool { unsafe { ffi::SSL_client_hello_isv2(self.as_ptr()) != 0 } } /// Returns the legacy version field of the client's hello message. /// /// This can only be used inside of the client hello callback. Otherwise, `None` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_get0_legacy_version`]. 
/// /// [`SSL_client_hello_get0_legacy_version`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_legacy_version(&self) -> Option { unsafe { let version = ffi::SSL_client_hello_get0_legacy_version(self.as_ptr()); if version == 0 { None } else { Some(SslVersion(version as c_int)) } } } /// Returns the random field of the client's hello message. /// /// This can only be used inside of the client hello callback. Otherwise, `None` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_get0_random`]. /// /// [`SSL_client_hello_get0_random`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_random(&self) -> Option<&[u8]> { unsafe { let mut ptr = ptr::null(); let len = ffi::SSL_client_hello_get0_random(self.as_ptr(), &mut ptr); if len == 0 { None } else { Some(slice::from_raw_parts(ptr, len)) } } } /// Returns the session ID field of the client's hello message. /// /// This can only be used inside of the client hello callback. Otherwise, `None` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_get0_session_id`]. /// /// [`SSL_client_hello_get0_session_id`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_session_id(&self) -> Option<&[u8]> { unsafe { let mut ptr = ptr::null(); let len = ffi::SSL_client_hello_get0_session_id(self.as_ptr(), &mut ptr); if len == 0 { None } else { Some(slice::from_raw_parts(ptr, len)) } } } /// Returns the ciphers field of the client's hello message. /// /// This can only be used inside of the client hello callback. Otherwise, `None` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_get0_ciphers`]. /// /// [`SSL_client_hello_get0_ciphers`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_ciphers(&self) -> Option<&[u8]> { unsafe { let mut ptr = ptr::null(); let len = ffi::SSL_client_hello_get0_ciphers(self.as_ptr(), &mut ptr); if len == 0 { None } else { Some(slice::from_raw_parts(ptr, len)) } } } /// Returns the compression methods field of the client's hello message. /// /// This can only be used inside of the client hello callback. Otherwise, `None` is returned. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_client_hello_get0_compression_methods`]. /// /// [`SSL_client_hello_get0_compression_methods`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html #[cfg(ossl111)] pub fn client_hello_compression_methods(&self) -> Option<&[u8]> { unsafe { let mut ptr = ptr::null(); let len = ffi::SSL_client_hello_get0_compression_methods(self.as_ptr(), &mut ptr); if len == 0 { None } else { Some(slice::from_raw_parts(ptr, len)) } } } /// Sets the MTU used for DTLS connections. /// /// This corresponds to `SSL_set_mtu`. pub fn set_mtu(&mut self, mtu: u32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::SSL_set_mtu(self.as_ptr(), mtu as c_long) as c_int).map(|_| ()) } } } /// An SSL stream midway through the handshake process. #[derive(Debug)] pub struct MidHandshakeSslStream { stream: SslStream, error: Error, } impl MidHandshakeSslStream { /// Returns a shared reference to the inner stream. pub fn get_ref(&self) -> &S { self.stream.get_ref() } /// Returns a mutable reference to the inner stream. 
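// Sketch of a client-hello callback (registered through
// `SslContextBuilder::set_client_hello_callback`, documented earlier) that uses the
// accessors above; they only return data while the callback is running. Requires
// OpenSSL 1.1.1+, and the logging is purely illustrative.
use openssl::ssl::{ClientHelloResponse, SslContextBuilder};

fn install_hello_inspector(builder: &mut SslContextBuilder) {
    builder.set_client_hello_callback(|ssl, _alert| {
        let cipher_bytes = ssl.client_hello_ciphers().map(|c| c.len()).unwrap_or(0);
        println!(
            "SSLv2-format hello: {}, cipher list bytes: {}",
            ssl.client_hello_isv2(),
            cipher_bytes
        );
        Ok(ClientHelloResponse::SUCCESS)
    });
}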
pub fn get_mut(&mut self) -> &mut S { self.stream.get_mut() } /// Returns a shared reference to the `Ssl` of the stream. pub fn ssl(&self) -> &SslRef { self.stream.ssl() } /// Returns the underlying error which interrupted this handshake. pub fn error(&self) -> &Error { &self.error } /// Consumes `self`, returning its error. pub fn into_error(self) -> Error { self.error } } impl MidHandshakeSslStream where S: Read + Write, { /// Restarts the handshake process. /// /// This corresponds to [`SSL_do_handshake`]. /// /// [`SSL_do_handshake`]: https://www.openssl.org/docs/manmaster/man3/SSL_do_handshake.html pub fn handshake(mut self) -> Result, HandshakeError> { match self.stream.do_handshake() { Ok(()) => Ok(self.stream), Err(error) => { self.error = error; match self.error.code() { ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => { Err(HandshakeError::WouldBlock(self)) } _ => Err(HandshakeError::Failure(self)), } } } } } /// A TLS session over a stream. pub struct SslStream { ssl: ManuallyDrop, method: ManuallyDrop, _p: PhantomData, } impl Drop for SslStream { fn drop(&mut self) { // ssl holds a reference to method internally so it has to drop first unsafe { ManuallyDrop::drop(&mut self.ssl); ManuallyDrop::drop(&mut self.method); } } } impl fmt::Debug for SslStream where S: fmt::Debug, { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.debug_struct("SslStream") .field("stream", &self.get_ref()) .field("ssl", &self.ssl()) .finish() } } impl SslStream { /// Creates a new `SslStream`. /// /// This function performs no IO; the stream will not have performed any part of the handshake /// with the peer. If the `Ssl` was configured with [`SslRef::set_connect_state`] or /// [`SslRef::set_accept_state`], the handshake can be performed automatically during the first /// call to read or write. Otherwise the `connect` and `accept` methods can be used to /// explicitly perform the handshake. /// /// This corresponds to [`SSL_set_bio`]. /// /// [`SSL_set_bio`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_bio.html pub fn new(ssl: Ssl, stream: S) -> Result { let (bio, method) = bio::new(stream)?; unsafe { ffi::SSL_set_bio(ssl.as_ptr(), bio, bio); } Ok(SslStream { ssl: ManuallyDrop::new(ssl), method: ManuallyDrop::new(method), _p: PhantomData, }) } /// Constructs an `SslStream` from a pointer to the underlying OpenSSL `SSL` struct. /// /// This is useful if the handshake has already been completed elsewhere. /// /// # Safety /// /// The caller must ensure the pointer is valid. #[deprecated( since = "0.10.32", note = "use Ssl::from_ptr and SslStream::new instead" )] pub unsafe fn from_raw_parts(ssl: *mut ffi::SSL, stream: S) -> Self { let ssl = Ssl::from_ptr(ssl); Self::new(ssl, stream).unwrap() } /// Read application data transmitted by a client before handshake completion. /// /// Useful for reducing latency, but vulnerable to replay attacks. Call /// [`SslRef::set_accept_state`] first. /// /// Returns `Ok(0)` if all early data has been read. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_read_early_data`]. 
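// Retry-loop sketch for the `HandshakeError::WouldBlock` / `handshake` flow described
// above, as used with nonblocking sockets; a real event loop would wait for socket
// readiness before each retry.
use openssl::ssl::{HandshakeError, Ssl, SslStream};
use std::io::{Read, Write};

fn drive_handshake<S: Read + Write>(ssl: Ssl, stream: S) -> Result<SslStream<S>, HandshakeError<S>> {
    let mut attempt = ssl.connect(stream);
    loop {
        match attempt {
            Ok(stream) => return Ok(stream),
            Err(HandshakeError::WouldBlock(mid)) => attempt = mid.handshake(),
            Err(e) => return Err(e),
        }
    }
}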
/// /// [`SSL_read_early_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_read_early_data.html #[cfg(ossl111)] pub fn read_early_data(&mut self, buf: &mut [u8]) -> Result { let mut read = 0; let ret = unsafe { ffi::SSL_read_early_data( self.ssl.as_ptr(), buf.as_ptr() as *mut c_void, buf.len(), &mut read, ) }; match ret { ffi::SSL_READ_EARLY_DATA_ERROR => Err(self.make_error(ret)), ffi::SSL_READ_EARLY_DATA_SUCCESS => Ok(read), ffi::SSL_READ_EARLY_DATA_FINISH => Ok(0), _ => unreachable!(), } } /// Send data to the server without blocking on handshake completion. /// /// Useful for reducing latency, but vulnerable to replay attacks. Call /// [`SslRef::set_connect_state`] first. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_write_early_data`]. /// /// [`SSL_write_early_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_write_early_data.html #[cfg(ossl111)] pub fn write_early_data(&mut self, buf: &[u8]) -> Result { let mut written = 0; let ret = unsafe { ffi::SSL_write_early_data( self.ssl.as_ptr(), buf.as_ptr() as *const c_void, buf.len(), &mut written, ) }; if ret > 0 { Ok(written as usize) } else { Err(self.make_error(ret)) } } /// Initiates a client-side TLS handshake. /// /// This corresponds to [`SSL_connect`]. /// /// # Warning /// /// OpenSSL's default configuration is insecure. It is highly recommended to use /// `SslConnector` rather than `Ssl` directly, as it manages that configuration. /// /// [`SSL_connect`]: https://www.openssl.org/docs/manmaster/man3/SSL_connect.html pub fn connect(&mut self) -> Result<(), Error> { let ret = unsafe { ffi::SSL_connect(self.ssl.as_ptr()) }; if ret > 0 { Ok(()) } else { Err(self.make_error(ret)) } } /// Initiates a server-side TLS handshake. /// /// This corresponds to [`SSL_accept`]. /// /// # Warning /// /// OpenSSL's default configuration is insecure. It is highly recommended to use /// `SslAcceptor` rather than `Ssl` directly, as it manages that configuration. /// /// [`SSL_accept`]: https://www.openssl.org/docs/manmaster/man3/SSL_accept.html pub fn accept(&mut self) -> Result<(), Error> { let ret = unsafe { ffi::SSL_accept(self.ssl.as_ptr()) }; if ret > 0 { Ok(()) } else { Err(self.make_error(ret)) } } /// Initiates the handshake. /// /// This will fail if `set_accept_state` or `set_connect_state` was not called first. /// /// This corresponds to [`SSL_do_handshake`]. /// /// [`SSL_do_handshake`]: https://www.openssl.org/docs/manmaster/man3/SSL_do_handshake.html pub fn do_handshake(&mut self) -> Result<(), Error> { let ret = unsafe { ffi::SSL_do_handshake(self.ssl.as_ptr()) }; if ret > 0 { Ok(()) } else { Err(self.make_error(ret)) } } /// Perform a stateless server-side handshake. /// /// Requires that cookie generation and verification callbacks were /// set on the SSL context. /// /// Returns `Ok(true)` if a complete ClientHello containing a valid cookie /// was read, in which case the handshake should be continued via /// `accept`. If a HelloRetryRequest containing a fresh cookie was /// transmitted, `Ok(false)` is returned instead. If the handshake cannot /// proceed at all, `Err` is returned. /// /// This corresponds to [`SSL_stateless`] /// /// [`SSL_stateless`]: https://www.openssl.org/docs/manmaster/man3/SSL_stateless.html #[cfg(ossl111)] pub fn stateless(&mut self) -> Result { match unsafe { ffi::SSL_stateless(self.ssl.as_ptr()) } { 1 => Ok(true), 0 => Ok(false), -1 => Err(ErrorStack::get()), _ => unreachable!(), } } /// Like `read`, but returns an `ssl::Error` rather than an `io::Error`. 
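// Server-side 0-RTT sketch combining `read_early_data` with a normal `accept`, as an
// assumed typical flow: drain early data first, then finish the handshake. It
// presumes the `Ssl` was placed in accept state before the stream was built, and
// early data is replayable, so it should only carry idempotent input.
use openssl::ssl::{Error, SslStream};
use std::io::{Read, Write};

fn accept_with_early_data<S: Read + Write>(stream: &mut SslStream<S>) -> Result<Vec<u8>, Error> {
    let mut early = Vec::new();
    let mut buf = [0u8; 4096];
    loop {
        match stream.read_early_data(&mut buf)? {
            0 => break, // all early data read (or none was offered)
            n => early.extend_from_slice(&buf[..n]),
        }
    }
    stream.accept()?; // complete the handshake proper
    Ok(early)
}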
/// /// It is particularly useful with a nonblocking socket, where the error value will identify if /// OpenSSL is waiting on read or write readiness. /// /// This corresponds to [`SSL_read`]. /// /// [`SSL_read`]: https://www.openssl.org/docs/manmaster/man3/SSL_read.html pub fn ssl_read(&mut self, buf: &mut [u8]) -> Result { // The interpretation of the return code here is a little odd with a // zero-length write. OpenSSL will likely correctly report back to us // that it read zero bytes, but zero is also the sentinel for "error". // To avoid that confusion short-circuit that logic and return quickly // if `buf` has a length of zero. if buf.is_empty() { return Ok(0); } let ret = self.ssl.read(buf); if ret > 0 { Ok(ret as usize) } else { Err(self.make_error(ret)) } } /// Like `write`, but returns an `ssl::Error` rather than an `io::Error`. /// /// It is particularly useful with a nonblocking socket, where the error value will identify if /// OpenSSL is waiting on read or write readiness. /// /// This corresponds to [`SSL_write`]. /// /// [`SSL_write`]: https://www.openssl.org/docs/manmaster/man3/SSL_write.html pub fn ssl_write(&mut self, buf: &[u8]) -> Result { // See above for why we short-circuit on zero-length buffers if buf.is_empty() { return Ok(0); } let ret = self.ssl.write(buf); if ret > 0 { Ok(ret as usize) } else { Err(self.make_error(ret)) } } /// Reads data from the stream, without removing it from the queue. /// /// This corresponds to [`SSL_peek`]. /// /// [`SSL_peek`]: https://www.openssl.org/docs/manmaster/man3/SSL_peek.html pub fn ssl_peek(&mut self, buf: &mut [u8]) -> Result { // See above for why we short-circuit on zero-length buffers if buf.is_empty() { return Ok(0); } let ret = self.ssl.peek(buf); if ret > 0 { Ok(ret as usize) } else { Err(self.make_error(ret)) } } /// Shuts down the session. /// /// The shutdown process consists of two steps. The first step sends a close notify message to /// the peer, after which `ShutdownResult::Sent` is returned. The second step awaits the receipt /// of a close notify message from the peer, after which `ShutdownResult::Received` is returned. /// /// While the connection may be closed after the first step, it is recommended to fully shut the /// session down. In particular, it must be fully shut down if the connection is to be used for /// further communication in the future. /// /// This corresponds to [`SSL_shutdown`]. /// /// [`SSL_shutdown`]: https://www.openssl.org/docs/man1.0.2/ssl/SSL_shutdown.html pub fn shutdown(&mut self) -> Result { match unsafe { ffi::SSL_shutdown(self.ssl.as_ptr()) } { 0 => Ok(ShutdownResult::Sent), 1 => Ok(ShutdownResult::Received), n => Err(self.make_error(n)), } } /// Returns the session's shutdown state. /// /// This corresponds to [`SSL_get_shutdown`]. /// /// [`SSL_get_shutdown`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_set_shutdown.html pub fn get_shutdown(&mut self) -> ShutdownState { unsafe { let bits = ffi::SSL_get_shutdown(self.ssl.as_ptr()); ShutdownState { bits } } } /// Sets the session's shutdown state. /// /// This can be used to tell OpenSSL that the session should be cached even if a full two-way /// shutdown was not completed. /// /// This corresponds to [`SSL_set_shutdown`]. 
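    ///
    /// # Example
    ///
    /// A sketch (editor's addition, not from the upstream crate). It assumes an established
    /// `SslStream` named `stream` that the application wants to close without waiting for the
    /// peer's close notify, while still treating the session as cleanly shut down:
    ///
    /// ```ignore
    /// use openssl::ssl::{ShutdownResult, ShutdownState};
    ///
    /// // First step: send our close notify to the peer.
    /// assert_eq!(stream.shutdown().unwrap(), ShutdownResult::Sent);
    ///
    /// // Record the peer's close notify as received as well, so the session counts as fully
    /// // shut down (e.g. for session caching purposes).
    /// stream.set_shutdown(ShutdownState::SENT | ShutdownState::RECEIVED);
    /// assert_eq!(
    ///     stream.get_shutdown(),
    ///     ShutdownState::SENT | ShutdownState::RECEIVED
    /// );
    /// ```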
/// /// [`SSL_set_shutdown`]: https://www.openssl.org/docs/man1.1.1/man3/SSL_set_shutdown.html pub fn set_shutdown(&mut self, state: ShutdownState) { unsafe { ffi::SSL_set_shutdown(self.ssl.as_ptr(), state.bits()) } } } impl SslStream { fn make_error(&mut self, ret: c_int) -> Error { self.check_panic(); let code = self.ssl.get_error(ret); let cause = match code { ErrorCode::SSL => Some(InnerError::Ssl(ErrorStack::get())), ErrorCode::SYSCALL => { let errs = ErrorStack::get(); if errs.errors().is_empty() { self.get_bio_error().map(InnerError::Io) } else { Some(InnerError::Ssl(errs)) } } ErrorCode::ZERO_RETURN => None, ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => { self.get_bio_error().map(InnerError::Io) } _ => None, }; Error { code, cause } } fn check_panic(&mut self) { if let Some(err) = unsafe { bio::take_panic::(self.ssl.get_raw_rbio()) } { resume_unwind(err) } } fn get_bio_error(&mut self) -> Option { unsafe { bio::take_error::(self.ssl.get_raw_rbio()) } } /// Returns a shared reference to the underlying stream. pub fn get_ref(&self) -> &S { unsafe { let bio = self.ssl.get_raw_rbio(); bio::get_ref(bio) } } /// Returns a mutable reference to the underlying stream. /// /// # Warning /// /// It is inadvisable to read from or write to the underlying stream as it /// will most likely corrupt the SSL session. pub fn get_mut(&mut self) -> &mut S { unsafe { let bio = self.ssl.get_raw_rbio(); bio::get_mut(bio) } } /// Returns a shared reference to the `Ssl` object associated with this stream. pub fn ssl(&self) -> &SslRef { &self.ssl } } impl Read for SslStream { fn read(&mut self, buf: &mut [u8]) -> io::Result { loop { match self.ssl_read(buf) { Ok(n) => return Ok(n), Err(ref e) if e.code() == ErrorCode::ZERO_RETURN => return Ok(0), Err(ref e) if e.code() == ErrorCode::SYSCALL && e.io_error().is_none() => { return Ok(0); } Err(ref e) if e.code() == ErrorCode::WANT_READ && e.io_error().is_none() => {} Err(e) => { return Err(e .into_io_error() .unwrap_or_else(|e| io::Error::new(io::ErrorKind::Other, e))); } } } } } impl Write for SslStream { fn write(&mut self, buf: &[u8]) -> io::Result { loop { match self.ssl_write(buf) { Ok(n) => return Ok(n), Err(ref e) if e.code() == ErrorCode::WANT_READ && e.io_error().is_none() => {} Err(e) => { return Err(e .into_io_error() .unwrap_or_else(|e| io::Error::new(io::ErrorKind::Other, e))); } } } } fn flush(&mut self) -> io::Result<()> { self.get_mut().flush() } } /// A partially constructed `SslStream`, useful for unusual handshakes. #[deprecated( since = "0.10.32", note = "use the methods directly on Ssl/SslStream instead" )] pub struct SslStreamBuilder { inner: SslStream, } #[allow(deprecated)] impl SslStreamBuilder where S: Read + Write, { /// Begin creating an `SslStream` atop `stream` pub fn new(ssl: Ssl, stream: S) -> Self { Self { inner: SslStream::new(ssl, stream).unwrap(), } } /// Perform a stateless server-side handshake /// /// Requires that cookie generation and verification callbacks were /// set on the SSL context. /// /// Returns `Ok(true)` if a complete ClientHello containing a valid cookie /// was read, in which case the handshake should be continued via /// `accept`. If a HelloRetryRequest containing a fresh cookie was /// transmitted, `Ok(false)` is returned instead. If the handshake cannot /// proceed at all, `Err` is returned. 
/// /// This corresponds to [`SSL_stateless`] /// /// [`SSL_stateless`]: https://www.openssl.org/docs/manmaster/man3/SSL_stateless.html #[cfg(ossl111)] pub fn stateless(&mut self) -> Result { match unsafe { ffi::SSL_stateless(self.inner.ssl.as_ptr()) } { 1 => Ok(true), 0 => Ok(false), -1 => Err(ErrorStack::get()), _ => unreachable!(), } } /// Configure as an outgoing stream from a client. /// /// This corresponds to [`SSL_set_connect_state`]. /// /// [`SSL_set_connect_state`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_connect_state.html pub fn set_connect_state(&mut self) { unsafe { ffi::SSL_set_connect_state(self.inner.ssl.as_ptr()) } } /// Configure as an incoming stream to a server. /// /// This corresponds to [`SSL_set_accept_state`]. /// /// [`SSL_set_accept_state`]: https://www.openssl.org/docs/manmaster/man3/SSL_set_accept_state.html pub fn set_accept_state(&mut self) { unsafe { ffi::SSL_set_accept_state(self.inner.ssl.as_ptr()) } } /// See `Ssl::connect` pub fn connect(mut self) -> Result, HandshakeError> { match self.inner.connect() { Ok(()) => Ok(self.inner), Err(error) => match error.code() { ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => { Err(HandshakeError::WouldBlock(MidHandshakeSslStream { stream: self.inner, error, })) } _ => Err(HandshakeError::Failure(MidHandshakeSslStream { stream: self.inner, error, })), }, } } /// See `Ssl::accept` pub fn accept(mut self) -> Result, HandshakeError> { match self.inner.accept() { Ok(()) => Ok(self.inner), Err(error) => match error.code() { ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => { Err(HandshakeError::WouldBlock(MidHandshakeSslStream { stream: self.inner, error, })) } _ => Err(HandshakeError::Failure(MidHandshakeSslStream { stream: self.inner, error, })), }, } } /// Initiates the handshake. /// /// This will fail if `set_accept_state` or `set_connect_state` was not called first. /// /// This corresponds to [`SSL_do_handshake`]. /// /// [`SSL_do_handshake`]: https://www.openssl.org/docs/manmaster/man3/SSL_do_handshake.html pub fn handshake(mut self) -> Result, HandshakeError> { match self.inner.do_handshake() { Ok(()) => Ok(self.inner), Err(error) => match error.code() { ErrorCode::WANT_READ | ErrorCode::WANT_WRITE => { Err(HandshakeError::WouldBlock(MidHandshakeSslStream { stream: self.inner, error, })) } _ => Err(HandshakeError::Failure(MidHandshakeSslStream { stream: self.inner, error, })), }, } } /// Read application data transmitted by a client before handshake /// completion. /// /// Useful for reducing latency, but vulnerable to replay attacks. Call /// `set_accept_state` first. /// /// Returns `Ok(0)` if all early data has been read. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_read_early_data`]. /// /// [`SSL_read_early_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_read_early_data.html #[cfg(ossl111)] pub fn read_early_data(&mut self, buf: &mut [u8]) -> Result { self.inner.read_early_data(buf) } /// Send data to the server without blocking on handshake completion. /// /// Useful for reducing latency, but vulnerable to replay attacks. Call /// `set_connect_state` first. /// /// Requires OpenSSL 1.1.1 or newer. /// /// This corresponds to [`SSL_write_early_data`]. 
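    ///
    /// # Example
    ///
    /// A client-side sketch (editor's addition, not from the upstream crate). Because this
    /// builder is deprecated, the sketch uses the equivalent calls on `Ssl`/`SslStream`
    /// directly; it assumes OpenSSL 1.1.1+, an `SslContext` named `ctx`, a connected `TcpStream`
    /// named `socket`, and an `SslSession` named `session` saved from an earlier connection that
    /// permitted early data:
    ///
    /// ```ignore
    /// use openssl::ssl::{Ssl, SslStream};
    ///
    /// let mut ssl = Ssl::new(&ctx).unwrap();
    /// ssl.set_connect_state();
    /// // Resuming a TLS 1.3 session is what makes 0-RTT possible.
    /// unsafe {
    ///     ssl.set_session(&session).unwrap();
    /// }
    ///
    /// let mut stream = SslStream::new(ssl, socket).unwrap();
    /// let written = stream.write_early_data(b"GET / HTTP/1.1\r\n\r\n").unwrap();
    /// assert!(written > 0);
    ///
    /// // Complete the handshake and continue with normal reads and writes.
    /// stream.do_handshake().unwrap();
    /// ```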
/// /// [`SSL_write_early_data`]: https://www.openssl.org/docs/manmaster/man3/SSL_write_early_data.html #[cfg(ossl111)] pub fn write_early_data(&mut self, buf: &[u8]) -> Result { self.inner.write_early_data(buf) } } #[allow(deprecated)] impl SslStreamBuilder { /// Returns a shared reference to the underlying stream. pub fn get_ref(&self) -> &S { unsafe { let bio = self.inner.ssl.get_raw_rbio(); bio::get_ref(bio) } } /// Returns a mutable reference to the underlying stream. /// /// # Warning /// /// It is inadvisable to read from or write to the underlying stream as it /// will most likely corrupt the SSL session. pub fn get_mut(&mut self) -> &mut S { unsafe { let bio = self.inner.ssl.get_raw_rbio(); bio::get_mut(bio) } } /// Returns a shared reference to the `Ssl` object associated with this builder. pub fn ssl(&self) -> &SslRef { &self.inner.ssl } /// Set the DTLS MTU size. /// /// It will be ignored if the value is smaller than the minimum packet size /// the DTLS protocol requires. /// /// # Panics /// This function panics if the given mtu size can't be represented in a positive `c_long` range #[deprecated(note = "Use SslRef::set_mtu instead", since = "0.10.30")] pub fn set_dtls_mtu_size(&mut self, mtu_size: usize) { unsafe { let bio = self.inner.ssl.get_raw_rbio(); bio::set_dtls_mtu_size::(bio, mtu_size); } } } /// The result of a shutdown request. #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub enum ShutdownResult { /// A close notify message has been sent to the peer. Sent, /// A close notify response message has been received from the peer. Received, } bitflags! { /// The shutdown state of a session. pub struct ShutdownState: c_int { /// A close notify message has been sent to the peer. const SENT = ffi::SSL_SENT_SHUTDOWN; /// A close notify message has been received from the peer. const RECEIVED = ffi::SSL_RECEIVED_SHUTDOWN; } } cfg_if! { if #[cfg(any(ossl110, libressl273))] { use ffi::{SSL_CTX_up_ref, SSL_SESSION_get_master_key, SSL_SESSION_up_ref, SSL_is_server}; } else { #[allow(bad_style)] pub unsafe fn SSL_CTX_up_ref(ssl: *mut ffi::SSL_CTX) -> c_int { ffi::CRYPTO_add_lock( &mut (*ssl).references, 1, ffi::CRYPTO_LOCK_SSL_CTX, "mod.rs\0".as_ptr() as *const _, line!() as c_int, ); 0 } #[allow(bad_style)] pub unsafe fn SSL_SESSION_get_master_key( session: *const ffi::SSL_SESSION, out: *mut c_uchar, mut outlen: usize, ) -> usize { if outlen == 0 { return (*session).master_key_length as usize; } if outlen > (*session).master_key_length as usize { outlen = (*session).master_key_length as usize; } ptr::copy_nonoverlapping((*session).master_key.as_ptr(), out, outlen); outlen } #[allow(bad_style)] pub unsafe fn SSL_is_server(s: *mut ffi::SSL) -> c_int { (*s).server } #[allow(bad_style)] pub unsafe fn SSL_SESSION_up_ref(ses: *mut ffi::SSL_SESSION) -> c_int { ffi::CRYPTO_add_lock( &mut (*ses).references, 1, ffi::CRYPTO_LOCK_SSL_CTX, "mod.rs\0".as_ptr() as *const _, line!() as c_int, ); 0 } } } cfg_if! { if #[cfg(ossl300)] { use ffi::SSL_get1_peer_certificate; } else { use ffi::SSL_get_peer_certificate as SSL_get1_peer_certificate; } } cfg_if! { if #[cfg(any(ossl110, libressl291))] { use ffi::{TLS_method, DTLS_method, TLS_client_method, TLS_server_method}; } else { use ffi::{ SSLv23_method as TLS_method, DTLSv1_method as DTLS_method, SSLv23_client_method as TLS_client_method, SSLv23_server_method as TLS_server_method, }; } } cfg_if! 
{ if #[cfg(ossl110)] { unsafe fn get_new_idx(f: ffi::CRYPTO_EX_free) -> c_int { ffi::CRYPTO_get_ex_new_index( ffi::CRYPTO_EX_INDEX_SSL_CTX, 0, ptr::null_mut(), None, None, Some(f), ) } unsafe fn get_new_ssl_idx(f: ffi::CRYPTO_EX_free) -> c_int { ffi::CRYPTO_get_ex_new_index( ffi::CRYPTO_EX_INDEX_SSL, 0, ptr::null_mut(), None, None, Some(f), ) } } else { use std::sync::Once; unsafe fn get_new_idx(f: ffi::CRYPTO_EX_free) -> c_int { // hack around https://rt.openssl.org/Ticket/Display.html?id=3710&user=guest&pass=guest static ONCE: Once = Once::new(); ONCE.call_once(|| { ffi::SSL_CTX_get_ex_new_index(0, ptr::null_mut(), None, None, None); }); ffi::SSL_CTX_get_ex_new_index(0, ptr::null_mut(), None, None, Some(f)) } unsafe fn get_new_ssl_idx(f: ffi::CRYPTO_EX_free) -> c_int { // hack around https://rt.openssl.org/Ticket/Display.html?id=3710&user=guest&pass=guest static ONCE: Once = Once::new(); ONCE.call_once(|| { ffi::SSL_get_ex_new_index(0, ptr::null_mut(), None, None, None); }); ffi::SSL_get_ex_new_index(0, ptr::null_mut(), None, None, Some(f)) } } } vendor/openssl/src/ssl/callbacks.rs0000664000175000017500000004671614160055207020225 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use foreign_types::ForeignType; use foreign_types::ForeignTypeRef; #[cfg(any(ossl111, not(osslconf = "OPENSSL_NO_PSK")))] use libc::c_char; #[cfg(ossl111)] use libc::size_t; use libc::{c_int, c_uchar, c_uint, c_void}; #[cfg(any(ossl111, not(osslconf = "OPENSSL_NO_PSK")))] use std::ffi::CStr; use std::mem; use std::ptr; use std::slice; #[cfg(ossl111)] use std::str; use std::sync::Arc; use crate::dh::Dh; #[cfg(all(ossl101, not(ossl110)))] use crate::ec::EcKey; use crate::error::ErrorStack; use crate::pkey::Params; #[cfg(any(ossl102, libressl261))] use crate::ssl::AlpnError; use crate::ssl::{ try_get_session_ctx_index, SniError, Ssl, SslAlert, SslContext, SslContextRef, SslRef, SslSession, SslSessionRef, }; #[cfg(ossl111)] use crate::ssl::{ClientHelloResponse, ExtensionContext}; #[cfg(ossl111)] use crate::util::ForeignTypeRefExt; #[cfg(ossl111)] use crate::x509::X509Ref; use crate::x509::{X509StoreContext, X509StoreContextRef}; pub extern "C" fn raw_verify(preverify_ok: c_int, x509_ctx: *mut ffi::X509_STORE_CTX) -> c_int where F: Fn(bool, &mut X509StoreContextRef) -> bool + 'static + Sync + Send, { unsafe { let ctx = X509StoreContextRef::from_ptr_mut(x509_ctx); let ssl_idx = X509StoreContext::ssl_idx().expect("BUG: store context ssl index missing"); let verify_idx = SslContext::cached_ex_index::(); // raw pointer shenanigans to break the borrow of ctx // the callback can't mess with its own ex_data slot so this is safe let verify = ctx .ex_data(ssl_idx) .expect("BUG: store context missing ssl") .ssl_context() .ex_data(verify_idx) .expect("BUG: verify callback missing") as *const F; (*verify)(preverify_ok != 0, ctx) as c_int } } #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] pub extern "C" fn raw_client_psk( ssl: *mut ffi::SSL, hint: *const c_char, identity: *mut c_char, max_identity_len: c_uint, psk: *mut c_uchar, max_psk_len: c_uint, ) -> c_uint where F: Fn(&mut SslRef, Option<&[u8]>, &mut [u8], &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback_idx = SslContext::cached_ex_index::(); let callback = ssl .ssl_context() .ex_data(callback_idx) .expect("BUG: psk callback missing") as *const F; let hint = if !hint.is_null() { Some(CStr::from_ptr(hint).to_bytes()) } else { None }; // Give the callback mutable slices into which it can write the identity and psk. 
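        // The buffer lengths come straight from OpenSSL, which supplies writable buffers of
        // `max_identity_len` and `max_psk_len` bytes for the duration of this call, so building
        // slices from the raw pointers below is sound. The callback fills them in and returns
        // the PSK length; on error the `ErrorStack` is recorded and 0 is returned, which
        // OpenSSL treats as a failed PSK lookup.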
let identity_sl = slice::from_raw_parts_mut(identity as *mut u8, max_identity_len as usize); let psk_sl = slice::from_raw_parts_mut(psk as *mut u8, max_psk_len as usize); match (*callback)(ssl, hint, identity_sl, psk_sl) { Ok(psk_len) => psk_len as u32, Err(e) => { e.put(); 0 } } } } #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] pub extern "C" fn raw_server_psk( ssl: *mut ffi::SSL, identity: *const c_char, psk: *mut c_uchar, max_psk_len: c_uint, ) -> c_uint where F: Fn(&mut SslRef, Option<&[u8]>, &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback_idx = SslContext::cached_ex_index::(); let callback = ssl .ssl_context() .ex_data(callback_idx) .expect("BUG: psk callback missing") as *const F; let identity = if identity.is_null() { None } else { Some(CStr::from_ptr(identity).to_bytes()) }; // Give the callback mutable slices into which it can write the psk. let psk_sl = slice::from_raw_parts_mut(psk as *mut u8, max_psk_len as usize); match (*callback)(ssl, identity, psk_sl) { Ok(psk_len) => psk_len as u32, Err(e) => { e.put(); 0 } } } } pub extern "C" fn ssl_raw_verify( preverify_ok: c_int, x509_ctx: *mut ffi::X509_STORE_CTX, ) -> c_int where F: Fn(bool, &mut X509StoreContextRef) -> bool + 'static + Sync + Send, { unsafe { let ctx = X509StoreContextRef::from_ptr_mut(x509_ctx); let ssl_idx = X509StoreContext::ssl_idx().expect("BUG: store context ssl index missing"); let callback_idx = Ssl::cached_ex_index::>(); let callback = ctx .ex_data(ssl_idx) .expect("BUG: store context missing ssl") .ex_data(callback_idx) .expect("BUG: ssl verify callback missing") .clone(); callback(preverify_ok != 0, ctx) as c_int } } pub extern "C" fn raw_sni(ssl: *mut ffi::SSL, al: *mut c_int, arg: *mut c_void) -> c_int where F: Fn(&mut SslRef, &mut SslAlert) -> Result<(), SniError> + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = arg as *const F; let mut alert = SslAlert(*al); let r = (*callback)(ssl, &mut alert); *al = alert.0; match r { Ok(()) => ffi::SSL_TLSEXT_ERR_OK, Err(e) => e.0, } } } #[cfg(any(ossl102, libressl261))] pub extern "C" fn raw_alpn_select( ssl: *mut ffi::SSL, out: *mut *const c_uchar, outlen: *mut c_uchar, inbuf: *const c_uchar, inlen: c_uint, _arg: *mut c_void, ) -> c_int where F: for<'a> Fn(&mut SslRef, &'a [u8]) -> Result<&'a [u8], AlpnError> + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: alpn callback missing") as *const F; let protos = slice::from_raw_parts(inbuf as *const u8, inlen as usize); match (*callback)(ssl, protos) { Ok(proto) => { *out = proto.as_ptr() as *const c_uchar; *outlen = proto.len() as c_uchar; ffi::SSL_TLSEXT_ERR_OK } Err(e) => e.0, } } } pub unsafe extern "C" fn raw_tmp_dh( ssl: *mut ffi::SSL, is_export: c_int, keylength: c_int, ) -> *mut ffi::DH where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: tmp dh callback missing") as *const F; match (*callback)(ssl, is_export != 0, keylength as u32) { Ok(dh) => { let ptr = dh.as_ptr(); mem::forget(dh); ptr } Err(e) => { e.put(); ptr::null_mut() } } } #[cfg(all(ossl101, not(ossl110)))] pub unsafe extern "C" fn raw_tmp_ecdh( ssl: *mut ffi::SSL, is_export: c_int, keylength: c_int, ) -> *mut ffi::EC_KEY where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static 
+ Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: tmp ecdh callback missing") as *const F; match (*callback)(ssl, is_export != 0, keylength as u32) { Ok(ec_key) => { let ptr = ec_key.as_ptr(); mem::forget(ec_key); ptr } Err(e) => { e.put(); ptr::null_mut() } } } pub unsafe extern "C" fn raw_tmp_dh_ssl( ssl: *mut ffi::SSL, is_export: c_int, keylength: c_int, ) -> *mut ffi::DH where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ex_data(Ssl::cached_ex_index::>()) .expect("BUG: ssl tmp dh callback missing") .clone(); match callback(ssl, is_export != 0, keylength as u32) { Ok(dh) => { let ptr = dh.as_ptr(); mem::forget(dh); ptr } Err(e) => { e.put(); ptr::null_mut() } } } #[cfg(all(ossl101, not(ossl110)))] pub unsafe extern "C" fn raw_tmp_ecdh_ssl( ssl: *mut ffi::SSL, is_export: c_int, keylength: c_int, ) -> *mut ffi::EC_KEY where F: Fn(&mut SslRef, bool, u32) -> Result, ErrorStack> + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ex_data(Ssl::cached_ex_index::>()) .expect("BUG: ssl tmp ecdh callback missing") .clone(); match callback(ssl, is_export != 0, keylength as u32) { Ok(ec_key) => { let ptr = ec_key.as_ptr(); mem::forget(ec_key); ptr } Err(e) => { e.put(); ptr::null_mut() } } } pub unsafe extern "C" fn raw_tlsext_status(ssl: *mut ffi::SSL, _: *mut c_void) -> c_int where F: Fn(&mut SslRef) -> Result + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: ocsp callback missing") as *const F; let ret = (*callback)(ssl); if ssl.is_server() { match ret { Ok(true) => ffi::SSL_TLSEXT_ERR_OK, Ok(false) => ffi::SSL_TLSEXT_ERR_NOACK, Err(e) => { e.put(); ffi::SSL_TLSEXT_ERR_ALERT_FATAL } } } else { match ret { Ok(true) => 1, Ok(false) => 0, Err(e) => { e.put(); -1 } } } } pub unsafe extern "C" fn raw_new_session( ssl: *mut ffi::SSL, session: *mut ffi::SSL_SESSION, ) -> c_int where F: Fn(&mut SslRef, SslSession) + 'static + Sync + Send, { let session_ctx_index = try_get_session_ctx_index().expect("BUG: session context index initialization failed"); let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ex_data(*session_ctx_index) .expect("BUG: session context missing") .ex_data(SslContext::cached_ex_index::()) .expect("BUG: new session callback missing") as *const F; let session = SslSession::from_ptr(session); (*callback)(ssl, session); // the return code doesn't indicate error vs success, but whether or not we consumed the session 1 } pub unsafe extern "C" fn raw_remove_session( ctx: *mut ffi::SSL_CTX, session: *mut ffi::SSL_SESSION, ) where F: Fn(&SslContextRef, &SslSessionRef) + 'static + Sync + Send, { let ctx = SslContextRef::from_ptr(ctx); let callback = ctx .ex_data(SslContext::cached_ex_index::()) .expect("BUG: remove session callback missing"); let session = SslSessionRef::from_ptr(session); callback(ctx, session) } cfg_if! 
{ if #[cfg(any(ossl110, libressl280))] { type DataPtr = *const c_uchar; } else { type DataPtr = *mut c_uchar; } } pub unsafe extern "C" fn raw_get_session( ssl: *mut ffi::SSL, data: DataPtr, len: c_int, copy: *mut c_int, ) -> *mut ffi::SSL_SESSION where F: Fn(&mut SslRef, &[u8]) -> Option + 'static + Sync + Send, { let session_ctx_index = try_get_session_ctx_index().expect("BUG: session context index initialization failed"); let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ex_data(*session_ctx_index) .expect("BUG: session context missing") .ex_data(SslContext::cached_ex_index::()) .expect("BUG: get session callback missing") as *const F; let data = slice::from_raw_parts(data as *const u8, len as usize); match (*callback)(ssl, data) { Some(session) => { let p = session.as_ptr(); mem::forget(session); *copy = 0; p } None => ptr::null_mut(), } } #[cfg(ossl111)] pub unsafe extern "C" fn raw_keylog(ssl: *const ffi::SSL, line: *const c_char) where F: Fn(&SslRef, &str) + 'static + Sync + Send, { let ssl = SslRef::from_const_ptr(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: get session callback missing"); let line = CStr::from_ptr(line).to_bytes(); let line = str::from_utf8_unchecked(line); callback(ssl, line); } #[cfg(ossl111)] pub unsafe extern "C" fn raw_stateless_cookie_generate( ssl: *mut ffi::SSL, cookie: *mut c_uchar, cookie_len: *mut size_t, ) -> c_int where F: Fn(&mut SslRef, &mut [u8]) -> Result + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: stateless cookie generate callback missing") as *const F; let slice = slice::from_raw_parts_mut(cookie as *mut u8, ffi::SSL_COOKIE_LENGTH as usize); match (*callback)(ssl, slice) { Ok(len) => { *cookie_len = len as size_t; 1 } Err(e) => { e.put(); 0 } } } #[cfg(ossl111)] pub unsafe extern "C" fn raw_stateless_cookie_verify( ssl: *mut ffi::SSL, cookie: *const c_uchar, cookie_len: size_t, ) -> c_int where F: Fn(&mut SslRef, &[u8]) -> bool + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: stateless cookie verify callback missing") as *const F; let slice = slice::from_raw_parts(cookie as *const c_uchar as *const u8, cookie_len as usize); (*callback)(ssl, slice) as c_int } pub extern "C" fn raw_cookie_generate( ssl: *mut ffi::SSL, cookie: *mut c_uchar, cookie_len: *mut c_uint, ) -> c_int where F: Fn(&mut SslRef, &mut [u8]) -> Result + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: cookie generate callback missing") as *const F; // We subtract 1 from DTLS1_COOKIE_LENGTH as the ostensible value, 256, is erroneous but retained for // compatibility. See comments in dtls1.h. let slice = slice::from_raw_parts_mut(cookie as *mut u8, ffi::DTLS1_COOKIE_LENGTH as usize - 1); match (*callback)(ssl, slice) { Ok(len) => { *cookie_len = len as c_uint; 1 } Err(e) => { e.put(); 0 } } } } cfg_if! 
{ if #[cfg(any(ossl110, libressl280))] { type CookiePtr = *const c_uchar; } else { type CookiePtr = *mut c_uchar; } } pub extern "C" fn raw_cookie_verify( ssl: *mut ffi::SSL, cookie: CookiePtr, cookie_len: c_uint, ) -> c_int where F: Fn(&mut SslRef, &[u8]) -> bool + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: cookie verify callback missing") as *const F; let slice = slice::from_raw_parts(cookie as *const c_uchar as *const u8, cookie_len as usize); (*callback)(ssl, slice) as c_int } } #[cfg(ossl111)] pub struct CustomExtAddState(Option); #[cfg(ossl111)] pub extern "C" fn raw_custom_ext_add( ssl: *mut ffi::SSL, _: c_uint, context: c_uint, out: *mut *const c_uchar, outlen: *mut size_t, x: *mut ffi::X509, chainidx: size_t, al: *mut c_int, _: *mut c_void, ) -> c_int where F: Fn(&mut SslRef, ExtensionContext, Option<(usize, &X509Ref)>) -> Result, SslAlert> + 'static + Sync + Send, T: AsRef<[u8]> + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: custom ext add callback missing") as *const F; let ectx = ExtensionContext::from_bits_truncate(context); let cert = if ectx.contains(ExtensionContext::TLS1_3_CERTIFICATE) { Some((chainidx, X509Ref::from_ptr(x))) } else { None }; match (*callback)(ssl, ectx, cert) { Ok(None) => 0, Ok(Some(buf)) => { *outlen = buf.as_ref().len(); *out = buf.as_ref().as_ptr(); let idx = Ssl::cached_ex_index::>(); let mut buf = Some(buf); let new = match ssl.ex_data_mut(idx) { Some(state) => { state.0 = buf.take(); false } None => true, }; if new { ssl.set_ex_data(idx, CustomExtAddState(buf)); } 1 } Err(alert) => { *al = alert.0; -1 } } } } #[cfg(ossl111)] pub extern "C" fn raw_custom_ext_free( ssl: *mut ffi::SSL, _: c_uint, _: c_uint, _: *mut *const c_uchar, _: *mut c_void, ) where T: 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let idx = Ssl::cached_ex_index::>(); if let Some(state) = ssl.ex_data_mut(idx) { state.0 = None; } } } #[cfg(ossl111)] pub extern "C" fn raw_custom_ext_parse( ssl: *mut ffi::SSL, _: c_uint, context: c_uint, input: *const c_uchar, inlen: size_t, x: *mut ffi::X509, chainidx: size_t, al: *mut c_int, _: *mut c_void, ) -> c_int where F: Fn(&mut SslRef, ExtensionContext, &[u8], Option<(usize, &X509Ref)>) -> Result<(), SslAlert> + 'static + Sync + Send, { unsafe { let ssl = SslRef::from_ptr_mut(ssl); let callback = ssl .ssl_context() .ex_data(SslContext::cached_ex_index::()) .expect("BUG: custom ext parse callback missing") as *const F; let ectx = ExtensionContext::from_bits_truncate(context); let slice = slice::from_raw_parts(input as *const u8, inlen as usize); let cert = if ectx.contains(ExtensionContext::TLS1_3_CERTIFICATE) { Some((chainidx, X509Ref::from_ptr(x))) } else { None }; match (*callback)(ssl, ectx, slice, cert) { Ok(()) => 1, Err(alert) => { *al = alert.0; 0 } } } } #[cfg(ossl111)] pub unsafe extern "C" fn raw_client_hello( ssl: *mut ffi::SSL, al: *mut c_int, arg: *mut c_void, ) -> c_int where F: Fn(&mut SslRef, &mut SslAlert) -> Result + 'static + Sync + Send, { let ssl = SslRef::from_ptr_mut(ssl); let callback = arg as *const F; let mut alert = SslAlert(*al); let r = (*callback)(ssl, &mut alert); *al = alert.0; match r { Ok(c) => c.0, Err(e) => { e.put(); ffi::SSL_CLIENT_HELLO_ERROR } } } vendor/openssl/src/ssl/test/0000775000175000017500000000000014172417313016704 5ustar 
mwhudsonmwhudsonvendor/openssl/src/ssl/test/server.rs0000664000175000017500000000757114160055207020567 0ustar mwhudsonmwhudsonuse std::io::{Read, Write}; use std::net::{SocketAddr, TcpListener, TcpStream}; use std::thread::{self, JoinHandle}; use crate::ssl::{Ssl, SslContext, SslContextBuilder, SslFiletype, SslMethod, SslRef, SslStream}; pub struct Server { handle: Option>, addr: SocketAddr, } impl Drop for Server { fn drop(&mut self) { if !thread::panicking() { self.handle.take().unwrap().join().unwrap(); } } } impl Server { pub fn builder() -> Builder { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_certificate_chain_file("test/cert.pem").unwrap(); ctx.set_private_key_file("test/key.pem", SslFiletype::PEM) .unwrap(); Builder { ctx, ssl_cb: Box::new(|_| {}), io_cb: Box::new(|_| {}), should_error: false, } } pub fn client(&self) -> ClientBuilder { ClientBuilder { ctx: SslContext::builder(SslMethod::tls()).unwrap(), addr: self.addr, } } pub fn connect_tcp(&self) -> TcpStream { TcpStream::connect(self.addr).unwrap() } } pub struct Builder { ctx: SslContextBuilder, ssl_cb: Box, io_cb: Box) + Send>, should_error: bool, } impl Builder { pub fn ctx(&mut self) -> &mut SslContextBuilder { &mut self.ctx } pub fn ssl_cb(&mut self, cb: F) where F: 'static + FnMut(&mut SslRef) + Send, { self.ssl_cb = Box::new(cb); } pub fn io_cb(&mut self, cb: F) where F: 'static + FnMut(SslStream) + Send, { self.io_cb = Box::new(cb); } pub fn should_error(&mut self) { self.should_error = true; } pub fn build(self) -> Server { let ctx = self.ctx.build(); let socket = TcpListener::bind("127.0.0.1:0").unwrap(); let addr = socket.local_addr().unwrap(); let mut ssl_cb = self.ssl_cb; let mut io_cb = self.io_cb; let should_error = self.should_error; let handle = thread::spawn(move || { let socket = socket.accept().unwrap().0; let mut ssl = Ssl::new(&ctx).unwrap(); ssl_cb(&mut ssl); let r = ssl.accept(socket); if should_error { r.unwrap_err(); } else { let mut socket = r.unwrap(); socket.write_all(&[0]).unwrap(); io_cb(socket); } }); Server { handle: Some(handle), addr, } } } pub struct ClientBuilder { ctx: SslContextBuilder, addr: SocketAddr, } impl ClientBuilder { pub fn ctx(&mut self) -> &mut SslContextBuilder { &mut self.ctx } pub fn build(self) -> Client { Client { ctx: self.ctx.build(), addr: self.addr, } } pub fn connect(self) -> SslStream { self.build().builder().connect() } pub fn connect_err(self) { self.build().builder().connect_err(); } } pub struct Client { ctx: SslContext, addr: SocketAddr, } impl Client { pub fn builder(&self) -> ClientSslBuilder { ClientSslBuilder { ssl: Ssl::new(&self.ctx).unwrap(), addr: self.addr, } } } pub struct ClientSslBuilder { ssl: Ssl, addr: SocketAddr, } impl ClientSslBuilder { pub fn ssl(&mut self) -> &mut SslRef { &mut self.ssl } pub fn connect(self) -> SslStream { let socket = TcpStream::connect(self.addr).unwrap(); let mut s = self.ssl.connect(socket).unwrap(); s.read_exact(&mut [0]).unwrap(); s } pub fn connect_err(self) { let socket = TcpStream::connect(self.addr).unwrap(); self.ssl.connect(socket).unwrap_err(); } } vendor/openssl/src/ssl/test/mod.rs0000664000175000017500000011632714172417313020043 0ustar mwhudsonmwhudson#![allow(unused_imports)] use std::env; use std::fs::File; use std::io::prelude::*; use std::io::{self, BufReader}; use std::iter; use std::mem; use std::net::UdpSocket; use std::net::{SocketAddr, TcpListener, TcpStream}; use std::path::Path; use std::process::{Child, ChildStdin, Command, Stdio}; use std::sync::atomic::{AtomicBool, 
Ordering}; use std::thread; use std::time::Duration; use tempdir::TempDir; use crate::dh::Dh; use crate::error::ErrorStack; use crate::hash::MessageDigest; use crate::ocsp::{OcspResponse, OcspResponseStatus}; use crate::pkey::PKey; use crate::srtp::SrtpProfileId; use crate::ssl; use crate::ssl::test::server::Server; #[cfg(any(ossl110, ossl111, libressl261))] use crate::ssl::SslVersion; #[cfg(ossl111)] use crate::ssl::{ClientHelloResponse, ExtensionContext}; use crate::ssl::{ Error, HandshakeError, MidHandshakeSslStream, ShutdownResult, ShutdownState, Ssl, SslAcceptor, SslAcceptorBuilder, SslConnector, SslContext, SslContextBuilder, SslFiletype, SslMethod, SslOptions, SslSessionCacheMode, SslStream, SslVerifyMode, StatusType, }; #[cfg(ossl102)] use crate::x509::store::X509StoreBuilder; #[cfg(ossl102)] use crate::x509::verify::X509CheckFlags; use crate::x509::{X509Name, X509StoreContext, X509VerifyResult, X509}; mod server; static ROOT_CERT: &[u8] = include_bytes!("../../../test/root-ca.pem"); static CERT: &[u8] = include_bytes!("../../../test/cert.pem"); static KEY: &[u8] = include_bytes!("../../../test/key.pem"); #[test] fn verify_untrusted() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut client = server.client(); client.ctx().set_verify(SslVerifyMode::PEER); client.connect_err(); } #[test] fn verify_trusted() { let server = Server::builder().build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client.connect(); } #[test] #[cfg(ossl102)] fn verify_trusted_with_set_cert() { let server = Server::builder().build(); let mut store = X509StoreBuilder::new().unwrap(); let x509 = X509::from_pem(ROOT_CERT).unwrap(); store.add_cert(x509).unwrap(); let mut client = server.client(); client.ctx().set_verify(SslVerifyMode::PEER); client.ctx().set_verify_cert_store(store.build()).unwrap(); client.connect(); } #[test] fn verify_untrusted_callback_override_ok() { let server = Server::builder().build(); let mut client = server.client(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, x509| { assert!(x509.current_cert().is_some()); true }); client.connect(); } #[test] fn verify_untrusted_callback_override_bad() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut client = server.client(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, _| false); client.connect_err(); } #[test] fn verify_trusted_callback_override_ok() { let server = Server::builder().build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, x509| { assert!(x509.current_cert().is_some()); true }); client.connect(); } #[test] fn verify_trusted_callback_override_bad() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, _| false); client.connect_err(); } #[test] fn verify_callback_load_certs() { let server = Server::builder().build(); let mut client = server.client(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, x509| { assert!(x509.current_cert().is_some()); true }); client.connect(); } #[test] fn verify_trusted_get_error_ok() { let server = Server::builder().build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client .ctx() 
.set_verify_callback(SslVerifyMode::PEER, |_, x509| { assert_eq!(x509.error(), X509VerifyResult::OK); true }); client.connect(); } #[test] fn verify_trusted_get_error_err() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut client = server.client(); client .ctx() .set_verify_callback(SslVerifyMode::PEER, |_, x509| { assert_ne!(x509.error(), X509VerifyResult::OK); false }); client.connect_err(); } #[test] fn verify_callback() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let server = Server::builder().build(); let mut client = server.client(); let expected = "59172d9313e84459bcff27f967e79e6e9217e584"; client .ctx() .set_verify_callback(SslVerifyMode::PEER, move |_, x509| { CALLED_BACK.store(true, Ordering::SeqCst); let cert = x509.current_cert().unwrap(); let digest = cert.digest(MessageDigest::sha1()).unwrap(); assert_eq!(hex::encode(&digest), expected); true }); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] fn ssl_verify_callback() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let server = Server::builder().build(); let mut client = server.client().build().builder(); let expected = "59172d9313e84459bcff27f967e79e6e9217e584"; client .ssl() .set_verify_callback(SslVerifyMode::PEER, move |_, x509| { CALLED_BACK.store(true, Ordering::SeqCst); let cert = x509.current_cert().unwrap(); let digest = cert.digest(MessageDigest::sha1()).unwrap(); assert_eq!(hex::encode(&digest), expected); true }); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] fn get_ctx_options() { let ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.options(); } #[test] fn set_ctx_options() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); let opts = ctx.set_options(SslOptions::NO_TICKET); assert!(opts.contains(SslOptions::NO_TICKET)); } #[test] fn clear_ctx_options() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_options(SslOptions::ALL); let opts = ctx.clear_options(SslOptions::ALL); assert!(!opts.contains(SslOptions::ALL)); } #[test] fn zero_length_buffers() { let server = Server::builder().build(); let mut s = server.client().connect(); assert_eq!(s.write(&[]).unwrap(), 0); assert_eq!(s.read(&mut []).unwrap(), 0); } #[test] fn peer_certificate() { let server = Server::builder().build(); let s = server.client().connect(); let cert = s.ssl().peer_certificate().unwrap(); let fingerprint = cert.digest(MessageDigest::sha1()).unwrap(); assert_eq!( hex::encode(fingerprint), "59172d9313e84459bcff27f967e79e6e9217e584" ); } #[test] fn pending() { let mut server = Server::builder(); server.io_cb(|mut s| s.write_all(&[0; 10]).unwrap()); let server = server.build(); let mut s = server.client().connect(); s.read_exact(&mut [0]).unwrap(); assert_eq!(s.ssl().pending(), 9); assert_eq!(s.read(&mut [0; 10]).unwrap(), 9); } #[test] fn state() { let server = Server::builder().build(); let s = server.client().connect(); assert_eq!(s.ssl().state_string().trim(), "SSLOK"); assert_eq!( s.ssl().state_string_long(), "SSL negotiation finished successfully" ); } /// Tests that when both the client as well as the server use SRTP and their /// lists of supported protocols have an overlap -- with only ONE protocol /// being valid for both. 
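// Both sides offer "SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32" over DTLS; after the
// handshake the negotiated profile is asserted and both peers derive SRTP keying material
// via `export_keying_material` with the "EXTRACTOR-dtls_srtp" label, checking that client
// and server end up with identical bytes.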
#[test] fn test_connect_with_srtp_ctx() { let listener = TcpListener::bind("127.0.0.1:0").unwrap(); let addr = listener.local_addr().unwrap(); let guard = thread::spawn(move || { let stream = listener.accept().unwrap().0; let mut ctx = SslContext::builder(SslMethod::dtls()).unwrap(); ctx.set_tlsext_use_srtp("SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32") .unwrap(); ctx.set_certificate_file(&Path::new("test/cert.pem"), SslFiletype::PEM) .unwrap(); ctx.set_private_key_file(&Path::new("test/key.pem"), SslFiletype::PEM) .unwrap(); let mut ssl = Ssl::new(&ctx.build()).unwrap(); ssl.set_mtu(1500).unwrap(); let mut stream = ssl.accept(stream).unwrap(); let mut buf = [0; 60]; stream .ssl() .export_keying_material(&mut buf, "EXTRACTOR-dtls_srtp", None) .unwrap(); stream.write_all(&[0]).unwrap(); buf }); let stream = TcpStream::connect(addr).unwrap(); let mut ctx = SslContext::builder(SslMethod::dtls()).unwrap(); ctx.set_tlsext_use_srtp("SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32") .unwrap(); let mut ssl = Ssl::new(&ctx.build()).unwrap(); ssl.set_mtu(1500).unwrap(); let mut stream = ssl.connect(stream).unwrap(); let mut buf = [1; 60]; { let srtp_profile = stream.ssl().selected_srtp_profile().unwrap(); assert_eq!("SRTP_AES128_CM_SHA1_80", srtp_profile.name()); assert_eq!(SrtpProfileId::SRTP_AES128_CM_SHA1_80, srtp_profile.id()); } stream .ssl() .export_keying_material(&mut buf, "EXTRACTOR-dtls_srtp", None) .expect("extract"); stream.read_exact(&mut [0]).unwrap(); let buf2 = guard.join().unwrap(); assert_eq!(buf[..], buf2[..]); } /// Tests that when both the client as well as the server use SRTP and their /// lists of supported protocols have an overlap -- with only ONE protocol /// being valid for both. #[test] fn test_connect_with_srtp_ssl() { let listener = TcpListener::bind("127.0.0.1:0").unwrap(); let addr = listener.local_addr().unwrap(); let guard = thread::spawn(move || { let stream = listener.accept().unwrap().0; let mut ctx = SslContext::builder(SslMethod::dtls()).unwrap(); ctx.set_certificate_file(&Path::new("test/cert.pem"), SslFiletype::PEM) .unwrap(); ctx.set_private_key_file(&Path::new("test/key.pem"), SslFiletype::PEM) .unwrap(); let mut ssl = Ssl::new(&ctx.build()).unwrap(); ssl.set_tlsext_use_srtp("SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32") .unwrap(); let mut profilenames = String::new(); for profile in ssl.srtp_profiles().unwrap() { if !profilenames.is_empty() { profilenames.push(':'); } profilenames += profile.name(); } assert_eq!( "SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32", profilenames ); ssl.set_mtu(1500).unwrap(); let mut stream = ssl.accept(stream).unwrap(); let mut buf = [0; 60]; stream .ssl() .export_keying_material(&mut buf, "EXTRACTOR-dtls_srtp", None) .unwrap(); stream.write_all(&[0]).unwrap(); buf }); let stream = TcpStream::connect(addr).unwrap(); let ctx = SslContext::builder(SslMethod::dtls()).unwrap(); let mut ssl = Ssl::new(&ctx.build()).unwrap(); ssl.set_tlsext_use_srtp("SRTP_AES128_CM_SHA1_80:SRTP_AES128_CM_SHA1_32") .unwrap(); ssl.set_mtu(1500).unwrap(); let mut stream = ssl.connect(stream).unwrap(); let mut buf = [1; 60]; { let srtp_profile = stream.ssl().selected_srtp_profile().unwrap(); assert_eq!("SRTP_AES128_CM_SHA1_80", srtp_profile.name()); assert_eq!(SrtpProfileId::SRTP_AES128_CM_SHA1_80, srtp_profile.id()); } stream .ssl() .export_keying_material(&mut buf, "EXTRACTOR-dtls_srtp", None) .expect("extract"); stream.read_exact(&mut [0]).unwrap(); let buf2 = guard.join().unwrap(); assert_eq!(buf[..], buf2[..]); } /// Tests that when the 
`SslStream` is created as a server stream, the protocols /// are correctly advertised to the client. #[test] #[cfg(any(ossl102, libressl261))] fn test_alpn_server_advertise_multiple() { let mut server = Server::builder(); server.ctx().set_alpn_select_callback(|_, client| { ssl::select_next_proto(b"\x08http/1.1\x08spdy/3.1", client).ok_or(ssl::AlpnError::NOACK) }); let server = server.build(); let mut client = server.client(); client.ctx().set_alpn_protos(b"\x08spdy/3.1").unwrap(); let s = client.connect(); assert_eq!(s.ssl().selected_alpn_protocol(), Some(&b"spdy/3.1"[..])); } #[test] #[cfg(any(ossl110))] fn test_alpn_server_select_none_fatal() { let mut server = Server::builder(); server.ctx().set_alpn_select_callback(|_, client| { ssl::select_next_proto(b"\x08http/1.1\x08spdy/3.1", client) .ok_or(ssl::AlpnError::ALERT_FATAL) }); server.should_error(); let server = server.build(); let mut client = server.client(); client.ctx().set_alpn_protos(b"\x06http/2").unwrap(); client.connect_err(); } #[test] #[cfg(any(ossl102, libressl261))] fn test_alpn_server_select_none() { let mut server = Server::builder(); server.ctx().set_alpn_select_callback(|_, client| { ssl::select_next_proto(b"\x08http/1.1\x08spdy/3.1", client).ok_or(ssl::AlpnError::NOACK) }); let server = server.build(); let mut client = server.client(); client.ctx().set_alpn_protos(b"\x06http/2").unwrap(); let s = client.connect(); assert_eq!(None, s.ssl().selected_alpn_protocol()); } #[test] #[cfg(any(ossl102, libressl261))] fn test_alpn_server_unilateral() { let server = Server::builder().build(); let mut client = server.client(); client.ctx().set_alpn_protos(b"\x06http/2").unwrap(); let s = client.connect(); assert_eq!(None, s.ssl().selected_alpn_protocol()); } #[test] #[should_panic(expected = "blammo")] fn write_panic() { struct ExplodingStream(TcpStream); impl Read for ExplodingStream { fn read(&mut self, buf: &mut [u8]) -> io::Result { self.0.read(buf) } } impl Write for ExplodingStream { fn write(&mut self, _: &[u8]) -> io::Result { panic!("blammo"); } fn flush(&mut self) -> io::Result<()> { self.0.flush() } } let mut server = Server::builder(); server.should_error(); let server = server.build(); let stream = ExplodingStream(server.connect_tcp()); let ctx = SslContext::builder(SslMethod::tls()).unwrap(); let _ = Ssl::new(&ctx.build()).unwrap().connect(stream); } #[test] #[should_panic(expected = "blammo")] fn read_panic() { struct ExplodingStream(TcpStream); impl Read for ExplodingStream { fn read(&mut self, _: &mut [u8]) -> io::Result { panic!("blammo"); } } impl Write for ExplodingStream { fn write(&mut self, buf: &[u8]) -> io::Result { self.0.write(buf) } fn flush(&mut self) -> io::Result<()> { self.0.flush() } } let mut server = Server::builder(); server.should_error(); let server = server.build(); let stream = ExplodingStream(server.connect_tcp()); let ctx = SslContext::builder(SslMethod::tls()).unwrap(); let _ = Ssl::new(&ctx.build()).unwrap().connect(stream); } #[test] #[cfg_attr(libressl321, ignore)] #[should_panic(expected = "blammo")] fn flush_panic() { struct ExplodingStream(TcpStream); impl Read for ExplodingStream { fn read(&mut self, buf: &mut [u8]) -> io::Result { self.0.read(buf) } } impl Write for ExplodingStream { fn write(&mut self, buf: &[u8]) -> io::Result { self.0.write(buf) } fn flush(&mut self) -> io::Result<()> { panic!("blammo"); } } let mut server = Server::builder(); server.should_error(); let server = server.build(); let stream = ExplodingStream(server.connect_tcp()); let ctx = 
SslContext::builder(SslMethod::tls()).unwrap(); let _ = Ssl::new(&ctx.build()).unwrap().connect(stream); } #[test] fn refcount_ssl_context() { let mut ssl = { let ctx = SslContext::builder(SslMethod::tls()).unwrap(); ssl::Ssl::new(&ctx.build()).unwrap() }; { let new_ctx_a = SslContext::builder(SslMethod::tls()).unwrap().build(); let _new_ctx_b = ssl.set_ssl_context(&new_ctx_a); } } #[test] #[cfg_attr(libressl250, ignore)] #[cfg_attr(target_os = "windows", ignore)] #[cfg_attr(all(target_os = "macos", feature = "vendored"), ignore)] fn default_verify_paths() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_default_verify_paths().unwrap(); ctx.set_verify(SslVerifyMode::PEER); let ctx = ctx.build(); let s = match TcpStream::connect("google.com:443") { Ok(s) => s, Err(_) => return, }; let mut ssl = Ssl::new(&ctx).unwrap(); ssl.set_hostname("google.com").unwrap(); let mut socket = ssl.connect(s).unwrap(); socket.write_all(b"GET / HTTP/1.0\r\n\r\n").unwrap(); let mut result = vec![]; socket.read_to_end(&mut result).unwrap(); println!("{}", String::from_utf8_lossy(&result)); assert!(result.starts_with(b"HTTP/1.0")); assert!(result.ends_with(b"\r\n") || result.ends_with(b"")); } #[test] fn add_extra_chain_cert() { let cert = X509::from_pem(CERT).unwrap(); let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.add_extra_chain_cert(cert).unwrap(); } #[test] #[cfg(ossl102)] fn verify_valid_hostname() { let server = Server::builder().build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client.ctx().set_verify(SslVerifyMode::PEER); let mut client = client.build().builder(); client .ssl() .param_mut() .set_hostflags(X509CheckFlags::NO_PARTIAL_WILDCARDS); client.ssl().param_mut().set_host("foobar.com").unwrap(); client.connect(); } #[test] #[cfg(ossl102)] fn verify_invalid_hostname() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut client = server.client(); client.ctx().set_ca_file("test/root-ca.pem").unwrap(); client.ctx().set_verify(SslVerifyMode::PEER); let mut client = client.build().builder(); client .ssl() .param_mut() .set_hostflags(X509CheckFlags::NO_PARTIAL_WILDCARDS); client.ssl().param_mut().set_host("bogus.com").unwrap(); client.connect_err(); } #[test] fn connector_valid_hostname() { let server = Server::builder().build(); let mut connector = SslConnector::builder(SslMethod::tls()).unwrap(); connector.set_ca_file("test/root-ca.pem").unwrap(); let s = server.connect_tcp(); let mut s = connector.build().connect("foobar.com", s).unwrap(); s.read_exact(&mut [0]).unwrap(); } #[test] fn connector_invalid_hostname() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let mut connector = SslConnector::builder(SslMethod::tls()).unwrap(); connector.set_ca_file("test/root-ca.pem").unwrap(); let s = server.connect_tcp(); connector.build().connect("bogus.com", s).unwrap_err(); } #[test] fn connector_invalid_no_hostname_verification() { let server = Server::builder().build(); let mut connector = SslConnector::builder(SslMethod::tls()).unwrap(); connector.set_ca_file("test/root-ca.pem").unwrap(); let s = server.connect_tcp(); let mut s = connector .build() .configure() .unwrap() .verify_hostname(false) .connect("bogus.com", s) .unwrap(); s.read_exact(&mut [0]).unwrap(); } #[test] fn connector_no_hostname_still_verifies() { let mut server = Server::builder(); server.should_error(); let server = server.build(); let connector = 
SslConnector::builder(SslMethod::tls()).unwrap().build(); let s = server.connect_tcp(); assert!(connector .configure() .unwrap() .verify_hostname(false) .connect("fizzbuzz.com", s) .is_err()); } #[test] fn connector_no_hostname_can_disable_verify() { let server = Server::builder().build(); let mut connector = SslConnector::builder(SslMethod::tls()).unwrap(); connector.set_verify(SslVerifyMode::NONE); let connector = connector.build(); let s = server.connect_tcp(); let mut s = connector .configure() .unwrap() .verify_hostname(false) .connect("foobar.com", s) .unwrap(); s.read_exact(&mut [0]).unwrap(); } fn test_mozilla_server(new: fn(SslMethod) -> Result) { let listener = TcpListener::bind("127.0.0.1:0").unwrap(); let port = listener.local_addr().unwrap().port(); let t = thread::spawn(move || { let key = PKey::private_key_from_pem(KEY).unwrap(); let cert = X509::from_pem(CERT).unwrap(); let mut acceptor = new(SslMethod::tls()).unwrap(); acceptor.set_private_key(&key).unwrap(); acceptor.set_certificate(&cert).unwrap(); let acceptor = acceptor.build(); let stream = listener.accept().unwrap().0; let mut stream = acceptor.accept(stream).unwrap(); stream.write_all(b"hello").unwrap(); }); let mut connector = SslConnector::builder(SslMethod::tls()).unwrap(); connector.set_ca_file("test/root-ca.pem").unwrap(); let connector = connector.build(); let stream = TcpStream::connect(("127.0.0.1", port)).unwrap(); let mut stream = connector.connect("foobar.com", stream).unwrap(); let mut buf = [0; 5]; stream.read_exact(&mut buf).unwrap(); assert_eq!(b"hello", &buf); t.join().unwrap(); } #[test] fn connector_client_server_mozilla_intermediate() { test_mozilla_server(SslAcceptor::mozilla_intermediate); } #[test] fn connector_client_server_mozilla_modern() { test_mozilla_server(SslAcceptor::mozilla_modern); } #[test] fn connector_client_server_mozilla_intermediate_v5() { test_mozilla_server(SslAcceptor::mozilla_intermediate_v5); } #[test] #[cfg(ossl111)] fn connector_client_server_mozilla_modern_v5() { test_mozilla_server(SslAcceptor::mozilla_modern_v5); } #[test] fn shutdown() { let mut server = Server::builder(); server.io_cb(|mut s| { assert_eq!(s.read(&mut [0]).unwrap(), 0); assert_eq!(s.shutdown().unwrap(), ShutdownResult::Received); }); let server = server.build(); let mut s = server.client().connect(); assert_eq!(s.get_shutdown(), ShutdownState::empty()); assert_eq!(s.shutdown().unwrap(), ShutdownResult::Sent); assert_eq!(s.get_shutdown(), ShutdownState::SENT); assert_eq!(s.shutdown().unwrap(), ShutdownResult::Received); assert_eq!( s.get_shutdown(), ShutdownState::SENT | ShutdownState::RECEIVED ); } #[test] fn client_ca_list() { let names = X509Name::load_client_ca_file("test/root-ca.pem").unwrap(); assert_eq!(names.len(), 1); let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_client_ca_list(names); } #[test] fn cert_store() { let server = Server::builder().build(); let mut client = server.client(); let cert = X509::from_pem(ROOT_CERT).unwrap(); client.ctx().cert_store_mut().add_cert(cert).unwrap(); client.ctx().set_verify(SslVerifyMode::PEER); client.connect(); } #[test] #[cfg_attr(libressl321, ignore)] fn tmp_dh_callback() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_tmp_dh_callback(|_, _, _| { CALLED_BACK.store(true, Ordering::SeqCst); let dh = include_bytes!("../../../test/dhparams.pem"); Dh::params_from_pem(dh) }); let server = server.build(); let mut client = server.client(); // TLS 1.3 has no DH suites, so 
make sure we don't pick that version #[cfg(ossl111)] client.ctx().set_options(super::SslOptions::NO_TLSV1_3); client.ctx().set_cipher_list("EDH").unwrap(); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg(all(ossl101, not(ossl110)))] fn tmp_ecdh_callback() { use crate::ec::EcKey; use crate::nid::Nid; static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_tmp_ecdh_callback(|_, _, _| { CALLED_BACK.store(true, Ordering::SeqCst); EcKey::from_curve_name(Nid::X9_62_PRIME256V1) }); let server = server.build(); let mut client = server.client(); client.ctx().set_cipher_list("ECDH").unwrap(); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg_attr(libressl321, ignore)] fn tmp_dh_callback_ssl() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ssl_cb(|ssl| { ssl.set_tmp_dh_callback(|_, _, _| { CALLED_BACK.store(true, Ordering::SeqCst); let dh = include_bytes!("../../../test/dhparams.pem"); Dh::params_from_pem(dh) }); }); let server = server.build(); let mut client = server.client(); // TLS 1.3 has no DH suites, so make sure we don't pick that version #[cfg(ossl111)] client.ctx().set_options(super::SslOptions::NO_TLSV1_3); client.ctx().set_cipher_list("EDH").unwrap(); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg(all(ossl101, not(ossl110)))] fn tmp_ecdh_callback_ssl() { use crate::ec::EcKey; use crate::nid::Nid; static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ssl_cb(|ssl| { ssl.set_tmp_ecdh_callback(|_, _, _| { CALLED_BACK.store(true, Ordering::SeqCst); EcKey::from_curve_name(Nid::X9_62_PRIME256V1) }); }); let server = server.build(); let mut client = server.client(); client.ctx().set_cipher_list("ECDH").unwrap(); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] fn idle_session() { let ctx = SslContext::builder(SslMethod::tls()).unwrap().build(); let ssl = Ssl::new(&ctx).unwrap(); assert!(ssl.session().is_none()); } #[test] #[cfg_attr(libressl321, ignore)] fn active_session() { let server = Server::builder().build(); let s = server.client().connect(); let session = s.ssl().session().unwrap(); let len = session.master_key_len(); let mut buf = vec![0; len - 1]; let copied = session.master_key(&mut buf); assert_eq!(copied, buf.len()); let mut buf = vec![0; len + 1]; let copied = session.master_key(&mut buf); assert_eq!(copied, len); } #[test] fn status_callbacks() { static CALLED_BACK_SERVER: AtomicBool = AtomicBool::new(false); static CALLED_BACK_CLIENT: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server .ctx() .set_status_callback(|ssl| { CALLED_BACK_SERVER.store(true, Ordering::SeqCst); let response = OcspResponse::create(OcspResponseStatus::UNAUTHORIZED, None).unwrap(); let response = response.to_der().unwrap(); ssl.set_ocsp_status(&response).unwrap(); Ok(true) }) .unwrap(); let server = server.build(); let mut client = server.client(); client .ctx() .set_status_callback(|ssl| { CALLED_BACK_CLIENT.store(true, Ordering::SeqCst); let response = OcspResponse::from_der(ssl.ocsp_status().unwrap()).unwrap(); assert_eq!(response.status(), OcspResponseStatus::UNAUTHORIZED); Ok(true) }) .unwrap(); let mut client = client.build().builder(); client.ssl().set_status_type(StatusType::OCSP).unwrap(); client.connect(); assert!(CALLED_BACK_SERVER.load(Ordering::SeqCst)); 
assert!(CALLED_BACK_CLIENT.load(Ordering::SeqCst)); } #[test] #[cfg_attr(libressl321, ignore)] fn new_session_callback() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_session_id_context(b"foo").unwrap(); let server = server.build(); let mut client = server.client(); client .ctx() .set_session_cache_mode(SslSessionCacheMode::CLIENT | SslSessionCacheMode::NO_INTERNAL); client .ctx() .set_new_session_callback(|_, _| CALLED_BACK.store(true, Ordering::SeqCst)); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg_attr(libressl321, ignore)] fn new_session_callback_swapped_ctx() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_session_id_context(b"foo").unwrap(); let server = server.build(); let mut client = server.client(); client .ctx() .set_session_cache_mode(SslSessionCacheMode::CLIENT | SslSessionCacheMode::NO_INTERNAL); client .ctx() .set_new_session_callback(|_, _| CALLED_BACK.store(true, Ordering::SeqCst)); let mut client = client.build().builder(); let ctx = SslContextBuilder::new(SslMethod::tls()).unwrap().build(); client.ssl().set_ssl_context(&ctx).unwrap(); client.connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] fn keying_export() { let listener = TcpListener::bind("127.0.0.1:0").unwrap(); let addr = listener.local_addr().unwrap(); let label = "EXPERIMENTAL test"; let context = b"my context"; let guard = thread::spawn(move || { let stream = listener.accept().unwrap().0; let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_certificate_file(&Path::new("test/cert.pem"), SslFiletype::PEM) .unwrap(); ctx.set_private_key_file(&Path::new("test/key.pem"), SslFiletype::PEM) .unwrap(); let ssl = Ssl::new(&ctx.build()).unwrap(); let mut stream = ssl.accept(stream).unwrap(); let mut buf = [0; 32]; stream .ssl() .export_keying_material(&mut buf, label, Some(context)) .unwrap(); stream.write_all(&[0]).unwrap(); buf }); let stream = TcpStream::connect(addr).unwrap(); let ctx = SslContext::builder(SslMethod::tls()).unwrap(); let ssl = Ssl::new(&ctx.build()).unwrap(); let mut stream = ssl.connect(stream).unwrap(); let mut buf = [1; 32]; stream .ssl() .export_keying_material(&mut buf, label, Some(context)) .unwrap(); stream.read_exact(&mut [0]).unwrap(); let buf2 = guard.join().unwrap(); assert_eq!(buf, buf2); } #[test] #[cfg(any(ossl110, libressl261))] fn no_version_overlap() { let mut server = Server::builder(); server.ctx().set_min_proto_version(None).unwrap(); server .ctx() .set_max_proto_version(Some(SslVersion::TLS1_1)) .unwrap(); #[cfg(any(ossl110g, libressl270))] assert_eq!(server.ctx().max_proto_version(), Some(SslVersion::TLS1_1)); server.should_error(); let server = server.build(); let mut client = server.client(); client .ctx() .set_min_proto_version(Some(SslVersion::TLS1_2)) .unwrap(); #[cfg(ossl110g)] assert_eq!(client.ctx().min_proto_version(), Some(SslVersion::TLS1_2)); client.ctx().set_max_proto_version(None).unwrap(); client.connect_err(); } #[test] #[cfg(ossl111)] fn custom_extensions() { static FOUND_EXTENSION: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server .ctx() .add_custom_ext( 12345, ExtensionContext::CLIENT_HELLO, |_, _, _| -> Result, _> { unreachable!() }, |_, _, data, _| { FOUND_EXTENSION.store(data == b"hello", Ordering::SeqCst); Ok(()) }, ) .unwrap(); let server = server.build(); let mut client = server.client(); client .ctx() .add_custom_ext( 12345, 
ssl::ExtensionContext::CLIENT_HELLO, |_, _, _| Ok(Some(b"hello")), |_, _, _, _| unreachable!(), ) .unwrap(); client.connect(); assert!(FOUND_EXTENSION.load(Ordering::SeqCst)); } fn _check_kinds() { fn is_send() {} fn is_sync() {} is_send::>(); is_sync::>(); } #[test] #[cfg(ossl111)] fn stateless() { use super::SslOptions; #[derive(Debug)] struct MemoryStream { incoming: io::Cursor>, outgoing: Vec, } impl MemoryStream { pub fn new() -> Self { Self { incoming: io::Cursor::new(Vec::new()), outgoing: Vec::new(), } } pub fn extend_incoming(&mut self, data: &[u8]) { self.incoming.get_mut().extend_from_slice(data); } pub fn take_outgoing(&mut self) -> Outgoing<'_> { Outgoing(&mut self.outgoing) } } impl Read for MemoryStream { fn read(&mut self, buf: &mut [u8]) -> io::Result { let n = self.incoming.read(buf)?; if self.incoming.position() == self.incoming.get_ref().len() as u64 { self.incoming.set_position(0); self.incoming.get_mut().clear(); } if n == 0 { return Err(io::Error::new( io::ErrorKind::WouldBlock, "no data available", )); } Ok(n) } } impl Write for MemoryStream { fn write(&mut self, buf: &[u8]) -> io::Result { self.outgoing.write(buf) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } pub struct Outgoing<'a>(&'a mut Vec); impl<'a> Drop for Outgoing<'a> { fn drop(&mut self) { self.0.clear(); } } impl<'a> ::std::ops::Deref for Outgoing<'a> { type Target = [u8]; fn deref(&self) -> &[u8] { self.0 } } impl<'a> AsRef<[u8]> for Outgoing<'a> { fn as_ref(&self) -> &[u8] { self.0 } } fn send(from: &mut MemoryStream, to: &mut MemoryStream) { to.extend_incoming(&from.take_outgoing()); } // // Setup // let mut client_ctx = SslContext::builder(SslMethod::tls()).unwrap(); client_ctx.clear_options(SslOptions::ENABLE_MIDDLEBOX_COMPAT); let mut client_stream = SslStream::new(Ssl::new(&client_ctx.build()).unwrap(), MemoryStream::new()).unwrap(); let mut server_ctx = SslContext::builder(SslMethod::tls()).unwrap(); server_ctx .set_certificate_file(&Path::new("test/cert.pem"), SslFiletype::PEM) .unwrap(); server_ctx .set_private_key_file(&Path::new("test/key.pem"), SslFiletype::PEM) .unwrap(); const COOKIE: &[u8] = b"chocolate chip"; server_ctx.set_stateless_cookie_generate_cb(|_tls, buf| { buf[0..COOKIE.len()].copy_from_slice(COOKIE); Ok(COOKIE.len()) }); server_ctx.set_stateless_cookie_verify_cb(|_tls, buf| buf == COOKIE); let mut server_stream = SslStream::new(Ssl::new(&server_ctx.build()).unwrap(), MemoryStream::new()).unwrap(); // // Handshake // // Initial ClientHello client_stream.connect().unwrap_err(); send(client_stream.get_mut(), server_stream.get_mut()); // HelloRetryRequest assert!(!server_stream.stateless().unwrap()); send(server_stream.get_mut(), client_stream.get_mut()); // Second ClientHello client_stream.do_handshake().unwrap_err(); send(client_stream.get_mut(), server_stream.get_mut()); // OldServerHello assert!(server_stream.stateless().unwrap()); server_stream.accept().unwrap_err(); send(server_stream.get_mut(), client_stream.get_mut()); // Finished client_stream.do_handshake().unwrap(); send(client_stream.get_mut(), server_stream.get_mut()); server_stream.do_handshake().unwrap(); } #[cfg(not(osslconf = "OPENSSL_NO_PSK"))] #[test] fn psk_ciphers() { const CIPHER: &str = "PSK-AES128-CBC-SHA"; const PSK: &[u8] = b"thisisaverysecurekey"; const CLIENT_IDENT: &[u8] = b"thisisaclient"; static CLIENT_CALLED: AtomicBool = AtomicBool::new(false); static SERVER_CALLED: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_cipher_list(CIPHER).unwrap(); 
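// Note: OpenSSL hands each PSK callback below a scratch buffer to fill in
// place. The server callback checks the identity the client sent and copies
// the pre-shared key into `psk`, returning the number of key bytes written;
// the client callback writes a NUL-terminated identity into `identity` and the
// same key into `psk`.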
server.ctx().set_psk_server_callback(|_, identity, psk| { assert!(identity.unwrap_or(&[]) == CLIENT_IDENT); psk[..PSK.len()].copy_from_slice(PSK); SERVER_CALLED.store(true, Ordering::SeqCst); Ok(PSK.len()) }); let server = server.build(); let mut client = server.client(); // This test relies on TLS 1.2 suites #[cfg(ossl111)] client.ctx().set_options(super::SslOptions::NO_TLSV1_3); client.ctx().set_cipher_list(CIPHER).unwrap(); client .ctx() .set_psk_client_callback(move |_, _, identity, psk| { identity[..CLIENT_IDENT.len()].copy_from_slice(CLIENT_IDENT); identity[CLIENT_IDENT.len()] = 0; psk[..PSK.len()].copy_from_slice(PSK); CLIENT_CALLED.store(true, Ordering::SeqCst); Ok(PSK.len()) }); client.connect(); assert!(CLIENT_CALLED.load(Ordering::SeqCst) && SERVER_CALLED.load(Ordering::SeqCst)); } #[test] fn sni_callback_swapped_ctx() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_servername_callback(|_, _| { CALLED_BACK.store(true, Ordering::SeqCst); Ok(()) }); let keyed_ctx = mem::replace(server.ctx(), ctx).build(); server.ssl_cb(move |ssl| ssl.set_ssl_context(&keyed_ctx).unwrap()); let server = server.build(); server.client().connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg(ossl111)] fn client_hello() { static CALLED_BACK: AtomicBool = AtomicBool::new(false); let mut server = Server::builder(); server.ctx().set_client_hello_callback(|ssl, _| { assert!(!ssl.client_hello_isv2()); assert_eq!(ssl.client_hello_legacy_version(), Some(SslVersion::TLS1_2)); assert!(ssl.client_hello_random().is_some()); assert!(ssl.client_hello_session_id().is_some()); assert!(ssl.client_hello_ciphers().is_some()); assert!(ssl.client_hello_compression_methods().is_some()); CALLED_BACK.store(true, Ordering::SeqCst); Ok(ClientHelloResponse::SUCCESS) }); let server = server.build(); server.client().connect(); assert!(CALLED_BACK.load(Ordering::SeqCst)); } #[test] #[cfg(ossl111)] fn openssl_cipher_name() { assert_eq!( super::cipher_name("TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"), "ECDHE-RSA-AES256-SHA384", ); assert_eq!(super::cipher_name("asdf"), "(NONE)"); } #[test] fn session_cache_size() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); ctx.set_session_cache_size(1234); let ctx = ctx.build(); assert_eq!(ctx.session_cache_size(), 1234); } vendor/openssl/src/ssl/bio.rs0000664000175000017500000001770714160055207017055 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use ffi::{ self, BIO_clear_retry_flags, BIO_new, BIO_set_retry_read, BIO_set_retry_write, BIO, BIO_CTRL_DGRAM_QUERY_MTU, BIO_CTRL_FLUSH, }; use libc::{c_char, c_int, c_long, c_void, strlen}; use std::any::Any; use std::io; use std::io::prelude::*; use std::panic::{catch_unwind, AssertUnwindSafe}; use std::ptr; use std::slice; use crate::cvt_p; use crate::error::ErrorStack; pub struct StreamState { pub stream: S, pub error: Option, pub panic: Option>, pub dtls_mtu_size: c_long, } /// Safe wrapper for BIO_METHOD pub struct BioMethod(BIO_METHOD); impl BioMethod { fn new() -> Result { BIO_METHOD::new::().map(BioMethod) } } unsafe impl Sync for BioMethod {} unsafe impl Send for BioMethod {} pub fn new(stream: S) -> Result<(*mut BIO, BioMethod), ErrorStack> { let method = BioMethod::new::()?; let state = Box::new(StreamState { stream, error: None, panic: None, dtls_mtu_size: 0, }); unsafe { let bio = cvt_p(BIO_new(method.0.get()))?; BIO_set_data(bio, Box::into_raw(state) as *mut _); BIO_set_init(bio, 1); Ok((bio, 
method)) } } pub unsafe fn take_error(bio: *mut BIO) -> Option { let state = state::(bio); state.error.take() } pub unsafe fn take_panic(bio: *mut BIO) -> Option> { let state = state::(bio); state.panic.take() } pub unsafe fn get_ref<'a, S: 'a>(bio: *mut BIO) -> &'a S { let state = &*(BIO_get_data(bio) as *const StreamState); &state.stream } pub unsafe fn get_mut<'a, S: 'a>(bio: *mut BIO) -> &'a mut S { &mut state(bio).stream } pub unsafe fn set_dtls_mtu_size(bio: *mut BIO, mtu_size: usize) { if mtu_size as u64 > c_long::max_value() as u64 { panic!( "Given MTU size {} can't be represented in a positive `c_long` range", mtu_size ) } state::(bio).dtls_mtu_size = mtu_size as c_long; } unsafe fn state<'a, S: 'a>(bio: *mut BIO) -> &'a mut StreamState { &mut *(BIO_get_data(bio) as *mut _) } unsafe extern "C" fn bwrite(bio: *mut BIO, buf: *const c_char, len: c_int) -> c_int { BIO_clear_retry_flags(bio); let state = state::(bio); let buf = slice::from_raw_parts(buf as *const _, len as usize); match catch_unwind(AssertUnwindSafe(|| state.stream.write(buf))) { Ok(Ok(len)) => len as c_int, Ok(Err(err)) => { if retriable_error(&err) { BIO_set_retry_write(bio); } state.error = Some(err); -1 } Err(err) => { state.panic = Some(err); -1 } } } unsafe extern "C" fn bread(bio: *mut BIO, buf: *mut c_char, len: c_int) -> c_int { BIO_clear_retry_flags(bio); let state = state::(bio); let buf = slice::from_raw_parts_mut(buf as *mut _, len as usize); match catch_unwind(AssertUnwindSafe(|| state.stream.read(buf))) { Ok(Ok(len)) => len as c_int, Ok(Err(err)) => { if retriable_error(&err) { BIO_set_retry_read(bio); } state.error = Some(err); -1 } Err(err) => { state.panic = Some(err); -1 } } } #[allow(clippy::match_like_matches_macro)] // matches macro requires rust 1.42.0 fn retriable_error(err: &io::Error) -> bool { match err.kind() { io::ErrorKind::WouldBlock | io::ErrorKind::NotConnected => true, _ => false, } } unsafe extern "C" fn bputs(bio: *mut BIO, s: *const c_char) -> c_int { bwrite::(bio, s, strlen(s) as c_int) } unsafe extern "C" fn ctrl( bio: *mut BIO, cmd: c_int, _num: c_long, _ptr: *mut c_void, ) -> c_long { let state = state::(bio); if cmd == BIO_CTRL_FLUSH { match catch_unwind(AssertUnwindSafe(|| state.stream.flush())) { Ok(Ok(())) => 1, Ok(Err(err)) => { state.error = Some(err); 0 } Err(err) => { state.panic = Some(err); 0 } } } else if cmd == BIO_CTRL_DGRAM_QUERY_MTU { state.dtls_mtu_size } else { 0 } } unsafe extern "C" fn create(bio: *mut BIO) -> c_int { BIO_set_init(bio, 0); BIO_set_num(bio, 0); BIO_set_data(bio, ptr::null_mut()); BIO_set_flags(bio, 0); 1 } unsafe extern "C" fn destroy(bio: *mut BIO) -> c_int { if bio.is_null() { return 0; } let data = BIO_get_data(bio); assert!(!data.is_null()); Box::>::from_raw(data as *mut _); BIO_set_data(bio, ptr::null_mut()); BIO_set_init(bio, 0); 1 } cfg_if! 
{ if #[cfg(any(ossl110, libressl273))] { use ffi::{BIO_get_data, BIO_set_data, BIO_set_flags, BIO_set_init}; use crate::cvt; #[allow(bad_style)] unsafe fn BIO_set_num(_bio: *mut ffi::BIO, _num: c_int) {} #[allow(bad_style, clippy::upper_case_acronyms)] struct BIO_METHOD(*mut ffi::BIO_METHOD); impl BIO_METHOD { fn new() -> Result { unsafe { let ptr = cvt_p(ffi::BIO_meth_new(ffi::BIO_TYPE_NONE, b"rust\0".as_ptr() as *const _))?; let method = BIO_METHOD(ptr); cvt(ffi::BIO_meth_set_write(method.0, bwrite::))?; cvt(ffi::BIO_meth_set_read(method.0, bread::))?; cvt(ffi::BIO_meth_set_puts(method.0, bputs::))?; cvt(ffi::BIO_meth_set_ctrl(method.0, ctrl::))?; cvt(ffi::BIO_meth_set_create(method.0, create))?; cvt(ffi::BIO_meth_set_destroy(method.0, destroy::))?; Ok(method) } } fn get(&self) -> *mut ffi::BIO_METHOD { self.0 } } impl Drop for BIO_METHOD { fn drop(&mut self) { unsafe { ffi::BIO_meth_free(self.0); } } } } else { #[allow(bad_style, clippy::upper_case_acronyms)] struct BIO_METHOD(*mut ffi::BIO_METHOD); impl BIO_METHOD { fn new() -> Result { let ptr = Box::new(ffi::BIO_METHOD { type_: ffi::BIO_TYPE_NONE, name: b"rust\0".as_ptr() as *const _, bwrite: Some(bwrite::), bread: Some(bread::), bputs: Some(bputs::), bgets: None, ctrl: Some(ctrl::), create: Some(create), destroy: Some(destroy::), callback_ctrl: None, }); Ok(BIO_METHOD(Box::into_raw(ptr))) } fn get(&self) -> *mut ffi::BIO_METHOD { self.0 } } impl Drop for BIO_METHOD { fn drop(&mut self) { unsafe { Box::::from_raw(self.0); } } } #[allow(bad_style)] unsafe fn BIO_set_init(bio: *mut ffi::BIO, init: c_int) { (*bio).init = init; } #[allow(bad_style)] unsafe fn BIO_set_flags(bio: *mut ffi::BIO, flags: c_int) { (*bio).flags = flags; } #[allow(bad_style)] unsafe fn BIO_get_data(bio: *mut ffi::BIO) -> *mut c_void { (*bio).ptr } #[allow(bad_style)] unsafe fn BIO_set_data(bio: *mut ffi::BIO, data: *mut c_void) { (*bio).ptr = data; } #[allow(bad_style)] unsafe fn BIO_set_num(bio: *mut ffi::BIO, num: c_int) { (*bio).num = num; } } } vendor/openssl/src/ssl/connector.rs0000664000175000017500000005567014160055207020277 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use std::io::{Read, Write}; use std::ops::{Deref, DerefMut}; use crate::dh::Dh; use crate::error::ErrorStack; use crate::ssl::{ HandshakeError, Ssl, SslContext, SslContextBuilder, SslContextRef, SslMethod, SslMode, SslOptions, SslRef, SslStream, SslVerifyMode, }; use crate::version; const FFDHE_2048: &str = " -----BEGIN DH PARAMETERS----- MIIBCAKCAQEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEoXJf//////////wIBAg== -----END DH PARAMETERS----- "; #[allow(clippy::inconsistent_digit_grouping, clippy::unusual_byte_groupings)] fn ctx(method: SslMethod) -> Result { let mut ctx = SslContextBuilder::new(method)?; let mut opts = SslOptions::ALL | SslOptions::NO_COMPRESSION | SslOptions::NO_SSLV2 | SslOptions::NO_SSLV3 | SslOptions::SINGLE_DH_USE | SslOptions::SINGLE_ECDH_USE; opts &= !SslOptions::DONT_INSERT_EMPTY_FRAGMENTS; ctx.set_options(opts); let mut mode = SslMode::AUTO_RETRY | SslMode::ACCEPT_MOVING_WRITE_BUFFER | SslMode::ENABLE_PARTIAL_WRITE; // This is quite a useful optimization for saving memory, but historically // caused CVEs in OpenSSL pre-1.0.1h, according to // https://bugs.python.org/issue25672 if 
version::number() >= 0x1_00_01_08_0 { mode |= SslMode::RELEASE_BUFFERS; } ctx.set_mode(mode); Ok(ctx) } /// A type which wraps client-side streams in a TLS session. /// /// OpenSSL's default configuration is highly insecure. This connector manages the OpenSSL /// structures, configuring cipher suites, session options, hostname verification, and more. /// /// OpenSSL's built in hostname verification is used when linking against OpenSSL 1.0.2 or 1.1.0, /// and a custom implementation is used when linking against OpenSSL 1.0.1. #[derive(Clone, Debug)] pub struct SslConnector(SslContext); impl SslConnector { /// Creates a new builder for TLS connections. /// /// The default configuration is subject to change, and is currently derived from Python. pub fn builder(method: SslMethod) -> Result { let mut ctx = ctx(method)?; ctx.set_default_verify_paths()?; ctx.set_cipher_list( "DEFAULT:!aNULL:!eNULL:!MD5:!3DES:!DES:!RC4:!IDEA:!SEED:!aDSS:!SRP:!PSK", )?; setup_verify(&mut ctx); Ok(SslConnectorBuilder(ctx)) } /// Initiates a client-side TLS session on a stream. /// /// The domain is used for SNI and hostname verification. pub fn connect(&self, domain: &str, stream: S) -> Result, HandshakeError> where S: Read + Write, { self.configure()?.connect(domain, stream) } /// Returns a structure allowing for configuration of a single TLS session before connection. pub fn configure(&self) -> Result { Ssl::new(&self.0).map(|ssl| ConnectConfiguration { ssl, sni: true, verify_hostname: true, }) } /// Consumes the `SslConnector`, returning the inner raw `SslContext`. pub fn into_context(self) -> SslContext { self.0 } /// Returns a shared reference to the inner raw `SslContext`. pub fn context(&self) -> &SslContextRef { &*self.0 } } /// A builder for `SslConnector`s. pub struct SslConnectorBuilder(SslContextBuilder); impl SslConnectorBuilder { /// Consumes the builder, returning an `SslConnector`. pub fn build(self) -> SslConnector { SslConnector(self.0.build()) } } impl Deref for SslConnectorBuilder { type Target = SslContextBuilder; fn deref(&self) -> &SslContextBuilder { &self.0 } } impl DerefMut for SslConnectorBuilder { fn deref_mut(&mut self) -> &mut SslContextBuilder { &mut self.0 } } /// A type which allows for configuration of a client-side TLS session before connection. pub struct ConnectConfiguration { ssl: Ssl, sni: bool, verify_hostname: bool, } impl ConnectConfiguration { /// A builder-style version of `set_use_server_name_indication`. pub fn use_server_name_indication(mut self, use_sni: bool) -> ConnectConfiguration { self.set_use_server_name_indication(use_sni); self } /// Configures the use of Server Name Indication (SNI) when connecting. /// /// Defaults to `true`. pub fn set_use_server_name_indication(&mut self, use_sni: bool) { self.sni = use_sni; } /// A builder-style version of `set_verify_hostname`. pub fn verify_hostname(mut self, verify_hostname: bool) -> ConnectConfiguration { self.set_verify_hostname(verify_hostname); self } /// Configures the use of hostname verification when connecting. /// /// Defaults to `true`. /// /// # Warning /// /// You should think very carefully before you use this method. If hostname verification is not /// used, *any* valid certificate for *any* site will be trusted for use from any other. This /// introduces a significant vulnerability to man-in-the-middle attacks. pub fn set_verify_hostname(&mut self, verify_hostname: bool) { self.verify_hostname = verify_hostname; } /// Returns an `Ssl` configured to connect to the provided domain. 
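    ///
    /// A minimal usage sketch (the host `example.com` is only a placeholder):
    ///
    /// ```no_run
    /// use std::io::Write;
    /// use std::net::TcpStream;
    /// use openssl::ssl::{SslConnector, SslMethod};
    ///
    /// let connector = SslConnector::builder(SslMethod::tls()).unwrap().build();
    /// let ssl = connector.configure().unwrap().into_ssl("example.com").unwrap();
    /// let tcp = TcpStream::connect("example.com:443").unwrap();
    /// let mut stream = ssl.connect(tcp).unwrap();
    /// stream.write_all(b"GET / HTTP/1.0\r\n\r\n").unwrap();
    /// ```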
/// /// The domain is used for SNI and hostname verification if enabled. pub fn into_ssl(mut self, domain: &str) -> Result { if self.sni { self.ssl.set_hostname(domain)?; } if self.verify_hostname { setup_verify_hostname(&mut self.ssl, domain)?; } Ok(self.ssl) } /// Initiates a client-side TLS session on a stream. /// /// The domain is used for SNI and hostname verification if enabled. pub fn connect(self, domain: &str, stream: S) -> Result, HandshakeError> where S: Read + Write, { self.into_ssl(domain)?.connect(stream) } } impl Deref for ConnectConfiguration { type Target = SslRef; fn deref(&self) -> &SslRef { &self.ssl } } impl DerefMut for ConnectConfiguration { fn deref_mut(&mut self) -> &mut SslRef { &mut self.ssl } } /// A type which wraps server-side streams in a TLS session. /// /// OpenSSL's default configuration is highly insecure. This connector manages the OpenSSL /// structures, configuring cipher suites, session options, and more. #[derive(Clone)] pub struct SslAcceptor(SslContext); impl SslAcceptor { /// Creates a new builder configured to connect to non-legacy clients. This should generally be /// considered a reasonable default choice. /// /// This corresponds to the intermediate configuration of version 5 of Mozilla's server side TLS /// recommendations. See its [documentation][docs] for more details on specifics. /// /// [docs]: https://wiki.mozilla.org/Security/Server_Side_TLS pub fn mozilla_intermediate_v5(method: SslMethod) -> Result { let mut ctx = ctx(method)?; ctx.set_options(SslOptions::NO_TLSV1 | SslOptions::NO_TLSV1_1); let dh = Dh::params_from_pem(FFDHE_2048.as_bytes())?; ctx.set_tmp_dh(&dh)?; setup_curves(&mut ctx)?; ctx.set_cipher_list( "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:\ ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:\ DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" )?; #[cfg(ossl111)] ctx.set_ciphersuites( "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256", )?; Ok(SslAcceptorBuilder(ctx)) } /// Creates a new builder configured to connect to modern clients. /// /// This corresponds to the modern configuration of version 5 of Mozilla's server side TLS recommendations. /// See its [documentation][docs] for more details on specifics. /// /// Requires OpenSSL 1.1.1 or newer. /// /// [docs]: https://wiki.mozilla.org/Security/Server_Side_TLS #[cfg(ossl111)] pub fn mozilla_modern_v5(method: SslMethod) -> Result { let mut ctx = ctx(method)?; ctx.set_options(SslOptions::NO_SSL_MASK & !SslOptions::NO_TLSV1_3); ctx.set_ciphersuites( "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256", )?; Ok(SslAcceptorBuilder(ctx)) } /// Creates a new builder configured to connect to non-legacy clients. This should generally be /// considered a reasonable default choice. /// /// This corresponds to the intermediate configuration of version 4 of Mozilla's server side TLS /// recommendations. See its [documentation][docs] for more details on specifics. 
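    ///
    /// # Examples
    ///
    /// A minimal sketch of building an acceptor from this profile; `key.pem` and
    /// `cert.pem` are placeholder paths:
    ///
    /// ```no_run
    /// use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
    ///
    /// let mut builder = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    /// builder.set_private_key_file("key.pem", SslFiletype::PEM).unwrap();
    /// builder.set_certificate_chain_file("cert.pem").unwrap();
    /// let acceptor = builder.build();
    /// ```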
/// /// [docs]: https://wiki.mozilla.org/Security/Server_Side_TLS // FIXME remove in next major version pub fn mozilla_intermediate(method: SslMethod) -> Result { let mut ctx = ctx(method)?; ctx.set_options(SslOptions::CIPHER_SERVER_PREFERENCE); #[cfg(ossl111)] ctx.set_options(SslOptions::NO_TLSV1_3); let dh = Dh::params_from_pem(FFDHE_2048.as_bytes())?; ctx.set_tmp_dh(&dh)?; setup_curves(&mut ctx)?; ctx.set_cipher_list( "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:\ ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:\ DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:\ ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:\ ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:\ DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:\ EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:\ AES256-SHA:DES-CBC3-SHA:!DSS", )?; Ok(SslAcceptorBuilder(ctx)) } /// Creates a new builder configured to connect to modern clients. /// /// This corresponds to the modern configuration of version 4 of Mozilla's server side TLS recommendations. /// See its [documentation][docs] for more details on specifics. /// /// [docs]: https://wiki.mozilla.org/Security/Server_Side_TLS // FIXME remove in next major version pub fn mozilla_modern(method: SslMethod) -> Result { let mut ctx = ctx(method)?; ctx.set_options( SslOptions::CIPHER_SERVER_PREFERENCE | SslOptions::NO_TLSV1 | SslOptions::NO_TLSV1_1, ); #[cfg(ossl111)] ctx.set_options(SslOptions::NO_TLSV1_3); setup_curves(&mut ctx)?; ctx.set_cipher_list( "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:\ ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:\ ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256", )?; Ok(SslAcceptorBuilder(ctx)) } /// Initiates a server-side TLS session on a stream. pub fn accept(&self, stream: S) -> Result, HandshakeError> where S: Read + Write, { let ssl = Ssl::new(&self.0)?; ssl.accept(stream) } /// Consumes the `SslAcceptor`, returning the inner raw `SslContext`. pub fn into_context(self) -> SslContext { self.0 } /// Returns a shared reference to the inner raw `SslContext`. pub fn context(&self) -> &SslContextRef { &*self.0 } } /// A builder for `SslAcceptor`s. pub struct SslAcceptorBuilder(SslContextBuilder); impl SslAcceptorBuilder { /// Consumes the builder, returning a `SslAcceptor`. pub fn build(self) -> SslAcceptor { SslAcceptor(self.0.build()) } } impl Deref for SslAcceptorBuilder { type Target = SslContextBuilder; fn deref(&self) -> &SslContextBuilder { &self.0 } } impl DerefMut for SslAcceptorBuilder { fn deref_mut(&mut self) -> &mut SslContextBuilder { &mut self.0 } } cfg_if! { if #[cfg(ossl110)] { #[allow(clippy::unnecessary_wraps)] fn setup_curves(_: &mut SslContextBuilder) -> Result<(), ErrorStack> { Ok(()) } } else if #[cfg(any(ossl102, libressl))] { fn setup_curves(ctx: &mut SslContextBuilder) -> Result<(), ErrorStack> { ctx.set_ecdh_auto(true) } } else { fn setup_curves(ctx: &mut SslContextBuilder) -> Result<(), ErrorStack> { use crate::ec::EcKey; use crate::nid::Nid; let curve = EcKey::from_curve_name(Nid::X9_62_PRIME256V1)?; ctx.set_tmp_ecdh(&curve) } } } cfg_if! 
{ if #[cfg(any(ossl102, libressl261))] { fn setup_verify(ctx: &mut SslContextBuilder) { ctx.set_verify(SslVerifyMode::PEER); } fn setup_verify_hostname(ssl: &mut SslRef, domain: &str) -> Result<(), ErrorStack> { use crate::x509::verify::X509CheckFlags; let param = ssl.param_mut(); param.set_hostflags(X509CheckFlags::NO_PARTIAL_WILDCARDS); match domain.parse() { Ok(ip) => param.set_ip(ip), Err(_) => param.set_host(domain), } } } else { fn setup_verify(ctx: &mut SslContextBuilder) { ctx.set_verify_callback(SslVerifyMode::PEER, verify::verify_callback); } fn setup_verify_hostname(ssl: &mut Ssl, domain: &str) -> Result<(), ErrorStack> { let domain = domain.to_string(); let hostname_idx = verify::try_get_hostname_idx()?; ssl.set_ex_data(*hostname_idx, domain); Ok(()) } mod verify { use std::net::IpAddr; use std::str; use once_cell::sync::OnceCell; use crate::error::ErrorStack; use crate::ex_data::Index; use crate::nid::Nid; use crate::ssl::Ssl; use crate::stack::Stack; use crate::x509::{ GeneralName, X509NameRef, X509Ref, X509StoreContext, X509StoreContextRef, X509VerifyResult, }; static HOSTNAME_IDX: OnceCell> = OnceCell::new(); pub fn try_get_hostname_idx() -> Result<&'static Index, ErrorStack> { HOSTNAME_IDX.get_or_try_init(Ssl::new_ex_index) } pub fn verify_callback(preverify_ok: bool, x509_ctx: &mut X509StoreContextRef) -> bool { if !preverify_ok || x509_ctx.error_depth() != 0 { return preverify_ok; } let hostname_idx = try_get_hostname_idx().expect("failed to initialize hostname index"); let ok = match ( x509_ctx.current_cert(), X509StoreContext::ssl_idx() .ok() .and_then(|idx| x509_ctx.ex_data(idx)) .and_then(|ssl| ssl.ex_data(*hostname_idx)), ) { (Some(x509), Some(domain)) => verify_hostname(domain, &x509), _ => true, }; if !ok { x509_ctx.set_error(X509VerifyResult::APPLICATION_VERIFICATION); } ok } fn verify_hostname(domain: &str, cert: &X509Ref) -> bool { match cert.subject_alt_names() { Some(names) => verify_subject_alt_names(domain, names), None => verify_subject_name(domain, &cert.subject_name()), } } fn verify_subject_alt_names(domain: &str, names: Stack) -> bool { let ip = domain.parse(); for name in &names { match ip { Ok(ip) => { if let Some(actual) = name.ipaddress() { if matches_ip(&ip, actual) { return true; } } } Err(_) => { if let Some(pattern) = name.dnsname() { if matches_dns(pattern, domain) { return true; } } } } } false } fn verify_subject_name(domain: &str, subject_name: &X509NameRef) -> bool { match subject_name.entries_by_nid(Nid::COMMONNAME).next() { Some(pattern) => { let pattern = match str::from_utf8(pattern.data().as_slice()) { Ok(pattern) => pattern, Err(_) => return false, }; // Unlike SANs, IP addresses in the subject name don't have a // different encoding. match domain.parse::() { Ok(ip) => pattern .parse::() .ok() .map_or(false, |pattern| pattern == ip), Err(_) => matches_dns(pattern, domain), } } None => false, } } fn matches_dns(mut pattern: &str, mut hostname: &str) -> bool { // first strip trailing . 
off of pattern and hostname to normalize if pattern.ends_with('.') { pattern = &pattern[..pattern.len() - 1]; } if hostname.ends_with('.') { hostname = &hostname[..hostname.len() - 1]; } matches_wildcard(pattern, hostname).unwrap_or_else(|| pattern.eq_ignore_ascii_case(hostname)) } fn matches_wildcard(pattern: &str, hostname: &str) -> Option { let wildcard_location = match pattern.find('*') { Some(l) => l, None => return None, }; let mut dot_idxs = pattern.match_indices('.').map(|(l, _)| l); let wildcard_end = match dot_idxs.next() { Some(l) => l, None => return None, }; // Never match wildcards if the pattern has less than 2 '.'s (no *.com) // // This is a bit dubious, as it doesn't disallow other TLDs like *.co.uk. // Chrome has a black- and white-list for this, but Firefox (via NSS) does // the same thing we do here. // // The Public Suffix (https://www.publicsuffix.org/) list could // potentially be used here, but it's both huge and updated frequently // enough that management would be a PITA. if dot_idxs.next().is_none() { return None; } // Wildcards can only be in the first component, and must be the entire first label if wildcard_location != 0 || wildcard_end != wildcard_location + 1 { return None; } let hostname_label_end = match hostname.find('.') { Some(l) => l, None => return None, }; let pattern_after_wildcard = &pattern[wildcard_end..]; let hostname_after_wildcard = &hostname[hostname_label_end..]; Some(pattern_after_wildcard.eq_ignore_ascii_case(hostname_after_wildcard)) } fn matches_ip(expected: &IpAddr, actual: &[u8]) -> bool { match *expected { IpAddr::V4(ref addr) => actual == addr.octets(), IpAddr::V6(ref addr) => actual == addr.octets(), } } #[test] fn test_dns_match() { use crate::ssl::connector::verify::matches_dns; assert!(matches_dns("website.tld", "website.tld")); // A name should match itself. assert!(matches_dns("website.tld", "wEbSiTe.tLd")); // DNS name matching ignores case of hostname. assert!(matches_dns("wEbSiTe.TlD", "website.tld")); // DNS name matching ignores case of subject. assert!(matches_dns("xn--bcher-kva.tld", "xn--bcher-kva.tld")); // Likewise, nothing special to punycode names. assert!(matches_dns("xn--bcher-kva.tld", "xn--BcHer-Kva.tLd")); // And punycode must be compared similarly case-insensitively. assert!(matches_dns("*.example.com", "subdomain.example.com")); // Wildcard matching works. assert!(matches_dns("*.eXaMpLe.cOm", "subdomain.example.com")); // Wildcard matching ignores case of subject. assert!(matches_dns("*.example.com", "sUbDoMaIn.eXaMpLe.cOm")); // Wildcard matching ignores case of hostname. assert!(!matches_dns("prefix*.example.com", "p.example.com")); // Prefix longer than the label works and does not match. assert!(!matches_dns("*suffix.example.com", "s.example.com")); // Suffix longer than the label works and does not match. assert!(!matches_dns("prefix*.example.com", "prefix.example.com")); // Partial wildcards do not work. assert!(!matches_dns("*suffix.example.com", "suffix.example.com")); // Partial wildcards do not work. assert!(!matches_dns("prefix*.example.com", "prefixdomain.example.com")); // Partial wildcards do not work. assert!(!matches_dns("*suffix.example.com", "domainsuffix.example.com")); // Partial wildcards do not work. assert!(!matches_dns("xn--*.example.com", "subdomain.example.com")); // Punycode domains with wildcard parts do not match. assert!(!matches_dns("xN--*.example.com", "subdomain.example.com")); // And we can't bypass a punycode test with weird casing. 
assert!(!matches_dns("Xn--*.example.com", "subdomain.example.com")); // And we can't bypass a punycode test with weird casing. assert!(!matches_dns("XN--*.example.com", "subdomain.example.com")); // And we can't bypass a punycode test with weird casing. } } } } vendor/openssl/src/symm.rs0000664000175000017500000015221614172417313016466 0ustar mwhudsonmwhudson//! High level interface to certain symmetric ciphers. //! //! # Examples //! //! Encrypt data in AES128 CBC mode //! //! ``` //! use openssl::symm::{encrypt, Cipher}; //! //! let cipher = Cipher::aes_128_cbc(); //! let data = b"Some Crypto Text"; //! let key = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; //! let iv = b"\x00\x01\x02\x03\x04\x05\x06\x07\x00\x01\x02\x03\x04\x05\x06\x07"; //! let ciphertext = encrypt( //! cipher, //! key, //! Some(iv), //! data).unwrap(); //! //! assert_eq!( //! b"\xB4\xB9\xE7\x30\xD6\xD6\xF7\xDE\x77\x3F\x1C\xFF\xB3\x3E\x44\x5A\x91\xD7\x27\x62\x87\x4D\ //! \xFB\x3C\x5E\xC4\x59\x72\x4A\xF4\x7C\xA1", //! &ciphertext[..]); //! ``` //! //! Encrypting an asymmetric key with a symmetric cipher //! //! ``` //! use openssl::rsa::{Padding, Rsa}; //! use openssl::symm::Cipher; //! //! // Generate keypair and encrypt private key: //! let keypair = Rsa::generate(2048).unwrap(); //! let cipher = Cipher::aes_256_cbc(); //! let pubkey_pem = keypair.public_key_to_pem_pkcs1().unwrap(); //! let privkey_pem = keypair.private_key_to_pem_passphrase(cipher, b"Rust").unwrap(); //! // pubkey_pem and privkey_pem could be written to file here. //! //! // Load private and public key from string: //! let pubkey = Rsa::public_key_from_pem_pkcs1(&pubkey_pem).unwrap(); //! let privkey = Rsa::private_key_from_pem_passphrase(&privkey_pem, b"Rust").unwrap(); //! //! // Use the asymmetric keys to encrypt and decrypt a short message: //! let msg = b"Foo bar"; //! let mut encrypted = vec![0; pubkey.size() as usize]; //! let mut decrypted = vec![0; privkey.size() as usize]; //! let len = pubkey.public_encrypt(msg, &mut encrypted, Padding::PKCS1).unwrap(); //! assert!(len > msg.len()); //! let len = privkey.private_decrypt(&encrypted, &mut decrypted, Padding::PKCS1).unwrap(); //! let output_string = String::from_utf8(decrypted[..len].to_vec()).unwrap(); //! assert_eq!("Foo bar", output_string); //! println!("Decrypted: '{}'", output_string); //! ``` use cfg_if::cfg_if; use libc::c_int; use std::cmp; use std::ptr; use crate::error::ErrorStack; use crate::nid::Nid; use crate::{cvt, cvt_p}; #[derive(Copy, Clone)] pub enum Mode { Encrypt, Decrypt, } /// Represents a particular cipher algorithm. /// /// See OpenSSL doc at [`EVP_EncryptInit`] for more information on each algorithms. /// /// [`EVP_EncryptInit`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_EncryptInit.html #[derive(Copy, Clone, PartialEq, Eq)] pub struct Cipher(*const ffi::EVP_CIPHER); impl Cipher { /// Looks up the cipher for a certain nid. /// /// This corresponds to [`EVP_get_cipherbynid`] /// /// [`EVP_get_cipherbynid`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_get_cipherbyname.html pub fn from_nid(nid: Nid) -> Option { let ptr = unsafe { ffi::EVP_get_cipherbyname(ffi::OBJ_nid2sn(nid.as_raw())) }; if ptr.is_null() { None } else { Some(Cipher(ptr)) } } /// Returns the cipher's Nid. 
/// /// This corresponds to [`EVP_CIPHER_nid`] /// /// [`EVP_CIPHER_nid`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_CIPHER_nid.html pub fn nid(&self) -> Nid { let nid = unsafe { ffi::EVP_CIPHER_nid(self.0) }; Nid::from_raw(nid) } pub fn aes_128_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_ecb()) } } pub fn aes_128_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_cbc()) } } pub fn aes_128_xts() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_xts()) } } pub fn aes_128_ctr() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_ctr()) } } pub fn aes_128_cfb1() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_cfb1()) } } pub fn aes_128_cfb128() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_cfb128()) } } pub fn aes_128_cfb8() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_cfb8()) } } pub fn aes_128_gcm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_gcm()) } } pub fn aes_128_ccm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_ccm()) } } pub fn aes_128_ofb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_ofb()) } } /// Requires OpenSSL 1.1.0 or newer. #[cfg(ossl110)] pub fn aes_128_ocb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_128_ocb()) } } pub fn aes_192_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_ecb()) } } pub fn aes_192_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_cbc()) } } pub fn aes_192_ctr() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_ctr()) } } pub fn aes_192_cfb1() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_cfb1()) } } pub fn aes_192_cfb128() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_cfb128()) } } pub fn aes_192_cfb8() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_cfb8()) } } pub fn aes_192_gcm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_gcm()) } } pub fn aes_192_ccm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_ccm()) } } pub fn aes_192_ofb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_ofb()) } } /// Requires OpenSSL 1.1.0 or newer. #[cfg(ossl110)] pub fn aes_192_ocb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_192_ocb()) } } pub fn aes_256_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_ecb()) } } pub fn aes_256_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_cbc()) } } pub fn aes_256_xts() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_xts()) } } pub fn aes_256_ctr() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_ctr()) } } pub fn aes_256_cfb1() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_cfb1()) } } pub fn aes_256_cfb128() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_cfb128()) } } pub fn aes_256_cfb8() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_cfb8()) } } pub fn aes_256_gcm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_gcm()) } } pub fn aes_256_ccm() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_ccm()) } } pub fn aes_256_ofb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_ofb()) } } /// Requires OpenSSL 1.1.0 or newer. 
#[cfg(ossl110)] pub fn aes_256_ocb() -> Cipher { unsafe { Cipher(ffi::EVP_aes_256_ocb()) } } #[cfg(not(osslconf = "OPENSSL_NO_BF"))] pub fn bf_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_bf_cbc()) } } #[cfg(not(osslconf = "OPENSSL_NO_BF"))] pub fn bf_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_bf_ecb()) } } #[cfg(not(osslconf = "OPENSSL_NO_BF"))] pub fn bf_cfb64() -> Cipher { unsafe { Cipher(ffi::EVP_bf_cfb64()) } } #[cfg(not(osslconf = "OPENSSL_NO_BF"))] pub fn bf_ofb() -> Cipher { unsafe { Cipher(ffi::EVP_bf_ofb()) } } pub fn des_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_des_cbc()) } } pub fn des_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_des_ecb()) } } pub fn des_ede3() -> Cipher { unsafe { Cipher(ffi::EVP_des_ede3()) } } pub fn des_ede3_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_des_ede3_cbc()) } } pub fn des_ede3_cfb64() -> Cipher { unsafe { Cipher(ffi::EVP_des_ede3_cfb64()) } } pub fn rc4() -> Cipher { unsafe { Cipher(ffi::EVP_rc4()) } } /// Requires OpenSSL 1.1.0 or newer. #[cfg(all(ossl110, not(osslconf = "OPENSSL_NO_CHACHA")))] pub fn chacha20() -> Cipher { unsafe { Cipher(ffi::EVP_chacha20()) } } /// Requires OpenSSL 1.1.0 or newer. #[cfg(all(ossl110, not(osslconf = "OPENSSL_NO_CHACHA")))] pub fn chacha20_poly1305() -> Cipher { unsafe { Cipher(ffi::EVP_chacha20_poly1305()) } } #[cfg(not(osslconf = "OPENSSL_NO_SEED"))] pub fn seed_cbc() -> Cipher { unsafe { Cipher(ffi::EVP_seed_cbc()) } } #[cfg(not(osslconf = "OPENSSL_NO_SEED"))] pub fn seed_cfb128() -> Cipher { unsafe { Cipher(ffi::EVP_seed_cfb128()) } } #[cfg(not(osslconf = "OPENSSL_NO_SEED"))] pub fn seed_ecb() -> Cipher { unsafe { Cipher(ffi::EVP_seed_ecb()) } } #[cfg(not(osslconf = "OPENSSL_NO_SEED"))] pub fn seed_ofb() -> Cipher { unsafe { Cipher(ffi::EVP_seed_ofb()) } } /// Creates a `Cipher` from a raw pointer to its OpenSSL type. /// /// # Safety /// /// The caller must ensure the pointer is valid for the `'static` lifetime. pub unsafe fn from_ptr(ptr: *const ffi::EVP_CIPHER) -> Cipher { Cipher(ptr) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_ptr(&self) -> *const ffi::EVP_CIPHER { self.0 } /// Returns the length of keys used with this cipher. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn key_len(&self) -> usize { unsafe { EVP_CIPHER_key_length(self.0) as usize } } /// Returns the length of the IV used with this cipher, or `None` if the /// cipher does not use an IV. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn iv_len(&self) -> Option { unsafe { let len = EVP_CIPHER_iv_length(self.0) as usize; if len == 0 { None } else { Some(len) } } } /// Returns the block size of the cipher. /// /// # Note /// /// Stream ciphers such as RC4 have a block size of 1. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn block_size(&self) -> usize { unsafe { EVP_CIPHER_block_size(self.0) as usize } } /// Determines whether the cipher is using CCM mode fn is_ccm(self) -> bool { // NOTE: OpenSSL returns pointers to static structs, which makes this work as expected self == Cipher::aes_128_ccm() || self == Cipher::aes_256_ccm() } /// Determines whether the cipher is using OCB mode #[cfg(ossl110)] fn is_ocb(self) -> bool { self == Cipher::aes_128_ocb() || self == Cipher::aes_192_ocb() || self == Cipher::aes_256_ocb() } #[cfg(not(ossl110))] const fn is_ocb(self) -> bool { false } } unsafe impl Sync for Cipher {} unsafe impl Send for Cipher {} /// Represents a symmetric cipher context. /// /// Padding is enabled by default. 
/// /// # Examples /// /// Encrypt some plaintext in chunks, then decrypt the ciphertext back into plaintext, in AES 128 /// CBC mode. /// /// ``` /// use openssl::symm::{Cipher, Mode, Crypter}; /// /// let plaintexts: [&[u8]; 2] = [b"Some Stream of", b" Crypto Text"]; /// let key = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; /// let iv = b"\x00\x01\x02\x03\x04\x05\x06\x07\x00\x01\x02\x03\x04\x05\x06\x07"; /// let data_len = plaintexts.iter().fold(0, |sum, x| sum + x.len()); /// /// // Create a cipher context for encryption. /// let mut encrypter = Crypter::new( /// Cipher::aes_128_cbc(), /// Mode::Encrypt, /// key, /// Some(iv)).unwrap(); /// /// let block_size = Cipher::aes_128_cbc().block_size(); /// let mut ciphertext = vec![0; data_len + block_size]; /// /// // Encrypt 2 chunks of plaintexts successively. /// let mut count = encrypter.update(plaintexts[0], &mut ciphertext).unwrap(); /// count += encrypter.update(plaintexts[1], &mut ciphertext[count..]).unwrap(); /// count += encrypter.finalize(&mut ciphertext[count..]).unwrap(); /// ciphertext.truncate(count); /// /// assert_eq!( /// b"\x0F\x21\x83\x7E\xB2\x88\x04\xAF\xD9\xCC\xE2\x03\x49\xB4\x88\xF6\xC4\x61\x0E\x32\x1C\xF9\ /// \x0D\x66\xB1\xE6\x2C\x77\x76\x18\x8D\x99", /// &ciphertext[..] /// ); /// /// /// // Let's pretend we don't know the plaintext, and now decrypt the ciphertext. /// let data_len = ciphertext.len(); /// let ciphertexts = [&ciphertext[..9], &ciphertext[9..]]; /// /// // Create a cipher context for decryption. /// let mut decrypter = Crypter::new( /// Cipher::aes_128_cbc(), /// Mode::Decrypt, /// key, /// Some(iv)).unwrap(); /// let mut plaintext = vec![0; data_len + block_size]; /// /// // Decrypt 2 chunks of ciphertexts successively. /// let mut count = decrypter.update(ciphertexts[0], &mut plaintext).unwrap(); /// count += decrypter.update(ciphertexts[1], &mut plaintext[count..]).unwrap(); /// count += decrypter.finalize(&mut plaintext[count..]).unwrap(); /// plaintext.truncate(count); /// /// assert_eq!(b"Some Stream of Crypto Text", &plaintext[..]); /// ``` pub struct Crypter { ctx: *mut ffi::EVP_CIPHER_CTX, block_size: usize, } unsafe impl Sync for Crypter {} unsafe impl Send for Crypter {} impl Crypter { /// Creates a new `Crypter`. The initialisation vector, `iv`, is not necessary for certain /// types of `Cipher`. /// /// # Panics /// /// Panics if an IV is required by the cipher but not provided. Also make sure that the key /// and IV size are appropriate for your cipher. 
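    ///
    /// # Examples
    ///
    /// A minimal sketch creating an AES-128-CBC encrypter; the all-zero key and IV
    /// are placeholders sized for this cipher (16 bytes each):
    ///
    /// ```
    /// use openssl::symm::{Cipher, Crypter, Mode};
    ///
    /// let key = [0u8; 16];
    /// let iv = [0u8; 16];
    /// let crypter = Crypter::new(Cipher::aes_128_cbc(), Mode::Encrypt, &key, Some(&iv)).unwrap();
    /// ```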
pub fn new( t: Cipher, mode: Mode, key: &[u8], iv: Option<&[u8]>, ) -> Result { ffi::init(); unsafe { let ctx = cvt_p(ffi::EVP_CIPHER_CTX_new())?; let crypter = Crypter { ctx, block_size: t.block_size(), }; let mode = match mode { Mode::Encrypt => 1, Mode::Decrypt => 0, }; cvt(ffi::EVP_CipherInit_ex( crypter.ctx, t.as_ptr(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), mode, ))?; assert!(key.len() <= c_int::max_value() as usize); cvt(ffi::EVP_CIPHER_CTX_set_key_length( crypter.ctx, key.len() as c_int, ))?; let key = key.as_ptr() as *mut _; let iv = match (iv, t.iv_len()) { (Some(iv), Some(len)) => { if iv.len() != len { assert!(iv.len() <= c_int::max_value() as usize); cvt(ffi::EVP_CIPHER_CTX_ctrl( crypter.ctx, ffi::EVP_CTRL_GCM_SET_IVLEN, iv.len() as c_int, ptr::null_mut(), ))?; } iv.as_ptr() as *mut _ } (Some(_), None) | (None, None) => ptr::null_mut(), (None, Some(_)) => panic!("an IV is required for this cipher"), }; cvt(ffi::EVP_CipherInit_ex( crypter.ctx, ptr::null(), ptr::null_mut(), key, iv, mode, ))?; Ok(crypter) } } /// Enables or disables padding. /// /// If padding is disabled, total amount of data encrypted/decrypted must /// be a multiple of the cipher's block size. pub fn pad(&mut self, padding: bool) { unsafe { ffi::EVP_CIPHER_CTX_set_padding(self.ctx, padding as c_int); } } /// Sets the tag used to authenticate ciphertext in AEAD ciphers such as AES GCM. /// /// When decrypting cipher text using an AEAD cipher, this must be called before `finalize`. pub fn set_tag(&mut self, tag: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(tag.len() <= c_int::max_value() as usize); // NB: this constant is actually more general than just GCM. cvt(ffi::EVP_CIPHER_CTX_ctrl( self.ctx, ffi::EVP_CTRL_GCM_SET_TAG, tag.len() as c_int, tag.as_ptr() as *mut _, )) .map(|_| ()) } } /// Sets the length of the authentication tag to generate in AES CCM. /// /// When encrypting with AES CCM, the tag length needs to be explicitly set in order /// to use a value different than the default 12 bytes. pub fn set_tag_len(&mut self, tag_len: usize) -> Result<(), ErrorStack> { unsafe { assert!(tag_len <= c_int::max_value() as usize); // NB: this constant is actually more general than just GCM. cvt(ffi::EVP_CIPHER_CTX_ctrl( self.ctx, ffi::EVP_CTRL_GCM_SET_TAG, tag_len as c_int, ptr::null_mut(), )) .map(|_| ()) } } /// Feeds total plaintext length to the cipher. /// /// The total plaintext or ciphertext length MUST be passed to the cipher when it operates in /// CCM mode. pub fn set_data_len(&mut self, data_len: usize) -> Result<(), ErrorStack> { unsafe { assert!(data_len <= c_int::max_value() as usize); let mut len = 0; cvt(ffi::EVP_CipherUpdate( self.ctx, ptr::null_mut(), &mut len, ptr::null_mut(), data_len as c_int, )) .map(|_| ()) } } /// Feeds Additional Authenticated Data (AAD) through the cipher. /// /// This can only be used with AEAD ciphers such as AES GCM. Data fed in is not encrypted, but /// is factored into the authentication tag. It must be called before the first call to /// `update`. pub fn aad_update(&mut self, input: &[u8]) -> Result<(), ErrorStack> { unsafe { assert!(input.len() <= c_int::max_value() as usize); let mut len = 0; cvt(ffi::EVP_CipherUpdate( self.ctx, ptr::null_mut(), &mut len, input.as_ptr(), input.len() as c_int, )) .map(|_| ()) } } /// Feeds data from `input` through the cipher, writing encrypted/decrypted /// bytes into `output`. /// /// The number of bytes written to `output` is returned. Note that this may /// not be equal to the length of `input`. 
/// /// # Panics /// /// Panics for stream ciphers if `output.len() < input.len()`. /// /// Panics for block ciphers if `output.len() < input.len() + block_size`, /// where `block_size` is the block size of the cipher (see `Cipher::block_size`). /// /// Panics if `output.len() > c_int::max_value()`. pub fn update(&mut self, input: &[u8], output: &mut [u8]) -> Result { unsafe { let block_size = if self.block_size > 1 { self.block_size } else { 0 }; assert!(output.len() >= input.len() + block_size); assert!(output.len() <= c_int::max_value() as usize); let mut outl = output.len() as c_int; let inl = input.len() as c_int; cvt(ffi::EVP_CipherUpdate( self.ctx, output.as_mut_ptr(), &mut outl, input.as_ptr(), inl, ))?; Ok(outl as usize) } } /// Finishes the encryption/decryption process, writing any remaining data /// to `output`. /// /// The number of bytes written to `output` is returned. /// /// `update` should not be called after this method. /// /// # Panics /// /// Panics for block ciphers if `output.len() < block_size`, /// where `block_size` is the block size of the cipher (see `Cipher::block_size`). pub fn finalize(&mut self, output: &mut [u8]) -> Result { unsafe { if self.block_size > 1 { assert!(output.len() >= self.block_size); } let mut outl = cmp::min(output.len(), c_int::max_value() as usize) as c_int; cvt(ffi::EVP_CipherFinal( self.ctx, output.as_mut_ptr(), &mut outl, ))?; Ok(outl as usize) } } /// Retrieves the authentication tag used to authenticate ciphertext in AEAD ciphers such /// as AES GCM. /// /// When encrypting data with an AEAD cipher, this must be called after `finalize`. /// /// The size of the buffer indicates the required size of the tag. While some ciphers support a /// range of tag sizes, it is recommended to pick the maximum size. For AES GCM, this is 16 /// bytes, for example. pub fn get_tag(&self, tag: &mut [u8]) -> Result<(), ErrorStack> { unsafe { assert!(tag.len() <= c_int::max_value() as usize); cvt(ffi::EVP_CIPHER_CTX_ctrl( self.ctx, ffi::EVP_CTRL_GCM_GET_TAG, tag.len() as c_int, tag.as_mut_ptr() as *mut _, )) .map(|_| ()) } } } impl Drop for Crypter { fn drop(&mut self) { unsafe { ffi::EVP_CIPHER_CTX_free(self.ctx); } } } /// Encrypts data in one go, and returns the encrypted data. /// /// Data is encrypted using the specified cipher type `t` in encrypt mode with the specified `key` /// and initialization vector `iv`. Padding is enabled. /// /// This is a convenient interface to `Crypter` to encrypt all data in one go. To encrypt a stream /// of data increamentally , use `Crypter` instead. /// /// # Examples /// /// Encrypt data in AES128 CBC mode /// /// ``` /// use openssl::symm::{encrypt, Cipher}; /// /// let cipher = Cipher::aes_128_cbc(); /// let data = b"Some Crypto Text"; /// let key = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; /// let iv = b"\x00\x01\x02\x03\x04\x05\x06\x07\x00\x01\x02\x03\x04\x05\x06\x07"; /// let ciphertext = encrypt( /// cipher, /// key, /// Some(iv), /// data).unwrap(); /// /// assert_eq!( /// b"\xB4\xB9\xE7\x30\xD6\xD6\xF7\xDE\x77\x3F\x1C\xFF\xB3\x3E\x44\x5A\x91\xD7\x27\x62\x87\x4D\ /// \xFB\x3C\x5E\xC4\x59\x72\x4A\xF4\x7C\xA1", /// &ciphertext[..]); /// ``` pub fn encrypt( t: Cipher, key: &[u8], iv: Option<&[u8]>, data: &[u8], ) -> Result, ErrorStack> { cipher(t, Mode::Encrypt, key, iv, data) } /// Decrypts data in one go, and returns the decrypted data. /// /// Data is decrypted using the specified cipher type `t` in decrypt mode with the specified `key` /// and initialization vector `iv`. 
Padding is enabled. /// /// This is a convenient interface to `Crypter` to decrypt all data in one go. To decrypt a stream /// of data increamentally , use `Crypter` instead. /// /// # Examples /// /// Decrypt data in AES128 CBC mode /// /// ``` /// use openssl::symm::{decrypt, Cipher}; /// /// let cipher = Cipher::aes_128_cbc(); /// let data = b"\xB4\xB9\xE7\x30\xD6\xD6\xF7\xDE\x77\x3F\x1C\xFF\xB3\x3E\x44\x5A\x91\xD7\x27\x62\ /// \x87\x4D\xFB\x3C\x5E\xC4\x59\x72\x4A\xF4\x7C\xA1"; /// let key = b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"; /// let iv = b"\x00\x01\x02\x03\x04\x05\x06\x07\x00\x01\x02\x03\x04\x05\x06\x07"; /// let ciphertext = decrypt( /// cipher, /// key, /// Some(iv), /// data).unwrap(); /// /// assert_eq!( /// b"Some Crypto Text", /// &ciphertext[..]); /// ``` pub fn decrypt( t: Cipher, key: &[u8], iv: Option<&[u8]>, data: &[u8], ) -> Result, ErrorStack> { cipher(t, Mode::Decrypt, key, iv, data) } fn cipher( t: Cipher, mode: Mode, key: &[u8], iv: Option<&[u8]>, data: &[u8], ) -> Result, ErrorStack> { let mut c = Crypter::new(t, mode, key, iv)?; let mut out = vec![0; data.len() + t.block_size()]; let count = c.update(data, &mut out)?; let rest = c.finalize(&mut out[count..])?; out.truncate(count + rest); Ok(out) } /// Like `encrypt`, but for AEAD ciphers such as AES GCM. /// /// Additional Authenticated Data can be provided in the `aad` field, and the authentication tag /// will be copied into the `tag` field. /// /// The size of the `tag` buffer indicates the required size of the tag. While some ciphers support /// a range of tag sizes, it is recommended to pick the maximum size. For AES GCM, this is 16 bytes, /// for example. pub fn encrypt_aead( t: Cipher, key: &[u8], iv: Option<&[u8]>, aad: &[u8], data: &[u8], tag: &mut [u8], ) -> Result, ErrorStack> { let mut c = Crypter::new(t, Mode::Encrypt, key, iv)?; let mut out = vec![0; data.len() + t.block_size()]; let is_ccm = t.is_ccm(); if is_ccm || t.is_ocb() { c.set_tag_len(tag.len())?; if is_ccm { c.set_data_len(data.len())?; } } c.aad_update(aad)?; let count = c.update(data, &mut out)?; let rest = c.finalize(&mut out[count..])?; c.get_tag(tag)?; out.truncate(count + rest); Ok(out) } /// Like `decrypt`, but for AEAD ciphers such as AES GCM. /// /// Additional Authenticated Data can be provided in the `aad` field, and the authentication tag /// should be provided in the `tag` field. pub fn decrypt_aead( t: Cipher, key: &[u8], iv: Option<&[u8]>, aad: &[u8], data: &[u8], tag: &[u8], ) -> Result, ErrorStack> { let mut c = Crypter::new(t, Mode::Decrypt, key, iv)?; let mut out = vec![0; data.len() + t.block_size()]; let is_ccm = t.is_ccm(); if is_ccm || t.is_ocb() { c.set_tag(tag)?; if is_ccm { c.set_data_len(data.len())?; } } c.aad_update(aad)?; let count = c.update(data, &mut out)?; let rest = if t.is_ccm() { 0 } else { c.set_tag(tag)?; c.finalize(&mut out[count..])? }; out.truncate(count + rest); Ok(out) } cfg_if! 
{ if #[cfg(any(ossl110, libressl273))] { use ffi::{EVP_CIPHER_block_size, EVP_CIPHER_iv_length, EVP_CIPHER_key_length}; } else { #[allow(bad_style)] pub unsafe fn EVP_CIPHER_iv_length(ptr: *const ffi::EVP_CIPHER) -> c_int { (*ptr).iv_len } #[allow(bad_style)] pub unsafe fn EVP_CIPHER_block_size(ptr: *const ffi::EVP_CIPHER) -> c_int { (*ptr).block_size } #[allow(bad_style)] pub unsafe fn EVP_CIPHER_key_length(ptr: *const ffi::EVP_CIPHER) -> c_int { (*ptr).key_len } } } #[cfg(test)] mod tests { use super::*; use hex::{self, FromHex}; #[test] fn test_stream_cipher_output() { let key = [0u8; 16]; let iv = [0u8; 16]; let mut c = super::Crypter::new( super::Cipher::aes_128_ctr(), super::Mode::Encrypt, &key, Some(&iv), ) .unwrap(); assert_eq!(c.update(&[0u8; 15], &mut [0u8; 15]).unwrap(), 15); assert_eq!(c.update(&[0u8; 1], &mut [0u8; 1]).unwrap(), 1); assert_eq!(c.finalize(&mut [0u8; 0]).unwrap(), 0); } // Test vectors from FIPS-197: // http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf #[test] fn test_aes_256_ecb() { let k0 = [ 0x00u8, 0x01u8, 0x02u8, 0x03u8, 0x04u8, 0x05u8, 0x06u8, 0x07u8, 0x08u8, 0x09u8, 0x0au8, 0x0bu8, 0x0cu8, 0x0du8, 0x0eu8, 0x0fu8, 0x10u8, 0x11u8, 0x12u8, 0x13u8, 0x14u8, 0x15u8, 0x16u8, 0x17u8, 0x18u8, 0x19u8, 0x1au8, 0x1bu8, 0x1cu8, 0x1du8, 0x1eu8, 0x1fu8, ]; let p0 = [ 0x00u8, 0x11u8, 0x22u8, 0x33u8, 0x44u8, 0x55u8, 0x66u8, 0x77u8, 0x88u8, 0x99u8, 0xaau8, 0xbbu8, 0xccu8, 0xddu8, 0xeeu8, 0xffu8, ]; let c0 = [ 0x8eu8, 0xa2u8, 0xb7u8, 0xcau8, 0x51u8, 0x67u8, 0x45u8, 0xbfu8, 0xeau8, 0xfcu8, 0x49u8, 0x90u8, 0x4bu8, 0x49u8, 0x60u8, 0x89u8, ]; let mut c = super::Crypter::new( super::Cipher::aes_256_ecb(), super::Mode::Encrypt, &k0, None, ) .unwrap(); c.pad(false); let mut r0 = vec![0; c0.len() + super::Cipher::aes_256_ecb().block_size()]; let count = c.update(&p0, &mut r0).unwrap(); let rest = c.finalize(&mut r0[count..]).unwrap(); r0.truncate(count + rest); assert_eq!(hex::encode(&r0), hex::encode(c0)); let mut c = super::Crypter::new( super::Cipher::aes_256_ecb(), super::Mode::Decrypt, &k0, None, ) .unwrap(); c.pad(false); let mut p1 = vec![0; r0.len() + super::Cipher::aes_256_ecb().block_size()]; let count = c.update(&r0, &mut p1).unwrap(); let rest = c.finalize(&mut p1[count..]).unwrap(); p1.truncate(count + rest); assert_eq!(hex::encode(p1), hex::encode(p0)); } #[test] fn test_aes_256_cbc_decrypt() { let iv = [ 4_u8, 223_u8, 153_u8, 219_u8, 28_u8, 142_u8, 234_u8, 68_u8, 227_u8, 69_u8, 98_u8, 107_u8, 208_u8, 14_u8, 236_u8, 60_u8, ]; let data = [ 143_u8, 210_u8, 75_u8, 63_u8, 214_u8, 179_u8, 155_u8, 241_u8, 242_u8, 31_u8, 154_u8, 56_u8, 198_u8, 145_u8, 192_u8, 64_u8, 2_u8, 245_u8, 167_u8, 220_u8, 55_u8, 119_u8, 233_u8, 136_u8, 139_u8, 27_u8, 71_u8, 242_u8, 119_u8, 175_u8, 65_u8, 207_u8, ]; let ciphered_data = [ 0x4a_u8, 0x2e_u8, 0xe5_u8, 0x6_u8, 0xbf_u8, 0xcf_u8, 0xf2_u8, 0xd7_u8, 0xea_u8, 0x2d_u8, 0xb1_u8, 0x85_u8, 0x6c_u8, 0x93_u8, 0x65_u8, 0x6f_u8, ]; let mut cr = super::Crypter::new( super::Cipher::aes_256_cbc(), super::Mode::Decrypt, &data, Some(&iv), ) .unwrap(); cr.pad(false); let mut unciphered_data = vec![0; data.len() + super::Cipher::aes_256_cbc().block_size()]; let count = cr.update(&ciphered_data, &mut unciphered_data).unwrap(); let rest = cr.finalize(&mut unciphered_data[count..]).unwrap(); unciphered_data.truncate(count + rest); let expected_unciphered_data = b"I love turtles.\x01"; assert_eq!(&unciphered_data, expected_unciphered_data); } fn cipher_test(ciphertype: super::Cipher, pt: &str, ct: &str, key: &str, iv: &str) { let pt = 
Vec::from_hex(pt).unwrap(); let ct = Vec::from_hex(ct).unwrap(); let key = Vec::from_hex(key).unwrap(); let iv = Vec::from_hex(iv).unwrap(); let computed = super::decrypt(ciphertype, &key, Some(&iv), &ct).unwrap(); let expected = pt; if computed != expected { println!("Computed: {}", hex::encode(&computed)); println!("Expected: {}", hex::encode(&expected)); if computed.len() != expected.len() { println!( "Lengths differ: {} in computed vs {} expected", computed.len(), expected.len() ); } panic!("test failure"); } } fn cipher_test_nopad(ciphertype: super::Cipher, pt: &str, ct: &str, key: &str, iv: &str) { let pt = Vec::from_hex(pt).unwrap(); let ct = Vec::from_hex(ct).unwrap(); let key = Vec::from_hex(key).unwrap(); let iv = Vec::from_hex(iv).unwrap(); let computed = { let mut c = Crypter::new(ciphertype, Mode::Decrypt, &key, Some(&iv)).unwrap(); c.pad(false); let mut out = vec![0; ct.len() + ciphertype.block_size()]; let count = c.update(&ct, &mut out).unwrap(); let rest = c.finalize(&mut out[count..]).unwrap(); out.truncate(count + rest); out }; let expected = pt; if computed != expected { println!("Computed: {}", hex::encode(&computed)); println!("Expected: {}", hex::encode(&expected)); if computed.len() != expected.len() { println!( "Lengths differ: {} in computed vs {} expected", computed.len(), expected.len() ); } panic!("test failure"); } } #[test] #[cfg_attr(ossl300, ignore)] fn test_rc4() { let pt = "0000000000000000000000000000000000000000000000000000000000000000000000000000"; let ct = "A68686B04D686AA107BD8D4CAB191A3EEC0A6294BC78B60F65C25CB47BD7BB3A48EFC4D26BE4"; let key = "97CD440324DA5FD1F7955C1C13B6B466"; let iv = ""; cipher_test(super::Cipher::rc4(), pt, ct, key, iv); } #[test] fn test_aes256_xts() { // Test case 174 from // http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip let pt = "77f4ef63d734ebd028508da66c22cdebdd52ecd6ee2ab0a50bc8ad0cfd692ca5fcd4e6dedc45df7f\ 6503f462611dc542"; let ct = "ce7d905a7776ac72f240d22aafed5e4eb7566cdc7211220e970da634ce015f131a5ecb8d400bc9e8\ 4f0b81d8725dbbc7"; let key = "b6bfef891f83b5ff073f2231267be51eb084b791fa19a154399c0684c8b2dfcb37de77d28bbda3b\ 4180026ad640b74243b3133e7b9fae629403f6733423dae28"; let iv = "db200efb7eaaa737dbdf40babb68953f"; cipher_test(super::Cipher::aes_256_xts(), pt, ct, key, iv); } #[test] fn test_aes128_ctr() { let pt = "6BC1BEE22E409F96E93D7E117393172AAE2D8A571E03AC9C9EB76FAC45AF8E5130C81C46A35CE411\ E5FBC1191A0A52EFF69F2445DF4F9B17AD2B417BE66C3710"; let ct = "874D6191B620E3261BEF6864990DB6CE9806F66B7970FDFF8617187BB9FFFDFF5AE4DF3EDBD5D35E\ 5B4F09020DB03EAB1E031DDA2FBE03D1792170A0F3009CEE"; let key = "2B7E151628AED2A6ABF7158809CF4F3C"; let iv = "F0F1F2F3F4F5F6F7F8F9FAFBFCFDFEFF"; cipher_test(super::Cipher::aes_128_ctr(), pt, ct, key, iv); } #[test] fn test_aes128_cfb1() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1"; let ct = "68b3"; let key = "2b7e151628aed2a6abf7158809cf4f3c"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_128_cfb1(), pt, ct, key, iv); } #[test] fn test_aes128_cfb128() { let pt = "6bc1bee22e409f96e93d7e117393172a"; let ct = "3b3fd92eb72dad20333449f8e83cfb4a"; let key = "2b7e151628aed2a6abf7158809cf4f3c"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_128_cfb128(), pt, ct, key, iv); } #[test] fn test_aes128_cfb8() { let pt = "6bc1bee22e409f96e93d7e117393172aae2d"; let ct = "3b79424c9c0dd436bace9e0ed4586a4f32b9"; let key = "2b7e151628aed2a6abf7158809cf4f3c"; let iv = 
"000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_128_cfb8(), pt, ct, key, iv); } #[test] fn test_aes128_ofb() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d8a571e03ac9c9eb76fac45af8e5130c81c46a35ce411e5fbc1191a0a52eff69f2445df4f9b17ad2b417be66c3710"; let ct = "3b3fd92eb72dad20333449f8e83cfb4a7789508d16918f03f53c52dac54ed8259740051e9c5fecf64344f7a82260edcc304c6528f659c77866a510d9c1d6ae5e"; let key = "2b7e151628aed2a6abf7158809cf4f3c"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_128_ofb(), pt, ct, key, iv); } #[test] fn test_aes192_ctr() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d8a571e03ac9c9eb76fac45af8e5130c81c46a35ce411e5fbc1191a0a52eff69f2445df4f9b17ad2b417be66c3710"; let ct = "1abc932417521ca24f2b0459fe7e6e0b090339ec0aa6faefd5ccc2c6f4ce8e941e36b26bd1ebc670d1bd1d665620abf74f78a7f6d29809585a97daec58c6b050"; let key = "8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b"; let iv = "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff"; cipher_test(super::Cipher::aes_192_ctr(), pt, ct, key, iv); } #[test] fn test_aes192_cfb1() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1"; let ct = "9359"; let key = "8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_192_cfb1(), pt, ct, key, iv); } #[test] fn test_aes192_cfb128() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d8a571e03ac9c9eb76fac45af8e5130c81c46a35ce411e5fbc1191a0a52eff69f2445df4f9b17ad2b417be66c3710"; let ct = "cdc80d6fddf18cab34c25909c99a417467ce7f7f81173621961a2b70171d3d7a2e1e8a1dd59b88b1c8e60fed1efac4c9c05f9f9ca9834fa042ae8fba584b09ff"; let key = "8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_192_cfb128(), pt, ct, key, iv); } #[test] fn test_aes192_cfb8() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d"; let ct = "cda2521ef0a905ca44cd057cbf0d47a0678a"; let key = "8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_192_cfb8(), pt, ct, key, iv); } #[test] fn test_aes192_ofb() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d8a571e03ac9c9eb76fac45af8e5130c81c46a35ce411e5fbc1191a0a52eff69f2445df4f9b17ad2b417be66c3710"; let ct = "cdc80d6fddf18cab34c25909c99a4174fcc28b8d4c63837c09e81700c11004018d9a9aeac0f6596f559c6d4daf59a5f26d9f200857ca6c3e9cac524bd9acc92a"; let key = "8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_192_ofb(), pt, ct, key, iv); } #[test] fn test_aes256_cfb1() { let pt = "6bc1"; let ct = "9029"; let key = "603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_256_cfb1(), pt, ct, key, iv); } #[test] fn test_aes256_cfb128() { let pt = "6bc1bee22e409f96e93d7e117393172a"; let ct = "dc7e84bfda79164b7ecd8486985d3860"; let key = "603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4"; let iv = "000102030405060708090a0b0c0d0e0f"; 
cipher_test(super::Cipher::aes_256_cfb128(), pt, ct, key, iv); } #[test] fn test_aes256_cfb8() { let pt = "6bc1bee22e409f96e93d7e117393172aae2d"; let ct = "dc1f1a8520a64db55fcc8ac554844e889700"; let key = "603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_256_cfb8(), pt, ct, key, iv); } #[test] fn test_aes256_ofb() { // Lifted from http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf let pt = "6bc1bee22e409f96e93d7e117393172aae2d8a571e03ac9c9eb76fac45af8e5130c81c46a35ce411e5fbc1191a0a52eff69f2445df4f9b17ad2b417be66c3710"; let ct = "dc7e84bfda79164b7ecd8486985d38604febdc6740d20b3ac88f6ad82a4fb08d71ab47a086e86eedf39d1c5bba97c4080126141d67f37be8538f5a8be740e484"; let key = "603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4"; let iv = "000102030405060708090a0b0c0d0e0f"; cipher_test(super::Cipher::aes_256_ofb(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_bf_cbc() { // https://www.schneier.com/code/vectors.txt let pt = "37363534333231204E6F77206973207468652074696D6520666F722000000000"; let ct = "6B77B4D63006DEE605B156E27403979358DEB9E7154616D959F1652BD5FF92CC"; let key = "0123456789ABCDEFF0E1D2C3B4A59687"; let iv = "FEDCBA9876543210"; cipher_test_nopad(super::Cipher::bf_cbc(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_bf_ecb() { let pt = "5CD54CA83DEF57DA"; let ct = "B1B8CC0B250F09A0"; let key = "0131D9619DC1376E"; let iv = "0000000000000000"; cipher_test_nopad(super::Cipher::bf_ecb(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_bf_cfb64() { let pt = "37363534333231204E6F77206973207468652074696D6520666F722000"; let ct = "E73214A2822139CAF26ECF6D2EB9E76E3DA3DE04D1517200519D57A6C3"; let key = "0123456789ABCDEFF0E1D2C3B4A59687"; let iv = "FEDCBA9876543210"; cipher_test_nopad(super::Cipher::bf_cfb64(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_bf_ofb() { let pt = "37363534333231204E6F77206973207468652074696D6520666F722000"; let ct = "E73214A2822139CA62B343CC5B65587310DD908D0C241B2263C2CF80DA"; let key = "0123456789ABCDEFF0E1D2C3B4A59687"; let iv = "FEDCBA9876543210"; cipher_test_nopad(super::Cipher::bf_ofb(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_des_cbc() { let pt = "54686973206973206120746573742e"; let ct = "6f2867cfefda048a4046ef7e556c7132"; let key = "7cb66337f3d3c0fe"; let iv = "0001020304050607"; cipher_test(super::Cipher::des_cbc(), pt, ct, key, iv); } #[test] #[cfg_attr(ossl300, ignore)] fn test_des_ecb() { let pt = "54686973206973206120746573742e"; let ct = "0050ab8aecec758843fe157b4dde938c"; let key = "7cb66337f3d3c0fe"; let iv = "0001020304050607"; cipher_test(super::Cipher::des_ecb(), pt, ct, key, iv); } #[test] fn test_des_ede3() { let pt = "9994f4c69d40ae4f34ff403b5cf39d4c8207ea5d3e19a5fd"; let ct = "9e5c4297d60582f81071ac8ab7d0698d4c79de8b94c519858207ea5d3e19a5fd"; let key = "010203040506070801020304050607080102030405060708"; let iv = "5cc118306dc702e4"; cipher_test(super::Cipher::des_ede3(), pt, ct, key, iv); } #[test] fn test_des_ede3_cbc() { let pt = "54686973206973206120746573742e"; let ct = "6f2867cfefda048a4046ef7e556c7132"; let key = "7cb66337f3d3c0fe7cb66337f3d3c0fe7cb66337f3d3c0fe"; let iv = "0001020304050607"; cipher_test(super::Cipher::des_ede3_cbc(), pt, ct, key, iv); } #[test] fn test_des_ede3_cfb64() { let pt = "2b1773784b5889dc788477367daa98ad"; let ct = "6f2867cfefda048a4046ef7e556c7132"; let key = 
"7cb66337f3d3c0fe7cb66337f3d3c0fe7cb66337f3d3c0fe"; let iv = "0001020304050607"; cipher_test(super::Cipher::des_ede3_cfb64(), pt, ct, key, iv); } #[test] fn test_aes128_gcm() { let key = "23dc8d23d95b6fd1251741a64f7d4f41"; let iv = "f416f48ad44d9efa1179e167"; let pt = "6cb9b71dd0ccd42cdf87e8e396fc581fd8e0d700e360f590593b748e105390de"; let aad = "45074844c97d515c65bbe37c210a5a4b08c21c588efe5c5f73c4d9c17d34dacddc0bb6a8a53f7bf477b9780c1c2a928660df87016b2873fe876b2b887fb5886bfd63216b7eaecc046372a82c047eb043f0b063226ee52a12c69b"; let ct = "8ad20486778e87387efb3f2574e509951c0626816722018129e578b2787969d3"; let tag = "91e1bc09"; // this tag is smaller than you'd normally want, but I pulled this test from the part of // the NIST test vectors that cover 4 byte tags. let mut actual_tag = [0; 4]; let out = encrypt_aead( Cipher::aes_128_gcm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(pt).unwrap(), &mut actual_tag, ) .unwrap(); assert_eq!(ct, hex::encode(out)); assert_eq!(tag, hex::encode(actual_tag)); let out = decrypt_aead( Cipher::aes_128_gcm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ) .unwrap(); assert_eq!(pt, hex::encode(out)); } #[test] fn test_aes128_ccm() { let key = "3ee186594f110fb788a8bf8aa8be5d4a"; let nonce = "44f705d52acf27b7f17196aa9b"; let aad = "2c16724296ff85e079627be3053ea95adf35722c21886baba343bd6c79b5cb57"; let pt = "d71864877f2578db092daba2d6a1f9f4698a9c356c7830a1"; let ct = "b4dd74e7a0cc51aea45dfb401a41d5822c96901a83247ea0"; let tag = "d6965f5aa6e31302a9cc2b36"; let mut actual_tag = [0; 12]; let out = encrypt_aead( Cipher::aes_128_ccm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(pt).unwrap(), &mut actual_tag, ) .unwrap(); assert_eq!(ct, hex::encode(out)); assert_eq!(tag, hex::encode(actual_tag)); let out = decrypt_aead( Cipher::aes_128_ccm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ) .unwrap(); assert_eq!(pt, hex::encode(out)); } #[test] fn test_aes128_ccm_verify_fail() { let key = "3ee186594f110fb788a8bf8aa8be5d4a"; let nonce = "44f705d52acf27b7f17196aa9b"; let aad = "2c16724296ff85e079627be3053ea95adf35722c21886baba343bd6c79b5cb57"; let ct = "b4dd74e7a0cc51aea45dfb401a41d5822c96901a83247ea0"; let tag = "00005f5aa6e31302a9cc2b36"; let out = decrypt_aead( Cipher::aes_128_ccm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ); assert!(out.is_err()); } #[test] fn test_aes256_ccm() { let key = "7f4af6765cad1d511db07e33aaafd57646ec279db629048aa6770af24849aa0d"; let nonce = "dde2a362ce81b2b6913abc3095"; let aad = "404f5df97ece7431987bc098cce994fc3c063b519ffa47b0365226a0015ef695"; let pt = "7ebef26bf4ecf6f0ebb2eb860edbf900f27b75b4a6340fdb"; let ct = "353022db9c568bd7183a13c40b1ba30fcc768c54264aa2cd"; let tag = "2927a053c9244d3217a7ad05"; let mut actual_tag = [0; 12]; let out = encrypt_aead( Cipher::aes_256_ccm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(pt).unwrap(), &mut actual_tag, ) .unwrap(); assert_eq!(ct, hex::encode(out)); assert_eq!(tag, hex::encode(actual_tag)); let out = decrypt_aead( Cipher::aes_256_ccm(), 
&Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ) .unwrap(); assert_eq!(pt, hex::encode(out)); } #[test] fn test_aes256_ccm_verify_fail() { let key = "7f4af6765cad1d511db07e33aaafd57646ec279db629048aa6770af24849aa0d"; let nonce = "dde2a362ce81b2b6913abc3095"; let aad = "404f5df97ece7431987bc098cce994fc3c063b519ffa47b0365226a0015ef695"; let ct = "353022db9c568bd7183a13c40b1ba30fcc768c54264aa2cd"; let tag = "0000a053c9244d3217a7ad05"; let out = decrypt_aead( Cipher::aes_256_ccm(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(nonce).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ); assert!(out.is_err()); } #[test] #[cfg(ossl110)] fn test_aes_128_ocb() { let key = "000102030405060708090a0b0c0d0e0f"; let aad = "0001020304050607"; let tag = "16dc76a46d47e1ead537209e8a96d14e"; let iv = "000102030405060708090a0b"; let pt = "0001020304050607"; let ct = "92b657130a74b85a"; let mut actual_tag = [0; 16]; let out = encrypt_aead( Cipher::aes_128_ocb(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(pt).unwrap(), &mut actual_tag, ) .unwrap(); assert_eq!(ct, hex::encode(out)); assert_eq!(tag, hex::encode(actual_tag)); let out = decrypt_aead( Cipher::aes_128_ocb(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ) .unwrap(); assert_eq!(pt, hex::encode(out)); } #[test] #[cfg(ossl110)] fn test_aes_128_ocb_fail() { let key = "000102030405060708090a0b0c0d0e0f"; let aad = "0001020304050607"; let tag = "16dc76a46d47e1ead537209e8a96d14e"; let iv = "000000000405060708090a0b"; let ct = "92b657130a74b85a"; let out = decrypt_aead( Cipher::aes_128_ocb(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ); assert!(out.is_err()); } #[test] #[cfg(any(ossl110))] fn test_chacha20() { let key = "0000000000000000000000000000000000000000000000000000000000000000"; let iv = "00000000000000000000000000000000"; let pt = "000000000000000000000000000000000000000000000000000000000000000000000000000000000\ 00000000000000000000000000000000000000000000000"; let ct = "76b8e0ada0f13d90405d6ae55386bd28bdd219b8a08ded1aa836efcc8b770dc7da41597c5157488d7\ 724e03fb8d84a376a43b8f41518a11cc387b669b2ee6586"; cipher_test(Cipher::chacha20(), pt, ct, key, iv); } #[test] #[cfg(any(ossl110))] fn test_chacha20_poly1305() { let key = "808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f"; let iv = "070000004041424344454647"; let aad = "50515253c0c1c2c3c4c5c6c7"; let pt = "4c616469657320616e642047656e746c656d656e206f662074686520636c617373206f66202739393\ a204966204920636f756c64206f6666657220796f75206f6e6c79206f6e652074697020666f722074\ 6865206675747572652c2073756e73637265656e20776f756c642062652069742e"; let ct = "d31a8d34648e60db7b86afbc53ef7ec2a4aded51296e08fea9e2b5a736ee62d63dbea45e8ca967128\ 2fafb69da92728b1a71de0a9e060b2905d6a5b67ecd3b3692ddbd7f2d778b8c9803aee328091b58fa\ b324e4fad675945585808b4831d7bc3ff4def08e4b7a9de576d26586cec64b6116"; let tag = "1ae10b594f09e26a7e902ecbd0600691"; let mut actual_tag = [0; 16]; let out = encrypt_aead( Cipher::chacha20_poly1305(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(pt).unwrap(), &mut actual_tag, ) 
.unwrap(); assert_eq!(ct, hex::encode(out)); assert_eq!(tag, hex::encode(actual_tag)); let out = decrypt_aead( Cipher::chacha20_poly1305(), &Vec::from_hex(key).unwrap(), Some(&Vec::from_hex(iv).unwrap()), &Vec::from_hex(aad).unwrap(), &Vec::from_hex(ct).unwrap(), &Vec::from_hex(tag).unwrap(), ) .unwrap(); assert_eq!(pt, hex::encode(out)); } #[test] #[cfg(not(any(osslconf = "OPENSSL_NO_SEED", ossl300)))] fn test_seed_cbc() { let pt = "5363686f6b6f6c6164656e6b756368656e0a"; let ct = "c2edf0fb2eb11bf7b2f39417a8528896d34b24b6fd79e5923b116dfcd2aba5a4"; let key = "41414141414141414141414141414141"; let iv = "41414141414141414141414141414141"; cipher_test(super::Cipher::seed_cbc(), pt, ct, key, iv); } #[test] #[cfg(not(any(osslconf = "OPENSSL_NO_SEED", ossl300)))] fn test_seed_cfb128() { let pt = "5363686f6b6f6c6164656e6b756368656e0a"; let ct = "71d4d25fc1750cb7789259e7f34061939a41"; let key = "41414141414141414141414141414141"; let iv = "41414141414141414141414141414141"; cipher_test(super::Cipher::seed_cfb128(), pt, ct, key, iv); } #[test] #[cfg(not(any(osslconf = "OPENSSL_NO_SEED", ossl300)))] fn test_seed_ecb() { let pt = "5363686f6b6f6c6164656e6b756368656e0a"; let ct = "0263a9cd498cf0edb0ef72a3231761d00ce601f7d08ad19ad74f0815f2c77f7e"; let key = "41414141414141414141414141414141"; let iv = "41414141414141414141414141414141"; cipher_test(super::Cipher::seed_ecb(), pt, ct, key, iv); } #[test] #[cfg(not(any(osslconf = "OPENSSL_NO_SEED", ossl300)))] fn test_seed_ofb() { let pt = "5363686f6b6f6c6164656e6b756368656e0a"; let ct = "71d4d25fc1750cb7789259e7f34061930afd"; let key = "41414141414141414141414141414141"; let iv = "41414141414141414141414141414141"; cipher_test(super::Cipher::seed_ofb(), pt, ct, key, iv); } } vendor/openssl/src/ex_data.rs0000664000175000017500000000144314160055207017076 0ustar mwhudsonmwhudsonuse libc::c_int; use std::marker::PhantomData; /// A slot in a type's "extra data" structure. /// /// It is parameterized over the type containing the extra data as well as the /// type of the data in the slot. pub struct Index(c_int, PhantomData<(T, U)>); impl Copy for Index {} impl Clone for Index { fn clone(&self) -> Index { *self } } impl Index { /// Creates an `Index` from a raw integer index. /// /// # Safety /// /// The caller must ensure that the index correctly maps to a `U` value stored in a `T`. pub unsafe fn from_raw(idx: c_int) -> Index { Index(idx, PhantomData) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } vendor/openssl/src/pkey.rs0000664000175000017500000010674214172417313016454 0ustar mwhudsonmwhudson//! Public/private key processing. //! //! Asymmetric public key algorithms solve the problem of establishing and sharing //! secret keys to securely send and receive messages. //! This system uses a pair of keys: a public key, which can be freely //! distributed, and a private key, which is kept to oneself. An entity may //! encrypt information using a user's public key. The encrypted information can //! only be deciphered using that user's private key. //! //! This module offers support for five popular algorithms: //! //! * RSA //! //! * DSA //! //! * Diffie-Hellman //! //! * Elliptic Curves //! //! * HMAC //! //! These algorithms rely on hard mathematical problems - namely integer factorization, //! discrete logarithms, and elliptic curve relationships - that currently do not //! yield efficient solutions. This property ensures the security of these //! cryptographic algorithms. //! //! # Example //! //! 
Generate a 2048-bit RSA public/private key pair and print the public key. //! //! ```rust //! use openssl::rsa::Rsa; //! use openssl::pkey::PKey; //! use std::str; //! //! let rsa = Rsa::generate(2048).unwrap(); //! let pkey = PKey::from_rsa(rsa).unwrap(); //! //! let pub_key: Vec = pkey.public_key_to_pem().unwrap(); //! println!("{:?}", str::from_utf8(pub_key.as_slice()).unwrap()); //! ``` use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::{c_int, c_long}; use std::convert::TryFrom; use std::ffi::CString; use std::fmt; use std::mem; use std::ptr; use crate::bio::{MemBio, MemBioSlice}; use crate::dh::Dh; use crate::dsa::Dsa; use crate::ec::EcKey; use crate::error::ErrorStack; use crate::rsa::Rsa; use crate::symm::Cipher; use crate::util::{invoke_passwd_cb, CallbackState}; use crate::{cvt, cvt_p}; /// A tag type indicating that a key only has parameters. pub enum Params {} /// A tag type indicating that a key only has public components. pub enum Public {} /// A tag type indicating that a key has private components. pub enum Private {} /// An identifier of a kind of key. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct Id(c_int); impl Id { pub const RSA: Id = Id(ffi::EVP_PKEY_RSA); pub const HMAC: Id = Id(ffi::EVP_PKEY_HMAC); pub const DSA: Id = Id(ffi::EVP_PKEY_DSA); pub const DH: Id = Id(ffi::EVP_PKEY_DH); pub const EC: Id = Id(ffi::EVP_PKEY_EC); #[cfg(ossl111)] pub const ED25519: Id = Id(ffi::EVP_PKEY_ED25519); #[cfg(ossl111)] pub const ED448: Id = Id(ffi::EVP_PKEY_ED448); #[cfg(ossl111)] pub const X25519: Id = Id(ffi::EVP_PKEY_X25519); #[cfg(ossl111)] pub const X448: Id = Id(ffi::EVP_PKEY_X448); /// Creates a `Id` from an integer representation. pub fn from_raw(value: c_int) -> Id { Id(value) } /// Returns the integer representation of the `Id`. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } /// A trait indicating that a key has parameters. pub unsafe trait HasParams {} unsafe impl HasParams for Params {} unsafe impl HasParams for T where T: HasPublic {} /// A trait indicating that a key has public components. pub unsafe trait HasPublic {} unsafe impl HasPublic for Public {} unsafe impl HasPublic for T where T: HasPrivate {} /// A trait indicating that a key has private components. pub unsafe trait HasPrivate {} unsafe impl HasPrivate for Private {} generic_foreign_type_and_impl_send_sync! { type CType = ffi::EVP_PKEY; fn drop = ffi::EVP_PKEY_free; /// A public or private key. pub struct PKey; /// Reference to `PKey`. pub struct PKeyRef; } impl ToOwned for PKeyRef { type Owned = PKey; fn to_owned(&self) -> PKey { unsafe { EVP_PKEY_up_ref(self.as_ptr()); PKey::from_ptr(self.as_ptr()) } } } impl PKeyRef { /// Returns a copy of the internal RSA key. /// /// This corresponds to [`EVP_PKEY_get1_RSA`]. /// /// [`EVP_PKEY_get1_RSA`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_get1_RSA.html pub fn rsa(&self) -> Result, ErrorStack> { unsafe { let rsa = cvt_p(ffi::EVP_PKEY_get1_RSA(self.as_ptr()))?; Ok(Rsa::from_ptr(rsa)) } } /// Returns a copy of the internal DSA key. /// /// This corresponds to [`EVP_PKEY_get1_DSA`]. /// /// [`EVP_PKEY_get1_DSA`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_get1_DSA.html pub fn dsa(&self) -> Result, ErrorStack> { unsafe { let dsa = cvt_p(ffi::EVP_PKEY_get1_DSA(self.as_ptr()))?; Ok(Dsa::from_ptr(dsa)) } } /// Returns a copy of the internal DH key. /// /// This corresponds to [`EVP_PKEY_get1_DH`]. 
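    /// A minimal sketch of round-tripping DH parameters through a `PKey` (the
    /// `"dhparams.pem"` path below is a placeholder for PEM-encoded DH parameters):
    ///
    /// ```no_run
    /// use openssl::dh::Dh;
    /// use openssl::pkey::PKey;
    ///
    /// // Load DH parameters from a placeholder path, wrap them in a PKey, then pull them back out.
    /// let pem = std::fs::read("dhparams.pem").unwrap();
    /// let params = Dh::params_from_pem(&pem).unwrap();
    /// let pkey = PKey::from_dh(params).unwrap();
    /// let dh = pkey.dh().unwrap();
    /// ```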
/// /// [`EVP_PKEY_get1_DH`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_get1_DH.html pub fn dh(&self) -> Result, ErrorStack> { unsafe { let dh = cvt_p(ffi::EVP_PKEY_get1_DH(self.as_ptr()))?; Ok(Dh::from_ptr(dh)) } } /// Returns a copy of the internal elliptic curve key. /// /// This corresponds to [`EVP_PKEY_get1_EC_KEY`]. /// /// [`EVP_PKEY_get1_EC_KEY`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_get1_EC_KEY.html pub fn ec_key(&self) -> Result, ErrorStack> { unsafe { let ec_key = cvt_p(ffi::EVP_PKEY_get1_EC_KEY(self.as_ptr()))?; Ok(EcKey::from_ptr(ec_key)) } } /// Returns the `Id` that represents the type of this key. /// /// This corresponds to [`EVP_PKEY_id`]. /// /// [`EVP_PKEY_id`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_id.html pub fn id(&self) -> Id { unsafe { Id::from_raw(ffi::EVP_PKEY_id(self.as_ptr())) } } /// Returns the maximum size of a signature in bytes. /// /// This corresponds to [`EVP_PKEY_size`]. /// /// [`EVP_PKEY_size`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_size.html pub fn size(&self) -> usize { unsafe { ffi::EVP_PKEY_size(self.as_ptr()) as usize } } } impl PKeyRef where T: HasPublic, { to_pem! { /// Serializes the public key into a PEM-encoded SubjectPublicKeyInfo structure. /// /// The output will have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_write_bio_PUBKEY`]. /// /// [`PEM_write_bio_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_PUBKEY.html public_key_to_pem, ffi::PEM_write_bio_PUBKEY } to_der! { /// Serializes the public key into a DER-encoded SubjectPublicKeyInfo structure. /// /// This corresponds to [`i2d_PUBKEY`]. /// /// [`i2d_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_PUBKEY.html public_key_to_der, ffi::i2d_PUBKEY } /// Returns the size of the key. /// /// This corresponds to the bit length of the modulus of an RSA key, and the bit length of the /// group order for an elliptic curve key, for example. pub fn bits(&self) -> u32 { unsafe { ffi::EVP_PKEY_bits(self.as_ptr()) as u32 } } /// Compares the public component of this key with another. pub fn public_eq(&self, other: &PKeyRef) -> bool where U: HasPublic, { unsafe { ffi::EVP_PKEY_cmp(self.as_ptr(), other.as_ptr()) == 1 } } /// Raw byte representation of a public key /// /// This function only works for algorithms that support raw public keys. /// Currently this is: X25519, ED25519, X448 or ED448 /// /// This corresponds to [`EVP_PKEY_get_raw_public_key`]. /// /// [`EVP_PKEY_get_raw_public_key`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_get_raw_public_key.html #[cfg(ossl111)] pub fn raw_public_key(&self) -> Result, ErrorStack> { unsafe { let mut len = 0; cvt(ffi::EVP_PKEY_get_raw_public_key( self.as_ptr(), ptr::null_mut(), &mut len, ))?; let mut buf = vec![0u8; len]; cvt(ffi::EVP_PKEY_get_raw_public_key( self.as_ptr(), buf.as_mut_ptr(), &mut len, ))?; buf.truncate(len); Ok(buf) } } } impl PKeyRef where T: HasPrivate, { private_key_to_pem! { /// Serializes the private key to a PEM-encoded PKCS#8 PrivateKeyInfo structure. /// /// The output will have a header of `-----BEGIN PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_PKCS8PrivateKey`]. /// /// [`PEM_write_bio_PKCS8PrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_write_bio_PKCS8PrivateKey.html private_key_to_pem_pkcs8, /// Serializes the private key to a PEM-encoded PKCS#8 EncryptedPrivateKeyInfo structure. /// /// The output will have a header of `-----BEGIN ENCRYPTED PRIVATE KEY-----`. 
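        /// A brief sketch of writing out an encrypted PKCS#8 PEM (the RSA key size, the
        /// cipher, and the passphrase below are illustrative choices, not recommendations):
        ///
        /// ```no_run
        /// use openssl::pkey::PKey;
        /// use openssl::rsa::Rsa;
        /// use openssl::symm::Cipher;
        ///
        /// // Generate a throwaway RSA key and serialize it as an encrypted PKCS#8 PEM.
        /// let key = PKey::from_rsa(Rsa::generate(2048).unwrap()).unwrap();
        /// let pem = key
        ///     .private_key_to_pem_pkcs8_passphrase(Cipher::aes_256_cbc(), b"correct horse")
        ///     .unwrap();
        /// assert!(pem.starts_with(b"-----BEGIN ENCRYPTED PRIVATE KEY-----"));
        /// ```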
/// /// This corresponds to [`PEM_write_bio_PKCS8PrivateKey`]. /// /// [`PEM_write_bio_PKCS8PrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_write_bio_PKCS8PrivateKey.html private_key_to_pem_pkcs8_passphrase, ffi::PEM_write_bio_PKCS8PrivateKey } to_der! { /// Serializes the private key to a DER-encoded key type specific format. /// /// This corresponds to [`i2d_PrivateKey`]. /// /// [`i2d_PrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/i2d_PrivateKey.html private_key_to_der, ffi::i2d_PrivateKey } /// Raw byte representation of a private key /// /// This function only works for algorithms that support raw private keys. /// Currently this is: HMAC, X25519, ED25519, X448 or ED448 /// /// This corresponds to [`EVP_PKEY_get_raw_private_key`]. /// /// [`EVP_PKEY_get_raw_private_key`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_get_raw_private_key.html #[cfg(ossl111)] pub fn raw_private_key(&self) -> Result, ErrorStack> { unsafe { let mut len = 0; cvt(ffi::EVP_PKEY_get_raw_private_key( self.as_ptr(), ptr::null_mut(), &mut len, ))?; let mut buf = vec![0u8; len]; cvt(ffi::EVP_PKEY_get_raw_private_key( self.as_ptr(), buf.as_mut_ptr(), &mut len, ))?; buf.truncate(len); Ok(buf) } } /// Serializes a private key into a DER-formatted PKCS#8, using the supplied password to /// encrypt the key. /// /// # Panics /// /// Panics if `passphrase` contains an embedded null. pub fn private_key_to_pkcs8_passphrase( &self, cipher: Cipher, passphrase: &[u8], ) -> Result, ErrorStack> { unsafe { let bio = MemBio::new()?; let len = passphrase.len(); let passphrase = CString::new(passphrase).unwrap(); cvt(ffi::i2d_PKCS8PrivateKey_bio( bio.as_ptr(), self.as_ptr(), cipher.as_ptr(), passphrase.as_ptr() as *const _ as *mut _, len as ::libc::c_int, None, ptr::null_mut(), ))?; Ok(bio.get_buf().to_owned()) } } } impl fmt::Debug for PKey { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { let alg = match self.id() { Id::RSA => "RSA", Id::HMAC => "HMAC", Id::DSA => "DSA", Id::DH => "DH", Id::EC => "EC", #[cfg(ossl111)] Id::ED25519 => "Ed25519", #[cfg(ossl111)] Id::ED448 => "Ed448", _ => "unknown", }; fmt.debug_struct("PKey").field("algorithm", &alg).finish() // TODO: Print details for each specific type of key } } impl Clone for PKey { fn clone(&self) -> PKey { PKeyRef::to_owned(self) } } impl PKey { /// Creates a new `PKey` containing an RSA key. /// /// This corresponds to [`EVP_PKEY_assign_RSA`]. /// /// [`EVP_PKEY_assign_RSA`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_assign_RSA.html pub fn from_rsa(rsa: Rsa) -> Result, ErrorStack> { unsafe { let evp = cvt_p(ffi::EVP_PKEY_new())?; let pkey = PKey::from_ptr(evp); cvt(ffi::EVP_PKEY_assign( pkey.0, ffi::EVP_PKEY_RSA, rsa.as_ptr() as *mut _, ))?; mem::forget(rsa); Ok(pkey) } } /// Creates a new `PKey` containing a DSA key. /// /// This corresponds to [`EVP_PKEY_assign_DSA`]. /// /// [`EVP_PKEY_assign_DSA`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_assign_DSA.html pub fn from_dsa(dsa: Dsa) -> Result, ErrorStack> { unsafe { let evp = cvt_p(ffi::EVP_PKEY_new())?; let pkey = PKey::from_ptr(evp); cvt(ffi::EVP_PKEY_assign( pkey.0, ffi::EVP_PKEY_DSA, dsa.as_ptr() as *mut _, ))?; mem::forget(dsa); Ok(pkey) } } /// Creates a new `PKey` containing a Diffie-Hellman key. /// /// This corresponds to [`EVP_PKEY_assign_DH`]. 
/// /// [`EVP_PKEY_assign_DH`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_assign_DH.html pub fn from_dh(dh: Dh) -> Result, ErrorStack> { unsafe { let evp = cvt_p(ffi::EVP_PKEY_new())?; let pkey = PKey::from_ptr(evp); cvt(ffi::EVP_PKEY_assign( pkey.0, ffi::EVP_PKEY_DH, dh.as_ptr() as *mut _, ))?; mem::forget(dh); Ok(pkey) } } /// Creates a new `PKey` containing an elliptic curve key. /// /// This corresponds to [`EVP_PKEY_assign_EC_KEY`]. /// /// [`EVP_PKEY_assign_EC_KEY`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_assign_EC_KEY.html pub fn from_ec_key(ec_key: EcKey) -> Result, ErrorStack> { unsafe { let evp = cvt_p(ffi::EVP_PKEY_new())?; let pkey = PKey::from_ptr(evp); cvt(ffi::EVP_PKEY_assign( pkey.0, ffi::EVP_PKEY_EC, ec_key.as_ptr() as *mut _, ))?; mem::forget(ec_key); Ok(pkey) } } } impl PKey { /// Creates a new `PKey` containing an HMAC key. /// /// # Note /// /// To compute HMAC values, use the `sign` module. pub fn hmac(key: &[u8]) -> Result, ErrorStack> { unsafe { assert!(key.len() <= c_int::max_value() as usize); let key = cvt_p(ffi::EVP_PKEY_new_mac_key( ffi::EVP_PKEY_HMAC, ptr::null_mut(), key.as_ptr() as *const _, key.len() as c_int, ))?; Ok(PKey::from_ptr(key)) } } /// Creates a new `PKey` containing a CMAC key. /// /// Requires OpenSSL 1.1.0 or newer. /// /// # Note /// /// To compute CMAC values, use the `sign` module. #[cfg(ossl110)] #[allow(clippy::trivially_copy_pass_by_ref)] pub fn cmac(cipher: &Cipher, key: &[u8]) -> Result, ErrorStack> { unsafe { assert!(key.len() <= c_int::max_value() as usize); let kctx = cvt_p(ffi::EVP_PKEY_CTX_new_id( ffi::EVP_PKEY_CMAC, ptr::null_mut(), ))?; let ret = (|| { cvt(ffi::EVP_PKEY_keygen_init(kctx))?; // Set cipher for cmac cvt(ffi::EVP_PKEY_CTX_ctrl( kctx, -1, ffi::EVP_PKEY_OP_KEYGEN, ffi::EVP_PKEY_CTRL_CIPHER, 0, cipher.as_ptr() as *mut _, ))?; // Set the key data cvt(ffi::EVP_PKEY_CTX_ctrl( kctx, -1, ffi::EVP_PKEY_OP_KEYGEN, ffi::EVP_PKEY_CTRL_SET_MAC_KEY, key.len() as c_int, key.as_ptr() as *mut _, ))?; Ok(()) })(); if let Err(e) = ret { // Free memory ffi::EVP_PKEY_CTX_free(kctx); return Err(e); } // Generate key let mut key = ptr::null_mut(); let ret = cvt(ffi::EVP_PKEY_keygen(kctx, &mut key)); // Free memory ffi::EVP_PKEY_CTX_free(kctx); if let Err(e) = ret { return Err(e); } Ok(PKey::from_ptr(key)) } } #[cfg(ossl111)] fn generate_eddsa(nid: c_int) -> Result, ErrorStack> { unsafe { let kctx = cvt_p(ffi::EVP_PKEY_CTX_new_id(nid, ptr::null_mut()))?; let ret = cvt(ffi::EVP_PKEY_keygen_init(kctx)); if let Err(e) = ret { ffi::EVP_PKEY_CTX_free(kctx); return Err(e); } let mut key = ptr::null_mut(); let ret = cvt(ffi::EVP_PKEY_keygen(kctx, &mut key)); ffi::EVP_PKEY_CTX_free(kctx); if let Err(e) = ret { return Err(e); } Ok(PKey::from_ptr(key)) } } /// Generates a new private Ed25519 key #[cfg(ossl111)] pub fn generate_x25519() -> Result, ErrorStack> { PKey::generate_eddsa(ffi::EVP_PKEY_X25519) } /// Generates a new private Ed448 key #[cfg(ossl111)] pub fn generate_x448() -> Result, ErrorStack> { PKey::generate_eddsa(ffi::EVP_PKEY_X448) } /// Generates a new private Ed25519 key #[cfg(ossl111)] pub fn generate_ed25519() -> Result, ErrorStack> { PKey::generate_eddsa(ffi::EVP_PKEY_ED25519) } /// Generates a new private Ed448 key #[cfg(ossl111)] pub fn generate_ed448() -> Result, ErrorStack> { PKey::generate_eddsa(ffi::EVP_PKEY_ED448) } /// Generates a new EC key using the provided curve. /// /// This corresponds to [`EVP_EC_gen`]. /// /// Requires OpenSSL 3.0.0 or newer. 
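    /// A minimal sketch mirroring this crate's own `ec_gen` test (the curve name is a
    /// standard OpenSSL curve identifier; `"prime256v1"` is just an example):
    ///
    /// ```no_run
    /// use openssl::pkey::PKey;
    ///
    /// // Generate a P-256 key and confirm it can be viewed as an EC key.
    /// let key = PKey::ec_gen("prime256v1").unwrap();
    /// assert!(key.ec_key().is_ok());
    /// ```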
/// /// [`EVP_EC_gen`]: https://www.openssl.org/docs/manmaster/man3/EVP_EC_gen.html #[cfg(ossl300)] pub fn ec_gen(curve: &str) -> Result, ErrorStack> { let curve = CString::new(curve).unwrap(); unsafe { let ptr = cvt_p(ffi::EVP_EC_gen(curve.as_ptr()))?; Ok(PKey::from_ptr(ptr)) } } private_key_from_pem! { /// Deserializes a private key from a PEM-encoded key type specific format. /// /// This corresponds to [`PEM_read_bio_PrivateKey`]. /// /// [`PEM_read_bio_PrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_PrivateKey.html private_key_from_pem, /// Deserializes a private key from a PEM-encoded encrypted key type specific format. /// /// This corresponds to [`PEM_read_bio_PrivateKey`]. /// /// [`PEM_read_bio_PrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_PrivateKey.html private_key_from_pem_passphrase, /// Deserializes a private key from a PEM-encoded encrypted key type specific format. /// /// The callback should fill the password into the provided buffer and return its length. /// /// This corresponds to [`PEM_read_bio_PrivateKey`]. /// /// [`PEM_read_bio_PrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_PrivateKey.html private_key_from_pem_callback, PKey, ffi::PEM_read_bio_PrivateKey } from_der! { /// Decodes a DER-encoded private key. /// /// This function will automatically attempt to detect the underlying key format, and /// supports the unencrypted PKCS#8 PrivateKeyInfo structures as well as key type specific /// formats. /// /// This corresponds to [`d2i_AutoPrivateKey`]. /// /// [`d2i_AutoPrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_AutoPrivateKey.html private_key_from_der, PKey, ffi::d2i_AutoPrivateKey } /// Deserializes a DER-formatted PKCS#8 unencrypted private key. /// /// This method is mainly for interoperability reasons. Encrypted keyfiles should be preferred. pub fn private_key_from_pkcs8(der: &[u8]) -> Result, ErrorStack> { unsafe { ffi::init(); let len = der.len().min(c_long::max_value() as usize) as c_long; let p8inf = cvt_p(ffi::d2i_PKCS8_PRIV_KEY_INFO( ptr::null_mut(), &mut der.as_ptr(), len, ))?; let res = cvt_p(ffi::EVP_PKCS82PKEY(p8inf)).map(|p| PKey::from_ptr(p)); ffi::PKCS8_PRIV_KEY_INFO_free(p8inf); res } } /// Deserializes a DER-formatted PKCS#8 private key, using a callback to retrieve the password /// if the key is encrpyted. /// /// The callback should copy the password into the provided buffer and return the number of /// bytes written. pub fn private_key_from_pkcs8_callback( der: &[u8], callback: F, ) -> Result, ErrorStack> where F: FnOnce(&mut [u8]) -> Result, { unsafe { ffi::init(); let mut cb = CallbackState::new(callback); let bio = MemBioSlice::new(der)?; cvt_p(ffi::d2i_PKCS8PrivateKey_bio( bio.as_ptr(), ptr::null_mut(), Some(invoke_passwd_cb::), &mut cb as *mut _ as *mut _, )) .map(|p| PKey::from_ptr(p)) } } /// Deserializes a DER-formatted PKCS#8 private key, using the supplied password if the key is /// encrypted. /// /// # Panics /// /// Panics if `passphrase` contains an embedded null. 
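    /// # Examples
    ///
    /// A hedged sketch; `"encrypted.pk8"` is a placeholder path for a DER-encoded,
    /// encrypted PKCS#8 file and `"mypass"` for its passphrase:
    ///
    /// ```no_run
    /// use openssl::pkey::PKey;
    ///
    /// // Read the encrypted DER blob and decrypt it with the supplied passphrase.
    /// let der = std::fs::read("encrypted.pk8").unwrap();
    /// let key = PKey::private_key_from_pkcs8_passphrase(&der, b"mypass").unwrap();
    /// ```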
pub fn private_key_from_pkcs8_passphrase( der: &[u8], passphrase: &[u8], ) -> Result, ErrorStack> { unsafe { ffi::init(); let bio = MemBioSlice::new(der)?; let passphrase = CString::new(passphrase).unwrap(); cvt_p(ffi::d2i_PKCS8PrivateKey_bio( bio.as_ptr(), ptr::null_mut(), None, passphrase.as_ptr() as *const _ as *mut _, )) .map(|p| PKey::from_ptr(p)) } } /// Creates a private key from its raw byte representation /// /// Algorithm types that support raw private keys are HMAC, X25519, ED25519, X448 or ED448 /// /// This corresponds to [`EVP_PKEY_new_raw_private_key`]. /// /// [`EVP_PKEY_new_raw_private_key`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_new_raw_private_key.html #[cfg(ossl111)] pub fn private_key_from_raw_bytes( bytes: &[u8], key_type: Id, ) -> Result, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::EVP_PKEY_new_raw_private_key( key_type.as_raw(), ptr::null_mut(), bytes.as_ptr(), bytes.len(), )) .map(|p| PKey::from_ptr(p)) } } } impl PKey { from_pem! { /// Decodes a PEM-encoded SubjectPublicKeyInfo structure. /// /// The input should have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_read_bio_PUBKEY`]. /// /// [`PEM_read_bio_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_PUBKEY.html public_key_from_pem, PKey, ffi::PEM_read_bio_PUBKEY } from_der! { /// Decodes a DER-encoded SubjectPublicKeyInfo structure. /// /// This corresponds to [`d2i_PUBKEY`]. /// /// [`d2i_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_PUBKEY.html public_key_from_der, PKey, ffi::d2i_PUBKEY } /// Creates a public key from its raw byte representation /// /// Algorithm types that support raw public keys are X25519, ED25519, X448 or ED448 /// /// This corresponds to [`EVP_PKEY_new_raw_public_key`]. /// /// [`EVP_PKEY_new_raw_public_key`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_new_raw_public_key.html #[cfg(ossl111)] pub fn public_key_from_raw_bytes( bytes: &[u8], key_type: Id, ) -> Result, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::EVP_PKEY_new_raw_public_key( key_type.as_raw(), ptr::null_mut(), bytes.as_ptr(), bytes.len(), )) .map(|p| PKey::from_ptr(p)) } } } cfg_if! 
{ if #[cfg(any(ossl110, libressl270))] { use ffi::EVP_PKEY_up_ref; } else { #[allow(bad_style)] unsafe extern "C" fn EVP_PKEY_up_ref(pkey: *mut ffi::EVP_PKEY) { ffi::CRYPTO_add_lock( &mut (*pkey).references, 1, ffi::CRYPTO_LOCK_EVP_PKEY, "pkey.rs\0".as_ptr() as *const _, line!() as c_int, ); } } } impl TryFrom> for PKey { type Error = ErrorStack; fn try_from(ec_key: EcKey) -> Result, ErrorStack> { PKey::from_ec_key(ec_key) } } impl TryFrom> for EcKey { type Error = ErrorStack; fn try_from(pkey: PKey) -> Result, ErrorStack> { pkey.ec_key() } } impl TryFrom> for PKey { type Error = ErrorStack; fn try_from(rsa: Rsa) -> Result, ErrorStack> { PKey::from_rsa(rsa) } } impl TryFrom> for Rsa { type Error = ErrorStack; fn try_from(pkey: PKey) -> Result, ErrorStack> { pkey.rsa() } } impl TryFrom> for PKey { type Error = ErrorStack; fn try_from(dsa: Dsa) -> Result, ErrorStack> { PKey::from_dsa(dsa) } } impl TryFrom> for Dsa { type Error = ErrorStack; fn try_from(pkey: PKey) -> Result, ErrorStack> { pkey.dsa() } } impl TryFrom> for PKey { type Error = ErrorStack; fn try_from(dh: Dh) -> Result, ErrorStack> { PKey::from_dh(dh) } } impl TryFrom> for Dh { type Error = ErrorStack; fn try_from(pkey: PKey) -> Result, ErrorStack> { pkey.dh() } } #[cfg(test)] mod tests { use std::convert::TryInto; use crate::dh::Dh; use crate::dsa::Dsa; use crate::ec::EcKey; use crate::nid::Nid; use crate::rsa::Rsa; use crate::symm::Cipher; use super::*; #[cfg(ossl111)] use crate::rand::rand_bytes; #[test] fn test_to_password() { let rsa = Rsa::generate(2048).unwrap(); let pkey = PKey::from_rsa(rsa).unwrap(); let pem = pkey .private_key_to_pem_pkcs8_passphrase(Cipher::aes_128_cbc(), b"foobar") .unwrap(); PKey::private_key_from_pem_passphrase(&pem, b"foobar").unwrap(); assert!(PKey::private_key_from_pem_passphrase(&pem, b"fizzbuzz").is_err()); } #[test] fn test_unencrypted_pkcs8() { let key = include_bytes!("../test/pkcs8-nocrypt.der"); PKey::private_key_from_pkcs8(key).unwrap(); } #[test] fn test_encrypted_pkcs8_passphrase() { let key = include_bytes!("../test/pkcs8.der"); PKey::private_key_from_pkcs8_passphrase(key, b"mypass").unwrap(); let rsa = Rsa::generate(2048).unwrap(); let pkey = PKey::from_rsa(rsa).unwrap(); let der = pkey .private_key_to_pkcs8_passphrase(Cipher::aes_128_cbc(), b"mypass") .unwrap(); let pkey2 = PKey::private_key_from_pkcs8_passphrase(&der, b"mypass").unwrap(); assert_eq!( pkey.private_key_to_der().unwrap(), pkey2.private_key_to_der().unwrap() ); } #[test] fn test_encrypted_pkcs8_callback() { let mut password_queried = false; let key = include_bytes!("../test/pkcs8.der"); PKey::private_key_from_pkcs8_callback(key, |password| { password_queried = true; password[..6].copy_from_slice(b"mypass"); Ok(6) }) .unwrap(); assert!(password_queried); } #[test] fn test_private_key_from_pem() { let key = include_bytes!("../test/key.pem"); PKey::private_key_from_pem(key).unwrap(); } #[test] fn test_public_key_from_pem() { let key = include_bytes!("../test/key.pem.pub"); PKey::public_key_from_pem(key).unwrap(); } #[test] fn test_public_key_from_der() { let key = include_bytes!("../test/key.der.pub"); PKey::public_key_from_der(key).unwrap(); } #[test] fn test_private_key_from_der() { let key = include_bytes!("../test/key.der"); PKey::private_key_from_der(key).unwrap(); } #[test] fn test_pem() { let key = include_bytes!("../test/key.pem"); let key = PKey::private_key_from_pem(key).unwrap(); let priv_key = key.private_key_to_pem_pkcs8().unwrap(); let pub_key = key.public_key_to_pem().unwrap(); // As a super-simple 
verification, just check that the buffers contain // the `PRIVATE KEY` or `PUBLIC KEY` strings. assert!(priv_key.windows(11).any(|s| s == b"PRIVATE KEY")); assert!(pub_key.windows(10).any(|s| s == b"PUBLIC KEY")); } #[test] fn test_rsa_accessor() { let rsa = Rsa::generate(2048).unwrap(); let pkey = PKey::from_rsa(rsa).unwrap(); pkey.rsa().unwrap(); assert_eq!(pkey.id(), Id::RSA); assert!(pkey.dsa().is_err()); } #[test] fn test_dsa_accessor() { let dsa = Dsa::generate(2048).unwrap(); let pkey = PKey::from_dsa(dsa).unwrap(); pkey.dsa().unwrap(); assert_eq!(pkey.id(), Id::DSA); assert!(pkey.rsa().is_err()); } #[test] fn test_dh_accessor() { let dh = include_bytes!("../test/dhparams.pem"); let dh = Dh::params_from_pem(dh).unwrap(); let pkey = PKey::from_dh(dh).unwrap(); pkey.dh().unwrap(); assert_eq!(pkey.id(), Id::DH); assert!(pkey.rsa().is_err()); } #[test] fn test_ec_key_accessor() { let ec_key = EcKey::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let pkey = PKey::from_ec_key(ec_key).unwrap(); pkey.ec_key().unwrap(); assert_eq!(pkey.id(), Id::EC); assert!(pkey.rsa().is_err()); } #[test] fn test_rsa_conversion() { let rsa = Rsa::generate(2048).unwrap(); let pkey: PKey = rsa.clone().try_into().unwrap(); let rsa_: Rsa = pkey.try_into().unwrap(); // Eq is missing assert_eq!(rsa.p(), rsa_.p()); assert_eq!(rsa.q(), rsa_.q()); } #[test] fn test_dsa_conversion() { let dsa = Dsa::generate(2048).unwrap(); let pkey: PKey = dsa.clone().try_into().unwrap(); let dsa_: Dsa = pkey.try_into().unwrap(); // Eq is missing assert_eq!(dsa.priv_key(), dsa_.priv_key()); } #[test] fn test_ec_key_conversion() { let group = crate::ec::EcGroup::from_curve_name(crate::nid::Nid::X9_62_PRIME256V1).unwrap(); let ec_key = EcKey::generate(&group).unwrap(); let pkey: PKey = ec_key.clone().try_into().unwrap(); let ec_key_: EcKey = pkey.try_into().unwrap(); // Eq is missing assert_eq!(ec_key.private_key(), ec_key_.private_key()); } #[test] fn test_dh_conversion() { let dh_params = include_bytes!("../test/dhparams.pem"); let dh_params = Dh::params_from_pem(dh_params).unwrap(); let dh = dh_params.generate_key().unwrap(); // Clone is missing for Dh, save the parameters let p = dh.prime_p().to_owned().unwrap(); let q = dh.prime_q().map(|q| q.to_owned().unwrap()); let g = dh.generator().to_owned().unwrap(); let pkey: PKey = dh.try_into().unwrap(); let dh_: Dh = pkey.try_into().unwrap(); // Eq is missing assert_eq!(&p, dh_.prime_p()); assert_eq!(q, dh_.prime_q().map(|q| q.to_owned().unwrap())); assert_eq!(&g, dh_.generator()); } #[cfg(ossl111)] fn test_raw_public_key(gen: fn() -> Result, ErrorStack>, key_type: Id) { // Generate a new key let key = gen().unwrap(); // Get the raw bytes, and create a new key from the raw bytes let raw = key.raw_public_key().unwrap(); let from_raw = PKey::public_key_from_raw_bytes(&raw, key_type).unwrap(); // Compare the der encoding of the original and raw / restored public key assert_eq!( key.public_key_to_der().unwrap(), from_raw.public_key_to_der().unwrap() ); } #[cfg(ossl111)] fn test_raw_private_key(gen: fn() -> Result, ErrorStack>, key_type: Id) { // Generate a new key let key = gen().unwrap(); // Get the raw bytes, and create a new key from the raw bytes let raw = key.raw_private_key().unwrap(); let from_raw = PKey::private_key_from_raw_bytes(&raw, key_type).unwrap(); // Compare the der encoding of the original and raw / restored public key assert_eq!( key.private_key_to_der().unwrap(), from_raw.private_key_to_der().unwrap() ); } #[cfg(ossl111)] #[test] fn test_raw_public_key_bytes() { 
test_raw_public_key(PKey::generate_x25519, Id::X25519); test_raw_public_key(PKey::generate_ed25519, Id::ED25519); test_raw_public_key(PKey::generate_x448, Id::X448); test_raw_public_key(PKey::generate_ed448, Id::ED448); } #[cfg(ossl111)] #[test] fn test_raw_private_key_bytes() { test_raw_private_key(PKey::generate_x25519, Id::X25519); test_raw_private_key(PKey::generate_ed25519, Id::ED25519); test_raw_private_key(PKey::generate_x448, Id::X448); test_raw_private_key(PKey::generate_ed448, Id::ED448); } #[cfg(ossl111)] #[test] fn test_raw_hmac() { let mut test_bytes = vec![0u8; 32]; rand_bytes(&mut test_bytes).unwrap(); let hmac_key = PKey::hmac(&test_bytes).unwrap(); assert!(hmac_key.raw_public_key().is_err()); let key_bytes = hmac_key.raw_private_key().unwrap(); assert_eq!(key_bytes, test_bytes); } #[cfg(ossl111)] #[test] fn test_raw_key_fail() { // Getting a raw byte representation will not work with Nist curves let group = crate::ec::EcGroup::from_curve_name(Nid::SECP256K1).unwrap(); let ec_key = EcKey::generate(&group).unwrap(); let pkey = PKey::from_ec_key(ec_key).unwrap(); assert!(pkey.raw_private_key().is_err()); assert!(pkey.raw_public_key().is_err()); } #[cfg(ossl300)] #[test] fn test_ec_gen() { let key = PKey::ec_gen("prime256v1").unwrap(); assert!(key.ec_key().is_ok()); } } vendor/openssl/src/bn.rs0000664000175000017500000013766414172417313016112 0ustar mwhudsonmwhudson//! BigNum implementation //! //! Large numbers are important for a cryptographic library. OpenSSL implementation //! of BigNum uses dynamically assigned memory to store an array of bit chunks. This //! allows numbers of any size to be compared and mathematical functions performed. //! //! OpenSSL wiki describes the [`BIGNUM`] data structure. //! //! # Examples //! //! ``` //! use openssl::bn::BigNum; //! use openssl::error::ErrorStack; //! //! fn main() -> Result<(), ErrorStack> { //! let a = BigNum::new()?; // a = 0 //! let b = BigNum::from_dec_str("1234567890123456789012345")?; //! let c = &a * &b; //! assert_eq!(a, c); //! Ok(()) //! } //! ``` //! //! [`BIGNUM`]: https://wiki.openssl.org/index.php/Manual:Bn_internal(3) use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::cmp::Ordering; use std::ffi::CString; use std::ops::{Add, Deref, Div, Mul, Neg, Rem, Shl, Shr, Sub}; use std::{fmt, ptr}; use crate::asn1::Asn1Integer; use crate::error::ErrorStack; use crate::string::OpensslString; use crate::{cvt, cvt_n, cvt_p}; cfg_if! { if #[cfg(ossl110)] { use ffi::{ BN_get_rfc2409_prime_1024, BN_get_rfc2409_prime_768, BN_get_rfc3526_prime_1536, BN_get_rfc3526_prime_2048, BN_get_rfc3526_prime_3072, BN_get_rfc3526_prime_4096, BN_get_rfc3526_prime_6144, BN_get_rfc3526_prime_8192, BN_is_negative, }; } else { use ffi::{ get_rfc2409_prime_1024 as BN_get_rfc2409_prime_1024, get_rfc2409_prime_768 as BN_get_rfc2409_prime_768, get_rfc3526_prime_1536 as BN_get_rfc3526_prime_1536, get_rfc3526_prime_2048 as BN_get_rfc3526_prime_2048, get_rfc3526_prime_3072 as BN_get_rfc3526_prime_3072, get_rfc3526_prime_4096 as BN_get_rfc3526_prime_4096, get_rfc3526_prime_6144 as BN_get_rfc3526_prime_6144, get_rfc3526_prime_8192 as BN_get_rfc3526_prime_8192, }; #[allow(bad_style)] unsafe fn BN_is_negative(bn: *const ffi::BIGNUM) -> c_int { (*bn).neg } } } /// Options for the most significant bits of a randomly generated `BigNum`. pub struct MsbOption(c_int); impl MsbOption { /// The most significant bit of the number may be 0. 
pub const MAYBE_ZERO: MsbOption = MsbOption(-1); /// The most significant bit of the number must be 1. pub const ONE: MsbOption = MsbOption(0); /// The most significant two bits of the number must be 1. /// /// The number of bits in the product of two such numbers will always be exactly twice the /// number of bits in the original numbers. pub const TWO_ONES: MsbOption = MsbOption(1); } foreign_type_and_impl_send_sync! { type CType = ffi::BN_CTX; fn drop = ffi::BN_CTX_free; /// Temporary storage for BigNums on the secure heap /// /// BigNum values are stored dynamically and therefore can be expensive /// to allocate. BigNumContext and the OpenSSL [`BN_CTX`] structure are used /// internally when passing BigNum values between subroutines. /// /// [`BN_CTX`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_CTX_new.html pub struct BigNumContext; /// Reference to [`BigNumContext`] /// /// [`BigNumContext`]: struct.BigNumContext.html pub struct BigNumContextRef; } impl BigNumContext { /// Returns a new `BigNumContext`. /// /// See OpenSSL documentation at [`BN_CTX_new`]. /// /// [`BN_CTX_new`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_CTX_new.html pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::BN_CTX_new()).map(BigNumContext) } } /// Returns a new secure `BigNumContext`. /// /// See OpenSSL documentation at [`BN_CTX_secure_new`]. /// /// [`BN_CTX_secure_new`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_CTX_secure_new.html #[cfg(ossl110)] pub fn new_secure() -> Result { unsafe { ffi::init(); cvt_p(ffi::BN_CTX_secure_new()).map(BigNumContext) } } } foreign_type_and_impl_send_sync! { type CType = ffi::BIGNUM; fn drop = ffi::BN_free; /// Dynamically sized large number implementation /// /// Perform large number mathematics. Create a new BigNum /// with [`new`]. Perform standard mathematics on large numbers using /// methods from [`Dref`] /// /// OpenSSL documentation at [`BN_new`]. /// /// [`new`]: struct.BigNum.html#method.new /// [`Dref`]: struct.BigNum.html#deref-methods /// [`BN_new`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_new.html /// /// # Examples /// ``` /// use openssl::bn::BigNum; /// # use openssl::error::ErrorStack; /// # fn bignums() -> Result< (), ErrorStack > { /// let little_big = BigNum::from_u32(std::u32::MAX)?; /// assert_eq!(*&little_big.num_bytes(), 4); /// # Ok(()) /// # } /// # fn main () { bignums(); } /// ``` pub struct BigNum; /// Reference to a [`BigNum`] /// /// [`BigNum`]: struct.BigNum.html pub struct BigNumRef; } impl BigNumRef { /// Erases the memory used by this `BigNum`, resetting its value to 0. /// /// This can be used to destroy sensitive data such as keys when they are no longer needed. /// /// OpenSSL documentation at [`BN_clear`] /// /// [`BN_clear`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_clear.html pub fn clear(&mut self) { unsafe { ffi::BN_clear(self.as_ptr()) } } /// Adds a `u32` to `self`. /// /// OpenSSL documentation at [`BN_add_word`] /// /// [`BN_add_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_add_word.html pub fn add_word(&mut self, w: u32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_add_word(self.as_ptr(), w as ffi::BN_ULONG)).map(|_| ()) } } /// Subtracts a `u32` from `self`. /// /// OpenSSL documentation at [`BN_sub_word`] /// /// [`BN_sub_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_sub_word.html pub fn sub_word(&mut self, w: u32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_sub_word(self.as_ptr(), w as ffi::BN_ULONG)).map(|_| ()) } } /// Multiplies a `u32` by `self`. 
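    /// A tiny illustration of the word-arithmetic helpers (the values are arbitrary):
    ///
    /// ```
    /// use openssl::bn::BigNum;
    ///
    /// // 6 * 7 == 42, computed in place on the BigNum.
    /// let mut n = BigNum::from_u32(6).unwrap();
    /// n.mul_word(7).unwrap();
    /// assert_eq!(n, BigNum::from_u32(42).unwrap());
    /// ```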
/// /// OpenSSL documentation at [`BN_mul_word`] /// /// [`BN_mul_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mul_word.html pub fn mul_word(&mut self, w: u32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mul_word(self.as_ptr(), w as ffi::BN_ULONG)).map(|_| ()) } } /// Divides `self` by a `u32`, returning the remainder. /// /// OpenSSL documentation at [`BN_div_word`] /// /// [`BN_div_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_div_word.html #[allow(clippy::useless_conversion)] pub fn div_word(&mut self, w: u32) -> Result { unsafe { let r = ffi::BN_div_word(self.as_ptr(), w.into()); if r == ffi::BN_ULONG::max_value() { Err(ErrorStack::get()) } else { Ok(r.into()) } } } /// Returns the result of `self` modulo `w`. /// /// OpenSSL documentation at [`BN_mod_word`] /// /// [`BN_mod_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_word.html #[allow(clippy::useless_conversion)] pub fn mod_word(&self, w: u32) -> Result { unsafe { let r = ffi::BN_mod_word(self.as_ptr(), w.into()); if r == ffi::BN_ULONG::max_value() { Err(ErrorStack::get()) } else { Ok(r.into()) } } } /// Places a cryptographically-secure pseudo-random nonnegative /// number less than `self` in `rnd`. /// /// OpenSSL documentation at [`BN_rand_range`] /// /// [`BN_rand_range`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_rand_range.html pub fn rand_range(&self, rnd: &mut BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_rand_range(rnd.as_ptr(), self.as_ptr())).map(|_| ()) } } /// The cryptographically weak counterpart to `rand_in_range`. /// /// OpenSSL documentation at [`BN_pseudo_rand_range`] /// /// [`BN_pseudo_rand_range`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_pseudo_rand_range.html pub fn pseudo_rand_range(&self, rnd: &mut BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_pseudo_rand_range(rnd.as_ptr(), self.as_ptr())).map(|_| ()) } } /// Sets bit `n`. Equivalent to `self |= (1 << n)`. /// /// When setting a bit outside of `self`, it is expanded. /// /// OpenSSL documentation at [`BN_set_bit`] /// /// [`BN_set_bit`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_set_bit.html #[allow(clippy::useless_conversion)] pub fn set_bit(&mut self, n: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_set_bit(self.as_ptr(), n.into())).map(|_| ()) } } /// Clears bit `n`, setting it to 0. Equivalent to `self &= ~(1 << n)`. /// /// When clearing a bit outside of `self`, an error is returned. /// /// OpenSSL documentation at [`BN_clear_bit`] /// /// [`BN_clear_bit`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_clear_bit.html #[allow(clippy::useless_conversion)] pub fn clear_bit(&mut self, n: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_clear_bit(self.as_ptr(), n.into())).map(|_| ()) } } /// Returns `true` if the `n`th bit of `self` is set to 1, `false` otherwise. /// /// OpenSSL documentation at [`BN_is_bit_set`] /// /// [`BN_is_bit_set`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_is_bit_set.html #[allow(clippy::useless_conversion)] pub fn is_bit_set(&self, n: i32) -> bool { unsafe { ffi::BN_is_bit_set(self.as_ptr(), n.into()) == 1 } } /// Truncates `self` to the lowest `n` bits. /// /// An error occurs if `self` is already shorter than `n` bits. 
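    /// A short illustration of the bit helpers, including `mask_bits` (values are arbitrary):
    ///
    /// ```
    /// use openssl::bn::BigNum;
    ///
    /// let mut n = BigNum::from_u32(0b1010_0000).unwrap();
    /// assert!(n.is_bit_set(7));
    /// // Keep only the six lowest bits: 0b1010_0000 becomes 0b0010_0000.
    /// n.mask_bits(6).unwrap();
    /// assert_eq!(n, BigNum::from_u32(0b0010_0000).unwrap());
    /// ```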
/// /// OpenSSL documentation at [`BN_mask_bits`] /// /// [`BN_mask_bits`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mask_bits.html #[allow(clippy::useless_conversion)] pub fn mask_bits(&mut self, n: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mask_bits(self.as_ptr(), n.into())).map(|_| ()) } } /// Places `a << 1` in `self`. Equivalent to `self * 2`. /// /// OpenSSL documentation at [`BN_lshift1`] /// /// [`BN_lshift1`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_lshift1.html pub fn lshift1(&mut self, a: &BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_lshift1(self.as_ptr(), a.as_ptr())).map(|_| ()) } } /// Places `a >> 1` in `self`. Equivalent to `self / 2`. /// /// OpenSSL documentation at [`BN_rshift1`] /// /// [`BN_rshift1`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_rshift1.html pub fn rshift1(&mut self, a: &BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_rshift1(self.as_ptr(), a.as_ptr())).map(|_| ()) } } /// Places `a + b` in `self`. [`core::ops::Add`] is also implemented for `BigNumRef`. /// /// OpenSSL documentation at [`BN_add`] /// /// [`core::ops::Add`]: struct.BigNumRef.html#method.add /// [`BN_add`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_add.html pub fn checked_add(&mut self, a: &BigNumRef, b: &BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_add(self.as_ptr(), a.as_ptr(), b.as_ptr())).map(|_| ()) } } /// Places `a - b` in `self`. [`core::ops::Sub`] is also implemented for `BigNumRef`. /// /// OpenSSL documentation at [`BN_sub`] /// /// [`core::ops::Sub`]: struct.BigNumRef.html#method.sub /// [`BN_sub`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_sub.html pub fn checked_sub(&mut self, a: &BigNumRef, b: &BigNumRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_sub(self.as_ptr(), a.as_ptr(), b.as_ptr())).map(|_| ()) } } /// Places `a << n` in `self`. Equivalent to `a * 2 ^ n`. /// /// OpenSSL documentation at [`BN_lshift`] /// /// [`BN_lshift`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_lshift.html #[allow(clippy::useless_conversion)] pub fn lshift(&mut self, a: &BigNumRef, n: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_lshift(self.as_ptr(), a.as_ptr(), n.into())).map(|_| ()) } } /// Places `a >> n` in `self`. Equivalent to `a / 2 ^ n`. /// /// OpenSSL documentation at [`BN_rshift`] /// /// [`BN_rshift`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_rshift.html #[allow(clippy::useless_conversion)] pub fn rshift(&mut self, a: &BigNumRef, n: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_rshift(self.as_ptr(), a.as_ptr(), n.into())).map(|_| ()) } } /// Creates a new BigNum with the same value. /// /// OpenSSL documentation at [`BN_dup`] /// /// [`BN_dup`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_dup.html pub fn to_owned(&self) -> Result { unsafe { cvt_p(ffi::BN_dup(self.as_ptr())).map(|b| BigNum::from_ptr(b)) } } /// Sets the sign of `self`. Pass true to set `self` to a negative. False sets /// `self` positive. pub fn set_negative(&mut self, negative: bool) { unsafe { ffi::BN_set_negative(self.as_ptr(), negative as c_int) } } /// Compare the absolute values of `self` and `oth`. 
/// /// OpenSSL documentation at [`BN_ucmp`] /// /// [`BN_ucmp`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_ucmp.html /// /// # Examples /// /// ``` /// # use openssl::bn::BigNum; /// # use std::cmp::Ordering; /// let s = -BigNum::from_u32(8).unwrap(); /// let o = BigNum::from_u32(8).unwrap(); /// /// assert_eq!(s.ucmp(&o), Ordering::Equal); /// ``` pub fn ucmp(&self, oth: &BigNumRef) -> Ordering { unsafe { ffi::BN_ucmp(self.as_ptr(), oth.as_ptr()).cmp(&0) } } /// Returns `true` if `self` is negative. pub fn is_negative(&self) -> bool { unsafe { BN_is_negative(self.as_ptr()) == 1 } } /// Returns the number of significant bits in `self`. /// /// OpenSSL documentation at [`BN_num_bits`] /// /// [`BN_num_bits`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_num_bits.html pub fn num_bits(&self) -> i32 { unsafe { ffi::BN_num_bits(self.as_ptr()) as i32 } } /// Returns the size of `self` in bytes. Implemented natively. pub fn num_bytes(&self) -> i32 { (self.num_bits() + 7) / 8 } /// Generates a cryptographically strong pseudo-random `BigNum`, placing it in `self`. /// /// # Parameters /// /// * `bits`: Length of the number in bits. /// * `msb`: The desired properties of the most significant bit. See [`constants`]. /// * `odd`: If `true`, the generated number will be odd. /// /// # Examples /// /// ``` /// use openssl::bn::{BigNum, MsbOption}; /// use openssl::error::ErrorStack; /// /// fn generate_random() -> Result< BigNum, ErrorStack > { /// let mut big = BigNum::new()?; /// /// // Generates a 128-bit odd random number /// big.rand(128, MsbOption::MAYBE_ZERO, true); /// Ok((big)) /// } /// ``` /// /// OpenSSL documentation at [`BN_rand`] /// /// [`constants`]: index.html#constants /// [`BN_rand`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_rand.html #[allow(clippy::useless_conversion)] pub fn rand(&mut self, bits: i32, msb: MsbOption, odd: bool) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_rand( self.as_ptr(), bits.into(), msb.0, odd as c_int, )) .map(|_| ()) } } /// The cryptographically weak counterpart to `rand`. Not suitable for key generation. /// /// OpenSSL documentation at [`BN_psuedo_rand`] /// /// [`BN_psuedo_rand`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_pseudo_rand.html #[allow(clippy::useless_conversion)] pub fn pseudo_rand(&mut self, bits: i32, msb: MsbOption, odd: bool) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_pseudo_rand( self.as_ptr(), bits.into(), msb.0, odd as c_int, )) .map(|_| ()) } } /// Generates a prime number, placing it in `self`. /// /// # Parameters /// /// * `bits`: The length of the prime in bits (lower bound). /// * `safe`: If true, returns a "safe" prime `p` so that `(p-1)/2` is also prime. /// * `add`/`rem`: If `add` is set to `Some(add)`, `p % add == rem` will hold, where `p` is the /// generated prime and `rem` is `1` if not specified (`None`). 
/// /// # Examples /// /// ``` /// use openssl::bn::BigNum; /// use openssl::error::ErrorStack; /// /// fn generate_weak_prime() -> Result< BigNum, ErrorStack > { /// let mut big = BigNum::new()?; /// /// // Generates a 128-bit simple prime number /// big.generate_prime(128, false, None, None); /// Ok((big)) /// } /// ``` /// /// OpenSSL documentation at [`BN_generate_prime_ex`] /// /// [`BN_generate_prime_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_generate_prime_ex.html pub fn generate_prime( &mut self, bits: i32, safe: bool, add: Option<&BigNumRef>, rem: Option<&BigNumRef>, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_generate_prime_ex( self.as_ptr(), bits as c_int, safe as c_int, add.map(|n| n.as_ptr()).unwrap_or(ptr::null_mut()), rem.map(|n| n.as_ptr()).unwrap_or(ptr::null_mut()), ptr::null_mut(), )) .map(|_| ()) } } /// Places the result of `a * b` in `self`. /// [`core::ops::Mul`] is also implemented for `BigNumRef`. /// /// OpenSSL documentation at [`BN_mul`] /// /// [`core::ops::Mul`]: struct.BigNumRef.html#method.mul /// [`BN_mul`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mul.html pub fn checked_mul( &mut self, a: &BigNumRef, b: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mul( self.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a / b` in `self`. The remainder is discarded. /// [`core::ops::Div`] is also implemented for `BigNumRef`. /// /// OpenSSL documentation at [`BN_div`] /// /// [`core::ops::Div`]: struct.BigNumRef.html#method.div /// [`BN_div`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_div.html pub fn checked_div( &mut self, a: &BigNumRef, b: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_div( self.as_ptr(), ptr::null_mut(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a % b` in `self`. /// /// OpenSSL documentation at [`BN_div`] /// /// [`BN_div`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_div.html pub fn checked_rem( &mut self, a: &BigNumRef, b: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_div( ptr::null_mut(), self.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a / b` in `self` and `a % b` in `rem`. /// /// OpenSSL documentation at [`BN_div`] /// /// [`BN_div`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_div.html pub fn div_rem( &mut self, rem: &mut BigNumRef, a: &BigNumRef, b: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_div( self.as_ptr(), rem.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a²` in `self`. /// /// OpenSSL documentation at [`BN_sqr`] /// /// [`BN_sqr`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_sqr.html pub fn sqr(&mut self, a: &BigNumRef, ctx: &mut BigNumContextRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_sqr(self.as_ptr(), a.as_ptr(), ctx.as_ptr())).map(|_| ()) } } /// Places the result of `a mod m` in `self`. As opposed to `div_rem` /// the result is non-negative. 
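///
/// A brief sketch showing the non-negative result for a negative dividend:
///
/// ```
/// use openssl::bn::{BigNum, BigNumContext};
///
/// let a = -BigNum::from_u32(7).unwrap();
/// let m = BigNum::from_u32(3).unwrap();
/// let mut ctx = BigNumContext::new().unwrap();
///
/// let mut r = BigNum::new().unwrap();
/// r.nnmod(&a, &m, &mut ctx).unwrap();
/// assert_eq!(r, BigNum::from_u32(2).unwrap()); // -7 mod 3 == 2, not -1
/// ```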
/// /// OpenSSL documentation at [`BN_nnmod`] /// /// [`BN_nnmod`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_nnmod.html pub fn nnmod( &mut self, a: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_nnmod( self.as_ptr(), a.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `(a + b) mod m` in `self`. /// /// OpenSSL documentation at [`BN_mod_add`] /// /// [`BN_mod_add`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_add.html pub fn mod_add( &mut self, a: &BigNumRef, b: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mod_add( self.as_ptr(), a.as_ptr(), b.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `(a - b) mod m` in `self`. /// /// OpenSSL documentation at [`BN_mod_sub`] /// /// [`BN_mod_sub`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_sub.html pub fn mod_sub( &mut self, a: &BigNumRef, b: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mod_sub( self.as_ptr(), a.as_ptr(), b.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `(a * b) mod m` in `self`. /// /// OpenSSL documentation at [`BN_mod_mul`] /// /// [`BN_mod_mul`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_mul.html pub fn mod_mul( &mut self, a: &BigNumRef, b: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mod_mul( self.as_ptr(), a.as_ptr(), b.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a² mod m` in `self`. /// /// OpenSSL documentation at [`BN_mod_sqr`] /// /// [`BN_mod_sqr`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_sqr.html pub fn mod_sqr( &mut self, a: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mod_sqr( self.as_ptr(), a.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a^p` in `self`. /// /// OpenSSL documentation at [`BN_exp`] /// /// [`BN_exp`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_exp.html pub fn exp( &mut self, a: &BigNumRef, p: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_exp( self.as_ptr(), a.as_ptr(), p.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the result of `a^p mod m` in `self`. /// /// OpenSSL documentation at [`BN_mod_exp`] /// /// [`BN_mod_exp`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_mod_exp.html pub fn mod_exp( &mut self, a: &BigNumRef, p: &BigNumRef, m: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_mod_exp( self.as_ptr(), a.as_ptr(), p.as_ptr(), m.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the inverse of `a` modulo `n` in `self`. pub fn mod_inverse( &mut self, a: &BigNumRef, n: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt_p(ffi::BN_mod_inverse( self.as_ptr(), a.as_ptr(), n.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Places the greatest common denominator of `a` and `b` in `self`. /// /// OpenSSL documentation at [`BN_gcd`] /// /// [`BN_gcd`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_gcd.html pub fn gcd( &mut self, a: &BigNumRef, b: &BigNumRef, ctx: &mut BigNumContextRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::BN_gcd( self.as_ptr(), a.as_ptr(), b.as_ptr(), ctx.as_ptr(), )) .map(|_| ()) } } /// Checks whether `self` is prime. 
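///
/// For example, a small sketch with a known prime (the number of rounds here
/// is far more than a toy value needs):
///
/// ```
/// use openssl::bn::{BigNum, BigNumContext};
///
/// let p = BigNum::from_u32(7919).unwrap(); // 7919 is prime
/// let mut ctx = BigNumContext::new().unwrap();
/// assert!(p.is_prime(20, &mut ctx).unwrap());
/// ```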
/// /// Performs a Miller-Rabin probabilistic primality test with `checks` iterations. /// /// OpenSSL documentation at [`BN_is_prime_ex`] /// /// [`BN_is_prime_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_is_prime_ex.html /// /// # Return Value /// /// Returns `true` if `self` is prime with an error probability of less than `0.25 ^ checks`. #[allow(clippy::useless_conversion)] pub fn is_prime(&self, checks: i32, ctx: &mut BigNumContextRef) -> Result { unsafe { cvt_n(ffi::BN_is_prime_ex( self.as_ptr(), checks.into(), ctx.as_ptr(), ptr::null_mut(), )) .map(|r| r != 0) } } /// Checks whether `self` is prime with optional trial division. /// /// If `do_trial_division` is `true`, first performs trial division by a number of small primes. /// Then, like `is_prime`, performs a Miller-Rabin probabilistic primality test with `checks` /// iterations. /// /// OpenSSL documentation at [`BN_is_prime_fasttest_ex`] /// /// [`BN_is_prime_fasttest_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_is_prime_fasttest_ex.html /// /// # Return Value /// /// Returns `true` if `self` is prime with an error probability of less than `0.25 ^ checks`. #[allow(clippy::useless_conversion)] pub fn is_prime_fasttest( &self, checks: i32, ctx: &mut BigNumContextRef, do_trial_division: bool, ) -> Result { unsafe { cvt_n(ffi::BN_is_prime_fasttest_ex( self.as_ptr(), checks.into(), ctx.as_ptr(), do_trial_division as c_int, ptr::null_mut(), )) .map(|r| r != 0) } } /// Returns a big-endian byte vector representation of the absolute value of `self`. /// /// `self` can be recreated by using `from_slice`. /// /// ``` /// # use openssl::bn::BigNum; /// let s = -BigNum::from_u32(4543).unwrap(); /// let r = BigNum::from_u32(4543).unwrap(); /// /// let s_vec = s.to_vec(); /// assert_eq!(BigNum::from_slice(&s_vec).unwrap(), r); /// ``` pub fn to_vec(&self) -> Vec { let size = self.num_bytes() as usize; let mut v = Vec::with_capacity(size); unsafe { ffi::BN_bn2bin(self.as_ptr(), v.as_mut_ptr()); v.set_len(size); } v } /// Returns a big-endian byte vector representation of the absolute value of `self` padded /// to `pad_to` bytes. /// /// If `pad_to` is less than `self.num_bytes()` then an error is returned. /// /// `self` can be recreated by using `from_slice`. /// /// ``` /// # use openssl::bn::BigNum; /// let bn = BigNum::from_u32(0x4543).unwrap(); /// /// let bn_vec = bn.to_vec_padded(4).unwrap(); /// assert_eq!(&bn_vec, &[0, 0, 0x45, 0x43]); /// /// let r = bn.to_vec_padded(1); /// assert!(r.is_err()); /// /// let bn = -BigNum::from_u32(0x4543).unwrap(); /// let bn_vec = bn.to_vec_padded(4).unwrap(); /// assert_eq!(&bn_vec, &[0, 0, 0x45, 0x43]); /// ``` #[cfg(ossl110)] pub fn to_vec_padded(&self, pad_to: i32) -> Result, ErrorStack> { let mut v = Vec::with_capacity(pad_to as usize); unsafe { cvt(ffi::BN_bn2binpad(self.as_ptr(), v.as_mut_ptr(), pad_to))?; v.set_len(pad_to as usize); } Ok(v) } /// Returns a decimal string representation of `self`. /// /// ``` /// # use openssl::bn::BigNum; /// let s = -BigNum::from_u32(12345).unwrap(); /// /// assert_eq!(&**s.to_dec_str().unwrap(), "-12345"); /// ``` pub fn to_dec_str(&self) -> Result { unsafe { let buf = cvt_p(ffi::BN_bn2dec(self.as_ptr()))?; Ok(OpensslString::from_ptr(buf)) } } /// Returns a hexadecimal string representation of `self`. 
/// /// ``` /// # use openssl::bn::BigNum; /// let s = -BigNum::from_u32(0x99ff).unwrap(); /// /// assert_eq!(&**s.to_hex_str().unwrap(), "-99FF"); /// ``` pub fn to_hex_str(&self) -> Result { unsafe { let buf = cvt_p(ffi::BN_bn2hex(self.as_ptr()))?; Ok(OpensslString::from_ptr(buf)) } } /// Returns an `Asn1Integer` containing the value of `self`. pub fn to_asn1_integer(&self) -> Result { unsafe { cvt_p(ffi::BN_to_ASN1_INTEGER(self.as_ptr(), ptr::null_mut())) .map(|p| Asn1Integer::from_ptr(p)) } } /// Force constant time computation on this value. #[cfg(ossl110)] pub fn set_const_time(&mut self) { unsafe { ffi::BN_set_flags(self.as_ptr(), ffi::BN_FLG_CONSTTIME) } } /// Returns true if `self` is in const time mode. #[cfg(ossl110)] pub fn is_const_time(&self) -> bool { unsafe { let ret = ffi::BN_get_flags(self.as_ptr(), ffi::BN_FLG_CONSTTIME); ret == ffi::BN_FLG_CONSTTIME } } /// Returns true if `self` was created with [`BigNum::new_secure`]. #[cfg(ossl110)] pub fn is_secure(&self) -> bool { unsafe { let ret = ffi::BN_get_flags(self.as_ptr(), ffi::BN_FLG_SECURE); ret == ffi::BN_FLG_SECURE } } } impl BigNum { /// Creates a new `BigNum` with the value 0. pub fn new() -> Result { unsafe { ffi::init(); let v = cvt_p(ffi::BN_new())?; Ok(BigNum::from_ptr(v)) } } /// Returns a new secure `BigNum`. /// /// See OpenSSL documentation at [`BN_secure_new`]. /// /// [`BN_secure_new`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_secure_new.html #[cfg(ossl110)] pub fn new_secure() -> Result { unsafe { ffi::init(); let v = cvt_p(ffi::BN_secure_new())?; Ok(BigNum::from_ptr(v)) } } /// Creates a new `BigNum` with the given value. /// /// OpenSSL documentation at [`BN_set_word`] /// /// [`BN_set_word`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_set_word.html pub fn from_u32(n: u32) -> Result { BigNum::new().and_then(|v| unsafe { cvt(ffi::BN_set_word(v.as_ptr(), n as ffi::BN_ULONG)).map(|_| v) }) } /// Creates a `BigNum` from a decimal string. /// /// OpenSSL documentation at [`BN_dec2bn`] /// /// [`BN_dec2bn`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_dec2bn.html pub fn from_dec_str(s: &str) -> Result { unsafe { ffi::init(); let c_str = CString::new(s.as_bytes()).unwrap(); let mut bn = ptr::null_mut(); cvt(ffi::BN_dec2bn(&mut bn, c_str.as_ptr() as *const _))?; Ok(BigNum::from_ptr(bn)) } } /// Creates a `BigNum` from a hexadecimal string. /// /// OpenSSL documentation at [`BN_hex2bn`] /// /// [`BN_hex2bn`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_hex2bn.html pub fn from_hex_str(s: &str) -> Result { unsafe { ffi::init(); let c_str = CString::new(s.as_bytes()).unwrap(); let mut bn = ptr::null_mut(); cvt(ffi::BN_hex2bn(&mut bn, c_str.as_ptr() as *const _))?; Ok(BigNum::from_ptr(bn)) } } /// Returns a constant used in IKE as defined in [`RFC 2409`]. This prime number is in /// the order of magnitude of `2 ^ 768`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled Oakley group id 1. /// /// OpenSSL documentation at [`BN_get_rfc2409_prime_768`] /// /// [`RFC 2409`]: https://tools.ietf.org/html/rfc2409#page-21 /// [`BN_get_rfc2409_prime_768`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc2409_prime_768.html pub fn get_rfc2409_prime_768() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc2409_prime_768(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 2409`]. This prime number is in /// the order of magnitude of `2 ^ 1024`. 
This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled Oakly group 2. /// /// OpenSSL documentation at [`BN_get_rfc2409_prime_1024`] /// /// [`RFC 2409`]: https://tools.ietf.org/html/rfc2409#page-21 /// [`BN_get_rfc2409_prime_1024`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc2409_prime_1024.html pub fn get_rfc2409_prime_1024() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc2409_prime_1024(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 1536`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 5. /// /// OpenSSL documentation at [`BN_get_rfc3526_prime_1536`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-3 /// [`BN_get_rfc3526_prime_1536`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_1536.html pub fn get_rfc3526_prime_1536() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_1536(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 2048`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 14. /// /// OpenSSL documentation at [`BN_get_rfc3526_prime_2048`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-3 /// [`BN_get_rfc3526_prime_2048`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_2048.html pub fn get_rfc3526_prime_2048() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_2048(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 3072`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 15. /// /// OpenSSL documentation at [`BN_get_rfc3526_prime_3072`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-4 /// [`BN_get_rfc3526_prime_3072`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_3072.html pub fn get_rfc3526_prime_3072() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_3072(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 4096`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 16. /// /// OpenSSL documentation at [`BN_get_rfc3526_prime_4096`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-4 /// [`BN_get_rfc3526_prime_4096`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_4096.html pub fn get_rfc3526_prime_4096() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_4096(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 6144`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 17. 
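///
/// A quick sketch checking the size of the returned constant:
///
/// ```
/// use openssl::bn::BigNum;
///
/// let p = BigNum::get_rfc3526_prime_6144().unwrap();
/// assert_eq!(p.num_bits(), 6144);
/// ```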
/// /// OpenSSL documentation at [`BN_get_rfc3526_prime_6144`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-6 /// [`BN_get_rfc3526_prime_6144`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_6144.html pub fn get_rfc3526_prime_6144() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_6144(ptr::null_mut())).map(BigNum) } } /// Returns a constant used in IKE as defined in [`RFC 3526`]. The prime is in the order /// of magnitude of `2 ^ 8192`. This number is used during calculated key /// exchanges such as Diffie-Hellman. This number is labeled MODP group 18. /// /// OpenSSL documentation at [`BN_get_rfc3526_prime_8192`] /// /// [`RFC 3526`]: https://tools.ietf.org/html/rfc3526#page-6 /// [`BN_get_rfc3526_prime_8192`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_get_rfc3526_prime_8192.html pub fn get_rfc3526_prime_8192() -> Result { unsafe { ffi::init(); cvt_p(BN_get_rfc3526_prime_8192(ptr::null_mut())).map(BigNum) } } /// Creates a new `BigNum` from an unsigned, big-endian encoded number of arbitrary length. /// /// OpenSSL documentation at [`BN_bin2bn`] /// /// [`BN_bin2bn`]: https://www.openssl.org/docs/man1.1.0/crypto/BN_bin2bn.html /// /// ``` /// # use openssl::bn::BigNum; /// let bignum = BigNum::from_slice(&[0x12, 0x00, 0x34]).unwrap(); /// /// assert_eq!(bignum, BigNum::from_u32(0x120034).unwrap()); /// ``` pub fn from_slice(n: &[u8]) -> Result { unsafe { ffi::init(); assert!(n.len() <= c_int::max_value() as usize); cvt_p(ffi::BN_bin2bn( n.as_ptr(), n.len() as c_int, ptr::null_mut(), )) .map(|p| BigNum::from_ptr(p)) } } } impl fmt::Debug for BigNumRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self.to_dec_str() { Ok(s) => f.write_str(&s), Err(e) => Err(e.into()), } } } impl fmt::Debug for BigNum { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self.to_dec_str() { Ok(s) => f.write_str(&s), Err(e) => Err(e.into()), } } } impl fmt::Display for BigNumRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self.to_dec_str() { Ok(s) => f.write_str(&s), Err(e) => Err(e.into()), } } } impl fmt::Display for BigNum { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self.to_dec_str() { Ok(s) => f.write_str(&s), Err(e) => Err(e.into()), } } } impl PartialEq for BigNumRef { fn eq(&self, oth: &BigNumRef) -> bool { self.cmp(oth) == Ordering::Equal } } impl PartialEq for BigNumRef { fn eq(&self, oth: &BigNum) -> bool { self.eq(oth.deref()) } } impl Eq for BigNumRef {} impl PartialEq for BigNum { fn eq(&self, oth: &BigNum) -> bool { self.deref().eq(oth) } } impl PartialEq for BigNum { fn eq(&self, oth: &BigNumRef) -> bool { self.deref().eq(oth) } } impl Eq for BigNum {} impl PartialOrd for BigNumRef { fn partial_cmp(&self, oth: &BigNumRef) -> Option { Some(self.cmp(oth)) } } impl PartialOrd for BigNumRef { fn partial_cmp(&self, oth: &BigNum) -> Option { Some(self.cmp(oth.deref())) } } impl Ord for BigNumRef { fn cmp(&self, oth: &BigNumRef) -> Ordering { unsafe { ffi::BN_cmp(self.as_ptr(), oth.as_ptr()).cmp(&0) } } } impl PartialOrd for BigNum { fn partial_cmp(&self, oth: &BigNum) -> Option { self.deref().partial_cmp(oth.deref()) } } impl PartialOrd for BigNum { fn partial_cmp(&self, oth: &BigNumRef) -> Option { self.deref().partial_cmp(oth) } } impl Ord for BigNum { fn cmp(&self, oth: &BigNum) -> Ordering { self.deref().cmp(oth.deref()) } } macro_rules! 
delegate { ($t:ident, $m:ident) => { impl<'a, 'b> $t<&'b BigNum> for &'a BigNumRef { type Output = BigNum; fn $m(self, oth: &BigNum) -> BigNum { $t::$m(self, oth.deref()) } } impl<'a, 'b> $t<&'b BigNumRef> for &'a BigNum { type Output = BigNum; fn $m(self, oth: &BigNumRef) -> BigNum { $t::$m(self.deref(), oth) } } impl<'a, 'b> $t<&'b BigNum> for &'a BigNum { type Output = BigNum; fn $m(self, oth: &BigNum) -> BigNum { $t::$m(self.deref(), oth.deref()) } } }; } impl<'a, 'b> Add<&'b BigNumRef> for &'a BigNumRef { type Output = BigNum; fn add(self, oth: &BigNumRef) -> BigNum { let mut r = BigNum::new().unwrap(); r.checked_add(self, oth).unwrap(); r } } delegate!(Add, add); impl<'a, 'b> Sub<&'b BigNumRef> for &'a BigNumRef { type Output = BigNum; fn sub(self, oth: &BigNumRef) -> BigNum { let mut r = BigNum::new().unwrap(); r.checked_sub(self, oth).unwrap(); r } } delegate!(Sub, sub); impl<'a, 'b> Mul<&'b BigNumRef> for &'a BigNumRef { type Output = BigNum; fn mul(self, oth: &BigNumRef) -> BigNum { let mut ctx = BigNumContext::new().unwrap(); let mut r = BigNum::new().unwrap(); r.checked_mul(self, oth, &mut ctx).unwrap(); r } } delegate!(Mul, mul); impl<'a, 'b> Div<&'b BigNumRef> for &'a BigNumRef { type Output = BigNum; fn div(self, oth: &'b BigNumRef) -> BigNum { let mut ctx = BigNumContext::new().unwrap(); let mut r = BigNum::new().unwrap(); r.checked_div(self, oth, &mut ctx).unwrap(); r } } delegate!(Div, div); impl<'a, 'b> Rem<&'b BigNumRef> for &'a BigNumRef { type Output = BigNum; fn rem(self, oth: &'b BigNumRef) -> BigNum { let mut ctx = BigNumContext::new().unwrap(); let mut r = BigNum::new().unwrap(); r.checked_rem(self, oth, &mut ctx).unwrap(); r } } delegate!(Rem, rem); impl<'a> Shl for &'a BigNumRef { type Output = BigNum; fn shl(self, n: i32) -> BigNum { let mut r = BigNum::new().unwrap(); r.lshift(self, n).unwrap(); r } } impl<'a> Shl for &'a BigNum { type Output = BigNum; fn shl(self, n: i32) -> BigNum { self.deref().shl(n) } } impl<'a> Shr for &'a BigNumRef { type Output = BigNum; fn shr(self, n: i32) -> BigNum { let mut r = BigNum::new().unwrap(); r.rshift(self, n).unwrap(); r } } impl<'a> Shr for &'a BigNum { type Output = BigNum; fn shr(self, n: i32) -> BigNum { self.deref().shr(n) } } impl<'a> Neg for &'a BigNumRef { type Output = BigNum; fn neg(self) -> BigNum { self.to_owned().unwrap().neg() } } impl<'a> Neg for &'a BigNum { type Output = BigNum; fn neg(self) -> BigNum { self.deref().neg() } } impl Neg for BigNum { type Output = BigNum; fn neg(mut self) -> BigNum { let negative = self.is_negative(); self.set_negative(!negative); self } } #[cfg(test)] mod tests { use crate::bn::{BigNum, BigNumContext}; #[test] fn test_to_from_slice() { let v0 = BigNum::from_u32(10_203_004).unwrap(); let vec = v0.to_vec(); let v1 = BigNum::from_slice(&vec).unwrap(); assert_eq!(v0, v1); } #[test] fn test_negation() { let a = BigNum::from_u32(909_829_283).unwrap(); assert!(!a.is_negative()); assert!((-a).is_negative()); } #[test] fn test_shift() { let a = BigNum::from_u32(909_829_283).unwrap(); assert_eq!(a, &(&a << 1) >> 1); } #[test] fn test_rand_range() { let range = BigNum::from_u32(909_829_283).unwrap(); let mut result = BigNum::from_dec_str(&range.to_dec_str().unwrap()).unwrap(); range.rand_range(&mut result).unwrap(); assert!(result >= BigNum::from_u32(0).unwrap() && result < range); } #[test] fn test_pseudo_rand_range() { let range = BigNum::from_u32(909_829_283).unwrap(); let mut result = BigNum::from_dec_str(&range.to_dec_str().unwrap()).unwrap(); range.pseudo_rand_range(&mut 
result).unwrap(); assert!(result >= BigNum::from_u32(0).unwrap() && result < range); } #[test] fn test_prime_numbers() { let a = BigNum::from_u32(19_029_017).unwrap(); let mut p = BigNum::new().unwrap(); p.generate_prime(128, true, None, Some(&a)).unwrap(); let mut ctx = BigNumContext::new().unwrap(); assert!(p.is_prime(100, &mut ctx).unwrap()); assert!(p.is_prime_fasttest(100, &mut ctx, true).unwrap()); } #[cfg(ossl110)] #[test] fn test_secure_bn_ctx() { let mut cxt = BigNumContext::new_secure().unwrap(); let a = BigNum::from_u32(8).unwrap(); let b = BigNum::from_u32(3).unwrap(); let mut remainder = BigNum::new().unwrap(); remainder.nnmod(&a, &b, &mut cxt).unwrap(); assert!(remainder.eq(&BigNum::from_u32(2).unwrap())); } #[cfg(ossl110)] #[test] fn test_secure_bn() { let a = BigNum::new().unwrap(); assert!(!a.is_secure()); let b = BigNum::new_secure().unwrap(); assert!(b.is_secure()) } #[cfg(ossl110)] #[test] fn test_const_time_bn() { let a = BigNum::new().unwrap(); assert!(!a.is_const_time()); let mut b = BigNum::new().unwrap(); b.set_const_time(); assert!(b.is_const_time()) } } vendor/openssl/src/util.rs0000664000175000017500000000504514160055207016450 0ustar mwhudsonmwhudsonuse foreign_types::{ForeignType, ForeignTypeRef}; use libc::{c_char, c_int, c_void}; use std::any::Any; use std::panic::{self, AssertUnwindSafe}; use std::slice; use crate::error::ErrorStack; /// Wraps a user-supplied callback and a slot for panics thrown inside the callback (while FFI /// frames are on the stack). /// /// When dropped, checks if the callback has panicked, and resumes unwinding if so. pub struct CallbackState { /// The user callback. Taken out of the `Option` when called. cb: Option, /// If the callback panics, we place the panic object here, to be re-thrown once OpenSSL /// returns. panic: Option>, } impl CallbackState { pub fn new(callback: F) -> Self { CallbackState { cb: Some(callback), panic: None, } } } impl Drop for CallbackState { fn drop(&mut self) { if let Some(panic) = self.panic.take() { panic::resume_unwind(panic); } } } /// Password callback function, passed to private key loading functions. /// /// `cb_state` is expected to be a pointer to a `CallbackState`. 
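///
/// # Examples
///
/// Downstream code rarely calls this directly; it is installed by the
/// PEM-loading helpers. A hedged sketch of the kind of closure that ends up
/// wrapped in a `CallbackState`, assuming the crate's
/// `PKey::private_key_from_pem_callback` entry point and a placeholder key
/// path:
///
/// ```no_run
/// use openssl::pkey::PKey;
///
/// // "encrypted-key.pem" is a placeholder path to a passphrase-protected key.
/// let pem = std::fs::read("encrypted-key.pem").unwrap();
/// let _key = PKey::private_key_from_pem_callback(&pem, |buf| {
///     // Copy the passphrase into OpenSSL's buffer and return its length.
///     let passphrase = b"correct horse battery staple";
///     buf[..passphrase.len()].copy_from_slice(passphrase);
///     Ok(passphrase.len())
/// })
/// .unwrap();
/// ```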
pub unsafe extern "C" fn invoke_passwd_cb( buf: *mut c_char, size: c_int, _rwflag: c_int, cb_state: *mut c_void, ) -> c_int where F: FnOnce(&mut [u8]) -> Result, { let callback = &mut *(cb_state as *mut CallbackState); let result = panic::catch_unwind(AssertUnwindSafe(|| { let pass_slice = slice::from_raw_parts_mut(buf as *mut u8, size as usize); callback.cb.take().unwrap()(pass_slice) })); match result { Ok(Ok(len)) => len as c_int, Ok(Err(_)) => { // FIXME restore error stack 0 } Err(err) => { callback.panic = Some(err); 0 } } } pub trait ForeignTypeExt: ForeignType { unsafe fn from_ptr_opt(ptr: *mut Self::CType) -> Option { if ptr.is_null() { None } else { Some(Self::from_ptr(ptr)) } } } impl ForeignTypeExt for FT {} pub trait ForeignTypeRefExt: ForeignTypeRef { unsafe fn from_const_ptr<'a>(ptr: *const Self::CType) -> &'a Self { Self::from_ptr(ptr as *mut Self::CType) } unsafe fn from_const_ptr_opt<'a>(ptr: *const Self::CType) -> Option<&'a Self> { if ptr.is_null() { None } else { Some(Self::from_const_ptr(ptr as *mut Self::CType)) } } } impl ForeignTypeRefExt for FT {} vendor/openssl/src/pkcs7.rs0000664000175000017500000003632414160055207016526 0ustar mwhudsonmwhudsonuse bitflags::bitflags; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::mem; use std::ptr; use crate::bio::{MemBio, MemBioSlice}; use crate::error::ErrorStack; use crate::pkey::{HasPrivate, PKeyRef}; use crate::stack::{Stack, StackRef}; use crate::symm::Cipher; use crate::x509::store::X509StoreRef; use crate::x509::{X509Ref, X509}; use crate::{cvt, cvt_p}; foreign_type_and_impl_send_sync! { type CType = ffi::PKCS7; fn drop = ffi::PKCS7_free; /// A PKCS#7 structure. /// /// Contains signed and/or encrypted data. pub struct Pkcs7; /// Reference to `Pkcs7` pub struct Pkcs7Ref; } bitflags! { pub struct Pkcs7Flags: c_int { const TEXT = ffi::PKCS7_TEXT; const NOCERTS = ffi::PKCS7_NOCERTS; const NOSIGS = ffi::PKCS7_NOSIGS; const NOCHAIN = ffi::PKCS7_NOCHAIN; const NOINTERN = ffi::PKCS7_NOINTERN; const NOVERIFY = ffi::PKCS7_NOVERIFY; const DETACHED = ffi::PKCS7_DETACHED; const BINARY = ffi::PKCS7_BINARY; const NOATTR = ffi::PKCS7_NOATTR; const NOSMIMECAP = ffi::PKCS7_NOSMIMECAP; const NOOLDMIMETYPE = ffi::PKCS7_NOOLDMIMETYPE; const CRLFEOL = ffi::PKCS7_CRLFEOL; const STREAM = ffi::PKCS7_STREAM; const NOCRL = ffi::PKCS7_NOCRL; const PARTIAL = ffi::PKCS7_PARTIAL; const REUSE_DIGEST = ffi::PKCS7_REUSE_DIGEST; #[cfg(not(any(ossl101, ossl102, libressl)))] const NO_DUAL_CONTENT = ffi::PKCS7_NO_DUAL_CONTENT; } } impl Pkcs7 { from_pem! { /// Deserializes a PEM-encoded PKCS#7 signature /// /// The input should have a header of `-----BEGIN PKCS7-----`. /// /// This corresponds to [`PEM_read_bio_PKCS7`]. /// /// [`PEM_read_bio_PKCS7`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_PKCS7.html from_pem, Pkcs7, ffi::PEM_read_bio_PKCS7 } from_der! { /// Deserializes a DER-encoded PKCS#7 signature /// /// This corresponds to [`d2i_PKCS7`]. /// /// [`d2i_PKCS7`]: https://www.openssl.org/docs/man1.1.0/man3/d2i_PKCS7.html from_der, Pkcs7, ffi::d2i_PKCS7 } /// Parses a message in S/MIME format. /// /// Returns the loaded signature, along with the cleartext message (if /// available). /// /// This corresponds to [`SMIME_read_PKCS7`]. 
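///
/// A hedged sketch of parsing an S/MIME message read from disk (the path is
/// a placeholder; real input would typically come from `to_smime` or a mail
/// agent):
///
/// ```no_run
/// use openssl::pkcs7::Pkcs7;
///
/// let smime = std::fs::read("message.p7m").unwrap();
/// let (_pkcs7, cleartext) = Pkcs7::from_smime(&smime).unwrap();
/// if let Some(body) = cleartext {
///     println!("detached content is {} bytes", body.len());
/// }
/// ```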
/// /// [`SMIME_read_PKCS7`]: https://www.openssl.org/docs/man1.1.0/crypto/SMIME_read_PKCS7.html pub fn from_smime(input: &[u8]) -> Result<(Pkcs7, Option>), ErrorStack> { ffi::init(); let input_bio = MemBioSlice::new(input)?; let mut bcont_bio = ptr::null_mut(); unsafe { let pkcs7 = cvt_p(ffi::SMIME_read_PKCS7(input_bio.as_ptr(), &mut bcont_bio)).map(Pkcs7)?; let out = if !bcont_bio.is_null() { let bcont_bio = MemBio::from_ptr(bcont_bio); Some(bcont_bio.get_buf().to_vec()) } else { None }; Ok((pkcs7, out)) } } /// Creates and returns a PKCS#7 `envelopedData` structure. /// /// `certs` is a list of recipient certificates. `input` is the content to be /// encrypted. `cipher` is the symmetric cipher to use. `flags` is an optional /// set of flags. /// /// This corresponds to [`PKCS7_encrypt`]. /// /// [`PKCS7_encrypt`]: https://www.openssl.org/docs/man1.0.2/crypto/PKCS7_encrypt.html pub fn encrypt( certs: &StackRef, input: &[u8], cipher: Cipher, flags: Pkcs7Flags, ) -> Result { let input_bio = MemBioSlice::new(input)?; unsafe { cvt_p(ffi::PKCS7_encrypt( certs.as_ptr(), input_bio.as_ptr(), cipher.as_ptr(), flags.bits, )) .map(Pkcs7) } } /// Creates and returns a PKCS#7 `signedData` structure. /// /// `signcert` is the certificate to sign with, `pkey` is the corresponding /// private key. `certs` is an optional additional set of certificates to /// include in the PKCS#7 structure (for example any intermediate CAs in the /// chain). /// /// This corresponds to [`PKCS7_sign`]. /// /// [`PKCS7_sign`]: https://www.openssl.org/docs/man1.0.2/crypto/PKCS7_sign.html pub fn sign( signcert: &X509Ref, pkey: &PKeyRef, certs: &StackRef, input: &[u8], flags: Pkcs7Flags, ) -> Result where PT: HasPrivate, { let input_bio = MemBioSlice::new(input)?; unsafe { cvt_p(ffi::PKCS7_sign( signcert.as_ptr(), pkey.as_ptr(), certs.as_ptr(), input_bio.as_ptr(), flags.bits, )) .map(Pkcs7) } } } impl Pkcs7Ref { /// Converts PKCS#7 structure to S/MIME format /// /// This corresponds to [`SMIME_write_PKCS7`]. /// /// [`SMIME_write_PKCS7`]: https://www.openssl.org/docs/man1.1.0/crypto/SMIME_write_PKCS7.html pub fn to_smime(&self, input: &[u8], flags: Pkcs7Flags) -> Result, ErrorStack> { let input_bio = MemBioSlice::new(input)?; let output = MemBio::new()?; unsafe { cvt(ffi::SMIME_write_PKCS7( output.as_ptr(), self.as_ptr(), input_bio.as_ptr(), flags.bits, )) .map(|_| output.get_buf().to_owned()) } } to_pem! { /// Serializes the data into a PEM-encoded PKCS#7 structure. /// /// The output will have a header of `-----BEGIN PKCS7-----`. /// /// This corresponds to [`PEM_write_bio_PKCS7`]. /// /// [`PEM_write_bio_PKCS7`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_write_bio_PKCS7.html to_pem, ffi::PEM_write_bio_PKCS7 } to_der! { /// Serializes the data into a DER-encoded PKCS#7 structure. /// /// This corresponds to [`i2d_PKCS7`]. /// /// [`i2d_PKCS7`]: https://www.openssl.org/docs/man1.1.0/man3/i2d_PKCS7.html to_der, ffi::i2d_PKCS7 } /// Decrypts data using the provided private key. /// /// `pkey` is the recipient's private key, and `cert` is the recipient's /// certificate. /// /// Returns the decrypted message. /// /// This corresponds to [`PKCS7_decrypt`]. 
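///
/// A short sketch of the decrypting side of a round trip; the file names are
/// placeholders and the message is assumed to have been produced with
/// `Pkcs7::encrypt` followed by `to_smime`:
///
/// ```no_run
/// use openssl::pkcs7::{Pkcs7, Pkcs7Flags};
/// use openssl::pkey::PKey;
/// use openssl::x509::X509;
///
/// let cert = X509::from_pem(&std::fs::read("recipient-cert.pem").unwrap()).unwrap();
/// let pkey = PKey::private_key_from_pem(&std::fs::read("recipient-key.pem").unwrap()).unwrap();
///
/// let (pkcs7, _) = Pkcs7::from_smime(&std::fs::read("encrypted.p7m").unwrap()).unwrap();
/// let plaintext = pkcs7.decrypt(&pkey, &cert, Pkcs7Flags::empty()).unwrap();
/// assert!(!plaintext.is_empty());
/// ```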
/// /// [`PKCS7_decrypt`]: https://www.openssl.org/docs/man1.0.2/crypto/PKCS7_decrypt.html pub fn decrypt( &self, pkey: &PKeyRef, cert: &X509Ref, flags: Pkcs7Flags, ) -> Result, ErrorStack> where PT: HasPrivate, { let output = MemBio::new()?; unsafe { cvt(ffi::PKCS7_decrypt( self.as_ptr(), pkey.as_ptr(), cert.as_ptr(), output.as_ptr(), flags.bits, )) .map(|_| output.get_buf().to_owned()) } } /// Verifies the PKCS#7 `signedData` structure contained by `&self`. /// /// `certs` is a set of certificates in which to search for the signer's /// certificate. `store` is a trusted certificate store (used for chain /// verification). `indata` is the signed data if the content is not present /// in `&self`. The content is written to `out` if it is not `None`. /// /// This corresponds to [`PKCS7_verify`]. /// /// [`PKCS7_verify`]: https://www.openssl.org/docs/man1.0.2/crypto/PKCS7_verify.html pub fn verify( &self, certs: &StackRef, store: &X509StoreRef, indata: Option<&[u8]>, out: Option<&mut Vec>, flags: Pkcs7Flags, ) -> Result<(), ErrorStack> { let out_bio = MemBio::new()?; let indata_bio = match indata { Some(data) => Some(MemBioSlice::new(data)?), None => None, }; let indata_bio_ptr = indata_bio.as_ref().map_or(ptr::null_mut(), |p| p.as_ptr()); unsafe { cvt(ffi::PKCS7_verify( self.as_ptr(), certs.as_ptr(), store.as_ptr(), indata_bio_ptr, out_bio.as_ptr(), flags.bits, )) .map(|_| ())? } if let Some(data) = out { data.clear(); data.extend_from_slice(out_bio.get_buf()); } Ok(()) } /// Retrieve the signer's certificates from the PKCS#7 structure without verifying them. /// /// This corresponds to [`PKCS7_get0_signers`]. /// /// [`PKCS7_get0_signers`]: https://www.openssl.org/docs/man1.0.2/crypto/PKCS7_verify.html pub fn signers( &self, certs: &StackRef, flags: Pkcs7Flags, ) -> Result, ErrorStack> { unsafe { let ptr = cvt_p(ffi::PKCS7_get0_signers( self.as_ptr(), certs.as_ptr(), flags.bits, ))?; // The returned stack is owned by the caller, but the certs inside are not! Our stack interface can't deal // with that, so instead we just manually bump the refcount of the certs so that the whole stack is properly // owned. 
let stack = Stack::::from_ptr(ptr); for cert in &stack { mem::forget(cert.to_owned()); } Ok(stack) } } } #[cfg(test)] mod tests { use crate::hash::MessageDigest; use crate::pkcs7::{Pkcs7, Pkcs7Flags}; use crate::pkey::PKey; use crate::stack::Stack; use crate::symm::Cipher; use crate::x509::store::X509StoreBuilder; use crate::x509::X509; #[test] fn encrypt_decrypt_test() { let cert = include_bytes!("../test/certs.pem"); let cert = X509::from_pem(cert).unwrap(); let mut certs = Stack::new().unwrap(); certs.push(cert.clone()).unwrap(); let message: String = String::from("foo"); let cypher = Cipher::des_ede3_cbc(); let flags = Pkcs7Flags::STREAM; let pkey = include_bytes!("../test/key.pem"); let pkey = PKey::private_key_from_pem(pkey).unwrap(); let pkcs7 = Pkcs7::encrypt(&certs, message.as_bytes(), cypher, flags).expect("should succeed"); let encrypted = pkcs7 .to_smime(message.as_bytes(), flags) .expect("should succeed"); let (pkcs7_decoded, _) = Pkcs7::from_smime(encrypted.as_slice()).expect("should succeed"); let decoded = pkcs7_decoded .decrypt(&pkey, &cert, Pkcs7Flags::empty()) .expect("should succeed"); assert_eq!(decoded, message.into_bytes()); } #[test] fn sign_verify_test_detached() { let cert = include_bytes!("../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let certs = Stack::new().unwrap(); let message = "foo"; let flags = Pkcs7Flags::STREAM | Pkcs7Flags::DETACHED; let pkey = include_bytes!("../test/key.pem"); let pkey = PKey::private_key_from_pem(pkey).unwrap(); let mut store_builder = X509StoreBuilder::new().expect("should succeed"); let root_ca = include_bytes!("../test/root-ca.pem"); let root_ca = X509::from_pem(root_ca).unwrap(); store_builder.add_cert(root_ca).expect("should succeed"); let store = store_builder.build(); let pkcs7 = Pkcs7::sign(&cert, &pkey, &certs, message.as_bytes(), flags).expect("should succeed"); let signed = pkcs7 .to_smime(message.as_bytes(), flags) .expect("should succeed"); println!("{:?}", String::from_utf8(signed.clone()).unwrap()); let (pkcs7_decoded, content) = Pkcs7::from_smime(signed.as_slice()).expect("should succeed"); let mut output = Vec::new(); pkcs7_decoded .verify( &certs, &store, Some(message.as_bytes()), Some(&mut output), flags, ) .expect("should succeed"); assert_eq!(output, message.as_bytes()); assert_eq!(content.expect("should be non-empty"), message.as_bytes()); } #[test] fn sign_verify_test_normal() { let cert = include_bytes!("../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let certs = Stack::new().unwrap(); let message = "foo"; let flags = Pkcs7Flags::STREAM; let pkey = include_bytes!("../test/key.pem"); let pkey = PKey::private_key_from_pem(pkey).unwrap(); let mut store_builder = X509StoreBuilder::new().expect("should succeed"); let root_ca = include_bytes!("../test/root-ca.pem"); let root_ca = X509::from_pem(root_ca).unwrap(); store_builder.add_cert(root_ca).expect("should succeed"); let store = store_builder.build(); let pkcs7 = Pkcs7::sign(&cert, &pkey, &certs, message.as_bytes(), flags).expect("should succeed"); let signed = pkcs7 .to_smime(message.as_bytes(), flags) .expect("should succeed"); let (pkcs7_decoded, content) = Pkcs7::from_smime(signed.as_slice()).expect("should succeed"); let mut output = Vec::new(); pkcs7_decoded .verify(&certs, &store, None, Some(&mut output), flags) .expect("should succeed"); assert_eq!(output, message.as_bytes()); assert!(content.is_none()); } #[test] fn signers() { let cert = include_bytes!("../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let 
cert_digest = cert.digest(MessageDigest::sha256()).unwrap(); let certs = Stack::new().unwrap(); let message = "foo"; let flags = Pkcs7Flags::STREAM; let pkey = include_bytes!("../test/key.pem"); let pkey = PKey::private_key_from_pem(pkey).unwrap(); let mut store_builder = X509StoreBuilder::new().expect("should succeed"); let root_ca = include_bytes!("../test/root-ca.pem"); let root_ca = X509::from_pem(root_ca).unwrap(); store_builder.add_cert(root_ca).expect("should succeed"); let pkcs7 = Pkcs7::sign(&cert, &pkey, &certs, message.as_bytes(), flags).expect("should succeed"); let signed = pkcs7 .to_smime(message.as_bytes(), flags) .expect("should succeed"); let (pkcs7_decoded, _) = Pkcs7::from_smime(signed.as_slice()).expect("should succeed"); let empty_certs = Stack::new().unwrap(); let signer_certs = pkcs7_decoded .signers(&empty_certs, flags) .expect("should succeed"); assert_eq!(empty_certs.len(), 0); assert_eq!(signer_certs.len(), 1); let signer_digest = signer_certs[0].digest(MessageDigest::sha256()).unwrap(); assert_eq!(*cert_digest, *signer_digest); } #[test] fn invalid_from_smime() { let input = String::from("Invalid SMIME Message"); let result = Pkcs7::from_smime(input.as_bytes()); assert!(result.is_err()); } } vendor/openssl/src/nid.rs0000664000175000017500000021021614160055207016243 0ustar mwhudsonmwhudson//! A collection of numerical identifiers for OpenSSL objects. use libc::{c_char, c_int}; use std::ffi::CStr; use std::str; use crate::cvt_p; use crate::error::ErrorStack; /// The digest and public-key algorithms associated with a signature. pub struct SignatureAlgorithms { /// The signature's digest. /// /// If the signature does not specify a digest, this will be `NID::UNDEF`. pub digest: Nid, /// The signature's public-key. pub pkey: Nid, } /// A numerical identifier for an OpenSSL object. /// /// Objects in OpenSSL can have a short name, a long name, and /// a numerical identifier (NID). For convenience, objects /// are usually represented in source code using these numeric /// identifiers. /// /// Users should generally not need to create new `Nid`s. /// /// # Examples /// /// To view the integer representation of a `Nid`: /// /// ``` /// use openssl::nid::Nid; /// /// assert!(Nid::AES_256_GCM.as_raw() == 901); /// ``` /// /// # External Documentation /// /// The following documentation provides context about `Nid`s and their usage /// in OpenSSL. /// /// - [Obj_nid2obj](https://www.openssl.org/docs/man1.1.0/crypto/OBJ_create.html) #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)] pub struct Nid(c_int); #[allow(non_snake_case)] impl Nid { /// Create a `Nid` from an integer representation. pub fn from_raw(raw: c_int) -> Nid { Nid(raw) } /// Return the integer representation of a `Nid`. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } /// Returns the `Nid`s of the digest and public key algorithms associated with a signature ID. /// /// This corresponds to `OBJ_find_sigid_algs`. 
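///
/// # Examples
///
/// A small sketch of mapping a composite signature NID back to its
/// components:
///
/// ```
/// use openssl::nid::Nid;
///
/// let algs = Nid::SHA256WITHRSAENCRYPTION.signature_algorithms().unwrap();
/// assert_eq!(algs.digest, Nid::SHA256);
/// assert_eq!(algs.pkey, Nid::RSAENCRYPTION);
/// ```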
#[allow(clippy::trivially_copy_pass_by_ref)] pub fn signature_algorithms(&self) -> Option { unsafe { let mut digest = 0; let mut pkey = 0; if ffi::OBJ_find_sigid_algs(self.0, &mut digest, &mut pkey) == 1 { Some(SignatureAlgorithms { digest: Nid(digest), pkey: Nid(pkey), }) } else { None } } } /// Return the string representation of a `Nid` (long) /// This corresponds to [`OBJ_nid2ln`] /// /// [`OBJ_nid2ln`]: https://www.openssl.org/docs/man1.1.0/crypto/OBJ_nid2ln.html #[allow(clippy::trivially_copy_pass_by_ref)] pub fn long_name(&self) -> Result<&'static str, ErrorStack> { unsafe { cvt_p(ffi::OBJ_nid2ln(self.0) as *mut c_char) .map(|nameptr| str::from_utf8(CStr::from_ptr(nameptr).to_bytes()).unwrap()) } } /// Return the string representation of a `Nid` (short) /// This corresponds to [`OBJ_nid2sn`] /// /// [`OBJ_nid2sn`]: https://www.openssl.org/docs/man1.1.0/crypto/OBJ_nid2sn.html #[allow(clippy::trivially_copy_pass_by_ref)] pub fn short_name(&self) -> Result<&'static str, ErrorStack> { unsafe { cvt_p(ffi::OBJ_nid2sn(self.0) as *mut c_char) .map(|nameptr| str::from_utf8(CStr::from_ptr(nameptr).to_bytes()).unwrap()) } } pub const UNDEF: Nid = Nid(ffi::NID_undef); pub const ITU_T: Nid = Nid(ffi::NID_itu_t); pub const CCITT: Nid = Nid(ffi::NID_ccitt); pub const ISO: Nid = Nid(ffi::NID_iso); pub const JOINT_ISO_ITU_T: Nid = Nid(ffi::NID_joint_iso_itu_t); pub const JOINT_ISO_CCITT: Nid = Nid(ffi::NID_joint_iso_ccitt); pub const MEMBER_BODY: Nid = Nid(ffi::NID_member_body); pub const IDENTIFIED_ORGANIZATION: Nid = Nid(ffi::NID_identified_organization); pub const HMAC_MD5: Nid = Nid(ffi::NID_hmac_md5); pub const HMAC_SHA1: Nid = Nid(ffi::NID_hmac_sha1); pub const CERTICOM_ARC: Nid = Nid(ffi::NID_certicom_arc); pub const INTERNATIONAL_ORGANIZATIONS: Nid = Nid(ffi::NID_international_organizations); pub const WAP: Nid = Nid(ffi::NID_wap); pub const WAP_WSG: Nid = Nid(ffi::NID_wap_wsg); pub const SELECTED_ATTRIBUTE_TYPES: Nid = Nid(ffi::NID_selected_attribute_types); pub const CLEARANCE: Nid = Nid(ffi::NID_clearance); pub const ISO_US: Nid = Nid(ffi::NID_ISO_US); pub const X9_57: Nid = Nid(ffi::NID_X9_57); pub const X9CM: Nid = Nid(ffi::NID_X9cm); pub const DSA: Nid = Nid(ffi::NID_dsa); pub const DSAWITHSHA1: Nid = Nid(ffi::NID_dsaWithSHA1); pub const ANSI_X9_62: Nid = Nid(ffi::NID_ansi_X9_62); pub const X9_62_PRIME_FIELD: Nid = Nid(ffi::NID_X9_62_prime_field); pub const X9_62_CHARACTERISTIC_TWO_FIELD: Nid = Nid(ffi::NID_X9_62_characteristic_two_field); pub const X9_62_ID_CHARACTERISTIC_TWO_BASIS: Nid = Nid(ffi::NID_X9_62_id_characteristic_two_basis); pub const X9_62_ONBASIS: Nid = Nid(ffi::NID_X9_62_onBasis); pub const X9_62_TPBASIS: Nid = Nid(ffi::NID_X9_62_tpBasis); pub const X9_62_PPBASIS: Nid = Nid(ffi::NID_X9_62_ppBasis); pub const X9_62_ID_ECPUBLICKEY: Nid = Nid(ffi::NID_X9_62_id_ecPublicKey); pub const X9_62_C2PNB163V1: Nid = Nid(ffi::NID_X9_62_c2pnb163v1); pub const X9_62_C2PNB163V2: Nid = Nid(ffi::NID_X9_62_c2pnb163v2); pub const X9_62_C2PNB163V3: Nid = Nid(ffi::NID_X9_62_c2pnb163v3); pub const X9_62_C2PNB176V1: Nid = Nid(ffi::NID_X9_62_c2pnb176v1); pub const X9_62_C2TNB191V1: Nid = Nid(ffi::NID_X9_62_c2tnb191v1); pub const X9_62_C2TNB191V2: Nid = Nid(ffi::NID_X9_62_c2tnb191v2); pub const X9_62_C2TNB191V3: Nid = Nid(ffi::NID_X9_62_c2tnb191v3); pub const X9_62_C2ONB191V4: Nid = Nid(ffi::NID_X9_62_c2onb191v4); pub const X9_62_C2ONB191V5: Nid = Nid(ffi::NID_X9_62_c2onb191v5); pub const X9_62_C2PNB208W1: Nid = Nid(ffi::NID_X9_62_c2pnb208w1); pub const X9_62_C2TNB239V1: Nid = 
Nid(ffi::NID_X9_62_c2tnb239v1); pub const X9_62_C2TNB239V2: Nid = Nid(ffi::NID_X9_62_c2tnb239v2); pub const X9_62_C2TNB239V3: Nid = Nid(ffi::NID_X9_62_c2tnb239v3); pub const X9_62_C2ONB239V4: Nid = Nid(ffi::NID_X9_62_c2onb239v4); pub const X9_62_C2ONB239V5: Nid = Nid(ffi::NID_X9_62_c2onb239v5); pub const X9_62_C2PNB272W1: Nid = Nid(ffi::NID_X9_62_c2pnb272w1); pub const X9_62_C2PNB304W1: Nid = Nid(ffi::NID_X9_62_c2pnb304w1); pub const X9_62_C2TNB359V1: Nid = Nid(ffi::NID_X9_62_c2tnb359v1); pub const X9_62_C2PNB368W1: Nid = Nid(ffi::NID_X9_62_c2pnb368w1); pub const X9_62_C2TNB431R1: Nid = Nid(ffi::NID_X9_62_c2tnb431r1); pub const X9_62_PRIME192V1: Nid = Nid(ffi::NID_X9_62_prime192v1); pub const X9_62_PRIME192V2: Nid = Nid(ffi::NID_X9_62_prime192v2); pub const X9_62_PRIME192V3: Nid = Nid(ffi::NID_X9_62_prime192v3); pub const X9_62_PRIME239V1: Nid = Nid(ffi::NID_X9_62_prime239v1); pub const X9_62_PRIME239V2: Nid = Nid(ffi::NID_X9_62_prime239v2); pub const X9_62_PRIME239V3: Nid = Nid(ffi::NID_X9_62_prime239v3); pub const X9_62_PRIME256V1: Nid = Nid(ffi::NID_X9_62_prime256v1); pub const ECDSA_WITH_SHA1: Nid = Nid(ffi::NID_ecdsa_with_SHA1); pub const ECDSA_WITH_RECOMMENDED: Nid = Nid(ffi::NID_ecdsa_with_Recommended); pub const ECDSA_WITH_SPECIFIED: Nid = Nid(ffi::NID_ecdsa_with_Specified); pub const ECDSA_WITH_SHA224: Nid = Nid(ffi::NID_ecdsa_with_SHA224); pub const ECDSA_WITH_SHA256: Nid = Nid(ffi::NID_ecdsa_with_SHA256); pub const ECDSA_WITH_SHA384: Nid = Nid(ffi::NID_ecdsa_with_SHA384); pub const ECDSA_WITH_SHA512: Nid = Nid(ffi::NID_ecdsa_with_SHA512); pub const SECP112R1: Nid = Nid(ffi::NID_secp112r1); pub const SECP112R2: Nid = Nid(ffi::NID_secp112r2); pub const SECP128R1: Nid = Nid(ffi::NID_secp128r1); pub const SECP128R2: Nid = Nid(ffi::NID_secp128r2); pub const SECP160K1: Nid = Nid(ffi::NID_secp160k1); pub const SECP160R1: Nid = Nid(ffi::NID_secp160r1); pub const SECP160R2: Nid = Nid(ffi::NID_secp160r2); pub const SECP192K1: Nid = Nid(ffi::NID_secp192k1); pub const SECP224K1: Nid = Nid(ffi::NID_secp224k1); pub const SECP224R1: Nid = Nid(ffi::NID_secp224r1); pub const SECP256K1: Nid = Nid(ffi::NID_secp256k1); pub const SECP384R1: Nid = Nid(ffi::NID_secp384r1); pub const SECP521R1: Nid = Nid(ffi::NID_secp521r1); pub const SECT113R1: Nid = Nid(ffi::NID_sect113r1); pub const SECT113R2: Nid = Nid(ffi::NID_sect113r2); pub const SECT131R1: Nid = Nid(ffi::NID_sect131r1); pub const SECT131R2: Nid = Nid(ffi::NID_sect131r2); pub const SECT163K1: Nid = Nid(ffi::NID_sect163k1); pub const SECT163R1: Nid = Nid(ffi::NID_sect163r1); pub const SECT163R2: Nid = Nid(ffi::NID_sect163r2); pub const SECT193R1: Nid = Nid(ffi::NID_sect193r1); pub const SECT193R2: Nid = Nid(ffi::NID_sect193r2); pub const SECT233K1: Nid = Nid(ffi::NID_sect233k1); pub const SECT233R1: Nid = Nid(ffi::NID_sect233r1); pub const SECT239K1: Nid = Nid(ffi::NID_sect239k1); pub const SECT283K1: Nid = Nid(ffi::NID_sect283k1); pub const SECT283R1: Nid = Nid(ffi::NID_sect283r1); pub const SECT409K1: Nid = Nid(ffi::NID_sect409k1); pub const SECT409R1: Nid = Nid(ffi::NID_sect409r1); pub const SECT571K1: Nid = Nid(ffi::NID_sect571k1); pub const SECT571R1: Nid = Nid(ffi::NID_sect571r1); pub const WAP_WSG_IDM_ECID_WTLS1: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls1); pub const WAP_WSG_IDM_ECID_WTLS3: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls3); pub const WAP_WSG_IDM_ECID_WTLS4: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls4); pub const WAP_WSG_IDM_ECID_WTLS5: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls5); pub const WAP_WSG_IDM_ECID_WTLS6: Nid = 
Nid(ffi::NID_wap_wsg_idm_ecid_wtls6); pub const WAP_WSG_IDM_ECID_WTLS7: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls7); pub const WAP_WSG_IDM_ECID_WTLS8: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls8); pub const WAP_WSG_IDM_ECID_WTLS9: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls9); pub const WAP_WSG_IDM_ECID_WTLS10: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls10); pub const WAP_WSG_IDM_ECID_WTLS11: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls11); pub const WAP_WSG_IDM_ECID_WTLS12: Nid = Nid(ffi::NID_wap_wsg_idm_ecid_wtls12); pub const CAST5_CBC: Nid = Nid(ffi::NID_cast5_cbc); pub const CAST5_ECB: Nid = Nid(ffi::NID_cast5_ecb); pub const CAST5_CFB64: Nid = Nid(ffi::NID_cast5_cfb64); pub const CAST5_OFB64: Nid = Nid(ffi::NID_cast5_ofb64); pub const PBEWITHMD5ANDCAST5_CBC: Nid = Nid(ffi::NID_pbeWithMD5AndCast5_CBC); pub const ID_PASSWORDBASEDMAC: Nid = Nid(ffi::NID_id_PasswordBasedMAC); pub const ID_DHBASEDMAC: Nid = Nid(ffi::NID_id_DHBasedMac); pub const RSADSI: Nid = Nid(ffi::NID_rsadsi); pub const PKCS: Nid = Nid(ffi::NID_pkcs); pub const PKCS1: Nid = Nid(ffi::NID_pkcs1); pub const RSAENCRYPTION: Nid = Nid(ffi::NID_rsaEncryption); pub const MD2WITHRSAENCRYPTION: Nid = Nid(ffi::NID_md2WithRSAEncryption); pub const MD4WITHRSAENCRYPTION: Nid = Nid(ffi::NID_md4WithRSAEncryption); pub const MD5WITHRSAENCRYPTION: Nid = Nid(ffi::NID_md5WithRSAEncryption); pub const SHA1WITHRSAENCRYPTION: Nid = Nid(ffi::NID_sha1WithRSAEncryption); pub const RSAESOAEP: Nid = Nid(ffi::NID_rsaesOaep); pub const MGF1: Nid = Nid(ffi::NID_mgf1); pub const RSASSAPSS: Nid = Nid(ffi::NID_rsassaPss); pub const SHA256WITHRSAENCRYPTION: Nid = Nid(ffi::NID_sha256WithRSAEncryption); pub const SHA384WITHRSAENCRYPTION: Nid = Nid(ffi::NID_sha384WithRSAEncryption); pub const SHA512WITHRSAENCRYPTION: Nid = Nid(ffi::NID_sha512WithRSAEncryption); pub const SHA224WITHRSAENCRYPTION: Nid = Nid(ffi::NID_sha224WithRSAEncryption); pub const PKCS3: Nid = Nid(ffi::NID_pkcs3); pub const DHKEYAGREEMENT: Nid = Nid(ffi::NID_dhKeyAgreement); pub const PKCS5: Nid = Nid(ffi::NID_pkcs5); pub const PBEWITHMD2ANDDES_CBC: Nid = Nid(ffi::NID_pbeWithMD2AndDES_CBC); pub const PBEWITHMD5ANDDES_CBC: Nid = Nid(ffi::NID_pbeWithMD5AndDES_CBC); pub const PBEWITHMD2ANDRC2_CBC: Nid = Nid(ffi::NID_pbeWithMD2AndRC2_CBC); pub const PBEWITHMD5ANDRC2_CBC: Nid = Nid(ffi::NID_pbeWithMD5AndRC2_CBC); pub const PBEWITHSHA1ANDDES_CBC: Nid = Nid(ffi::NID_pbeWithSHA1AndDES_CBC); pub const PBEWITHSHA1ANDRC2_CBC: Nid = Nid(ffi::NID_pbeWithSHA1AndRC2_CBC); pub const ID_PBKDF2: Nid = Nid(ffi::NID_id_pbkdf2); pub const PBES2: Nid = Nid(ffi::NID_pbes2); pub const PBMAC1: Nid = Nid(ffi::NID_pbmac1); pub const PKCS7: Nid = Nid(ffi::NID_pkcs7); pub const PKCS7_DATA: Nid = Nid(ffi::NID_pkcs7_data); pub const PKCS7_SIGNED: Nid = Nid(ffi::NID_pkcs7_signed); pub const PKCS7_ENVELOPED: Nid = Nid(ffi::NID_pkcs7_enveloped); pub const PKCS7_SIGNEDANDENVELOPED: Nid = Nid(ffi::NID_pkcs7_signedAndEnveloped); pub const PKCS7_DIGEST: Nid = Nid(ffi::NID_pkcs7_digest); pub const PKCS7_ENCRYPTED: Nid = Nid(ffi::NID_pkcs7_encrypted); pub const PKCS9: Nid = Nid(ffi::NID_pkcs9); pub const PKCS9_EMAILADDRESS: Nid = Nid(ffi::NID_pkcs9_emailAddress); pub const PKCS9_UNSTRUCTUREDNAME: Nid = Nid(ffi::NID_pkcs9_unstructuredName); pub const PKCS9_CONTENTTYPE: Nid = Nid(ffi::NID_pkcs9_contentType); pub const PKCS9_MESSAGEDIGEST: Nid = Nid(ffi::NID_pkcs9_messageDigest); pub const PKCS9_SIGNINGTIME: Nid = Nid(ffi::NID_pkcs9_signingTime); pub const PKCS9_COUNTERSIGNATURE: Nid = Nid(ffi::NID_pkcs9_countersignature); pub const 
PKCS9_CHALLENGEPASSWORD: Nid = Nid(ffi::NID_pkcs9_challengePassword); pub const PKCS9_UNSTRUCTUREDADDRESS: Nid = Nid(ffi::NID_pkcs9_unstructuredAddress); pub const PKCS9_EXTCERTATTRIBUTES: Nid = Nid(ffi::NID_pkcs9_extCertAttributes); pub const EXT_REQ: Nid = Nid(ffi::NID_ext_req); pub const SMIMECAPABILITIES: Nid = Nid(ffi::NID_SMIMECapabilities); pub const SMIME: Nid = Nid(ffi::NID_SMIME); pub const ID_SMIME_MOD: Nid = Nid(ffi::NID_id_smime_mod); pub const ID_SMIME_CT: Nid = Nid(ffi::NID_id_smime_ct); pub const ID_SMIME_AA: Nid = Nid(ffi::NID_id_smime_aa); pub const ID_SMIME_ALG: Nid = Nid(ffi::NID_id_smime_alg); pub const ID_SMIME_CD: Nid = Nid(ffi::NID_id_smime_cd); pub const ID_SMIME_SPQ: Nid = Nid(ffi::NID_id_smime_spq); pub const ID_SMIME_CTI: Nid = Nid(ffi::NID_id_smime_cti); pub const ID_SMIME_MOD_CMS: Nid = Nid(ffi::NID_id_smime_mod_cms); pub const ID_SMIME_MOD_ESS: Nid = Nid(ffi::NID_id_smime_mod_ess); pub const ID_SMIME_MOD_OID: Nid = Nid(ffi::NID_id_smime_mod_oid); pub const ID_SMIME_MOD_MSG_V3: Nid = Nid(ffi::NID_id_smime_mod_msg_v3); pub const ID_SMIME_MOD_ETS_ESIGNATURE_88: Nid = Nid(ffi::NID_id_smime_mod_ets_eSignature_88); pub const ID_SMIME_MOD_ETS_ESIGNATURE_97: Nid = Nid(ffi::NID_id_smime_mod_ets_eSignature_97); pub const ID_SMIME_MOD_ETS_ESIGPOLICY_88: Nid = Nid(ffi::NID_id_smime_mod_ets_eSigPolicy_88); pub const ID_SMIME_MOD_ETS_ESIGPOLICY_97: Nid = Nid(ffi::NID_id_smime_mod_ets_eSigPolicy_97); pub const ID_SMIME_CT_RECEIPT: Nid = Nid(ffi::NID_id_smime_ct_receipt); pub const ID_SMIME_CT_AUTHDATA: Nid = Nid(ffi::NID_id_smime_ct_authData); pub const ID_SMIME_CT_PUBLISHCERT: Nid = Nid(ffi::NID_id_smime_ct_publishCert); pub const ID_SMIME_CT_TSTINFO: Nid = Nid(ffi::NID_id_smime_ct_TSTInfo); pub const ID_SMIME_CT_TDTINFO: Nid = Nid(ffi::NID_id_smime_ct_TDTInfo); pub const ID_SMIME_CT_CONTENTINFO: Nid = Nid(ffi::NID_id_smime_ct_contentInfo); pub const ID_SMIME_CT_DVCSREQUESTDATA: Nid = Nid(ffi::NID_id_smime_ct_DVCSRequestData); pub const ID_SMIME_CT_DVCSRESPONSEDATA: Nid = Nid(ffi::NID_id_smime_ct_DVCSResponseData); pub const ID_SMIME_CT_COMPRESSEDDATA: Nid = Nid(ffi::NID_id_smime_ct_compressedData); pub const ID_CT_ASCIITEXTWITHCRLF: Nid = Nid(ffi::NID_id_ct_asciiTextWithCRLF); pub const ID_SMIME_AA_RECEIPTREQUEST: Nid = Nid(ffi::NID_id_smime_aa_receiptRequest); pub const ID_SMIME_AA_SECURITYLABEL: Nid = Nid(ffi::NID_id_smime_aa_securityLabel); pub const ID_SMIME_AA_MLEXPANDHISTORY: Nid = Nid(ffi::NID_id_smime_aa_mlExpandHistory); pub const ID_SMIME_AA_CONTENTHINT: Nid = Nid(ffi::NID_id_smime_aa_contentHint); pub const ID_SMIME_AA_MSGSIGDIGEST: Nid = Nid(ffi::NID_id_smime_aa_msgSigDigest); pub const ID_SMIME_AA_ENCAPCONTENTTYPE: Nid = Nid(ffi::NID_id_smime_aa_encapContentType); pub const ID_SMIME_AA_CONTENTIDENTIFIER: Nid = Nid(ffi::NID_id_smime_aa_contentIdentifier); pub const ID_SMIME_AA_MACVALUE: Nid = Nid(ffi::NID_id_smime_aa_macValue); pub const ID_SMIME_AA_EQUIVALENTLABELS: Nid = Nid(ffi::NID_id_smime_aa_equivalentLabels); pub const ID_SMIME_AA_CONTENTREFERENCE: Nid = Nid(ffi::NID_id_smime_aa_contentReference); pub const ID_SMIME_AA_ENCRYPKEYPREF: Nid = Nid(ffi::NID_id_smime_aa_encrypKeyPref); pub const ID_SMIME_AA_SIGNINGCERTIFICATE: Nid = Nid(ffi::NID_id_smime_aa_signingCertificate); pub const ID_SMIME_AA_SMIMEENCRYPTCERTS: Nid = Nid(ffi::NID_id_smime_aa_smimeEncryptCerts); pub const ID_SMIME_AA_TIMESTAMPTOKEN: Nid = Nid(ffi::NID_id_smime_aa_timeStampToken); pub const ID_SMIME_AA_ETS_SIGPOLICYID: Nid = Nid(ffi::NID_id_smime_aa_ets_sigPolicyId); pub const 
ID_SMIME_AA_ETS_COMMITMENTTYPE: Nid = Nid(ffi::NID_id_smime_aa_ets_commitmentType); pub const ID_SMIME_AA_ETS_SIGNERLOCATION: Nid = Nid(ffi::NID_id_smime_aa_ets_signerLocation); pub const ID_SMIME_AA_ETS_SIGNERATTR: Nid = Nid(ffi::NID_id_smime_aa_ets_signerAttr); pub const ID_SMIME_AA_ETS_OTHERSIGCERT: Nid = Nid(ffi::NID_id_smime_aa_ets_otherSigCert); pub const ID_SMIME_AA_ETS_CONTENTTIMESTAMP: Nid = Nid(ffi::NID_id_smime_aa_ets_contentTimestamp); pub const ID_SMIME_AA_ETS_CERTIFICATEREFS: Nid = Nid(ffi::NID_id_smime_aa_ets_CertificateRefs); pub const ID_SMIME_AA_ETS_REVOCATIONREFS: Nid = Nid(ffi::NID_id_smime_aa_ets_RevocationRefs); pub const ID_SMIME_AA_ETS_CERTVALUES: Nid = Nid(ffi::NID_id_smime_aa_ets_certValues); pub const ID_SMIME_AA_ETS_REVOCATIONVALUES: Nid = Nid(ffi::NID_id_smime_aa_ets_revocationValues); pub const ID_SMIME_AA_ETS_ESCTIMESTAMP: Nid = Nid(ffi::NID_id_smime_aa_ets_escTimeStamp); pub const ID_SMIME_AA_ETS_CERTCRLTIMESTAMP: Nid = Nid(ffi::NID_id_smime_aa_ets_certCRLTimestamp); pub const ID_SMIME_AA_ETS_ARCHIVETIMESTAMP: Nid = Nid(ffi::NID_id_smime_aa_ets_archiveTimeStamp); pub const ID_SMIME_AA_SIGNATURETYPE: Nid = Nid(ffi::NID_id_smime_aa_signatureType); pub const ID_SMIME_AA_DVCS_DVC: Nid = Nid(ffi::NID_id_smime_aa_dvcs_dvc); pub const ID_SMIME_ALG_ESDHWITH3DES: Nid = Nid(ffi::NID_id_smime_alg_ESDHwith3DES); pub const ID_SMIME_ALG_ESDHWITHRC2: Nid = Nid(ffi::NID_id_smime_alg_ESDHwithRC2); pub const ID_SMIME_ALG_3DESWRAP: Nid = Nid(ffi::NID_id_smime_alg_3DESwrap); pub const ID_SMIME_ALG_RC2WRAP: Nid = Nid(ffi::NID_id_smime_alg_RC2wrap); pub const ID_SMIME_ALG_ESDH: Nid = Nid(ffi::NID_id_smime_alg_ESDH); pub const ID_SMIME_ALG_CMS3DESWRAP: Nid = Nid(ffi::NID_id_smime_alg_CMS3DESwrap); pub const ID_SMIME_ALG_CMSRC2WRAP: Nid = Nid(ffi::NID_id_smime_alg_CMSRC2wrap); pub const ID_ALG_PWRI_KEK: Nid = Nid(ffi::NID_id_alg_PWRI_KEK); pub const ID_SMIME_CD_LDAP: Nid = Nid(ffi::NID_id_smime_cd_ldap); pub const ID_SMIME_SPQ_ETS_SQT_URI: Nid = Nid(ffi::NID_id_smime_spq_ets_sqt_uri); pub const ID_SMIME_SPQ_ETS_SQT_UNOTICE: Nid = Nid(ffi::NID_id_smime_spq_ets_sqt_unotice); pub const ID_SMIME_CTI_ETS_PROOFOFORIGIN: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfOrigin); pub const ID_SMIME_CTI_ETS_PROOFOFRECEIPT: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfReceipt); pub const ID_SMIME_CTI_ETS_PROOFOFDELIVERY: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfDelivery); pub const ID_SMIME_CTI_ETS_PROOFOFSENDER: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfSender); pub const ID_SMIME_CTI_ETS_PROOFOFAPPROVAL: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfApproval); pub const ID_SMIME_CTI_ETS_PROOFOFCREATION: Nid = Nid(ffi::NID_id_smime_cti_ets_proofOfCreation); pub const FRIENDLYNAME: Nid = Nid(ffi::NID_friendlyName); pub const LOCALKEYID: Nid = Nid(ffi::NID_localKeyID); pub const MS_CSP_NAME: Nid = Nid(ffi::NID_ms_csp_name); pub const LOCALKEYSET: Nid = Nid(ffi::NID_LocalKeySet); pub const X509CERTIFICATE: Nid = Nid(ffi::NID_x509Certificate); pub const SDSICERTIFICATE: Nid = Nid(ffi::NID_sdsiCertificate); pub const X509CRL: Nid = Nid(ffi::NID_x509Crl); pub const PBE_WITHSHA1AND128BITRC4: Nid = Nid(ffi::NID_pbe_WithSHA1And128BitRC4); pub const PBE_WITHSHA1AND40BITRC4: Nid = Nid(ffi::NID_pbe_WithSHA1And40BitRC4); pub const PBE_WITHSHA1AND3_KEY_TRIPLEDES_CBC: Nid = Nid(ffi::NID_pbe_WithSHA1And3_Key_TripleDES_CBC); pub const PBE_WITHSHA1AND2_KEY_TRIPLEDES_CBC: Nid = Nid(ffi::NID_pbe_WithSHA1And2_Key_TripleDES_CBC); pub const PBE_WITHSHA1AND128BITRC2_CBC: Nid = Nid(ffi::NID_pbe_WithSHA1And128BitRC2_CBC); 
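// Illustrative note (not from the upstream source): each constant in this list is a thin
// `Nid` wrapper around the corresponding raw OpenSSL NID integer. A `Nid` is typically
// either compared against a NID reported elsewhere in the API or mapped back to text,
// e.g. `Nid::COMMONNAME.short_name()` yields "CN" and `Nid::COMMONNAME.long_name()`
// yields "commonName" (see the tests at the end of this module). Curve NIDs such as
// `Nid::SECP384R1` can likewise be passed to `openssl::ec::EcGroup::from_curve_name`
// to construct an `EcGroup`.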
pub const PBE_WITHSHA1AND40BITRC2_CBC: Nid = Nid(ffi::NID_pbe_WithSHA1And40BitRC2_CBC); pub const KEYBAG: Nid = Nid(ffi::NID_keyBag); pub const PKCS8SHROUDEDKEYBAG: Nid = Nid(ffi::NID_pkcs8ShroudedKeyBag); pub const CERTBAG: Nid = Nid(ffi::NID_certBag); pub const CRLBAG: Nid = Nid(ffi::NID_crlBag); pub const SECRETBAG: Nid = Nid(ffi::NID_secretBag); pub const SAFECONTENTSBAG: Nid = Nid(ffi::NID_safeContentsBag); pub const MD2: Nid = Nid(ffi::NID_md2); pub const MD4: Nid = Nid(ffi::NID_md4); pub const MD5: Nid = Nid(ffi::NID_md5); pub const MD5_SHA1: Nid = Nid(ffi::NID_md5_sha1); pub const HMACWITHMD5: Nid = Nid(ffi::NID_hmacWithMD5); pub const HMACWITHSHA1: Nid = Nid(ffi::NID_hmacWithSHA1); pub const HMACWITHSHA224: Nid = Nid(ffi::NID_hmacWithSHA224); pub const HMACWITHSHA256: Nid = Nid(ffi::NID_hmacWithSHA256); pub const HMACWITHSHA384: Nid = Nid(ffi::NID_hmacWithSHA384); pub const HMACWITHSHA512: Nid = Nid(ffi::NID_hmacWithSHA512); pub const RC2_CBC: Nid = Nid(ffi::NID_rc2_cbc); pub const RC2_ECB: Nid = Nid(ffi::NID_rc2_ecb); pub const RC2_CFB64: Nid = Nid(ffi::NID_rc2_cfb64); pub const RC2_OFB64: Nid = Nid(ffi::NID_rc2_ofb64); pub const RC2_40_CBC: Nid = Nid(ffi::NID_rc2_40_cbc); pub const RC2_64_CBC: Nid = Nid(ffi::NID_rc2_64_cbc); pub const RC4: Nid = Nid(ffi::NID_rc4); pub const RC4_40: Nid = Nid(ffi::NID_rc4_40); pub const DES_EDE3_CBC: Nid = Nid(ffi::NID_des_ede3_cbc); pub const RC5_CBC: Nid = Nid(ffi::NID_rc5_cbc); pub const RC5_ECB: Nid = Nid(ffi::NID_rc5_ecb); pub const RC5_CFB64: Nid = Nid(ffi::NID_rc5_cfb64); pub const RC5_OFB64: Nid = Nid(ffi::NID_rc5_ofb64); pub const MS_EXT_REQ: Nid = Nid(ffi::NID_ms_ext_req); pub const MS_CODE_IND: Nid = Nid(ffi::NID_ms_code_ind); pub const MS_CODE_COM: Nid = Nid(ffi::NID_ms_code_com); pub const MS_CTL_SIGN: Nid = Nid(ffi::NID_ms_ctl_sign); pub const MS_SGC: Nid = Nid(ffi::NID_ms_sgc); pub const MS_EFS: Nid = Nid(ffi::NID_ms_efs); pub const MS_SMARTCARD_LOGIN: Nid = Nid(ffi::NID_ms_smartcard_login); pub const MS_UPN: Nid = Nid(ffi::NID_ms_upn); pub const IDEA_CBC: Nid = Nid(ffi::NID_idea_cbc); pub const IDEA_ECB: Nid = Nid(ffi::NID_idea_ecb); pub const IDEA_CFB64: Nid = Nid(ffi::NID_idea_cfb64); pub const IDEA_OFB64: Nid = Nid(ffi::NID_idea_ofb64); pub const BF_CBC: Nid = Nid(ffi::NID_bf_cbc); pub const BF_ECB: Nid = Nid(ffi::NID_bf_ecb); pub const BF_CFB64: Nid = Nid(ffi::NID_bf_cfb64); pub const BF_OFB64: Nid = Nid(ffi::NID_bf_ofb64); pub const ID_PKIX: Nid = Nid(ffi::NID_id_pkix); pub const ID_PKIX_MOD: Nid = Nid(ffi::NID_id_pkix_mod); pub const ID_PE: Nid = Nid(ffi::NID_id_pe); pub const ID_QT: Nid = Nid(ffi::NID_id_qt); pub const ID_KP: Nid = Nid(ffi::NID_id_kp); pub const ID_IT: Nid = Nid(ffi::NID_id_it); pub const ID_PKIP: Nid = Nid(ffi::NID_id_pkip); pub const ID_ALG: Nid = Nid(ffi::NID_id_alg); pub const ID_CMC: Nid = Nid(ffi::NID_id_cmc); pub const ID_ON: Nid = Nid(ffi::NID_id_on); pub const ID_PDA: Nid = Nid(ffi::NID_id_pda); pub const ID_ACA: Nid = Nid(ffi::NID_id_aca); pub const ID_QCS: Nid = Nid(ffi::NID_id_qcs); pub const ID_CCT: Nid = Nid(ffi::NID_id_cct); pub const ID_PPL: Nid = Nid(ffi::NID_id_ppl); pub const ID_AD: Nid = Nid(ffi::NID_id_ad); pub const ID_PKIX1_EXPLICIT_88: Nid = Nid(ffi::NID_id_pkix1_explicit_88); pub const ID_PKIX1_IMPLICIT_88: Nid = Nid(ffi::NID_id_pkix1_implicit_88); pub const ID_PKIX1_EXPLICIT_93: Nid = Nid(ffi::NID_id_pkix1_explicit_93); pub const ID_PKIX1_IMPLICIT_93: Nid = Nid(ffi::NID_id_pkix1_implicit_93); pub const ID_MOD_CRMF: Nid = Nid(ffi::NID_id_mod_crmf); pub const ID_MOD_CMC: Nid = 
Nid(ffi::NID_id_mod_cmc); pub const ID_MOD_KEA_PROFILE_88: Nid = Nid(ffi::NID_id_mod_kea_profile_88); pub const ID_MOD_KEA_PROFILE_93: Nid = Nid(ffi::NID_id_mod_kea_profile_93); pub const ID_MOD_CMP: Nid = Nid(ffi::NID_id_mod_cmp); pub const ID_MOD_QUALIFIED_CERT_88: Nid = Nid(ffi::NID_id_mod_qualified_cert_88); pub const ID_MOD_QUALIFIED_CERT_93: Nid = Nid(ffi::NID_id_mod_qualified_cert_93); pub const ID_MOD_ATTRIBUTE_CERT: Nid = Nid(ffi::NID_id_mod_attribute_cert); pub const ID_MOD_TIMESTAMP_PROTOCOL: Nid = Nid(ffi::NID_id_mod_timestamp_protocol); pub const ID_MOD_OCSP: Nid = Nid(ffi::NID_id_mod_ocsp); pub const ID_MOD_DVCS: Nid = Nid(ffi::NID_id_mod_dvcs); pub const ID_MOD_CMP2000: Nid = Nid(ffi::NID_id_mod_cmp2000); pub const INFO_ACCESS: Nid = Nid(ffi::NID_info_access); pub const BIOMETRICINFO: Nid = Nid(ffi::NID_biometricInfo); pub const QCSTATEMENTS: Nid = Nid(ffi::NID_qcStatements); pub const AC_AUDITENTITY: Nid = Nid(ffi::NID_ac_auditEntity); pub const AC_TARGETING: Nid = Nid(ffi::NID_ac_targeting); pub const AACONTROLS: Nid = Nid(ffi::NID_aaControls); pub const SBGP_IPADDRBLOCK: Nid = Nid(ffi::NID_sbgp_ipAddrBlock); pub const SBGP_AUTONOMOUSSYSNUM: Nid = Nid(ffi::NID_sbgp_autonomousSysNum); pub const SBGP_ROUTERIDENTIFIER: Nid = Nid(ffi::NID_sbgp_routerIdentifier); pub const AC_PROXYING: Nid = Nid(ffi::NID_ac_proxying); pub const SINFO_ACCESS: Nid = Nid(ffi::NID_sinfo_access); pub const PROXYCERTINFO: Nid = Nid(ffi::NID_proxyCertInfo); pub const ID_QT_CPS: Nid = Nid(ffi::NID_id_qt_cps); pub const ID_QT_UNOTICE: Nid = Nid(ffi::NID_id_qt_unotice); pub const TEXTNOTICE: Nid = Nid(ffi::NID_textNotice); pub const SERVER_AUTH: Nid = Nid(ffi::NID_server_auth); pub const CLIENT_AUTH: Nid = Nid(ffi::NID_client_auth); pub const CODE_SIGN: Nid = Nid(ffi::NID_code_sign); pub const EMAIL_PROTECT: Nid = Nid(ffi::NID_email_protect); pub const IPSECENDSYSTEM: Nid = Nid(ffi::NID_ipsecEndSystem); pub const IPSECTUNNEL: Nid = Nid(ffi::NID_ipsecTunnel); pub const IPSECUSER: Nid = Nid(ffi::NID_ipsecUser); pub const TIME_STAMP: Nid = Nid(ffi::NID_time_stamp); pub const OCSP_SIGN: Nid = Nid(ffi::NID_OCSP_sign); pub const DVCS: Nid = Nid(ffi::NID_dvcs); pub const ID_IT_CAPROTENCCERT: Nid = Nid(ffi::NID_id_it_caProtEncCert); pub const ID_IT_SIGNKEYPAIRTYPES: Nid = Nid(ffi::NID_id_it_signKeyPairTypes); pub const ID_IT_ENCKEYPAIRTYPES: Nid = Nid(ffi::NID_id_it_encKeyPairTypes); pub const ID_IT_PREFERREDSYMMALG: Nid = Nid(ffi::NID_id_it_preferredSymmAlg); pub const ID_IT_CAKEYUPDATEINFO: Nid = Nid(ffi::NID_id_it_caKeyUpdateInfo); pub const ID_IT_CURRENTCRL: Nid = Nid(ffi::NID_id_it_currentCRL); pub const ID_IT_UNSUPPORTEDOIDS: Nid = Nid(ffi::NID_id_it_unsupportedOIDs); pub const ID_IT_SUBSCRIPTIONREQUEST: Nid = Nid(ffi::NID_id_it_subscriptionRequest); pub const ID_IT_SUBSCRIPTIONRESPONSE: Nid = Nid(ffi::NID_id_it_subscriptionResponse); pub const ID_IT_KEYPAIRPARAMREQ: Nid = Nid(ffi::NID_id_it_keyPairParamReq); pub const ID_IT_KEYPAIRPARAMREP: Nid = Nid(ffi::NID_id_it_keyPairParamRep); pub const ID_IT_REVPASSPHRASE: Nid = Nid(ffi::NID_id_it_revPassphrase); pub const ID_IT_IMPLICITCONFIRM: Nid = Nid(ffi::NID_id_it_implicitConfirm); pub const ID_IT_CONFIRMWAITTIME: Nid = Nid(ffi::NID_id_it_confirmWaitTime); pub const ID_IT_ORIGPKIMESSAGE: Nid = Nid(ffi::NID_id_it_origPKIMessage); pub const ID_IT_SUPPLANGTAGS: Nid = Nid(ffi::NID_id_it_suppLangTags); pub const ID_REGCTRL: Nid = Nid(ffi::NID_id_regCtrl); pub const ID_REGINFO: Nid = Nid(ffi::NID_id_regInfo); pub const ID_REGCTRL_REGTOKEN: Nid = 
Nid(ffi::NID_id_regCtrl_regToken); pub const ID_REGCTRL_AUTHENTICATOR: Nid = Nid(ffi::NID_id_regCtrl_authenticator); pub const ID_REGCTRL_PKIPUBLICATIONINFO: Nid = Nid(ffi::NID_id_regCtrl_pkiPublicationInfo); pub const ID_REGCTRL_PKIARCHIVEOPTIONS: Nid = Nid(ffi::NID_id_regCtrl_pkiArchiveOptions); pub const ID_REGCTRL_OLDCERTID: Nid = Nid(ffi::NID_id_regCtrl_oldCertID); pub const ID_REGCTRL_PROTOCOLENCRKEY: Nid = Nid(ffi::NID_id_regCtrl_protocolEncrKey); pub const ID_REGINFO_UTF8PAIRS: Nid = Nid(ffi::NID_id_regInfo_utf8Pairs); pub const ID_REGINFO_CERTREQ: Nid = Nid(ffi::NID_id_regInfo_certReq); pub const ID_ALG_DES40: Nid = Nid(ffi::NID_id_alg_des40); pub const ID_ALG_NOSIGNATURE: Nid = Nid(ffi::NID_id_alg_noSignature); pub const ID_ALG_DH_SIG_HMAC_SHA1: Nid = Nid(ffi::NID_id_alg_dh_sig_hmac_sha1); pub const ID_ALG_DH_POP: Nid = Nid(ffi::NID_id_alg_dh_pop); pub const ID_CMC_STATUSINFO: Nid = Nid(ffi::NID_id_cmc_statusInfo); pub const ID_CMC_IDENTIFICATION: Nid = Nid(ffi::NID_id_cmc_identification); pub const ID_CMC_IDENTITYPROOF: Nid = Nid(ffi::NID_id_cmc_identityProof); pub const ID_CMC_DATARETURN: Nid = Nid(ffi::NID_id_cmc_dataReturn); pub const ID_CMC_TRANSACTIONID: Nid = Nid(ffi::NID_id_cmc_transactionId); pub const ID_CMC_SENDERNONCE: Nid = Nid(ffi::NID_id_cmc_senderNonce); pub const ID_CMC_RECIPIENTNONCE: Nid = Nid(ffi::NID_id_cmc_recipientNonce); pub const ID_CMC_ADDEXTENSIONS: Nid = Nid(ffi::NID_id_cmc_addExtensions); pub const ID_CMC_ENCRYPTEDPOP: Nid = Nid(ffi::NID_id_cmc_encryptedPOP); pub const ID_CMC_DECRYPTEDPOP: Nid = Nid(ffi::NID_id_cmc_decryptedPOP); pub const ID_CMC_LRAPOPWITNESS: Nid = Nid(ffi::NID_id_cmc_lraPOPWitness); pub const ID_CMC_GETCERT: Nid = Nid(ffi::NID_id_cmc_getCert); pub const ID_CMC_GETCRL: Nid = Nid(ffi::NID_id_cmc_getCRL); pub const ID_CMC_REVOKEREQUEST: Nid = Nid(ffi::NID_id_cmc_revokeRequest); pub const ID_CMC_REGINFO: Nid = Nid(ffi::NID_id_cmc_regInfo); pub const ID_CMC_RESPONSEINFO: Nid = Nid(ffi::NID_id_cmc_responseInfo); pub const ID_CMC_QUERYPENDING: Nid = Nid(ffi::NID_id_cmc_queryPending); pub const ID_CMC_POPLINKRANDOM: Nid = Nid(ffi::NID_id_cmc_popLinkRandom); pub const ID_CMC_POPLINKWITNESS: Nid = Nid(ffi::NID_id_cmc_popLinkWitness); pub const ID_CMC_CONFIRMCERTACCEPTANCE: Nid = Nid(ffi::NID_id_cmc_confirmCertAcceptance); pub const ID_ON_PERSONALDATA: Nid = Nid(ffi::NID_id_on_personalData); pub const ID_ON_PERMANENTIDENTIFIER: Nid = Nid(ffi::NID_id_on_permanentIdentifier); pub const ID_PDA_DATEOFBIRTH: Nid = Nid(ffi::NID_id_pda_dateOfBirth); pub const ID_PDA_PLACEOFBIRTH: Nid = Nid(ffi::NID_id_pda_placeOfBirth); pub const ID_PDA_GENDER: Nid = Nid(ffi::NID_id_pda_gender); pub const ID_PDA_COUNTRYOFCITIZENSHIP: Nid = Nid(ffi::NID_id_pda_countryOfCitizenship); pub const ID_PDA_COUNTRYOFRESIDENCE: Nid = Nid(ffi::NID_id_pda_countryOfResidence); pub const ID_ACA_AUTHENTICATIONINFO: Nid = Nid(ffi::NID_id_aca_authenticationInfo); pub const ID_ACA_ACCESSIDENTITY: Nid = Nid(ffi::NID_id_aca_accessIdentity); pub const ID_ACA_CHARGINGIDENTITY: Nid = Nid(ffi::NID_id_aca_chargingIdentity); pub const ID_ACA_GROUP: Nid = Nid(ffi::NID_id_aca_group); pub const ID_ACA_ROLE: Nid = Nid(ffi::NID_id_aca_role); pub const ID_ACA_ENCATTRS: Nid = Nid(ffi::NID_id_aca_encAttrs); pub const ID_QCS_PKIXQCSYNTAX_V1: Nid = Nid(ffi::NID_id_qcs_pkixQCSyntax_v1); pub const ID_CCT_CRS: Nid = Nid(ffi::NID_id_cct_crs); pub const ID_CCT_PKIDATA: Nid = Nid(ffi::NID_id_cct_PKIData); pub const ID_CCT_PKIRESPONSE: Nid = Nid(ffi::NID_id_cct_PKIResponse); pub const ID_PPL_ANYLANGUAGE: Nid 
= Nid(ffi::NID_id_ppl_anyLanguage); pub const ID_PPL_INHERITALL: Nid = Nid(ffi::NID_id_ppl_inheritAll); pub const INDEPENDENT: Nid = Nid(ffi::NID_Independent); pub const AD_OCSP: Nid = Nid(ffi::NID_ad_OCSP); pub const AD_CA_ISSUERS: Nid = Nid(ffi::NID_ad_ca_issuers); pub const AD_TIMESTAMPING: Nid = Nid(ffi::NID_ad_timeStamping); pub const AD_DVCS: Nid = Nid(ffi::NID_ad_dvcs); pub const CAREPOSITORY: Nid = Nid(ffi::NID_caRepository); pub const ID_PKIX_OCSP_BASIC: Nid = Nid(ffi::NID_id_pkix_OCSP_basic); pub const ID_PKIX_OCSP_NONCE: Nid = Nid(ffi::NID_id_pkix_OCSP_Nonce); pub const ID_PKIX_OCSP_CRLID: Nid = Nid(ffi::NID_id_pkix_OCSP_CrlID); pub const ID_PKIX_OCSP_ACCEPTABLERESPONSES: Nid = Nid(ffi::NID_id_pkix_OCSP_acceptableResponses); pub const ID_PKIX_OCSP_NOCHECK: Nid = Nid(ffi::NID_id_pkix_OCSP_noCheck); pub const ID_PKIX_OCSP_ARCHIVECUTOFF: Nid = Nid(ffi::NID_id_pkix_OCSP_archiveCutoff); pub const ID_PKIX_OCSP_SERVICELOCATOR: Nid = Nid(ffi::NID_id_pkix_OCSP_serviceLocator); pub const ID_PKIX_OCSP_EXTENDEDSTATUS: Nid = Nid(ffi::NID_id_pkix_OCSP_extendedStatus); pub const ID_PKIX_OCSP_VALID: Nid = Nid(ffi::NID_id_pkix_OCSP_valid); pub const ID_PKIX_OCSP_PATH: Nid = Nid(ffi::NID_id_pkix_OCSP_path); pub const ID_PKIX_OCSP_TRUSTROOT: Nid = Nid(ffi::NID_id_pkix_OCSP_trustRoot); pub const ALGORITHM: Nid = Nid(ffi::NID_algorithm); pub const MD5WITHRSA: Nid = Nid(ffi::NID_md5WithRSA); pub const DES_ECB: Nid = Nid(ffi::NID_des_ecb); pub const DES_CBC: Nid = Nid(ffi::NID_des_cbc); pub const DES_OFB64: Nid = Nid(ffi::NID_des_ofb64); pub const DES_CFB64: Nid = Nid(ffi::NID_des_cfb64); pub const RSASIGNATURE: Nid = Nid(ffi::NID_rsaSignature); pub const DSA_2: Nid = Nid(ffi::NID_dsa_2); pub const DSAWITHSHA: Nid = Nid(ffi::NID_dsaWithSHA); pub const SHAWITHRSAENCRYPTION: Nid = Nid(ffi::NID_shaWithRSAEncryption); pub const DES_EDE_ECB: Nid = Nid(ffi::NID_des_ede_ecb); pub const DES_EDE3_ECB: Nid = Nid(ffi::NID_des_ede3_ecb); pub const DES_EDE_CBC: Nid = Nid(ffi::NID_des_ede_cbc); pub const DES_EDE_CFB64: Nid = Nid(ffi::NID_des_ede_cfb64); pub const DES_EDE3_CFB64: Nid = Nid(ffi::NID_des_ede3_cfb64); pub const DES_EDE_OFB64: Nid = Nid(ffi::NID_des_ede_ofb64); pub const DES_EDE3_OFB64: Nid = Nid(ffi::NID_des_ede3_ofb64); pub const DESX_CBC: Nid = Nid(ffi::NID_desx_cbc); pub const SHA: Nid = Nid(ffi::NID_sha); pub const SHA1: Nid = Nid(ffi::NID_sha1); pub const DSAWITHSHA1_2: Nid = Nid(ffi::NID_dsaWithSHA1_2); pub const SHA1WITHRSA: Nid = Nid(ffi::NID_sha1WithRSA); pub const RIPEMD160: Nid = Nid(ffi::NID_ripemd160); pub const RIPEMD160WITHRSA: Nid = Nid(ffi::NID_ripemd160WithRSA); pub const SXNET: Nid = Nid(ffi::NID_sxnet); pub const X500: Nid = Nid(ffi::NID_X500); pub const X509: Nid = Nid(ffi::NID_X509); pub const COMMONNAME: Nid = Nid(ffi::NID_commonName); pub const SURNAME: Nid = Nid(ffi::NID_surname); pub const SERIALNUMBER: Nid = Nid(ffi::NID_serialNumber); pub const COUNTRYNAME: Nid = Nid(ffi::NID_countryName); pub const LOCALITYNAME: Nid = Nid(ffi::NID_localityName); pub const STATEORPROVINCENAME: Nid = Nid(ffi::NID_stateOrProvinceName); pub const STREETADDRESS: Nid = Nid(ffi::NID_streetAddress); pub const ORGANIZATIONNAME: Nid = Nid(ffi::NID_organizationName); pub const ORGANIZATIONALUNITNAME: Nid = Nid(ffi::NID_organizationalUnitName); pub const TITLE: Nid = Nid(ffi::NID_title); pub const DESCRIPTION: Nid = Nid(ffi::NID_description); pub const SEARCHGUIDE: Nid = Nid(ffi::NID_searchGuide); pub const BUSINESSCATEGORY: Nid = Nid(ffi::NID_businessCategory); pub const POSTALADDRESS: Nid = 
Nid(ffi::NID_postalAddress); pub const POSTALCODE: Nid = Nid(ffi::NID_postalCode); pub const POSTOFFICEBOX: Nid = Nid(ffi::NID_postOfficeBox); pub const PHYSICALDELIVERYOFFICENAME: Nid = Nid(ffi::NID_physicalDeliveryOfficeName); pub const TELEPHONENUMBER: Nid = Nid(ffi::NID_telephoneNumber); pub const TELEXNUMBER: Nid = Nid(ffi::NID_telexNumber); pub const TELETEXTERMINALIDENTIFIER: Nid = Nid(ffi::NID_teletexTerminalIdentifier); pub const FACSIMILETELEPHONENUMBER: Nid = Nid(ffi::NID_facsimileTelephoneNumber); pub const X121ADDRESS: Nid = Nid(ffi::NID_x121Address); pub const INTERNATIONALISDNNUMBER: Nid = Nid(ffi::NID_internationaliSDNNumber); pub const REGISTEREDADDRESS: Nid = Nid(ffi::NID_registeredAddress); pub const DESTINATIONINDICATOR: Nid = Nid(ffi::NID_destinationIndicator); pub const PREFERREDDELIVERYMETHOD: Nid = Nid(ffi::NID_preferredDeliveryMethod); pub const PRESENTATIONADDRESS: Nid = Nid(ffi::NID_presentationAddress); pub const SUPPORTEDAPPLICATIONCONTEXT: Nid = Nid(ffi::NID_supportedApplicationContext); pub const MEMBER: Nid = Nid(ffi::NID_member); pub const OWNER: Nid = Nid(ffi::NID_owner); pub const ROLEOCCUPANT: Nid = Nid(ffi::NID_roleOccupant); pub const SEEALSO: Nid = Nid(ffi::NID_seeAlso); pub const USERPASSWORD: Nid = Nid(ffi::NID_userPassword); pub const USERCERTIFICATE: Nid = Nid(ffi::NID_userCertificate); pub const CACERTIFICATE: Nid = Nid(ffi::NID_cACertificate); pub const AUTHORITYREVOCATIONLIST: Nid = Nid(ffi::NID_authorityRevocationList); pub const CERTIFICATEREVOCATIONLIST: Nid = Nid(ffi::NID_certificateRevocationList); pub const CROSSCERTIFICATEPAIR: Nid = Nid(ffi::NID_crossCertificatePair); pub const NAME: Nid = Nid(ffi::NID_name); pub const GIVENNAME: Nid = Nid(ffi::NID_givenName); pub const INITIALS: Nid = Nid(ffi::NID_initials); pub const GENERATIONQUALIFIER: Nid = Nid(ffi::NID_generationQualifier); pub const X500UNIQUEIDENTIFIER: Nid = Nid(ffi::NID_x500UniqueIdentifier); pub const DNQUALIFIER: Nid = Nid(ffi::NID_dnQualifier); pub const ENHANCEDSEARCHGUIDE: Nid = Nid(ffi::NID_enhancedSearchGuide); pub const PROTOCOLINFORMATION: Nid = Nid(ffi::NID_protocolInformation); pub const DISTINGUISHEDNAME: Nid = Nid(ffi::NID_distinguishedName); pub const UNIQUEMEMBER: Nid = Nid(ffi::NID_uniqueMember); pub const HOUSEIDENTIFIER: Nid = Nid(ffi::NID_houseIdentifier); pub const SUPPORTEDALGORITHMS: Nid = Nid(ffi::NID_supportedAlgorithms); pub const DELTAREVOCATIONLIST: Nid = Nid(ffi::NID_deltaRevocationList); pub const DMDNAME: Nid = Nid(ffi::NID_dmdName); pub const PSEUDONYM: Nid = Nid(ffi::NID_pseudonym); pub const ROLE: Nid = Nid(ffi::NID_role); pub const X500ALGORITHMS: Nid = Nid(ffi::NID_X500algorithms); pub const RSA: Nid = Nid(ffi::NID_rsa); pub const MDC2WITHRSA: Nid = Nid(ffi::NID_mdc2WithRSA); pub const MDC2: Nid = Nid(ffi::NID_mdc2); pub const ID_CE: Nid = Nid(ffi::NID_id_ce); pub const SUBJECT_DIRECTORY_ATTRIBUTES: Nid = Nid(ffi::NID_subject_directory_attributes); pub const SUBJECT_KEY_IDENTIFIER: Nid = Nid(ffi::NID_subject_key_identifier); pub const KEY_USAGE: Nid = Nid(ffi::NID_key_usage); pub const PRIVATE_KEY_USAGE_PERIOD: Nid = Nid(ffi::NID_private_key_usage_period); pub const SUBJECT_ALT_NAME: Nid = Nid(ffi::NID_subject_alt_name); pub const ISSUER_ALT_NAME: Nid = Nid(ffi::NID_issuer_alt_name); pub const BASIC_CONSTRAINTS: Nid = Nid(ffi::NID_basic_constraints); pub const CRL_NUMBER: Nid = Nid(ffi::NID_crl_number); pub const CRL_REASON: Nid = Nid(ffi::NID_crl_reason); pub const INVALIDITY_DATE: Nid = Nid(ffi::NID_invalidity_date); pub const DELTA_CRL: 
Nid = Nid(ffi::NID_delta_crl); pub const ISSUING_DISTRIBUTION_POINT: Nid = Nid(ffi::NID_issuing_distribution_point); pub const CERTIFICATE_ISSUER: Nid = Nid(ffi::NID_certificate_issuer); pub const NAME_CONSTRAINTS: Nid = Nid(ffi::NID_name_constraints); pub const CRL_DISTRIBUTION_POINTS: Nid = Nid(ffi::NID_crl_distribution_points); pub const CERTIFICATE_POLICIES: Nid = Nid(ffi::NID_certificate_policies); pub const ANY_POLICY: Nid = Nid(ffi::NID_any_policy); pub const POLICY_MAPPINGS: Nid = Nid(ffi::NID_policy_mappings); pub const AUTHORITY_KEY_IDENTIFIER: Nid = Nid(ffi::NID_authority_key_identifier); pub const POLICY_CONSTRAINTS: Nid = Nid(ffi::NID_policy_constraints); pub const EXT_KEY_USAGE: Nid = Nid(ffi::NID_ext_key_usage); pub const FRESHEST_CRL: Nid = Nid(ffi::NID_freshest_crl); pub const INHIBIT_ANY_POLICY: Nid = Nid(ffi::NID_inhibit_any_policy); pub const TARGET_INFORMATION: Nid = Nid(ffi::NID_target_information); pub const NO_REV_AVAIL: Nid = Nid(ffi::NID_no_rev_avail); pub const ANYEXTENDEDKEYUSAGE: Nid = Nid(ffi::NID_anyExtendedKeyUsage); pub const NETSCAPE: Nid = Nid(ffi::NID_netscape); pub const NETSCAPE_CERT_EXTENSION: Nid = Nid(ffi::NID_netscape_cert_extension); pub const NETSCAPE_DATA_TYPE: Nid = Nid(ffi::NID_netscape_data_type); pub const NETSCAPE_CERT_TYPE: Nid = Nid(ffi::NID_netscape_cert_type); pub const NETSCAPE_BASE_URL: Nid = Nid(ffi::NID_netscape_base_url); pub const NETSCAPE_REVOCATION_URL: Nid = Nid(ffi::NID_netscape_revocation_url); pub const NETSCAPE_CA_REVOCATION_URL: Nid = Nid(ffi::NID_netscape_ca_revocation_url); pub const NETSCAPE_RENEWAL_URL: Nid = Nid(ffi::NID_netscape_renewal_url); pub const NETSCAPE_CA_POLICY_URL: Nid = Nid(ffi::NID_netscape_ca_policy_url); pub const NETSCAPE_SSL_SERVER_NAME: Nid = Nid(ffi::NID_netscape_ssl_server_name); pub const NETSCAPE_COMMENT: Nid = Nid(ffi::NID_netscape_comment); pub const NETSCAPE_CERT_SEQUENCE: Nid = Nid(ffi::NID_netscape_cert_sequence); pub const NS_SGC: Nid = Nid(ffi::NID_ns_sgc); pub const ORG: Nid = Nid(ffi::NID_org); pub const DOD: Nid = Nid(ffi::NID_dod); pub const IANA: Nid = Nid(ffi::NID_iana); pub const DIRECTORY: Nid = Nid(ffi::NID_Directory); pub const MANAGEMENT: Nid = Nid(ffi::NID_Management); pub const EXPERIMENTAL: Nid = Nid(ffi::NID_Experimental); pub const PRIVATE: Nid = Nid(ffi::NID_Private); pub const SECURITY: Nid = Nid(ffi::NID_Security); pub const SNMPV2: Nid = Nid(ffi::NID_SNMPv2); pub const MAIL: Nid = Nid(ffi::NID_Mail); pub const ENTERPRISES: Nid = Nid(ffi::NID_Enterprises); pub const DCOBJECT: Nid = Nid(ffi::NID_dcObject); pub const MIME_MHS: Nid = Nid(ffi::NID_mime_mhs); pub const MIME_MHS_HEADINGS: Nid = Nid(ffi::NID_mime_mhs_headings); pub const MIME_MHS_BODIES: Nid = Nid(ffi::NID_mime_mhs_bodies); pub const ID_HEX_PARTIAL_MESSAGE: Nid = Nid(ffi::NID_id_hex_partial_message); pub const ID_HEX_MULTIPART_MESSAGE: Nid = Nid(ffi::NID_id_hex_multipart_message); pub const ZLIB_COMPRESSION: Nid = Nid(ffi::NID_zlib_compression); pub const AES_128_ECB: Nid = Nid(ffi::NID_aes_128_ecb); pub const AES_128_CBC: Nid = Nid(ffi::NID_aes_128_cbc); pub const AES_128_OFB128: Nid = Nid(ffi::NID_aes_128_ofb128); pub const AES_128_CFB128: Nid = Nid(ffi::NID_aes_128_cfb128); pub const ID_AES128_WRAP: Nid = Nid(ffi::NID_id_aes128_wrap); pub const AES_128_GCM: Nid = Nid(ffi::NID_aes_128_gcm); pub const AES_128_CCM: Nid = Nid(ffi::NID_aes_128_ccm); pub const ID_AES128_WRAP_PAD: Nid = Nid(ffi::NID_id_aes128_wrap_pad); pub const AES_192_ECB: Nid = Nid(ffi::NID_aes_192_ecb); pub const AES_192_CBC: Nid = 
Nid(ffi::NID_aes_192_cbc); pub const AES_192_OFB128: Nid = Nid(ffi::NID_aes_192_ofb128); pub const AES_192_CFB128: Nid = Nid(ffi::NID_aes_192_cfb128); pub const ID_AES192_WRAP: Nid = Nid(ffi::NID_id_aes192_wrap); pub const AES_192_GCM: Nid = Nid(ffi::NID_aes_192_gcm); pub const AES_192_CCM: Nid = Nid(ffi::NID_aes_192_ccm); pub const ID_AES192_WRAP_PAD: Nid = Nid(ffi::NID_id_aes192_wrap_pad); pub const AES_256_ECB: Nid = Nid(ffi::NID_aes_256_ecb); pub const AES_256_CBC: Nid = Nid(ffi::NID_aes_256_cbc); pub const AES_256_OFB128: Nid = Nid(ffi::NID_aes_256_ofb128); pub const AES_256_CFB128: Nid = Nid(ffi::NID_aes_256_cfb128); pub const ID_AES256_WRAP: Nid = Nid(ffi::NID_id_aes256_wrap); pub const AES_256_GCM: Nid = Nid(ffi::NID_aes_256_gcm); pub const AES_256_CCM: Nid = Nid(ffi::NID_aes_256_ccm); pub const ID_AES256_WRAP_PAD: Nid = Nid(ffi::NID_id_aes256_wrap_pad); pub const AES_128_CFB1: Nid = Nid(ffi::NID_aes_128_cfb1); pub const AES_192_CFB1: Nid = Nid(ffi::NID_aes_192_cfb1); pub const AES_256_CFB1: Nid = Nid(ffi::NID_aes_256_cfb1); pub const AES_128_CFB8: Nid = Nid(ffi::NID_aes_128_cfb8); pub const AES_192_CFB8: Nid = Nid(ffi::NID_aes_192_cfb8); pub const AES_256_CFB8: Nid = Nid(ffi::NID_aes_256_cfb8); pub const AES_128_CTR: Nid = Nid(ffi::NID_aes_128_ctr); pub const AES_192_CTR: Nid = Nid(ffi::NID_aes_192_ctr); pub const AES_256_CTR: Nid = Nid(ffi::NID_aes_256_ctr); pub const AES_128_XTS: Nid = Nid(ffi::NID_aes_128_xts); pub const AES_256_XTS: Nid = Nid(ffi::NID_aes_256_xts); pub const DES_CFB1: Nid = Nid(ffi::NID_des_cfb1); pub const DES_CFB8: Nid = Nid(ffi::NID_des_cfb8); pub const DES_EDE3_CFB1: Nid = Nid(ffi::NID_des_ede3_cfb1); pub const DES_EDE3_CFB8: Nid = Nid(ffi::NID_des_ede3_cfb8); pub const SHA256: Nid = Nid(ffi::NID_sha256); pub const SHA384: Nid = Nid(ffi::NID_sha384); pub const SHA512: Nid = Nid(ffi::NID_sha512); pub const SHA224: Nid = Nid(ffi::NID_sha224); pub const DSA_WITH_SHA224: Nid = Nid(ffi::NID_dsa_with_SHA224); pub const DSA_WITH_SHA256: Nid = Nid(ffi::NID_dsa_with_SHA256); pub const HOLD_INSTRUCTION_CODE: Nid = Nid(ffi::NID_hold_instruction_code); pub const HOLD_INSTRUCTION_NONE: Nid = Nid(ffi::NID_hold_instruction_none); pub const HOLD_INSTRUCTION_CALL_ISSUER: Nid = Nid(ffi::NID_hold_instruction_call_issuer); pub const HOLD_INSTRUCTION_REJECT: Nid = Nid(ffi::NID_hold_instruction_reject); pub const DATA: Nid = Nid(ffi::NID_data); pub const PSS: Nid = Nid(ffi::NID_pss); pub const UCL: Nid = Nid(ffi::NID_ucl); pub const PILOT: Nid = Nid(ffi::NID_pilot); pub const PILOTATTRIBUTETYPE: Nid = Nid(ffi::NID_pilotAttributeType); pub const PILOTATTRIBUTESYNTAX: Nid = Nid(ffi::NID_pilotAttributeSyntax); pub const PILOTOBJECTCLASS: Nid = Nid(ffi::NID_pilotObjectClass); pub const PILOTGROUPS: Nid = Nid(ffi::NID_pilotGroups); pub const IA5STRINGSYNTAX: Nid = Nid(ffi::NID_iA5StringSyntax); pub const CASEIGNOREIA5STRINGSYNTAX: Nid = Nid(ffi::NID_caseIgnoreIA5StringSyntax); pub const PILOTOBJECT: Nid = Nid(ffi::NID_pilotObject); pub const PILOTPERSON: Nid = Nid(ffi::NID_pilotPerson); pub const ACCOUNT: Nid = Nid(ffi::NID_account); pub const DOCUMENT: Nid = Nid(ffi::NID_document); pub const ROOM: Nid = Nid(ffi::NID_room); pub const DOCUMENTSERIES: Nid = Nid(ffi::NID_documentSeries); pub const DOMAIN: Nid = Nid(ffi::NID_Domain); pub const RFC822LOCALPART: Nid = Nid(ffi::NID_rFC822localPart); pub const DNSDOMAIN: Nid = Nid(ffi::NID_dNSDomain); pub const DOMAINRELATEDOBJECT: Nid = Nid(ffi::NID_domainRelatedObject); pub const FRIENDLYCOUNTRY: Nid = Nid(ffi::NID_friendlyCountry); pub 
const SIMPLESECURITYOBJECT: Nid = Nid(ffi::NID_simpleSecurityObject); pub const PILOTORGANIZATION: Nid = Nid(ffi::NID_pilotOrganization); pub const PILOTDSA: Nid = Nid(ffi::NID_pilotDSA); pub const QUALITYLABELLEDDATA: Nid = Nid(ffi::NID_qualityLabelledData); pub const USERID: Nid = Nid(ffi::NID_userId); pub const TEXTENCODEDORADDRESS: Nid = Nid(ffi::NID_textEncodedORAddress); pub const RFC822MAILBOX: Nid = Nid(ffi::NID_rfc822Mailbox); pub const INFO: Nid = Nid(ffi::NID_info); pub const FAVOURITEDRINK: Nid = Nid(ffi::NID_favouriteDrink); pub const ROOMNUMBER: Nid = Nid(ffi::NID_roomNumber); pub const PHOTO: Nid = Nid(ffi::NID_photo); pub const USERCLASS: Nid = Nid(ffi::NID_userClass); pub const HOST: Nid = Nid(ffi::NID_host); pub const MANAGER: Nid = Nid(ffi::NID_manager); pub const DOCUMENTIDENTIFIER: Nid = Nid(ffi::NID_documentIdentifier); pub const DOCUMENTTITLE: Nid = Nid(ffi::NID_documentTitle); pub const DOCUMENTVERSION: Nid = Nid(ffi::NID_documentVersion); pub const DOCUMENTAUTHOR: Nid = Nid(ffi::NID_documentAuthor); pub const DOCUMENTLOCATION: Nid = Nid(ffi::NID_documentLocation); pub const HOMETELEPHONENUMBER: Nid = Nid(ffi::NID_homeTelephoneNumber); pub const SECRETARY: Nid = Nid(ffi::NID_secretary); pub const OTHERMAILBOX: Nid = Nid(ffi::NID_otherMailbox); pub const LASTMODIFIEDTIME: Nid = Nid(ffi::NID_lastModifiedTime); pub const LASTMODIFIEDBY: Nid = Nid(ffi::NID_lastModifiedBy); pub const DOMAINCOMPONENT: Nid = Nid(ffi::NID_domainComponent); pub const ARECORD: Nid = Nid(ffi::NID_aRecord); pub const PILOTATTRIBUTETYPE27: Nid = Nid(ffi::NID_pilotAttributeType27); pub const MXRECORD: Nid = Nid(ffi::NID_mXRecord); pub const NSRECORD: Nid = Nid(ffi::NID_nSRecord); pub const SOARECORD: Nid = Nid(ffi::NID_sOARecord); pub const CNAMERECORD: Nid = Nid(ffi::NID_cNAMERecord); pub const ASSOCIATEDDOMAIN: Nid = Nid(ffi::NID_associatedDomain); pub const ASSOCIATEDNAME: Nid = Nid(ffi::NID_associatedName); pub const HOMEPOSTALADDRESS: Nid = Nid(ffi::NID_homePostalAddress); pub const PERSONALTITLE: Nid = Nid(ffi::NID_personalTitle); pub const MOBILETELEPHONENUMBER: Nid = Nid(ffi::NID_mobileTelephoneNumber); pub const PAGERTELEPHONENUMBER: Nid = Nid(ffi::NID_pagerTelephoneNumber); pub const FRIENDLYCOUNTRYNAME: Nid = Nid(ffi::NID_friendlyCountryName); pub const ORGANIZATIONALSTATUS: Nid = Nid(ffi::NID_organizationalStatus); pub const JANETMAILBOX: Nid = Nid(ffi::NID_janetMailbox); pub const MAILPREFERENCEOPTION: Nid = Nid(ffi::NID_mailPreferenceOption); pub const BUILDINGNAME: Nid = Nid(ffi::NID_buildingName); pub const DSAQUALITY: Nid = Nid(ffi::NID_dSAQuality); pub const SINGLELEVELQUALITY: Nid = Nid(ffi::NID_singleLevelQuality); pub const SUBTREEMINIMUMQUALITY: Nid = Nid(ffi::NID_subtreeMinimumQuality); pub const SUBTREEMAXIMUMQUALITY: Nid = Nid(ffi::NID_subtreeMaximumQuality); pub const PERSONALSIGNATURE: Nid = Nid(ffi::NID_personalSignature); pub const DITREDIRECT: Nid = Nid(ffi::NID_dITRedirect); pub const AUDIO: Nid = Nid(ffi::NID_audio); pub const DOCUMENTPUBLISHER: Nid = Nid(ffi::NID_documentPublisher); pub const ID_SET: Nid = Nid(ffi::NID_id_set); pub const SET_CTYPE: Nid = Nid(ffi::NID_set_ctype); pub const SET_MSGEXT: Nid = Nid(ffi::NID_set_msgExt); pub const SET_ATTR: Nid = Nid(ffi::NID_set_attr); pub const SET_POLICY: Nid = Nid(ffi::NID_set_policy); pub const SET_CERTEXT: Nid = Nid(ffi::NID_set_certExt); pub const SET_BRAND: Nid = Nid(ffi::NID_set_brand); pub const SETCT_PANDATA: Nid = Nid(ffi::NID_setct_PANData); pub const SETCT_PANTOKEN: Nid = Nid(ffi::NID_setct_PANToken); pub 
const SETCT_PANONLY: Nid = Nid(ffi::NID_setct_PANOnly); pub const SETCT_OIDATA: Nid = Nid(ffi::NID_setct_OIData); pub const SETCT_PI: Nid = Nid(ffi::NID_setct_PI); pub const SETCT_PIDATA: Nid = Nid(ffi::NID_setct_PIData); pub const SETCT_PIDATAUNSIGNED: Nid = Nid(ffi::NID_setct_PIDataUnsigned); pub const SETCT_HODINPUT: Nid = Nid(ffi::NID_setct_HODInput); pub const SETCT_AUTHRESBAGGAGE: Nid = Nid(ffi::NID_setct_AuthResBaggage); pub const SETCT_AUTHREVREQBAGGAGE: Nid = Nid(ffi::NID_setct_AuthRevReqBaggage); pub const SETCT_AUTHREVRESBAGGAGE: Nid = Nid(ffi::NID_setct_AuthRevResBaggage); pub const SETCT_CAPTOKENSEQ: Nid = Nid(ffi::NID_setct_CapTokenSeq); pub const SETCT_PINITRESDATA: Nid = Nid(ffi::NID_setct_PInitResData); pub const SETCT_PI_TBS: Nid = Nid(ffi::NID_setct_PI_TBS); pub const SETCT_PRESDATA: Nid = Nid(ffi::NID_setct_PResData); pub const SETCT_AUTHREQTBS: Nid = Nid(ffi::NID_setct_AuthReqTBS); pub const SETCT_AUTHRESTBS: Nid = Nid(ffi::NID_setct_AuthResTBS); pub const SETCT_AUTHRESTBSX: Nid = Nid(ffi::NID_setct_AuthResTBSX); pub const SETCT_AUTHTOKENTBS: Nid = Nid(ffi::NID_setct_AuthTokenTBS); pub const SETCT_CAPTOKENDATA: Nid = Nid(ffi::NID_setct_CapTokenData); pub const SETCT_CAPTOKENTBS: Nid = Nid(ffi::NID_setct_CapTokenTBS); pub const SETCT_ACQCARDCODEMSG: Nid = Nid(ffi::NID_setct_AcqCardCodeMsg); pub const SETCT_AUTHREVREQTBS: Nid = Nid(ffi::NID_setct_AuthRevReqTBS); pub const SETCT_AUTHREVRESDATA: Nid = Nid(ffi::NID_setct_AuthRevResData); pub const SETCT_AUTHREVRESTBS: Nid = Nid(ffi::NID_setct_AuthRevResTBS); pub const SETCT_CAPREQTBS: Nid = Nid(ffi::NID_setct_CapReqTBS); pub const SETCT_CAPREQTBSX: Nid = Nid(ffi::NID_setct_CapReqTBSX); pub const SETCT_CAPRESDATA: Nid = Nid(ffi::NID_setct_CapResData); pub const SETCT_CAPREVREQTBS: Nid = Nid(ffi::NID_setct_CapRevReqTBS); pub const SETCT_CAPREVREQTBSX: Nid = Nid(ffi::NID_setct_CapRevReqTBSX); pub const SETCT_CAPREVRESDATA: Nid = Nid(ffi::NID_setct_CapRevResData); pub const SETCT_CREDREQTBS: Nid = Nid(ffi::NID_setct_CredReqTBS); pub const SETCT_CREDREQTBSX: Nid = Nid(ffi::NID_setct_CredReqTBSX); pub const SETCT_CREDRESDATA: Nid = Nid(ffi::NID_setct_CredResData); pub const SETCT_CREDREVREQTBS: Nid = Nid(ffi::NID_setct_CredRevReqTBS); pub const SETCT_CREDREVREQTBSX: Nid = Nid(ffi::NID_setct_CredRevReqTBSX); pub const SETCT_CREDREVRESDATA: Nid = Nid(ffi::NID_setct_CredRevResData); pub const SETCT_PCERTREQDATA: Nid = Nid(ffi::NID_setct_PCertReqData); pub const SETCT_PCERTRESTBS: Nid = Nid(ffi::NID_setct_PCertResTBS); pub const SETCT_BATCHADMINREQDATA: Nid = Nid(ffi::NID_setct_BatchAdminReqData); pub const SETCT_BATCHADMINRESDATA: Nid = Nid(ffi::NID_setct_BatchAdminResData); pub const SETCT_CARDCINITRESTBS: Nid = Nid(ffi::NID_setct_CardCInitResTBS); pub const SETCT_MEAQCINITRESTBS: Nid = Nid(ffi::NID_setct_MeAqCInitResTBS); pub const SETCT_REGFORMRESTBS: Nid = Nid(ffi::NID_setct_RegFormResTBS); pub const SETCT_CERTREQDATA: Nid = Nid(ffi::NID_setct_CertReqData); pub const SETCT_CERTREQTBS: Nid = Nid(ffi::NID_setct_CertReqTBS); pub const SETCT_CERTRESDATA: Nid = Nid(ffi::NID_setct_CertResData); pub const SETCT_CERTINQREQTBS: Nid = Nid(ffi::NID_setct_CertInqReqTBS); pub const SETCT_ERRORTBS: Nid = Nid(ffi::NID_setct_ErrorTBS); pub const SETCT_PIDUALSIGNEDTBE: Nid = Nid(ffi::NID_setct_PIDualSignedTBE); pub const SETCT_PIUNSIGNEDTBE: Nid = Nid(ffi::NID_setct_PIUnsignedTBE); pub const SETCT_AUTHREQTBE: Nid = Nid(ffi::NID_setct_AuthReqTBE); pub const SETCT_AUTHRESTBE: Nid = Nid(ffi::NID_setct_AuthResTBE); pub const SETCT_AUTHRESTBEX: Nid = 
Nid(ffi::NID_setct_AuthResTBEX); pub const SETCT_AUTHTOKENTBE: Nid = Nid(ffi::NID_setct_AuthTokenTBE); pub const SETCT_CAPTOKENTBE: Nid = Nid(ffi::NID_setct_CapTokenTBE); pub const SETCT_CAPTOKENTBEX: Nid = Nid(ffi::NID_setct_CapTokenTBEX); pub const SETCT_ACQCARDCODEMSGTBE: Nid = Nid(ffi::NID_setct_AcqCardCodeMsgTBE); pub const SETCT_AUTHREVREQTBE: Nid = Nid(ffi::NID_setct_AuthRevReqTBE); pub const SETCT_AUTHREVRESTBE: Nid = Nid(ffi::NID_setct_AuthRevResTBE); pub const SETCT_AUTHREVRESTBEB: Nid = Nid(ffi::NID_setct_AuthRevResTBEB); pub const SETCT_CAPREQTBE: Nid = Nid(ffi::NID_setct_CapReqTBE); pub const SETCT_CAPREQTBEX: Nid = Nid(ffi::NID_setct_CapReqTBEX); pub const SETCT_CAPRESTBE: Nid = Nid(ffi::NID_setct_CapResTBE); pub const SETCT_CAPREVREQTBE: Nid = Nid(ffi::NID_setct_CapRevReqTBE); pub const SETCT_CAPREVREQTBEX: Nid = Nid(ffi::NID_setct_CapRevReqTBEX); pub const SETCT_CAPREVRESTBE: Nid = Nid(ffi::NID_setct_CapRevResTBE); pub const SETCT_CREDREQTBE: Nid = Nid(ffi::NID_setct_CredReqTBE); pub const SETCT_CREDREQTBEX: Nid = Nid(ffi::NID_setct_CredReqTBEX); pub const SETCT_CREDRESTBE: Nid = Nid(ffi::NID_setct_CredResTBE); pub const SETCT_CREDREVREQTBE: Nid = Nid(ffi::NID_setct_CredRevReqTBE); pub const SETCT_CREDREVREQTBEX: Nid = Nid(ffi::NID_setct_CredRevReqTBEX); pub const SETCT_CREDREVRESTBE: Nid = Nid(ffi::NID_setct_CredRevResTBE); pub const SETCT_BATCHADMINREQTBE: Nid = Nid(ffi::NID_setct_BatchAdminReqTBE); pub const SETCT_BATCHADMINRESTBE: Nid = Nid(ffi::NID_setct_BatchAdminResTBE); pub const SETCT_REGFORMREQTBE: Nid = Nid(ffi::NID_setct_RegFormReqTBE); pub const SETCT_CERTREQTBE: Nid = Nid(ffi::NID_setct_CertReqTBE); pub const SETCT_CERTREQTBEX: Nid = Nid(ffi::NID_setct_CertReqTBEX); pub const SETCT_CERTRESTBE: Nid = Nid(ffi::NID_setct_CertResTBE); pub const SETCT_CRLNOTIFICATIONTBS: Nid = Nid(ffi::NID_setct_CRLNotificationTBS); pub const SETCT_CRLNOTIFICATIONRESTBS: Nid = Nid(ffi::NID_setct_CRLNotificationResTBS); pub const SETCT_BCIDISTRIBUTIONTBS: Nid = Nid(ffi::NID_setct_BCIDistributionTBS); pub const SETEXT_GENCRYPT: Nid = Nid(ffi::NID_setext_genCrypt); pub const SETEXT_MIAUTH: Nid = Nid(ffi::NID_setext_miAuth); pub const SETEXT_PINSECURE: Nid = Nid(ffi::NID_setext_pinSecure); pub const SETEXT_PINANY: Nid = Nid(ffi::NID_setext_pinAny); pub const SETEXT_TRACK2: Nid = Nid(ffi::NID_setext_track2); pub const SETEXT_CV: Nid = Nid(ffi::NID_setext_cv); pub const SET_POLICY_ROOT: Nid = Nid(ffi::NID_set_policy_root); pub const SETCEXT_HASHEDROOT: Nid = Nid(ffi::NID_setCext_hashedRoot); pub const SETCEXT_CERTTYPE: Nid = Nid(ffi::NID_setCext_certType); pub const SETCEXT_MERCHDATA: Nid = Nid(ffi::NID_setCext_merchData); pub const SETCEXT_CCERTREQUIRED: Nid = Nid(ffi::NID_setCext_cCertRequired); pub const SETCEXT_TUNNELING: Nid = Nid(ffi::NID_setCext_tunneling); pub const SETCEXT_SETEXT: Nid = Nid(ffi::NID_setCext_setExt); pub const SETCEXT_SETQUALF: Nid = Nid(ffi::NID_setCext_setQualf); pub const SETCEXT_PGWYCAPABILITIES: Nid = Nid(ffi::NID_setCext_PGWYcapabilities); pub const SETCEXT_TOKENIDENTIFIER: Nid = Nid(ffi::NID_setCext_TokenIdentifier); pub const SETCEXT_TRACK2DATA: Nid = Nid(ffi::NID_setCext_Track2Data); pub const SETCEXT_TOKENTYPE: Nid = Nid(ffi::NID_setCext_TokenType); pub const SETCEXT_ISSUERCAPABILITIES: Nid = Nid(ffi::NID_setCext_IssuerCapabilities); pub const SETATTR_CERT: Nid = Nid(ffi::NID_setAttr_Cert); pub const SETATTR_PGWYCAP: Nid = Nid(ffi::NID_setAttr_PGWYcap); pub const SETATTR_TOKENTYPE: Nid = Nid(ffi::NID_setAttr_TokenType); pub const SETATTR_ISSCAP: Nid = 
Nid(ffi::NID_setAttr_IssCap); pub const SET_ROOTKEYTHUMB: Nid = Nid(ffi::NID_set_rootKeyThumb); pub const SET_ADDPOLICY: Nid = Nid(ffi::NID_set_addPolicy); pub const SETATTR_TOKEN_EMV: Nid = Nid(ffi::NID_setAttr_Token_EMV); pub const SETATTR_TOKEN_B0PRIME: Nid = Nid(ffi::NID_setAttr_Token_B0Prime); pub const SETATTR_ISSCAP_CVM: Nid = Nid(ffi::NID_setAttr_IssCap_CVM); pub const SETATTR_ISSCAP_T2: Nid = Nid(ffi::NID_setAttr_IssCap_T2); pub const SETATTR_ISSCAP_SIG: Nid = Nid(ffi::NID_setAttr_IssCap_Sig); pub const SETATTR_GENCRYPTGRM: Nid = Nid(ffi::NID_setAttr_GenCryptgrm); pub const SETATTR_T2ENC: Nid = Nid(ffi::NID_setAttr_T2Enc); pub const SETATTR_T2CLEARTXT: Nid = Nid(ffi::NID_setAttr_T2cleartxt); pub const SETATTR_TOKICCSIG: Nid = Nid(ffi::NID_setAttr_TokICCsig); pub const SETATTR_SECDEVSIG: Nid = Nid(ffi::NID_setAttr_SecDevSig); pub const SET_BRAND_IATA_ATA: Nid = Nid(ffi::NID_set_brand_IATA_ATA); pub const SET_BRAND_DINERS: Nid = Nid(ffi::NID_set_brand_Diners); pub const SET_BRAND_AMERICANEXPRESS: Nid = Nid(ffi::NID_set_brand_AmericanExpress); pub const SET_BRAND_JCB: Nid = Nid(ffi::NID_set_brand_JCB); pub const SET_BRAND_VISA: Nid = Nid(ffi::NID_set_brand_Visa); pub const SET_BRAND_MASTERCARD: Nid = Nid(ffi::NID_set_brand_MasterCard); pub const SET_BRAND_NOVUS: Nid = Nid(ffi::NID_set_brand_Novus); pub const DES_CDMF: Nid = Nid(ffi::NID_des_cdmf); pub const RSAOAEPENCRYPTIONSET: Nid = Nid(ffi::NID_rsaOAEPEncryptionSET); pub const IPSEC3: Nid = Nid(ffi::NID_ipsec3); pub const IPSEC4: Nid = Nid(ffi::NID_ipsec4); pub const WHIRLPOOL: Nid = Nid(ffi::NID_whirlpool); pub const CRYPTOPRO: Nid = Nid(ffi::NID_cryptopro); pub const CRYPTOCOM: Nid = Nid(ffi::NID_cryptocom); pub const ID_GOSTR3411_94_WITH_GOSTR3410_2001: Nid = Nid(ffi::NID_id_GostR3411_94_with_GostR3410_2001); pub const ID_GOSTR3411_94_WITH_GOSTR3410_94: Nid = Nid(ffi::NID_id_GostR3411_94_with_GostR3410_94); pub const ID_GOSTR3411_94: Nid = Nid(ffi::NID_id_GostR3411_94); pub const ID_HMACGOSTR3411_94: Nid = Nid(ffi::NID_id_HMACGostR3411_94); pub const ID_GOSTR3410_2001: Nid = Nid(ffi::NID_id_GostR3410_2001); pub const ID_GOSTR3410_94: Nid = Nid(ffi::NID_id_GostR3410_94); pub const ID_GOST28147_89: Nid = Nid(ffi::NID_id_Gost28147_89); pub const GOST89_CNT: Nid = Nid(ffi::NID_gost89_cnt); pub const ID_GOST28147_89_MAC: Nid = Nid(ffi::NID_id_Gost28147_89_MAC); pub const ID_GOSTR3411_94_PRF: Nid = Nid(ffi::NID_id_GostR3411_94_prf); pub const ID_GOSTR3410_2001DH: Nid = Nid(ffi::NID_id_GostR3410_2001DH); pub const ID_GOSTR3410_94DH: Nid = Nid(ffi::NID_id_GostR3410_94DH); pub const ID_GOST28147_89_CRYPTOPRO_KEYMESHING: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_KeyMeshing); pub const ID_GOST28147_89_NONE_KEYMESHING: Nid = Nid(ffi::NID_id_Gost28147_89_None_KeyMeshing); pub const ID_GOSTR3411_94_TESTPARAMSET: Nid = Nid(ffi::NID_id_GostR3411_94_TestParamSet); pub const ID_GOSTR3411_94_CRYPTOPROPARAMSET: Nid = Nid(ffi::NID_id_GostR3411_94_CryptoProParamSet); pub const ID_GOST28147_89_TESTPARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_TestParamSet); pub const ID_GOST28147_89_CRYPTOPRO_A_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_A_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_B_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_B_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_C_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_C_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_D_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_D_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_OSCAR_1_1_PARAMSET: Nid = 
Nid(ffi::NID_id_Gost28147_89_CryptoPro_Oscar_1_1_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_OSCAR_1_0_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_Oscar_1_0_ParamSet); pub const ID_GOST28147_89_CRYPTOPRO_RIC_1_PARAMSET: Nid = Nid(ffi::NID_id_Gost28147_89_CryptoPro_RIC_1_ParamSet); pub const ID_GOSTR3410_94_TESTPARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_TestParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_A_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_A_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_B_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_B_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_C_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_C_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_D_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_D_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_XCHA_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_XchA_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_XCHB_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_XchB_ParamSet); pub const ID_GOSTR3410_94_CRYPTOPRO_XCHC_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_94_CryptoPro_XchC_ParamSet); pub const ID_GOSTR3410_2001_TESTPARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_TestParamSet); pub const ID_GOSTR3410_2001_CRYPTOPRO_A_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_CryptoPro_A_ParamSet); pub const ID_GOSTR3410_2001_CRYPTOPRO_B_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_CryptoPro_B_ParamSet); pub const ID_GOSTR3410_2001_CRYPTOPRO_C_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_CryptoPro_C_ParamSet); pub const ID_GOSTR3410_2001_CRYPTOPRO_XCHA_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_CryptoPro_XchA_ParamSet); pub const ID_GOSTR3410_2001_CRYPTOPRO_XCHB_PARAMSET: Nid = Nid(ffi::NID_id_GostR3410_2001_CryptoPro_XchB_ParamSet); pub const ID_GOSTR3410_94_A: Nid = Nid(ffi::NID_id_GostR3410_94_a); pub const ID_GOSTR3410_94_ABIS: Nid = Nid(ffi::NID_id_GostR3410_94_aBis); pub const ID_GOSTR3410_94_B: Nid = Nid(ffi::NID_id_GostR3410_94_b); pub const ID_GOSTR3410_94_BBIS: Nid = Nid(ffi::NID_id_GostR3410_94_bBis); pub const ID_GOST28147_89_CC: Nid = Nid(ffi::NID_id_Gost28147_89_cc); pub const ID_GOSTR3410_94_CC: Nid = Nid(ffi::NID_id_GostR3410_94_cc); pub const ID_GOSTR3410_2001_CC: Nid = Nid(ffi::NID_id_GostR3410_2001_cc); pub const ID_GOSTR3411_94_WITH_GOSTR3410_94_CC: Nid = Nid(ffi::NID_id_GostR3411_94_with_GostR3410_94_cc); pub const ID_GOSTR3411_94_WITH_GOSTR3410_2001_CC: Nid = Nid(ffi::NID_id_GostR3411_94_with_GostR3410_2001_cc); pub const ID_GOSTR3410_2001_PARAMSET_CC: Nid = Nid(ffi::NID_id_GostR3410_2001_ParamSet_cc); pub const CAMELLIA_128_CBC: Nid = Nid(ffi::NID_camellia_128_cbc); pub const CAMELLIA_192_CBC: Nid = Nid(ffi::NID_camellia_192_cbc); pub const CAMELLIA_256_CBC: Nid = Nid(ffi::NID_camellia_256_cbc); pub const ID_CAMELLIA128_WRAP: Nid = Nid(ffi::NID_id_camellia128_wrap); pub const ID_CAMELLIA192_WRAP: Nid = Nid(ffi::NID_id_camellia192_wrap); pub const ID_CAMELLIA256_WRAP: Nid = Nid(ffi::NID_id_camellia256_wrap); pub const CAMELLIA_128_ECB: Nid = Nid(ffi::NID_camellia_128_ecb); pub const CAMELLIA_128_OFB128: Nid = Nid(ffi::NID_camellia_128_ofb128); pub const CAMELLIA_128_CFB128: Nid = Nid(ffi::NID_camellia_128_cfb128); pub const CAMELLIA_192_ECB: Nid = Nid(ffi::NID_camellia_192_ecb); pub const CAMELLIA_192_OFB128: Nid = Nid(ffi::NID_camellia_192_ofb128); pub const CAMELLIA_192_CFB128: Nid = Nid(ffi::NID_camellia_192_cfb128); pub const CAMELLIA_256_ECB: Nid = Nid(ffi::NID_camellia_256_ecb); pub const CAMELLIA_256_OFB128: Nid = 
Nid(ffi::NID_camellia_256_ofb128); pub const CAMELLIA_256_CFB128: Nid = Nid(ffi::NID_camellia_256_cfb128); pub const CAMELLIA_128_CFB1: Nid = Nid(ffi::NID_camellia_128_cfb1); pub const CAMELLIA_192_CFB1: Nid = Nid(ffi::NID_camellia_192_cfb1); pub const CAMELLIA_256_CFB1: Nid = Nid(ffi::NID_camellia_256_cfb1); pub const CAMELLIA_128_CFB8: Nid = Nid(ffi::NID_camellia_128_cfb8); pub const CAMELLIA_192_CFB8: Nid = Nid(ffi::NID_camellia_192_cfb8); pub const CAMELLIA_256_CFB8: Nid = Nid(ffi::NID_camellia_256_cfb8); pub const KISA: Nid = Nid(ffi::NID_kisa); pub const SEED_ECB: Nid = Nid(ffi::NID_seed_ecb); pub const SEED_CBC: Nid = Nid(ffi::NID_seed_cbc); pub const SEED_CFB128: Nid = Nid(ffi::NID_seed_cfb128); pub const SEED_OFB128: Nid = Nid(ffi::NID_seed_ofb128); pub const HMAC: Nid = Nid(ffi::NID_hmac); pub const CMAC: Nid = Nid(ffi::NID_cmac); pub const RC4_HMAC_MD5: Nid = Nid(ffi::NID_rc4_hmac_md5); pub const AES_128_CBC_HMAC_SHA1: Nid = Nid(ffi::NID_aes_128_cbc_hmac_sha1); pub const AES_192_CBC_HMAC_SHA1: Nid = Nid(ffi::NID_aes_192_cbc_hmac_sha1); pub const AES_256_CBC_HMAC_SHA1: Nid = Nid(ffi::NID_aes_256_cbc_hmac_sha1); } #[cfg(test)] mod test { use super::Nid; #[test] fn signature_digest() { let algs = Nid::SHA256WITHRSAENCRYPTION.signature_algorithms().unwrap(); assert_eq!(algs.digest, Nid::SHA256); assert_eq!(algs.pkey, Nid::RSAENCRYPTION); } #[test] fn test_long_name_conversion() { let common_name = Nid::COMMONNAME; let organizational_unit_name = Nid::ORGANIZATIONALUNITNAME; let aes256_cbc_hmac_sha1 = Nid::AES_256_CBC_HMAC_SHA1; let id_cmc_lrapopwitness = Nid::ID_CMC_LRAPOPWITNESS; let ms_ctl_sign = Nid::MS_CTL_SIGN; let undefined_nid = Nid::from_raw(118); assert_eq!(common_name.long_name().unwrap(), "commonName"); assert_eq!( organizational_unit_name.long_name().unwrap(), "organizationalUnitName" ); assert_eq!( aes256_cbc_hmac_sha1.long_name().unwrap(), "aes-256-cbc-hmac-sha1" ); assert_eq!( id_cmc_lrapopwitness.long_name().unwrap(), "id-cmc-lraPOPWitness" ); assert_eq!( ms_ctl_sign.long_name().unwrap(), "Microsoft Trust List Signing" ); assert!( undefined_nid.long_name().is_err(), "undefined_nid should not return a valid value" ); } #[test] fn test_short_name_conversion() { let common_name = Nid::COMMONNAME; let organizational_unit_name = Nid::ORGANIZATIONALUNITNAME; let aes256_cbc_hmac_sha1 = Nid::AES_256_CBC_HMAC_SHA1; let id_cmc_lrapopwitness = Nid::ID_CMC_LRAPOPWITNESS; let ms_ctl_sign = Nid::MS_CTL_SIGN; let undefined_nid = Nid::from_raw(118); assert_eq!(common_name.short_name().unwrap(), "CN"); assert_eq!(organizational_unit_name.short_name().unwrap(), "OU"); assert_eq!( aes256_cbc_hmac_sha1.short_name().unwrap(), "AES-256-CBC-HMAC-SHA1" ); assert_eq!( id_cmc_lrapopwitness.short_name().unwrap(), "id-cmc-lraPOPWitness" ); assert_eq!(ms_ctl_sign.short_name().unwrap(), "msCTLSign"); assert!( undefined_nid.short_name().is_err(), "undefined_nid should not return a valid value" ); } } vendor/openssl/src/conf.rs0000664000175000017500000000256414160055207016423 0ustar mwhudsonmwhudson//! Interface for processing OpenSSL configuration files. use crate::cvt_p; use crate::error::ErrorStack; pub struct ConfMethod(*mut ffi::CONF_METHOD); impl ConfMethod { /// Retrieve handle to the default OpenSSL configuration file processing function. pub fn default() -> ConfMethod { unsafe { ffi::init(); // `NCONF` stands for "New Conf", as described in crypto/conf/conf_lib.c. This is // a newer API than the "CONF classic" functions. ConfMethod(ffi::NCONF_default()) } } /// Construct from raw pointer. 
    ///
    /// # Safety
    ///
    /// The caller must ensure that the pointer is valid.
    pub unsafe fn from_ptr(ptr: *mut ffi::CONF_METHOD) -> ConfMethod {
        ConfMethod(ptr)
    }

    /// Convert to raw pointer.
    pub fn as_ptr(&self) -> *mut ffi::CONF_METHOD {
        self.0
    }
}

foreign_type_and_impl_send_sync! {
    type CType = ffi::CONF;
    fn drop = ffi::NCONF_free;

    pub struct Conf;
    pub struct ConfRef;
}

impl Conf {
    /// Create a configuration parser.
    ///
    /// # Examples
    ///
    /// ```
    /// use openssl::conf::{Conf, ConfMethod};
    ///
    /// let conf = Conf::new(ConfMethod::default());
    /// ```
    pub fn new(method: ConfMethod) -> Result<Conf, ErrorStack> {
        unsafe { cvt_p(ffi::NCONF_new(method.as_ptr())).map(Conf) }
    }
}
vendor/openssl/src/bio.rs0000664000175000017500000000346714160055207016252 0ustar mwhudsonmwhudson
use cfg_if::cfg_if;
use libc::c_int;
use std::marker::PhantomData;
use std::ptr;
use std::slice;

use crate::cvt_p;
use crate::error::ErrorStack;

pub struct MemBioSlice<'a>(*mut ffi::BIO, PhantomData<&'a [u8]>);

impl<'a> Drop for MemBioSlice<'a> {
    fn drop(&mut self) {
        unsafe {
            ffi::BIO_free_all(self.0);
        }
    }
}

impl<'a> MemBioSlice<'a> {
    pub fn new(buf: &'a [u8]) -> Result<MemBioSlice<'a>, ErrorStack> {
        ffi::init();
        assert!(buf.len() <= c_int::max_value() as usize);
        let bio = unsafe {
            cvt_p(BIO_new_mem_buf(
                buf.as_ptr() as *const _,
                buf.len() as c_int,
            ))?
        };
        Ok(MemBioSlice(bio, PhantomData))
    }

    pub fn as_ptr(&self) -> *mut ffi::BIO {
        self.0
    }
}

pub struct MemBio(*mut ffi::BIO);

impl Drop for MemBio {
    fn drop(&mut self) {
        unsafe {
            ffi::BIO_free_all(self.0);
        }
    }
}

impl MemBio {
    pub fn new() -> Result<MemBio, ErrorStack> {
        ffi::init();
        let bio = unsafe { cvt_p(ffi::BIO_new(ffi::BIO_s_mem()))? };
        Ok(MemBio(bio))
    }

    pub fn as_ptr(&self) -> *mut ffi::BIO {
        self.0
    }

    pub fn get_buf(&self) -> &[u8] {
        unsafe {
            let mut ptr = ptr::null_mut();
            let len = ffi::BIO_get_mem_data(self.0, &mut ptr);
            slice::from_raw_parts(ptr as *const _ as *const _, len as usize)
        }
    }

    pub unsafe fn from_ptr(bio: *mut ffi::BIO) -> MemBio {
        MemBio(bio)
    }
}

cfg_if! {
    if #[cfg(ossl102)] {
        use ffi::BIO_new_mem_buf;
    } else {
        #[allow(bad_style)]
        unsafe fn BIO_new_mem_buf(buf: *const ::libc::c_void, len: ::libc::c_int) -> *mut ffi::BIO {
            ffi::BIO_new_mem_buf(buf as *mut _, len)
        }
    }
}
vendor/openssl/src/rand.rs0000664000175000017500000000303114160055207016410 0ustar mwhudsonmwhudson
//! Utilities for secure random number generation.
//!
//! # Examples
//!
//! To generate a buffer with cryptographically strong bytes:
//!
//! ```
//! use openssl::rand::rand_bytes;
//!
//! let mut buf = [0; 256];
//! rand_bytes(&mut buf).unwrap();
//! ```
use libc::c_int;

use crate::cvt;
use crate::error::ErrorStack;

/// Fill buffer with cryptographically strong pseudo-random bytes.
///
/// This corresponds to [`RAND_bytes`].
///
/// # Examples
///
/// To generate a buffer with cryptographically strong bytes:
///
/// ```
/// use openssl::rand::rand_bytes;
///
/// let mut buf = [0; 256];
/// rand_bytes(&mut buf).unwrap();
/// ```
///
/// [`RAND_bytes`]: https://www.openssl.org/docs/man1.1.0/crypto/RAND_bytes.html
pub fn rand_bytes(buf: &mut [u8]) -> Result<(), ErrorStack> {
    unsafe {
        ffi::init();
        assert!(buf.len() <= c_int::max_value() as usize);
        cvt(ffi::RAND_bytes(buf.as_mut_ptr(), buf.len() as c_int)).map(|_| ())
    }
}

/// Controls random device file descriptor behavior.
///
/// Requires OpenSSL 1.1.1 or newer.
///
/// This corresponds to [`RAND_keep_random_devices_open`].
///
/// [`RAND_keep_random_devices_open`]: https://www.openssl.org/docs/manmaster/man3/RAND_keep_random_devices_open.html
#[cfg(ossl111)]
pub fn keep_random_devices_open(keep: bool) {
    unsafe {
        ffi::RAND_keep_random_devices_open(keep as c_int);
    }
}

#[cfg(test)]
mod tests {
    use super::rand_bytes;

    #[test]
    fn test_rand_bytes() {
        let mut buf = [0; 32];
        rand_bytes(&mut buf).unwrap();
    }
}
vendor/openssl/src/x509/0000775000175000017500000000000014172417313015631 5ustar mwhudsonmwhudson
vendor/openssl/src/x509/extension.rs0000664000175000017500000004027514160055207020220 0ustar mwhudsonmwhudson
//! Add extensions to an `X509` certificate or certificate request.
//!
//! The extensions defined for X.509 v3 certificates provide methods for
//! associating additional attributes with users or public keys and for
//! managing relationships between CAs. The extensions created using this
//! module can be used with `X509v3Context` objects.
//!
//! # Example
//!
//! ```rust
//! use openssl::x509::extension::BasicConstraints;
//! use openssl::x509::X509Extension;
//!
//! let mut bc = BasicConstraints::new();
//! let bc = bc.critical().ca().pathlen(1);
//!
//! let extension: X509Extension = bc.build().unwrap();
//! ```
use std::fmt::Write;

use crate::error::ErrorStack;
use crate::nid::Nid;
use crate::x509::{X509Extension, X509v3Context};

/// An extension which indicates whether a certificate is a CA certificate.
pub struct BasicConstraints {
    critical: bool,
    ca: bool,
    pathlen: Option<u32>,
}

impl Default for BasicConstraints {
    fn default() -> BasicConstraints {
        BasicConstraints::new()
    }
}

impl BasicConstraints {
    /// Construct a new `BasicConstraints` extension.
    pub fn new() -> BasicConstraints {
        BasicConstraints {
            critical: false,
            ca: false,
            pathlen: None,
        }
    }

    /// Sets the `critical` flag to `true`. The extension will be critical.
    pub fn critical(&mut self) -> &mut BasicConstraints {
        self.critical = true;
        self
    }

    /// Sets the `ca` flag to `true`.
    pub fn ca(&mut self) -> &mut BasicConstraints {
        self.ca = true;
        self
    }

    /// Sets the pathlen to an optional non-negative value. The pathlen is the
    /// maximum number of CAs that can appear below this one in a chain.
    pub fn pathlen(&mut self, pathlen: u32) -> &mut BasicConstraints {
        self.pathlen = Some(pathlen);
        self
    }

    /// Return the `BasicConstraints` extension as an `X509Extension`.
    pub fn build(&self) -> Result<X509Extension, ErrorStack> {
        let mut value = String::new();
        if self.critical {
            value.push_str("critical,");
        }
        value.push_str("CA:");
        if self.ca {
            value.push_str("TRUE");
        } else {
            value.push_str("FALSE");
        }
        if let Some(pathlen) = self.pathlen {
            write!(value, ",pathlen:{}", pathlen).unwrap();
        }
        X509Extension::new_nid(None, None, Nid::BASIC_CONSTRAINTS, &value)
    }
}

/// An extension consisting of a list of names of the permitted key usages.
pub struct KeyUsage {
    critical: bool,
    digital_signature: bool,
    non_repudiation: bool,
    key_encipherment: bool,
    data_encipherment: bool,
    key_agreement: bool,
    key_cert_sign: bool,
    crl_sign: bool,
    encipher_only: bool,
    decipher_only: bool,
}

impl Default for KeyUsage {
    fn default() -> KeyUsage {
        KeyUsage::new()
    }
}

impl KeyUsage {
    /// Construct a new `KeyUsage` extension.
    pub fn new() -> KeyUsage {
        KeyUsage {
            critical: false,
            digital_signature: false,
            non_repudiation: false,
            key_encipherment: false,
            data_encipherment: false,
            key_agreement: false,
            key_cert_sign: false,
            crl_sign: false,
            encipher_only: false,
            decipher_only: false,
        }
    }

    /// Sets the `critical` flag to `true`. The extension will be critical.
pub fn critical(&mut self) -> &mut KeyUsage { self.critical = true; self } /// Sets the `digitalSignature` flag to `true`. pub fn digital_signature(&mut self) -> &mut KeyUsage { self.digital_signature = true; self } /// Sets the `nonRepudiation` flag to `true`. pub fn non_repudiation(&mut self) -> &mut KeyUsage { self.non_repudiation = true; self } /// Sets the `keyEncipherment` flag to `true`. pub fn key_encipherment(&mut self) -> &mut KeyUsage { self.key_encipherment = true; self } /// Sets the `dataEncipherment` flag to `true`. pub fn data_encipherment(&mut self) -> &mut KeyUsage { self.data_encipherment = true; self } /// Sets the `keyAgreement` flag to `true`. pub fn key_agreement(&mut self) -> &mut KeyUsage { self.key_agreement = true; self } /// Sets the `keyCertSign` flag to `true`. pub fn key_cert_sign(&mut self) -> &mut KeyUsage { self.key_cert_sign = true; self } /// Sets the `cRLSign` flag to `true`. pub fn crl_sign(&mut self) -> &mut KeyUsage { self.crl_sign = true; self } /// Sets the `encipherOnly` flag to `true`. pub fn encipher_only(&mut self) -> &mut KeyUsage { self.encipher_only = true; self } /// Sets the `decipherOnly` flag to `true`. pub fn decipher_only(&mut self) -> &mut KeyUsage { self.decipher_only = true; self } /// Return the `KeyUsage` extension as an `X509Extension`. pub fn build(&self) -> Result { let mut value = String::new(); let mut first = true; append(&mut value, &mut first, self.critical, "critical"); append( &mut value, &mut first, self.digital_signature, "digitalSignature", ); append( &mut value, &mut first, self.non_repudiation, "nonRepudiation", ); append( &mut value, &mut first, self.key_encipherment, "keyEncipherment", ); append( &mut value, &mut first, self.data_encipherment, "dataEncipherment", ); append(&mut value, &mut first, self.key_agreement, "keyAgreement"); append(&mut value, &mut first, self.key_cert_sign, "keyCertSign"); append(&mut value, &mut first, self.crl_sign, "cRLSign"); append(&mut value, &mut first, self.encipher_only, "encipherOnly"); append(&mut value, &mut first, self.decipher_only, "decipherOnly"); X509Extension::new_nid(None, None, Nid::KEY_USAGE, &value) } } /// An extension consisting of a list of usages indicating purposes /// for which the certificate public key can be used for. pub struct ExtendedKeyUsage { critical: bool, server_auth: bool, client_auth: bool, code_signing: bool, email_protection: bool, time_stamping: bool, ms_code_ind: bool, ms_code_com: bool, ms_ctl_sign: bool, ms_sgc: bool, ms_efs: bool, ns_sgc: bool, other: Vec, } impl Default for ExtendedKeyUsage { fn default() -> ExtendedKeyUsage { ExtendedKeyUsage::new() } } impl ExtendedKeyUsage { /// Construct a new `ExtendedKeyUsage` extension. pub fn new() -> ExtendedKeyUsage { ExtendedKeyUsage { critical: false, server_auth: false, client_auth: false, code_signing: false, email_protection: false, time_stamping: false, ms_code_ind: false, ms_code_com: false, ms_ctl_sign: false, ms_sgc: false, ms_efs: false, ns_sgc: false, other: vec![], } } /// Sets the `critical` flag to `true`. The extension will be critical. pub fn critical(&mut self) -> &mut ExtendedKeyUsage { self.critical = true; self } /// Sets the `serverAuth` flag to `true`. pub fn server_auth(&mut self) -> &mut ExtendedKeyUsage { self.server_auth = true; self } /// Sets the `clientAuth` flag to `true`. pub fn client_auth(&mut self) -> &mut ExtendedKeyUsage { self.client_auth = true; self } /// Sets the `codeSigning` flag to `true`. 
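    ///
    /// A minimal sketch of the builder chain this flag is part of:
    ///
    /// ```
    /// use openssl::x509::extension::ExtendedKeyUsage;
    ///
    /// // Build an extendedKeyUsage extension for a TLS + code-signing certificate.
    /// let eku = ExtendedKeyUsage::new()
    ///     .server_auth()
    ///     .client_auth()
    ///     .code_signing()
    ///     .build()
    ///     .unwrap();
    /// ```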
pub fn code_signing(&mut self) -> &mut ExtendedKeyUsage { self.code_signing = true; self } /// Sets the `emailProtection` flag to `true`. pub fn email_protection(&mut self) -> &mut ExtendedKeyUsage { self.email_protection = true; self } /// Sets the `timeStamping` flag to `true`. pub fn time_stamping(&mut self) -> &mut ExtendedKeyUsage { self.time_stamping = true; self } /// Sets the `msCodeInd` flag to `true`. pub fn ms_code_ind(&mut self) -> &mut ExtendedKeyUsage { self.ms_code_ind = true; self } /// Sets the `msCodeCom` flag to `true`. pub fn ms_code_com(&mut self) -> &mut ExtendedKeyUsage { self.ms_code_com = true; self } /// Sets the `msCTLSign` flag to `true`. pub fn ms_ctl_sign(&mut self) -> &mut ExtendedKeyUsage { self.ms_ctl_sign = true; self } /// Sets the `msSGC` flag to `true`. pub fn ms_sgc(&mut self) -> &mut ExtendedKeyUsage { self.ms_sgc = true; self } /// Sets the `msEFS` flag to `true`. pub fn ms_efs(&mut self) -> &mut ExtendedKeyUsage { self.ms_efs = true; self } /// Sets the `nsSGC` flag to `true`. pub fn ns_sgc(&mut self) -> &mut ExtendedKeyUsage { self.ns_sgc = true; self } /// Sets a flag not already defined. pub fn other(&mut self, other: &str) -> &mut ExtendedKeyUsage { self.other.push(other.to_owned()); self } /// Return the `ExtendedKeyUsage` extension as an `X509Extension`. pub fn build(&self) -> Result { let mut value = String::new(); let mut first = true; append(&mut value, &mut first, self.critical, "critical"); append(&mut value, &mut first, self.server_auth, "serverAuth"); append(&mut value, &mut first, self.client_auth, "clientAuth"); append(&mut value, &mut first, self.code_signing, "codeSigning"); append( &mut value, &mut first, self.email_protection, "emailProtection", ); append(&mut value, &mut first, self.time_stamping, "timeStamping"); append(&mut value, &mut first, self.ms_code_ind, "msCodeInd"); append(&mut value, &mut first, self.ms_code_com, "msCodeCom"); append(&mut value, &mut first, self.ms_ctl_sign, "msCTLSign"); append(&mut value, &mut first, self.ms_sgc, "msSGC"); append(&mut value, &mut first, self.ms_efs, "msEFS"); append(&mut value, &mut first, self.ns_sgc, "nsSGC"); for other in &self.other { append(&mut value, &mut first, true, other); } X509Extension::new_nid(None, None, Nid::EXT_KEY_USAGE, &value) } } /// An extension that provides a means of identifying certificates that contain a /// particular public key. pub struct SubjectKeyIdentifier { critical: bool, } impl Default for SubjectKeyIdentifier { fn default() -> SubjectKeyIdentifier { SubjectKeyIdentifier::new() } } impl SubjectKeyIdentifier { /// Construct a new `SubjectKeyIdentifier` extension. pub fn new() -> SubjectKeyIdentifier { SubjectKeyIdentifier { critical: false } } /// Sets the `critical` flag to `true`. The extension will be critical. pub fn critical(&mut self) -> &mut SubjectKeyIdentifier { self.critical = true; self } /// Return a `SubjectKeyIdentifier` extension as an `X509Extension`. pub fn build(&self, ctx: &X509v3Context<'_>) -> Result { let mut value = String::new(); let mut first = true; append(&mut value, &mut first, self.critical, "critical"); append(&mut value, &mut first, true, "hash"); X509Extension::new_nid(None, Some(ctx), Nid::SUBJECT_KEY_IDENTIFIER, &value) } } /// An extension that provides a means of identifying the public key corresponding /// to the private key used to sign a CRL. 
pub struct AuthorityKeyIdentifier { critical: bool, keyid: Option, issuer: Option, } impl Default for AuthorityKeyIdentifier { fn default() -> AuthorityKeyIdentifier { AuthorityKeyIdentifier::new() } } impl AuthorityKeyIdentifier { /// Construct a new `AuthorityKeyIdentifier` extension. pub fn new() -> AuthorityKeyIdentifier { AuthorityKeyIdentifier { critical: false, keyid: None, issuer: None, } } /// Sets the `critical` flag to `true`. The extension will be critical. pub fn critical(&mut self) -> &mut AuthorityKeyIdentifier { self.critical = true; self } /// Sets the `keyid` flag. pub fn keyid(&mut self, always: bool) -> &mut AuthorityKeyIdentifier { self.keyid = Some(always); self } /// Sets the `issuer` flag. pub fn issuer(&mut self, always: bool) -> &mut AuthorityKeyIdentifier { self.issuer = Some(always); self } /// Return a `AuthorityKeyIdentifier` extension as an `X509Extension`. pub fn build(&self, ctx: &X509v3Context<'_>) -> Result { let mut value = String::new(); let mut first = true; append(&mut value, &mut first, self.critical, "critical"); match self.keyid { Some(true) => append(&mut value, &mut first, true, "keyid:always"), Some(false) => append(&mut value, &mut first, true, "keyid"), None => {} } match self.issuer { Some(true) => append(&mut value, &mut first, true, "issuer:always"), Some(false) => append(&mut value, &mut first, true, "issuer"), None => {} } X509Extension::new_nid(None, Some(ctx), Nid::AUTHORITY_KEY_IDENTIFIER, &value) } } /// An extension that allows additional identities to be bound to the subject /// of the certificate. pub struct SubjectAlternativeName { critical: bool, names: Vec, } impl Default for SubjectAlternativeName { fn default() -> SubjectAlternativeName { SubjectAlternativeName::new() } } impl SubjectAlternativeName { /// Construct a new `SubjectAlternativeName` extension. pub fn new() -> SubjectAlternativeName { SubjectAlternativeName { critical: false, names: vec![], } } /// Sets the `critical` flag to `true`. The extension will be critical. pub fn critical(&mut self) -> &mut SubjectAlternativeName { self.critical = true; self } /// Sets the `email` flag. pub fn email(&mut self, email: &str) -> &mut SubjectAlternativeName { self.names.push(format!("email:{}", email)); self } /// Sets the `uri` flag. pub fn uri(&mut self, uri: &str) -> &mut SubjectAlternativeName { self.names.push(format!("URI:{}", uri)); self } /// Sets the `dns` flag. pub fn dns(&mut self, dns: &str) -> &mut SubjectAlternativeName { self.names.push(format!("DNS:{}", dns)); self } /// Sets the `rid` flag. pub fn rid(&mut self, rid: &str) -> &mut SubjectAlternativeName { self.names.push(format!("RID:{}", rid)); self } /// Sets the `ip` flag. pub fn ip(&mut self, ip: &str) -> &mut SubjectAlternativeName { self.names.push(format!("IP:{}", ip)); self } /// Sets the `dirName` flag. pub fn dir_name(&mut self, dir_name: &str) -> &mut SubjectAlternativeName { self.names.push(format!("dirName:{}", dir_name)); self } /// Sets the `otherName` flag. pub fn other_name(&mut self, other_name: &str) -> &mut SubjectAlternativeName { self.names.push(format!("otherName:{}", other_name)); self } /// Return a `SubjectAlternativeName` extension as an `X509Extension`. 
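    ///
    /// A minimal sketch, using an `X509` builder to supply the required
    /// `X509v3Context`:
    ///
    /// ```no_run
    /// use openssl::x509::X509;
    /// use openssl::x509::extension::SubjectAlternativeName;
    ///
    /// let builder = X509::builder().unwrap();
    /// // Bind an extra DNS name and an IP address to the certificate subject.
    /// let san = SubjectAlternativeName::new()
    ///     .dns("example.com")
    ///     .ip("127.0.0.1")
    ///     .build(&builder.x509v3_context(None, None))
    ///     .unwrap();
    /// ```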
pub fn build(&self, ctx: &X509v3Context<'_>) -> Result { let mut value = String::new(); let mut first = true; append(&mut value, &mut first, self.critical, "critical"); for name in &self.names { append(&mut value, &mut first, true, name); } X509Extension::new_nid(None, Some(ctx), Nid::SUBJECT_ALT_NAME, &value) } } fn append(value: &mut String, first: &mut bool, should: bool, element: &str) { if !should { return; } if !*first { value.push(','); } *first = false; value.push_str(element); } vendor/openssl/src/x509/verify.rs0000664000175000017500000001374614160055207017513 0ustar mwhudsonmwhudsonuse bitflags::bitflags; use foreign_types::ForeignTypeRef; use libc::{c_uint, c_ulong}; use std::net::IpAddr; use crate::cvt; use crate::error::ErrorStack; bitflags! { /// Flags used to check an `X509` certificate. pub struct X509CheckFlags: c_uint { const ALWAYS_CHECK_SUBJECT = ffi::X509_CHECK_FLAG_ALWAYS_CHECK_SUBJECT; const NO_WILDCARDS = ffi::X509_CHECK_FLAG_NO_WILDCARDS; const NO_PARTIAL_WILDCARDS = ffi::X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS; const MULTI_LABEL_WILDCARDS = ffi::X509_CHECK_FLAG_MULTI_LABEL_WILDCARDS; const SINGLE_LABEL_SUBDOMAINS = ffi::X509_CHECK_FLAG_SINGLE_LABEL_SUBDOMAINS; /// Requires OpenSSL 1.1.0 or newer. #[cfg(any(ossl110))] const NEVER_CHECK_SUBJECT = ffi::X509_CHECK_FLAG_NEVER_CHECK_SUBJECT; #[deprecated(since = "0.10.6", note = "renamed to NO_WILDCARDS")] const FLAG_NO_WILDCARDS = ffi::X509_CHECK_FLAG_NO_WILDCARDS; } } bitflags! { /// Flags used to verify an `X509` certificate chain. pub struct X509VerifyFlags: c_ulong { const CB_ISSUER_CHECK = ffi::X509_V_FLAG_CB_ISSUER_CHECK; const USE_CHECK_TIME = ffi::X509_V_FLAG_USE_CHECK_TIME; const CRL_CHECK = ffi::X509_V_FLAG_CRL_CHECK; const CRL_CHECK_ALL = ffi::X509_V_FLAG_CRL_CHECK_ALL; const IGNORE_CRITICAL = ffi::X509_V_FLAG_IGNORE_CRITICAL; const X509_STRICT = ffi::X509_V_FLAG_X509_STRICT; const ALLOW_PROXY_CERTS = ffi::X509_V_FLAG_ALLOW_PROXY_CERTS; const POLICY_CHECK = ffi::X509_V_FLAG_POLICY_CHECK; const EXPLICIT_POLICY = ffi::X509_V_FLAG_EXPLICIT_POLICY; const INHIBIT_ANY = ffi::X509_V_FLAG_INHIBIT_ANY; const INHIBIT_MAP = ffi::X509_V_FLAG_INHIBIT_MAP; const NOTIFY_POLICY = ffi::X509_V_FLAG_NOTIFY_POLICY; const EXTENDED_CRL_SUPPORT = ffi::X509_V_FLAG_EXTENDED_CRL_SUPPORT; const USE_DELTAS = ffi::X509_V_FLAG_USE_DELTAS; const CHECK_SS_SIGNATURE = ffi::X509_V_FLAG_CHECK_SS_SIGNATURE; #[cfg(ossl102)] const TRUSTED_FIRST = ffi::X509_V_FLAG_TRUSTED_FIRST; #[cfg(ossl102)] const SUITEB_128_LOS_ONLY = ffi::X509_V_FLAG_SUITEB_128_LOS_ONLY; #[cfg(ossl102)] const SUITEB_192_LOS = ffi::X509_V_FLAG_SUITEB_128_LOS; #[cfg(ossl102)] const SUITEB_128_LOS = ffi::X509_V_FLAG_SUITEB_192_LOS; #[cfg(ossl102)] const PARTIAL_CHAIN = ffi::X509_V_FLAG_PARTIAL_CHAIN; #[cfg(ossl110)] const NO_ALT_CHAINS = ffi::X509_V_FLAG_NO_ALT_CHAINS; #[cfg(ossl110)] const NO_CHECK_TIME = ffi::X509_V_FLAG_NO_CHECK_TIME; } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_VERIFY_PARAM; fn drop = ffi::X509_VERIFY_PARAM_free; /// Adjust parameters associated with certificate verification. pub struct X509VerifyParam; /// Reference to `X509VerifyParam`. pub struct X509VerifyParamRef; } impl X509VerifyParamRef { /// Set the host flags. /// /// This corresponds to [`X509_VERIFY_PARAM_set_hostflags`]. 
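    ///
    /// # Examples
    ///
    /// A minimal sketch, assuming the parameters are reached through
    /// `Ssl::param_mut` (defined in the `ssl` module, not in this file):
    ///
    /// ```no_run
    /// use openssl::ssl::{Ssl, SslContext, SslMethod};
    /// use openssl::x509::verify::X509CheckFlags;
    ///
    /// let ctx = SslContext::builder(SslMethod::tls()).unwrap().build();
    /// let mut ssl = Ssl::new(&ctx).unwrap();
    /// // Disallow wildcard matches against partial labels when checking hostnames.
    /// ssl.param_mut()
    ///     .set_hostflags(X509CheckFlags::NO_PARTIAL_WILDCARDS);
    /// ```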
/// /// [`X509_VERIFY_PARAM_set_hostflags`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_VERIFY_PARAM_set_hostflags.html pub fn set_hostflags(&mut self, hostflags: X509CheckFlags) { unsafe { ffi::X509_VERIFY_PARAM_set_hostflags(self.as_ptr(), hostflags.bits); } } /// Set verification flags. /// /// This corresponds to [`X509_VERIFY_PARAM_set_flags`]. /// /// [`X509_VERIFY_PARAM_set_flags`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_VERIFY_PARAM_set_flags.html pub fn set_flags(&mut self, flags: X509VerifyFlags) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_VERIFY_PARAM_set_flags(self.as_ptr(), flags.bits)).map(|_| ()) } } /// Clear verification flags. /// /// This corresponds to [`X509_VERIFY_PARAM_clear_flags`]. /// /// [`X509_VERIFY_PARAM_clear_flags`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_VERIFY_PARAM_clear_flags.html pub fn clear_flags(&mut self, flags: X509VerifyFlags) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_VERIFY_PARAM_clear_flags( self.as_ptr(), flags.bits, )) .map(|_| ()) } } /// Gets verification flags. /// /// This corresponds to [`X509_VERIFY_PARAM_get_flags`]. /// /// [`X509_VERIFY_PARAM_get_flags`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_VERIFY_PARAM_get_flags.html pub fn flags(&mut self) -> X509VerifyFlags { let bits = unsafe { ffi::X509_VERIFY_PARAM_get_flags(self.as_ptr()) }; X509VerifyFlags { bits } } /// Set the expected DNS hostname. /// /// This corresponds to [`X509_VERIFY_PARAM_set1_host`]. /// /// [`X509_VERIFY_PARAM_set1_host`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_VERIFY_PARAM_set1_host.html pub fn set_host(&mut self, host: &str) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_VERIFY_PARAM_set1_host( self.as_ptr(), host.as_ptr() as *const _, host.len(), )) .map(|_| ()) } } /// Set the expected IPv4 or IPv6 address. /// /// This corresponds to [`X509_VERIFY_PARAM_set1_ip`]. /// /// [`X509_VERIFY_PARAM_set1_ip`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_VERIFY_PARAM_set1_ip.html pub fn set_ip(&mut self, ip: IpAddr) -> Result<(), ErrorStack> { unsafe { let mut buf = [0; 16]; let len = match ip { IpAddr::V4(addr) => { buf[..4].copy_from_slice(&addr.octets()); 4 } IpAddr::V6(addr) => { buf.copy_from_slice(&addr.octets()); 16 } }; cvt(ffi::X509_VERIFY_PARAM_set1_ip( self.as_ptr(), buf.as_ptr() as *const _, len, )) .map(|_| ()) } } } vendor/openssl/src/x509/mod.rs0000664000175000017500000015217014172417313016764 0ustar mwhudsonmwhudson//! The standard defining the format of public key certificates. //! //! An `X509` certificate binds an identity to a public key, and is either //! signed by a certificate authority (CA) or self-signed. An entity that gets //! a hold of a certificate can both verify your identity (via a CA) and encrypt //! data with the included public key. `X509` certificates are used in many //! Internet protocols, including SSL/TLS, which is the basis for HTTPS, //! the secure protocol for browsing the web. 
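//!
//! # Examples
//!
//! A minimal sketch of issuing a self-signed certificate with the builders in
//! this module (error handling collapsed into `unwrap`):
//!
//! ```no_run
//! use openssl::asn1::Asn1Time;
//! use openssl::hash::MessageDigest;
//! use openssl::pkey::PKey;
//! use openssl::rsa::Rsa;
//! use openssl::x509::{X509, X509NameBuilder};
//!
//! // Generate a key pair for the certificate.
//! let pkey = PKey::from_rsa(Rsa::generate(2048).unwrap()).unwrap();
//!
//! // The subject and issuer are the same name for a self-signed certificate.
//! let mut name = X509NameBuilder::new().unwrap();
//! name.append_entry_by_text("CN", "example.com").unwrap();
//! let name = name.build();
//!
//! let mut builder = X509::builder().unwrap();
//! builder.set_version(2).unwrap();
//! builder.set_subject_name(&name).unwrap();
//! builder.set_issuer_name(&name).unwrap();
//! builder.set_pubkey(&pkey).unwrap();
//! builder.set_not_before(&Asn1Time::days_from_now(0).unwrap()).unwrap();
//! builder.set_not_after(&Asn1Time::days_from_now(365).unwrap()).unwrap();
//! builder.sign(&pkey, MessageDigest::sha256()).unwrap();
//! let cert = builder.build();
//! ```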
use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::{c_int, c_long}; use std::error::Error; use std::ffi::{CStr, CString}; use std::fmt; use std::marker::PhantomData; use std::mem; use std::path::Path; use std::ptr; use std::slice; use std::str; use crate::asn1::{ Asn1BitStringRef, Asn1IntegerRef, Asn1ObjectRef, Asn1StringRef, Asn1TimeRef, Asn1Type, }; use crate::bio::MemBioSlice; use crate::conf::ConfRef; use crate::error::ErrorStack; use crate::ex_data::Index; use crate::hash::{DigestBytes, MessageDigest}; use crate::nid::Nid; use crate::pkey::{HasPrivate, HasPublic, PKey, PKeyRef, Public}; use crate::ssl::SslRef; use crate::stack::{Stack, StackRef, Stackable}; use crate::string::OpensslString; use crate::util::{ForeignTypeExt, ForeignTypeRefExt}; use crate::{cvt, cvt_n, cvt_p}; #[cfg(any(ossl102, libressl261))] pub mod verify; pub mod extension; pub mod store; #[cfg(test)] mod tests; foreign_type_and_impl_send_sync! { type CType = ffi::X509_STORE_CTX; fn drop = ffi::X509_STORE_CTX_free; /// An `X509` certificate store context. pub struct X509StoreContext; /// Reference to `X509StoreContext`. pub struct X509StoreContextRef; } impl X509StoreContext { /// Returns the index which can be used to obtain a reference to the `Ssl` associated with a /// context. pub fn ssl_idx() -> Result, ErrorStack> { unsafe { cvt_n(ffi::SSL_get_ex_data_X509_STORE_CTX_idx()).map(|idx| Index::from_raw(idx)) } } /// Creates a new `X509StoreContext` instance. /// /// This corresponds to [`X509_STORE_CTX_new`]. /// /// [`X509_STORE_CTX_new`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_new.html pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::X509_STORE_CTX_new()).map(X509StoreContext) } } } impl X509StoreContextRef { /// Returns application data pertaining to an `X509` store context. /// /// This corresponds to [`X509_STORE_CTX_get_ex_data`]. /// /// [`X509_STORE_CTX_get_ex_data`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_STORE_CTX_get_ex_data.html pub fn ex_data(&self, index: Index) -> Option<&T> { unsafe { let data = ffi::X509_STORE_CTX_get_ex_data(self.as_ptr(), index.as_raw()); if data.is_null() { None } else { Some(&*(data as *const T)) } } } /// Returns the error code of the context. /// /// This corresponds to [`X509_STORE_CTX_get_error`]. /// /// [`X509_STORE_CTX_get_error`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_get_error.html pub fn error(&self) -> X509VerifyResult { unsafe { X509VerifyResult::from_raw(ffi::X509_STORE_CTX_get_error(self.as_ptr())) } } /// Initializes this context with the given certificate, certificates chain and certificate /// store. After initializing the context, the `with_context` closure is called with the prepared /// context. As long as the closure is running, the context stays initialized and can be used /// to e.g. verify a certificate. The context will be cleaned up, after the closure finished. /// /// * `trust` - The certificate store with the trusted certificates. /// * `cert` - The certificate that should be verified. /// * `cert_chain` - The certificates chain. /// * `with_context` - The closure that is called with the initialized context. /// /// This corresponds to [`X509_STORE_CTX_init`] before calling `with_context` and to /// [`X509_STORE_CTX_cleanup`] after calling `with_context`. 
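    ///
    /// # Examples
    ///
    /// A minimal sketch of verifying a certificate against a trust store (the
    /// PEM byte strings are placeholders):
    ///
    /// ```no_run
    /// use openssl::stack::Stack;
    /// use openssl::x509::store::X509StoreBuilder;
    /// use openssl::x509::{X509, X509StoreContext};
    ///
    /// let ca = X509::from_pem(b"-----BEGIN CERTIFICATE-----\n...").unwrap();
    /// let cert = X509::from_pem(b"-----BEGIN CERTIFICATE-----\n...").unwrap();
    ///
    /// // Build a store containing the trusted CA certificate.
    /// let mut store = X509StoreBuilder::new().unwrap();
    /// store.add_cert(ca).unwrap();
    /// let store = store.build();
    ///
    /// // No intermediate certificates in this example.
    /// let chain = Stack::<X509>::new().unwrap();
    /// let mut ctx = X509StoreContext::new().unwrap();
    /// let valid = ctx.init(&store, &cert, &chain, |c| c.verify_cert()).unwrap();
    /// ```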
/// /// [`X509_STORE_CTX_init`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_STORE_CTX_init.html /// [`X509_STORE_CTX_cleanup`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_STORE_CTX_cleanup.html pub fn init( &mut self, trust: &store::X509StoreRef, cert: &X509Ref, cert_chain: &StackRef, with_context: F, ) -> Result where F: FnOnce(&mut X509StoreContextRef) -> Result, { struct Cleanup<'a>(&'a mut X509StoreContextRef); impl<'a> Drop for Cleanup<'a> { fn drop(&mut self) { unsafe { ffi::X509_STORE_CTX_cleanup(self.0.as_ptr()); } } } unsafe { cvt(ffi::X509_STORE_CTX_init( self.as_ptr(), trust.as_ptr(), cert.as_ptr(), cert_chain.as_ptr(), ))?; let cleanup = Cleanup(self); with_context(cleanup.0) } } /// Verifies the stored certificate. /// /// Returns `true` if verification succeeds. The `error` method will return the specific /// validation error if the certificate was not valid. /// /// This will only work inside of a call to `init`. /// /// This corresponds to [`X509_verify_cert`]. /// /// [`X509_verify_cert`]: https://www.openssl.org/docs/man1.0.2/crypto/X509_verify_cert.html pub fn verify_cert(&mut self) -> Result { unsafe { cvt_n(ffi::X509_verify_cert(self.as_ptr())).map(|n| n != 0) } } /// Set the error code of the context. /// /// This corresponds to [`X509_STORE_CTX_set_error`]. /// /// [`X509_STORE_CTX_set_error`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_set_error.html pub fn set_error(&mut self, result: X509VerifyResult) { unsafe { ffi::X509_STORE_CTX_set_error(self.as_ptr(), result.as_raw()); } } /// Returns a reference to the certificate which caused the error or None if /// no certificate is relevant to the error. /// /// This corresponds to [`X509_STORE_CTX_get_current_cert`]. /// /// [`X509_STORE_CTX_get_current_cert`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_get_current_cert.html pub fn current_cert(&self) -> Option<&X509Ref> { unsafe { let ptr = ffi::X509_STORE_CTX_get_current_cert(self.as_ptr()); X509Ref::from_const_ptr_opt(ptr) } } /// Returns a non-negative integer representing the depth in the certificate /// chain where the error occurred. If it is zero it occurred in the end /// entity certificate, one if it is the certificate which signed the end /// entity certificate and so on. /// /// This corresponds to [`X509_STORE_CTX_get_error_depth`]. /// /// [`X509_STORE_CTX_get_error_depth`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_get_error_depth.html pub fn error_depth(&self) -> u32 { unsafe { ffi::X509_STORE_CTX_get_error_depth(self.as_ptr()) as u32 } } /// Returns a reference to a complete valid `X509` certificate chain. /// /// This corresponds to [`X509_STORE_CTX_get0_chain`]. /// /// [`X509_STORE_CTX_get0_chain`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_STORE_CTX_get0_chain.html pub fn chain(&self) -> Option<&StackRef> { unsafe { let chain = X509_STORE_CTX_get0_chain(self.as_ptr()); if chain.is_null() { None } else { Some(StackRef::from_ptr(chain)) } } } } /// A builder used to construct an `X509`. pub struct X509Builder(X509); impl X509Builder { /// Creates a new builder. pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::X509_new()).map(|p| X509Builder(X509(p))) } } /// Sets the notAfter constraint on the certificate. pub fn set_not_after(&mut self, not_after: &Asn1TimeRef) -> Result<(), ErrorStack> { unsafe { cvt(X509_set1_notAfter(self.0.as_ptr(), not_after.as_ptr())).map(|_| ()) } } /// Sets the notBefore constraint on the certificate. 
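    ///
    /// A minimal sketch using `Asn1Time` to anchor the start of the validity
    /// window:
    ///
    /// ```
    /// use openssl::asn1::Asn1Time;
    /// use openssl::x509::X509;
    ///
    /// let mut builder = X509::builder().unwrap();
    /// // The certificate becomes valid immediately.
    /// let not_before = Asn1Time::days_from_now(0).unwrap();
    /// builder.set_not_before(&not_before).unwrap();
    /// ```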
pub fn set_not_before(&mut self, not_before: &Asn1TimeRef) -> Result<(), ErrorStack> { unsafe { cvt(X509_set1_notBefore(self.0.as_ptr(), not_before.as_ptr())).map(|_| ()) } } /// Sets the version of the certificate. /// /// Note that the version is zero-indexed; that is, a certificate corresponding to version 3 of /// the X.509 standard should pass `2` to this method. pub fn set_version(&mut self, version: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_set_version(self.0.as_ptr(), version.into())).map(|_| ()) } } /// Sets the serial number of the certificate. pub fn set_serial_number(&mut self, serial_number: &Asn1IntegerRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_set_serialNumber( self.0.as_ptr(), serial_number.as_ptr(), )) .map(|_| ()) } } /// Sets the issuer name of the certificate. pub fn set_issuer_name(&mut self, issuer_name: &X509NameRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_set_issuer_name( self.0.as_ptr(), issuer_name.as_ptr(), )) .map(|_| ()) } } /// Sets the subject name of the certificate. /// /// When building certificates, the `C`, `ST`, and `O` options are common when using the openssl command line tools. /// The `CN` field is used for the common name, such as a DNS name. /// /// ``` /// use openssl::x509::{X509, X509NameBuilder}; /// /// let mut x509_name = openssl::x509::X509NameBuilder::new().unwrap(); /// x509_name.append_entry_by_text("C", "US").unwrap(); /// x509_name.append_entry_by_text("ST", "CA").unwrap(); /// x509_name.append_entry_by_text("O", "Some organization").unwrap(); /// x509_name.append_entry_by_text("CN", "www.example.com").unwrap(); /// let x509_name = x509_name.build(); /// /// let mut x509 = openssl::x509::X509::builder().unwrap(); /// x509.set_subject_name(&x509_name).unwrap(); /// ``` pub fn set_subject_name(&mut self, subject_name: &X509NameRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_set_subject_name( self.0.as_ptr(), subject_name.as_ptr(), )) .map(|_| ()) } } /// Sets the public key associated with the certificate. pub fn set_pubkey(&mut self, key: &PKeyRef) -> Result<(), ErrorStack> where T: HasPublic, { unsafe { cvt(ffi::X509_set_pubkey(self.0.as_ptr(), key.as_ptr())).map(|_| ()) } } /// Returns a context object which is needed to create certain X509 extension values. /// /// Set `issuer` to `None` if the certificate will be self-signed. pub fn x509v3_context<'a>( &'a self, issuer: Option<&'a X509Ref>, conf: Option<&'a ConfRef>, ) -> X509v3Context<'a> { unsafe { let mut ctx = mem::zeroed(); let issuer = match issuer { Some(issuer) => issuer.as_ptr(), None => self.0.as_ptr(), }; let subject = self.0.as_ptr(); ffi::X509V3_set_ctx( &mut ctx, issuer, subject, ptr::null_mut(), ptr::null_mut(), 0, ); // nodb case taken care of since we zeroed ctx above if let Some(conf) = conf { ffi::X509V3_set_nconf(&mut ctx, conf.as_ptr()); } X509v3Context(ctx, PhantomData) } } /// Adds an X509 extension value to the certificate. /// /// This works just as `append_extension` except it takes ownership of the `X509Extension`. pub fn append_extension(&mut self, extension: X509Extension) -> Result<(), ErrorStack> { self.append_extension2(&extension) } /// Adds an X509 extension value to the certificate. /// /// This corresponds to [`X509_add_ext`]. 
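    ///
    /// # Examples
    ///
    /// A minimal sketch appending a `BasicConstraints` extension:
    ///
    /// ```
    /// use openssl::x509::X509;
    /// use openssl::x509::extension::BasicConstraints;
    ///
    /// let mut builder = X509::builder().unwrap();
    /// // Mark the certificate as a CA certificate.
    /// let bc = BasicConstraints::new().critical().ca().build().unwrap();
    /// builder.append_extension2(&bc).unwrap();
    /// ```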
/// /// [`X509_add_ext`]: https://www.openssl.org/docs/man1.1.0/man3/X509_get_ext.html pub fn append_extension2(&mut self, extension: &X509ExtensionRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_add_ext(self.0.as_ptr(), extension.as_ptr(), -1))?; Ok(()) } } /// Signs the certificate with a private key. pub fn sign(&mut self, key: &PKeyRef, hash: MessageDigest) -> Result<(), ErrorStack> where T: HasPrivate, { unsafe { cvt(ffi::X509_sign(self.0.as_ptr(), key.as_ptr(), hash.as_ptr())).map(|_| ()) } } /// Consumes the builder, returning the certificate. pub fn build(self) -> X509 { self.0 } } foreign_type_and_impl_send_sync! { type CType = ffi::X509; fn drop = ffi::X509_free; /// An `X509` public key certificate. pub struct X509; /// Reference to `X509`. pub struct X509Ref; } impl X509Ref { /// Returns this certificate's subject name. /// /// This corresponds to [`X509_get_subject_name`]. /// /// [`X509_get_subject_name`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_subject_name.html pub fn subject_name(&self) -> &X509NameRef { unsafe { let name = ffi::X509_get_subject_name(self.as_ptr()); X509NameRef::from_const_ptr_opt(name).expect("subject name must not be null") } } /// Returns the hash of the certificates subject /// /// This corresponds to `X509_subject_name_hash`. pub fn subject_name_hash(&self) -> u32 { unsafe { ffi::X509_subject_name_hash(self.as_ptr()) as u32 } } /// Returns this certificate's issuer name. /// /// This corresponds to [`X509_get_issuer_name`]. /// /// [`X509_get_issuer_name`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_subject_name.html pub fn issuer_name(&self) -> &X509NameRef { unsafe { let name = ffi::X509_get_issuer_name(self.as_ptr()); X509NameRef::from_const_ptr_opt(name).expect("issuer name must not be null") } } /// Returns this certificate's subject alternative name entries, if they exist. /// /// This corresponds to [`X509_get_ext_d2i`] called with `NID_subject_alt_name`. /// /// [`X509_get_ext_d2i`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_ext_d2i.html pub fn subject_alt_names(&self) -> Option> { unsafe { let stack = ffi::X509_get_ext_d2i( self.as_ptr(), ffi::NID_subject_alt_name, ptr::null_mut(), ptr::null_mut(), ); Stack::from_ptr_opt(stack as *mut _) } } /// Returns this certificate's issuer alternative name entries, if they exist. /// /// This corresponds to [`X509_get_ext_d2i`] called with `NID_issuer_alt_name`. /// /// [`X509_get_ext_d2i`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_ext_d2i.html pub fn issuer_alt_names(&self) -> Option> { unsafe { let stack = ffi::X509_get_ext_d2i( self.as_ptr(), ffi::NID_issuer_alt_name, ptr::null_mut(), ptr::null_mut(), ); Stack::from_ptr_opt(stack as *mut _) } } /// Returns this certificate's [`authority information access`] entries, if they exist. /// /// This corresponds to [`X509_get_ext_d2i`] called with `NID_info_access`. /// /// [`X509_get_ext_d2i`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_ext_d2i.html /// [`authority information access`]: https://tools.ietf.org/html/rfc5280#section-4.2.2.1 pub fn authority_info(&self) -> Option> { unsafe { let stack = ffi::X509_get_ext_d2i( self.as_ptr(), ffi::NID_info_access, ptr::null_mut(), ptr::null_mut(), ); Stack::from_ptr_opt(stack as *mut _) } } pub fn public_key(&self) -> Result, ErrorStack> { unsafe { let pkey = cvt_p(ffi::X509_get_pubkey(self.as_ptr()))?; Ok(PKey::from_ptr(pkey)) } } /// Returns a digest of the DER representation of the certificate. /// /// This corresponds to [`X509_digest`]. 
/// /// [`X509_digest`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_digest.html pub fn digest(&self, hash_type: MessageDigest) -> Result { unsafe { let mut digest = DigestBytes { buf: [0; ffi::EVP_MAX_MD_SIZE as usize], len: ffi::EVP_MAX_MD_SIZE as usize, }; let mut len = ffi::EVP_MAX_MD_SIZE; cvt(ffi::X509_digest( self.as_ptr(), hash_type.as_ptr(), digest.buf.as_mut_ptr() as *mut _, &mut len, ))?; digest.len = len as usize; Ok(digest) } } #[deprecated(since = "0.10.9", note = "renamed to digest")] pub fn fingerprint(&self, hash_type: MessageDigest) -> Result, ErrorStack> { self.digest(hash_type).map(|b| b.to_vec()) } /// Returns the certificate's Not After validity period. pub fn not_after(&self) -> &Asn1TimeRef { unsafe { let date = X509_getm_notAfter(self.as_ptr()); Asn1TimeRef::from_const_ptr_opt(date).expect("not_after must not be null") } } /// Returns the certificate's Not Before validity period. pub fn not_before(&self) -> &Asn1TimeRef { unsafe { let date = X509_getm_notBefore(self.as_ptr()); Asn1TimeRef::from_const_ptr_opt(date).expect("not_before must not be null") } } /// Returns the certificate's signature pub fn signature(&self) -> &Asn1BitStringRef { unsafe { let mut signature = ptr::null(); X509_get0_signature(&mut signature, ptr::null_mut(), self.as_ptr()); Asn1BitStringRef::from_const_ptr_opt(signature).expect("signature must not be null") } } /// Returns the certificate's signature algorithm. pub fn signature_algorithm(&self) -> &X509AlgorithmRef { unsafe { let mut algor = ptr::null(); X509_get0_signature(ptr::null_mut(), &mut algor, self.as_ptr()); X509AlgorithmRef::from_const_ptr_opt(algor) .expect("signature algorithm must not be null") } } /// Returns the list of OCSP responder URLs specified in the certificate's Authority Information /// Access field. pub fn ocsp_responders(&self) -> Result, ErrorStack> { unsafe { cvt_p(ffi::X509_get1_ocsp(self.as_ptr())).map(|p| Stack::from_ptr(p)) } } /// Checks that this certificate issued `subject`. pub fn issued(&self, subject: &X509Ref) -> X509VerifyResult { unsafe { let r = ffi::X509_check_issued(self.as_ptr(), subject.as_ptr()); X509VerifyResult::from_raw(r) } } /// Returns certificate version. If this certificate has no explicit version set, it defaults to /// version 1. /// /// Note that `0` return value stands for version 1, `1` for version 2 and so on. /// /// This corresponds to [`X509_get_version`]. /// /// [`X509_get_version`]: https://www.openssl.org/docs/man1.1.1/man3/X509_get_version.html #[cfg(ossl110)] pub fn version(&self) -> i32 { // Covered with `x509_ref_version()`, `x509_ref_version_no_version_set()` tests unsafe { ffi::X509_get_version(self.as_ptr()) as i32 } } /// Check if the certificate is signed using the given public key. /// /// Only the signature is checked: no other checks (such as certificate chain validity) /// are performed. /// /// Returns `true` if verification succeeds. /// /// This corresponds to [`X509_verify"]. /// /// [`X509_verify`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_verify.html pub fn verify(&self, key: &PKeyRef) -> Result where T: HasPublic, { unsafe { cvt_n(ffi::X509_verify(self.as_ptr(), key.as_ptr())).map(|n| n != 0) } } /// Returns this certificate's serial number. /// /// This corresponds to [`X509_get_serialNumber`]. 
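    ///
    /// # Examples
    ///
    /// A minimal sketch printing the serial as hex (the PEM bytes are a
    /// placeholder):
    ///
    /// ```no_run
    /// use openssl::x509::X509;
    ///
    /// let cert = X509::from_pem(b"-----BEGIN CERTIFICATE-----\n...").unwrap();
    /// // The serial is an ASN.1 integer; convert it to a BigNum to render it as hex.
    /// let serial = cert.serial_number().to_bn().unwrap();
    /// let hex = serial.to_hex_str().unwrap();
    /// println!("serial = {}", &*hex);
    /// ```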
/// /// [`X509_get_serialNumber`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_get_serialNumber.html pub fn serial_number(&self) -> &Asn1IntegerRef { unsafe { let r = ffi::X509_get_serialNumber(self.as_ptr()); Asn1IntegerRef::from_const_ptr_opt(r).expect("serial number must not be null") } } to_pem! { /// Serializes the certificate into a PEM-encoded X509 structure. /// /// The output will have a header of `-----BEGIN CERTIFICATE-----`. /// /// This corresponds to [`PEM_write_bio_X509`]. /// /// [`PEM_write_bio_X509`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_write_bio_X509.html to_pem, ffi::PEM_write_bio_X509 } to_der! { /// Serializes the certificate into a DER-encoded X509 structure. /// /// This corresponds to [`i2d_X509`]. /// /// [`i2d_X509`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_X509.html to_der, ffi::i2d_X509 } } impl ToOwned for X509Ref { type Owned = X509; fn to_owned(&self) -> X509 { unsafe { X509_up_ref(self.as_ptr()); X509::from_ptr(self.as_ptr()) } } } impl X509 { /// Returns a new builder. pub fn builder() -> Result { X509Builder::new() } from_pem! { /// Deserializes a PEM-encoded X509 structure. /// /// The input should have a header of `-----BEGIN CERTIFICATE-----`. /// /// This corresponds to [`PEM_read_bio_X509`]. /// /// [`PEM_read_bio_X509`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_X509.html from_pem, X509, ffi::PEM_read_bio_X509 } from_der! { /// Deserializes a DER-encoded X509 structure. /// /// This corresponds to [`d2i_X509`]. /// /// [`d2i_X509`]: https://www.openssl.org/docs/manmaster/man3/d2i_X509.html from_der, X509, ffi::d2i_X509 } /// Deserializes a list of PEM-formatted certificates. pub fn stack_from_pem(pem: &[u8]) -> Result, ErrorStack> { unsafe { ffi::init(); let bio = MemBioSlice::new(pem)?; let mut certs = vec![]; loop { let r = ffi::PEM_read_bio_X509(bio.as_ptr(), ptr::null_mut(), None, ptr::null_mut()); if r.is_null() { let err = ffi::ERR_peek_last_error(); if ffi::ERR_GET_LIB(err) == ffi::ERR_LIB_PEM && ffi::ERR_GET_REASON(err) == ffi::PEM_R_NO_START_LINE { ffi::ERR_clear_error(); break; } return Err(ErrorStack::get()); } else { certs.push(X509(r)); } } Ok(certs) } } } impl Clone for X509 { fn clone(&self) -> X509 { X509Ref::to_owned(self) } } impl fmt::Debug for X509 { fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { let serial = match &self.serial_number().to_bn() { Ok(bn) => match bn.to_hex_str() { Ok(hex) => hex.to_string(), Err(_) => "".to_string(), }, Err(_) => "".to_string(), }; let mut debug_struct = formatter.debug_struct("X509"); debug_struct.field("serial_number", &serial); debug_struct.field("signature_algorithm", &self.signature_algorithm().object()); debug_struct.field("issuer", &self.issuer_name()); debug_struct.field("subject", &self.subject_name()); if let Some(subject_alt_names) = &self.subject_alt_names() { debug_struct.field("subject_alt_names", subject_alt_names); } debug_struct.field("not_before", &self.not_before()); debug_struct.field("not_after", &self.not_after()); if let Ok(public_key) = &self.public_key() { debug_struct.field("public_key", public_key); }; // TODO: Print extensions once they are supported on the X509 struct. debug_struct.finish() } } impl AsRef for X509Ref { fn as_ref(&self) -> &X509Ref { self } } impl Stackable for X509 { type StackType = ffi::stack_st_X509; } /// A context object required to construct certain `X509` extension values. 
pub struct X509v3Context<'a>(ffi::X509V3_CTX, PhantomData<(&'a X509Ref, &'a ConfRef)>); impl<'a> X509v3Context<'a> { pub fn as_ptr(&self) -> *mut ffi::X509V3_CTX { &self.0 as *const _ as *mut _ } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_EXTENSION; fn drop = ffi::X509_EXTENSION_free; /// Permit additional fields to be added to an `X509` v3 certificate. pub struct X509Extension; /// Reference to `X509Extension`. pub struct X509ExtensionRef; } impl Stackable for X509Extension { type StackType = ffi::stack_st_X509_EXTENSION; } impl X509Extension { /// Constructs an X509 extension value. See `man x509v3_config` for information on supported /// names and their value formats. /// /// Some extension types, such as `subjectAlternativeName`, require an `X509v3Context` to be /// provided. /// /// See the extension module for builder types which will construct certain common extensions. pub fn new( conf: Option<&ConfRef>, context: Option<&X509v3Context<'_>>, name: &str, value: &str, ) -> Result { let name = CString::new(name).unwrap(); let value = CString::new(value).unwrap(); unsafe { ffi::init(); let conf = conf.map_or(ptr::null_mut(), ConfRef::as_ptr); let context = context.map_or(ptr::null_mut(), X509v3Context::as_ptr); let name = name.as_ptr() as *mut _; let value = value.as_ptr() as *mut _; cvt_p(ffi::X509V3_EXT_nconf(conf, context, name, value)).map(X509Extension) } } /// Constructs an X509 extension value. See `man x509v3_config` for information on supported /// extensions and their value formats. /// /// Some extension types, such as `nid::SUBJECT_ALTERNATIVE_NAME`, require an `X509v3Context` to /// be provided. /// /// See the extension module for builder types which will construct certain common extensions. pub fn new_nid( conf: Option<&ConfRef>, context: Option<&X509v3Context<'_>>, name: Nid, value: &str, ) -> Result { let value = CString::new(value).unwrap(); unsafe { ffi::init(); let conf = conf.map_or(ptr::null_mut(), ConfRef::as_ptr); let context = context.map_or(ptr::null_mut(), X509v3Context::as_ptr); let name = name.as_raw(); let value = value.as_ptr() as *mut _; cvt_p(ffi::X509V3_EXT_nconf_nid(conf, context, name, value)).map(X509Extension) } } } /// A builder used to construct an `X509Name`. pub struct X509NameBuilder(X509Name); impl X509NameBuilder { /// Creates a new builder. pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::X509_NAME_new()).map(|p| X509NameBuilder(X509Name(p))) } } /// Add a field entry by str. /// /// This corresponds to [`X509_NAME_add_entry_by_txt`]. /// /// [`X509_NAME_add_entry_by_txt`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_add_entry_by_txt.html pub fn append_entry_by_text(&mut self, field: &str, value: &str) -> Result<(), ErrorStack> { unsafe { let field = CString::new(field).unwrap(); assert!(value.len() <= c_int::max_value() as usize); cvt(ffi::X509_NAME_add_entry_by_txt( self.0.as_ptr(), field.as_ptr() as *mut _, ffi::MBSTRING_UTF8, value.as_ptr(), value.len() as c_int, -1, 0, )) .map(|_| ()) } } /// Add a field entry by str with a specific type. /// /// This corresponds to [`X509_NAME_add_entry_by_txt`]. 
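    ///
    /// # Examples
    ///
    /// A minimal sketch forcing the ASN.1 string type of an entry:
    ///
    /// ```
    /// use openssl::asn1::Asn1Type;
    /// use openssl::x509::X509NameBuilder;
    ///
    /// let mut name = X509NameBuilder::new().unwrap();
    /// // Store the common name as a UTF8String rather than the default type.
    /// name.append_entry_by_text_with_type("CN", "example.com", Asn1Type::UTF8STRING)
    ///     .unwrap();
    /// let name = name.build();
    /// ```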
/// /// [`X509_NAME_add_entry_by_txt`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_add_entry_by_txt.html pub fn append_entry_by_text_with_type( &mut self, field: &str, value: &str, ty: Asn1Type, ) -> Result<(), ErrorStack> { unsafe { let field = CString::new(field).unwrap(); assert!(value.len() <= c_int::max_value() as usize); cvt(ffi::X509_NAME_add_entry_by_txt( self.0.as_ptr(), field.as_ptr() as *mut _, ty.as_raw(), value.as_ptr(), value.len() as c_int, -1, 0, )) .map(|_| ()) } } /// Add a field entry by NID. /// /// This corresponds to [`X509_NAME_add_entry_by_NID`]. /// /// [`X509_NAME_add_entry_by_NID`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_add_entry_by_NID.html pub fn append_entry_by_nid(&mut self, field: Nid, value: &str) -> Result<(), ErrorStack> { unsafe { assert!(value.len() <= c_int::max_value() as usize); cvt(ffi::X509_NAME_add_entry_by_NID( self.0.as_ptr(), field.as_raw(), ffi::MBSTRING_UTF8, value.as_ptr() as *mut _, value.len() as c_int, -1, 0, )) .map(|_| ()) } } /// Add a field entry by NID with a specific type. /// /// This corresponds to [`X509_NAME_add_entry_by_NID`]. /// /// [`X509_NAME_add_entry_by_NID`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_add_entry_by_NID.html pub fn append_entry_by_nid_with_type( &mut self, field: Nid, value: &str, ty: Asn1Type, ) -> Result<(), ErrorStack> { unsafe { assert!(value.len() <= c_int::max_value() as usize); cvt(ffi::X509_NAME_add_entry_by_NID( self.0.as_ptr(), field.as_raw(), ty.as_raw(), value.as_ptr() as *mut _, value.len() as c_int, -1, 0, )) .map(|_| ()) } } /// Return an `X509Name`. pub fn build(self) -> X509Name { self.0 } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_NAME; fn drop = ffi::X509_NAME_free; /// The names of an `X509` certificate. pub struct X509Name; /// Reference to `X509Name`. pub struct X509NameRef; } impl X509Name { /// Returns a new builder. pub fn builder() -> Result { X509NameBuilder::new() } /// Loads subject names from a file containing PEM-formatted certificates. /// /// This is commonly used in conjunction with `SslContextBuilder::set_client_ca_list`. pub fn load_client_ca_file>(file: P) -> Result, ErrorStack> { let file = CString::new(file.as_ref().as_os_str().to_str().unwrap()).unwrap(); unsafe { cvt_p(ffi::SSL_load_client_CA_file(file.as_ptr())).map(|p| Stack::from_ptr(p)) } } from_der! { /// Deserializes a DER-encoded X509 name structure. /// /// This corresponds to [`d2i_X509_NAME`]. /// /// [`d2i_X509_NAME`]: https://www.openssl.org/docs/manmaster/man3/d2i_X509_NAME.html from_der, X509Name, ffi::d2i_X509_NAME } } impl Stackable for X509Name { type StackType = ffi::stack_st_X509_NAME; } impl X509NameRef { /// Returns the name entries by the nid. pub fn entries_by_nid(&self, nid: Nid) -> X509NameEntries<'_> { X509NameEntries { name: self, nid: Some(nid), loc: -1, } } /// Returns an iterator over all `X509NameEntry` values pub fn entries(&self) -> X509NameEntries<'_> { X509NameEntries { name: self, nid: None, loc: -1, } } to_der! { /// Serializes the certificate into a DER-encoded X509 name structure. /// /// This corresponds to [`i2d_X509_NAME`]. /// /// [`i2d_X509_NAME`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_X509_NAME.html to_der, ffi::i2d_X509_NAME } } impl fmt::Debug for X509NameRef { fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { formatter.debug_list().entries(self.entries()).finish() } } /// A type to destructure and examine an `X509Name`. 
pub struct X509NameEntries<'a> { name: &'a X509NameRef, nid: Option, loc: c_int, } impl<'a> Iterator for X509NameEntries<'a> { type Item = &'a X509NameEntryRef; fn next(&mut self) -> Option<&'a X509NameEntryRef> { unsafe { match self.nid { Some(nid) => { // There is a `Nid` specified to search for self.loc = ffi::X509_NAME_get_index_by_NID(self.name.as_ptr(), nid.as_raw(), self.loc); if self.loc == -1 { return None; } } None => { // Iterate over all `Nid`s self.loc += 1; if self.loc >= ffi::X509_NAME_entry_count(self.name.as_ptr()) { return None; } } } let entry = ffi::X509_NAME_get_entry(self.name.as_ptr(), self.loc); Some(X509NameEntryRef::from_const_ptr_opt(entry).expect("entry must not be null")) } } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_NAME_ENTRY; fn drop = ffi::X509_NAME_ENTRY_free; /// A name entry associated with a `X509Name`. pub struct X509NameEntry; /// Reference to `X509NameEntry`. pub struct X509NameEntryRef; } impl X509NameEntryRef { /// Returns the field value of an `X509NameEntry`. /// /// This corresponds to [`X509_NAME_ENTRY_get_data`]. /// /// [`X509_NAME_ENTRY_get_data`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_ENTRY_get_data.html pub fn data(&self) -> &Asn1StringRef { unsafe { let data = ffi::X509_NAME_ENTRY_get_data(self.as_ptr()); Asn1StringRef::from_ptr(data) } } /// Returns the `Asn1Object` value of an `X509NameEntry`. /// This is useful for finding out about the actual `Nid` when iterating over all `X509NameEntries`. /// /// This corresponds to [`X509_NAME_ENTRY_get_object`]. /// /// [`X509_NAME_ENTRY_get_object`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_NAME_ENTRY_get_object.html pub fn object(&self) -> &Asn1ObjectRef { unsafe { let object = ffi::X509_NAME_ENTRY_get_object(self.as_ptr()); Asn1ObjectRef::from_ptr(object) } } } impl fmt::Debug for X509NameEntryRef { fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { formatter.write_fmt(format_args!("{:?} = {:?}", self.object(), self.data())) } } /// A builder used to construct an `X509Req`. pub struct X509ReqBuilder(X509Req); impl X509ReqBuilder { /// Returns a builder for a certificate request. /// /// This corresponds to [`X509_REQ_new`]. /// ///[`X509_REQ_new`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_new.html pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::X509_REQ_new()).map(|p| X509ReqBuilder(X509Req(p))) } } /// Set the numerical value of the version field. /// /// This corresponds to [`X509_REQ_set_version`]. /// ///[`X509_REQ_set_version`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_set_version.html pub fn set_version(&mut self, version: i32) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_REQ_set_version(self.0.as_ptr(), version.into())).map(|_| ()) } } /// Set the issuer name. /// /// This corresponds to [`X509_REQ_set_subject_name`]. /// /// [`X509_REQ_set_subject_name`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_set_subject_name.html pub fn set_subject_name(&mut self, subject_name: &X509NameRef) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_REQ_set_subject_name( self.0.as_ptr(), subject_name.as_ptr(), )) .map(|_| ()) } } /// Set the public key. /// /// This corresponds to [`X509_REQ_set_pubkey`]. 
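    ///
    /// # Examples
    ///
    /// A minimal sketch of building and signing a certificate request:
    ///
    /// ```
    /// use openssl::hash::MessageDigest;
    /// use openssl::pkey::PKey;
    /// use openssl::rsa::Rsa;
    /// use openssl::x509::{X509NameBuilder, X509Req};
    ///
    /// let pkey = PKey::from_rsa(Rsa::generate(2048).unwrap()).unwrap();
    ///
    /// let mut name = X509NameBuilder::new().unwrap();
    /// name.append_entry_by_text("CN", "example.com").unwrap();
    /// let name = name.build();
    ///
    /// // The usual CSR flow: subject name, public key, then sign with the private key.
    /// let mut builder = X509Req::builder().unwrap();
    /// builder.set_subject_name(&name).unwrap();
    /// builder.set_pubkey(&pkey).unwrap();
    /// builder.sign(&pkey, MessageDigest::sha256()).unwrap();
    /// let req = builder.build();
    /// ```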
/// /// [`X509_REQ_set_pubkey`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_set_pubkey.html pub fn set_pubkey(&mut self, key: &PKeyRef) -> Result<(), ErrorStack> where T: HasPublic, { unsafe { cvt(ffi::X509_REQ_set_pubkey(self.0.as_ptr(), key.as_ptr())).map(|_| ()) } } /// Return an `X509v3Context`. This context object can be used to construct /// certain `X509` extensions. pub fn x509v3_context<'a>(&'a self, conf: Option<&'a ConfRef>) -> X509v3Context<'a> { unsafe { let mut ctx = mem::zeroed(); ffi::X509V3_set_ctx( &mut ctx, ptr::null_mut(), ptr::null_mut(), self.0.as_ptr(), ptr::null_mut(), 0, ); // nodb case taken care of since we zeroed ctx above if let Some(conf) = conf { ffi::X509V3_set_nconf(&mut ctx, conf.as_ptr()); } X509v3Context(ctx, PhantomData) } } /// Permits any number of extension fields to be added to the certificate. pub fn add_extensions( &mut self, extensions: &StackRef, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_REQ_add_extensions( self.0.as_ptr(), extensions.as_ptr(), )) .map(|_| ()) } } /// Sign the request using a private key. /// /// This corresponds to [`X509_REQ_sign`]. /// /// [`X509_REQ_sign`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_sign.html pub fn sign(&mut self, key: &PKeyRef, hash: MessageDigest) -> Result<(), ErrorStack> where T: HasPrivate, { unsafe { cvt(ffi::X509_REQ_sign( self.0.as_ptr(), key.as_ptr(), hash.as_ptr(), )) .map(|_| ()) } } /// Returns the `X509Req`. pub fn build(self) -> X509Req { self.0 } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_REQ; fn drop = ffi::X509_REQ_free; /// An `X509` certificate request. pub struct X509Req; /// Reference to `X509Req`. pub struct X509ReqRef; } impl X509Req { /// A builder for `X509Req`. pub fn builder() -> Result { X509ReqBuilder::new() } from_pem! { /// Deserializes a PEM-encoded PKCS#10 certificate request structure. /// /// The input should have a header of `-----BEGIN CERTIFICATE REQUEST-----`. /// /// This corresponds to [`PEM_read_bio_X509_REQ`]. /// /// [`PEM_read_bio_X509_REQ`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_X509_REQ.html from_pem, X509Req, ffi::PEM_read_bio_X509_REQ } from_der! { /// Deserializes a DER-encoded PKCS#10 certificate request structure. /// /// This corresponds to [`d2i_X509_REQ`]. /// /// [`d2i_X509_REQ`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_X509_REQ.html from_der, X509Req, ffi::d2i_X509_REQ } } impl X509ReqRef { to_pem! { /// Serializes the certificate request to a PEM-encoded PKCS#10 structure. /// /// The output will have a header of `-----BEGIN CERTIFICATE REQUEST-----`. /// /// This corresponds to [`PEM_write_bio_X509_REQ`]. /// /// [`PEM_write_bio_X509_REQ`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_write_bio_X509_REQ.html to_pem, ffi::PEM_write_bio_X509_REQ } to_der! { /// Serializes the certificate request to a DER-encoded PKCS#10 structure. /// /// This corresponds to [`i2d_X509_REQ`]. /// /// [`i2d_X509_REQ`]: https://www.openssl.org/docs/man1.0.2/crypto/i2d_X509_REQ.html to_der, ffi::i2d_X509_REQ } /// Returns the numerical value of the version field of the certificate request. /// /// This corresponds to [`X509_REQ_get_version`] /// /// [`X509_REQ_get_version`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_get_version.html pub fn version(&self) -> i32 { unsafe { X509_REQ_get_version(self.as_ptr()) as i32 } } /// Returns the subject name of the certificate request. 
/// /// This corresponds to [`X509_REQ_get_subject_name`] /// /// [`X509_REQ_get_subject_name`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_get_subject_name.html pub fn subject_name(&self) -> &X509NameRef { unsafe { let name = X509_REQ_get_subject_name(self.as_ptr()); X509NameRef::from_const_ptr_opt(name).expect("subject name must not be null") } } /// Returns the public key of the certificate request. /// /// This corresponds to [`X509_REQ_get_pubkey"] /// /// [`X509_REQ_get_pubkey`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_get_pubkey.html pub fn public_key(&self) -> Result, ErrorStack> { unsafe { let key = cvt_p(ffi::X509_REQ_get_pubkey(self.as_ptr()))?; Ok(PKey::from_ptr(key)) } } /// Check if the certificate request is signed using the given public key. /// /// Returns `true` if verification succeeds. /// /// This corresponds to [`X509_REQ_verify"]. /// /// [`X509_REQ_verify`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_REQ_verify.html pub fn verify(&self, key: &PKeyRef) -> Result where T: HasPublic, { unsafe { cvt_n(ffi::X509_REQ_verify(self.as_ptr(), key.as_ptr())).map(|n| n != 0) } } /// Returns the extensions of the certificate request. /// /// This corresponds to [`X509_REQ_get_extensions"] pub fn extensions(&self) -> Result, ErrorStack> { unsafe { let extensions = cvt_p(ffi::X509_REQ_get_extensions(self.as_ptr()))?; Ok(Stack::from_ptr(extensions)) } } } /// The result of peer certificate verification. #[derive(Copy, Clone, PartialEq, Eq)] pub struct X509VerifyResult(c_int); impl fmt::Debug for X509VerifyResult { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.debug_struct("X509VerifyResult") .field("code", &self.0) .field("error", &self.error_string()) .finish() } } impl fmt::Display for X509VerifyResult { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.write_str(self.error_string()) } } impl Error for X509VerifyResult {} impl X509VerifyResult { /// Creates an `X509VerifyResult` from a raw error number. /// /// # Safety /// /// Some methods on `X509VerifyResult` are not thread safe if the error /// number is invalid. pub unsafe fn from_raw(err: c_int) -> X509VerifyResult { X509VerifyResult(err) } /// Return the integer representation of an `X509VerifyResult`. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } /// Return a human readable error string from the verification error. /// /// This corresponds to [`X509_verify_cert_error_string`]. /// /// [`X509_verify_cert_error_string`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_verify_cert_error_string.html #[allow(clippy::trivially_copy_pass_by_ref)] pub fn error_string(&self) -> &'static str { ffi::init(); unsafe { let s = ffi::X509_verify_cert_error_string(self.0 as c_long); str::from_utf8(CStr::from_ptr(s).to_bytes()).unwrap() } } /// Successful peer certifiate verification. pub const OK: X509VerifyResult = X509VerifyResult(ffi::X509_V_OK); /// Application verification failure. pub const APPLICATION_VERIFICATION: X509VerifyResult = X509VerifyResult(ffi::X509_V_ERR_APPLICATION_VERIFICATION); } foreign_type_and_impl_send_sync! { type CType = ffi::GENERAL_NAME; fn drop = ffi::GENERAL_NAME_free; /// An `X509` certificate alternative names. pub struct GeneralName; /// Reference to `GeneralName`. 
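    ///
    /// A minimal sketch of reading DNS entries from a certificate's subject
    /// alternative names (the PEM bytes are a placeholder):
    ///
    /// ```no_run
    /// use openssl::x509::X509;
    ///
    /// let cert = X509::from_pem(b"-----BEGIN CERTIFICATE-----\n...").unwrap();
    /// if let Some(names) = cert.subject_alt_names() {
    ///     for name in names.iter() {
    ///         // Each entry is a GeneralName; only dNSName entries are printed here.
    ///         if let Some(dns) = name.dnsname() {
    ///             println!("DNS: {}", dns);
    ///         }
    ///     }
    /// }
    /// ```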
pub struct GeneralNameRef; } impl GeneralNameRef { fn ia5_string(&self, ffi_type: c_int) -> Option<&str> { unsafe { if (*self.as_ptr()).type_ != ffi_type { return None; } let ptr = ASN1_STRING_get0_data((*self.as_ptr()).d as *mut _); let len = ffi::ASN1_STRING_length((*self.as_ptr()).d as *mut _); let slice = slice::from_raw_parts(ptr as *const u8, len as usize); // IA5Strings are stated to be ASCII (specifically IA5). Hopefully // OpenSSL checks that when loading a certificate but if not we'll // use this instead of from_utf8_unchecked just in case. str::from_utf8(slice).ok() } } /// Returns the contents of this `GeneralName` if it is an `rfc822Name`. pub fn email(&self) -> Option<&str> { self.ia5_string(ffi::GEN_EMAIL) } /// Returns the contents of this `GeneralName` if it is a `dNSName`. pub fn dnsname(&self) -> Option<&str> { self.ia5_string(ffi::GEN_DNS) } /// Returns the contents of this `GeneralName` if it is an `uniformResourceIdentifier`. pub fn uri(&self) -> Option<&str> { self.ia5_string(ffi::GEN_URI) } /// Returns the contents of this `GeneralName` if it is an `iPAddress`. pub fn ipaddress(&self) -> Option<&[u8]> { unsafe { if (*self.as_ptr()).type_ != ffi::GEN_IPADD { return None; } let ptr = ASN1_STRING_get0_data((*self.as_ptr()).d as *mut _); let len = ffi::ASN1_STRING_length((*self.as_ptr()).d as *mut _); Some(slice::from_raw_parts(ptr as *const u8, len as usize)) } } } impl fmt::Debug for GeneralNameRef { fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { if let Some(email) = self.email() { formatter.write_str(email) } else if let Some(dnsname) = self.dnsname() { formatter.write_str(dnsname) } else if let Some(uri) = self.uri() { formatter.write_str(uri) } else if let Some(ipaddress) = self.ipaddress() { let result = String::from_utf8_lossy(ipaddress); formatter.write_str(&result) } else { formatter.write_str("(empty)") } } } impl Stackable for GeneralName { type StackType = ffi::stack_st_GENERAL_NAME; } foreign_type_and_impl_send_sync! { type CType = ffi::ACCESS_DESCRIPTION; fn drop = ffi::ACCESS_DESCRIPTION_free; /// `AccessDescription` of certificate authority information. pub struct AccessDescription; /// Reference to `AccessDescription`. pub struct AccessDescriptionRef; } impl AccessDescriptionRef { /// Returns the access method OID. pub fn method(&self) -> &Asn1ObjectRef { unsafe { Asn1ObjectRef::from_ptr((*self.as_ptr()).method) } } // Returns the access location. pub fn location(&self) -> &GeneralNameRef { unsafe { GeneralNameRef::from_ptr((*self.as_ptr()).location) } } } impl Stackable for AccessDescription { type StackType = ffi::stack_st_ACCESS_DESCRIPTION; } foreign_type_and_impl_send_sync! { type CType = ffi::X509_ALGOR; fn drop = ffi::X509_ALGOR_free; /// An `X509` certificate signature algorithm. pub struct X509Algorithm; /// Reference to `X509Algorithm`. pub struct X509AlgorithmRef; } impl X509AlgorithmRef { /// Returns the ASN.1 OID of this algorithm. pub fn object(&self) -> &Asn1ObjectRef { unsafe { let mut oid = ptr::null(); X509_ALGOR_get0(&mut oid, ptr::null_mut(), ptr::null_mut(), self.as_ptr()); Asn1ObjectRef::from_const_ptr_opt(oid).expect("algorithm oid must not be null") } } } foreign_type_and_impl_send_sync! { type CType = ffi::X509_OBJECT; fn drop = X509_OBJECT_free; /// An `X509` or an X509 certificate revocation list. 
pub struct X509Object; /// Reference to `X509Object` pub struct X509ObjectRef; } impl X509ObjectRef { pub fn x509(&self) -> Option<&X509Ref> { unsafe { let ptr = X509_OBJECT_get0_X509(self.as_ptr()); X509Ref::from_const_ptr_opt(ptr) } } } impl Stackable for X509Object { type StackType = ffi::stack_st_X509_OBJECT; } cfg_if! { if #[cfg(any(ossl110, libressl273))] { use ffi::{X509_getm_notAfter, X509_getm_notBefore, X509_up_ref, X509_get0_signature}; } else { #[allow(bad_style)] unsafe fn X509_getm_notAfter(x: *mut ffi::X509) -> *mut ffi::ASN1_TIME { (*(*(*x).cert_info).validity).notAfter } #[allow(bad_style)] unsafe fn X509_getm_notBefore(x: *mut ffi::X509) -> *mut ffi::ASN1_TIME { (*(*(*x).cert_info).validity).notBefore } #[allow(bad_style)] unsafe fn X509_up_ref(x: *mut ffi::X509) { ffi::CRYPTO_add_lock( &mut (*x).references, 1, ffi::CRYPTO_LOCK_X509, "mod.rs\0".as_ptr() as *const _, line!() as c_int, ); } #[allow(bad_style)] unsafe fn X509_get0_signature( psig: *mut *const ffi::ASN1_BIT_STRING, palg: *mut *const ffi::X509_ALGOR, x: *const ffi::X509, ) { if !psig.is_null() { *psig = (*x).signature; } if !palg.is_null() { *palg = (*x).sig_alg; } } } } cfg_if! { if #[cfg(ossl110)] { use ffi::{ X509_ALGOR_get0, ASN1_STRING_get0_data, X509_STORE_CTX_get0_chain, X509_set1_notAfter, X509_set1_notBefore, X509_REQ_get_version, X509_REQ_get_subject_name, }; } else { use ffi::{ ASN1_STRING_data as ASN1_STRING_get0_data, X509_STORE_CTX_get_chain as X509_STORE_CTX_get0_chain, X509_set_notAfter as X509_set1_notAfter, X509_set_notBefore as X509_set1_notBefore, }; #[allow(bad_style)] unsafe fn X509_REQ_get_version(x: *mut ffi::X509_REQ) -> ::libc::c_long { ffi::ASN1_INTEGER_get((*(*x).req_info).version) } #[allow(bad_style)] unsafe fn X509_REQ_get_subject_name(x: *mut ffi::X509_REQ) -> *mut ::ffi::X509_NAME { (*(*x).req_info).subject } #[allow(bad_style)] unsafe fn X509_ALGOR_get0( paobj: *mut *const ffi::ASN1_OBJECT, pptype: *mut c_int, pval: *mut *mut ::libc::c_void, alg: *const ffi::X509_ALGOR, ) { if !paobj.is_null() { *paobj = (*alg).algorithm; } assert!(pptype.is_null()); assert!(pval.is_null()); } } } cfg_if! { if #[cfg(any(ossl110, libressl270))] { use ffi::X509_OBJECT_get0_X509; } else { #[allow(bad_style)] unsafe fn X509_OBJECT_get0_X509(x: *mut ffi::X509_OBJECT) -> *mut ffi::X509 { if (*x).type_ == ffi::X509_LU_X509 { (*x).data.x509 } else { ptr::null_mut() } } } } cfg_if! 
{ if #[cfg(ossl110)] { use ffi::X509_OBJECT_free; } else { #[allow(bad_style)] unsafe fn X509_OBJECT_free(x: *mut ffi::X509_OBJECT) { ffi::X509_OBJECT_free_contents(x); ffi::CRYPTO_free(x as *mut libc::c_void); } } } vendor/openssl/src/x509/tests.rs0000664000175000017500000003660514172417313017353 0ustar mwhudsonmwhudsonuse crate::asn1::Asn1Time; use crate::bn::{BigNum, MsbOption}; use crate::hash::MessageDigest; use crate::nid::Nid; use crate::pkey::{PKey, Private}; use crate::rsa::Rsa; use crate::stack::Stack; use crate::x509::extension::{ AuthorityKeyIdentifier, BasicConstraints, ExtendedKeyUsage, KeyUsage, SubjectAlternativeName, SubjectKeyIdentifier, }; use crate::x509::store::X509StoreBuilder; #[cfg(any(ossl102, libressl261))] use crate::x509::verify::X509VerifyFlags; #[cfg(ossl110)] use crate::x509::X509Builder; use crate::x509::{X509Name, X509Req, X509StoreContext, X509VerifyResult, X509}; use hex::{self, FromHex}; fn pkey() -> PKey { let rsa = Rsa::generate(2048).unwrap(); PKey::from_rsa(rsa).unwrap() } #[test] fn test_cert_loading() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let fingerprint = cert.digest(MessageDigest::sha1()).unwrap(); let hash_str = "59172d9313e84459bcff27f967e79e6e9217e584"; let hash_vec = Vec::from_hex(hash_str).unwrap(); assert_eq!(hash_vec, &*fingerprint); } #[test] fn test_debug() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let debugged = format!("{:#?}", cert); assert!(debugged.contains(r#"serial_number: "8771F7BDEE982FA5""#)); assert!(debugged.contains(r#"signature_algorithm: sha256WithRSAEncryption"#)); assert!(debugged.contains(r#"countryName = "AU""#)); assert!(debugged.contains(r#"stateOrProvinceName = "Some-State""#)); assert!(debugged.contains(r#"not_before: Aug 14 17:00:03 2016 GMT"#)); assert!(debugged.contains(r#"not_after: Aug 12 17:00:03 2026 GMT"#)); } #[test] fn test_cert_issue_validity() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let not_before = cert.not_before().to_string(); let not_after = cert.not_after().to_string(); assert_eq!(not_before, "Aug 14 17:00:03 2016 GMT"); assert_eq!(not_after, "Aug 12 17:00:03 2026 GMT"); } #[test] fn test_save_der() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let der = cert.to_der().unwrap(); assert!(!der.is_empty()); } #[test] fn test_subject_read_cn() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject = cert.subject_name(); let cn = subject.entries_by_nid(Nid::COMMONNAME).next().unwrap(); assert_eq!(cn.data().as_slice(), b"foobar.com") } #[test] fn test_nid_values() { let cert = include_bytes!("../../test/nid_test_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject = cert.subject_name(); let cn = subject.entries_by_nid(Nid::COMMONNAME).next().unwrap(); assert_eq!(cn.data().as_slice(), b"example.com"); let email = subject .entries_by_nid(Nid::PKCS9_EMAILADDRESS) .next() .unwrap(); assert_eq!(email.data().as_slice(), b"test@example.com"); let friendly = subject.entries_by_nid(Nid::FRIENDLYNAME).next().unwrap(); assert_eq!(&**friendly.data().as_utf8().unwrap(), "Example"); } #[test] fn test_nameref_iterator() { let cert = include_bytes!("../../test/nid_test_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject = cert.subject_name(); let mut all_entries = subject.entries(); let email = all_entries.next().unwrap(); assert_eq!( 
email.object().nid().as_raw(), Nid::PKCS9_EMAILADDRESS.as_raw() ); assert_eq!(email.data().as_slice(), b"test@example.com"); let cn = all_entries.next().unwrap(); assert_eq!(cn.object().nid().as_raw(), Nid::COMMONNAME.as_raw()); assert_eq!(cn.data().as_slice(), b"example.com"); let friendly = all_entries.next().unwrap(); assert_eq!(friendly.object().nid().as_raw(), Nid::FRIENDLYNAME.as_raw()); assert_eq!(&**friendly.data().as_utf8().unwrap(), "Example"); if all_entries.next().is_some() { panic!(); } } #[test] fn test_nid_uid_value() { let cert = include_bytes!("../../test/nid_uid_test_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject = cert.subject_name(); let cn = subject.entries_by_nid(Nid::USERID).next().unwrap(); assert_eq!(cn.data().as_slice(), b"this is the userId"); } #[test] fn test_subject_alt_name() { let cert = include_bytes!("../../test/alt_name_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject_alt_names = cert.subject_alt_names().unwrap(); assert_eq!(5, subject_alt_names.len()); assert_eq!(Some("example.com"), subject_alt_names[0].dnsname()); assert_eq!(subject_alt_names[1].ipaddress(), Some(&[127, 0, 0, 1][..])); assert_eq!( subject_alt_names[2].ipaddress(), Some(&b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01"[..]) ); assert_eq!(Some("test@example.com"), subject_alt_names[3].email()); assert_eq!(Some("http://www.example.com"), subject_alt_names[4].uri()); } #[test] fn test_subject_alt_name_iter() { let cert = include_bytes!("../../test/alt_name_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let subject_alt_names = cert.subject_alt_names().unwrap(); let mut subject_alt_names_iter = subject_alt_names.iter(); assert_eq!( subject_alt_names_iter.next().unwrap().dnsname(), Some("example.com") ); assert_eq!( subject_alt_names_iter.next().unwrap().ipaddress(), Some(&[127, 0, 0, 1][..]) ); assert_eq!( subject_alt_names_iter.next().unwrap().ipaddress(), Some(&b"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01"[..]) ); assert_eq!( subject_alt_names_iter.next().unwrap().email(), Some("test@example.com") ); assert_eq!( subject_alt_names_iter.next().unwrap().uri(), Some("http://www.example.com") ); assert!(subject_alt_names_iter.next().is_none()); } #[test] fn test_aia_ca_issuer() { // With AIA let cert = include_bytes!("../../test/aia_test_cert.pem"); let cert = X509::from_pem(cert).unwrap(); let authority_info = cert.authority_info().unwrap(); assert_eq!(authority_info.len(), 1); assert_eq!(authority_info[0].method().to_string(), "CA Issuers"); assert_eq!( authority_info[0].location().uri(), Some("http://www.example.com/cert.pem") ); // Without AIA let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); assert!(cert.authority_info().is_none()); } #[test] fn x509_builder() { let pkey = pkey(); let mut name = X509Name::builder().unwrap(); name.append_entry_by_nid(Nid::COMMONNAME, "foobar.com") .unwrap(); let name = name.build(); let mut builder = X509::builder().unwrap(); builder.set_version(2).unwrap(); builder.set_subject_name(&name).unwrap(); builder.set_issuer_name(&name).unwrap(); builder .set_not_before(&Asn1Time::days_from_now(0).unwrap()) .unwrap(); builder .set_not_after(&Asn1Time::days_from_now(365).unwrap()) .unwrap(); builder.set_pubkey(&pkey).unwrap(); let mut serial = BigNum::new().unwrap(); serial.rand(128, MsbOption::MAYBE_ZERO, false).unwrap(); builder .set_serial_number(&serial.to_asn1_integer().unwrap()) .unwrap(); let basic_constraints = BasicConstraints::new().critical().ca().build().unwrap(); 
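// Each extension below is created with its dedicated builder and appended to the
// certificate. SubjectKeyIdentifier, AuthorityKeyIdentifier and SubjectAlternativeName
// need an X509v3 context; `builder.x509v3_context(None, None)` supplies one with no
// issuer certificate and no configuration database, which is what a self-signed
// certificate needs.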
builder.append_extension(basic_constraints).unwrap(); let key_usage = KeyUsage::new() .digital_signature() .key_encipherment() .build() .unwrap(); builder.append_extension(key_usage).unwrap(); let ext_key_usage = ExtendedKeyUsage::new() .client_auth() .server_auth() .other("2.999.1") .build() .unwrap(); builder.append_extension(ext_key_usage).unwrap(); let subject_key_identifier = SubjectKeyIdentifier::new() .build(&builder.x509v3_context(None, None)) .unwrap(); builder.append_extension(subject_key_identifier).unwrap(); let authority_key_identifier = AuthorityKeyIdentifier::new() .keyid(true) .build(&builder.x509v3_context(None, None)) .unwrap(); builder.append_extension(authority_key_identifier).unwrap(); let subject_alternative_name = SubjectAlternativeName::new() .dns("example.com") .build(&builder.x509v3_context(None, None)) .unwrap(); builder.append_extension(subject_alternative_name).unwrap(); builder.sign(&pkey, MessageDigest::sha256()).unwrap(); let x509 = builder.build(); assert!(pkey.public_eq(&x509.public_key().unwrap())); assert!(x509.verify(&pkey).unwrap()); let cn = x509 .subject_name() .entries_by_nid(Nid::COMMONNAME) .next() .unwrap(); assert_eq!(cn.data().as_slice(), b"foobar.com"); assert_eq!(serial, x509.serial_number().to_bn().unwrap()); } #[test] fn x509_req_builder() { let pkey = pkey(); let mut name = X509Name::builder().unwrap(); name.append_entry_by_nid(Nid::COMMONNAME, "foobar.com") .unwrap(); let name = name.build(); let mut builder = X509Req::builder().unwrap(); builder.set_version(2).unwrap(); builder.set_subject_name(&name).unwrap(); builder.set_pubkey(&pkey).unwrap(); let mut extensions = Stack::new().unwrap(); let key_usage = KeyUsage::new() .digital_signature() .key_encipherment() .build() .unwrap(); extensions.push(key_usage).unwrap(); let subject_alternative_name = SubjectAlternativeName::new() .dns("example.com") .build(&builder.x509v3_context(None)) .unwrap(); extensions.push(subject_alternative_name).unwrap(); builder.add_extensions(&extensions).unwrap(); builder.sign(&pkey, MessageDigest::sha256()).unwrap(); let req = builder.build(); assert!(req.public_key().unwrap().public_eq(&pkey)); assert_eq!(req.extensions().unwrap().len(), extensions.len()); assert!(req.verify(&pkey).unwrap()); } #[test] fn test_stack_from_pem() { let certs = include_bytes!("../../test/certs.pem"); let certs = X509::stack_from_pem(certs).unwrap(); assert_eq!(certs.len(), 2); assert_eq!( hex::encode(certs[0].digest(MessageDigest::sha1()).unwrap()), "59172d9313e84459bcff27f967e79e6e9217e584" ); assert_eq!( hex::encode(certs[1].digest(MessageDigest::sha1()).unwrap()), "c0cbdf7cdd03c9773e5468e1f6d2da7d5cbb1875" ); } #[test] fn issued() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let ca = include_bytes!("../../test/root-ca.pem"); let ca = X509::from_pem(ca).unwrap(); assert_eq!(ca.issued(&cert), X509VerifyResult::OK); assert_ne!(cert.issued(&cert), X509VerifyResult::OK); } #[test] fn signature() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let signature = cert.signature(); assert_eq!( hex::encode(signature.as_slice()), "4af607b889790b43470442cfa551cdb8b6d0b0340d2958f76b9e3ef6ad4992230cead6842587f0ecad5\ 78e6e11a221521e940187e3d6652de14e84e82f6671f097cc47932e022add3c0cb54a26bf27fa84c107\ 4971caa6bee2e42d34a5b066c427f2d452038082b8073993399548088429de034fdd589dcfb0dd33be7\ ebdfdf698a28d628a89568881d658151276bde333600969502c4e62e1d3470a683364dfb241f78d310a\ 
89c119297df093eb36b7fd7540224f488806780305d1e79ffc938fe2275441726522ab36d88348e6c51\ f13dcc46b5e1cdac23c974fd5ef86aa41e91c9311655090a52333bc79687c748d833595d4c5f987508f\ e121997410d37c" ); let algorithm = cert.signature_algorithm(); assert_eq!(algorithm.object().nid(), Nid::SHA256WITHRSAENCRYPTION); assert_eq!(algorithm.object().to_string(), "sha256WithRSAEncryption"); } #[test] #[allow(clippy::redundant_clone)] fn clone_x509() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); drop(cert.clone()); } #[test] fn test_verify_cert() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let ca = include_bytes!("../../test/root-ca.pem"); let ca = X509::from_pem(ca).unwrap(); let chain = Stack::new().unwrap(); let mut store_bldr = X509StoreBuilder::new().unwrap(); store_bldr.add_cert(ca).unwrap(); let store = store_bldr.build(); let mut context = X509StoreContext::new().unwrap(); assert!(context .init(&store, &cert, &chain, |c| c.verify_cert()) .unwrap()); assert!(context .init(&store, &cert, &chain, |c| c.verify_cert()) .unwrap()); } #[test] fn test_verify_fails() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let ca = include_bytes!("../../test/alt_name_cert.pem"); let ca = X509::from_pem(ca).unwrap(); let chain = Stack::new().unwrap(); let mut store_bldr = X509StoreBuilder::new().unwrap(); store_bldr.add_cert(ca).unwrap(); let store = store_bldr.build(); let mut context = X509StoreContext::new().unwrap(); assert!(!context .init(&store, &cert, &chain, |c| c.verify_cert()) .unwrap()); } #[test] #[cfg(any(ossl102, libressl261))] fn test_verify_fails_with_crl_flag_set_and_no_crl() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let ca = include_bytes!("../../test/root-ca.pem"); let ca = X509::from_pem(ca).unwrap(); let chain = Stack::new().unwrap(); let mut store_bldr = X509StoreBuilder::new().unwrap(); store_bldr.add_cert(ca).unwrap(); store_bldr.set_flags(X509VerifyFlags::CRL_CHECK).unwrap(); let store = store_bldr.build(); let mut context = X509StoreContext::new().unwrap(); assert_eq!( context .init(&store, &cert, &chain, |c| { c.verify_cert()?; Ok(c.error()) }) .unwrap() .error_string(), "unable to get certificate CRL" ) } #[cfg(ossl110)] #[test] fn x509_ref_version() { let mut builder = X509Builder::new().unwrap(); let expected_version = 2; builder .set_version(expected_version) .expect("Failed to set certificate version"); let cert = builder.build(); let actual_version = cert.version(); assert_eq!( expected_version, actual_version, "Obtained certificate version is incorrect", ); } #[cfg(ossl110)] #[test] fn x509_ref_version_no_version_set() { let cert = X509Builder::new().unwrap().build(); let actual_version = cert.version(); assert_eq!( 0, actual_version, "Default certificate version is incorrect", ); } #[test] fn test_save_subject_der() { let cert = include_bytes!("../../test/cert.pem"); let cert = X509::from_pem(cert).unwrap(); let der = cert.subject_name().to_der().unwrap(); println!("der: {:?}", der); assert!(!der.is_empty()); } #[test] fn test_load_subject_der() { // The subject from ../../test/cert.pem const SUBJECT_DER: &[u8] = &[ 48, 90, 49, 11, 48, 9, 6, 3, 85, 4, 6, 19, 2, 65, 85, 49, 19, 48, 17, 6, 3, 85, 4, 8, 12, 10, 83, 111, 109, 101, 45, 83, 116, 97, 116, 101, 49, 33, 48, 31, 6, 3, 85, 4, 10, 12, 24, 73, 110, 116, 101, 114, 110, 101, 116, 32, 87, 105, 100, 103, 105, 116, 115, 32, 80, 116, 121, 32, 76, 
116, 100, 49, 19, 48, 17, 6, 3, 85, 4, 3, 12, 10, 102, 111, 111, 98, 97, 114, 46, 99, 111, 109, ]; X509Name::from_der(SUBJECT_DER).unwrap(); } vendor/openssl/src/x509/store.rs0000664000175000017500000001551614160055207017340 0ustar mwhudsonmwhudson//! Describe a context in which to verify an `X509` certificate. //! //! The `X509` certificate store holds trusted CA certificates used to verify //! peer certificates. //! //! # Example //! //! ```rust //! use openssl::x509::store::{X509StoreBuilder, X509Store}; //! use openssl::x509::{X509, X509Name}; //! use openssl::pkey::PKey; //! use openssl::hash::MessageDigest; //! use openssl::rsa::Rsa; //! use openssl::nid::Nid; //! //! let rsa = Rsa::generate(2048).unwrap(); //! let pkey = PKey::from_rsa(rsa).unwrap(); //! //! let mut name = X509Name::builder().unwrap(); //! name.append_entry_by_nid(Nid::COMMONNAME, "foobar.com").unwrap(); //! let name = name.build(); //! //! let mut builder = X509::builder().unwrap(); //! builder.set_version(2).unwrap(); //! builder.set_subject_name(&name).unwrap(); //! builder.set_issuer_name(&name).unwrap(); //! builder.set_pubkey(&pkey).unwrap(); //! builder.sign(&pkey, MessageDigest::sha256()).unwrap(); //! //! let certificate: X509 = builder.build(); //! //! let mut builder = X509StoreBuilder::new().unwrap(); //! let _ = builder.add_cert(certificate); //! //! let store: X509Store = builder.build(); //! ``` use cfg_if::cfg_if; use foreign_types::ForeignTypeRef; use std::mem; use crate::error::ErrorStack; use crate::stack::StackRef; #[cfg(any(ossl102, libressl261))] use crate::x509::verify::X509VerifyFlags; use crate::x509::{X509Object, X509}; use crate::{cvt, cvt_p}; foreign_type_and_impl_send_sync! { type CType = ffi::X509_STORE; fn drop = ffi::X509_STORE_free; /// A builder type used to construct an `X509Store`. pub struct X509StoreBuilder; /// Reference to an `X509StoreBuilder`. pub struct X509StoreBuilderRef; } impl X509StoreBuilder { /// Returns a builder for a certificate store. /// /// The store is initially empty. pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::X509_STORE_new()).map(X509StoreBuilder) } } /// Constructs the `X509Store`. pub fn build(self) -> X509Store { let store = X509Store(self.0); mem::forget(self); store } } impl X509StoreBuilderRef { /// Adds a certificate to the certificate store. // FIXME should take an &X509Ref pub fn add_cert(&mut self, cert: X509) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_STORE_add_cert(self.as_ptr(), cert.as_ptr())).map(|_| ()) } } /// Load certificates from their default locations. /// /// These locations are read from the `SSL_CERT_FILE` and `SSL_CERT_DIR` /// environment variables if present, or defaults specified at OpenSSL /// build time otherwise. pub fn set_default_paths(&mut self) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_STORE_set_default_paths(self.as_ptr())).map(|_| ()) } } /// Adds a lookup method to the store. /// /// This corresponds to [`X509_STORE_add_lookup`]. /// /// [`X509_STORE_add_lookup`]: https://www.openssl.org/docs/man1.1.1/man3/X509_STORE_add_lookup.html pub fn add_lookup( &mut self, method: &'static X509LookupMethodRef, ) -> Result<&mut X509LookupRef, ErrorStack> { let lookup = unsafe { ffi::X509_STORE_add_lookup(self.as_ptr(), method.as_ptr()) }; cvt_p(lookup).map(|ptr| unsafe { X509LookupRef::from_ptr_mut(ptr) }) } /// Sets certificate chain validation related flags. /// /// This corresponds to [`X509_STORE_set_flags`]. 
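///
/// A minimal sketch, assuming an OpenSSL/LibreSSL build new enough that this
/// method is compiled in (see the `ossl102`/`libressl261` gate below):
///
/// ```ignore
/// use openssl::x509::store::X509StoreBuilder;
/// use openssl::x509::verify::X509VerifyFlags;
///
/// let mut builder = X509StoreBuilder::new().unwrap();
/// // Require a CRL check for the leaf certificate during verification.
/// builder.set_flags(X509VerifyFlags::CRL_CHECK).unwrap();
/// let store = builder.build();
/// ```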
/// /// [`X509_STORE_set_flags`]: https://www.openssl.org/docs/man1.1.1/man3/X509_STORE_set_flags.html #[cfg(any(ossl102, libressl261))] pub fn set_flags(&mut self, flags: X509VerifyFlags) -> Result<(), ErrorStack> { unsafe { cvt(ffi::X509_STORE_set_flags(self.as_ptr(), flags.bits())).map(|_| ()) } } } generic_foreign_type_and_impl_send_sync! { type CType = ffi::X509_LOOKUP; fn drop = ffi::X509_LOOKUP_free; /// Information used by an `X509Store` to look up certificates and CRLs. pub struct X509Lookup; /// Reference to an `X509Lookup`. pub struct X509LookupRef; } /// Marker type corresponding to the [`X509_LOOKUP_hash_dir`] lookup method. /// /// [`X509_LOOKUP_hash_dir`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_LOOKUP_hash_dir.html pub struct HashDir; impl X509Lookup { /// Lookup method that loads certificates and CRLs on demand and caches /// them in memory once they are loaded. It also checks for newer CRLs upon /// each lookup, so that newer CRLs are used as soon as they appear in the /// directory. /// /// This corresponds to [`X509_LOOKUP_hash_dir`]. /// /// [`X509_LOOKUP_hash_dir`]: https://www.openssl.org/docs/man1.1.0/crypto/X509_LOOKUP_hash_dir.html pub fn hash_dir() -> &'static X509LookupMethodRef { unsafe { X509LookupMethodRef::from_ptr(ffi::X509_LOOKUP_hash_dir()) } } } impl X509LookupRef { /// Specifies a directory from which certificates and CRLs will be loaded /// on-demand. Must be used with `X509Lookup::hash_dir`. /// /// This corresponds to [`X509_LOOKUP_add_dir`]. /// /// [`X509_LOOKUP_add_dir`]: https://www.openssl.org/docs/man1.1.1/man3/X509_LOOKUP_add_dir.html pub fn add_dir( &mut self, name: &str, file_type: crate::ssl::SslFiletype, ) -> Result<(), ErrorStack> { let name = std::ffi::CString::new(name).unwrap(); unsafe { cvt(ffi::X509_LOOKUP_add_dir( self.as_ptr(), name.as_ptr(), file_type.as_raw(), )) .map(|_| ()) } } } generic_foreign_type_and_impl_send_sync! { type CType = ffi::X509_LOOKUP_METHOD; fn drop = X509_LOOKUP_meth_free; /// Method used to look up certificates and CRLs. pub struct X509LookupMethod; /// Reference to an `X509LookupMethod`. pub struct X509LookupMethodRef; } foreign_type_and_impl_send_sync! { type CType = ffi::X509_STORE; fn drop = ffi::X509_STORE_free; /// A certificate store to hold trusted `X509` certificates. pub struct X509Store; /// Reference to an `X509Store`. pub struct X509StoreRef; } impl X509StoreRef { /// Get a reference to the cache of certificates in this store. pub fn objects(&self) -> &StackRef { unsafe { StackRef::from_ptr(X509_STORE_get0_objects(self.as_ptr())) } } } cfg_if! { if #[cfg(any(ossl110, libressl270))] { use ffi::X509_STORE_get0_objects; } else { #[allow(bad_style)] unsafe fn X509_STORE_get0_objects(x: *mut ffi::X509_STORE) -> *mut ffi::stack_st_X509_OBJECT { (*x).objs } } } cfg_if! { if #[cfg(ossl110)] { use ffi::X509_LOOKUP_meth_free; } else { #[allow(bad_style)] unsafe fn X509_LOOKUP_meth_free(_x: *mut ffi::X509_LOOKUP_METHOD) {} } } vendor/openssl/src/macros.rs0000664000175000017500000002106614160055207016760 0ustar mwhudsonmwhudsonmacro_rules! 
private_key_from_pem { ($(#[$m:meta])* $n:ident, $(#[$m2:meta])* $n2:ident, $(#[$m3:meta])* $n3:ident, $t:ty, $f:path) => { from_pem!($(#[$m])* $n, $t, $f); $(#[$m2])* pub fn $n2(pem: &[u8], passphrase: &[u8]) -> Result<$t, crate::error::ErrorStack> { unsafe { ffi::init(); let bio = crate::bio::MemBioSlice::new(pem)?; let passphrase = ::std::ffi::CString::new(passphrase).unwrap(); cvt_p($f(bio.as_ptr(), ptr::null_mut(), None, passphrase.as_ptr() as *const _ as *mut _)) .map(|p| ::foreign_types::ForeignType::from_ptr(p)) } } $(#[$m3])* pub fn $n3(pem: &[u8], callback: F) -> Result<$t, crate::error::ErrorStack> where F: FnOnce(&mut [u8]) -> Result { unsafe { ffi::init(); let mut cb = crate::util::CallbackState::new(callback); let bio = crate::bio::MemBioSlice::new(pem)?; cvt_p($f(bio.as_ptr(), ptr::null_mut(), Some(crate::util::invoke_passwd_cb::), &mut cb as *mut _ as *mut _)) .map(|p| ::foreign_types::ForeignType::from_ptr(p)) } } } } macro_rules! private_key_to_pem { ($(#[$m:meta])* $n:ident, $(#[$m2:meta])* $n2:ident, $f:path) => { $(#[$m])* pub fn $n(&self) -> Result, crate::error::ErrorStack> { unsafe { let bio = crate::bio::MemBio::new()?; cvt($f(bio.as_ptr(), self.as_ptr(), ptr::null(), ptr::null_mut(), -1, None, ptr::null_mut()))?; Ok(bio.get_buf().to_owned()) } } $(#[$m2])* pub fn $n2( &self, cipher: crate::symm::Cipher, passphrase: &[u8] ) -> Result, crate::error::ErrorStack> { unsafe { let bio = crate::bio::MemBio::new()?; assert!(passphrase.len() <= ::libc::c_int::max_value() as usize); cvt($f(bio.as_ptr(), self.as_ptr(), cipher.as_ptr(), passphrase.as_ptr() as *const _ as *mut _, passphrase.len() as ::libc::c_int, None, ptr::null_mut()))?; Ok(bio.get_buf().to_owned()) } } } } macro_rules! to_pem { ($(#[$m:meta])* $n:ident, $f:path) => { $(#[$m])* pub fn $n(&self) -> Result, crate::error::ErrorStack> { unsafe { let bio = crate::bio::MemBio::new()?; cvt($f(bio.as_ptr(), self.as_ptr()))?; Ok(bio.get_buf().to_owned()) } } } } macro_rules! to_der { ($(#[$m:meta])* $n:ident, $f:path) => { $(#[$m])* pub fn $n(&self) -> Result, crate::error::ErrorStack> { unsafe { let len = crate::cvt($f(::foreign_types::ForeignTypeRef::as_ptr(self), ptr::null_mut()))?; let mut buf = vec![0; len as usize]; crate::cvt($f(::foreign_types::ForeignTypeRef::as_ptr(self), &mut buf.as_mut_ptr()))?; Ok(buf) } } }; } macro_rules! from_der { ($(#[$m:meta])* $n:ident, $t:ty, $f:path) => { $(#[$m])* pub fn $n(der: &[u8]) -> Result<$t, crate::error::ErrorStack> { unsafe { ffi::init(); let len = ::std::cmp::min(der.len(), ::libc::c_long::max_value() as usize) as ::libc::c_long; crate::cvt_p($f(::std::ptr::null_mut(), &mut der.as_ptr(), len)) .map(|p| ::foreign_types::ForeignType::from_ptr(p)) } } } } macro_rules! from_pem { ($(#[$m:meta])* $n:ident, $t:ty, $f:path) => { $(#[$m])* pub fn $n(pem: &[u8]) -> Result<$t, crate::error::ErrorStack> { unsafe { crate::init(); let bio = crate::bio::MemBioSlice::new(pem)?; cvt_p($f(bio.as_ptr(), ::std::ptr::null_mut(), None, ::std::ptr::null_mut())) .map(|p| ::foreign_types::ForeignType::from_ptr(p)) } } } } macro_rules! foreign_type_and_impl_send_sync { ( $(#[$impl_attr:meta])* type CType = $ctype:ty; fn drop = $drop:expr; $(fn clone = $clone:expr;)* $(#[$owned_attr:meta])* pub struct $owned:ident; $(#[$borrowed_attr:meta])* pub struct $borrowed:ident; ) => { ::foreign_types::foreign_type! 
{ $(#[$impl_attr])* type CType = $ctype; fn drop = $drop; $(fn clone = $clone;)* $(#[$owned_attr])* pub struct $owned; $(#[$borrowed_attr])* pub struct $borrowed; } unsafe impl Send for $owned{} unsafe impl Send for $borrowed{} unsafe impl Sync for $owned{} unsafe impl Sync for $borrowed{} }; } macro_rules! generic_foreign_type_and_impl_send_sync { ( $(#[$impl_attr:meta])* type CType = $ctype:ty; fn drop = $drop:expr; $(fn clone = $clone:expr;)* $(#[$owned_attr:meta])* pub struct $owned:ident; $(#[$borrowed_attr:meta])* pub struct $borrowed:ident; ) => { $(#[$owned_attr])* pub struct $owned(*mut $ctype, ::std::marker::PhantomData); $(#[$impl_attr])* impl ::foreign_types::ForeignType for $owned { type CType = $ctype; type Ref = $borrowed; #[inline] unsafe fn from_ptr(ptr: *mut $ctype) -> $owned { $owned(ptr, ::std::marker::PhantomData) } #[inline] fn as_ptr(&self) -> *mut $ctype { self.0 } } impl Drop for $owned { #[inline] fn drop(&mut self) { unsafe { $drop(self.0) } } } $( impl Clone for $owned { #[inline] fn clone(&self) -> $owned { unsafe { let handle: *mut $ctype = $clone(self.0); ::foreign_types::ForeignType::from_ptr(handle) } } } impl ::std::borrow::ToOwned for $borrowed { type Owned = $owned; #[inline] fn to_owned(&self) -> $owned { unsafe { let handle: *mut $ctype = $clone(::foreign_types::ForeignTypeRef::as_ptr(self)); $crate::ForeignType::from_ptr(handle) } } } )* impl ::std::ops::Deref for $owned { type Target = $borrowed; #[inline] fn deref(&self) -> &$borrowed { unsafe { ::foreign_types::ForeignTypeRef::from_ptr(self.0) } } } impl ::std::ops::DerefMut for $owned { #[inline] fn deref_mut(&mut self) -> &mut $borrowed { unsafe { ::foreign_types::ForeignTypeRef::from_ptr_mut(self.0) } } } impl ::std::borrow::Borrow<$borrowed> for $owned { #[inline] fn borrow(&self) -> &$borrowed { &**self } } impl ::std::convert::AsRef<$borrowed> for $owned { #[inline] fn as_ref(&self) -> &$borrowed { &**self } } $(#[$borrowed_attr])* pub struct $borrowed(::foreign_types::Opaque, ::std::marker::PhantomData); $(#[$impl_attr])* impl ::foreign_types::ForeignTypeRef for $borrowed { type CType = $ctype; } unsafe impl Send for $owned{} unsafe impl Send for $borrowed{} unsafe impl Sync for $owned{} unsafe impl Sync for $borrowed{} }; } vendor/openssl/src/derive.rs0000664000175000017500000001035514160055207016751 0ustar mwhudsonmwhudson//! Shared secret derivation. use foreign_types::ForeignTypeRef; use std::marker::PhantomData; use std::ptr; use crate::error::ErrorStack; use crate::pkey::{HasPrivate, HasPublic, PKeyRef}; use crate::{cvt, cvt_p}; /// A type used to derive a shared secret between two keys. pub struct Deriver<'a>(*mut ffi::EVP_PKEY_CTX, PhantomData<&'a ()>); unsafe impl<'a> Sync for Deriver<'a> {} unsafe impl<'a> Send for Deriver<'a> {} #[allow(clippy::len_without_is_empty)] impl<'a> Deriver<'a> { /// Creates a new `Deriver` using the provided private key. /// /// This corresponds to [`EVP_PKEY_derive_init`]. /// /// [`EVP_PKEY_derive_init`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_PKEY_derive_init.html pub fn new(key: &'a PKeyRef) -> Result, ErrorStack> where T: HasPrivate, { unsafe { cvt_p(ffi::EVP_PKEY_CTX_new(key.as_ptr(), ptr::null_mut())) .map(|p| Deriver(p, PhantomData)) .and_then(|ctx| cvt(ffi::EVP_PKEY_derive_init(ctx.0)).map(|_| ctx)) } } /// Sets the peer key used for secret derivation. 
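///
/// A condensed sketch of a complete derivation between two freshly generated
/// EC keys (the same flow as `test_ec_key_derive` at the bottom of this module):
///
/// ```no_run
/// use openssl::derive::Deriver;
/// use openssl::ec::{EcGroup, EcKey};
/// use openssl::nid::Nid;
/// use openssl::pkey::PKey;
///
/// let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap();
/// let ours = PKey::from_ec_key(EcKey::generate(&group).unwrap()).unwrap();
/// let theirs = PKey::from_ec_key(EcKey::generate(&group).unwrap()).unwrap();
///
/// let mut deriver = Deriver::new(&ours).unwrap();
/// deriver.set_peer(&theirs).unwrap();
/// let shared_secret = deriver.derive_to_vec().unwrap();
/// assert!(!shared_secret.is_empty());
/// ```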
/// /// This corresponds to [`EVP_PKEY_derive_set_peer`]: /// /// [`EVP_PKEY_derive_set_peer`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_PKEY_derive_init.html pub fn set_peer(&mut self, key: &'a PKeyRef) -> Result<(), ErrorStack> where T: HasPublic, { unsafe { cvt(ffi::EVP_PKEY_derive_set_peer(self.0, key.as_ptr())).map(|_| ()) } } /// Returns the size of the shared secret. /// /// It can be used to size the buffer passed to [`Deriver::derive`]. /// /// This corresponds to [`EVP_PKEY_derive`]. /// /// [`Deriver::derive`]: #method.derive /// [`EVP_PKEY_derive`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_PKEY_derive_init.html pub fn len(&mut self) -> Result { unsafe { let mut len = 0; cvt(ffi::EVP_PKEY_derive(self.0, ptr::null_mut(), &mut len)).map(|_| len) } } /// Derives a shared secret between the two keys, writing it into the buffer. /// /// Returns the number of bytes written. /// /// This corresponds to [`EVP_PKEY_derive`]. /// /// [`EVP_PKEY_derive`]: https://www.openssl.org/docs/man1.0.2/crypto/EVP_PKEY_derive_init.html pub fn derive(&mut self, buf: &mut [u8]) -> Result { let mut len = buf.len(); unsafe { cvt(ffi::EVP_PKEY_derive( self.0, buf.as_mut_ptr() as *mut _, &mut len, )) .map(|_| len) } } /// A convenience function which derives a shared secret and returns it in a new buffer. /// /// This simply wraps [`Deriver::len`] and [`Deriver::derive`]. /// /// [`Deriver::len`]: #method.len /// [`Deriver::derive`]: #method.derive pub fn derive_to_vec(&mut self) -> Result, ErrorStack> { let len = self.len()?; let mut buf = vec![0; len]; let len = self.derive(&mut buf)?; buf.truncate(len); Ok(buf) } } impl<'a> Drop for Deriver<'a> { fn drop(&mut self) { unsafe { ffi::EVP_PKEY_CTX_free(self.0); } } } #[cfg(test)] mod test { use super::*; use crate::ec::{EcGroup, EcKey}; use crate::nid::Nid; use crate::pkey::PKey; #[test] fn derive_without_peer() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let ec_key = EcKey::generate(&group).unwrap(); let pkey = PKey::from_ec_key(ec_key).unwrap(); let mut deriver = Deriver::new(&pkey).unwrap(); deriver.derive_to_vec().unwrap_err(); } #[test] fn test_ec_key_derive() { let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1).unwrap(); let ec_key = EcKey::generate(&group).unwrap(); let ec_key2 = EcKey::generate(&group).unwrap(); let pkey = PKey::from_ec_key(ec_key).unwrap(); let pkey2 = PKey::from_ec_key(ec_key2).unwrap(); let mut deriver = Deriver::new(&pkey).unwrap(); deriver.set_peer(&pkey2).unwrap(); let shared = deriver.derive_to_vec().unwrap(); assert!(!shared.is_empty()); } } vendor/openssl/src/rsa.rs0000664000175000017500000007255114160055207016266 0ustar mwhudsonmwhudson//! Rivest–Shamir–Adleman cryptosystem //! //! RSA is one of the earliest asymmetric public key encryption schemes. //! Like many other cryptosystems, RSA relies on the presumed difficulty of a hard //! mathematical problem, namely factorization of the product of two large prime //! numbers. At the moment there does not exist an algorithm that can factor such //! large numbers in reasonable time. RSA is used in a wide variety of //! applications including digital signatures and key exchanges such as //! establishing a TLS/SSL connection. //! //! The RSA acronym is derived from the first letters of the surnames of the //! algorithm's founding trio. //! //! # Example //! //! Generate a 2048-bit RSA key pair and use the public key to encrypt some data. //! //! ```rust //! use openssl::rsa::{Rsa, Padding}; //! //! 
let rsa = Rsa::generate(2048).unwrap(); //! let data = b"foobar"; //! let mut buf = vec![0; rsa.size() as usize]; //! let encrypted_len = rsa.public_encrypt(data, &mut buf, Padding::PKCS1).unwrap(); //! ``` use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::fmt; use std::mem; use std::ptr; use crate::bn::{BigNum, BigNumRef}; use crate::error::ErrorStack; use crate::pkey::{HasPrivate, HasPublic, Private, Public}; use crate::util::ForeignTypeRefExt; use crate::{cvt, cvt_n, cvt_p}; /// Type of encryption padding to use. /// /// Random length padding is primarily used to prevent attackers from /// predicting or knowing the exact length of a plaintext message that /// can possibly lead to breaking encryption. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct Padding(c_int); impl Padding { pub const NONE: Padding = Padding(ffi::RSA_NO_PADDING); pub const PKCS1: Padding = Padding(ffi::RSA_PKCS1_PADDING); pub const PKCS1_OAEP: Padding = Padding(ffi::RSA_PKCS1_OAEP_PADDING); pub const PKCS1_PSS: Padding = Padding(ffi::RSA_PKCS1_PSS_PADDING); /// Creates a `Padding` from an integer representation. pub fn from_raw(value: c_int) -> Padding { Padding(value) } /// Returns the integer representation of `Padding`. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } generic_foreign_type_and_impl_send_sync! { type CType = ffi::RSA; fn drop = ffi::RSA_free; /// An RSA key. pub struct Rsa; /// Reference to `RSA` pub struct RsaRef; } impl Clone for Rsa { fn clone(&self) -> Rsa { (**self).to_owned() } } impl ToOwned for RsaRef { type Owned = Rsa; fn to_owned(&self) -> Rsa { unsafe { ffi::RSA_up_ref(self.as_ptr()); Rsa::from_ptr(self.as_ptr()) } } } impl RsaRef where T: HasPrivate, { private_key_to_pem! { /// Serializes the private key to a PEM-encoded PKCS#1 RSAPrivateKey structure. /// /// The output will have a header of `-----BEGIN RSA PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_RSAPrivateKey`]. /// /// [`PEM_write_bio_RSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_RSAPrivateKey.html private_key_to_pem, /// Serializes the private key to a PEM-encoded encrypted PKCS#1 RSAPrivateKey structure. /// /// The output will have a header of `-----BEGIN RSA PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_RSAPrivateKey`]. /// /// [`PEM_write_bio_RSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_RSAPrivateKey.html private_key_to_pem_passphrase, ffi::PEM_write_bio_RSAPrivateKey } to_der! { /// Serializes the private key to a DER-encoded PKCS#1 RSAPrivateKey structure. /// /// This corresponds to [`i2d_RSAPrivateKey`]. /// /// [`i2d_RSAPrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/i2d_RSAPrivateKey.html private_key_to_der, ffi::i2d_RSAPrivateKey } /// Decrypts data using the private key, returning the number of decrypted bytes. /// /// # Panics /// /// Panics if `self` has no private components, or if `to` is smaller /// than `self.size()`. pub fn private_decrypt( &self, from: &[u8], to: &mut [u8], padding: Padding, ) -> Result { assert!(from.len() <= i32::max_value() as usize); assert!(to.len() >= self.size() as usize); unsafe { let len = cvt_n(ffi::RSA_private_decrypt( from.len() as c_int, from.as_ptr(), to.as_mut_ptr(), self.as_ptr(), padding.0, ))?; Ok(len as usize) } } /// Encrypts data using the private key, returning the number of encrypted bytes. 
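///
/// A minimal sketch of a raw PKCS#1 "sign"-style operation with a freshly
/// generated key (for real signatures the higher-level `sign` module is
/// usually the better fit):
///
/// ```no_run
/// use openssl::rsa::{Padding, Rsa};
///
/// let key = Rsa::generate(2048).unwrap();
/// let msg = b"data to protect";
/// let mut encrypted = vec![0; key.size() as usize];
/// let len = key.private_encrypt(msg, &mut encrypted, Padding::PKCS1).unwrap();
/// // With PKCS#1 padding the output is always exactly one modulus in length.
/// assert_eq!(len, key.size() as usize);
/// ```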
/// /// # Panics /// /// Panics if `self` has no private components, or if `to` is smaller /// than `self.size()`. pub fn private_encrypt( &self, from: &[u8], to: &mut [u8], padding: Padding, ) -> Result { assert!(from.len() <= i32::max_value() as usize); assert!(to.len() >= self.size() as usize); unsafe { let len = cvt_n(ffi::RSA_private_encrypt( from.len() as c_int, from.as_ptr(), to.as_mut_ptr(), self.as_ptr(), padding.0, ))?; Ok(len as usize) } } /// Returns a reference to the private exponent of the key. /// /// This corresponds to [`RSA_get0_key`]. /// /// [`RSA_get0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn d(&self) -> &BigNumRef { unsafe { let mut d = ptr::null(); RSA_get0_key(self.as_ptr(), ptr::null_mut(), ptr::null_mut(), &mut d); BigNumRef::from_const_ptr(d) } } /// Returns a reference to the first factor of the exponent of the key. /// /// This corresponds to [`RSA_get0_factors`]. /// /// [`RSA_get0_factors`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn p(&self) -> Option<&BigNumRef> { unsafe { let mut p = ptr::null(); RSA_get0_factors(self.as_ptr(), &mut p, ptr::null_mut()); BigNumRef::from_const_ptr_opt(p) } } /// Returns a reference to the second factor of the exponent of the key. /// /// This corresponds to [`RSA_get0_factors`]. /// /// [`RSA_get0_factors`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn q(&self) -> Option<&BigNumRef> { unsafe { let mut q = ptr::null(); RSA_get0_factors(self.as_ptr(), ptr::null_mut(), &mut q); BigNumRef::from_const_ptr_opt(q) } } /// Returns a reference to the first exponent used for CRT calculations. /// /// This corresponds to [`RSA_get0_crt_params`]. /// /// [`RSA_get0_crt_params`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn dmp1(&self) -> Option<&BigNumRef> { unsafe { let mut dp = ptr::null(); RSA_get0_crt_params(self.as_ptr(), &mut dp, ptr::null_mut(), ptr::null_mut()); BigNumRef::from_const_ptr_opt(dp) } } /// Returns a reference to the second exponent used for CRT calculations. /// /// This corresponds to [`RSA_get0_crt_params`]. /// /// [`RSA_get0_crt_params`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn dmq1(&self) -> Option<&BigNumRef> { unsafe { let mut dq = ptr::null(); RSA_get0_crt_params(self.as_ptr(), ptr::null_mut(), &mut dq, ptr::null_mut()); BigNumRef::from_const_ptr_opt(dq) } } /// Returns a reference to the coefficient used for CRT calculations. /// /// This corresponds to [`RSA_get0_crt_params`]. /// /// [`RSA_get0_crt_params`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn iqmp(&self) -> Option<&BigNumRef> { unsafe { let mut qi = ptr::null(); RSA_get0_crt_params(self.as_ptr(), ptr::null_mut(), ptr::null_mut(), &mut qi); BigNumRef::from_const_ptr_opt(qi) } } /// Validates RSA parameters for correctness /// /// This corresponds to [`RSA_check_key`]. /// /// [`RSA_check_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_check_key.html pub fn check_key(&self) -> Result { unsafe { let result = ffi::RSA_check_key(self.as_ptr()) as i32; if result == -1 { Err(ErrorStack::get()) } else { Ok(result == 1) } } } } impl RsaRef where T: HasPublic, { to_pem! { /// Serializes the public key into a PEM-encoded SubjectPublicKeyInfo structure. /// /// The output will have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_write_bio_RSA_PUBKEY`]. 
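///
/// A short sketch of a PEM round trip with a freshly generated key (the same
/// pattern appears in `test_private_encrypt` in this module's tests):
///
/// ```no_run
/// use openssl::rsa::Rsa;
///
/// let key = Rsa::generate(2048).unwrap();
/// let pem = key.public_key_to_pem().unwrap();
/// let public_only = Rsa::public_key_from_pem(&pem).unwrap();
/// assert_eq!(public_only.size(), key.size());
/// ```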
/// /// [`PEM_write_bio_RSA_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/pem.html public_key_to_pem, ffi::PEM_write_bio_RSA_PUBKEY } to_der! { /// Serializes the public key into a DER-encoded SubjectPublicKeyInfo structure. /// /// This corresponds to [`i2d_RSA_PUBKEY`]. /// /// [`i2d_RSA_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_RSA_PUBKEY.html public_key_to_der, ffi::i2d_RSA_PUBKEY } to_pem! { /// Serializes the public key into a PEM-encoded PKCS#1 RSAPublicKey structure. /// /// The output will have a header of `-----BEGIN RSA PUBLIC KEY-----`. /// /// This corresponds to [`PEM_write_bio_RSAPublicKey`]. /// /// [`PEM_write_bio_RSAPublicKey`]: https://www.openssl.org/docs/man1.0.2/crypto/pem.html public_key_to_pem_pkcs1, ffi::PEM_write_bio_RSAPublicKey } to_der! { /// Serializes the public key into a DER-encoded PKCS#1 RSAPublicKey structure. /// /// This corresponds to [`i2d_RSAPublicKey`]. /// /// [`i2d_RSAPublicKey`]: https://www.openssl.org/docs/man1.0.2/crypto/i2d_RSAPublicKey.html public_key_to_der_pkcs1, ffi::i2d_RSAPublicKey } /// Returns the size of the modulus in bytes. /// /// This corresponds to [`RSA_size`]. /// /// [`RSA_size`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_size.html pub fn size(&self) -> u32 { unsafe { ffi::RSA_size(self.as_ptr()) as u32 } } /// Decrypts data using the public key, returning the number of decrypted bytes. /// /// # Panics /// /// Panics if `to` is smaller than `self.size()`. pub fn public_decrypt( &self, from: &[u8], to: &mut [u8], padding: Padding, ) -> Result { assert!(from.len() <= i32::max_value() as usize); assert!(to.len() >= self.size() as usize); unsafe { let len = cvt_n(ffi::RSA_public_decrypt( from.len() as c_int, from.as_ptr(), to.as_mut_ptr(), self.as_ptr(), padding.0, ))?; Ok(len as usize) } } /// Encrypts data using the public key, returning the number of encrypted bytes. /// /// # Panics /// /// Panics if `to` is smaller than `self.size()`. pub fn public_encrypt( &self, from: &[u8], to: &mut [u8], padding: Padding, ) -> Result { assert!(from.len() <= i32::max_value() as usize); assert!(to.len() >= self.size() as usize); unsafe { let len = cvt_n(ffi::RSA_public_encrypt( from.len() as c_int, from.as_ptr(), to.as_mut_ptr(), self.as_ptr(), padding.0, ))?; Ok(len as usize) } } /// Returns a reference to the modulus of the key. /// /// This corresponds to [`RSA_get0_key`]. /// /// [`RSA_get0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn n(&self) -> &BigNumRef { unsafe { let mut n = ptr::null(); RSA_get0_key(self.as_ptr(), &mut n, ptr::null_mut(), ptr::null_mut()); BigNumRef::from_const_ptr(n) } } /// Returns a reference to the public exponent of the key. /// /// This corresponds to [`RSA_get0_key`]. /// /// [`RSA_get0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_get0_key.html pub fn e(&self) -> &BigNumRef { unsafe { let mut e = ptr::null(); RSA_get0_key(self.as_ptr(), ptr::null_mut(), &mut e, ptr::null_mut()); BigNumRef::from_const_ptr(e) } } } impl Rsa { /// Creates a new RSA key with only public components. /// /// `n` is the modulus common to both public and private key. /// `e` is the public exponent. /// /// This corresponds to [`RSA_new`] and uses [`RSA_set0_key`]. 
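///
/// A minimal sketch (the modulus below is a tiny placeholder, not a usable key):
///
/// ```no_run
/// use openssl::bn::BigNum;
/// use openssl::rsa::Rsa;
///
/// let n = BigNum::from_hex_str("D9A7C1F3").unwrap(); // placeholder modulus
/// let e = BigNum::from_u32(65537).unwrap(); // the common public exponent F4
/// let public_key = Rsa::from_public_components(n, e).unwrap();
/// ```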
/// /// [`RSA_new`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_new.html /// [`RSA_set0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_set0_key.html pub fn from_public_components(n: BigNum, e: BigNum) -> Result, ErrorStack> { unsafe { let rsa = cvt_p(ffi::RSA_new())?; RSA_set0_key(rsa, n.as_ptr(), e.as_ptr(), ptr::null_mut()); mem::forget((n, e)); Ok(Rsa::from_ptr(rsa)) } } from_pem! { /// Decodes a PEM-encoded SubjectPublicKeyInfo structure containing an RSA key. /// /// The input should have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_read_bio_RSA_PUBKEY`]. /// /// [`PEM_read_bio_RSA_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_RSA_PUBKEY.html public_key_from_pem, Rsa, ffi::PEM_read_bio_RSA_PUBKEY } from_pem! { /// Decodes a PEM-encoded PKCS#1 RSAPublicKey structure. /// /// The input should have a header of `-----BEGIN RSA PUBLIC KEY-----`. /// /// This corresponds to [`PEM_read_bio_RSAPublicKey`]. /// /// [`PEM_read_bio_RSAPublicKey`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_RSAPublicKey.html public_key_from_pem_pkcs1, Rsa, ffi::PEM_read_bio_RSAPublicKey } from_der! { /// Decodes a DER-encoded SubjectPublicKeyInfo structure containing an RSA key. /// /// This corresponds to [`d2i_RSA_PUBKEY`]. /// /// [`d2i_RSA_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_RSA_PUBKEY.html public_key_from_der, Rsa, ffi::d2i_RSA_PUBKEY } from_der! { /// Decodes a DER-encoded PKCS#1 RSAPublicKey structure. /// /// This corresponds to [`d2i_RSAPublicKey`]. /// /// [`d2i_RSAPublicKey`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_RSA_PUBKEY.html public_key_from_der_pkcs1, Rsa, ffi::d2i_RSAPublicKey } } pub struct RsaPrivateKeyBuilder { rsa: Rsa, } impl RsaPrivateKeyBuilder { /// Creates a new `RsaPrivateKeyBuilder`. /// /// `n` is the modulus common to both public and private key. /// `e` is the public exponent and `d` is the private exponent. /// /// This corresponds to [`RSA_new`] and uses [`RSA_set0_key`]. /// /// [`RSA_new`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_new.html /// [`RSA_set0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_set0_key.html pub fn new(n: BigNum, e: BigNum, d: BigNum) -> Result { unsafe { let rsa = cvt_p(ffi::RSA_new())?; RSA_set0_key(rsa, n.as_ptr(), e.as_ptr(), d.as_ptr()); mem::forget((n, e, d)); Ok(RsaPrivateKeyBuilder { rsa: Rsa::from_ptr(rsa), }) } } /// Sets the factors of the Rsa key. /// /// `p` and `q` are the first and second factors of `n`. /// /// This correspond to [`RSA_set0_factors`]. /// /// [`RSA_set0_factors`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_set0_factors.html // FIXME should be infallible pub fn set_factors(self, p: BigNum, q: BigNum) -> Result { unsafe { RSA_set0_factors(self.rsa.as_ptr(), p.as_ptr(), q.as_ptr()); mem::forget((p, q)); } Ok(self) } /// Sets the Chinese Remainder Theorem params of the Rsa key. /// /// `dmp1`, `dmq1`, and `iqmp` are the exponents and coefficient for /// CRT calculations which is used to speed up RSA operations. /// /// This correspond to [`RSA_set0_crt_params`]. /// /// [`RSA_set0_crt_params`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_set0_crt_params.html // FIXME should be infallible pub fn set_crt_params( self, dmp1: BigNum, dmq1: BigNum, iqmp: BigNum, ) -> Result { unsafe { RSA_set0_crt_params( self.rsa.as_ptr(), dmp1.as_ptr(), dmq1.as_ptr(), iqmp.as_ptr(), ); mem::forget((dmp1, dmq1, iqmp)); } Ok(self) } /// Returns the Rsa key. 
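///
/// A sketch of the intended call pattern; the `BigNum` components stand in for
/// real key material and are not defined in this snippet:
///
/// ```ignore
/// let rsa = RsaPrivateKeyBuilder::new(n, e, d)?
///     .set_factors(p, q)?
///     .set_crt_params(dmp1, dmq1, iqmp)?
///     .build();
/// ```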
pub fn build(self) -> Rsa { self.rsa } } impl Rsa { /// Creates a new RSA key with private components (public components are assumed). /// /// This a convenience method over /// `Rsa::build(n, e, d)?.set_factors(p, q)?.set_crt_params(dmp1, dmq1, iqmp)?.build()` #[allow(clippy::too_many_arguments, clippy::many_single_char_names)] pub fn from_private_components( n: BigNum, e: BigNum, d: BigNum, p: BigNum, q: BigNum, dmp1: BigNum, dmq1: BigNum, iqmp: BigNum, ) -> Result, ErrorStack> { Ok(RsaPrivateKeyBuilder::new(n, e, d)? .set_factors(p, q)? .set_crt_params(dmp1, dmq1, iqmp)? .build()) } /// Generates a public/private key pair with the specified size. /// /// The public exponent will be 65537. /// /// This corresponds to [`RSA_generate_key_ex`]. /// /// [`RSA_generate_key_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_generate_key_ex.html pub fn generate(bits: u32) -> Result, ErrorStack> { let e = BigNum::from_u32(ffi::RSA_F4 as u32)?; Rsa::generate_with_e(bits, &e) } /// Generates a public/private key pair with the specified size and a custom exponent. /// /// Unless you have specific needs and know what you're doing, use `Rsa::generate` instead. /// /// This corresponds to [`RSA_generate_key_ex`]. /// /// [`RSA_generate_key_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/RSA_generate_key_ex.html pub fn generate_with_e(bits: u32, e: &BigNumRef) -> Result, ErrorStack> { unsafe { let rsa = Rsa::from_ptr(cvt_p(ffi::RSA_new())?); cvt(ffi::RSA_generate_key_ex( rsa.0, bits as c_int, e.as_ptr(), ptr::null_mut(), ))?; Ok(rsa) } } // FIXME these need to identify input formats private_key_from_pem! { /// Deserializes a private key from a PEM-encoded PKCS#1 RSAPrivateKey structure. /// /// This corresponds to [`PEM_read_bio_RSAPrivateKey`]. /// /// [`PEM_read_bio_RSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_RSAPrivateKey.html private_key_from_pem, /// Deserializes a private key from a PEM-encoded encrypted PKCS#1 RSAPrivateKey structure. /// /// This corresponds to [`PEM_read_bio_RSAPrivateKey`]. /// /// [`PEM_read_bio_RSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_RSAPrivateKey.html private_key_from_pem_passphrase, /// Deserializes a private key from a PEM-encoded encrypted PKCS#1 RSAPrivateKey structure. /// /// The callback should fill the password into the provided buffer and return its length. /// /// This corresponds to [`PEM_read_bio_RSAPrivateKey`]. /// /// [`PEM_read_bio_RSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_RSAPrivateKey.html private_key_from_pem_callback, Rsa, ffi::PEM_read_bio_RSAPrivateKey } from_der! { /// Decodes a DER-encoded PKCS#1 RSAPrivateKey structure. /// /// This corresponds to [`d2i_RSAPrivateKey`]. /// /// [`d2i_RSAPrivateKey`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_RSA_PUBKEY.html private_key_from_der, Rsa, ffi::d2i_RSAPrivateKey } } impl fmt::Debug for Rsa { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "Rsa") } } cfg_if! 
{ if #[cfg(any(ossl110, libressl273))] { use ffi::{ RSA_get0_key, RSA_get0_factors, RSA_get0_crt_params, RSA_set0_key, RSA_set0_factors, RSA_set0_crt_params, }; } else { #[allow(bad_style)] unsafe fn RSA_get0_key( r: *const ffi::RSA, n: *mut *const ffi::BIGNUM, e: *mut *const ffi::BIGNUM, d: *mut *const ffi::BIGNUM, ) { if !n.is_null() { *n = (*r).n; } if !e.is_null() { *e = (*r).e; } if !d.is_null() { *d = (*r).d; } } #[allow(bad_style)] unsafe fn RSA_get0_factors( r: *const ffi::RSA, p: *mut *const ffi::BIGNUM, q: *mut *const ffi::BIGNUM, ) { if !p.is_null() { *p = (*r).p; } if !q.is_null() { *q = (*r).q; } } #[allow(bad_style)] unsafe fn RSA_get0_crt_params( r: *const ffi::RSA, dmp1: *mut *const ffi::BIGNUM, dmq1: *mut *const ffi::BIGNUM, iqmp: *mut *const ffi::BIGNUM, ) { if !dmp1.is_null() { *dmp1 = (*r).dmp1; } if !dmq1.is_null() { *dmq1 = (*r).dmq1; } if !iqmp.is_null() { *iqmp = (*r).iqmp; } } #[allow(bad_style)] unsafe fn RSA_set0_key( r: *mut ffi::RSA, n: *mut ffi::BIGNUM, e: *mut ffi::BIGNUM, d: *mut ffi::BIGNUM, ) -> c_int { (*r).n = n; (*r).e = e; (*r).d = d; 1 } #[allow(bad_style)] unsafe fn RSA_set0_factors( r: *mut ffi::RSA, p: *mut ffi::BIGNUM, q: *mut ffi::BIGNUM, ) -> c_int { (*r).p = p; (*r).q = q; 1 } #[allow(bad_style)] unsafe fn RSA_set0_crt_params( r: *mut ffi::RSA, dmp1: *mut ffi::BIGNUM, dmq1: *mut ffi::BIGNUM, iqmp: *mut ffi::BIGNUM, ) -> c_int { (*r).dmp1 = dmp1; (*r).dmq1 = dmq1; (*r).iqmp = iqmp; 1 } } } #[cfg(test)] mod test { use crate::symm::Cipher; use super::*; #[test] fn test_from_password() { let key = include_bytes!("../test/rsa-encrypted.pem"); Rsa::private_key_from_pem_passphrase(key, b"mypass").unwrap(); } #[test] fn test_from_password_callback() { let mut password_queried = false; let key = include_bytes!("../test/rsa-encrypted.pem"); Rsa::private_key_from_pem_callback(key, |password| { password_queried = true; password[..6].copy_from_slice(b"mypass"); Ok(6) }) .unwrap(); assert!(password_queried); } #[test] fn test_to_password() { let key = Rsa::generate(2048).unwrap(); let pem = key .private_key_to_pem_passphrase(Cipher::aes_128_cbc(), b"foobar") .unwrap(); Rsa::private_key_from_pem_passphrase(&pem, b"foobar").unwrap(); assert!(Rsa::private_key_from_pem_passphrase(&pem, b"fizzbuzz").is_err()); } #[test] fn test_public_encrypt_private_decrypt_with_padding() { let key = include_bytes!("../test/rsa.pem.pub"); let public_key = Rsa::public_key_from_pem(key).unwrap(); let mut result = vec![0; public_key.size() as usize]; let original_data = b"This is test"; let len = public_key .public_encrypt(original_data, &mut result, Padding::PKCS1) .unwrap(); assert_eq!(len, 256); let pkey = include_bytes!("../test/rsa.pem"); let private_key = Rsa::private_key_from_pem(pkey).unwrap(); let mut dec_result = vec![0; private_key.size() as usize]; let len = private_key .private_decrypt(&result, &mut dec_result, Padding::PKCS1) .unwrap(); assert_eq!(&dec_result[..len], original_data); } #[test] fn test_private_encrypt() { let k0 = super::Rsa::generate(512).unwrap(); let k0pkey = k0.public_key_to_pem().unwrap(); let k1 = super::Rsa::public_key_from_pem(&k0pkey).unwrap(); let msg = vec![0xdeu8, 0xadu8, 0xd0u8, 0x0du8]; let mut emesg = vec![0; k0.size() as usize]; k0.private_encrypt(&msg, &mut emesg, Padding::PKCS1) .unwrap(); let mut dmesg = vec![0; k1.size() as usize]; let len = k1 .public_decrypt(&emesg, &mut dmesg, Padding::PKCS1) .unwrap(); assert_eq!(msg, &dmesg[..len]); } #[test] fn test_public_encrypt() { let k0 = super::Rsa::generate(512).unwrap(); let k0pkey = 
k0.private_key_to_pem().unwrap(); let k1 = super::Rsa::private_key_from_pem(&k0pkey).unwrap(); let msg = vec![0xdeu8, 0xadu8, 0xd0u8, 0x0du8]; let mut emesg = vec![0; k0.size() as usize]; k0.public_encrypt(&msg, &mut emesg, Padding::PKCS1).unwrap(); let mut dmesg = vec![0; k1.size() as usize]; let len = k1 .private_decrypt(&emesg, &mut dmesg, Padding::PKCS1) .unwrap(); assert_eq!(msg, &dmesg[..len]); } #[test] fn test_public_key_from_pem_pkcs1() { let key = include_bytes!("../test/pkcs1.pem.pub"); Rsa::public_key_from_pem_pkcs1(key).unwrap(); } #[test] #[should_panic] fn test_public_key_from_pem_pkcs1_file_panic() { let key = include_bytes!("../test/key.pem.pub"); Rsa::public_key_from_pem_pkcs1(key).unwrap(); } #[test] fn test_public_key_to_pem_pkcs1() { let keypair = super::Rsa::generate(512).unwrap(); let pubkey_pem = keypair.public_key_to_pem_pkcs1().unwrap(); super::Rsa::public_key_from_pem_pkcs1(&pubkey_pem).unwrap(); } #[test] #[should_panic] fn test_public_key_from_pem_pkcs1_generate_panic() { let keypair = super::Rsa::generate(512).unwrap(); let pubkey_pem = keypair.public_key_to_pem().unwrap(); super::Rsa::public_key_from_pem_pkcs1(&pubkey_pem).unwrap(); } #[test] fn test_pem_pkcs1_encrypt() { let keypair = super::Rsa::generate(2048).unwrap(); let pubkey_pem = keypair.public_key_to_pem_pkcs1().unwrap(); let pubkey = super::Rsa::public_key_from_pem_pkcs1(&pubkey_pem).unwrap(); let msg = b"Hello, world!"; let mut encrypted = vec![0; pubkey.size() as usize]; let len = pubkey .public_encrypt(msg, &mut encrypted, Padding::PKCS1) .unwrap(); assert!(len > msg.len()); let mut decrypted = vec![0; keypair.size() as usize]; let len = keypair .private_decrypt(&encrypted, &mut decrypted, Padding::PKCS1) .unwrap(); assert_eq!(len, msg.len()); assert_eq!(&decrypted[..len], msg); } #[test] fn test_pem_pkcs1_padding() { let keypair = super::Rsa::generate(2048).unwrap(); let pubkey_pem = keypair.public_key_to_pem_pkcs1().unwrap(); let pubkey = super::Rsa::public_key_from_pem_pkcs1(&pubkey_pem).unwrap(); let msg = b"foo"; let mut encrypted1 = vec![0; pubkey.size() as usize]; let mut encrypted2 = vec![0; pubkey.size() as usize]; let len1 = pubkey .public_encrypt(msg, &mut encrypted1, Padding::PKCS1) .unwrap(); let len2 = pubkey .public_encrypt(msg, &mut encrypted2, Padding::PKCS1) .unwrap(); assert!(len1 > (msg.len() + 1)); assert_eq!(len1, len2); assert_ne!(encrypted1, encrypted2); } #[test] #[allow(clippy::redundant_clone)] fn clone() { let key = Rsa::generate(2048).unwrap(); drop(key.clone()); } #[test] fn generate_with_e() { let e = BigNum::from_u32(0x10001).unwrap(); Rsa::generate_with_e(2048, &e).unwrap(); } } vendor/openssl/src/dsa.rs0000664000175000017500000003410514160055207016241 0ustar mwhudsonmwhudson//! Digital Signatures //! //! DSA ensures a message originated from a known sender, and was not modified. //! DSA uses asymmetrical keys and an algorithm to output a signature of the message //! using the private key that can be validated with the public key but not be generated //! without the private key. use cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::fmt; use std::mem; use std::ptr; use crate::bn::{BigNum, BigNumRef}; use crate::error::ErrorStack; use crate::pkey::{HasParams, HasPrivate, HasPublic, Private, Public}; use crate::util::ForeignTypeRefExt; use crate::{cvt, cvt_p}; generic_foreign_type_and_impl_send_sync! { type CType = ffi::DSA; fn drop = ffi::DSA_free; /// Object representing DSA keys. 
/// /// A DSA object contains the parameters p, q, and g. There is a private /// and public key. The values p, g, and q are: /// /// * `p`: DSA prime parameter /// * `q`: DSA sub-prime parameter /// * `g`: DSA base parameter /// /// These values are used to calculate a pair of asymmetrical keys used for /// signing. /// /// OpenSSL documentation at [`DSA_new`] /// /// [`DSA_new`]: https://www.openssl.org/docs/man1.1.0/crypto/DSA_new.html /// /// # Examples /// /// ``` /// use openssl::dsa::Dsa; /// use openssl::error::ErrorStack; /// use openssl::pkey::Private; /// /// fn create_dsa() -> Result, ErrorStack> { /// let sign = Dsa::generate(2048)?; /// Ok(sign) /// } /// # fn main() { /// # create_dsa(); /// # } /// ``` pub struct Dsa; /// Reference to [`Dsa`]. /// /// [`Dsa`]: struct.Dsa.html pub struct DsaRef; } impl Clone for Dsa { fn clone(&self) -> Dsa { (**self).to_owned() } } impl ToOwned for DsaRef { type Owned = Dsa; fn to_owned(&self) -> Dsa { unsafe { ffi::DSA_up_ref(self.as_ptr()); Dsa::from_ptr(self.as_ptr()) } } } impl DsaRef where T: HasPublic, { to_pem! { /// Serialies the public key into a PEM-encoded SubjectPublicKeyInfo structure. /// /// The output will have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_write_bio_DSA_PUBKEY`]. /// /// [`PEM_write_bio_DSA_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_DSA_PUBKEY.html public_key_to_pem, ffi::PEM_write_bio_DSA_PUBKEY } to_der! { /// Serializes the public key into a DER-encoded SubjectPublicKeyInfo structure. /// /// This corresponds to [`i2d_DSA_PUBKEY`]. /// /// [`i2d_DSA_PUBKEY`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_DSA_PUBKEY.html public_key_to_der, ffi::i2d_DSA_PUBKEY } /// Returns a reference to the public key component of `self`. pub fn pub_key(&self) -> &BigNumRef { unsafe { let mut pub_key = ptr::null(); DSA_get0_key(self.as_ptr(), &mut pub_key, ptr::null_mut()); BigNumRef::from_const_ptr(pub_key) } } } impl DsaRef where T: HasPrivate, { private_key_to_pem! { /// Serializes the private key to a PEM-encoded DSAPrivateKey structure. /// /// The output will have a header of `-----BEGIN DSA PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_DSAPrivateKey`]. /// /// [`PEM_write_bio_DSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_DSAPrivateKey.html private_key_to_pem, /// Serializes the private key to a PEM-encoded encrypted DSAPrivateKey structure. /// /// The output will have a header of `-----BEGIN DSA PRIVATE KEY-----`. /// /// This corresponds to [`PEM_write_bio_DSAPrivateKey`]. /// /// [`PEM_write_bio_DSAPrivateKey`]: https://www.openssl.org/docs/man1.1.0/crypto/PEM_write_bio_DSAPrivateKey.html private_key_to_pem_passphrase, ffi::PEM_write_bio_DSAPrivateKey } /// Returns a reference to the private key component of `self`. pub fn priv_key(&self) -> &BigNumRef { unsafe { let mut priv_key = ptr::null(); DSA_get0_key(self.as_ptr(), ptr::null_mut(), &mut priv_key); BigNumRef::from_const_ptr(priv_key) } } } impl DsaRef where T: HasParams, { /// Returns the maximum size of the signature output by `self` in bytes. /// /// OpenSSL documentation at [`DSA_size`] /// /// [`DSA_size`]: https://www.openssl.org/docs/man1.1.0/crypto/DSA_size.html pub fn size(&self) -> u32 { unsafe { ffi::DSA_size(self.as_ptr()) as u32 } } /// Returns the DSA prime parameter of `self`. 
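///
/// A short sketch of reading the shared parameters back out of a generated
/// key (parameter generation for 2048-bit keys can take a little while):
///
/// ```no_run
/// use openssl::dsa::Dsa;
///
/// let dsa = Dsa::generate(2048).unwrap();
/// // The prime `p` is much larger than the sub-prime `q`; the base `g` lies below `p`.
/// assert!(dsa.p().num_bits() > dsa.q().num_bits());
/// assert!(dsa.g().num_bits() <= dsa.p().num_bits());
/// ```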
pub fn p(&self) -> &BigNumRef { unsafe { let mut p = ptr::null(); DSA_get0_pqg(self.as_ptr(), &mut p, ptr::null_mut(), ptr::null_mut()); BigNumRef::from_const_ptr(p) } } /// Returns the DSA sub-prime parameter of `self`. pub fn q(&self) -> &BigNumRef { unsafe { let mut q = ptr::null(); DSA_get0_pqg(self.as_ptr(), ptr::null_mut(), &mut q, ptr::null_mut()); BigNumRef::from_const_ptr(q) } } /// Returns the DSA base parameter of `self`. pub fn g(&self) -> &BigNumRef { unsafe { let mut g = ptr::null(); DSA_get0_pqg(self.as_ptr(), ptr::null_mut(), ptr::null_mut(), &mut g); BigNumRef::from_const_ptr(g) } } } impl Dsa { /// Generate a DSA key pair. /// /// Calls [`DSA_generate_parameters_ex`] to populate the `p`, `g`, and `q` values. /// These values are used to generate the key pair with [`DSA_generate_key`]. /// /// The `bits` parameter corresponds to the length of the prime `p`. /// /// [`DSA_generate_parameters_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/DSA_generate_parameters_ex.html /// [`DSA_generate_key`]: https://www.openssl.org/docs/man1.1.0/crypto/DSA_generate_key.html pub fn generate(bits: u32) -> Result, ErrorStack> { ffi::init(); unsafe { let dsa = Dsa::from_ptr(cvt_p(ffi::DSA_new())?); cvt(ffi::DSA_generate_parameters_ex( dsa.0, bits as c_int, ptr::null(), 0, ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ))?; cvt(ffi::DSA_generate_key(dsa.0))?; Ok(dsa) } } /// Create a DSA key pair with the given parameters /// /// `p`, `q` and `g` are the common parameters. /// `priv_key` is the private component of the key pair. /// `pub_key` is the public component of the key. Can be computed via `g^(priv_key) mod p` pub fn from_private_components( p: BigNum, q: BigNum, g: BigNum, priv_key: BigNum, pub_key: BigNum, ) -> Result, ErrorStack> { ffi::init(); unsafe { let dsa = Dsa::from_ptr(cvt_p(ffi::DSA_new())?); cvt(DSA_set0_pqg(dsa.0, p.as_ptr(), q.as_ptr(), g.as_ptr()))?; mem::forget((p, q, g)); cvt(DSA_set0_key(dsa.0, pub_key.as_ptr(), priv_key.as_ptr()))?; mem::forget((pub_key, priv_key)); Ok(dsa) } } } impl Dsa { from_pem! { /// Decodes a PEM-encoded SubjectPublicKeyInfo structure containing a DSA key. /// /// The input should have a header of `-----BEGIN PUBLIC KEY-----`. /// /// This corresponds to [`PEM_read_bio_DSA_PUBKEY`]. /// /// [`PEM_read_bio_DSA_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_DSA_PUBKEY.html public_key_from_pem, Dsa, ffi::PEM_read_bio_DSA_PUBKEY } from_der! { /// Decodes a DER-encoded SubjectPublicKeyInfo structure containing a DSA key. /// /// This corresponds to [`d2i_DSA_PUBKEY`]. /// /// [`d2i_DSA_PUBKEY`]: https://www.openssl.org/docs/man1.0.2/crypto/d2i_DSA_PUBKEY.html public_key_from_der, Dsa, ffi::d2i_DSA_PUBKEY } /// Create a new DSA key with only public components. /// /// `p`, `q` and `g` are the common parameters. /// `pub_key` is the public component of the key. pub fn from_public_components( p: BigNum, q: BigNum, g: BigNum, pub_key: BigNum, ) -> Result, ErrorStack> { ffi::init(); unsafe { let dsa = Dsa::from_ptr(cvt_p(ffi::DSA_new())?); cvt(DSA_set0_pqg(dsa.0, p.as_ptr(), q.as_ptr(), g.as_ptr()))?; mem::forget((p, q, g)); cvt(DSA_set0_key(dsa.0, pub_key.as_ptr(), ptr::null_mut()))?; mem::forget(pub_key); Ok(dsa) } } } impl fmt::Debug for Dsa { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "DSA") } } cfg_if! 
{ if #[cfg(any(ossl110, libressl273))] { use ffi::{DSA_get0_key, DSA_get0_pqg, DSA_set0_key, DSA_set0_pqg}; } else { #[allow(bad_style)] unsafe fn DSA_get0_pqg( d: *mut ffi::DSA, p: *mut *const ffi::BIGNUM, q: *mut *const ffi::BIGNUM, g: *mut *const ffi::BIGNUM) { if !p.is_null() { *p = (*d).p; } if !q.is_null() { *q = (*d).q; } if !g.is_null() { *g = (*d).g; } } #[allow(bad_style)] unsafe fn DSA_get0_key( d: *mut ffi::DSA, pub_key: *mut *const ffi::BIGNUM, priv_key: *mut *const ffi::BIGNUM) { if !pub_key.is_null() { *pub_key = (*d).pub_key; } if !priv_key.is_null() { *priv_key = (*d).priv_key; } } #[allow(bad_style)] unsafe fn DSA_set0_key( d: *mut ffi::DSA, pub_key: *mut ffi::BIGNUM, priv_key: *mut ffi::BIGNUM) -> c_int { (*d).pub_key = pub_key; (*d).priv_key = priv_key; 1 } #[allow(bad_style)] unsafe fn DSA_set0_pqg( d: *mut ffi::DSA, p: *mut ffi::BIGNUM, q: *mut ffi::BIGNUM, g: *mut ffi::BIGNUM) -> c_int { (*d).p = p; (*d).q = q; (*d).g = g; 1 } } } #[cfg(test)] mod test { use super::*; use crate::bn::BigNumContext; use crate::hash::MessageDigest; use crate::pkey::PKey; use crate::sign::{Signer, Verifier}; #[test] pub fn test_generate() { Dsa::generate(1024).unwrap(); } #[test] fn test_pubkey_generation() { let dsa = Dsa::generate(1024).unwrap(); let p = dsa.p(); let g = dsa.g(); let priv_key = dsa.priv_key(); let pub_key = dsa.pub_key(); let mut ctx = BigNumContext::new().unwrap(); let mut calc = BigNum::new().unwrap(); calc.mod_exp(g, priv_key, p, &mut ctx).unwrap(); assert_eq!(&calc, pub_key) } #[test] fn test_priv_key_from_parts() { let p = BigNum::from_u32(283).unwrap(); let q = BigNum::from_u32(47).unwrap(); let g = BigNum::from_u32(60).unwrap(); let priv_key = BigNum::from_u32(15).unwrap(); let pub_key = BigNum::from_u32(207).unwrap(); let dsa = Dsa::from_private_components(p, q, g, priv_key, pub_key).unwrap(); assert_eq!(dsa.pub_key(), &BigNum::from_u32(207).unwrap()); assert_eq!(dsa.priv_key(), &BigNum::from_u32(15).unwrap()); assert_eq!(dsa.p(), &BigNum::from_u32(283).unwrap()); assert_eq!(dsa.q(), &BigNum::from_u32(47).unwrap()); assert_eq!(dsa.g(), &BigNum::from_u32(60).unwrap()); } #[test] fn test_pub_key_from_parts() { let p = BigNum::from_u32(283).unwrap(); let q = BigNum::from_u32(47).unwrap(); let g = BigNum::from_u32(60).unwrap(); let pub_key = BigNum::from_u32(207).unwrap(); let dsa = Dsa::from_public_components(p, q, g, pub_key).unwrap(); assert_eq!(dsa.pub_key(), &BigNum::from_u32(207).unwrap()); assert_eq!(dsa.p(), &BigNum::from_u32(283).unwrap()); assert_eq!(dsa.q(), &BigNum::from_u32(47).unwrap()); assert_eq!(dsa.g(), &BigNum::from_u32(60).unwrap()); } #[test] fn test_signature() { const TEST_DATA: &[u8] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; let dsa_ref = Dsa::generate(1024).unwrap(); let p = dsa_ref.p(); let q = dsa_ref.q(); let g = dsa_ref.g(); let pub_key = dsa_ref.pub_key(); let priv_key = dsa_ref.priv_key(); let priv_key = Dsa::from_private_components( BigNumRef::to_owned(p).unwrap(), BigNumRef::to_owned(q).unwrap(), BigNumRef::to_owned(g).unwrap(), BigNumRef::to_owned(priv_key).unwrap(), BigNumRef::to_owned(pub_key).unwrap(), ) .unwrap(); let priv_key = PKey::from_dsa(priv_key).unwrap(); let pub_key = Dsa::from_public_components( BigNumRef::to_owned(p).unwrap(), BigNumRef::to_owned(q).unwrap(), BigNumRef::to_owned(g).unwrap(), BigNumRef::to_owned(pub_key).unwrap(), ) .unwrap(); let pub_key = PKey::from_dsa(pub_key).unwrap(); let mut signer = Signer::new(MessageDigest::sha256(), &priv_key).unwrap(); signer.update(TEST_DATA).unwrap(); let signature = 
signer.sign_to_vec().unwrap(); let mut verifier = Verifier::new(MessageDigest::sha256(), &pub_key).unwrap(); verifier.update(TEST_DATA).unwrap(); assert!(verifier.verify(&signature[..]).unwrap()); } #[test] #[allow(clippy::redundant_clone)] fn clone() { let key = Dsa::generate(2048).unwrap(); drop(key.clone()); } } vendor/openssl/src/pkcs12.rs0000664000175000017500000002053614160055207016600 0ustar mwhudsonmwhudson//! PKCS #12 archives. use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::ffi::CString; use std::ptr; use crate::error::ErrorStack; use crate::nid::Nid; use crate::pkey::{HasPrivate, PKey, PKeyRef, Private}; use crate::stack::Stack; use crate::util::ForeignTypeExt; use crate::x509::{X509Ref, X509}; use crate::{cvt, cvt_p}; foreign_type_and_impl_send_sync! { type CType = ffi::PKCS12; fn drop = ffi::PKCS12_free; pub struct Pkcs12; pub struct Pkcs12Ref; } impl Pkcs12Ref { to_der! { /// Serializes the `Pkcs12` to its standard DER encoding. /// /// This corresponds to [`i2d_PKCS12`]. /// /// [`i2d_PKCS12`]: https://www.openssl.org/docs/manmaster/man3/i2d_PKCS12.html to_der, ffi::i2d_PKCS12 } /// Extracts the contents of the `Pkcs12`. pub fn parse(&self, pass: &str) -> Result { unsafe { let pass = CString::new(pass.as_bytes()).unwrap(); let mut pkey = ptr::null_mut(); let mut cert = ptr::null_mut(); let mut chain = ptr::null_mut(); cvt(ffi::PKCS12_parse( self.as_ptr(), pass.as_ptr(), &mut pkey, &mut cert, &mut chain, ))?; let pkey = PKey::from_ptr(pkey); let cert = X509::from_ptr(cert); let chain = Stack::from_ptr_opt(chain); Ok(ParsedPkcs12 { pkey, cert, chain }) } } } impl Pkcs12 { from_der! { /// Deserializes a DER-encoded PKCS#12 archive. /// /// This corresponds to [`d2i_PKCS12`]. /// /// [`d2i_PKCS12`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_PKCS12.html from_der, Pkcs12, ffi::d2i_PKCS12 } /// Creates a new builder for a protected pkcs12 certificate. /// /// This uses the defaults from the OpenSSL library: /// /// * `nid_key` - `nid::PBE_WITHSHA1AND3_KEY_TRIPLEDES_CBC` /// * `nid_cert` - `nid::PBE_WITHSHA1AND40BITRC2_CBC` /// * `iter` - `2048` /// * `mac_iter` - `2048` pub fn builder() -> Pkcs12Builder { ffi::init(); Pkcs12Builder { nid_key: Nid::UNDEF, //nid::PBE_WITHSHA1AND3_KEY_TRIPLEDES_CBC, nid_cert: Nid::UNDEF, //nid::PBE_WITHSHA1AND40BITRC2_CBC, iter: ffi::PKCS12_DEFAULT_ITER, mac_iter: ffi::PKCS12_DEFAULT_ITER, ca: None, } } } pub struct ParsedPkcs12 { pub pkey: PKey, pub cert: X509, pub chain: Option>, } pub struct Pkcs12Builder { nid_key: Nid, nid_cert: Nid, iter: c_int, mac_iter: c_int, ca: Option>, } impl Pkcs12Builder { /// The encryption algorithm that should be used for the key pub fn key_algorithm(&mut self, nid: Nid) -> &mut Self { self.nid_key = nid; self } /// The encryption algorithm that should be used for the cert pub fn cert_algorithm(&mut self, nid: Nid) -> &mut Self { self.nid_cert = nid; self } /// Key iteration count, default is 2048 as of this writing pub fn key_iter(&mut self, iter: u32) -> &mut Self { self.iter = iter as c_int; self } /// MAC iteration count, default is the same as key_iter. /// /// Old implementations don't understand MAC iterations greater than 1, (pre 1.0.1?), if such /// compatibility is required this should be set to 1. pub fn mac_iter(&mut self, mac_iter: u32) -> &mut Self { self.mac_iter = mac_iter as c_int; self } /// An additional set of certificates to include in the archive beyond the one provided to /// `build`. 
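///
/// # Example
///
/// A minimal sketch; in practice the stack would hold the intermediate
/// certificates of the chain rather than being empty:
///
/// ```
/// use openssl::pkcs12::Pkcs12;
/// use openssl::stack::Stack;
/// use openssl::x509::X509;
///
/// let mut builder = Pkcs12::builder();
/// let extra_certs = Stack::<X509>::new().unwrap();
/// builder.ca(extra_certs);
/// ```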
pub fn ca(&mut self, ca: Stack) -> &mut Self { self.ca = Some(ca); self } /// Builds the PKCS #12 object /// /// # Arguments /// /// * `password` - the password used to encrypt the key and certificate /// * `friendly_name` - user defined name for the certificate /// * `pkey` - key to store /// * `cert` - certificate to store pub fn build( self, password: &str, friendly_name: &str, pkey: &PKeyRef, cert: &X509Ref, ) -> Result where T: HasPrivate, { unsafe { let pass = CString::new(password).unwrap(); let friendly_name = CString::new(friendly_name).unwrap(); let pkey = pkey.as_ptr(); let cert = cert.as_ptr(); let ca = self .ca .as_ref() .map(|ca| ca.as_ptr()) .unwrap_or(ptr::null_mut()); let nid_key = self.nid_key.as_raw(); let nid_cert = self.nid_cert.as_raw(); // According to the OpenSSL docs, keytype is a non-standard extension for MSIE, // It's values are KEY_SIG or KEY_EX, see the OpenSSL docs for more information: // https://www.openssl.org/docs/man1.0.2/crypto/PKCS12_create.html let keytype = 0; cvt_p(ffi::PKCS12_create( pass.as_ptr() as *const _ as *mut _, friendly_name.as_ptr() as *const _ as *mut _, pkey, cert, ca, nid_key, nid_cert, self.iter, self.mac_iter, keytype, )) .map(Pkcs12) } } } #[cfg(test)] mod test { use crate::asn1::Asn1Time; use crate::hash::MessageDigest; use crate::nid::Nid; use crate::pkey::PKey; use crate::rsa::Rsa; use crate::x509::extension::KeyUsage; use crate::x509::{X509Name, X509}; use super::*; #[test] #[cfg_attr(ossl300, ignore)] // https://github.com/openssl/openssl/issues/11672 fn parse() { let der = include_bytes!("../test/identity.p12"); let pkcs12 = Pkcs12::from_der(der).unwrap(); let parsed = pkcs12.parse("mypass").unwrap(); assert_eq!( hex::encode(parsed.cert.digest(MessageDigest::sha1()).unwrap()), "59172d9313e84459bcff27f967e79e6e9217e584" ); let chain = parsed.chain.unwrap(); assert_eq!(chain.len(), 1); assert_eq!( hex::encode(chain[0].digest(MessageDigest::sha1()).unwrap()), "c0cbdf7cdd03c9773e5468e1f6d2da7d5cbb1875" ); } #[test] #[cfg_attr(ossl300, ignore)] // https://github.com/openssl/openssl/issues/11672 fn parse_empty_chain() { let der = include_bytes!("../test/keystore-empty-chain.p12"); let pkcs12 = Pkcs12::from_der(der).unwrap(); let parsed = pkcs12.parse("cassandra").unwrap(); assert!(parsed.chain.is_none()); } #[test] #[cfg_attr(ossl300, ignore)] // https://github.com/openssl/openssl/issues/11672 fn create() { let subject_name = "ns.example.com"; let rsa = Rsa::generate(2048).unwrap(); let pkey = PKey::from_rsa(rsa).unwrap(); let mut name = X509Name::builder().unwrap(); name.append_entry_by_nid(Nid::COMMONNAME, subject_name) .unwrap(); let name = name.build(); let key_usage = KeyUsage::new().digital_signature().build().unwrap(); let mut builder = X509::builder().unwrap(); builder.set_version(2).unwrap(); builder .set_not_before(&Asn1Time::days_from_now(0).unwrap()) .unwrap(); builder .set_not_after(&Asn1Time::days_from_now(365).unwrap()) .unwrap(); builder.set_subject_name(&name).unwrap(); builder.set_issuer_name(&name).unwrap(); builder.append_extension(key_usage).unwrap(); builder.set_pubkey(&pkey).unwrap(); builder.sign(&pkey, MessageDigest::sha256()).unwrap(); let cert = builder.build(); let pkcs12_builder = Pkcs12::builder(); let pkcs12 = pkcs12_builder .build("mypass", subject_name, &pkey, &cert) .unwrap(); let der = pkcs12.to_der().unwrap(); let pkcs12 = Pkcs12::from_der(&der).unwrap(); let parsed = pkcs12.parse("mypass").unwrap(); assert_eq!( &*parsed.cert.digest(MessageDigest::sha1()).unwrap(), 
&*cert.digest(MessageDigest::sha1()).unwrap() ); assert!(parsed.pkey.public_eq(&pkey)); } } vendor/openssl/src/hash.rs0000664000175000017500000004330714160055207016421 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use std::ffi::CString; use std::fmt; use std::io; use std::io::prelude::*; use std::ops::{Deref, DerefMut}; use std::ptr; use crate::error::ErrorStack; use crate::nid::Nid; use crate::{cvt, cvt_p}; cfg_if! { if #[cfg(ossl110)] { use ffi::{EVP_MD_CTX_free, EVP_MD_CTX_new}; } else { use ffi::{EVP_MD_CTX_create as EVP_MD_CTX_new, EVP_MD_CTX_destroy as EVP_MD_CTX_free}; } } #[derive(Copy, Clone, PartialEq, Eq)] pub struct MessageDigest(*const ffi::EVP_MD); impl MessageDigest { /// Creates a `MessageDigest` from a raw OpenSSL pointer. /// /// # Safety /// /// The caller must ensure the pointer is valid. pub unsafe fn from_ptr(x: *const ffi::EVP_MD) -> Self { MessageDigest(x) } /// Returns the `MessageDigest` corresponding to an `Nid`. /// /// This corresponds to [`EVP_get_digestbynid`]. /// /// [`EVP_get_digestbynid`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_DigestInit.html pub fn from_nid(type_: Nid) -> Option { unsafe { let ptr = ffi::EVP_get_digestbynid(type_.as_raw()); if ptr.is_null() { None } else { Some(MessageDigest(ptr)) } } } /// Returns the `MessageDigest` corresponding to an algorithm name. /// /// This corresponds to [`EVP_get_digestbyname`]. /// /// [`EVP_get_digestbyname`]: https://www.openssl.org/docs/man1.1.0/crypto/EVP_DigestInit.html pub fn from_name(name: &str) -> Option { ffi::init(); let name = CString::new(name).ok()?; unsafe { let ptr = ffi::EVP_get_digestbyname(name.as_ptr()); if ptr.is_null() { None } else { Some(MessageDigest(ptr)) } } } pub fn null() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_md_null()) } } pub fn md5() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_md5()) } } pub fn sha1() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha1()) } } pub fn sha224() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha224()) } } pub fn sha256() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha256()) } } pub fn sha384() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha384()) } } pub fn sha512() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha512()) } } #[cfg(ossl111)] pub fn sha3_224() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha3_224()) } } #[cfg(ossl111)] pub fn sha3_256() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha3_256()) } } #[cfg(ossl111)] pub fn sha3_384() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha3_384()) } } #[cfg(ossl111)] pub fn sha3_512() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sha3_512()) } } #[cfg(ossl111)] pub fn shake_128() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_shake128()) } } #[cfg(ossl111)] pub fn shake_256() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_shake256()) } } #[cfg(not(osslconf = "OPENSSL_NO_RMD160"))] pub fn ripemd160() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_ripemd160()) } } #[cfg(all(any(ossl111, libressl291), not(osslconf = "OPENSSL_NO_SM3")))] pub fn sm3() -> MessageDigest { unsafe { MessageDigest(ffi::EVP_sm3()) } } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_ptr(&self) -> *const ffi::EVP_MD { self.0 } /// The size of the digest in bytes. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn size(&self) -> usize { unsafe { ffi::EVP_MD_size(self.0) as usize } } /// The name of the digest. 
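///
/// # Example
///
/// A small sketch mapping a digest back to the corresponding `Nid`:
///
/// ```
/// use openssl::hash::MessageDigest;
/// use openssl::nid::Nid;
///
/// assert_eq!(MessageDigest::sha256().type_(), Nid::SHA256);
/// ```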
#[allow(clippy::trivially_copy_pass_by_ref)] pub fn type_(&self) -> Nid { Nid::from_raw(unsafe { ffi::EVP_MD_type(self.0) }) } } unsafe impl Sync for MessageDigest {} unsafe impl Send for MessageDigest {} #[derive(PartialEq, Copy, Clone)] enum State { Reset, Updated, Finalized, } use self::State::*; /// Provides message digest (hash) computation. /// /// # Examples /// /// Calculate a hash in one go: /// /// ``` /// use openssl::hash::{hash, MessageDigest}; /// /// let data = b"\x42\xF4\x97\xE0"; /// let spec = b"\x7c\x43\x0f\x17\x8a\xef\xdf\x14\x87\xfe\xe7\x14\x4e\x96\x41\xe2"; /// let res = hash(MessageDigest::md5(), data).unwrap(); /// assert_eq!(&*res, spec); /// ``` /// /// Supply the input in chunks: /// /// ``` /// use openssl::hash::{Hasher, MessageDigest}; /// /// let data = [b"\x42\xF4", b"\x97\xE0"]; /// let spec = b"\x7c\x43\x0f\x17\x8a\xef\xdf\x14\x87\xfe\xe7\x14\x4e\x96\x41\xe2"; /// let mut h = Hasher::new(MessageDigest::md5()).unwrap(); /// h.update(data[0]).unwrap(); /// h.update(data[1]).unwrap(); /// let res = h.finish().unwrap(); /// assert_eq!(&*res, spec); /// ``` /// /// Use an XOF hasher (OpenSSL 1.1.1+): /// /// ``` /// #[cfg(ossl111)] /// { /// use openssl::hash::{hash_xof, MessageDigest}; /// /// let data = b"\x41\x6c\x6c\x20\x79\x6f\x75\x72\x20\x62\x61\x73\x65\x20\x61\x72\x65\x20\x62\x65\x6c\x6f\x6e\x67\x20\x74\x6f\x20\x75\x73"; /// let spec = b"\x49\xd0\x69\x7f\xf5\x08\x11\x1d\x8b\x84\xf1\x5e\x46\xda\xf1\x35"; /// let mut buf = vec![0; 16]; /// hash_xof(MessageDigest::shake_128(), data, buf.as_mut_slice()).unwrap(); /// assert_eq!(buf, spec); /// } /// ``` /// /// # Warning /// /// Don't actually use MD5 and SHA-1 hashes, they're not secure anymore. /// /// Don't ever hash passwords, use the functions in the `pkcs5` module or bcrypt/scrypt instead. /// /// For extendable output functions (XOFs, i.e. SHAKE128/SHAKE256), you must use finish_xof instead /// of finish and provide a buf to store the hash. The hash will be as long as the buf. pub struct Hasher { ctx: *mut ffi::EVP_MD_CTX, md: *const ffi::EVP_MD, type_: MessageDigest, state: State, } unsafe impl Sync for Hasher {} unsafe impl Send for Hasher {} impl Hasher { /// Creates a new `Hasher` with the specified hash type. pub fn new(ty: MessageDigest) -> Result { ffi::init(); let ctx = unsafe { cvt_p(EVP_MD_CTX_new())? }; let mut h = Hasher { ctx, md: ty.as_ptr(), type_: ty, state: Finalized, }; h.init()?; Ok(h) } fn init(&mut self) -> Result<(), ErrorStack> { match self.state { Reset => return Ok(()), Updated => { self.finish()?; } Finalized => (), } unsafe { cvt(ffi::EVP_DigestInit_ex(self.ctx, self.md, ptr::null_mut()))?; } self.state = Reset; Ok(()) } /// Feeds data into the hasher. pub fn update(&mut self, data: &[u8]) -> Result<(), ErrorStack> { if self.state == Finalized { self.init()?; } unsafe { cvt(ffi::EVP_DigestUpdate( self.ctx, data.as_ptr() as *mut _, data.len(), ))?; } self.state = Updated; Ok(()) } /// Returns the hash of the data written and resets the non-XOF hasher. pub fn finish(&mut self) -> Result { if self.state == Finalized { self.init()?; } unsafe { let mut len = ffi::EVP_MAX_MD_SIZE; let mut buf = [0; ffi::EVP_MAX_MD_SIZE as usize]; cvt(ffi::EVP_DigestFinal_ex( self.ctx, buf.as_mut_ptr(), &mut len, ))?; self.state = Finalized; Ok(DigestBytes { buf, len: len as usize, }) } } /// Writes the hash of the data into the supplied buf and resets the XOF hasher. /// The hash will be as long as the buf. 
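///
/// # Example
///
/// A minimal sketch, mirroring the module-level XOF example and gated on
/// OpenSSL 1.1.1+ like the method itself:
///
/// ```
/// #[cfg(ossl111)]
/// {
///     use openssl::hash::{Hasher, MessageDigest};
///
///     let mut hasher = Hasher::new(MessageDigest::shake_128()).unwrap();
///     hasher.update(b"some data").unwrap();
///     // The caller chooses the output length; request 16 bytes here.
///     let mut buf = vec![0; 16];
///     hasher.finish_xof(&mut buf).unwrap();
/// }
/// ```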
#[cfg(ossl111)] pub fn finish_xof(&mut self, buf: &mut [u8]) -> Result<(), ErrorStack> { if self.state == Finalized { self.init()?; } unsafe { cvt(ffi::EVP_DigestFinalXOF( self.ctx, buf.as_mut_ptr(), buf.len(), ))?; self.state = Finalized; Ok(()) } } } impl Write for Hasher { #[inline] fn write(&mut self, buf: &[u8]) -> io::Result { self.update(buf)?; Ok(buf.len()) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } impl Clone for Hasher { fn clone(&self) -> Hasher { let ctx = unsafe { let ctx = EVP_MD_CTX_new(); assert!(!ctx.is_null()); let r = ffi::EVP_MD_CTX_copy_ex(ctx, self.ctx); assert_eq!(r, 1); ctx }; Hasher { ctx, md: self.md, type_: self.type_, state: self.state, } } } impl Drop for Hasher { fn drop(&mut self) { unsafe { if self.state != Finalized { drop(self.finish()); } EVP_MD_CTX_free(self.ctx); } } } /// The resulting bytes of a digest. /// /// This type derefs to a byte slice - it exists to avoid allocating memory to /// store the digest data. #[derive(Copy)] pub struct DigestBytes { pub(crate) buf: [u8; ffi::EVP_MAX_MD_SIZE as usize], pub(crate) len: usize, } impl Clone for DigestBytes { #[inline] fn clone(&self) -> DigestBytes { *self } } impl Deref for DigestBytes { type Target = [u8]; #[inline] fn deref(&self) -> &[u8] { &self.buf[..self.len] } } impl DerefMut for DigestBytes { #[inline] fn deref_mut(&mut self) -> &mut [u8] { &mut self.buf[..self.len] } } impl AsRef<[u8]> for DigestBytes { #[inline] fn as_ref(&self) -> &[u8] { self.deref() } } impl fmt::Debug for DigestBytes { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Debug::fmt(&**self, fmt) } } /// Computes the hash of the `data` with the non-XOF hasher `t`. pub fn hash(t: MessageDigest, data: &[u8]) -> Result { let mut h = Hasher::new(t)?; h.update(data)?; h.finish() } /// Computes the hash of the `data` with the XOF hasher `t` and stores it in `buf`. 
#[cfg(ossl111)] pub fn hash_xof(t: MessageDigest, data: &[u8], buf: &mut [u8]) -> Result<(), ErrorStack> { let mut h = Hasher::new(t)?; h.update(data)?; h.finish_xof(buf) } #[cfg(test)] mod tests { use hex::{self, FromHex}; use std::io::prelude::*; use super::*; fn hash_test(hashtype: MessageDigest, hashtest: &(&str, &str)) { let res = hash(hashtype, &Vec::from_hex(hashtest.0).unwrap()).unwrap(); assert_eq!(hex::encode(res), hashtest.1); } #[cfg(ossl111)] fn hash_xof_test(hashtype: MessageDigest, hashtest: &(&str, &str)) { let expected = Vec::from_hex(hashtest.1).unwrap(); let mut buf = vec![0; expected.len()]; hash_xof( hashtype, &Vec::from_hex(hashtest.0).unwrap(), buf.as_mut_slice(), ) .unwrap(); assert_eq!(buf, expected); } fn hash_recycle_test(h: &mut Hasher, hashtest: &(&str, &str)) { h.write_all(&Vec::from_hex(hashtest.0).unwrap()).unwrap(); let res = h.finish().unwrap(); assert_eq!(hex::encode(res), hashtest.1); } // Test vectors from http://www.nsrl.nist.gov/testdata/ const MD5_TESTS: [(&str, &str); 13] = [ ("", "d41d8cd98f00b204e9800998ecf8427e"), ("7F", "83acb6e67e50e31db6ed341dd2de1595"), ("EC9C", "0b07f0d4ca797d8ac58874f887cb0b68"), ("FEE57A", "e0d583171eb06d56198fc0ef22173907"), ("42F497E0", "7c430f178aefdf1487fee7144e9641e2"), ("C53B777F1C", "75ef141d64cb37ec423da2d9d440c925"), ("89D5B576327B", "ebbaf15eb0ed784c6faa9dc32831bf33"), ("5D4CCE781EB190", "ce175c4b08172019f05e6b5279889f2c"), ("81901FE94932D7B9", "cd4d2f62b8cdb3a0cf968a735a239281"), ("C9FFDEE7788EFB4EC9", "e0841a231ab698db30c6c0f3f246c014"), ("66AC4B7EBA95E53DC10B", "a3b3cea71910d9af56742aa0bb2fe329"), ("A510CD18F7A56852EB0319", "577e216843dd11573574d3fb209b97d8"), ( "AAED18DBE8938C19ED734A8D", "6f80fb775f27e0a4ce5c2f42fc72c5f1", ), ]; #[test] fn test_md5() { for test in MD5_TESTS.iter() { hash_test(MessageDigest::md5(), test); } } #[test] fn test_md5_recycle() { let mut h = Hasher::new(MessageDigest::md5()).unwrap(); for test in MD5_TESTS.iter() { hash_recycle_test(&mut h, test); } } #[test] fn test_finish_twice() { let mut h = Hasher::new(MessageDigest::md5()).unwrap(); h.write_all(&Vec::from_hex(MD5_TESTS[6].0).unwrap()) .unwrap(); h.finish().unwrap(); let res = h.finish().unwrap(); let null = hash(MessageDigest::md5(), &[]).unwrap(); assert_eq!(&*res, &*null); } #[test] #[allow(clippy::redundant_clone)] fn test_clone() { let i = 7; let inp = Vec::from_hex(MD5_TESTS[i].0).unwrap(); assert!(inp.len() > 2); let p = inp.len() / 2; let h0 = Hasher::new(MessageDigest::md5()).unwrap(); println!("Clone a new hasher"); let mut h1 = h0.clone(); h1.write_all(&inp[..p]).unwrap(); { println!("Clone an updated hasher"); let mut h2 = h1.clone(); h2.write_all(&inp[p..]).unwrap(); let res = h2.finish().unwrap(); assert_eq!(hex::encode(res), MD5_TESTS[i].1); } h1.write_all(&inp[p..]).unwrap(); let res = h1.finish().unwrap(); assert_eq!(hex::encode(res), MD5_TESTS[i].1); println!("Clone a finished hasher"); let mut h3 = h1.clone(); h3.write_all(&Vec::from_hex(MD5_TESTS[i + 1].0).unwrap()) .unwrap(); let res = h3.finish().unwrap(); assert_eq!(hex::encode(res), MD5_TESTS[i + 1].1); } #[test] fn test_sha1() { let tests = [("616263", "a9993e364706816aba3e25717850c26c9cd0d89d")]; for test in tests.iter() { hash_test(MessageDigest::sha1(), test); } } #[test] fn test_sha256() { let tests = [( "616263", "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad", )]; for test in tests.iter() { hash_test(MessageDigest::sha256(), test); } } #[cfg(ossl111)] #[test] fn test_sha3_224() { let tests = [( 
"416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "1de092dd9fbcbbf450f26264f4778abd48af851f2832924554c56913", )]; for test in tests.iter() { hash_test(MessageDigest::sha3_224(), test); } } #[cfg(ossl111)] #[test] fn test_sha3_256() { let tests = [( "416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "b38e38f08bc1c0091ed4b5f060fe13e86aa4179578513ad11a6e3abba0062f61", )]; for test in tests.iter() { hash_test(MessageDigest::sha3_256(), test); } } #[cfg(ossl111)] #[test] fn test_sha3_384() { let tests = [("416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "966ee786ab3482dd811bf7c8fa8db79aa1f52f6c3c369942ef14240ebd857c6ff626ec35d9e131ff64d328\ ef2008ff16" )]; for test in tests.iter() { hash_test(MessageDigest::sha3_384(), test); } } #[cfg(ossl111)] #[test] fn test_sha3_512() { let tests = [("416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "c072288ef728cd53a029c47687960b9225893532f42b923156e37020bdc1eda753aafbf30af859d4f4c3a1\ 807caee3a79f8eb02dcd61589fbbdf5f40c8787a72" )]; for test in tests.iter() { hash_test(MessageDigest::sha3_512(), test); } } #[cfg(ossl111)] #[test] fn test_shake_128() { let tests = [( "416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "49d0697ff508111d8b84f15e46daf135", )]; for test in tests.iter() { hash_xof_test(MessageDigest::shake_128(), test); } } #[cfg(ossl111)] #[test] fn test_shake_256() { let tests = [( "416c6c20796f75722062617365206172652062656c6f6e6720746f207573", "4e2dfdaa75d1e049d0eaeffe28e76b17cea47b650fb8826fe48b94664326a697", )]; for test in tests.iter() { hash_xof_test(MessageDigest::shake_256(), test); } } #[test] #[cfg_attr(ossl300, ignore)] fn test_ripemd160() { let tests = [("616263", "8eb208f7e05d987a9b044a8e98c6b087f15a0bfc")]; for test in tests.iter() { hash_test(MessageDigest::ripemd160(), test); } } #[cfg(all(any(ossl111, libressl291), not(osslconf = "OPENSSL_NO_SM3")))] #[test] fn test_sm3() { let tests = [( "616263", "66c7f0f462eeedd9d1f2d46bdc10e4e24167c4875cf2f7a2297da02b8f4ba8e0", )]; for test in tests.iter() { hash_test(MessageDigest::sm3(), test); } } #[test] fn from_nid() { assert_eq!( MessageDigest::from_nid(Nid::SHA256).unwrap().as_ptr(), MessageDigest::sha256().as_ptr() ); } #[test] fn from_name() { assert_eq!( MessageDigest::from_name("SHA256").unwrap().as_ptr(), MessageDigest::sha256().as_ptr() ) } } vendor/openssl/src/envelope.rs0000664000175000017500000002167014160055207017312 0ustar mwhudsonmwhudson//! Envelope encryption. //! //! # Example //! //! ```rust //! use openssl::rsa::Rsa; //! use openssl::envelope::Seal; //! use openssl::pkey::PKey; //! use openssl::symm::Cipher; //! //! let rsa = Rsa::generate(2048).unwrap(); //! let key = PKey::from_rsa(rsa).unwrap(); //! //! let cipher = Cipher::aes_256_cbc(); //! let mut seal = Seal::new(cipher, &[key]).unwrap(); //! //! let secret = b"My secret message"; //! let mut encrypted = vec![0; secret.len() + cipher.block_size()]; //! //! let mut enc_len = seal.update(secret, &mut encrypted).unwrap(); //! enc_len += seal.finalize(&mut encrypted[enc_len..]).unwrap(); //! encrypted.truncate(enc_len); //! ``` use crate::error::ErrorStack; use crate::pkey::{HasPrivate, HasPublic, PKey, PKeyRef}; use crate::symm::Cipher; use crate::{cvt, cvt_p}; use foreign_types::{ForeignType, ForeignTypeRef}; use libc::c_int; use std::cmp; use std::ptr; /// Represents an EVP_Seal context. pub struct Seal { ctx: *mut ffi::EVP_CIPHER_CTX, block_size: usize, iv: Option>, enc_keys: Vec>, } impl Seal { /// Creates a new `Seal`. 
pub fn new(cipher: Cipher, pub_keys: &[PKey]) -> Result where T: HasPublic, { unsafe { assert!(pub_keys.len() <= c_int::max_value() as usize); let ctx = cvt_p(ffi::EVP_CIPHER_CTX_new())?; let mut enc_key_ptrs = vec![]; let mut pub_key_ptrs = vec![]; let mut enc_keys = vec![]; for key in pub_keys { let mut enc_key = vec![0; key.size()]; let enc_key_ptr = enc_key.as_mut_ptr(); enc_keys.push(enc_key); enc_key_ptrs.push(enc_key_ptr); pub_key_ptrs.push(key.as_ptr()); } let mut iv = cipher.iv_len().map(|len| vec![0; len]); let iv_ptr = iv.as_mut().map_or(ptr::null_mut(), |v| v.as_mut_ptr()); let mut enc_key_lens = vec![0; enc_keys.len()]; cvt(ffi::EVP_SealInit( ctx, cipher.as_ptr(), enc_key_ptrs.as_mut_ptr(), enc_key_lens.as_mut_ptr(), iv_ptr, pub_key_ptrs.as_mut_ptr(), pub_key_ptrs.len() as c_int, ))?; for (buf, len) in enc_keys.iter_mut().zip(&enc_key_lens) { buf.truncate(*len as usize); } Ok(Seal { ctx, block_size: cipher.block_size(), iv, enc_keys, }) } } /// Returns the initialization vector, if the cipher uses one. #[allow(clippy::option_as_ref_deref)] pub fn iv(&self) -> Option<&[u8]> { self.iv.as_ref().map(|v| &**v) } /// Returns the encrypted keys. pub fn encrypted_keys(&self) -> &[Vec] { &self.enc_keys } /// Feeds data from `input` through the cipher, writing encrypted bytes into `output`. /// /// The number of bytes written to `output` is returned. Note that this may /// not be equal to the length of `input`. /// /// # Panics /// /// Panics if `output.len() < input.len() + block_size` where `block_size` is /// the block size of the cipher (see `Cipher::block_size`), or if /// `output.len() > c_int::max_value()`. pub fn update(&mut self, input: &[u8], output: &mut [u8]) -> Result { unsafe { assert!(output.len() >= input.len() + self.block_size); assert!(output.len() <= c_int::max_value() as usize); let mut outl = output.len() as c_int; let inl = input.len() as c_int; cvt(ffi::EVP_EncryptUpdate( self.ctx, output.as_mut_ptr(), &mut outl, input.as_ptr(), inl, ))?; Ok(outl as usize) } } /// Finishes the encryption process, writing any remaining data to `output`. /// /// The number of bytes written to `output` is returned. /// /// `update` should not be called after this method. /// /// # Panics /// /// Panics if `output` is less than the cipher's block size. pub fn finalize(&mut self, output: &mut [u8]) -> Result { unsafe { assert!(output.len() >= self.block_size); let mut outl = cmp::min(output.len(), c_int::max_value() as usize) as c_int; cvt(ffi::EVP_SealFinal(self.ctx, output.as_mut_ptr(), &mut outl))?; Ok(outl as usize) } } } impl Drop for Seal { fn drop(&mut self) { unsafe { ffi::EVP_CIPHER_CTX_free(self.ctx); } } } /// Represents an EVP_Open context. pub struct Open { ctx: *mut ffi::EVP_CIPHER_CTX, block_size: usize, } impl Open { /// Creates a new `Open`. 
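///
/// # Example
///
/// A minimal sketch of a full seal/open round trip with a freshly generated
/// RSA key:
///
/// ```
/// use openssl::envelope::{Open, Seal};
/// use openssl::pkey::PKey;
/// use openssl::rsa::Rsa;
/// use openssl::symm::Cipher;
///
/// let key = PKey::from_rsa(Rsa::generate(2048).unwrap()).unwrap();
/// let cipher = Cipher::aes_256_cbc();
///
/// // Seal the message to the key's public half.
/// let mut seal = Seal::new(cipher, std::slice::from_ref(&key)).unwrap();
/// let secret = b"My secret message";
/// let mut encrypted = vec![0; secret.len() + cipher.block_size()];
/// let mut enc_len = seal.update(secret, &mut encrypted).unwrap();
/// enc_len += seal.finalize(&mut encrypted[enc_len..]).unwrap();
///
/// // Open it again with the private key, the IV, and the encrypted session key.
/// let mut open = Open::new(cipher, &key, seal.iv(), &seal.encrypted_keys()[0]).unwrap();
/// let mut decrypted = vec![0; enc_len + cipher.block_size()];
/// let mut dec_len = open.update(&encrypted[..enc_len], &mut decrypted).unwrap();
/// dec_len += open.finalize(&mut decrypted[dec_len..]).unwrap();
/// assert_eq!(&secret[..], &decrypted[..dec_len]);
/// ```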
pub fn new( cipher: Cipher, priv_key: &PKeyRef, iv: Option<&[u8]>, encrypted_key: &[u8], ) -> Result where T: HasPrivate, { unsafe { assert!(encrypted_key.len() <= c_int::max_value() as usize); match (cipher.iv_len(), iv) { (Some(len), Some(iv)) => assert_eq!(len, iv.len(), "IV length mismatch"), (None, None) => {} (Some(_), None) => panic!("an IV was required but not provided"), (None, Some(_)) => panic!("an IV was provided but not required"), } let ctx = cvt_p(ffi::EVP_CIPHER_CTX_new())?; cvt(ffi::EVP_OpenInit( ctx, cipher.as_ptr(), encrypted_key.as_ptr(), encrypted_key.len() as c_int, iv.map_or(ptr::null(), |v| v.as_ptr()), priv_key.as_ptr(), ))?; Ok(Open { ctx, block_size: cipher.block_size(), }) } } /// Feeds data from `input` through the cipher, writing decrypted bytes into `output`. /// /// The number of bytes written to `output` is returned. Note that this may /// not be equal to the length of `input`. /// /// # Panics /// /// Panics if `output.len() < input.len() + block_size` where /// `block_size` is the block size of the cipher (see `Cipher::block_size`), /// or if `output.len() > c_int::max_value()`. pub fn update(&mut self, input: &[u8], output: &mut [u8]) -> Result { unsafe { assert!(output.len() >= input.len() + self.block_size); assert!(output.len() <= c_int::max_value() as usize); let mut outl = output.len() as c_int; let inl = input.len() as c_int; cvt(ffi::EVP_DecryptUpdate( self.ctx, output.as_mut_ptr(), &mut outl, input.as_ptr(), inl, ))?; Ok(outl as usize) } } /// Finishes the decryption process, writing any remaining data to `output`. /// /// The number of bytes written to `output` is returned. /// /// `update` should not be called after this method. /// /// # Panics /// /// Panics if `output` is less than the cipher's block size. pub fn finalize(&mut self, output: &mut [u8]) -> Result { unsafe { assert!(output.len() >= self.block_size); let mut outl = cmp::min(output.len(), c_int::max_value() as usize) as c_int; cvt(ffi::EVP_OpenFinal(self.ctx, output.as_mut_ptr(), &mut outl))?; Ok(outl as usize) } } } impl Drop for Open { fn drop(&mut self) { unsafe { ffi::EVP_CIPHER_CTX_free(self.ctx); } } } #[cfg(test)] mod test { use super::*; use crate::pkey::PKey; use crate::symm::Cipher; #[test] fn public_encrypt_private_decrypt() { let private_pem = include_bytes!("../test/rsa.pem"); let public_pem = include_bytes!("../test/rsa.pem.pub"); let private_key = PKey::private_key_from_pem(private_pem).unwrap(); let public_key = PKey::public_key_from_pem(public_pem).unwrap(); let cipher = Cipher::aes_256_cbc(); let secret = b"My secret message"; let mut seal = Seal::new(cipher, &[public_key]).unwrap(); let mut encrypted = vec![0; secret.len() + cipher.block_size()]; let mut enc_len = seal.update(secret, &mut encrypted).unwrap(); enc_len += seal.finalize(&mut encrypted[enc_len..]).unwrap(); let iv = seal.iv(); let encrypted_key = &seal.encrypted_keys()[0]; let mut open = Open::new(cipher, &private_key, iv, encrypted_key).unwrap(); let mut decrypted = vec![0; enc_len + cipher.block_size()]; let mut dec_len = open.update(&encrypted[..enc_len], &mut decrypted).unwrap(); dec_len += open.finalize(&mut decrypted[dec_len..]).unwrap(); assert_eq!(&secret[..], &decrypted[..dec_len]); } } vendor/openssl/src/sha.rs0000664000175000017500000003246014160055207016247 0ustar mwhudsonmwhudson//! The SHA family of hashes. //! //! SHA, or Secure Hash Algorithms, are a family of cryptographic hashing algorithms published by //! the National Institute of Standards and Technology (NIST). 
Hash algorithms such as those in //! the SHA family are used to map data of an arbitrary size to a fixed-size string of bytes. //! As cryptographic hashing algorithms, these mappings have the property of being irreversible. //! This property makes hash algorithms like these excellent for uses such as verifying the //! contents of a file- if you know the hash you expect beforehand, then you can verify that the //! data you have is correct if it hashes to the same value. //! //! # Examples //! //! When dealing with data that becomes available in chunks, such as while buffering data from IO, //! you can create a hasher that you can repeatedly update to add bytes to. //! //! ```rust //! use openssl::sha; //! //! let mut hasher = sha::Sha256::new(); //! //! hasher.update(b"Hello, "); //! hasher.update(b"world"); //! //! let hash = hasher.finish(); //! println!("Hashed \"Hello, world\" to {}", hex::encode(hash)); //! ``` //! //! On the other hand, if you already have access to all of the data you would like to hash, you //! may prefer to use the slightly simpler method of simply calling the hash function corresponding //! to the algorithm you want to use. //! //! ```rust //! use openssl::sha::sha256; //! //! let hash = sha256(b"your data or message"); //! println!("Hash = {}", hex::encode(hash)); //! ``` use cfg_if::cfg_if; use libc::c_void; use std::mem::MaybeUninit; /// Computes the SHA1 hash of some data. /// /// # Warning /// /// SHA1 is known to be insecure - it should not be used unless required for /// compatibility with existing systems. #[inline] pub fn sha1(data: &[u8]) -> [u8; 20] { unsafe { let mut hash = MaybeUninit::<[u8; 20]>::uninit(); ffi::SHA1(data.as_ptr(), data.len(), hash.as_mut_ptr() as *mut _); hash.assume_init() } } /// Computes the SHA224 hash of some data. #[inline] pub fn sha224(data: &[u8]) -> [u8; 28] { unsafe { let mut hash = MaybeUninit::<[u8; 28]>::uninit(); ffi::SHA224(data.as_ptr(), data.len(), hash.as_mut_ptr() as *mut _); hash.assume_init() } } /// Computes the SHA256 hash of some data. #[inline] pub fn sha256(data: &[u8]) -> [u8; 32] { unsafe { let mut hash = MaybeUninit::<[u8; 32]>::uninit(); ffi::SHA256(data.as_ptr(), data.len(), hash.as_mut_ptr() as *mut _); hash.assume_init() } } /// Computes the SHA384 hash of some data. #[inline] pub fn sha384(data: &[u8]) -> [u8; 48] { unsafe { let mut hash = MaybeUninit::<[u8; 48]>::uninit(); ffi::SHA384(data.as_ptr(), data.len(), hash.as_mut_ptr() as *mut _); hash.assume_init() } } /// Computes the SHA512 hash of some data. #[inline] pub fn sha512(data: &[u8]) -> [u8; 64] { unsafe { let mut hash = MaybeUninit::<[u8; 64]>::uninit(); ffi::SHA512(data.as_ptr(), data.len(), hash.as_mut_ptr() as *mut _); hash.assume_init() } } cfg_if! { if #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] { /// An object which calculates a SHA1 hash of some data. /// /// # Warning /// /// SHA1 is known to be insecure - it should not be used unless required for /// compatibility with existing systems. #[derive(Clone)] pub struct Sha1(ffi::SHA_CTX); impl Default for Sha1 { #[inline] fn default() -> Sha1 { Sha1::new() } } impl Sha1 { /// Creates a new hasher. #[inline] pub fn new() -> Sha1 { unsafe { let mut ctx = MaybeUninit::uninit(); ffi::SHA1_Init( ctx.as_mut_ptr()); Sha1(ctx.assume_init()) } } /// Feeds some data into the hasher. /// /// This can be called multiple times. 
#[inline] pub fn update(&mut self, buf: &[u8]) { unsafe { ffi::SHA1_Update(&mut self.0, buf.as_ptr() as *const c_void, buf.len()); } } /// Returns the hash of the data. #[inline] pub fn finish(mut self) -> [u8; 20] { unsafe { let mut hash = MaybeUninit::<[u8; 20]>::uninit(); ffi::SHA1_Final(hash.as_mut_ptr() as *mut _, &mut self.0); hash.assume_init() } } } /// An object which calculates a SHA224 hash of some data. #[derive(Clone)] pub struct Sha224(ffi::SHA256_CTX); impl Default for Sha224 { #[inline] fn default() -> Sha224 { Sha224::new() } } impl Sha224 { /// Creates a new hasher. #[inline] pub fn new() -> Sha224 { unsafe { let mut ctx = MaybeUninit::uninit(); ffi::SHA224_Init(ctx.as_mut_ptr()); Sha224(ctx.assume_init()) } } /// Feeds some data into the hasher. /// /// This can be called multiple times. #[inline] pub fn update(&mut self, buf: &[u8]) { unsafe { ffi::SHA224_Update(&mut self.0, buf.as_ptr() as *const c_void, buf.len()); } } /// Returns the hash of the data. #[inline] pub fn finish(mut self) -> [u8; 28] { unsafe { let mut hash = MaybeUninit::<[u8; 28]>::uninit(); ffi::SHA224_Final(hash.as_mut_ptr() as *mut _, &mut self.0); hash.assume_init() } } } /// An object which calculates a SHA256 hash of some data. #[derive(Clone)] pub struct Sha256(ffi::SHA256_CTX); impl Default for Sha256 { #[inline] fn default() -> Sha256 { Sha256::new() } } impl Sha256 { /// Creates a new hasher. #[inline] pub fn new() -> Sha256 { unsafe { let mut ctx = MaybeUninit::uninit(); ffi::SHA256_Init(ctx.as_mut_ptr()); Sha256(ctx.assume_init()) } } /// Feeds some data into the hasher. /// /// This can be called multiple times. #[inline] pub fn update(&mut self, buf: &[u8]) { unsafe { ffi::SHA256_Update(&mut self.0, buf.as_ptr() as *const c_void, buf.len()); } } /// Returns the hash of the data. #[inline] pub fn finish(mut self) -> [u8; 32] { unsafe { let mut hash = MaybeUninit::<[u8; 32]>::uninit(); ffi::SHA256_Final(hash.as_mut_ptr() as *mut _, &mut self.0); hash.assume_init() } } } /// An object which calculates a SHA384 hash of some data. #[derive(Clone)] pub struct Sha384(ffi::SHA512_CTX); impl Default for Sha384 { #[inline] fn default() -> Sha384 { Sha384::new() } } impl Sha384 { /// Creates a new hasher. #[inline] pub fn new() -> Sha384 { unsafe { let mut ctx = MaybeUninit::uninit(); ffi::SHA384_Init(ctx.as_mut_ptr()); Sha384(ctx.assume_init()) } } /// Feeds some data into the hasher. /// /// This can be called multiple times. #[inline] pub fn update(&mut self, buf: &[u8]) { unsafe { ffi::SHA384_Update(&mut self.0, buf.as_ptr() as *const c_void, buf.len()); } } /// Returns the hash of the data. #[inline] pub fn finish(mut self) -> [u8; 48] { unsafe { let mut hash = MaybeUninit::<[u8; 48]>::uninit(); ffi::SHA384_Final(hash.as_mut_ptr() as *mut _, &mut self.0); hash.assume_init() } } } /// An object which calculates a SHA512 hash of some data. #[derive(Clone)] pub struct Sha512(ffi::SHA512_CTX); impl Default for Sha512 { #[inline] fn default() -> Sha512 { Sha512::new() } } impl Sha512 { /// Creates a new hasher. #[inline] pub fn new() -> Sha512 { unsafe { let mut ctx = MaybeUninit::uninit(); ffi::SHA512_Init(ctx.as_mut_ptr()); Sha512(ctx.assume_init()) } } /// Feeds some data into the hasher. /// /// This can be called multiple times. #[inline] pub fn update(&mut self, buf: &[u8]) { unsafe { ffi::SHA512_Update(&mut self.0, buf.as_ptr() as *const c_void, buf.len()); } } /// Returns the hash of the data. 
#[inline] pub fn finish(mut self) -> [u8; 64] { unsafe { let mut hash= MaybeUninit::<[u8; 64]>::uninit(); ffi::SHA512_Final(hash.as_mut_ptr() as *mut _, &mut self.0); hash.assume_init() } } } } } #[cfg(test)] mod test { use super::*; #[test] fn standalone_1() { let data = b"abc"; let expected = "a9993e364706816aba3e25717850c26c9cd0d89d"; assert_eq!(hex::encode(sha1(data)), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn struct_1() { let expected = "a9993e364706816aba3e25717850c26c9cd0d89d"; let mut hasher = Sha1::new(); hasher.update(b"a"); hasher.update(b"bc"); assert_eq!(hex::encode(hasher.finish()), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn cloning_allows_incremental_hashing() { let expected = "a9993e364706816aba3e25717850c26c9cd0d89d"; let mut hasher = Sha1::new(); hasher.update(b"a"); let mut incr_hasher = hasher.clone(); incr_hasher.update(b"bc"); assert_eq!(hex::encode(incr_hasher.finish()), expected); assert_ne!(hex::encode(hasher.finish()), expected); } #[test] fn standalone_224() { let data = b"abc"; let expected = "23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7"; assert_eq!(hex::encode(sha224(data)), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn struct_224() { let expected = "23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7"; let mut hasher = Sha224::new(); hasher.update(b"a"); hasher.update(b"bc"); assert_eq!(hex::encode(hasher.finish()), expected); } #[test] fn standalone_256() { let data = b"abc"; let expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"; assert_eq!(hex::encode(sha256(data)), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn struct_256() { let expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"; let mut hasher = Sha256::new(); hasher.update(b"a"); hasher.update(b"bc"); assert_eq!(hex::encode(hasher.finish()), expected); } #[test] fn standalone_384() { let data = b"abc"; let expected = "cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e\ 7cc2358baeca134c825a7"; assert_eq!(hex::encode(&sha384(data)[..]), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn struct_384() { let expected = "cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e\ 7cc2358baeca134c825a7"; let mut hasher = Sha384::new(); hasher.update(b"a"); hasher.update(b"bc"); assert_eq!(hex::encode(&hasher.finish()[..]), expected); } #[test] fn standalone_512() { let data = b"abc"; let expected = "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274\ fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"; assert_eq!(hex::encode(&sha512(data)[..]), expected); } #[test] #[cfg(not(osslconf = "OPENSSL_NO_DEPRECATED_3_0"))] fn struct_512() { let expected = "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274\ fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"; let mut hasher = Sha512::new(); hasher.update(b"a"); hasher.update(b"bc"); assert_eq!(hex::encode(&hasher.finish()[..]), expected); } } vendor/openssl/src/srtp.rs0000664000175000017500000000421114172417313016460 0ustar mwhudsonmwhudsonuse crate::stack::Stackable; use foreign_types::ForeignTypeRef; use libc::c_ulong; use std::ffi::CStr; use std::str; /// fake free method, since SRTP_PROTECTION_PROFILE is static unsafe fn free(_profile: *mut ffi::SRTP_PROTECTION_PROFILE) {} foreign_type_and_impl_send_sync! 
{ type CType = ffi::SRTP_PROTECTION_PROFILE; fn drop = free; pub struct SrtpProtectionProfile; /// Reference to `SrtpProtectionProfile`. pub struct SrtpProtectionProfileRef; } impl Stackable for SrtpProtectionProfile { type StackType = ffi::stack_st_SRTP_PROTECTION_PROFILE; } impl SrtpProtectionProfileRef { pub fn id(&self) -> SrtpProfileId { SrtpProfileId::from_raw(unsafe { (*self.as_ptr()).id }) } pub fn name(&self) -> &'static str { unsafe { CStr::from_ptr((*self.as_ptr()).name as *const _) } .to_str() .expect("should be UTF-8") } } /// An identifier of an SRTP protection profile. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct SrtpProfileId(c_ulong); impl SrtpProfileId { pub const SRTP_AES128_CM_SHA1_80: SrtpProfileId = SrtpProfileId(ffi::SRTP_AES128_CM_SHA1_80); pub const SRTP_AES128_CM_SHA1_32: SrtpProfileId = SrtpProfileId(ffi::SRTP_AES128_CM_SHA1_32); pub const SRTP_AES128_F8_SHA1_80: SrtpProfileId = SrtpProfileId(ffi::SRTP_AES128_F8_SHA1_80); pub const SRTP_AES128_F8_SHA1_32: SrtpProfileId = SrtpProfileId(ffi::SRTP_AES128_F8_SHA1_32); pub const SRTP_NULL_SHA1_80: SrtpProfileId = SrtpProfileId(ffi::SRTP_NULL_SHA1_80); pub const SRTP_NULL_SHA1_32: SrtpProfileId = SrtpProfileId(ffi::SRTP_NULL_SHA1_32); #[cfg(ossl110)] pub const SRTP_AEAD_AES_128_GCM: SrtpProfileId = SrtpProfileId(ffi::SRTP_AEAD_AES_128_GCM); #[cfg(ossl110)] pub const SRTP_AEAD_AES_256_GCM: SrtpProfileId = SrtpProfileId(ffi::SRTP_AEAD_AES_256_GCM); /// Creates a `SrtpProfileId` from an integer representation. pub fn from_raw(value: c_ulong) -> SrtpProfileId { SrtpProfileId(value) } /// Returns the integer representation of `SrtpProfileId`. #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_ulong { self.0 } } vendor/openssl/src/ocsp.rs0000664000175000017500000002630014160055207016434 0ustar mwhudsonmwhudsonuse bitflags::bitflags; use foreign_types::ForeignTypeRef; use libc::{c_int, c_long, c_ulong}; use std::mem; use std::ptr; use crate::asn1::Asn1GeneralizedTimeRef; use crate::error::ErrorStack; use crate::hash::MessageDigest; use crate::stack::StackRef; use crate::util::ForeignTypeRefExt; use crate::x509::store::X509StoreRef; use crate::x509::{X509Ref, X509}; use crate::{cvt, cvt_p}; bitflags! 
{ pub struct OcspFlag: c_ulong { const NO_CERTS = ffi::OCSP_NOCERTS; const NO_INTERN = ffi::OCSP_NOINTERN; const NO_CHAIN = ffi::OCSP_NOCHAIN; const NO_VERIFY = ffi::OCSP_NOVERIFY; const NO_EXPLICIT = ffi::OCSP_NOEXPLICIT; const NO_CA_SIGN = ffi::OCSP_NOCASIGN; const NO_DELEGATED = ffi::OCSP_NODELEGATED; const NO_CHECKS = ffi::OCSP_NOCHECKS; const TRUST_OTHER = ffi::OCSP_TRUSTOTHER; const RESPID_KEY = ffi::OCSP_RESPID_KEY; const NO_TIME = ffi::OCSP_NOTIME; } } #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub struct OcspResponseStatus(c_int); impl OcspResponseStatus { pub const SUCCESSFUL: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_SUCCESSFUL); pub const MALFORMED_REQUEST: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_MALFORMEDREQUEST); pub const INTERNAL_ERROR: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_INTERNALERROR); pub const TRY_LATER: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_TRYLATER); pub const SIG_REQUIRED: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_SIGREQUIRED); pub const UNAUTHORIZED: OcspResponseStatus = OcspResponseStatus(ffi::OCSP_RESPONSE_STATUS_UNAUTHORIZED); pub fn from_raw(raw: c_int) -> OcspResponseStatus { OcspResponseStatus(raw) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub struct OcspCertStatus(c_int); impl OcspCertStatus { pub const GOOD: OcspCertStatus = OcspCertStatus(ffi::V_OCSP_CERTSTATUS_GOOD); pub const REVOKED: OcspCertStatus = OcspCertStatus(ffi::V_OCSP_CERTSTATUS_REVOKED); pub const UNKNOWN: OcspCertStatus = OcspCertStatus(ffi::V_OCSP_CERTSTATUS_UNKNOWN); pub fn from_raw(raw: c_int) -> OcspCertStatus { OcspCertStatus(raw) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub struct OcspRevokedStatus(c_int); impl OcspRevokedStatus { pub const NO_STATUS: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_NOSTATUS); pub const UNSPECIFIED: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_UNSPECIFIED); pub const KEY_COMPROMISE: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_KEYCOMPROMISE); pub const CA_COMPROMISE: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_CACOMPROMISE); pub const AFFILIATION_CHANGED: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_AFFILIATIONCHANGED); pub const STATUS_SUPERSEDED: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_SUPERSEDED); pub const STATUS_CESSATION_OF_OPERATION: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_CESSATIONOFOPERATION); pub const STATUS_CERTIFICATE_HOLD: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_CERTIFICATEHOLD); pub const REMOVE_FROM_CRL: OcspRevokedStatus = OcspRevokedStatus(ffi::OCSP_REVOKED_STATUS_REMOVEFROMCRL); pub fn from_raw(raw: c_int) -> OcspRevokedStatus { OcspRevokedStatus(raw) } #[allow(clippy::trivially_copy_pass_by_ref)] pub fn as_raw(&self) -> c_int { self.0 } } pub struct OcspStatus<'a> { /// The overall status of the response. pub status: OcspCertStatus, /// If `status` is `CERT_STATUS_REVOKED`, the reason for the revocation. pub reason: OcspRevokedStatus, /// If `status` is `CERT_STATUS_REVOKED`, the time at which the certificate was revoked. pub revocation_time: Option<&'a Asn1GeneralizedTimeRef>, /// The time that this revocation check was performed. 
pub this_update: &'a Asn1GeneralizedTimeRef, /// The time at which this revocation check expires. pub next_update: &'a Asn1GeneralizedTimeRef, } impl<'a> OcspStatus<'a> { /// Checks validity of the `this_update` and `next_update` fields. /// /// The `nsec` parameter specifies an amount of slack time that will be used when comparing /// those times with the current time to account for delays and clock skew. /// /// The `maxsec` parameter limits the maximum age of the `this_update` parameter to prohibit /// very old responses. pub fn check_validity(&self, nsec: u32, maxsec: Option<u32>) -> Result<(), ErrorStack> { unsafe { cvt(ffi::OCSP_check_validity( self.this_update.as_ptr(), self.next_update.as_ptr(), nsec as c_long, maxsec.map(|n| n as c_long).unwrap_or(-1), )) .map(|_| ()) } } } foreign_type_and_impl_send_sync! { type CType = ffi::OCSP_BASICRESP; fn drop = ffi::OCSP_BASICRESP_free; pub struct OcspBasicResponse; pub struct OcspBasicResponseRef; } impl OcspBasicResponseRef { /// Verifies the validity of the response. /// /// The `certs` parameter contains a set of certificates that will be searched when locating the /// OCSP response signing certificate. Some responders do not include this in the response. pub fn verify( &self, certs: &StackRef<X509>, store: &X509StoreRef, flags: OcspFlag, ) -> Result<(), ErrorStack> { unsafe { cvt(ffi::OCSP_basic_verify( self.as_ptr(), certs.as_ptr(), store.as_ptr(), flags.bits(), )) .map(|_| ()) } } /// Looks up the status for the specified certificate ID. pub fn find_status<'a>(&'a self, id: &OcspCertIdRef) -> Option<OcspStatus<'a>> { unsafe { let mut status = ffi::V_OCSP_CERTSTATUS_UNKNOWN; let mut reason = ffi::OCSP_REVOKED_STATUS_NOSTATUS; let mut revocation_time = ptr::null_mut(); let mut this_update = ptr::null_mut(); let mut next_update = ptr::null_mut(); let r = ffi::OCSP_resp_find_status( self.as_ptr(), id.as_ptr(), &mut status, &mut reason, &mut revocation_time, &mut this_update, &mut next_update, ); if r == 1 { let revocation_time = Asn1GeneralizedTimeRef::from_const_ptr_opt(revocation_time); Some(OcspStatus { status: OcspCertStatus(status), reason: OcspRevokedStatus(reason), revocation_time, this_update: Asn1GeneralizedTimeRef::from_ptr(this_update), next_update: Asn1GeneralizedTimeRef::from_ptr(next_update), }) } else { None } } } } foreign_type_and_impl_send_sync! { type CType = ffi::OCSP_CERTID; fn drop = ffi::OCSP_CERTID_free; pub struct OcspCertId; pub struct OcspCertIdRef; } impl OcspCertId { /// Constructs a certificate ID for certificate `subject`. pub fn from_cert( digest: MessageDigest, subject: &X509Ref, issuer: &X509Ref, ) -> Result<OcspCertId, ErrorStack> { unsafe { cvt_p(ffi::OCSP_cert_to_id( digest.as_ptr(), subject.as_ptr(), issuer.as_ptr(), )) .map(OcspCertId) } } } foreign_type_and_impl_send_sync! { type CType = ffi::OCSP_RESPONSE; fn drop = ffi::OCSP_RESPONSE_free; pub struct OcspResponse; pub struct OcspResponseRef; } impl OcspResponse { /// Creates an OCSP response from the status and optional body. /// /// A body should only be provided if `status` is `RESPONSE_STATUS_SUCCESSFUL`. pub fn create( status: OcspResponseStatus, body: Option<&OcspBasicResponseRef>, ) -> Result<OcspResponse, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::OCSP_response_create( status.as_raw(), body.map(|r| r.as_ptr()).unwrap_or(ptr::null_mut()), )) .map(OcspResponse) } } from_der! { /// Deserializes a DER-encoded OCSP response. /// /// This corresponds to [`d2i_OCSP_RESPONSE`].
/// /// [`d2i_OCSP_RESPONSE`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_OCSP_RESPONSE.html from_der, OcspResponse, ffi::d2i_OCSP_RESPONSE } } impl OcspResponseRef { to_der! { /// Serializes the response to its standard DER encoding. /// /// This corresponds to [`i2d_OCSP_RESPONSE`]. /// /// [`i2d_OCSP_RESPONSE`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_OCSP_RESPONSE.html to_der, ffi::i2d_OCSP_RESPONSE } /// Returns the status of the response. pub fn status(&self) -> OcspResponseStatus { unsafe { OcspResponseStatus(ffi::OCSP_response_status(self.as_ptr())) } } /// Returns the basic response. /// /// This will only succeed if `status()` returns `RESPONSE_STATUS_SUCCESSFUL`. pub fn basic(&self) -> Result { unsafe { cvt_p(ffi::OCSP_response_get1_basic(self.as_ptr())).map(OcspBasicResponse) } } } foreign_type_and_impl_send_sync! { type CType = ffi::OCSP_REQUEST; fn drop = ffi::OCSP_REQUEST_free; pub struct OcspRequest; pub struct OcspRequestRef; } impl OcspRequest { pub fn new() -> Result { unsafe { ffi::init(); cvt_p(ffi::OCSP_REQUEST_new()).map(OcspRequest) } } from_der! { /// Deserializes a DER-encoded OCSP request. /// /// This corresponds to [`d2i_OCSP_REQUEST`]. /// /// [`d2i_OCSP_REQUEST`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_OCSP_REQUEST.html from_der, OcspRequest, ffi::d2i_OCSP_REQUEST } } impl OcspRequestRef { to_der! { /// Serializes the request to its standard DER encoding. /// /// This corresponds to [`i2d_OCSP_REQUEST`]. /// /// [`i2d_OCSP_REQUEST`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_OCSP_REQUEST.html to_der, ffi::i2d_OCSP_REQUEST } pub fn add_id(&mut self, id: OcspCertId) -> Result<&mut OcspOneReqRef, ErrorStack> { unsafe { let ptr = cvt_p(ffi::OCSP_request_add0_id(self.as_ptr(), id.as_ptr()))?; mem::forget(id); Ok(OcspOneReqRef::from_ptr_mut(ptr)) } } } foreign_type_and_impl_send_sync! { type CType = ffi::OCSP_ONEREQ; fn drop = ffi::OCSP_ONEREQ_free; pub struct OcspOneReq; pub struct OcspOneReqRef; } vendor/openssl/src/base64.rs0000664000175000017500000000777114160055207016567 0ustar mwhudsonmwhudson//! Base64 encoding support. use crate::cvt_n; use crate::error::ErrorStack; use libc::c_int; /// Encodes a slice of bytes to a base64 string. /// /// This corresponds to [`EVP_EncodeBlock`]. /// /// # Panics /// /// Panics if the input length or computed output length overflow a signed C integer. /// /// [`EVP_EncodeBlock`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_DecodeBlock.html pub fn encode_block(src: &[u8]) -> String { assert!(src.len() <= c_int::max_value() as usize); let src_len = src.len() as c_int; let len = encoded_len(src_len).unwrap(); let mut out = Vec::with_capacity(len as usize); // SAFETY: `encoded_len` ensures space for 4 output characters // for every 3 input bytes including padding and nul terminator. // `EVP_EncodeBlock` will write only single byte ASCII characters. // `EVP_EncodeBlock` will only write to not read from `out`. unsafe { let out_len = ffi::EVP_EncodeBlock(out.as_mut_ptr(), src.as_ptr(), src_len); out.set_len(out_len as usize); String::from_utf8_unchecked(out) } } /// Decodes a base64-encoded string to bytes. /// /// This corresponds to [`EVP_DecodeBlock`]. /// /// # Panics /// /// Panics if the input length or computed output length overflow a signed C integer. 
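///
/// # Examples
///
/// A minimal sketch of a round trip through the two block functions:
///
/// ```
/// use openssl::base64::{decode_block, encode_block};
///
/// let encoded = encode_block(b"openssl");
/// assert_eq!(decode_block(&encoded).unwrap(), b"openssl".to_vec());
/// ```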
/// /// [`EVP_DecodeBlock`]: https://www.openssl.org/docs/man1.1.1/man3/EVP_DecodeBlock.html pub fn decode_block(src: &str) -> Result<Vec<u8>, ErrorStack> { let src = src.trim(); // https://github.com/openssl/openssl/issues/12143 if src.is_empty() { return Ok(vec![]); } assert!(src.len() <= c_int::max_value() as usize); let src_len = src.len() as c_int; let len = decoded_len(src_len).unwrap(); let mut out = Vec::with_capacity(len as usize); // SAFETY: `decoded_len` ensures space for 3 output bytes // for every 4 input characters including padding. // `EVP_DecodeBlock` can write fewer bytes after stripping // leading and trailing whitespace, but never more. // `EVP_DecodeBlock` will only write to not read from `out`. unsafe { let out_len = cvt_n(ffi::EVP_DecodeBlock( out.as_mut_ptr(), src.as_ptr(), src_len, ))?; out.set_len(out_len as usize); } if src.ends_with('=') { out.pop(); if src.ends_with("==") { out.pop(); } } Ok(out) } fn encoded_len(src_len: c_int) -> Option<c_int> { let mut len = (src_len / 3).checked_mul(4)?; if src_len % 3 != 0 { len = len.checked_add(4)?; } len = len.checked_add(1)?; Some(len) } fn decoded_len(src_len: c_int) -> Option<c_int> { let mut len = (src_len / 4).checked_mul(3)?; if src_len % 4 != 0 { len = len.checked_add(3)?; } Some(len) } #[cfg(test)] mod tests { use super::*; #[test] fn test_encode_block() { assert_eq!("".to_string(), encode_block(b"")); assert_eq!("Zg==".to_string(), encode_block(b"f")); assert_eq!("Zm8=".to_string(), encode_block(b"fo")); assert_eq!("Zm9v".to_string(), encode_block(b"foo")); assert_eq!("Zm9vYg==".to_string(), encode_block(b"foob")); assert_eq!("Zm9vYmE=".to_string(), encode_block(b"fooba")); assert_eq!("Zm9vYmFy".to_string(), encode_block(b"foobar")); } #[test] fn test_decode_block() { assert_eq!(b"".to_vec(), decode_block("").unwrap()); assert_eq!(b"f".to_vec(), decode_block("Zg==").unwrap()); assert_eq!(b"fo".to_vec(), decode_block("Zm8=").unwrap()); assert_eq!(b"foo".to_vec(), decode_block("Zm9v").unwrap()); assert_eq!(b"foob".to_vec(), decode_block("Zm9vYg==").unwrap()); assert_eq!(b"fooba".to_vec(), decode_block("Zm9vYmE=").unwrap()); assert_eq!(b"foobar".to_vec(), decode_block("Zm9vYmFy").unwrap()); } #[test] fn test_strip_whitespace() { assert_eq!(b"foobar".to_vec(), decode_block(" Zm9vYmFy\n").unwrap()); assert_eq!(b"foob".to_vec(), decode_block(" Zm9vYg==\n").unwrap()); } } vendor/openssl/src/dh.rs0000664000175000017500000003427014160055207016070 0ustar mwhudsonmwhudsonuse cfg_if::cfg_if; use foreign_types::{ForeignType, ForeignTypeRef}; use std::mem; use std::ptr; use crate::bn::{BigNum, BigNumRef}; use crate::error::ErrorStack; use crate::pkey::{HasParams, HasPrivate, HasPublic, Params, Private}; use crate::{cvt, cvt_p}; generic_foreign_type_and_impl_send_sync! { type CType = ffi::DH; fn drop = ffi::DH_free; pub struct Dh<T>; pub struct DhRef<T>; } impl<T> DhRef<T> where T: HasParams, { to_pem! { /// Serializes the parameters into a PEM-encoded PKCS#3 DHparameter structure. /// /// The output will have a header of `-----BEGIN DH PARAMETERS-----`. /// /// This corresponds to [`PEM_write_bio_DHparams`]. /// /// [`PEM_write_bio_DHparams`]: https://www.openssl.org/docs/manmaster/man3/PEM_write_bio_DHparams.html params_to_pem, ffi::PEM_write_bio_DHparams } to_der! { /// Serializes the parameters into a DER-encoded PKCS#3 DHparameter structure. /// /// This corresponds to [`i2d_DHparams`].
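The two length helpers mirror the 3-byte-to-4-character block arithmetic used by `EVP_EncodeBlock`/`EVP_DecodeBlock`. A small, self-contained usage sketch of the public functions (independent of those private helpers):

```rust
use openssl::base64::{decode_block, encode_block};

fn main() {
    let data = b"open sesame";
    let encoded = encode_block(data);
    // 11 input bytes -> ceil(11 / 3) * 4 = 16 base64 characters.
    assert_eq!(encoded.len(), 16);
    assert_eq!(encoded, "b3BlbiBzZXNhbWU=");

    // Padding is stripped on the way back, so the round trip is exact.
    let decoded = decode_block(&encoded).unwrap();
    assert_eq!(decoded, data.to_vec());
}
```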
/// /// [`i2d_DHparams`]: https://www.openssl.org/docs/man1.1.0/crypto/i2d_DHparams.html params_to_der, ffi::i2d_DHparams } } impl Dh<Params> { pub fn from_params(p: BigNum, g: BigNum, q: BigNum) -> Result<Dh<Params>, ErrorStack> { Self::from_pqg(p, Some(q), g) } /// Creates a DH instance based upon the given primes and generator params. /// /// This corresponds to [`DH_new`] and [`DH_set0_pqg`]. /// /// [`DH_new`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_new.html /// [`DH_set0_pqg`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_set0_pqg.html pub fn from_pqg( prime_p: BigNum, prime_q: Option<BigNum>, generator: BigNum, ) -> Result<Dh<Params>, ErrorStack> { unsafe { let dh = Dh::from_ptr(cvt_p(ffi::DH_new())?); cvt(DH_set0_pqg( dh.0, prime_p.as_ptr(), prime_q.as_ref().map_or(ptr::null_mut(), |q| q.as_ptr()), generator.as_ptr(), ))?; mem::forget((prime_p, prime_q, generator)); Ok(dh) } } /// Sets the private key on the DH object and recomputes the public key. pub fn set_private_key(self, priv_key: BigNum) -> Result<Dh<Private>, ErrorStack> { unsafe { let dh_ptr = self.0; cvt(DH_set0_key(dh_ptr, ptr::null_mut(), priv_key.as_ptr()))?; mem::forget(priv_key); cvt(ffi::DH_generate_key(dh_ptr))?; mem::forget(self); Ok(Dh::from_ptr(dh_ptr)) } } /// Generates DH params based on the given `prime_len` and a fixed `generator` value. /// /// This corresponds to [`DH_generate_parameters_ex`]. /// /// [`DH_generate_parameters_ex`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_generate_parameters.html pub fn generate_params(prime_len: u32, generator: u32) -> Result<Dh<Params>, ErrorStack> { unsafe { let dh = Dh::from_ptr(cvt_p(ffi::DH_new())?); cvt(ffi::DH_generate_parameters_ex( dh.0, prime_len as i32, generator as i32, ptr::null_mut(), ))?; Ok(dh) } } /// Generates a public and a private key based on the DH params. /// /// This corresponds to [`DH_generate_key`]. /// /// [`DH_generate_key`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_generate_key.html pub fn generate_key(self) -> Result<Dh<Private>, ErrorStack> { unsafe { let dh_ptr = self.0; cvt(ffi::DH_generate_key(dh_ptr))?; mem::forget(self); Ok(Dh::from_ptr(dh_ptr)) } } from_pem! { /// Deserializes a PEM-encoded PKCS#3 DHparameters structure. /// /// The input should have a header of `-----BEGIN DH PARAMETERS-----`. /// /// This corresponds to [`PEM_read_bio_DHparams`]. /// /// [`PEM_read_bio_DHparams`]: https://www.openssl.org/docs/man1.0.2/crypto/PEM_read_bio_DHparams.html params_from_pem, Dh<Params>, ffi::PEM_read_bio_DHparams } from_der! { /// Deserializes a DER-encoded PKCS#3 DHparameters structure. /// /// This corresponds to [`d2i_DHparams`]. /// /// [`d2i_DHparams`]: https://www.openssl.org/docs/man1.1.0/crypto/d2i_DHparams.html params_from_der, Dh<Params>, ffi::d2i_DHparams } /// Requires OpenSSL 1.0.2 or newer. #[cfg(any(ossl102, ossl110))] pub fn get_1024_160() -> Result<Dh<Params>, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::DH_get_1024_160()).map(|p| Dh::from_ptr(p)) } } /// Requires OpenSSL 1.0.2 or newer. #[cfg(any(ossl102, ossl110))] pub fn get_2048_224() -> Result<Dh<Params>, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::DH_get_2048_224()).map(|p| Dh::from_ptr(p)) } } /// Requires OpenSSL 1.0.2 or newer. #[cfg(any(ossl102, ossl110))] pub fn get_2048_256() -> Result<Dh<Params>, ErrorStack> { unsafe { ffi::init(); cvt_p(ffi::DH_get_2048_256()).map(|p| Dh::from_ptr(p)) } } } impl<T> Dh<T> where T: HasParams, { /// Returns the prime `p` from the DH instance. /// /// This corresponds to [`DH_get0_pqg`].
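A brief sketch of how the constructors above are typically combined: generate (or load) parameters, serialize them if desired, then consume them to produce a key pair. The 512-bit size is only to keep the example fast; it is far too small for real deployments, which should use at least 2048 bits or a standardized group.

```rust
use openssl::dh::Dh;
use openssl::error::ErrorStack;

fn main() -> Result<(), ErrorStack> {
    // Generate fresh parameters with generator 2 (slow for realistic sizes).
    let params = Dh::generate_params(512, 2)?;
    let pem = params.params_to_pem()?;
    println!("{}", String::from_utf8_lossy(&pem));

    // Consume the parameters to derive a public/private key pair.
    let key = params.generate_key()?;
    println!("public key: {}", key.public_key().to_hex_str()?);
    Ok(())
}
```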
/// /// [`DH_get0_pqg`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_get0_pqg.html pub fn prime_p(&self) -> &BigNumRef { let mut p = ptr::null(); unsafe { DH_get0_pqg(self.as_ptr(), &mut p, ptr::null_mut(), ptr::null_mut()); BigNumRef::from_ptr(p as *mut _) } } /// Returns the prime `q` from the DH instance. /// /// This corresponds to [`DH_get0_pqg`]. /// /// [`DH_get0_pqg`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_get0_pqg.html pub fn prime_q(&self) -> Option<&BigNumRef> { let mut q = ptr::null(); unsafe { DH_get0_pqg(self.as_ptr(), ptr::null_mut(), &mut q, ptr::null_mut()); if q.is_null() { None } else { Some(BigNumRef::from_ptr(q as *mut _)) } } } /// Returns the generator from the DH instance. /// /// This corresponds to [`DH_get0_pqg`]. /// /// [`DH_get0_pqg`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_get0_pqg.html pub fn generator(&self) -> &BigNumRef { let mut g = ptr::null(); unsafe { DH_get0_pqg(self.as_ptr(), ptr::null_mut(), ptr::null_mut(), &mut g); BigNumRef::from_ptr(g as *mut _) } } } impl<T> DhRef<T> where T: HasPublic, { /// Returns the public key from the DH instance. /// /// This corresponds to [`DH_get0_key`]. /// /// [`DH_get0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_get0_key.html pub fn public_key(&self) -> &BigNumRef { let mut pub_key = ptr::null(); unsafe { DH_get0_key(self.as_ptr(), &mut pub_key, ptr::null_mut()); BigNumRef::from_ptr(pub_key as *mut _) } } } impl<T> DhRef<T> where T: HasPrivate, { /// Computes a shared secret from the own private key and the given `public_key`. /// /// This corresponds to [`DH_compute_key`]. /// /// [`DH_compute_key`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_compute_key.html pub fn compute_key(&self, public_key: &BigNumRef) -> Result<Vec<u8>, ErrorStack> { unsafe { let key_len = ffi::DH_size(self.as_ptr()); let mut key = vec![0u8; key_len as usize]; cvt(ffi::DH_compute_key( key.as_mut_ptr(), public_key.as_ptr(), self.as_ptr(), ))?; Ok(key) } } /// Returns the private key from the DH instance. /// /// This corresponds to [`DH_get0_key`]. /// /// [`DH_get0_key`]: https://www.openssl.org/docs/man1.1.0/crypto/DH_get0_key.html pub fn private_key(&self) -> &BigNumRef { let mut priv_key = ptr::null(); unsafe { DH_get0_key(self.as_ptr(), ptr::null_mut(), &mut priv_key); BigNumRef::from_ptr(priv_key as *mut _) } } } cfg_if!
{ if #[cfg(any(ossl110, libressl270))] { use ffi::{DH_set0_pqg, DH_get0_pqg, DH_get0_key, DH_set0_key}; } else { #[allow(bad_style)] unsafe fn DH_set0_pqg( dh: *mut ffi::DH, p: *mut ffi::BIGNUM, q: *mut ffi::BIGNUM, g: *mut ffi::BIGNUM, ) -> ::libc::c_int { (*dh).p = p; (*dh).q = q; (*dh).g = g; 1 } #[allow(bad_style)] unsafe fn DH_get0_pqg( dh: *mut ffi::DH, p: *mut *const ffi::BIGNUM, q: *mut *const ffi::BIGNUM, g: *mut *const ffi::BIGNUM, ) { if !p.is_null() { *p = (*dh).p; } if !q.is_null() { *q = (*dh).q; } if !g.is_null() { *g = (*dh).g; } } #[allow(bad_style)] unsafe fn DH_set0_key( dh: *mut ffi::DH, pub_key: *mut ffi::BIGNUM, priv_key: *mut ffi::BIGNUM, ) -> ::libc::c_int { (*dh).pub_key = pub_key; (*dh).priv_key = priv_key; 1 } #[allow(bad_style)] unsafe fn DH_get0_key( dh: *mut ffi::DH, pub_key: *mut *const ffi::BIGNUM, priv_key: *mut *const ffi::BIGNUM, ) { if !pub_key.is_null() { *pub_key = (*dh).pub_key; } if !priv_key.is_null() { *priv_key = (*dh).priv_key; } } } } #[cfg(test)] mod tests { use crate::bn::BigNum; use crate::dh::Dh; use crate::ssl::{SslContext, SslMethod}; #[test] #[cfg(ossl102)] fn test_dh_rfc5114() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); let dh2 = Dh::get_2048_224().unwrap(); ctx.set_tmp_dh(&dh2).unwrap(); let dh3 = Dh::get_2048_256().unwrap(); ctx.set_tmp_dh(&dh3).unwrap(); } #[test] fn test_dh_params() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); let prime_p = BigNum::from_hex_str( "87A8E61DB4B6663CFFBBD19C651959998CEEF608660DD0F25D2CEED4435E3B00E00DF8F1D61957D4FAF7DF\ 4561B2AA3016C3D91134096FAA3BF4296D830E9A7C209E0C6497517ABD5A8A9D306BCF67ED91F9E6725B47\ 58C022E0B1EF4275BF7B6C5BFC11D45F9088B941F54EB1E59BB8BC39A0BF12307F5C4FDB70C581B23F76B6\ 3ACAE1CAA6B7902D52526735488A0EF13C6D9A51BFA4AB3AD8347796524D8EF6A167B5A41825D967E144E5\ 140564251CCACB83E6B486F6B3CA3F7971506026C0B857F689962856DED4010ABD0BE621C3A3960A54E710\ C375F26375D7014103A4B54330C198AF126116D2276E11715F693877FAD7EF09CADB094AE91E1A1597", ).unwrap(); let prime_q = BigNum::from_hex_str( "3FB32C9B73134D0B2E77506660EDBD484CA7B18F21EF205407F4793A1A0BA12510DBC15077BE463FFF4FED\ 4AAC0BB555BE3A6C1B0C6B47B1BC3773BF7E8C6F62901228F8C28CBB18A55AE31341000A650196F931C77A\ 57F2DDF463E5E9EC144B777DE62AAAB8A8628AC376D282D6ED3864E67982428EBC831D14348F6F2F9193B5\ 045AF2767164E1DFC967C1FB3F2E55A4BD1BFFE83B9C80D052B985D182EA0ADB2A3B7313D3FE14C8484B1E\ 052588B9B7D2BBD2DF016199ECD06E1557CD0915B3353BBB64E0EC377FD028370DF92B52C7891428CDC67E\ B6184B523D1DB246C32F63078490F00EF8D647D148D47954515E2327CFEF98C582664B4C0F6CC41659", ).unwrap(); let generator = BigNum::from_hex_str( "8CF83642A709A097B447997640129DA299B1A47D1EB3750BA308B0FE64F5FBD3", ) .unwrap(); let dh = Dh::from_params( prime_p.to_owned().unwrap(), generator.to_owned().unwrap(), prime_q.to_owned().unwrap(), ) .unwrap(); ctx.set_tmp_dh(&dh).unwrap(); assert_eq!(dh.prime_p(), &prime_p); assert_eq!(dh.prime_q().unwrap(), &prime_q); assert_eq!(dh.generator(), &generator); } #[test] #[cfg(ossl102)] fn test_dh_stored_restored() { let dh1 = Dh::get_2048_256().unwrap(); let key1 = dh1.generate_key().unwrap(); let dh2 = Dh::get_2048_256().unwrap(); let key2 = dh2 .set_private_key(key1.private_key().to_owned().unwrap()) .unwrap(); assert_eq!(key1.public_key(), key2.public_key()); assert_eq!(key1.private_key(), key2.private_key()); } #[test] fn test_dh_from_pem() { let mut ctx = SslContext::builder(SslMethod::tls()).unwrap(); let params = include_bytes!("../test/dhparams.pem"); let dh = Dh::params_from_pem(params).unwrap(); 
ctx.set_tmp_dh(&dh).unwrap(); } #[test] fn test_dh_from_der() { let params = include_bytes!("../test/dhparams.pem"); let dh = Dh::params_from_pem(params).unwrap(); let der = dh.params_to_der().unwrap(); Dh::params_from_der(&der).unwrap(); } #[test] #[cfg(ossl102)] fn test_dh_generate_key_compute_key() { let dh1 = Dh::get_2048_224().unwrap().generate_key().unwrap(); let dh2 = Dh::get_2048_224().unwrap().generate_key().unwrap(); let shared_a = dh1.compute_key(dh2.public_key()).unwrap(); let shared_b = dh2.compute_key(dh1.public_key()).unwrap(); assert_eq!(shared_a, shared_b); } #[test] fn test_dh_generate_params_generate_key_compute_key() { let dh_params1 = Dh::generate_params(512, 2).unwrap(); let dh_params2 = Dh::from_pqg( dh_params1.prime_p().to_owned().unwrap(), None, dh_params1.generator().to_owned().unwrap(), ) .unwrap(); let dh1 = dh_params1.generate_key().unwrap(); let dh2 = dh_params2.generate_key().unwrap(); let shared_a = dh1.compute_key(dh2.public_key()).unwrap(); let shared_b = dh2.compute_key(dh1.public_key()).unwrap(); assert_eq!(shared_a, shared_b); } } vendor/openssl/src/version.rs0000664000175000017500000001104214172417313017155 0ustar mwhudsonmwhudson// Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // use cfg_if::cfg_if; use std::ffi::CStr; cfg_if! { if #[cfg(any(ossl110, libressl271))] { use ffi::{ OPENSSL_VERSION, OPENSSL_CFLAGS, OPENSSL_BUILT_ON, OPENSSL_PLATFORM, OPENSSL_DIR, OpenSSL_version_num, OpenSSL_version, }; } else { use ffi::{ SSLEAY_VERSION as OPENSSL_VERSION, SSLEAY_CFLAGS as OPENSSL_CFLAGS, SSLEAY_BUILT_ON as OPENSSL_BUILT_ON, SSLEAY_PLATFORM as OPENSSL_PLATFORM, SSLEAY_DIR as OPENSSL_DIR, SSLeay as OpenSSL_version_num, SSLeay_version as OpenSSL_version, }; } } /// OPENSSL_VERSION_NUMBER is a numeric release version identifier: /// /// `MNNFFPPS: major minor fix patch status` /// /// The status nibble has one of the values 0 for development, 1 to e for betas 1 to 14, and f for release. /// /// for example /// /// `0x000906000 == 0.9.6 dev` /// `0x000906023 == 0.9.6b beta 3` /// `0x00090605f == 0.9.6e release` /// /// Versions prior to 0.9.3 have identifiers < 0x0930. Versions between 0.9.3 and 0.9.5 had a version identifier with this interpretation: /// /// `MMNNFFRBB major minor fix final beta/patch` /// /// for example /// /// `0x000904100 == 0.9.4 release` /// `0x000905000 == 0.9.5 dev` /// /// Version 0.9.5a had an interim interpretation that is like the current one, except the patch level got the highest bit set, to keep continuity. The number was therefore 0x0090581f /// /// The return value of this function can be compared to the macro to make sure that the correct version of the library has been loaded, especially when using DLLs on Windows systems. pub fn number() -> i64 { unsafe { OpenSSL_version_num() as i64 } } /// The text variant of the version number and the release date. For example, "OpenSSL 0.9.5a 1 Apr 2000". 
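The `MNNFFPPS` packing described in the comment above can be unpacked with a few shifts. This is an illustrative sketch only, and it applies to the pre-3.0 encoding that the comment documents:

```rust
use openssl::version;

fn main() {
    let n = version::number() as u64;
    // MNNFFPPS: 4-bit major, 8-bit minor, 8-bit fix, 8-bit patch, 4-bit status.
    let major = (n >> 28) & 0xf;
    let minor = (n >> 20) & 0xff;
    let fix = (n >> 12) & 0xff;
    let patch = (n >> 4) & 0xff;
    let status = n & 0xf;
    println!(
        "runtime library: {} ({}.{}.{}, patch {}, status {:x})",
        version::version(),
        major, minor, fix, patch, status
    );
}
```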
pub fn version() -> &'static str { unsafe { CStr::from_ptr(OpenSSL_version(OPENSSL_VERSION)) .to_str() .unwrap() } } /// The compiler flags set for the compilation process in the form "compiler: ..." if available or /// "compiler: information not available" otherwise. pub fn c_flags() -> &'static str { unsafe { CStr::from_ptr(OpenSSL_version(OPENSSL_CFLAGS)) .to_str() .unwrap() } } /// The date of the build process in the form "built on: ..." if available or "built on: date not available" otherwise. pub fn built_on() -> &'static str { unsafe { CStr::from_ptr(OpenSSL_version(OPENSSL_BUILT_ON)) .to_str() .unwrap() } } /// The "Configure" target of the library build in the form "platform: ..." if available or "platform: information not available" otherwise. pub fn platform() -> &'static str { unsafe { CStr::from_ptr(OpenSSL_version(OPENSSL_PLATFORM)) .to_str() .unwrap() } } /// The "OPENSSLDIR" setting of the library build in the form "OPENSSLDIR: "..."" if available or "OPENSSLDIR: N/A" otherwise. pub fn dir() -> &'static str { unsafe { CStr::from_ptr(OpenSSL_version(OPENSSL_DIR)) .to_str() .unwrap() } } /// This test ensures that we do not segfault when calling the functions of this module /// and that the strings respect a reasonable format. #[test] fn test_versions() { println!("Number: '{}'", number()); println!("Version: '{}'", version()); println!("C flags: '{}'", c_flags()); println!("Built on: '{}'", built_on()); println!("Platform: '{}'", platform()); println!("Dir: '{}'", dir()); #[cfg(not(libressl))] fn expected_name() -> &'static str { "OpenSSL" } #[cfg(libressl)] fn expected_name() -> &'static str { "LibreSSL" } assert!(number() > 0); assert!(version().starts_with(expected_name())); assert!(c_flags().starts_with("compiler:")); // some distributions patch out dates out of openssl so that the builds are reproducible if !built_on().is_empty() { assert!(built_on().starts_with("built on:")); } assert!(dir().starts_with("OPENSSLDIR:")); } vendor/openssl/src/lib.rs0000664000175000017500000001356014172417313016245 0ustar mwhudsonmwhudson//! Bindings to OpenSSL //! //! This crate provides a safe interface to the popular OpenSSL cryptography library. OpenSSL versions 1.0.1 through //! 1.1.1 and LibreSSL versions 2.5 through 3.4.0 are supported. //! //! # Building //! //! Both OpenSSL libraries and headers are required to build this crate. There are multiple options available to locate //! OpenSSL. //! //! ## Vendored //! //! If the `vendored` Cargo feature is enabled, the `openssl-src` crate will be used to compile and statically link to //! a copy of OpenSSL. The build process requires a C compiler, perl, and make. The OpenSSL version will generally track //! the newest OpenSSL release, and changes to the version are *not* considered breaking changes. //! //! ```toml //! [dependencies] //! openssl = { version = "0.10", features = ["vendored"] } //! ``` //! //! The vendored copy will not be configured to automatically find the system's root certificates, but the //! `openssl-probe` crate can be used to do that instead. //! //! ## Automatic //! //! The `openssl-sys` crate will automatically detect OpenSSL installations via Homebrew on macOS and vcpkg on Windows. //! Additionally, it will use `pkg-config` on Unix-like systems to find the system installation. //! //! ```not_rust //! # macOS (Homebrew) //! $ brew install openssl@1.1 //! //! # macOS (MacPorts) //! $ sudo port install openssl //! //! # macOS (pkgsrc) //! $ sudo pkgin install openssl //! //! # Arch Linux //! 
$ sudo pacman -S pkg-config openssl //! //! # Debian and Ubuntu //! $ sudo apt-get install pkg-config libssl-dev //! //! # Fedora //! $ sudo dnf install pkg-config openssl-devel //! ``` //! //! ## Manual //! //! A set of environment variables can be used to point `openssl-sys` towards an OpenSSL installation. They will //! override the automatic detection logic. //! //! * `OPENSSL_DIR` - If specified, the directory of an OpenSSL installation. The directory should contain `lib` and //! `include` subdirectories containing the libraries and headers respectively. //! * `OPENSSL_LIB_DIR` and `OPENSSL_INCLUDE_DIR` - If specified, the directories containing the OpenSSL libraries and //! headers respectively. This can be used if the OpenSSL installation is split in a nonstandard directory layout. //! * `OPENSSL_STATIC` - If set, the crate will statically link to OpenSSL rather than dynamically link. //! * `OPENSSL_LIBS` - If set, a `:`-separated list of library names to link to (e.g. `ssl:crypto`). This can be used //! if nonstandard library names were used for whatever reason. //! * `OPENSSL_NO_VENDOR` - If set, always find OpenSSL in the system, even if the `vendored` feature is enabled. //! //! Additionally, these variables can be prefixed with the upper-cased target architecture (e.g. //! `X86_64_UNKNOWN_LINUX_GNU_OPENSSL_DIR`), which can be useful when cross compiling. //! //! # Feature Detection //! //! APIs have been added to and removed from the various supported OpenSSL versions, and this library exposes the //! functionality available in the version being linked against. This means that methods, constants, and even modules //! will be present when building against one version of OpenSSL but not when building against another! APIs will //! document any version-specific availability restrictions. //! //! A build script can be used to detect the OpenSSL or LibreSSL version at compile time if needed. The `openssl-sys` //! crate propagates the version via the `DEP_OPENSSL_VERSION_NUMBER` and `DEP_OPENSSL_LIBRESSL_VERSION_NUMBER` //! environment variables to build scripts. The version format is a hex-encoding of the OpenSSL release version: //! `0xMNNFFPPS`. For example, version 1.0.2g's encoding is `0x1_00_02_07_0`. //! //! For example, let's say we want to adjust the TLSv1.3 cipher suites used by a client, but also want to compile //! against OpenSSL versions that don't support TLSv1.3: //! //! Cargo.toml: //! //! ```toml //! [dependencies] //! openssl-sys = "0.9" //! openssl = "0.10" //! ``` //! //! build.rs: //! //! ``` //! use std::env; //! //! fn main() { //! if let Ok(v) = env::var("DEP_OPENSSL_VERSION_NUMBER") { //! let version = u64::from_str_radix(&v, 16).unwrap(); //! //! if version >= 0x1_01_01_00_0 { //! println!("cargo:rustc-cfg=openssl111"); //! } //! } //! } //! ``` //! //! lib.rs: //! //! ``` //! use openssl::ssl::{SslConnector, SslMethod}; //! //! let mut ctx = SslConnector::builder(SslMethod::tls()).unwrap(); //! //! // set_ciphersuites was added in OpenSSL 1.1.1, so we can only call it when linking against that version //! #[cfg(openssl111)] //! ctx.set_ciphersuites("TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256").unwrap(); //! 
``` #![doc(html_root_url = "https://docs.rs/openssl/0.10")] #![warn(rust_2018_idioms)] #[doc(inline)] pub use ffi::init; use libc::c_int; use crate::error::ErrorStack; #[macro_use] mod macros; mod bio; #[macro_use] mod util; pub mod aes; pub mod asn1; pub mod base64; pub mod bn; #[cfg(all(not(libressl), not(osslconf = "OPENSSL_NO_CMS")))] pub mod cms; pub mod conf; pub mod derive; pub mod dh; pub mod dsa; pub mod ec; pub mod ecdsa; pub mod encrypt; pub mod envelope; pub mod error; pub mod ex_data; #[cfg(not(any(libressl, ossl300)))] pub mod fips; pub mod hash; pub mod memcmp; pub mod nid; #[cfg(not(osslconf = "OPENSSL_NO_OCSP"))] pub mod ocsp; pub mod pkcs12; pub mod pkcs5; pub mod pkcs7; pub mod pkey; pub mod rand; pub mod rsa; pub mod sha; pub mod sign; pub mod srtp; pub mod ssl; pub mod stack; pub mod string; pub mod symm; pub mod version; pub mod x509; fn cvt_p<T>(r: *mut T) -> Result<*mut T, ErrorStack> { if r.is_null() { Err(ErrorStack::get()) } else { Ok(r) } } fn cvt(r: c_int) -> Result<c_int, ErrorStack> { if r <= 0 { Err(ErrorStack::get()) } else { Ok(r) } } fn cvt_n(r: c_int) -> Result<c_int, ErrorStack> { if r < 0 { Err(ErrorStack::get()) } else { Ok(r) } } vendor/openssl/src/string.rs0000664000175000017500000000363214160055207017001 0ustar mwhudsonmwhudsonuse foreign_types::ForeignTypeRef; use libc::{c_char, c_void}; use std::convert::AsRef; use std::ffi::CStr; use std::fmt; use std::ops::Deref; use std::str; use crate::stack::Stackable; foreign_type_and_impl_send_sync! { type CType = c_char; fn drop = free; pub struct OpensslString; pub struct OpensslStringRef; } impl fmt::Display for OpensslString { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(&**self, f) } } impl fmt::Debug for OpensslString { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Debug::fmt(&**self, f) } } impl Stackable for OpensslString { type StackType = ffi::stack_st_OPENSSL_STRING; } impl AsRef<str> for OpensslString { fn as_ref(&self) -> &str { &**self } } impl AsRef<[u8]> for OpensslString { fn as_ref(&self) -> &[u8] { self.as_bytes() } } impl Deref for OpensslStringRef { type Target = str; fn deref(&self) -> &str { unsafe { let slice = CStr::from_ptr(self.as_ptr()).to_bytes(); str::from_utf8_unchecked(slice) } } } impl AsRef<str> for OpensslStringRef { fn as_ref(&self) -> &str { &*self } } impl AsRef<[u8]> for OpensslStringRef { fn as_ref(&self) -> &[u8] { self.as_bytes() } } impl fmt::Display for OpensslStringRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(&**self, f) } } impl fmt::Debug for OpensslStringRef { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Debug::fmt(&**self, f) } } #[cfg(not(ossl110))] unsafe fn free(buf: *mut c_char) { ::ffi::CRYPTO_free(buf as *mut c_void); } #[cfg(ossl110)] unsafe fn free(buf: *mut c_char) { ffi::CRYPTO_free( buf as *mut c_void, concat!(file!(), "\0").as_ptr() as *const c_char, line!() as ::libc::c_int, ); } vendor/openssl/test/0000775000175000017500000000000014160055207015311 5ustar mwhudsonmwhudsonvendor/openssl/test/dsa.pem0000664000175000017500000000123414160055207016563 0ustar mwhudsonmwhudson-----BEGIN DSA PRIVATE KEY----- MIIBuwIBAAKBgQCkKe/jtYKJNQafaE7kg2aaJOEPUV0Doi451jkXHp5UfLh6+t42 eabSGkE9WBAlILgaB8yHckLe9+zozN39+SUDp94kb2r38/8w/9Ffhbsep9uiyOj2 ZRQur6SkpKQDKcnAd6IMZXZcvdSgPC90A6qraYUZKq7Csjn63gbC+IvXHwIVAIgS PE43lXD8/rGYxos4cxCgGGAxAoGASMV56WhLvVQtWMVI36WSIxbZnC2EsnNIKeVW yXnP/OmPJ2mdezG7i1alcwsO2TnSLbvjvGPlyzIqZzHvWC8EmDqsfbU+n8we/Eal sm5nloC8m9ECWpbTzbNdvrAAj9UPVWjcDwg7grAGGysh6lGbBv5P+4zL/niq1UiE
LnKcifgCgYEAo6mAasO0+MVcu8shxxUXXNeTLsZ8NB/BIx9EZ/dzE23ivNW8dq1A eecAAYhssI2m/CspQvyKw+seCvg4FccxJgB3+mGOe+blFHwO3eAwoyRn/t3DZDHh FjxKKRsQdy4BkZv+vhTyIYYCw0iPZ5Wfln+pyGGTveIDED1MPG+J6c8CFCJAUlEl 4nHvbC15xLXXpd46zycY -----END DSA PRIVATE KEY----- vendor/openssl/test/key.der0000664000175000017500000000225114160055207016575 0ustar mwhudsonmwhudson0‚¥‚¨ô%ŒD³£P…þ#‘‡Ðx6Iÿ-GÓåÞl6ü–¹?ò7Þ¿ï3ºeÅòä·JI©Ê"do ôÕ4‘N¨å?¿ÕrP€¡–’К±šŠ6bOõBÈõê™ÌÍt¹ äé_Z%%õ¼¬5SRÂ!ÀÔÈWú.Ö¿ÊÒxªúNá:HexY›T«`^U|ë ‘®Ô’Ö'“Ù«° Žf¥¡“gÚ“  Uƒ„ሹ·Îû–ªõÀIlÔeÎÈÝFlÞ´;¡³nvz¡=ˆ“NÅ׌KRVªþºê^zæd‘±Ó3ÛÏQÍæÝ”Ü¤ß„ïæNPY\ öЭ;6è* 5‚¢ã胉¥è•ÌéÍ¿“ #AzK\±åtî¥'÷äß= ¹cïævñå\¸½M±þ)m³÷xËÁèPªcfn+c2nPI^ÇnNÔª‡c÷³¥v ¾øáMÜ•¯ÐŹ$ýºþ&ñ¹WÊß_XÄ#jÀ=U¯‚)€~ÀÐ?ž;Á ÌdÛ89+ ²X—Ûù¾—íÌÞÄá3ê¦58èa‘ØÍùWá¦Mçµü_Yƃ—ó¤Û_lëcJÌÝÅÌ`,9&ðQ¶+Wùš=È ,á4ó¾ä‹ÅÞ6Æî`“ßø>(ÊdD-[—…W©>Þj5¯aÇEX#¢…–¬ïß!Üû­m/pa½»pQ ˆaïôa%½úô•¯‘4¨Ò«o,WE„®\¶ËL¯¦®3§†ÊÕ‚0ñ€ã»'~ï>.øÈnÄå(ª S¯Ñùšd÷ ÿö®Æœ­À4ÏUà|鹆Ø`¦J—b]²×«–Š Cç ñ5~j¿­ú4yêz&{é).צŽÏm‡Qe¦ÖòÓûuôÓΈ ÞþЄz‹n‰Y´k­îIð0 ‹¡ñøcf‡>eÒTOÿ²æU<úÕZÙ¼ $ÎÜKQYHEûjÐ%¯¸ø $²+Ä´„ ô@—7úb W{Æå ÅM–¤Jcì¢õ&í$ITd ‚‹íâK$¿üv'¿ÆNO8 –Ë_ÊÓ(oæTy0èr?6vg=:Ò[^“âìWõy¯·ùöPšTh~!Tz>G"uÑ7›‹½˜ž»jWÚÞÉéÜW`KW ç‘¿ÐgÌzäÖÐX7b²­‡È£/TV—» ö;£^4yfND@øy«›èß|õ^ˆlL]òø—Ì•»¬7Ö‚kdœrÀ iv[öÛxâ}ÞVõª.‹ #e.£¬ùWs–W”uÄWçMlt¢¥‡çúÉ 1ƒ‰gÉRõ…)Kò@þMÕ˜â'—qòòš×¢5&þ"íשÄWw¸36Ó­<l1ÐÔáÛbq]ÕCÞ(¢Õ3ž[µñ+Rœ€:ô‘Ey„A ¢ìžÚ\ñ ‡èïÔ’±@ ³#âLjΡ‡ôÙeÜmÁ^ò­tTšõPí±¼«jEe+ø’y¤îº®?\ŒâºãÛ†hË2$ãðŸK1k¾å¨`«@?{?¸âØ•°Ê ëçËQìaèß÷C×(^€¯Ø`ˆü¹²§10:2â3ùF|‹B8· J²·f„0r†a‚h41íø:¾@†=$ÈnËš›Ü7"—÷ zš%»ì½àTËkIG-‡dtìGq¢V*Hà°&yêW©0”.«3@.ï Yµšî#’Yxq-&Ñ‚Âvç!Šô¸ë£r¡á±ó´×ÇÈŠƒ6Zæø¾òŽG¦VqªIGÞSh•_ÃCö¥eà•ÅM·×9Âtby¹Íc|ÜÚ´Qc ©Ü¸ê¨ÍÀw¨Ç[̈‡—‘¶f¬ÞßÉzUPØH$—½vÉÔ­¿+:-–ªXTüßK­ µ‘AéiÏU,&lgP¬„Æ P=oU~«~èFA;oSMJÅÏÿªLßFUâ$þ:-Póq”°5¹:¬ÐrYn¨¬A Ã@PºËâÌü¥.aØp5x±ãq±Î‹dG0°Ÿ‰‰F~Ò*K,¢ ÞÖº:‡gÔÛWx2v>“dÏ£›O&±ˆWº™~«‡²@꽌õ›º¨* |SúySÜ:&Æ}é]R ø`)N0$‚µ3%&HX}Ž)d¾ø=4?ûa§ñ‡½áÎÌcã0õ]¶'•NòK±ü øHªiw®5júó=qÃ@ËE¹Ñ²[àSñ“ùšƒ12Æ1F0! *†H†÷  1docker-db0! *†H†÷  1Time 14878352479100‚û *†H†÷  ‚ì0‚è0‚á *†H†÷ 0( *†H†÷  0.·—_½CÝ+Cf†Œ¸“'Szñ0€‚¨=š¾ß-Ùw1R$? Ç §XOÒ•{ä4ñeIþ8;‰…¢ÔÊ %#‰BŽz0Ø5;$~o¹ 6Œ ·…Gå€Í¡ðÛ{¶.Ó<¤VPÙ”: J•þf²ÑÃ/!\qs­ñåÆj‚T4ƸâYÍí`HË/T@¼8õž‡gfͼ4§´¯Öî” ÿ“¤"Á¥£Lè_¬e6Ô8¿²“~ò)nú‚ÑýïÚ:ª3nð r™u&añ…# ¸|w=æ‡)W¢º'¥Æp[DÊ2¨| ™f~íä*±¦æþ!똚&©;å‚VxYÈH°7†Ê‚‚áçz U¯gĺ¹•ƒ]!8QkIFŽMIL’ÅŒ]Œs.‹îLêvg•;&Êgfd‚ìÇa¸ ò&[XgιH{ÞÛÙ“4"¨@„†$]0§M×ÛÑh¦N϶w°}ƒ™[çm|ši"Š-i„Ö §jS¿§¤_ÁEjW—׉‰¬K_Q¦“Ï|ÙGøÜEqB–éÞj£Ò/  TG}"¥Û3o?_Ä8ä w“VÑ»éVÌ ¿r¼­lœ@§&·È„Te<.³êV"MÇC—2t8ïã1ø•§‘à €Ð&3æªPãåŒöýl>Fª'6ø—ù|³ëÅßZóU ¨Øùê]`©!Ý*Û û«Dìå`$Í_½SÌ…ª-½tjÚ˱TRA]–é½@ößDWNrS+5^Lj i)ÐÌûÃÏ]è1OM%•Êè®äŽú&ÞEZ€ÿän¿0^¼•x7[jWZÖÇõÞÖDH6)ljº‰/Í! 
bZ\tŽ®ùˆL†‰0¹qÕ˜áýµ^»Kvß­#žG6 Ò5ÛUZŽ *SŒÖU×5ñ8YHþ£f‹¹œ´&ã¹÷B]º<Óâ~Ã¥¬Ã¸ÁBÚMí.a¤Ðf¸g¯Fý„Ȫ:¼_µ¤û“Sæäè8ÐÖ¡Dáÿÿ“Ú¼H.Œj²AxÑ83`ÖïÑB/>®Y *I²4°±ì!ÁÝ= [Lnáóÿ‘Xsah{«þfä6tõÜ'¿ä;W–è±ñ–«Xõ«ìNkÇâZKY0¶A¹W^¸V| »¾é‘:V‰1ÒÞŽÆ”ÙÒ96TN¨Q.¦o4nà7sߨékE‚ue´ó_Zë§+Š¡nA?ùfOÇ’Eo屃ƒnJúIÛˆ‚ðG¸ld‘{›ð"70=0!0 +ª¢ÊñÀ)ôÒ‚ãq¯{<{›ì¯$.m\ÀË‹œ¤ÚRǯF£LËvendor/openssl/test/dsa.pem.pub0000664000175000017500000000121614160055207017350 0ustar mwhudsonmwhudson-----BEGIN PUBLIC KEY----- MIIBtzCCASsGByqGSM44BAEwggEeAoGBAKQp7+O1gok1Bp9oTuSDZpok4Q9RXQOi LjnWORcenlR8uHr63jZ5ptIaQT1YECUguBoHzIdyQt737OjM3f35JQOn3iRvavfz /zD/0V+Fux6n26LI6PZlFC6vpKSkpAMpycB3ogxldly91KA8L3QDqqtphRkqrsKy OfreBsL4i9cfAhUAiBI8TjeVcPz+sZjGizhzEKAYYDECgYBIxXnpaEu9VC1YxUjf pZIjFtmcLYSyc0gp5VbJec/86Y8naZ17MbuLVqVzCw7ZOdItu+O8Y+XLMipnMe9Y LwSYOqx9tT6fzB78RqWybmeWgLyb0QJaltPNs12+sACP1Q9VaNwPCDuCsAYbKyHq UZsG/k/7jMv+eKrVSIQucpyJ+AOBhQACgYEAo6mAasO0+MVcu8shxxUXXNeTLsZ8 NB/BIx9EZ/dzE23ivNW8dq1AeecAAYhssI2m/CspQvyKw+seCvg4FccxJgB3+mGO e+blFHwO3eAwoyRn/t3DZDHhFjxKKRsQdy4BkZv+vhTyIYYCw0iPZ5Wfln+pyGGT veIDED1MPG+J6c8= -----END PUBLIC KEY----- vendor/openssl/test/rsa-encrypted.pem0000664000175000017500000000334614160055207020602 0ustar mwhudsonmwhudson-----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,E2F16153E2BA3D617285A68C896BA6AF vO9SnhtGjGe8pG1pN//vsONnvJr+DjU+lFCiSqGMPT7tezDnbehLfS+9kus2HV7r HmI14JvVG9O7NpF7zMyBRlHYdWcCCWED9Yar0NsWN9419e5pMe/bqIXAzAiJbtT4 OB9U5XF3m+349zjN1dVXPPLGRmMC1pcHAlofeb5nIUFTvUi5xcsbe1itGjgkkvHb Bt8NioHTBun8kKrlsFQOuB55ylBU/eWG8DQBtvFOmQ7iWp0RnGQfh8k5e5rcZNpQ fD9ygc7UVISl0xTrIG4IH15g34H+nrBauKtIPOpNPuXQPOMHCZv3XH8wnhrWHHwT ZFnQBdXbSpQtMsRh0phG2G+VIlyCgSn4+CxjCJ+TgFtsoK/tU0unmRYc59QnTxxb qkHYsPs3E0NApQAgH1ENEGl1M+FGLYQH7gftjc3ophBTeRA17sRmD7Y4QBInggsq Gv6tImPVBdekAjz/Ls/EyMwjAvvrL5eAokqrIsAarGo+zmbJKHzknw2KUz2En0+k YYaxB4oy9u7bzuQlvio6xYHJEb4K197bby4Dldmqv7YCCJBJwhOBAInMD687viKv vcUwL8YuS6cW5E8MbvEENlY4+lvKKj3M8Bnyb79cYIPQe92EuCwXU9DZXPRMLwwM oFEJpF5E/PmNJzu+B52ahHtDrh83WSx71fWqjdTqwkPZhAYo3ztsfFkb/UqUcq8u rBSebeUjZh0XZ9B04eshZQ5vJUcXGtYIe/77beV3Pv89/fw+zTZjpiP9Q3sZALzf Qt0YGp0/6qBuqR1tcqdu65AS2hun7yFw7uRavqYKvww4axRiz2do+xWmZFuoCAwD EWktaUujltpvAc1lo7lg4C6nByefJB9Xqk22N/vpqOsWr1NbAntT42Qj/HF9BVWR osvN3yMnKYWYe6oSTVnNBDM5obWAIHd3I9gcxTOTb1KsEwt2RrDs5EpB5ptS3Fjo JfBRhNZQ3cXttrIIhsHgDn9BDNg865/xpIgktKj0gEd60Abx0PqkAIm6IZTh4Efg 7uZwfzxB+saOcddbrW2gNdzVZMC0s2Ye3sqHhtLbAJ3BlXYTxE4CAvTg54Ny+5hF IjvjlOKgXceSG1cSfk21/wyp9RY3Ft0AEYvvp0kZScWZaoA2aSFDUrchXVhgrEbn lJ7UptjefwRFIreAlwbKSbIDDNWnyzvIWyHfQ2aYqgnb7W7XqNPSgH9cALCfzirI dlRHjha0bMUtrjPCC/YfMXzJBVniy0gG6Pd5uC7vz/Awn6/6HRQVNaTQASphPBQ7 bJuz+JTfzI9OUVCMRMdnb6b35U4P9tibFmnPvzTIPe+3WUmf8aRsLS3NN3G1Webd PMYVZpMycPaAI0Ht87axhsOzlxCWHYWjdHa+WoNNc1J90TxLCmAHquh5BDaWvjMK 0DySftJZjV7Tf1p2KosmU83LRl39B5NHMbZb1xOEZl9IWwhT/PVKTVZ25xdxWLfb hF4l8rfvKehIp5r4t8zW1bvI2Hl6vrUvmcUVWt3BfKjxlgwRVD0vvwonMt1INesF 204vUBeXbDsUUicLwOyUgaFvJ3XU3dOyvL9MhOgM5OgoFRRhG+4AS8a5JCD8iLtq -----END RSA PRIVATE KEY----- vendor/openssl/test/cms_pubkey.der0000664000175000017500000000126014160055207020145 0ustar mwhudsonmwhudson0‚¬0‚ rVö!_™ /„Ç´N¨B¢Ó0  *†H†÷  0g1 0 UUS10U Some-State10U openssl-rust10U openssl-rust10U openssl-rust0  190123211015Z22921106211015Z0g1 0 UUS10U Some-State10U openssl-rust10U openssl-rust10U openssl-rust0Ÿ0  *†H†÷ 0‰ÃSJyf•æ1}ò¿É)¢ñµÀ³æÛOäÑ`†Nªaû­Ñ¬×²ÚZä.ÎñÑf²’Îý‘jü=X'üøiÖ(ýËôWízEØôô¤[ð´¡YUqLOã°OÙ¦ÚJ¬ŠQ‹X º šJ¸úA¿ÐzÂkÍÉ}Ë"y£S0Q0U=†í¸¤î°m?Ô¶?Ic–åà†!0U#0€=†í¸¤î°m?Ô¶?Ic–åà†!0Uÿ0ÿ0  *†H†÷  Mvʇ^{ç=—iNŽ~~À.³£¯S¦jr&‡Ô¤$lT‚ñ,%ï 
o¼Xóôíž¶"Ëðª’ÚH$—Cò¨¥:M.¦M;ð•ÿŠI¢»[üíMR…¨HÓ̦a¾Ž"Çtüe嵋Ü&™‡¾E°:´e2Ì9QA¾U@LTPkvendor/openssl/test/identity.p120000664000175000017500000000647214160055207017477 0ustar mwhudsonmwhudson0‚ 60‚ ü *†H†÷  ‚ í‚ é0‚ å0‚w *†H†÷  ‚h0‚d0‚] *†H†÷ 0 *†H†÷  0u W’7ÿÔ€‚0†l•É<Ϻ6- §öÌI\?Ó³]úª:ÅWäh¿ aXI3Á÷."!çDc3†½§,[LðÓ©‹óBó':Sl’àòZuSØ! $ø9B°ãÚÃb&õ˜¾AHþõUFCeƒÓw!=ñÞÅù@ˆ2þ xñ˜›¬ŸGê"Џ2Gô`DRx矱˜RþþÛa¿ÌèØ}Pz³´ 6Ô¤)}` YŽ'ÚÎ夃‘B½Ð‚)-ÉkøHÌž6á‡.K7Š.˜hÍžƒœkºŽçgî9»ô|½PõctG‘ñØ™·À³NcŸ·hk »]ü”¾Ó¢¼É›¢Ø€¾ñÄ®¶T‰×:“Ëáà9í\ =åƒþ 0‹‚RT{ûO0«'2öíÛÁ3.R ™' { ‡JOVåD‹Ëö‹c5„äV¸-3¨™E ‹_iŠ^O²º* µ‚)‰¿'ô[䓬=ÆÁý¯d$ ¨–üË%v‘h”êYóÇy<°«4Q\ä¼1S(ö²|Aîd)’•5ÿ9·iú¨Ê…üZ‚¯|ORv”£¦¡àóNÀyO¨Kåé'ô HàÝA#£B¿ø€$ÍÙ»÷8N®JB²« °ü_F} ˜Tï/f£%.LÂ~ÕÿZæå/ÅǹÊœ_Û ×¡]—º·±½Üǰbf@Z f4TÜrè‡ÕB!æº™ó´ø…»’U•œÔ콆B/IŸéíÔÞ$79ëIÐdz¿èù žä¾.QˆÛ^ò¬$ý@L*üf¨Yµ½Œl€Ñ÷x.OI˜†æ–Œ;úŒê”›%RÛa—Ú㜥\´¬ZcyxŸ£ÿ|K|>”ûñ"\ÞÕ±W2àôgô_žâ?*0SãQ°ŸQØl†«µç FÏð‹¾/snù³¿g÷®^$Õ3‰«½äðĬ‹ªh€'èÛï0þpñª—áÐ|.2€2*cÍ «K ~"bµÃgàëY*Tn̤ÆzÛz¾Rû¹€Ö9r(7mà૘zmÏ ±WÍ™н“ä3§-P}ÿüèHÔŠ@â_z•ê4DiÛ©€±örµpgÍ$ÀT™#ðE$Lçþ =/°›MÒ4 qÊ`Ç•ÇИRÎ&Ý÷7Á³k' *~òŒÝgÿ¼Òô¯ü“zˆcØš°óùq@ùµÎ†)âUµÑø­Ër—n§X­ÔΧW,Ù.M=÷ùé9sûýÃIþOôXOš€ ´§#ä“IÍŠ’ƒäౠȸҙÔ\¯†:2Sï1ØìؾÑ_ ’j:µPïóuÛr›dÈ+úƒ8™»p=¨ý,2ßZÐdS™MþK×PfÏ«£¤|#¦œ.Ù+"â‘;× "èµ$ÊK_ª\£>%]´o LòBŒšG À5×U®t!³±uùÈoPêfd6ýŠ É Zµ³IZúeùÅÛöXfO^®àKå¬'–hpçÁ²ÿ©ÐÐIY=ˆ¯¼¤Ú Äêû0U˜-uh1ÃGxî|ÖCÌ[ôœýß"ñ–y6]ýCîÑ\ ‚ÌNÖº é‡OêV:6‹bGUÁʶ¨+=,.{Ážú*ì2îœæ›ŠÐ$˜6ïAX•KS2>¼ûµ9R¤L™·wÚÿ–“Š6Ó…ƒn~§ûx‹7*Ž-bštJÖ›-rÄw£n n uý©ìÇò13+€÷$)uÕ7ÅÞ þXÌD/gZâðí(D£Y:ˆþ“ IÑ[N'Œ† ’Òh㾯ã38ãHï Å|v0òeŠ”4 å¹ô£Í›@)MZw oûyÈ=”­ÊOru±í7¤é¦=”ÏêEè°5_B²σ®ëàŸ¨ÏíRY‘¼ò‡P {œ£h%s.Ôo…YœÉêê™4ÿñÖígaÉ X¨Þ­s„ùÁÀ)[Å>UdѰ¾Ýô˜ 1CÆ6àqçµJ×½ïÁnEÍ´³‘å0käV¹Ãé»äîéEGËä>¾µ¼,„QòXA*Z­F4»GXdu‚ÎüÈžƒvº–†u[JÈdŒʘ¼Žábß"ÂMHuý«¥fäQPXfëŸ=æÇÄ7»hÚ3tï¾­Hñÿìr°ýìå&ý2×)öTÙTÜt"±^=¡¯ò@Ó•(ÂïÑÇGò/¤¾À;zãã‘-Û%3ÖXÄÓbijžkîÁ&nœRÔP –ª8σ¦z°äHX4ÎÀG—œÿCôØÊ¥ßMõshÕ‰R/ ÿuÈ®Á ë]¨d(¼ NûΛ ŽUº|5úu[Å?DSÝb¯Ç¤§•¼VáÍEÄ÷ ï9G¿ý4gž0‚f *†H†÷  ‚W‚S0‚O0‚K *†H†÷   ‚î0‚ê0 *†H†÷  0iK[Kq=Ä‚È7¥_Þ¼5Ue eTJ²ú ÿc¸,vÎ@ã°áD÷Ëÿž$¥´åCOHÝpÈ®¨x‘M»„™ž€?¸âøÍ’i±ô@ g*Vª„€þ@ä†6è>¢pà5àeî­¶ OHÝ¢f‚ÔOgù<Ô`¢”÷“Qæ¶ ¶„´°=S;¦‹Ò]ZhP¯O@ô=ê:€²lÎpYÀÄ¥e m²E³¸4œÀoöð’žò‚e¿Dçà ×cM„®ñßz°¶Òç’°=‘ »ì©‰q膔D"%a¦ÅZG7k±q„v|°ôsc"§m±wOÅ€†%õúʱŠw¸Ó3÷VÄ£Ÿ?.úòÃZ¥[dõËÌþk;g2M¶ñ¥5†¤¼OªÑû6rªÇd¹Ö÷ªÙYNcáù¨;Â.Û5ä9вœæ:ÈÛü7¦º_IHR Ui9"¸²Ö¾I⋼æ€+¹Åú¦{aƒŸŽzw}Ed¤|·|yÔ¨‰Z±4#€.¿¦Æl$‰2l¤¬;ª¹íD=¹ú›ÿ…;JŠõÖHÇÆ#Ǻ”óÄ1ÎÔü>¥ZÖ é}êy:…‚ä‚Ö” “;ˆ2·ÄñQå Bê‚Íi|…KVA¾ð S«C§Z× –^1”Ês^Zk# åÞ¦¿+³»ºŒ>†¡/^wïÌŒÃ;,]ýõÙ²Š®™=^¥¬†Äiîÿšf ,™XÇœ@ì÷Àgk½äŽWt””!¥ðqÐŽiu4µÙábÝYWÌó&Y"CÀbð¶öú ;\«ˆRVˆ·­´íæ) ÍKkCpþ@ª ‰ÉãQQ‹è‹‡Z“Ë¡åÙ¿:ß4zÉå[Ç<··µ† ‡°ê%}Øív»œ6=OÇ6÷•‘ ª<úÛxuvA»¬Ç-Eó˜b›Ö;3 È™ŠLÕ(>€saëÜßÏàBv¸Šƒº¬®ú±È„ñËä—ï9ê[К¢ËࢣQ¨~¢ (V‘-Òqñõ€Ï’ï:ðGnнҡ"M˜Œ5½!/Za` Ï=&¸ÀžŒ}ÜSÿÅÓ|m í«B`™²’ºº” -ÔH6©½µ~=$$!kƒ›À[«¹ž&pQxëS†¯htÜn+¿õhW:f3ñ>sú„¥2ñ«ðó6a÷Ká–3}O™V¨"pÐoYˆÔÇus½ìÑÿº!㿬†¢Ía5á sI;íô&¬^ñÇÝ­"g©ÿ2ÒÈ@­<]~ Ç9,ê˜Ùf˜y€!#¦PìœïQ#Þ›u˜¶gsžO¡×eõÜ`×#ÓÞÀϨõÑCõc=>Úˆ•ñƒåâû1J0# *†H†÷  1foobar.com0# *†H†÷  1Y-“èDY¼ÿ'ùgçžn’å„010!0 +ôd)œŽIÉIÔ Z‹)ÁŠñÁZ#FÕ³‚"oÈvendor/openssl/test/root-ca.pem0000664000175000017500000000231514160055207017361 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAOIvDiVb18eVMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQwHhcNMTYwODE0MTY1NjExWhcNMjYwODEyMTY1NjExWjBF MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50 ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB CgKCAQEArVHWFn52Lbl1l59exduZntVSZyDYpzDND+S2LUcO6fRBWhV/1Kzox+2G ZptbuMGmfI3iAnb0CFT4uC3kBkQQlXonGATSVyaFTFR+jq/lc0SP+9Bd7SBXieIV eIXlY1TvlwIvj3Ntw9zX+scTA4SXxH6M0rKv9gTOub2vCMSHeF16X8DQr4XsZuQr 7Cp7j1I4aqOJyap5JTl5ijmG8cnu0n+8UcRlBzy99dLWJG0AfI3VRJdWpGTNVZ92 
aFff3RpK3F/WI2gp3qV1ynRAKuvmncGC3LDvYfcc2dgsc1N6Ffq8GIrkgRob6eBc klDHp1d023Lwre+VaVDSo1//Y72UFwIDAQABo1AwTjAdBgNVHQ4EFgQUbNOlA6sN XyzJjYqciKeId7g3/ZowHwYDVR0jBBgwFoAUbNOlA6sNXyzJjYqciKeId7g3/Zow DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAVVaR5QWLZIRR4Dw6TSBn BQiLpBSXN6oAxdDw6n4PtwW6CzydaA+creiK6LfwEsiifUfQe9f+T+TBSpdIYtMv Z2H2tjlFX8VrjUFvPrvn5c28CuLI0foBgY8XGSkR2YMYzWw2jPEq3Th/KM5Catn3 AFm3bGKWMtGPR4v+90chEN0jzaAmJYRrVUh9vea27bOCn31Nse6XXQPmSI6Gyncy OAPUsvPClF3IjeL1tmBotWqSGn1cYxLo+Lwjk22A9h6vjcNQRyZF2VLVvtwYrNU3 mwJ6GCLsLHpwW/yjyvn8iEltnJvByM/eeRnfXV6WDObyiZsE/n6DxIRJodQzFqy9 GA== -----END CERTIFICATE----- vendor/openssl/test/pkcs8.der0000664000175000017500000000242214160055207017035 0ustar mwhudsonmwhudson0‚0@ *†H†÷  030 *†H†÷  0¡ýusÂ’%0*†H†÷ 1íªÖyÍ­‚ÈyàgymšH$F}{Ìo1ÒØ#V7LÏ;Ã27Ç:÷æcü›$_5¡"¥znðZv¢hoÁùð˜Â˜'ĉrDÌ{µï )‰¢D¦Û;ø³³ xÄOAw?˜ôòã§1ìi"Yd üʼnëUd´ªÁ :< Ý^³ý!Là\–µü˜gÆ¢ðŠ”\÷ÍRÁÊЄ݅Ÿz}ÜÄŠ[Â}œ)Êu´OÉm,S˜!”„‘¸|œún§Íùi0] èí71÷…ÊÕÏXSºjªÙËš4ɨ«…µ’^ìÃ<4TÑ]!´óQ„‹8–Â*:n ù^p£}?v²0huq‰M¼ø‘èTxž(_0÷›@ÒEu~³¶ù#ZÙ,™“—2%LÒè<»ocµ‰|ý¾ ì5Dl8koækÌ¢¹kŒ§A\ޤàŠÄ »x‰ªA”‡XLCo¼M§*«Ñêw”{Ò¶„¥-¢·+6ÀvO¢gFÒµ<0E!:|A^5"¡`&ù'-Ìß³0­âhÿøÉ…y(]{Ù&ա憬þŸŠ/ñ¸©çÕeÄ¡:3RO½,ŸÃ°-ø§R¾ûðÚ9ð7 nÏ䃦Í;j¼Ø­ç %|Âør¦š+Òù‰ÃSÊ:öfŒ§5@×*¢Œá´¬¨y.ÐÛ]oÖ¡îÐsN {ª°‡äõ v¦Ì³†·Ku"o\ÙÙ”òó9-¬:ÒS>K '¯9‘l¶yǾÂ)% šJÞ@ýÁ|’œ.ã>9E‡íæã gã´DGÊÉ‚jú¸hAo`4÷«ÐÑÄŒGÑ ÷È_ulðó){ƒÜTgúhý]ÖH¨È“8ÿ§v]œËV 8G8#JVƒ¸èˆýÄ·¾AL”ýV¬)ˆ 8| è˜,Ž/Ï€ôÏ_­ÐëZ²¥[äCcIxæÒÊ€» © Դδ\8jÊÝ{?V`R#ëY]˜ã’_JêÉ¥æ)-EEÞ•>0h+³P?¶¹]‘¿üû‚™ÓëêTÊ»˜' OZF‘ù\‰|qË·/½ÌÀfÜU-ò E=q­”ª ÉÏ2H‚B1DíÊ 4N `]çÈOa¶=fgâ]-%ÅÆiµÿLóžÙëΓº’«uý:‰Õ¬ÖS*ØOU #5§¼Á:HçkH‰Lp˜\÷RÎÀÃüxk¦ÕOF‘¿Âó&]5}œá9‚¡9±“{ÃôiV´XÆÐ²½bb´êòßß×ãZ>SC¦5¼ŽZ–hq¾5 9®¯d:ÿÞœ¾Ë*XGÄËÌgÞY­‡A ‘-ͺji“êˉl‰ÍjH7Û6ß³G‰0r#] ªRouúý$¶UH(-;9´˜«G$Ÿ6óX.n ta ሟ½«`~¢ŽÖsÕ¯Qõ,£{šÑ¦<…qZOöºEÝÞµ>Aguvendor/openssl/test/dhparams.pem0000664000175000017500000000065014160055207017614 0ustar mwhudsonmwhudson-----BEGIN DH PARAMETERS----- MIIBCAKCAQEAh3Betv+hf5jNsOmGXU8oxuABD2B8r0yU8FVgjnCZBSVo61qJ0A2d J6r8rYKbjtolnrZN/V4IPSzYvxurHbu8nbiFVyhOySPchI2Fu+YT/HsSe/0MH9bW gJTNzmutWoy9VxtWLCmXnOSZHep3MZ1ZNimno6Kh2qQ7VJr0+KF8GbxUKOPv4SqK NBwouIQXFc0pE9kGhcGKbr7TnHhyJFCRLNP1OVDQZbcoKjk1Vh+5sy7vM2VUTQmM yOToT2LEZVAUJXNumcYMki9MIwfYCwYZbNt0ZEolyHzUEesuyHfU1eJd6+sKEjUz 5GteQIR7AehxZIS+cytu7BXO7B0owLJ2awIBAg== -----END DH PARAMETERS----- vendor/openssl/test/rsa.pem.pub0000664000175000017500000000070314160055207017366 0ustar mwhudsonmwhudson-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAofgWCuLjybRlzo0tZWJj NiuSfb4p4fAkd/wWJcyQoTbji9k0l8W26mPddxHmfHQp+Vaw+4qPCJrcS2mJPMEz P1Pt0Bm4d4QlL+yRT+SFd2lZS+pCgNMsD1W/YpRPEwOWvG6b32690r2jZ47soMZo 9wGzjb/7OMg0LOL+bSf63kpaSHSXndS5z5rexMdbBYUsLA9e+KXBdQOS+UTo7WTB EMa2R2CapHg665xsmtdVMTBQY4uDZlxvb3qCo5ZwKh9kG4LT6/I5IhlJH7aGhyxX FvUK+DWNmoudF8NAco9/h9iaGNj8q2ethFkMLs91kzk2PAcDTW9gb54h4FRWyuXp oQIDAQAB -----END PUBLIC KEY----- vendor/openssl/test/key.pem.pub0000664000175000017500000000070314160055207017371 0ustar mwhudsonmwhudson-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr1bXMptaIgOL9PVL8a7W KG/C8+IbxP018eMBQZT0SnPQmXp0Q8Aai/F+AEDE7b5sO5U7WdxU4GRYw0wqkQNF si78KNfoj2ZMlx6NRfl4UKuzrpGTPgQxuKDYedngPpWcbmW4P3zEL2Y7b18n9NJr atRUzH1Zh/ReRO525Xadu58aviPw1Mzgse7cKyzb03Gll9noLnYNIIpO8jL+QyrD 8qNmfacmR20U0a6XDTtmsmk7AitGETICbTT0KRf+oAP0yIHoonllPpNLUEPZQjrp ClS/S/wKdj7gaq9TaMbHULhFMjbCV8cuPu//rUAuWp3riaznZGOVQyn3Dp2CB3ad yQIDAQAB -----END PUBLIC KEY----- vendor/openssl/test/root-ca.key0000664000175000017500000000321714160055207017372 0ustar mwhudsonmwhudson-----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEArVHWFn52Lbl1l59exduZntVSZyDYpzDND+S2LUcO6fRBWhV/ 
1Kzox+2GZptbuMGmfI3iAnb0CFT4uC3kBkQQlXonGATSVyaFTFR+jq/lc0SP+9Bd 7SBXieIVeIXlY1TvlwIvj3Ntw9zX+scTA4SXxH6M0rKv9gTOub2vCMSHeF16X8DQ r4XsZuQr7Cp7j1I4aqOJyap5JTl5ijmG8cnu0n+8UcRlBzy99dLWJG0AfI3VRJdW pGTNVZ92aFff3RpK3F/WI2gp3qV1ynRAKuvmncGC3LDvYfcc2dgsc1N6Ffq8GIrk gRob6eBcklDHp1d023Lwre+VaVDSo1//Y72UFwIDAQABAoIBAGZrnd/dC2kp11uq Sg8SHk3GMdPPjTf/lq51sVJAU4fdV2Eso0XCiCzdKDcqR6F+jiu8jHp4YO0riW8N b1pkjohGjyOaddIaaVsZ80/OkgDz20Ird9XQ7uoEODvopA12+755BDH5PDwqHVeM nKfPiwAK6Jz6CxGO9bq9ZNoBiSyO1uofaB4Cpp8t74XVeAuPiI/Bb6WJ8TW5K5dt x0Jihdo46QgZR+z4PnyWIoACkhSoQmtTb9NUrpKceBcxdCrZ/kEmYpnPq/PuSw6g 6HthjYP/H9Xulz69UR5Ez6z+1pU1rKFmQ46qK7X3zVHg233MlGekMzxdmShEjzCP BMGYpQECgYEA5tqTZsUJwx3HDhkaZ/XOtaQqwOnZm9wPwTjGbV1t4+NUJzsl5gjP ho+I8ZSGZ6MnNSh+ClpYhUHYBq0rTuAAYL2arcMOuOs1GrMmiZJbXm8zq8M7gYr5 V99H/7akSx66WV/agPkLIvh/BWxlWgQcoVAIzZibbLUxr7Ye50pCLfECgYEAwDLn mFz0mFMvGtaSp8RnTDTFCz9czCeDt0GujCxG1epdvtuxlg/S1QH+mGzA/AHkiu7z uzCwGKWozNTdRkqVwYoJTB+AYHseSkuGP+a1zr39w+xBW/vESb2oP95GIwprXcG2 b/qdeQVzuLQhYoqWI2u8CBwlHFfpQO4Bp2ea+ocCgYEAurIgLSfCqlpFpiAlG9hN 8NYwgU1d4E+LKj+JMd8yRO+PGh8amjub4X3pST5NqDjpN3Nk42iHWFWUqGmZsbM0 ewg7tLUgDeqiStKBoxaK8AdMqWc9k5lZ53e6mZISsnHKUQdVBaLjH8gJqdAs8yyK HudEB0mYwMSUxz6pJXIHrXECgYEAhJkaCpXm8chB8UQj/baUhZDKeI4IWZjRWHbq Ey7g1+hPMMOk6yCTlf1ARqyRH8u2ftuIL5bRhs+Te21IE5yVYOb4rxn0mZuXNC6S ujdTKwUMtESkeu9hZnaAQz/4J2ii1hY05WCDj+DhC4bKmY9/MYS8PuQb/kfwVqld Xr8tvrUCgYEAmslHocXBUFXyRDkEOx/aKo+t9fPBr95PBZzFUt9ejrTP4PXsLa46 3/PNOCGdrQxh5qHHcvLwR4bPL++Dj+qMUTJXANrArKPDpE2WqH6pqWIC6yaZvzUk 17QbpXR6bHcdJV045pWpw40UCStTocVynY1lBfOw8VqxBIBlpVBBzew= -----END RSA PRIVATE KEY----- vendor/openssl/test/nid_test_cert.pem0000664000175000017500000000127014160055207020642 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIB1DCCAX6gAwIBAgIJAMzXWZGWHleWMA0GCSqGSIb3DQEBCwUAMFYxHzAdBgkq hkiG9w0BCQEWEHRlc3RAZXhhbXBsZS5jb20xFDASBgNVBAMMC2V4YW1wbGUuY29t MR0wGwYJKoZIhvcNAQkUHg4ARQB4AGEAbQBwAGwAZTAeFw0xNTA3MDEwNjQ3NDRa Fw0xNTA3MzEwNjQ3NDRaMFYxHzAdBgkqhkiG9w0BCQEWEHRlc3RAZXhhbXBsZS5j b20xFDASBgNVBAMMC2V4YW1wbGUuY29tMR0wGwYJKoZIhvcNAQkUHg4ARQB4AGEA bQBwAGwAZTBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQCmejzp4+o35FD0hAnx2trL 08h07X5jZca9DgZH35hWXPh7fMucLt/IPXIRnz2zKEa/Mo6D2V/fx03Mqo0epid7 AgMBAAGjLzAtMB0GA1UdDgQWBBRQa57tXz3rZNRz+fTbo3w3jQJMBTAMBgNVHRME BTADAQH/MA0GCSqGSIb3DQEBCwUAA0EAm0iY9cr+gvC+vcQIebdofpQ4GcDW8U6W Bxs8ZXinLl69P0jYLum3+XITNFRiyQqcivaxdxthxDNOX7P+aKwkJA== -----END CERTIFICATE----- vendor/openssl/test/alt_name_cert.pem0000664000175000017500000000247214160055207020616 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIDsDCCApigAwIBAgIBATANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJBVTET MBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMB4XDTE4MDExNTExMDcwM1oXDTI4MDExMzExMDcwM1owfDELMAkGA1UE BhMCVVMxCzAJBgNVBAgMAk5ZMREwDwYDVQQHDAhOZXcgWW9yazEVMBMGA1UECgwM RXhhbXBsZSwgTExDMTYwNAYDVQQDDC1FeGFtcGxlIENvbXBhbnkvZW1haWxBZGRy ZXNzPXRlc3RAZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK AoIBAQCo9CWMRLMXo1CF/iORh9B4NhtJF/8tR9PlG95sNvyWuQQ/8jfev+8zErpl xfLkt0pJqcoiZG8g9NU0kU6o5T+/1QgZclCAoZaS0Jqxmoo2Yk/1Qsj16pnMBc10 uSDk6V9aJSX1vKwONVNSwiHA1MhX+i7Wf7/K0niq+k7hOkhleFkWgZtUq41gXh1V fOugka7UktYnk9mrBbAMjmaloZNn2pMMAQxVg4ThiLm3zvuWqvXASWzUZc7IAd1G bN4AtDuhs252eqE9E4iTHk7F14wAS1JWqv666hReGHrmZJGx0xQTM9vPD1HN5t2U 3KTfhO/mTlAUWVyg9tCtOzboKgs1AgMBAAGjdDByMAkGA1UdEwQCMAAwCwYDVR0P BAQDAgWgMFgGA1UdEQRRME+CC2V4YW1wbGUuY29thwR/AAABhxAAAAAAAAAAAAAA AAAAAAABgRB0ZXN0QGV4YW1wbGUuY29thhZodHRwOi8vd3d3LmV4YW1wbGUuY29t MA0GCSqGSIb3DQEBCwUAA4IBAQAx14G99z/MnSbs8h5jSos+dgLvhc2IQB/3CChE 
hPyELc7iyw1iteRs7bS1m2NZx6gv6TZ6VydDrK1dnWSatQ7sskXTO+zfC6qjMwXl IV+u7T8EREwciniIA82d8GWs60BGyBL3zp2iUOr5ULG4+c/S6OLdlyJv+fDKv+Xo fKv1UGDi5rcvUBikeNkpEPTN9UsE9/A8XJfDyq+4RKuDW19EtzOOeVx4xpHOMnAy VVAQVMKJzhoXtLF4k2j409na+f6FIcZSBet+plmzfB+WZNIgUUi/7MQIXOFQRkj4 zH3SnsPm/IYpJzlH2vHhlqIBdaSoTWpGVWPq7D+H8OS3mmXF -----END CERTIFICATE----- vendor/openssl/test/nid_uid_test_cert.pem0000664000175000017500000000271014160055207021503 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIEGTCCAwGgAwIBAgIJAItKTzcGfL1lMA0GCSqGSIb3DQEBCwUAMIGiMSIwIAYK CZImiZPyLGQBAQwSdGhpcyBpcyB0aGUgdXNlcklkMQswCQYDVQQGEwJVUzETMBEG A1UECAwKQ2FsaWZvcm5pYTESMBAGA1UEBwwJU3Vubnl2YWxlMRUwEwYDVQQKDAxS dXN0IE9wZW5TU0wxDDAKBgNVBAsMA09TUzEhMB8GA1UEAwwYcnVzdC1vcGVuc3Ns LmV4YW1wbGUuY29tMB4XDTE2MDIwMjE3MjIwMVoXDTE2MDMwMzE3MjIwMVowgaIx IjAgBgoJkiaJk/IsZAEBDBJ0aGlzIGlzIHRoZSB1c2VySWQxCzAJBgNVBAYTAlVT MRMwEQYDVQQIDApDYWxpZm9ybmlhMRIwEAYDVQQHDAlTdW5ueXZhbGUxFTATBgNV BAoMDFJ1c3QgT3BlblNTTDEMMAoGA1UECwwDT1NTMSEwHwYDVQQDDBhydXN0LW9w ZW5zc2wuZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB AQDa3Gc+IE5DOhTv1m5DZW8qKiyNLd7v4DaAYLXSsDuLs+9wJ+Bs+wlBfrg+PT0t EJlPaLL9IfD5eR3WpFu62TUexYhnJh+3vhCGsFHOXcTjtM+wy/dzZtOVh2wTzvqE /FHBGw1eG3Ww+RkSFbwYmtm8JhIN8ffYxGn2O0yQpxypf5hNPYrC81zX+52X2w1h jDYLpYt55w+e6q+iRRFk0tKiWHEqqh/r6UQQRpj2EeS+xTloZlO6h0nl2NPkVF3r CXBoT8Ittxr7sqcYqf8TAA0I4qZRYXKYehFmv/VkSt85CcURJ/zXeoJ1TpxSvQie 2R9cRDkYROrIOAFbB/0mmHLBAgMBAAGjUDBOMB0GA1UdDgQWBBRKfPqtgrbdbTmH XR6RC/p8t/65GjAfBgNVHSMEGDAWgBRKfPqtgrbdbTmHXR6RC/p8t/65GjAMBgNV HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCKfeGRduGsIwKNiGcDUNkNrc7Z f8SWAmb/R6xiDfgjbhrtfBDowIZ5natEkTgf6kQPMJKyjg2NEM2uJWBc55rLOHIv es1wQOlYjfEUmFD3lTIt2TM/IUgXn2j+zV1CRkJthQLVFChXsidd0Bqq2fBjd3ad Yjzrxf3uOTBAs27koh2INNHfcUZCRsx8hP739zz2kw/r5NB/9iyENEyJKQvxo0jb oN0JK2joGZrWetDukQrqf032TsdkboW5JresYybbAD3326Ljp+hlT/3WINc+3nZJ Dn+pPMdpuZ5BUZ+u+XyNEPum3k3P3K19AF+zWYGooX0J1cmuCBrrqce20Lwy -----END CERTIFICATE----- vendor/openssl/test/certs.pem0000664000175000017500000000450014160055207017133 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIDGzCCAgMCCQCHcfe97pgvpTANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJB VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0 cyBQdHkgTHRkMB4XDTE2MDgxNDE3MDAwM1oXDTI2MDgxMjE3MDAwM1owWjELMAkG A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0 IFdpZGdpdHMgUHR5IEx0ZDETMBEGA1UEAwwKZm9vYmFyLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKj0JYxEsxejUIX+I5GH0Hg2G0kX/y1H0+Ub 3mw2/Ja5BD/yN96/7zMSumXF8uS3SkmpyiJkbyD01TSRTqjlP7/VCBlyUIChlpLQ mrGaijZiT/VCyPXqmcwFzXS5IOTpX1olJfW8rA41U1LCIcDUyFf6LtZ/v8rSeKr6 TuE6SGV4WRaBm1SrjWBeHVV866CRrtSS1ieT2asFsAyOZqWhk2fakwwBDFWDhOGI ubfO+5aq9cBJbNRlzsgB3UZs3gC0O6GzbnZ6oT0TiJMeTsXXjABLUlaq/rrqFF4Y euZkkbHTFBMz288PUc3m3ZTcpN+E7+ZOUBRZXKD20K07NugqCzUCAwEAATANBgkq hkiG9w0BAQsFAAOCAQEASvYHuIl5C0NHBELPpVHNuLbQsDQNKVj3a54+9q1JkiMM 6taEJYfw7K1Xjm4RoiFSHpQBh+PWZS3hToToL2Zx8JfMR5MuAirdPAy1Sia/J/qE wQdJccqmvuLkLTSlsGbEJ/LUUgOAgrgHOZM5lUgIhCneA0/dWJ3PsN0zvn69/faY oo1iiolWiIHWWBUSdr3jM2AJaVAsTmLh00cKaDNk37JB940xConBGSl98JPrNrf9 dUAiT0iIBngDBdHnn/yTj+InVEFyZSKrNtiDSObFHxPcxGteHNrCPJdP1e+GqkHp HJMRZVCQpSMzvHlofHSNgzWV1MX5h1CP4SGZdBDTfA== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAOIvDiVb18eVMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQwHhcNMTYwODE0MTY1NjExWhcNMjYwODEyMTY1NjExWjBF MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50 ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB 
CgKCAQEArVHWFn52Lbl1l59exduZntVSZyDYpzDND+S2LUcO6fRBWhV/1Kzox+2G ZptbuMGmfI3iAnb0CFT4uC3kBkQQlXonGATSVyaFTFR+jq/lc0SP+9Bd7SBXieIV eIXlY1TvlwIvj3Ntw9zX+scTA4SXxH6M0rKv9gTOub2vCMSHeF16X8DQr4XsZuQr 7Cp7j1I4aqOJyap5JTl5ijmG8cnu0n+8UcRlBzy99dLWJG0AfI3VRJdWpGTNVZ92 aFff3RpK3F/WI2gp3qV1ynRAKuvmncGC3LDvYfcc2dgsc1N6Ffq8GIrkgRob6eBc klDHp1d023Lwre+VaVDSo1//Y72UFwIDAQABo1AwTjAdBgNVHQ4EFgQUbNOlA6sN XyzJjYqciKeId7g3/ZowHwYDVR0jBBgwFoAUbNOlA6sNXyzJjYqciKeId7g3/Zow DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAVVaR5QWLZIRR4Dw6TSBn BQiLpBSXN6oAxdDw6n4PtwW6CzydaA+creiK6LfwEsiifUfQe9f+T+TBSpdIYtMv Z2H2tjlFX8VrjUFvPrvn5c28CuLI0foBgY8XGSkR2YMYzWw2jPEq3Th/KM5Catn3 AFm3bGKWMtGPR4v+90chEN0jzaAmJYRrVUh9vea27bOCn31Nse6XXQPmSI6Gyncy OAPUsvPClF3IjeL1tmBotWqSGn1cYxLo+Lwjk22A9h6vjcNQRyZF2VLVvtwYrNU3 mwJ6GCLsLHpwW/yjyvn8iEltnJvByM/eeRnfXV6WDObyiZsE/n6DxIRJodQzFqy9 GA== -----END CERTIFICATE----- vendor/openssl/test/dsaparam.pem0000664000175000017500000000070714160055207017610 0ustar mwhudsonmwhudson-----BEGIN DSA PARAMETERS----- MIIBHgKBgQCkKe/jtYKJNQafaE7kg2aaJOEPUV0Doi451jkXHp5UfLh6+t42eabS GkE9WBAlILgaB8yHckLe9+zozN39+SUDp94kb2r38/8w/9Ffhbsep9uiyOj2ZRQu r6SkpKQDKcnAd6IMZXZcvdSgPC90A6qraYUZKq7Csjn63gbC+IvXHwIVAIgSPE43 lXD8/rGYxos4cxCgGGAxAoGASMV56WhLvVQtWMVI36WSIxbZnC2EsnNIKeVWyXnP /OmPJ2mdezG7i1alcwsO2TnSLbvjvGPlyzIqZzHvWC8EmDqsfbU+n8we/Ealsm5n loC8m9ECWpbTzbNdvrAAj9UPVWjcDwg7grAGGysh6lGbBv5P+4zL/niq1UiELnKc ifg= -----END DSA PARAMETERS----- vendor/openssl/test/key.der.pub0000664000175000017500000000044614160055207017366 0ustar mwhudsonmwhudson0‚"0  *†H†÷ ‚0‚ ‚¯V×2›Z"‹ôõKñ®Ö(oÂóâÄý5ñãA”ôJsЙztCÀ‹ñ~@Äí¾l;•;YÜTàdXÃL*‘E².ü(×èfL—EùxP«³®‘“>1¸ ØyÙà>•œne¸?|Ä/f;o_'ôÒkjÔTÌ}Y‡ô^Dîvåv»Ÿ¾#ðÔÌà±îÜ+,ÛÓq¥—Ùè.v ŠNò2þC*Ãò£f}§&GmÑ®— ;f²i;+F2m4ô)þ ôÈè¢ye>“KPCÙB:é T¿Kü v>àj¯ShÆÇP¸E26ÂWÇ.>ïÿ­@.Z뉬çdc•C)÷‚vÉvendor/openssl/test/pkcs8-nocrypt.der0000664000175000017500000000230014160055207020524 0ustar mwhudsonmwhudson0‚¼0  *†H†÷ ‚¦0‚¢‚¸)Š9¬{Ùéߢ3nfEO;»Þ]ŸR# ÂRͳ¡Vó\aJ^Î÷¶GRYoGÅF2Z× /‚銑â N.W]ÿ•Õ`ôK(>Ël‡¿mEœäNªdzQÓf³áLöÆñô€ÖIÑ,~¿áÚ*÷äcí?¡d€nr­ —¹ƒÚº©‹hè‘4Zû»ãÌ6Wƒe\v­V¸Ã0P(Òƒÿ¼ 8­ÐZ‡/¬CkŸ#¥/Æ]K[8³†9œ*µaU?D Ö´HMl&]Vƒäí£Çs°]–6X¹¬LÞ8^e×líNy¸±~V/À"8uè~IfQµÿƒN¥r½@Û‚=–~ÈlÛzsÑ;0RW¹eÎ1£ÕhŸè[!{š™1"Níþá]5÷ð€$–òµä#F«\þ/(ÖqýÊm»8lÞèŽáóœÉpÉU«2b˜?eš».Jô†çµGªx §®9rNæBÓçÑô}uó2é %”vÄЪÿú^«.i²®¡Æ°?ÄwR)' rÍ®mÄ' gG¤Y‡MÊ¡ZT:ŸU®p¦ßE"j%¯{ Gd`¥¢Ç-á“< šS{¼Æ†¥ 2§Ö–i;_á’ê…dPe¿n E%Ò“;@bΕn{²T}Ø™Âã*m̪® BVѪJ3lND¿²ùÔ±yïñ²Òó+”­uÌ€*˜ËaÐf!]IAÐ˜Ç R•Ë™d÷š1àúY Þ¢šWÀoØ+%ýnòMÅô¹ È|õVC”´Å^À?åàºûà°ìÒî®gOì‚"ܯñ˜fLã AŠï¿žCÜ]-gÌl=¶c)i•!ý‡¼’–Nµo”TªÏÄ|HÎ…º›ƒ#àÐî{»%x;^ù]8¨X?µ¯[¯jJdX|nv‘¨{(jèbâMŠÆ‹J ‹€å²˜&‚¥9%$?Ø×Õůœ¿ž$! 
†mÖ¥uÁâu…êëzÙRK§ ,î)2pïˆ)î ?|T¶ ¢ Å9×ÉÔÜ5€k¼`óRSÓŽ¢ßÉ„VÊ«™tÍ ïIþ@R"t"QÕbä–q‘çROŒ!n„IyF;~¿,ãìÈ.õ’ÎAÕ“9)Œ¤À¼ô/5€Ì&ü‘â®ü׎ci„Ö¯ûÙ™µ‘ä h„¾í¡~òg7Çú· ày©z+¹\2½€jÌÿª:hwun±6Ò@½„ßX»OŒr|èg–ÎØ˜Ë;N—#Z©Ì½³vÃlÄ<†´žQSrõ sZ[àv6WfÛx{·ûø !çþAíÌǤḻzت]Ô¹ |sç"¬” zý­¨üXG;à à ڵÀoe‹Çí€\ƒgØŠÇ#>™$ v²—*ÖʵâiNÍɵ…!Î’ž¥NȸZERL-PÂ%†Î¶Q¹Û›•WkÊ~‘×ù¾îç¼þ5'’\Ÿ¹‚L‡»Œ2Ên[n \æôÒf«dýªž­±Wfžî€êÂi9!Ý‹VªM™ vendor/openssl/test/rsa.pem0000664000175000017500000000321314160055207016600 0ustar mwhudsonmwhudson-----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAofgWCuLjybRlzo0tZWJjNiuSfb4p4fAkd/wWJcyQoTbji9k0 l8W26mPddxHmfHQp+Vaw+4qPCJrcS2mJPMEzP1Pt0Bm4d4QlL+yRT+SFd2lZS+pC gNMsD1W/YpRPEwOWvG6b32690r2jZ47soMZo9wGzjb/7OMg0LOL+bSf63kpaSHSX ndS5z5rexMdbBYUsLA9e+KXBdQOS+UTo7WTBEMa2R2CapHg665xsmtdVMTBQY4uD Zlxvb3qCo5ZwKh9kG4LT6/I5IhlJH7aGhyxXFvUK+DWNmoudF8NAco9/h9iaGNj8 q2ethFkMLs91kzk2PAcDTW9gb54h4FRWyuXpoQIDAQABAoIBABKucaRpzQorw35S bEUAVx8dYXUdZOlJcHtiWQ+dC6V8ljxAHj/PLyzTveyI5QO/xkObCyjIL303l2cf UhPu2MFaJdjVzqACXuOrLot/eSFvxjvqVidTtAZExqFRJ9mylUVAoLvhowVWmC1O n95fZCXxTUtxNEG1Xcc7m0rtzJKs45J+N/V9DP1edYH6USyPSWGp6wuA+KgHRnKK Vf9GRx80JQY7nVNkL17eHoTWEwga+lwi0FEoW9Y7lDtWXYmKBWhUE+U8PGxlJf8f 40493HDw1WRQ/aSLoS4QTp3rn7gYgeHEvfJdkkf0UMhlknlo53M09EFPdadQ4TlU bjqKc50CgYEA4BzEEOtIpmVdVEZNCqS7baC4crd0pqnRH/5IB3jw3bcxGn6QLvnE tfdUdiYrqBdss1l58BQ3KhooKeQTa9AB0Hw/Py5PJdTJNPY8cQn7ouZ2KKDcmnPG BY5t7yLc1QlQ5xHdwW1VhvKn+nXqhJTBgIPgtldC+KDV5z+y2XDwGUcCgYEAuQPE fgmVtjL0Uyyx88GZFF1fOunH3+7cepKmtH4pxhtCoHqpWmT8YAmZxaewHgHAjLYs p1ZSe7zFYHj7C6ul7TjeLQeZD/YwD66t62wDmpe/HlB+TnBA+njbglfIsRLtXlnD zQkv5dTltRJ11BKBBypeeF6689rjcJIDEz9RWdcCgYAHAp9XcCSrn8wVkMVkKdb7 DOX4IKjzdahm+ctDAJN4O/y7OW5FKebvUjdAIt2GuoTZ71iTG+7F0F+lP88jtjP4 U4qe7VHoewl4MKOfXZKTe+YCS1XbNvfgwJ3Ltyl1OH9hWvu2yza7q+d5PCsDzqtm 27kxuvULVeya+TEdAB1ijQKBgQCH/3r6YrVH/uCWGy6bzV1nGNOdjKc9tmkfOJmN 54dxdixdpozCQ6U4OxZrsj3FcOhHBsqAHvX2uuYjagqvo3cOj1TRqNocX40omfCC Mx3bD1yPPf/6TI2XECva/ggqEY2mYzmIiA5LVVmc5nrybr+lssFKneeyxN2Wq93S 0iJMdQKBgCGHewxzoa1r8ZMD0LETNrToK423K377UCYqXfg5XMclbrjPbEC3YI1Z NqMtuhdBJqUnBi6tjKMF+34Xf0CUN8ncuXGO2CAYvO8PdyCixHX52ybaDjy1FtCE 6yUXjoKNXKvUm7MWGsAYH6f4IegOetN5NvmUMFStCSkh7ixZLkN1 -----END RSA PRIVATE KEY----- vendor/openssl/test/aia_test_cert.pem0000664000175000017500000000245214160055207020625 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIDozCCAougAwIBAgIJAJayG40CARAjMA0GCSqGSIb3DQEBCwUAMA8xDTALBgNV BAMMBHRlc3QwHhcNMjEwMzAyMDA1NzQ3WhcNNDgwNzE4MDA1NzQ3WjBzMQswCQYD VQQGEwJYWDELMAkGA1UECAwCWFgxEDAOBgNVBAcMB25vd2hlcmUxEDAOBgNVBAoM B3Rlc3RvcmcxEjAQBgNVBAsMCXRlc3Rncm91cDEfMB0GA1UEAwwWbWFjaGluZS0w Lm15aG9zdC5teW5ldDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANKA 3zhwC70hbxFVdC0dYk9BHaNntZ4LPUVwFSG2HBn34oO8zCp4wkH+VIi9vOhWiySK Gs3gW4qpjMbF82Gqc3dG2KfqUrOtWY+u54zAzqpgiJf08wmREHPoZmjqfCfgM3FO VMEA8g1BQxXEd+y7UEDoXhPIoeFnqzMu9sg4npnL9U5BLaQJiWnXHClnBrvAAKXW E8KDNmcavtFvo2xQVC09C6dJG5CrigWcZe4CaUl44rHiPaQd+jOp0HAccl/XLA0/ QyHvW6ksjco/mb7ia1U9ohaC/3NHmzUA1S3kdq/qgnkPsjmy5v8k5vizowNc5rFO XsV86BIv44rh1Jut52ECAwEAAaOBnTCBmjAMBgNVHRMEBTADAQH/MAsGA1UdDwQE AwIF4DAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwIQYDVR0RBBowGIIW bWFjaGluZS0wLm15aG9zdC5teW5ldDA7BggrBgEFBQcBAQQvMC0wKwYIKwYBBQUH MAKGH2h0dHA6Ly93d3cuZXhhbXBsZS5jb20vY2VydC5wZW0wDQYJKoZIhvcNAQEL BQADggEBAH+ayx8qGvxzrG57jgXJudq+z783O6E2xGBJn1cT9Jhrg1VnlU+tHcNd fFcsp0gdQZCmm3pu3E0m/FsgTpfHUgdCOmZQp45QrxCz2oRdWQM71SSA/x1VfQ9w 670iZOEY15/ss2nRl0woaYO7tBVadpZfymW5+OhsTKn5gL0pVmW3RciHuAmbIvQO bouUwzuZIJMfca7T1MqZYdrKoJrOBj0LaPTutjfQB7O/02vUCPjTTIH20aqsMe5K KXCrjiZO2jkxQ49Hz5uwfPx12dSVHNLpsnfOAH+MUToeW+SPx2OPvl/uAHcph2lj MLA6Wi64rSUxzkcFLFsGpKcK6QKcHUw= -----END 
CERTIFICATE----- vendor/openssl/test/key.pem0000664000175000017500000000325414160055207016610 0ustar mwhudsonmwhudson-----BEGIN PRIVATE KEY----- MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCo9CWMRLMXo1CF /iORh9B4NhtJF/8tR9PlG95sNvyWuQQ/8jfev+8zErplxfLkt0pJqcoiZG8g9NU0 kU6o5T+/1QgZclCAoZaS0Jqxmoo2Yk/1Qsj16pnMBc10uSDk6V9aJSX1vKwONVNS wiHA1MhX+i7Wf7/K0niq+k7hOkhleFkWgZtUq41gXh1VfOugka7UktYnk9mrBbAM jmaloZNn2pMMAQxVg4ThiLm3zvuWqvXASWzUZc7IAd1GbN4AtDuhs252eqE9E4iT Hk7F14wAS1JWqv666hReGHrmZJGx0xQTM9vPD1HN5t2U3KTfhO/mTlAUWVyg9tCt OzboKgs1AgMBAAECggEBAKLj6IOJBKXolczpzb8UkyAjAkGBektcseV07gelJ/fk 3z0LuWPv5p12E/HlXB24vU2x/ikUbbP3eMsawRzDEahQqmNmPEkYAYUAy/Qpi9GN DYvn3LqDec4jVgeQKS+p9H2DzUpTogp8zR2//yzbuWBg2+F//xh7vU0S0RQCziPM x7RSBgbhxSfChfEJbS2sDnzfh0jRQmoY95iFv7puet1FJtzdZ4fgCd1RqmC2lFM5 H0eZtN/Cz19lieVs0b996DErdEBqClVZO00eYbRozCDaBzRU3ybB/dMrGJxhkkXm wb3kWMtziH9qOYsostuHIFu8eKFLloKxFnq2R4DGxOECgYEA2KUIZISOeGJSBcLJ JAUK2gvgXPNo4HHWIwOA9xeN3ZJlsnPlffXQNnm6t1st1V2gfMm9I2n0m/F0y2B/ n/XGSa8bghfPA9l0c2h58lkL3JQJR/paa8ycTz+YZPrznEyN7Qa0RrJXUvZv9lQL Hc3+FHcSHgMqDV2f2bHAEu9YGi0CgYEAx6VEIPNvrHFgjo/jk1RTuk+m0xEWQsZL Cs+izQMr2TaeJn8LG+93AvFuYn0J0nT3WuStLPrUg8i4IhSS6lf1tId5ivIZPm4r YwMyblBJXhnHbk7Uqodjfw/3s6V2HAu++B7hTdyVr9DFuST9uv4m8bkPV8rfX1jE I2rAPVWvgikCgYB+wNAQP547wQrMZBLbCDg5KwmyWJfb+b6X7czexOEz6humNTjo YZHYzY/5B1fhpk3ntQD8X1nGg5caBvOk21+QbOtjShrM3cXMYCw5JvBRtitX+Zo9 yBEMLOE0877ki8XeEDYZxu5gk98d+D4oygUGZEQtWxyXhVepPt5qNa8OYQKBgQDH RVgZI6KFlqzv3wMh3PutbS9wYQ+9GrtwUQuIYe/0YSW9+vSVr5E0qNKrD28sV39F hBauXLady0yvB6YUrjMbPFW+sCMuQzyfGWPO4+g3OrfqjFiM1ZIkE0YEU9Tt7XNx qTDtTI1D7bhNMnTnniI1B6ge0und+3XafAThs5L48QKBgQCTTpfqMt8kU3tcI9sf 0MK03y7kA76d5uw0pZbWFy7KI4qnzWutCzb+FMPWWsoFtLJLPZy//u/ZCUVFVa4d 0Y/ASNQIESVPXFLAltlLo4MSmsg1vCBsbviEEaPeEjvMrgki93pYtd/aOSgkYC1T mEq154s5rmqh+h+XRIf7Au0SLw== -----END PRIVATE KEY----- vendor/openssl/test/pkcs1.pem.pub0000664000175000017500000000065214160055207017625 0ustar mwhudsonmwhudson-----BEGIN RSA PUBLIC KEY----- MIIBCgKCAQEAyrcf7lv42BCoiDd3LYmF8eaGO4rhmGzGgi+NSZowkEuLhibHGQle FkZC7h1VKsxKFgy7Fx+GYHkv9OLm9H5fdp3HhYlo19bZVGvSJ66OJe/Bc4S02bBb Y8vwpc/N5O77m5J/nHLuL7XJtpfSKkX+3NPiX1X2L99iipt7F0a7hNws3G3Lxg6t P3Yc55TPjXzXvDIgjt/fag6iF8L/bR3augJJdDhLzNucR8A5HcvPtIVo51R631Zq MCh+dZvgz9zGCXwsvSky/iOJTHN3wnpsWuCAzS1iJMfjR783Tfv6sWFs19FH7pHP xBA3b2enPM9KBzINGOly0eM4h0fh+VBltQIDAQAB -----END RSA PUBLIC KEY----- vendor/openssl/test/cms.p120000664000175000017500000000325514160055207016424 0ustar mwhudsonmwhudson0‚©0‚o *†H†÷  ‚`‚\0‚X0‚W *†H†÷  ‚H0‚D0‚= *†H†÷ 0 *†H†÷  0:ì-`úó˜"€‚¦—²0uxöìÉÁ ŸYÁ‘P²Ë ¸¨N,È7~[þê,öDúßЧm#²V¢·Ÿ”hæQõ3\DEéúžïÞÿχØ=´!*Z˜ £ƒ±UQ٥ƳM€(üZoUþIJ´ïÒKc/îuCã?`^¢z0<ÔkC›æuiªqÛå)7ÐËûŽ3üYÜþçå?ß[ôïI— øb l»ŽÈØe…W'‘‡i6OòN:k¸Â1ÝÑ>d·³bŠÒ6,j5úqS ÌûZxîþª=º4«ãMl+lÕ¤¢ß¡½‹¶Ìh9±o‰Q3ìŽ&5 ºú²‘wI‘HÚß8HK¥„¹N~˜Ë˜È üø^’öÙEs=ìîD'ì)a#4A~]ƒR&šfIߤ¢ %**# td#ó„ª"ÏW.d`“ánÁnÁðÂéSyÎ]N{Gaе“§ë›»“kœ¾ˆöÅWù6Òa µröøê=üý~Òq—Œ)ûv©ûzÇnÚöO–faÏažÇfª™³º3üÔ•èd×››¤”C«sp—±ˆ=a-£b_IW™oW ëîÍEÄçE_ZÖ2ógŒ ‘ü÷_ —HÌÐɸB°éÛgeôݼ÷½†Ç’€%þ;“Dˆ T+/S쎑Tu¶!ïè†?}‹û^‰Ó4<«û¶VÔŠI±Ñ–6m1ŸÎ ~±Ç™1Nö)_0â3Z ‰Îä5Ù½ÀúlÐg lü›š‰**­²úQeÇبj®.Ei5 ’0Ÿw'aâ"V²£mŠteÄQÌ`Ðùà`Ÿ†Ó0nŠ&}”µÜSÄ'¨ó lDÀeÉ¿UpL±–wkL§V¯†áéÈ}–_ñ^hSµ".H[Qfi-Iéœ/T¼æ(0‚ù *†H†÷  ‚ê‚æ0‚â0‚Þ *†H†÷   ‚¦0‚¢0 *†H†÷  0¶q€À6C–‚€>+Ë9®SÄgÙüJ’}0’ŒUpÌ-•î T—I7—ÙŸ¦·{l°ÑnÁ ˆuÖa`VÿP±y—[J–°:(Lì«ëÐsqžvÛ*Ñ/ d’s%v9h®9ô™*—hj¢Ä±i—PÌöJ7XïlÉ‚«ƒÓj*"á Úî›DlfȲd$¡ ¦ôéÖÇm<ÉÆ5"tN@ý+Ô‹¨Í?Iͳ½U@ÿ­*=IýÓ@yójõK7q¢íЩɔT ‡Qy^ZÝi– áš–IK^‰ ¨¨Ã”zl­â'àÌ÷D£j’ßõ'±Wâh “y.GÚ%Q¸þX“\¶‚5W¼îx ãÇ©´WÌäyI 3¶³:BÝÎýU úQ2¸ÞGxí9ªT¯~aé –‡jµ)I,uÃÅK"¾‰_×NÉÄÄA=ð¨!øBÙâƒ~ ß%‰/>2ó«lÍÛlCÛnËiÊvÅ›¹”Äyƒa0.CöOôæFn†'°U½êb 
‹n/žsýÈ—W“½E§&‰qwX<=Xßq¦y¼¼½"có.HfÛ^ÿië,Ø:›¯k4À„ËÉì}\Oíï´ƒƒeÔ›Xá';€yT«(­·²óDDuó¸Óò¤æk…ø]T‘ä6´n^™¿n™D'Yj(²±Ùæû;¦é¨¾Û$v”"ç|-~7NßÌ!oº]«žãF®éà áÐJ›ûÖ~:PGæþôë!ßxRÞŽ/ByÍb@=O“Ä7ž{OÑ„'{ÒÔ]ž¦°è¹#Š2,:E£1%0# *†H†÷  15V‚R)áa`ÈüpÒ¢Ÿæœ010!0 +Tú†«ÝѰCñJ,QÞOXeÌ¡ˆ³”Jˆvendor/openssl/test/cert.pem0000664000175000017500000000216314160055207016753 0ustar mwhudsonmwhudson-----BEGIN CERTIFICATE----- MIIDGzCCAgMCCQCHcfe97pgvpTANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJB VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0 cyBQdHkgTHRkMB4XDTE2MDgxNDE3MDAwM1oXDTI2MDgxMjE3MDAwM1owWjELMAkG A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0 IFdpZGdpdHMgUHR5IEx0ZDETMBEGA1UEAwwKZm9vYmFyLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKj0JYxEsxejUIX+I5GH0Hg2G0kX/y1H0+Ub 3mw2/Ja5BD/yN96/7zMSumXF8uS3SkmpyiJkbyD01TSRTqjlP7/VCBlyUIChlpLQ mrGaijZiT/VCyPXqmcwFzXS5IOTpX1olJfW8rA41U1LCIcDUyFf6LtZ/v8rSeKr6 TuE6SGV4WRaBm1SrjWBeHVV866CRrtSS1ieT2asFsAyOZqWhk2fakwwBDFWDhOGI ubfO+5aq9cBJbNRlzsgB3UZs3gC0O6GzbnZ6oT0TiJMeTsXXjABLUlaq/rrqFF4Y euZkkbHTFBMz288PUc3m3ZTcpN+E7+ZOUBRZXKD20K07NugqCzUCAwEAATANBgkq hkiG9w0BAQsFAAOCAQEASvYHuIl5C0NHBELPpVHNuLbQsDQNKVj3a54+9q1JkiMM 6taEJYfw7K1Xjm4RoiFSHpQBh+PWZS3hToToL2Zx8JfMR5MuAirdPAy1Sia/J/qE wQdJccqmvuLkLTSlsGbEJ/LUUgOAgrgHOZM5lUgIhCneA0/dWJ3PsN0zvn69/faY oo1iiolWiIHWWBUSdr3jM2AJaVAsTmLh00cKaDNk37JB940xConBGSl98JPrNrf9 dUAiT0iIBngDBdHnn/yTj+InVEFyZSKrNtiDSObFHxPcxGteHNrCPJdP1e+GqkHp HJMRZVCQpSMzvHlofHSNgzWV1MX5h1CP4SGZdBDTfA== -----END CERTIFICATE----- vendor/openssl/examples/0000775000175000017500000000000014160055207016150 5ustar mwhudsonmwhudsonvendor/openssl/examples/mk_certs.rs0000664000175000017500000001254714160055207020336 0ustar mwhudsonmwhudson//! A program that generates ca certs, certs verified by the ca, and public //! and private keys. use openssl::asn1::Asn1Time; use openssl::bn::{BigNum, MsbOption}; use openssl::error::ErrorStack; use openssl::hash::MessageDigest; use openssl::pkey::{PKey, PKeyRef, Private}; use openssl::rsa::Rsa; use openssl::x509::extension::{ AuthorityKeyIdentifier, BasicConstraints, KeyUsage, SubjectAlternativeName, SubjectKeyIdentifier, }; use openssl::x509::{X509NameBuilder, X509Ref, X509Req, X509ReqBuilder, X509VerifyResult, X509}; /// Make a CA certificate and private key fn mk_ca_cert() -> Result<(X509, PKey), ErrorStack> { let rsa = Rsa::generate(2048)?; let key_pair = PKey::from_rsa(rsa)?; let mut x509_name = X509NameBuilder::new()?; x509_name.append_entry_by_text("C", "US")?; x509_name.append_entry_by_text("ST", "TX")?; x509_name.append_entry_by_text("O", "Some CA organization")?; x509_name.append_entry_by_text("CN", "ca test")?; let x509_name = x509_name.build(); let mut cert_builder = X509::builder()?; cert_builder.set_version(2)?; let serial_number = { let mut serial = BigNum::new()?; serial.rand(159, MsbOption::MAYBE_ZERO, false)?; serial.to_asn1_integer()? 
}; cert_builder.set_serial_number(&serial_number)?; cert_builder.set_subject_name(&x509_name)?; cert_builder.set_issuer_name(&x509_name)?; cert_builder.set_pubkey(&key_pair)?; let not_before = Asn1Time::days_from_now(0)?; cert_builder.set_not_before(&not_before)?; let not_after = Asn1Time::days_from_now(365)?; cert_builder.set_not_after(&not_after)?; cert_builder.append_extension(BasicConstraints::new().critical().ca().build()?)?; cert_builder.append_extension( KeyUsage::new() .critical() .key_cert_sign() .crl_sign() .build()?, )?; let subject_key_identifier = SubjectKeyIdentifier::new().build(&cert_builder.x509v3_context(None, None))?; cert_builder.append_extension(subject_key_identifier)?; cert_builder.sign(&key_pair, MessageDigest::sha256())?; let cert = cert_builder.build(); Ok((cert, key_pair)) } /// Make an X509 request with the given private key fn mk_request(key_pair: &PKey<Private>) -> Result<X509Req, ErrorStack> { let mut req_builder = X509ReqBuilder::new()?; req_builder.set_pubkey(key_pair)?; let mut x509_name = X509NameBuilder::new()?; x509_name.append_entry_by_text("C", "US")?; x509_name.append_entry_by_text("ST", "TX")?; x509_name.append_entry_by_text("O", "Some organization")?; x509_name.append_entry_by_text("CN", "www.example.com")?; let x509_name = x509_name.build(); req_builder.set_subject_name(&x509_name)?; req_builder.sign(key_pair, MessageDigest::sha256())?; let req = req_builder.build(); Ok(req) } /// Make a certificate and private key signed by the given CA cert and private key fn mk_ca_signed_cert( ca_cert: &X509Ref, ca_key_pair: &PKeyRef<Private>, ) -> Result<(X509, PKey<Private>), ErrorStack> { let rsa = Rsa::generate(2048)?; let key_pair = PKey::from_rsa(rsa)?; let req = mk_request(&key_pair)?; let mut cert_builder = X509::builder()?; cert_builder.set_version(2)?; let serial_number = { let mut serial = BigNum::new()?; serial.rand(159, MsbOption::MAYBE_ZERO, false)?; serial.to_asn1_integer()? 
}; cert_builder.set_serial_number(&serial_number)?; cert_builder.set_subject_name(req.subject_name())?; cert_builder.set_issuer_name(ca_cert.subject_name())?; cert_builder.set_pubkey(&key_pair)?; let not_before = Asn1Time::days_from_now(0)?; cert_builder.set_not_before(&not_before)?; let not_after = Asn1Time::days_from_now(365)?; cert_builder.set_not_after(&not_after)?; cert_builder.append_extension(BasicConstraints::new().build()?)?; cert_builder.append_extension( KeyUsage::new() .critical() .non_repudiation() .digital_signature() .key_encipherment() .build()?, )?; let subject_key_identifier = SubjectKeyIdentifier::new().build(&cert_builder.x509v3_context(Some(ca_cert), None))?; cert_builder.append_extension(subject_key_identifier)?; let auth_key_identifier = AuthorityKeyIdentifier::new() .keyid(false) .issuer(false) .build(&cert_builder.x509v3_context(Some(ca_cert), None))?; cert_builder.append_extension(auth_key_identifier)?; let subject_alt_name = SubjectAlternativeName::new() .dns("*.example.com") .dns("hello.com") .build(&cert_builder.x509v3_context(Some(ca_cert), None))?; cert_builder.append_extension(subject_alt_name)?; cert_builder.sign(ca_key_pair, MessageDigest::sha256())?; let cert = cert_builder.build(); Ok((cert, key_pair)) } fn real_main() -> Result<(), ErrorStack> { let (ca_cert, ca_key_pair) = mk_ca_cert()?; let (cert, _key_pair) = mk_ca_signed_cert(&ca_cert, &ca_key_pair)?; // Verify that this cert was issued by this ca match ca_cert.issued(&cert) { X509VerifyResult::OK => println!("Certificate verified!"), ver_err => println!("Failed to verify certificate: {}", ver_err), }; Ok(()) } fn main() { match real_main() { Ok(()) => println!("Finished."), Err(e) => println!("Error: {}", e), }; } vendor/openssl/README.md0000664000175000017500000000147614160055207015621 0ustar mwhudsonmwhudson# rust-openssl [![crates.io](https://img.shields.io/crates/v/openssl.svg)](https://crates.io/crates/openssl) OpenSSL bindings for the Rust programming language. [Documentation](https://docs.rs/openssl). ## Release Support The current supported release of `openssl` is 0.10 and `openssl-sys` is 0.9. New major versions will be published at most once per year. After a new release, the previous major version will be partially supported with bug fixes for 3 months, after which support will be dropped entirely. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed under the terms of both the Apache License, Version 2.0 and the MIT license without any additional terms or conditions. vendor/openssl/Cargo.lock0000664000175000017500000001216214172417313016244 0ustar mwhudsonmwhudson# This file is automatically @generated by Cargo. # It is not intended for manual editing. 
version = 3 [[package]] name = "autocfg" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a" [[package]] name = "bitflags" version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" [[package]] name = "cc" version = "1.0.71" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "79c2681d6594606957bbb8631c4b90a7fcaaa72cdb714743a437b156d6a7eedd" [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "foreign-types" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1" dependencies = [ "foreign-types-shared", ] [[package]] name = "foreign-types-shared" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b" [[package]] name = "fuchsia-cprng" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a06f77d526c1a601b7c4cdd98f54b5eaabffc14d5f2f0296febdc7f357c6d3ba" [[package]] name = "hex" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "805026a5d0141ffc30abb3be3173848ad46a1b1664fe632428479619a3644d77" [[package]] name = "libc" version = "0.2.106" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a60553f9a9e039a333b4e9b20573b9e9b9c0bb3a11e201ccc48ef4283456d673" [[package]] name = "once_cell" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "692fcb63b64b1758029e0a96ee63e049ce8c5948587f2f7208df04625e5f6b56" [[package]] name = "openssl" version = "0.10.38" dependencies = [ "bitflags", "cfg-if", "foreign-types", "hex", "libc", "once_cell", "openssl-sys", "tempdir", ] [[package]] name = "openssl-src" version = "300.0.2+3.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "14a760a11390b1a5daf72074d4f6ff1a6e772534ae191f999f57e9ee8146d1fb" dependencies = [ "cc", ] [[package]] name = "openssl-sys" version = "0.9.69" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "14276c7942cb12d5ffab976d5b69789b0510d052576b230fcde58d8c581b8d1d" dependencies = [ "autocfg", "cc", "libc", "openssl-src", "pkg-config", "vcpkg", ] [[package]] name = "pkg-config" version = "0.3.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "12295df4f294471248581bc09bef3c38a5e46f1e36d6a37353621a0c6c357e1f" [[package]] name = "rand" version = "0.4.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "552840b97013b1a26992c11eac34bdd778e464601a4c2054b5f0bff7c6761293" dependencies = [ "fuchsia-cprng", "libc", "rand_core 0.3.1", "rdrand", "winapi", ] [[package]] name = "rand_core" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7a6fdeb83b075e8266dcc8762c22776f6877a63111121f5f8c7411e5be7eed4b" dependencies = [ "rand_core 0.4.2", ] [[package]] name = "rand_core" version = "0.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c33a3c44ca05fa6f1807d8e6743f3824e8509beca625669633be0acbdf509dc" [[package]] name = "rdrand" 
version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "678054eb77286b51581ba43620cc911abf02758c91f93f479767aed0f90458b2" dependencies = [ "rand_core 0.3.1", ] [[package]] name = "remove_dir_all" version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7" dependencies = [ "winapi", ] [[package]] name = "tempdir" version = "0.3.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "15f2b5fb00ccdf689e0149d1b1b3c03fead81c2b37735d812fa8bddbbf41b6d8" dependencies = [ "rand", "remove_dir_all", ] [[package]] name = "vcpkg" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" vendor/same-file/0000775000175000017500000000000014160055207014511 5ustar mwhudsonmwhudsonvendor/same-file/.cargo-checksum.json0000664000175000017500000000013114160055207020350 0ustar mwhudsonmwhudson{"files":{},"package":"93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502"}vendor/same-file/Cargo.toml0000664000175000017500000000213414160055207016441 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "same-file" version = "1.0.6" authors = ["Andrew Gallant "] exclude = ["/.github"] description = "A simple crate for determining whether two file paths point to the same file.\n" homepage = "https://github.com/BurntSushi/same-file" documentation = "https://docs.rs/same-file" readme = "README.md" keywords = ["same", "file", "equal", "inode"] license = "Unlicense/MIT" repository = "https://github.com/BurntSushi/same-file" [dev-dependencies.doc-comment] version = "0.3" [target."cfg(windows)".dependencies.winapi-util] version = "0.1.1" vendor/same-file/UNLICENSE0000664000175000017500000000227314160055207015765 0ustar mwhudsonmwhudsonThis is free and unencumbered software released into the public domain. Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means. 
In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. For more information, please refer to vendor/same-file/src/0000775000175000017500000000000014160055207015300 5ustar mwhudsonmwhudsonvendor/same-file/src/win.rs0000664000175000017500000001322514160055207016446 0ustar mwhudsonmwhudsonuse std::fs::File; use std::hash::{Hash, Hasher}; use std::io; use std::os::windows::io::{AsRawHandle, IntoRawHandle, RawHandle}; use std::path::Path; use winapi_util as winutil; // For correctness, it is critical that both file handles remain open while // their attributes are checked for equality. In particular, the file index // numbers on a Windows stat object are not guaranteed to remain stable over // time. // // See the docs and remarks on MSDN: // https://msdn.microsoft.com/en-us/library/windows/desktop/aa363788(v=vs.85).aspx // // It gets worse. It appears that the index numbers are not always // guaranteed to be unique. Namely, ReFS uses 128 bit numbers for unique // identifiers. This requires a distinct syscall to get `FILE_ID_INFO` // documented here: // https://msdn.microsoft.com/en-us/library/windows/desktop/hh802691(v=vs.85).aspx // // It seems straight-forward enough to modify this code to use // `FILE_ID_INFO` when available (minimum Windows Server 2012), but I don't // have access to such Windows machines. // // Two notes. // // 1. Java's NIO uses the approach implemented here and appears to ignore // `FILE_ID_INFO` altogether. So Java's NIO and this code are // susceptible to bugs when running on a file system where // `nFileIndex{Low,High}` are not unique. // // 2. LLVM has a bug where they fetch the id of a file and continue to use // it even after the handle has been closed, so that uniqueness is no // longer guaranteed (when `nFileIndex{Low,High}` are unique). // bug report: http://lists.llvm.org/pipermail/llvm-bugs/2014-December/037218.html // // All said and done, checking whether two files are the same on Windows // seems quite tricky. Moreover, even if the code is technically incorrect, // it seems like the chances of actually observing incorrect behavior are // extremely small. Nevertheless, we mitigate this by checking size too. // // In the case where this code is erroneous, two files will be reported // as equivalent when they are in fact distinct. This will cause the loop // detection code to report a false positive, which will prevent descending // into the offending directory. As far as failure modes goes, this isn't // that bad. #[derive(Debug)] pub struct Handle { kind: HandleKind, key: Option, } #[derive(Debug)] enum HandleKind { /// Used when opening a file or acquiring ownership of a file. Owned(winutil::Handle), /// Used for stdio. 
Borrowed(winutil::HandleRef), } #[derive(Debug, Eq, PartialEq, Hash)] struct Key { volume: u64, index: u64, } impl Eq for Handle {} impl PartialEq for Handle { fn eq(&self, other: &Handle) -> bool { // Need this branch to satisfy `Eq` since `Handle`s with // `key.is_none()` wouldn't otherwise. if self as *const Handle == other as *const Handle { return true; } else if self.key.is_none() || other.key.is_none() { return false; } self.key == other.key } } impl AsRawHandle for crate::Handle { fn as_raw_handle(&self) -> RawHandle { match self.0.kind { HandleKind::Owned(ref h) => h.as_raw_handle(), HandleKind::Borrowed(ref h) => h.as_raw_handle(), } } } impl IntoRawHandle for crate::Handle { fn into_raw_handle(self) -> RawHandle { match self.0.kind { HandleKind::Owned(h) => h.into_raw_handle(), HandleKind::Borrowed(h) => h.as_raw_handle(), } } } impl Hash for Handle { fn hash(&self, state: &mut H) { self.key.hash(state); } } impl Handle { pub fn from_path>(p: P) -> io::Result { let h = winutil::Handle::from_path_any(p)?; let info = winutil::file::information(&h)?; Ok(Handle::from_info(HandleKind::Owned(h), info)) } pub fn from_file(file: File) -> io::Result { let h = winutil::Handle::from_file(file); let info = winutil::file::information(&h)?; Ok(Handle::from_info(HandleKind::Owned(h), info)) } fn from_std_handle(h: winutil::HandleRef) -> io::Result { match winutil::file::information(&h) { Ok(info) => Ok(Handle::from_info(HandleKind::Borrowed(h), info)), // In a Windows console, if there is no pipe attached to a STD // handle, then GetFileInformationByHandle will return an error. // We don't really care. The only thing we care about is that // this handle is never equivalent to any other handle, which is // accomplished by setting key to None. Err(_) => Ok(Handle { kind: HandleKind::Borrowed(h), key: None }), } } fn from_info( kind: HandleKind, info: winutil::file::Information, ) -> Handle { Handle { kind: kind, key: Some(Key { volume: info.volume_serial_number(), index: info.file_index(), }), } } pub fn stdin() -> io::Result { Handle::from_std_handle(winutil::HandleRef::stdin()) } pub fn stdout() -> io::Result { Handle::from_std_handle(winutil::HandleRef::stdout()) } pub fn stderr() -> io::Result { Handle::from_std_handle(winutil::HandleRef::stderr()) } pub fn as_file(&self) -> &File { match self.kind { HandleKind::Owned(ref h) => h.as_file(), HandleKind::Borrowed(ref h) => h.as_file(), } } pub fn as_file_mut(&mut self) -> &mut File { match self.kind { HandleKind::Owned(ref mut h) => h.as_file_mut(), HandleKind::Borrowed(ref mut h) => h.as_file_mut(), } } } vendor/same-file/src/unknown.rs0000664000175000017500000000215414160055207017347 0ustar mwhudsonmwhudsonuse std::fs::File; use std::io; use std::path::Path; static ERROR_MESSAGE: &str = "same-file is not supported on this platform."; // This implementation is to allow same-file to be compiled on // unsupported platforms in case it was incidentally included // as a transitive, unused dependency #[derive(Debug, Hash)] pub struct Handle; impl Eq for Handle {} impl PartialEq for Handle { fn eq(&self, _other: &Handle) -> bool { unreachable!(ERROR_MESSAGE); } } impl Handle { pub fn from_path>(_p: P) -> io::Result { error() } pub fn from_file(_file: File) -> io::Result { error() } pub fn stdin() -> io::Result { error() } pub fn stdout() -> io::Result { error() } pub fn stderr() -> io::Result { error() } pub fn as_file(&self) -> &File { unreachable!(ERROR_MESSAGE); } pub fn as_file_mut(&self) -> &mut File { unreachable!(ERROR_MESSAGE); } } fn 
error() -> io::Result { Err(io::Error::new(io::ErrorKind::Other, ERROR_MESSAGE)) } vendor/same-file/src/unix.rs0000664000175000017500000000567014160055207016641 0ustar mwhudsonmwhudsonuse std::fs::{File, OpenOptions}; use std::hash::{Hash, Hasher}; use std::io; use std::os::unix::fs::MetadataExt; use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd, RawFd}; use std::path::Path; #[derive(Debug)] pub struct Handle { file: Option, // If is_std is true, then we don't drop the corresponding File since it // will close the handle. is_std: bool, dev: u64, ino: u64, } impl Drop for Handle { fn drop(&mut self) { if self.is_std { // unwrap() will not panic. Since we were able to open an // std stream successfully, then `file` is guaranteed to be Some() self.file.take().unwrap().into_raw_fd(); } } } impl Eq for Handle {} impl PartialEq for Handle { fn eq(&self, other: &Handle) -> bool { (self.dev, self.ino) == (other.dev, other.ino) } } impl AsRawFd for crate::Handle { fn as_raw_fd(&self) -> RawFd { // unwrap() will not panic. Since we were able to open the // file successfully, then `file` is guaranteed to be Some() self.0.file.as_ref().take().unwrap().as_raw_fd() } } impl IntoRawFd for crate::Handle { fn into_raw_fd(mut self) -> RawFd { // unwrap() will not panic. Since we were able to open the // file successfully, then `file` is guaranteed to be Some() self.0.file.take().unwrap().into_raw_fd() } } impl Hash for Handle { fn hash(&self, state: &mut H) { self.dev.hash(state); self.ino.hash(state); } } impl Handle { pub fn from_path>(p: P) -> io::Result { Handle::from_file(OpenOptions::new().read(true).open(p)?) } pub fn from_file(file: File) -> io::Result { let md = file.metadata()?; Ok(Handle { file: Some(file), is_std: false, dev: md.dev(), ino: md.ino(), }) } pub fn from_std(file: File) -> io::Result { Handle::from_file(file).map(|mut h| { h.is_std = true; h }) } pub fn stdin() -> io::Result { Handle::from_std(unsafe { File::from_raw_fd(0) }) } pub fn stdout() -> io::Result { Handle::from_std(unsafe { File::from_raw_fd(1) }) } pub fn stderr() -> io::Result { Handle::from_std(unsafe { File::from_raw_fd(2) }) } pub fn as_file(&self) -> &File { // unwrap() will not panic. Since we were able to open the // file successfully, then `file` is guaranteed to be Some() self.file.as_ref().take().unwrap() } pub fn as_file_mut(&mut self) -> &mut File { // unwrap() will not panic. Since we were able to open the // file successfully, then `file` is guaranteed to be Some() self.file.as_mut().take().unwrap() } pub fn dev(&self) -> u64 { self.dev } pub fn ino(&self) -> u64 { self.ino } } vendor/same-file/src/lib.rs0000664000175000017500000003770614160055207016431 0ustar mwhudsonmwhudson/*! This crate provides a safe and simple **cross platform** way to determine whether two file paths refer to the same file or directory. Most uses of this crate should be limited to the top-level [`is_same_file`] function, which takes two file paths and returns true if they refer to the same file or directory: ```rust,no_run # use std::error::Error; use same_file::is_same_file; # fn try_main() -> Result<(), Box> { assert!(is_same_file("/bin/sh", "/usr/bin/sh")?); # Ok(()) # } # # fn main() { # try_main().unwrap(); # } ``` Additionally, this crate provides a [`Handle`] type that permits a more efficient equality check depending on your access pattern. For example, if one wanted to check whether any path in a list of paths corresponded to the process' stdout handle, then one could build a handle once for stdout. 
The equality check for each file in the list then only requires one stat call instead of two. The code might look like this: ```rust,no_run # use std::error::Error; use same_file::Handle; # fn try_main() -> Result<(), Box> { let candidates = &[ "examples/is_same_file.rs", "examples/is_stderr.rs", "examples/stderr", ]; let stdout_handle = Handle::stdout()?; for candidate in candidates { let handle = Handle::from_path(candidate)?; if stdout_handle == handle { println!("{:?} is stdout!", candidate); } else { println!("{:?} is NOT stdout!", candidate); } } # Ok(()) # } # # fn main() { # try_main().unwrap(); # } ``` See [`examples/is_stderr.rs`] for a runnable example and compare the output of: - `cargo run --example is_stderr 2> examples/stderr` and - `cargo run --example is_stderr`. [`is_same_file`]: fn.is_same_file.html [`Handle`]: struct.Handle.html [`examples/is_stderr.rs`]: https://github.com/BurntSushi/same-file/blob/master/examples/is_same_file.rs */ #![allow(bare_trait_objects, unknown_lints)] #![deny(missing_docs)] #[cfg(test)] doc_comment::doctest!("../README.md"); use std::fs::File; use std::io; use std::path::Path; #[cfg(any(target_os = "redox", unix))] use crate::unix as imp; #[cfg(not(any(target_os = "redox", unix, windows)))] use unknown as imp; #[cfg(windows)] use win as imp; #[cfg(any(target_os = "redox", unix))] mod unix; #[cfg(not(any(target_os = "redox", unix, windows)))] mod unknown; #[cfg(windows)] mod win; /// A handle to a file that can be tested for equality with other handles. /// /// If two files are the same, then any two handles of those files will compare /// equal. If two files are not the same, then any two handles of those files /// will compare not-equal. /// /// A handle consumes an open file resource as long as it exists. /// /// Equality is determined by comparing inode numbers on Unix and a combination /// of identifier, volume serial, and file size on Windows. Note that it's /// possible for comparing two handles to produce a false positive on some /// platforms. Namely, two handles can compare equal even if the two handles /// *don't* point to the same file. Check the [source] for specific /// implementation details. /// /// [source]: https://github.com/BurntSushi/same-file/tree/master/src #[derive(Debug, Eq, PartialEq, Hash)] pub struct Handle(imp::Handle); impl Handle { /// Construct a handle from a path. /// /// Note that the underlying [`File`] is opened in read-only mode on all /// platforms. /// /// [`File`]: https://doc.rust-lang.org/std/fs/struct.File.html /// /// # Errors /// This method will return an [`io::Error`] if the path cannot /// be opened, or the file's metadata cannot be obtained. /// The most common reasons for this are: the path does not /// exist, or there were not enough permissions. /// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// /// # Examples /// Check that two paths are not the same file: /// /// ```rust,no_run /// # use std::error::Error; /// use same_file::Handle; /// /// # fn try_main() -> Result<(), Box> { /// let source = Handle::from_path("./source")?; /// let target = Handle::from_path("./target")?; /// assert_ne!(source, target, "The files are the same."); /// # Ok(()) /// # } /// # /// # fn main() { /// # try_main().unwrap(); /// # } /// ``` pub fn from_path>(p: P) -> io::Result { imp::Handle::from_path(p).map(Handle) } /// Construct a handle from a file. /// /// # Errors /// This method will return an [`io::Error`] if the metadata for /// the given [`File`] cannot be obtained. 
/// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// [`File`]: https://doc.rust-lang.org/std/fs/struct.File.html /// /// # Examples /// Check that two files are not in fact the same file: /// /// ```rust,no_run /// # use std::error::Error; /// # use std::fs::File; /// use same_file::Handle; /// /// # fn try_main() -> Result<(), Box> { /// let source = File::open("./source")?; /// let target = File::open("./target")?; /// /// assert_ne!( /// Handle::from_file(source)?, /// Handle::from_file(target)?, /// "The files are the same." /// ); /// # Ok(()) /// # } /// # /// # fn main() { /// # try_main().unwrap(); /// # } /// ``` pub fn from_file(file: File) -> io::Result { imp::Handle::from_file(file).map(Handle) } /// Construct a handle from stdin. /// /// # Errors /// This method will return an [`io::Error`] if stdin cannot /// be opened due to any I/O-related reason. /// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// /// # Examples /// /// ```rust /// # use std::error::Error; /// use same_file::Handle; /// /// # fn try_main() -> Result<(), Box> { /// let stdin = Handle::stdin()?; /// let stdout = Handle::stdout()?; /// let stderr = Handle::stderr()?; /// /// if stdin == stdout { /// println!("stdin == stdout"); /// } /// if stdin == stderr { /// println!("stdin == stderr"); /// } /// if stdout == stderr { /// println!("stdout == stderr"); /// } /// # /// # Ok(()) /// # } /// # /// # fn main() { /// # try_main().unwrap(); /// # } /// ``` /// /// The output differs depending on the platform. /// /// On Linux: /// /// ```text /// $ ./example /// stdin == stdout /// stdin == stderr /// stdout == stderr /// $ ./example > result /// $ cat result /// stdin == stderr /// $ ./example > result 2>&1 /// $ cat result /// stdout == stderr /// ``` /// /// Windows: /// /// ```text /// > example /// > example > result 2>&1 /// > type result /// stdout == stderr /// ``` pub fn stdin() -> io::Result { imp::Handle::stdin().map(Handle) } /// Construct a handle from stdout. /// /// # Errors /// This method will return an [`io::Error`] if stdout cannot /// be opened due to any I/O-related reason. /// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// /// # Examples /// See the example for [`stdin()`]. /// /// [`stdin()`]: #method.stdin pub fn stdout() -> io::Result { imp::Handle::stdout().map(Handle) } /// Construct a handle from stderr. /// /// # Errors /// This method will return an [`io::Error`] if stderr cannot /// be opened due to any I/O-related reason. /// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// /// # Examples /// See the example for [`stdin()`]. /// /// [`stdin()`]: #method.stdin pub fn stderr() -> io::Result { imp::Handle::stderr().map(Handle) } /// Return a reference to the underlying file. 
/// /// # Examples /// Ensure that the target file is not the same as the source one, /// and copy the data to it: /// /// ```rust,no_run /// # use std::error::Error; /// use std::io::prelude::*; /// use std::io::Write; /// use std::fs::File; /// use same_file::Handle; /// /// # fn try_main() -> Result<(), Box> { /// let source = File::open("source")?; /// let target = File::create("target")?; /// /// let source_handle = Handle::from_file(source)?; /// let mut target_handle = Handle::from_file(target)?; /// assert_ne!(source_handle, target_handle, "The files are the same."); /// /// let mut source = source_handle.as_file(); /// let target = target_handle.as_file_mut(); /// /// let mut buffer = Vec::new(); /// // data copy is simplified for the purposes of the example /// source.read_to_end(&mut buffer)?; /// target.write_all(&buffer)?; /// # /// # Ok(()) /// # } /// # /// # fn main() { /// # try_main().unwrap(); /// # } /// ``` pub fn as_file(&self) -> &File { self.0.as_file() } /// Return a mutable reference to the underlying file. /// /// # Examples /// See the example for [`as_file()`]. /// /// [`as_file()`]: #method.as_file pub fn as_file_mut(&mut self) -> &mut File { self.0.as_file_mut() } /// Return the underlying device number of this handle. /// /// Note that this only works on unix platforms. #[cfg(any(target_os = "redox", unix))] pub fn dev(&self) -> u64 { self.0.dev() } /// Return the underlying inode number of this handle. /// /// Note that this only works on unix platforms. #[cfg(any(target_os = "redox", unix))] pub fn ino(&self) -> u64 { self.0.ino() } } /// Returns true if the two file paths may correspond to the same file. /// /// Note that it's possible for this to produce a false positive on some /// platforms. Namely, this can return true even if the two file paths *don't* /// resolve to the same file. /// # Errors /// This function will return an [`io::Error`] if any of the two paths cannot /// be opened. The most common reasons for this are: the path does not exist, /// or there were not enough permissions. /// /// [`io::Error`]: https://doc.rust-lang.org/std/io/struct.Error.html /// /// # Example /// /// ```rust,no_run /// use same_file::is_same_file; /// /// assert!(is_same_file("./foo", "././foo").unwrap_or(false)); /// ``` pub fn is_same_file(path1: P, path2: Q) -> io::Result where P: AsRef, Q: AsRef, { Ok(Handle::from_path(path1)? == Handle::from_path(path2)?) } #[cfg(test)] mod tests { use std::env; use std::error; use std::fs::{self, File}; use std::io; use std::path::{Path, PathBuf}; use std::result; use super::is_same_file; type Result = result::Result>; /// Create an error from a format!-like syntax. macro_rules! err { ($($tt:tt)*) => { Box::::from(format!($($tt)*)) } } /// A simple wrapper for creating a temporary directory that is /// automatically deleted when it's dropped. /// /// We use this in lieu of tempfile because tempfile brings in too many /// dependencies. #[derive(Debug)] struct TempDir(PathBuf); impl Drop for TempDir { fn drop(&mut self) { fs::remove_dir_all(&self.0).unwrap(); } } impl TempDir { /// Create a new empty temporary directory under the system's /// configured temporary directory. 
fn new() -> Result { #![allow(deprecated)] use std::sync::atomic::{ AtomicUsize, Ordering, ATOMIC_USIZE_INIT, }; static TRIES: usize = 100; static COUNTER: AtomicUsize = ATOMIC_USIZE_INIT; let tmpdir = env::temp_dir(); for _ in 0..TRIES { let count = COUNTER.fetch_add(1, Ordering::SeqCst); let path = tmpdir.join("rust-walkdir").join(count.to_string()); if path.is_dir() { continue; } fs::create_dir_all(&path).map_err(|e| { err!("failed to create {}: {}", path.display(), e) })?; return Ok(TempDir(path)); } Err(err!("failed to create temp dir after {} tries", TRIES)) } /// Return the underlying path to this temporary directory. fn path(&self) -> &Path { &self.0 } } fn tmpdir() -> TempDir { TempDir::new().unwrap() } #[cfg(unix)] pub fn soft_link_dir, Q: AsRef>( src: P, dst: Q, ) -> io::Result<()> { use std::os::unix::fs::symlink; symlink(src, dst) } #[cfg(unix)] pub fn soft_link_file, Q: AsRef>( src: P, dst: Q, ) -> io::Result<()> { soft_link_dir(src, dst) } #[cfg(windows)] pub fn soft_link_dir, Q: AsRef>( src: P, dst: Q, ) -> io::Result<()> { use std::os::windows::fs::symlink_dir; symlink_dir(src, dst) } #[cfg(windows)] pub fn soft_link_file, Q: AsRef>( src: P, dst: Q, ) -> io::Result<()> { use std::os::windows::fs::symlink_file; symlink_file(src, dst) } // These tests are rather uninteresting. The really interesting tests // would stress the edge cases. On Unix, this might be comparing two files // on different mount points with the same inode number. On Windows, this // might be comparing two files whose file indices are the same on file // systems where such things aren't guaranteed to be unique. // // Alas, I don't know how to create those environmental conditions. ---AG #[test] fn same_file_trivial() { let tdir = tmpdir(); let dir = tdir.path(); File::create(dir.join("a")).unwrap(); assert!(is_same_file(dir.join("a"), dir.join("a")).unwrap()); } #[test] fn same_dir_trivial() { let tdir = tmpdir(); let dir = tdir.path(); fs::create_dir(dir.join("a")).unwrap(); assert!(is_same_file(dir.join("a"), dir.join("a")).unwrap()); } #[test] fn not_same_file_trivial() { let tdir = tmpdir(); let dir = tdir.path(); File::create(dir.join("a")).unwrap(); File::create(dir.join("b")).unwrap(); assert!(!is_same_file(dir.join("a"), dir.join("b")).unwrap()); } #[test] fn not_same_dir_trivial() { let tdir = tmpdir(); let dir = tdir.path(); fs::create_dir(dir.join("a")).unwrap(); fs::create_dir(dir.join("b")).unwrap(); assert!(!is_same_file(dir.join("a"), dir.join("b")).unwrap()); } #[test] fn same_file_hard() { let tdir = tmpdir(); let dir = tdir.path(); File::create(dir.join("a")).unwrap(); fs::hard_link(dir.join("a"), dir.join("alink")).unwrap(); assert!(is_same_file(dir.join("a"), dir.join("alink")).unwrap()); } #[test] fn same_file_soft() { let tdir = tmpdir(); let dir = tdir.path(); File::create(dir.join("a")).unwrap(); soft_link_file(dir.join("a"), dir.join("alink")).unwrap(); assert!(is_same_file(dir.join("a"), dir.join("alink")).unwrap()); } #[test] fn same_dir_soft() { let tdir = tmpdir(); let dir = tdir.path(); fs::create_dir(dir.join("a")).unwrap(); soft_link_dir(dir.join("a"), dir.join("alink")).unwrap(); assert!(is_same_file(dir.join("a"), dir.join("alink")).unwrap()); } #[test] fn test_send() { fn assert_send() {} assert_send::(); } #[test] fn test_sync() { fn assert_sync() {} assert_sync::(); } } vendor/same-file/examples/0000775000175000017500000000000014160055207016327 5ustar mwhudsonmwhudsonvendor/same-file/examples/is_stderr.rs0000664000175000017500000000134314160055207020674 0ustar 
mwhudsonmwhudsonuse std::io; use std::process; use same_file::Handle; fn main() { if let Err(err) = run() { println!("{}", err); process::exit(1); } } fn run() -> io::Result<()> { // Run with `cargo run --example is_stderr 2> examples/stderr` to see // interesting output. let candidates = &[ "examples/is_same_file.rs", "examples/is_stderr.rs", "examples/stderr", ]; let stderr_handle = Handle::stderr()?; for candidate in candidates { let handle = Handle::from_path(candidate)?; if stderr_handle == handle { println!("{:?} is stderr!", candidate); } else { println!("{:?} is NOT stderr!", candidate); } } Ok(()) } vendor/same-file/examples/is_same_file.rs0000664000175000017500000000027714160055207021322 0ustar mwhudsonmwhudsonuse same_file::is_same_file; use std::io; fn try_main() -> Result<(), io::Error> { assert!(is_same_file("/bin/sh", "/usr/bin/sh")?); Ok(()) } fn main() { try_main().unwrap(); } vendor/same-file/rustfmt.toml0000664000175000017500000000005414160055207017111 0ustar mwhudsonmwhudsonmax_width = 79 use_small_heuristics = "max" vendor/same-file/LICENSE-MIT0000664000175000017500000000207114160055207016145 0ustar mwhudsonmwhudsonThe MIT License (MIT) Copyright (c) 2017 Andrew Gallant Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/same-file/COPYING0000664000175000017500000000017614160055207015550 0ustar mwhudsonmwhudsonThis project is dual-licensed under the Unlicense and MIT licenses. You may use this code under the terms of either license. vendor/same-file/README.md0000664000175000017500000000251114160055207015767 0ustar mwhudsonmwhudsonsame-file ========= A safe and cross platform crate to determine whether two files or directories are the same. [![Build status](https://github.com/BurntSushi/same-file/workflows/ci/badge.svg)](https://github.com/BurntSushi/same-file/actions) [![](http://meritbadge.herokuapp.com/same-file)](https://crates.io/crates/same-file) Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org). ### Documentation https://docs.rs/same-file ### Usage Add this to your `Cargo.toml`: ```toml [dependencies] same-file = "1" ``` ### Example The simplest use of this crate is to use the `is_same_file` function, which takes two file paths and returns true if and only if they refer to the same file: ```rust,no_run use same_file::is_same_file; fn main() { assert!(is_same_file("/bin/sh", "/usr/bin/sh").unwrap()); } ``` ### Minimum Rust version policy This crate's minimum supported `rustc` version is `1.34.0`. 
The current policy is that the minimum Rust version required to use this crate can be increased in minor version updates. For example, if `crate 1.0` requires Rust 1.20.0, then `crate 1.0.z` for all values of `z` will also require Rust 1.20.0 or newer. However, `crate 1.y` for `y > 0` may require a newer minimum version of Rust. In general, this crate will be conservative with respect to the minimum supported version of Rust. vendor/same-file/Cargo.lock0000664000175000017500000000376314160055207016427 0ustar mwhudsonmwhudson# This file is automatically @generated by Cargo. # It is not intended for manual editing. [[package]] name = "doc-comment" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" [[package]] name = "same-file" version = "1.0.6" dependencies = [ "doc-comment 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)", "winapi-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)", ] [[package]] name = "winapi" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" dependencies = [ "winapi-i686-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)", "winapi-x86_64-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" [[package]] name = "winapi-util" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" dependencies = [ "winapi 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)", ] [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" [metadata] "checksum doc-comment 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "923dea538cea0aa3025e8685b20d6ee21ef99c4f77e954a30febbaac5ec73a97" "checksum winapi 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "890b38836c01d72fdb636d15c9cfc52ec7fd783b330abc93cd1686f4308dfccc" "checksum winapi-i686-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "ec6667f60c23eca65c561e63a13d81b44234c2e38a6b6c959025ee907ec614cc" "checksum winapi-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "afc5508759c5bf4285e61feb862b6083c8480aec864fa17a81fdec6f69b461ab" "checksum winapi-x86_64-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "98f12c52b2630cd05d2c3ffd8e008f7f48252c042b4871c72aed9dc733b96668" vendor/git2/0000775000175000017500000000000014160055207013514 5ustar mwhudsonmwhudsonvendor/git2/.cargo-checksum.json0000664000175000017500000000013114160055207017353 0ustar mwhudsonmwhudson{"files":{},"package":"2a8057932925d3a9d9e4434ea016570d37420ddb1ceed45a174d577f24ed6700"}vendor/git2/LICENSE-APACHE0000664000175000017500000002513714160055207015450 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/git2/Cargo.toml0000664000175000017500000000351314160055207015446 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" name = "git2" version = "0.13.23" authors = ["Josh Triplett ", "Alex Crichton "] description = "Bindings to libgit2 for interoperating with git repositories. 
This library is\nboth threadsafe and memory safe and allows both reading and writing git\nrepositories.\n" documentation = "https://docs.rs/git2" readme = "README.md" keywords = ["git"] categories = ["api-bindings"] license = "MIT/Apache-2.0" repository = "https://github.com/rust-lang/git2-rs" [dependencies.bitflags] version = "1.1.0" [dependencies.libc] version = "0.2" [dependencies.libgit2-sys] version = "0.12.24" [dependencies.log] version = "0.4.8" [dependencies.url] version = "2.0" [dev-dependencies.paste] version = "1" [dev-dependencies.structopt] version = "0.3" [dev-dependencies.tempfile] version = "3.1.0" [dev-dependencies.thread-id] version = "3.3.0" [dev-dependencies.time] version = "0.1.39" [features] default = ["ssh", "https", "ssh_key_from_memory"] https = ["libgit2-sys/https", "openssl-sys", "openssl-probe"] ssh = ["libgit2-sys/ssh"] ssh_key_from_memory = ["libgit2-sys/ssh_key_from_memory"] unstable = [] [target."cfg(all(unix, not(target_os = \"macos\")))".dependencies.openssl-probe] version = "0.1" optional = true [target."cfg(all(unix, not(target_os = \"macos\")))".dependencies.openssl-sys] version = "0.9.0" optional = true vendor/git2/debian/0000775000175000017500000000000014160055207014736 5ustar mwhudsonmwhudsonvendor/git2/debian/patches/0000775000175000017500000000000014160055207016365 5ustar mwhudsonmwhudsonvendor/git2/debian/patches/disable-vendor.patch0000664000175000017500000000064714160055207022313 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -56,8 +56,6 @@ ssh = ["libgit2-sys/ssh"] ssh_key_from_memory = ["libgit2-sys/ssh_key_from_memory"] unstable = [] -vendored-libgit2 = ["libgit2-sys/vendored"] -vendored-openssl = ["openssl-sys/vendored", "libgit2-sys/vendored-openssl"] zlib-ng-compat = ["libgit2-sys/zlib-ng-compat"] [target."cfg(all(unix, not(target_os = \"macos\")))".dependencies.openssl-probe] version = "0.1" vendor/git2/debian/patches/remove-zlib-ng-compat.patch0000664000175000017500000000047614160055207023533 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -56,7 +56,6 @@ ssh = ["libgit2-sys/ssh"] ssh_key_from_memory = ["libgit2-sys/ssh_key_from_memory"] unstable = [] -zlib-ng-compat = ["libgit2-sys/zlib-ng-compat"] [target."cfg(all(unix, not(target_os = \"macos\")))".dependencies.openssl-probe] version = "0.1" optional = true vendor/git2/debian/patches/series0000664000175000017500000000013114160055207017575 0ustar mwhudsonmwhudsondisable-vendor.patch remove-zlib-ng-compat.patch skip-credential_helper5-if-no-git.patch vendor/git2/debian/patches/skip-credential_helper5-if-no-git.patch0000664000175000017500000000061214160055207025676 0ustar mwhudsonmwhudsonSkip the "credential_helper5" test if git is not installled. --- a/src/cred.rs +++ b/src/cred.rs @@ -563,6 +563,9 @@ #[test] fn credential_helper5() { + if !Path::new("/usr/bin/git").exists() { + return; + } //this test does not work if git is not installed if cfg!(windows) { return; } // shell scripts don't work on Windows vendor/git2/src/0000775000175000017500000000000014160055207014303 5ustar mwhudsonmwhudsonvendor/git2/src/branch.rs0000664000175000017500000001430414160055207016110 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::marker; use std::ptr; use std::str; use crate::util::Binding; use crate::{raw, BranchType, Error, Reference, References}; /// A structure to represent a git [branch][1] /// /// A branch is currently just a wrapper to an underlying `Reference`. The /// reference can be accessed through the `get` and `into_reference` methods. 
/// /// [1]: http://git-scm.com/book/en/Git-Branching-What-a-Branch-Is pub struct Branch<'repo> { inner: Reference<'repo>, } /// An iterator over the branches inside of a repository. pub struct Branches<'repo> { raw: *mut raw::git_branch_iterator, _marker: marker::PhantomData<References<'repo>>, } impl<'repo> Branch<'repo> { /// Creates Branch type from a Reference pub fn wrap(reference: Reference<'_>) -> Branch<'_> { Branch { inner: reference } } /// Ensure the branch name is well-formed. pub fn name_is_valid(name: &str) -> Result<bool, Error> { crate::init(); let name = CString::new(name)?; let mut valid: libc::c_int = 0; unsafe { try_call!(raw::git_branch_name_is_valid(&mut valid, name.as_ptr())); } Ok(valid == 1) } /// Gain access to the reference that is this branch pub fn get(&self) -> &Reference<'repo> { &self.inner } /// Gain mutable access to the reference that is this branch pub fn get_mut(&mut self) -> &mut Reference<'repo> { &mut self.inner } /// Take ownership of the underlying reference. pub fn into_reference(self) -> Reference<'repo> { self.inner } /// Delete an existing branch reference. pub fn delete(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_branch_delete(self.get().raw())); } Ok(()) } /// Determine if the current local branch is pointed at by HEAD. pub fn is_head(&self) -> bool { unsafe { raw::git_branch_is_head(&*self.get().raw()) == 1 } } /// Move/rename an existing local branch reference. pub fn rename(&mut self, new_branch_name: &str, force: bool) -> Result<Branch<'repo>, Error> { let mut ret = ptr::null_mut(); let new_branch_name = CString::new(new_branch_name)?; unsafe { try_call!(raw::git_branch_move( &mut ret, self.get().raw(), new_branch_name, force )); Ok(Branch::wrap(Binding::from_raw(ret))) } } /// Return the name of the given local or remote branch. /// /// May return `Ok(None)` if the name is not valid utf-8. pub fn name(&self) -> Result<Option<&str>, Error> { self.name_bytes().map(|s| str::from_utf8(s).ok()) } /// Return the name of the given local or remote branch. pub fn name_bytes(&self) -> Result<&[u8], Error> { let mut ret = ptr::null(); unsafe { try_call!(raw::git_branch_name(&mut ret, &*self.get().raw())); Ok(crate::opt_bytes(self, ret).unwrap()) } } /// Return the reference supporting the remote tracking branch, given a /// local branch reference. pub fn upstream(&self) -> Result<Branch<'repo>, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_branch_upstream(&mut ret, &*self.get().raw())); Ok(Branch::wrap(Binding::from_raw(ret))) } } /// Set the upstream configuration for a given local branch. /// /// If `None` is specified, then the upstream branch is unset. The name /// provided is the name of the branch to set as upstream. pub fn set_upstream(&mut self, upstream_name: Option<&str>) -> Result<(), Error> { let upstream_name = crate::opt_cstr(upstream_name)?; unsafe { try_call!(raw::git_branch_set_upstream( self.get().raw(), upstream_name )); Ok(()) } } } impl<'repo> Branches<'repo> { /// Creates a new iterator from the raw pointer given. /// /// This function is unsafe as it is not guaranteed that `raw` is a valid /// pointer. 
pub unsafe fn from_raw(raw: *mut raw::git_branch_iterator) -> Branches<'repo> { Branches { raw, _marker: marker::PhantomData, } } } impl<'repo> Iterator for Branches<'repo> { type Item = Result<(Branch<'repo>, BranchType), Error>; fn next(&mut self) -> Option, BranchType), Error>> { let mut ret = ptr::null_mut(); let mut typ = raw::GIT_BRANCH_LOCAL; unsafe { try_call_iter!(raw::git_branch_next(&mut ret, &mut typ, self.raw)); let typ = match typ { raw::GIT_BRANCH_LOCAL => BranchType::Local, raw::GIT_BRANCH_REMOTE => BranchType::Remote, n => panic!("unexected branch type: {}", n), }; Some(Ok((Branch::wrap(Binding::from_raw(ret)), typ))) } } } impl<'repo> Drop for Branches<'repo> { fn drop(&mut self) { unsafe { raw::git_branch_iterator_free(self.raw) } } } #[cfg(test)] mod tests { use crate::{Branch, BranchType}; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let target = head.target().unwrap(); let commit = repo.find_commit(target).unwrap(); let mut b1 = repo.branch("foo", &commit, false).unwrap(); assert!(!b1.is_head()); repo.branch("foo2", &commit, false).unwrap(); assert_eq!(repo.branches(None).unwrap().count(), 3); repo.find_branch("foo", BranchType::Local).unwrap(); let mut b1 = b1.rename("bar", false).unwrap(); assert_eq!(b1.name().unwrap(), Some("bar")); assert!(b1.upstream().is_err()); b1.set_upstream(Some("main")).unwrap(); b1.upstream().unwrap(); b1.set_upstream(None).unwrap(); b1.delete().unwrap(); } #[test] fn name_is_valid() { assert!(Branch::name_is_valid("foo").unwrap()); assert!(!Branch::name_is_valid("").unwrap()); assert!(!Branch::name_is_valid("with spaces").unwrap()); assert!(!Branch::name_is_valid("~tilde").unwrap()); } } vendor/git2/src/blob.rs0000664000175000017500000001345614160055207015600 0ustar mwhudsonmwhudsonuse std::io; use std::marker; use std::mem; use std::slice; use crate::util::Binding; use crate::{raw, Error, Object, Oid}; /// A structure to represent a git [blob][1] /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-Objects pub struct Blob<'repo> { raw: *mut raw::git_blob, _marker: marker::PhantomData>, } impl<'repo> Blob<'repo> { /// Get the id (SHA1) of a repository blob pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_blob_id(&*self.raw)) } } /// Determine if the blob content is most certainly binary or not. pub fn is_binary(&self) -> bool { unsafe { raw::git_blob_is_binary(&*self.raw) == 1 } } /// Get the content of this blob. pub fn content(&self) -> &[u8] { unsafe { let data = raw::git_blob_rawcontent(&*self.raw) as *const u8; let len = raw::git_blob_rawsize(&*self.raw) as usize; slice::from_raw_parts(data, len) } } /// Get the size in bytes of the contents of this blob. 
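///
/// # Example
///
/// A minimal sketch using an in-memory blob; the repository path and blob
/// contents are placeholders:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open(".").unwrap();
/// let id = repo.blob(&[1, 2, 3]).unwrap();
/// let blob = repo.find_blob(id).unwrap();
/// assert_eq!(blob.size(), 3);
/// assert_eq!(blob.content(), [1, 2, 3]);
/// ```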
pub fn size(&self) -> usize { unsafe { raw::git_blob_rawsize(&*self.raw) as usize } } /// Casts this Blob to be usable as an `Object` pub fn as_object(&self) -> &Object<'repo> { unsafe { &*(self as *const _ as *const Object<'repo>) } } /// Consumes Blob to be returned as an `Object` pub fn into_object(self) -> Object<'repo> { assert_eq!(mem::size_of_val(&self), mem::size_of::>()); unsafe { mem::transmute(self) } } } impl<'repo> Binding for Blob<'repo> { type Raw = *mut raw::git_blob; unsafe fn from_raw(raw: *mut raw::git_blob) -> Blob<'repo> { Blob { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_blob { self.raw } } impl<'repo> std::fmt::Debug for Blob<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("Blob").field("id", &self.id()).finish() } } impl<'repo> Clone for Blob<'repo> { fn clone(&self) -> Self { self.as_object().clone().into_blob().ok().unwrap() } } impl<'repo> Drop for Blob<'repo> { fn drop(&mut self) { unsafe { raw::git_blob_free(self.raw) } } } /// A structure to represent a git writestream for blobs pub struct BlobWriter<'repo> { raw: *mut raw::git_writestream, need_cleanup: bool, _marker: marker::PhantomData>, } impl<'repo> BlobWriter<'repo> { /// Finalize blob writing stream and write the blob to the object db pub fn commit(mut self) -> Result { // After commit we already doesn't need cleanup on drop self.need_cleanup = false; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_blob_create_fromstream_commit(&mut raw, self.raw)); Ok(Binding::from_raw(&raw as *const _)) } } } impl<'repo> Binding for BlobWriter<'repo> { type Raw = *mut raw::git_writestream; unsafe fn from_raw(raw: *mut raw::git_writestream) -> BlobWriter<'repo> { BlobWriter { raw, need_cleanup: true, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_writestream { self.raw } } impl<'repo> Drop for BlobWriter<'repo> { fn drop(&mut self) { // We need cleanup in case the stream has not been committed if self.need_cleanup { unsafe { if let Some(f) = (*self.raw).free { f(self.raw) } } } } } impl<'repo> io::Write for BlobWriter<'repo> { fn write(&mut self, buf: &[u8]) -> io::Result { unsafe { if let Some(f) = (*self.raw).write { let res = f(self.raw, buf.as_ptr() as *const _, buf.len()); if res < 0 { Err(io::Error::new(io::ErrorKind::Other, "Write error")) } else { Ok(buf.len()) } } else { Err(io::Error::new(io::ErrorKind::Other, "no write callback")) } } } fn flush(&mut self) -> io::Result<()> { Ok(()) } } #[cfg(test)] mod tests { use crate::Repository; use std::fs::File; use std::io::prelude::*; use std::path::Path; use tempfile::TempDir; #[test] fn buffer() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let id = repo.blob(&[5, 4, 6]).unwrap(); let blob = repo.find_blob(id).unwrap(); assert_eq!(blob.id(), id); assert_eq!(blob.size(), 3); assert_eq!(blob.content(), [5, 4, 6]); assert!(blob.is_binary()); repo.find_object(id, None).unwrap().as_blob().unwrap(); repo.find_object(id, None) .unwrap() .into_blob() .ok() .unwrap(); } #[test] fn path() { let td = TempDir::new().unwrap(); let path = td.path().join("foo"); File::create(&path).unwrap().write_all(&[7, 8, 9]).unwrap(); let repo = Repository::init(td.path()).unwrap(); let id = repo.blob_path(&path).unwrap(); let blob = repo.find_blob(id).unwrap(); assert_eq!(blob.content(), [7, 8, 9]); blob.into_object(); } #[test] fn stream() { let td = TempDir::new().unwrap(); let repo = 
Repository::init(td.path()).unwrap(); let mut ws = repo.blob_writer(Some(Path::new("foo"))).unwrap(); let wl = ws.write(&[10, 11, 12]).unwrap(); assert_eq!(wl, 3); let id = ws.commit().unwrap(); let blob = repo.find_blob(id).unwrap(); assert_eq!(blob.content(), [10, 11, 12]); blob.into_object(); } } vendor/git2/src/proxy_options.rs0000664000175000017500000000311514160055207017605 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::marker; use std::ptr; use crate::raw; use crate::util::Binding; /// Options which can be specified to various fetch operations. #[derive(Default)] pub struct ProxyOptions<'a> { url: Option, proxy_kind: raw::git_proxy_t, _marker: marker::PhantomData<&'a i32>, } impl<'a> ProxyOptions<'a> { /// Creates a new set of proxy options ready to be configured. pub fn new() -> ProxyOptions<'a> { Default::default() } /// Try to auto-detect the proxy from the git configuration. /// /// Note that this will override `url` specified before. pub fn auto(&mut self) -> &mut Self { self.proxy_kind = raw::GIT_PROXY_AUTO; self } /// Specify the exact URL of the proxy to use. /// /// Note that this will override `auto` specified before. pub fn url(&mut self, url: &str) -> &mut Self { self.proxy_kind = raw::GIT_PROXY_SPECIFIED; self.url = Some(CString::new(url).unwrap()); self } } impl<'a> Binding for ProxyOptions<'a> { type Raw = raw::git_proxy_options; unsafe fn from_raw(_raw: raw::git_proxy_options) -> ProxyOptions<'a> { panic!("can't create proxy from raw options") } fn raw(&self) -> raw::git_proxy_options { raw::git_proxy_options { version: raw::GIT_PROXY_OPTIONS_VERSION, kind: self.proxy_kind, url: self.url.as_ref().map(|s| s.as_ptr()).unwrap_or(ptr::null()), credentials: None, certificate_check: None, payload: ptr::null_mut(), } } } vendor/git2/src/diff.rs0000664000175000017500000017172414160055207015575 0ustar mwhudsonmwhudsonuse libc::{c_char, c_int, c_void, size_t}; use std::ffi::CString; use std::marker; use std::mem; use std::ops::Range; use std::path::Path; use std::ptr; use std::slice; use crate::util::{self, Binding}; use crate::{panic, raw, Buf, Delta, DiffFormat, Error, FileMode, Oid, Repository}; use crate::{DiffFlags, DiffStatsFormat, IntoCString}; /// The diff object that contains all individual file deltas. /// /// This is an opaque structure which will be allocated by one of the diff /// generator functions on the `Repository` structure (e.g. `diff_tree_to_tree` /// or other `diff_*` functions). pub struct Diff<'repo> { raw: *mut raw::git_diff, _marker: marker::PhantomData<&'repo Repository>, } unsafe impl<'repo> Send for Diff<'repo> {} /// Description of changes to one entry. pub struct DiffDelta<'a> { raw: *mut raw::git_diff_delta, _marker: marker::PhantomData<&'a raw::git_diff_delta>, } /// Description of one side of a delta. /// /// Although this is called a "file" it could represent a file, a symbolic /// link, a submodule commit id, or even a tree (although that only happens if /// you are tracking type changes or ignored/untracked directories). pub struct DiffFile<'a> { raw: *const raw::git_diff_file, _marker: marker::PhantomData<&'a raw::git_diff_file>, } /// Structure describing options about how the diff should be executed. 
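///
/// # Example
///
/// A minimal sketch of configuring options and diffing a tree against the
/// working directory; the repository path is a placeholder:
///
/// ```no_run
/// use git2::{DiffOptions, Repository};
///
/// let repo = Repository::open(".").unwrap();
/// let mut opts = DiffOptions::new();
/// opts.include_untracked(true).context_lines(5);
/// let diff = repo.diff_tree_to_workdir(None, Some(&mut opts)).unwrap();
/// println!("{} delta(s)", diff.deltas().len());
/// ```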
pub struct DiffOptions { pathspec: Vec, pathspec_ptrs: Vec<*const c_char>, old_prefix: Option, new_prefix: Option, raw: raw::git_diff_options, } /// Control behavior of rename and copy detection pub struct DiffFindOptions { raw: raw::git_diff_find_options, } /// Control behavior of formatting emails pub struct DiffFormatEmailOptions { raw: raw::git_diff_format_email_options, } /// Control behavior of formatting emails pub struct DiffPatchidOptions { raw: raw::git_diff_patchid_options, } /// An iterator over the diffs in a delta pub struct Deltas<'diff> { range: Range, diff: &'diff Diff<'diff>, } /// Structure describing a line (or data span) of a diff. pub struct DiffLine<'a> { raw: *const raw::git_diff_line, _marker: marker::PhantomData<&'a raw::git_diff_line>, } /// Structure describing a hunk of a diff. pub struct DiffHunk<'a> { raw: *const raw::git_diff_hunk, _marker: marker::PhantomData<&'a raw::git_diff_hunk>, } /// Structure describing a hunk of a diff. pub struct DiffStats { raw: *mut raw::git_diff_stats, } /// Structure describing the binary contents of a diff. pub struct DiffBinary<'a> { raw: *const raw::git_diff_binary, _marker: marker::PhantomData<&'a raw::git_diff_binary>, } /// The contents of one of the files in a binary diff. pub struct DiffBinaryFile<'a> { raw: *const raw::git_diff_binary_file, _marker: marker::PhantomData<&'a raw::git_diff_binary_file>, } /// When producing a binary diff, the binary data returned will be /// either the deflated full ("literal") contents of the file, or /// the deflated binary delta between the two sides (whichever is /// smaller). #[derive(Copy, Clone, Debug)] pub enum DiffBinaryKind { /// There is no binary delta None, /// The binary data is the literal contents of the file Literal, /// The binary data is the delta from one side to the other Delta, } type PrintCb<'a> = dyn FnMut(DiffDelta<'_>, Option>, DiffLine<'_>) -> bool + 'a; pub type FileCb<'a> = dyn FnMut(DiffDelta<'_>, f32) -> bool + 'a; pub type BinaryCb<'a> = dyn FnMut(DiffDelta<'_>, DiffBinary<'_>) -> bool + 'a; pub type HunkCb<'a> = dyn FnMut(DiffDelta<'_>, DiffHunk<'_>) -> bool + 'a; pub type LineCb<'a> = dyn FnMut(DiffDelta<'_>, Option>, DiffLine<'_>) -> bool + 'a; pub struct DiffCallbacks<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h> { pub file: Option<&'a mut FileCb<'b>>, pub binary: Option<&'c mut BinaryCb<'d>>, pub hunk: Option<&'e mut HunkCb<'f>>, pub line: Option<&'g mut LineCb<'h>>, } impl<'repo> Diff<'repo> { /// Merge one diff into another. /// /// This merges items from the "from" list into the "self" list. The /// resulting diff will have all items that appear in either list. /// If an item appears in both lists, then it will be "merged" to appear /// as if the old version was from the "onto" list and the new version /// is from the "from" list (with the exception that if the item has a /// pending DELETE in the middle, then it will show as deleted). pub fn merge(&mut self, from: &Diff<'repo>) -> Result<(), Error> { unsafe { try_call!(raw::git_diff_merge(self.raw, &*from.raw)); } Ok(()) } /// Returns an iterator over the deltas in this diff. pub fn deltas(&self) -> Deltas<'_> { let num_deltas = unsafe { raw::git_diff_num_deltas(&*self.raw) }; Deltas { range: 0..(num_deltas as usize), diff: self, } } /// Return the diff delta for an entry in the diff list. pub fn get_delta(&self, i: usize) -> Option> { unsafe { let ptr = raw::git_diff_get_delta(&*self.raw, i as size_t); Binding::from_raw_opt(ptr as *mut _) } } /// Check if deltas are sorted case sensitively or insensitively. 
pub fn is_sorted_icase(&self) -> bool { unsafe { raw::git_diff_is_sorted_icase(&*self.raw) == 1 } } /// Iterate over a diff generating formatted text output. /// /// Returning `false` from the callback will terminate the iteration and /// return an error from this function. pub fn print(&self, format: DiffFormat, mut cb: F) -> Result<(), Error> where F: FnMut(DiffDelta<'_>, Option>, DiffLine<'_>) -> bool, { let mut cb: &mut PrintCb<'_> = &mut cb; let ptr = &mut cb as *mut _; let print: raw::git_diff_line_cb = Some(print_cb); unsafe { try_call!(raw::git_diff_print(self.raw, format, print, ptr as *mut _)); Ok(()) } } /// Loop over all deltas in a diff issuing callbacks. /// /// Returning `false` from any callback will terminate the iteration and /// return an error from this function. pub fn foreach( &self, file_cb: &mut FileCb<'_>, binary_cb: Option<&mut BinaryCb<'_>>, hunk_cb: Option<&mut HunkCb<'_>>, line_cb: Option<&mut LineCb<'_>>, ) -> Result<(), Error> { let mut cbs = DiffCallbacks { file: Some(file_cb), binary: binary_cb, hunk: hunk_cb, line: line_cb, }; let ptr = &mut cbs as *mut _; unsafe { let binary_cb_c: raw::git_diff_binary_cb = if cbs.binary.is_some() { Some(binary_cb_c) } else { None }; let hunk_cb_c: raw::git_diff_hunk_cb = if cbs.hunk.is_some() { Some(hunk_cb_c) } else { None }; let line_cb_c: raw::git_diff_line_cb = if cbs.line.is_some() { Some(line_cb_c) } else { None }; let file_cb: raw::git_diff_file_cb = Some(file_cb_c); try_call!(raw::git_diff_foreach( self.raw, file_cb, binary_cb_c, hunk_cb_c, line_cb_c, ptr as *mut _ )); Ok(()) } } /// Accumulate diff statistics for all patches. pub fn stats(&self) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_get_stats(&mut ret, self.raw)); Ok(Binding::from_raw(ret)) } } /// Transform a diff marking file renames, copies, etc. /// /// This modifies a diff in place, replacing old entries that look like /// renames or copies with new entries reflecting those changes. This also /// will, if requested, break modified files into add/remove pairs if the /// amount of change is above a threshold. pub fn find_similar(&mut self, opts: Option<&mut DiffFindOptions>) -> Result<(), Error> { let opts = opts.map(|opts| &opts.raw); unsafe { try_call!(raw::git_diff_find_similar(self.raw, opts)); } Ok(()) } /// Create an e-mail ready patch from a diff. /// /// Matches the format created by `git format-patch` pub fn format_email( &mut self, patch_no: usize, total_patches: usize, commit: &crate::Commit<'repo>, opts: Option<&mut DiffFormatEmailOptions>, ) -> Result { assert!(patch_no > 0); assert!(patch_no <= total_patches); let mut default = DiffFormatEmailOptions::default(); let mut raw_opts = opts.map_or(&mut default.raw, |opts| &mut opts.raw); let summary = commit.summary_bytes().unwrap(); let mut message = commit.message_bytes(); assert!(message.starts_with(summary)); message = &message[summary.len()..]; raw_opts.patch_no = patch_no; raw_opts.total_patches = total_patches; let id = commit.id(); raw_opts.id = id.raw(); raw_opts.summary = summary.as_ptr() as *const _; raw_opts.body = message.as_ptr() as *const _; raw_opts.author = commit.author().raw(); let buf = Buf::new(); unsafe { try_call!(raw::git_diff_format_email(buf.raw(), self.raw, &*raw_opts)); } Ok(buf) } /// Create an patchid from a diff. 
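///
/// # Example
///
/// A short sketch, assuming `diff` was produced by one of the
/// `Repository::diff_*` methods:
///
/// ```no_run
/// # let repo = git2::Repository::open(".").unwrap();
/// # let diff = repo.diff_tree_to_workdir(None, None).unwrap();
/// let patchid = diff.patchid(None).unwrap();
/// assert_ne!(patchid, git2::Oid::zero());
/// ```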
pub fn patchid(&self, opts: Option<&mut DiffPatchidOptions>) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_diff_patchid( &mut raw, self.raw, opts.map(|o| &mut o.raw) )); Ok(Binding::from_raw(&raw as *const _)) } } // TODO: num_deltas_of_type, find_similar } impl Diff<'static> { /// Read the contents of a git patch file into a `git_diff` object. /// /// The diff object produced is similar to the one that would be /// produced if you actually produced it computationally by comparing /// two trees, however there may be subtle differences. For example, /// a patch file likely contains abbreviated object IDs, so the /// object IDs parsed by this function will also be abreviated. pub fn from_buffer(buffer: &[u8]) -> Result, Error> { crate::init(); let mut diff: *mut raw::git_diff = std::ptr::null_mut(); unsafe { // NOTE: Doesn't depend on repo, so lifetime can be 'static try_call!(raw::git_diff_from_buffer( &mut diff, buffer.as_ptr() as *const c_char, buffer.len() )); Ok(Diff::from_raw(diff)) } } } pub extern "C" fn print_cb( delta: *const raw::git_diff_delta, hunk: *const raw::git_diff_hunk, line: *const raw::git_diff_line, data: *mut c_void, ) -> c_int { unsafe { let delta = Binding::from_raw(delta as *mut _); let hunk = Binding::from_raw_opt(hunk); let line = Binding::from_raw(line); let r = panic::wrap(|| { let data = data as *mut &mut PrintCb<'_>; (*data)(delta, hunk, line) }); if r == Some(true) { raw::GIT_OK } else { raw::GIT_EUSER } } } pub extern "C" fn file_cb_c( delta: *const raw::git_diff_delta, progress: f32, data: *mut c_void, ) -> c_int { unsafe { let delta = Binding::from_raw(delta as *mut _); let r = panic::wrap(|| { let cbs = data as *mut DiffCallbacks<'_, '_, '_, '_, '_, '_, '_, '_>; match (*cbs).file { Some(ref mut cb) => cb(delta, progress), None => false, } }); if r == Some(true) { raw::GIT_OK } else { raw::GIT_EUSER } } } pub extern "C" fn binary_cb_c( delta: *const raw::git_diff_delta, binary: *const raw::git_diff_binary, data: *mut c_void, ) -> c_int { unsafe { let delta = Binding::from_raw(delta as *mut _); let binary = Binding::from_raw(binary); let r = panic::wrap(|| { let cbs = data as *mut DiffCallbacks<'_, '_, '_, '_, '_, '_, '_, '_>; match (*cbs).binary { Some(ref mut cb) => cb(delta, binary), None => false, } }); if r == Some(true) { raw::GIT_OK } else { raw::GIT_EUSER } } } pub extern "C" fn hunk_cb_c( delta: *const raw::git_diff_delta, hunk: *const raw::git_diff_hunk, data: *mut c_void, ) -> c_int { unsafe { let delta = Binding::from_raw(delta as *mut _); let hunk = Binding::from_raw(hunk); let r = panic::wrap(|| { let cbs = data as *mut DiffCallbacks<'_, '_, '_, '_, '_, '_, '_, '_>; match (*cbs).hunk { Some(ref mut cb) => cb(delta, hunk), None => false, } }); if r == Some(true) { raw::GIT_OK } else { raw::GIT_EUSER } } } pub extern "C" fn line_cb_c( delta: *const raw::git_diff_delta, hunk: *const raw::git_diff_hunk, line: *const raw::git_diff_line, data: *mut c_void, ) -> c_int { unsafe { let delta = Binding::from_raw(delta as *mut _); let hunk = Binding::from_raw_opt(hunk); let line = Binding::from_raw(line); let r = panic::wrap(|| { let cbs = data as *mut DiffCallbacks<'_, '_, '_, '_, '_, '_, '_, '_>; match (*cbs).line { Some(ref mut cb) => cb(delta, hunk, line), None => false, } }); if r == Some(true) { raw::GIT_OK } else { raw::GIT_EUSER } } } impl<'repo> Binding for Diff<'repo> { type Raw = *mut raw::git_diff; unsafe fn from_raw(raw: *mut raw::git_diff) -> Diff<'repo> { Diff { raw, _marker: 
marker::PhantomData, } } fn raw(&self) -> *mut raw::git_diff { self.raw } } impl<'repo> Drop for Diff<'repo> { fn drop(&mut self) { unsafe { raw::git_diff_free(self.raw) } } } impl<'a> DiffDelta<'a> { /// Returns the flags on the delta. /// /// For more information, see `DiffFlags`'s documentation. pub fn flags(&self) -> DiffFlags { let flags = unsafe { (*self.raw).flags }; let mut result = DiffFlags::empty(); #[cfg(target_env = "msvc")] fn as_u32(flag: i32) -> u32 { flag as u32 } #[cfg(not(target_env = "msvc"))] fn as_u32(flag: u32) -> u32 { flag } if (flags & as_u32(raw::GIT_DIFF_FLAG_BINARY)) != 0 { result |= DiffFlags::BINARY; } if (flags & as_u32(raw::GIT_DIFF_FLAG_NOT_BINARY)) != 0 { result |= DiffFlags::NOT_BINARY; } if (flags & as_u32(raw::GIT_DIFF_FLAG_VALID_ID)) != 0 { result |= DiffFlags::VALID_ID; } if (flags & as_u32(raw::GIT_DIFF_FLAG_EXISTS)) != 0 { result |= DiffFlags::EXISTS; } result } // TODO: expose when diffs are more exposed // pub fn similarity(&self) -> u16 { // unsafe { (*self.raw).similarity } // } /// Returns the number of files in this delta. pub fn nfiles(&self) -> u16 { unsafe { (*self.raw).nfiles } } /// Returns the status of this entry /// /// For more information, see `Delta`'s documentation pub fn status(&self) -> Delta { match unsafe { (*self.raw).status } { raw::GIT_DELTA_UNMODIFIED => Delta::Unmodified, raw::GIT_DELTA_ADDED => Delta::Added, raw::GIT_DELTA_DELETED => Delta::Deleted, raw::GIT_DELTA_MODIFIED => Delta::Modified, raw::GIT_DELTA_RENAMED => Delta::Renamed, raw::GIT_DELTA_COPIED => Delta::Copied, raw::GIT_DELTA_IGNORED => Delta::Ignored, raw::GIT_DELTA_UNTRACKED => Delta::Untracked, raw::GIT_DELTA_TYPECHANGE => Delta::Typechange, raw::GIT_DELTA_UNREADABLE => Delta::Unreadable, raw::GIT_DELTA_CONFLICTED => Delta::Conflicted, n => panic!("unknown diff status: {}", n), } } /// Return the file which represents the "from" side of the diff. /// /// What side this means depends on the function that was used to generate /// the diff and will be documented on the function itself. pub fn old_file(&self) -> DiffFile<'a> { unsafe { Binding::from_raw(&(*self.raw).old_file as *const _) } } /// Return the file which represents the "to" side of the diff. /// /// What side this means depends on the function that was used to generate /// the diff and will be documented on the function itself. pub fn new_file(&self) -> DiffFile<'a> { unsafe { Binding::from_raw(&(*self.raw).new_file as *const _) } } } impl<'a> Binding for DiffDelta<'a> { type Raw = *mut raw::git_diff_delta; unsafe fn from_raw(raw: *mut raw::git_diff_delta) -> DiffDelta<'a> { DiffDelta { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_diff_delta { self.raw } } impl<'a> std::fmt::Debug for DiffDelta<'a> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("DiffDelta") .field("nfiles", &self.nfiles()) .field("status", &self.status()) .field("old_file", &self.old_file()) .field("new_file", &self.new_file()) .finish() } } impl<'a> DiffFile<'a> { /// Returns the Oid of this item. /// /// If this entry represents an absent side of a diff (e.g. the `old_file` /// of a `Added` delta), then the oid returned will be zeroes. pub fn id(&self) -> Oid { unsafe { Binding::from_raw(&(*self.raw).id as *const _) } } /// Returns the path, in bytes, of the entry relative to the working /// directory of the repository. 
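///
/// # Example
///
/// A rough sketch printing the new-side path of every delta, assuming `diff`
/// was produced by one of the `Repository::diff_*` methods:
///
/// ```no_run
/// # let repo = git2::Repository::open(".").unwrap();
/// # let diff = repo.diff_tree_to_workdir(None, None).unwrap();
/// for delta in diff.deltas() {
///     if let Some(bytes) = delta.new_file().path_bytes() {
///         println!("{}", String::from_utf8_lossy(bytes));
///     }
/// }
/// ```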
pub fn path_bytes(&self) -> Option<&'a [u8]> { static FOO: () = (); unsafe { crate::opt_bytes(&FOO, (*self.raw).path) } } /// Returns the path of the entry relative to the working directory of the /// repository. pub fn path(&self) -> Option<&'a Path> { self.path_bytes().map(util::bytes2path) } /// Returns the size of this entry, in bytes pub fn size(&self) -> u64 { unsafe { (*self.raw).size as u64 } } /// Returns `true` if file(s) are treated as binary data. pub fn is_binary(&self) -> bool { unsafe { (*self.raw).flags & raw::GIT_DIFF_FLAG_BINARY as u32 != 0 } } /// Returns `true` if file(s) are treated as text data. pub fn is_not_binary(&self) -> bool { unsafe { (*self.raw).flags & raw::GIT_DIFF_FLAG_NOT_BINARY as u32 != 0 } } /// Returns `true` if `id` value is known correct. pub fn is_valid_id(&self) -> bool { unsafe { (*self.raw).flags & raw::GIT_DIFF_FLAG_VALID_ID as u32 != 0 } } /// Returns `true` if file exists at this side of the delta. pub fn exists(&self) -> bool { unsafe { (*self.raw).flags & raw::GIT_DIFF_FLAG_EXISTS as u32 != 0 } } /// Returns file mode. pub fn mode(&self) -> FileMode { match unsafe { (*self.raw).mode.into() } { raw::GIT_FILEMODE_UNREADABLE => FileMode::Unreadable, raw::GIT_FILEMODE_TREE => FileMode::Tree, raw::GIT_FILEMODE_BLOB => FileMode::Blob, raw::GIT_FILEMODE_BLOB_EXECUTABLE => FileMode::BlobExecutable, raw::GIT_FILEMODE_LINK => FileMode::Link, raw::GIT_FILEMODE_COMMIT => FileMode::Commit, mode => panic!("unknown mode: {}", mode), } } } impl<'a> Binding for DiffFile<'a> { type Raw = *const raw::git_diff_file; unsafe fn from_raw(raw: *const raw::git_diff_file) -> DiffFile<'a> { DiffFile { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_diff_file { self.raw } } impl<'a> std::fmt::Debug for DiffFile<'a> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("DiffFile"); ds.field("id", &self.id()); if let Some(path_bytes) = &self.path_bytes() { ds.field("path_bytes", path_bytes); } if let Some(path) = &self.path() { ds.field("path", path); } ds.field("size", &self.size()).finish() } } impl Default for DiffOptions { fn default() -> Self { Self::new() } } impl DiffOptions { /// Creates a new set of empty diff options. /// /// All flags and other options are defaulted to false or their otherwise /// zero equivalents. pub fn new() -> DiffOptions { let mut opts = DiffOptions { pathspec: Vec::new(), pathspec_ptrs: Vec::new(), raw: unsafe { mem::zeroed() }, old_prefix: None, new_prefix: None, }; assert_eq!(unsafe { raw::git_diff_init_options(&mut opts.raw, 1) }, 0); opts } fn flag(&mut self, opt: i32, val: bool) -> &mut DiffOptions { let opt = opt as u32; if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Flag indicating whether the sides of the diff will be reversed. pub fn reverse(&mut self, reverse: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_REVERSE, reverse) } /// Flag indicating whether ignored files are included. pub fn include_ignored(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_IGNORED, include) } /// Flag indicating whether ignored directories are traversed deeply or not. 
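///
/// A small sketch of how this flag is typically paired with
/// `include_ignored`:
///
/// ```no_run
/// use git2::DiffOptions;
///
/// let mut opts = DiffOptions::new();
/// opts.include_ignored(true).recurse_ignored_dirs(true);
/// ```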
pub fn recurse_ignored_dirs(&mut self, recurse: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_RECURSE_IGNORED_DIRS, recurse) } /// Flag indicating whether untracked files are in the diff pub fn include_untracked(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_UNTRACKED, include) } /// Flag indicating whether untracked directories are traversed deeply or /// not. pub fn recurse_untracked_dirs(&mut self, recurse: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_RECURSE_UNTRACKED_DIRS, recurse) } /// Flag indicating whether unmodified files are in the diff. pub fn include_unmodified(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_UNMODIFIED, include) } /// If enabled, then Typechange delta records are generated. pub fn include_typechange(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_TYPECHANGE, include) } /// Event with `include_typechange`, the tree returned generally shows a /// deleted blob. This flag correctly labels the tree transitions as a /// typechange record with the `new_file`'s mode set to tree. /// /// Note that the tree SHA will not be available. pub fn include_typechange_trees(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_TYPECHANGE_TREES, include) } /// Flag indicating whether file mode changes are ignored. pub fn ignore_filemode(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_FILEMODE, ignore) } /// Flag indicating whether all submodules should be treated as unmodified. pub fn ignore_submodules(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_SUBMODULES, ignore) } /// Flag indicating whether case insensitive filenames should be used. pub fn ignore_case(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_CASE, ignore) } /// If pathspecs are specified, this flag means that they should be applied /// as an exact match instead of a fnmatch pattern. pub fn disable_pathspec_match(&mut self, disable: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_DISABLE_PATHSPEC_MATCH, disable) } /// Disable updating the `binary` flag in delta records. This is useful when /// iterating over a diff if you don't need hunk and data callbacks and want /// to avoid having to load a file completely. pub fn skip_binary_check(&mut self, skip: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_SKIP_BINARY_CHECK, skip) } /// When diff finds an untracked directory, to match the behavior of core /// Git, it scans the contents for ignored and untracked files. If all /// contents are ignored, then the directory is ignored; if any contents are /// not ignored, then the directory is untracked. This is extra work that /// may not matter in many cases. /// /// This flag turns off that scan and immediately labels an untracked /// directory as untracked (changing the behavior to not match core git). pub fn enable_fast_untracked_dirs(&mut self, enable: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_ENABLE_FAST_UNTRACKED_DIRS, enable) } /// When diff finds a file in the working directory with stat information /// different from the index, but the OID ends up being the same, write the /// correct stat information into the index. Note: without this flag, diff /// will always leave the index untouched. 
pub fn update_index(&mut self, update: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_UPDATE_INDEX, update) } /// Include unreadable files in the diff pub fn include_unreadable(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_UNREADABLE, include) } /// Include unreadable files in the diff as untracked files pub fn include_unreadable_as_untracked(&mut self, include: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INCLUDE_UNREADABLE_AS_UNTRACKED, include) } /// Treat all files as text, disabling binary attributes and detection. pub fn force_text(&mut self, force: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_FORCE_TEXT, force) } /// Treat all files as binary, disabling text diffs pub fn force_binary(&mut self, force: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_FORCE_BINARY, force) } /// Ignore all whitespace pub fn ignore_whitespace(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_WHITESPACE, ignore) } /// Ignore changes in the amount of whitespace pub fn ignore_whitespace_change(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_WHITESPACE_CHANGE, ignore) } /// Ignore whitespace at the end of line pub fn ignore_whitespace_eol(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_WHITESPACE_EOL, ignore) } /// Ignore blank lines pub fn ignore_blank_lines(&mut self, ignore: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_IGNORE_BLANK_LINES, ignore) } /// When generating patch text, include the content of untracked files. /// /// This automatically turns on `include_untracked` but it does not turn on /// `recurse_untracked_dirs`. Add that flag if you want the content of every /// single untracked file. pub fn show_untracked_content(&mut self, show: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_SHOW_UNTRACKED_CONTENT, show) } /// When generating output, include the names of unmodified files if they /// are included in the `Diff`. Normally these are skipped in the formats /// that list files (e.g. name-only, name-status, raw). Even with this these /// will not be included in the patch format. pub fn show_unmodified(&mut self, show: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_SHOW_UNMODIFIED, show) } /// Use the "patience diff" algorithm pub fn patience(&mut self, patience: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_PATIENCE, patience) } /// Take extra time to find the minimal diff pub fn minimal(&mut self, minimal: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_MINIMAL, minimal) } /// Include the necessary deflate/delta information so that `git-apply` can /// apply given diff information to binary files. pub fn show_binary(&mut self, show: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_SHOW_BINARY, show) } /// Use a heuristic that takes indentation and whitespace into account /// which generally can produce better diffs when dealing with ambiguous /// diff hunks. pub fn indent_heuristic(&mut self, heuristic: bool) -> &mut DiffOptions { self.flag(raw::GIT_DIFF_INDENT_HEURISTIC, heuristic) } /// Set the number of unchanged lines that define the boundary of a hunk /// (and to display before and after). /// /// The default value for this is 3. pub fn context_lines(&mut self, lines: u32) -> &mut DiffOptions { self.raw.context_lines = lines; self } /// Set the maximum number of unchanged lines between hunk boundaries before /// the hunks will be merged into one. /// /// The default value for this is 0. 
pub fn interhunk_lines(&mut self, lines: u32) -> &mut DiffOptions { self.raw.interhunk_lines = lines; self } /// The default value for this is `core.abbrev` or 7 if unset. pub fn id_abbrev(&mut self, abbrev: u16) -> &mut DiffOptions { self.raw.id_abbrev = abbrev; self } /// Maximum size (in bytes) above which a blob will be marked as binary /// automatically. /// /// A negative value will disable this entirely. /// /// The default value for this is 512MB. pub fn max_size(&mut self, size: i64) -> &mut DiffOptions { self.raw.max_size = size as raw::git_off_t; self } /// The virtual "directory" to prefix old file names with in hunk headers. /// /// The default value for this is "a". pub fn old_prefix(&mut self, t: T) -> &mut DiffOptions { self.old_prefix = Some(t.into_c_string().unwrap()); self } /// The virtual "directory" to prefix new file names with in hunk headers. /// /// The default value for this is "b". pub fn new_prefix(&mut self, t: T) -> &mut DiffOptions { self.new_prefix = Some(t.into_c_string().unwrap()); self } /// Add to the array of paths/fnmatch patterns to constrain the diff. pub fn pathspec(&mut self, pathspec: T) -> &mut DiffOptions { let s = util::cstring_to_repo_path(pathspec).unwrap(); self.pathspec_ptrs.push(s.as_ptr()); self.pathspec.push(s); self } /// Acquire a pointer to the underlying raw options. /// /// This function is unsafe as the pointer is only valid so long as this /// structure is not moved, modified, or used elsewhere. pub unsafe fn raw(&mut self) -> *const raw::git_diff_options { self.raw.old_prefix = self .old_prefix .as_ref() .map(|s| s.as_ptr()) .unwrap_or(ptr::null()); self.raw.new_prefix = self .new_prefix .as_ref() .map(|s| s.as_ptr()) .unwrap_or(ptr::null()); self.raw.pathspec.count = self.pathspec_ptrs.len() as size_t; self.raw.pathspec.strings = self.pathspec_ptrs.as_ptr() as *mut _; &self.raw as *const _ } // TODO: expose ignore_submodules, notify_cb/notify_payload } impl<'diff> Iterator for Deltas<'diff> { type Item = DiffDelta<'diff>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.diff.get_delta(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'diff> DoubleEndedIterator for Deltas<'diff> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.diff.get_delta(i)) } } impl<'diff> ExactSizeIterator for Deltas<'diff> {} /// Line origin constants. #[derive(Copy, Clone, Debug, PartialEq)] pub enum DiffLineType { /// These values will be sent to `git_diff_line_cb` along with the line Context, /// Addition, /// Deletion, /// Both files have no LF at end ContextEOFNL, /// Old has no LF at end, new does AddEOFNL, /// Old has LF at end, new does not DeleteEOFNL, /// The following values will only be sent to a `git_diff_line_cb` when /// the content of a diff is being formatted through `git_diff_print`. 
FileHeader, /// HunkHeader, /// For "Binary files x and y differ" Binary, } impl Binding for DiffLineType { type Raw = raw::git_diff_line_t; unsafe fn from_raw(raw: raw::git_diff_line_t) -> Self { match raw { raw::GIT_DIFF_LINE_CONTEXT => DiffLineType::Context, raw::GIT_DIFF_LINE_ADDITION => DiffLineType::Addition, raw::GIT_DIFF_LINE_DELETION => DiffLineType::Deletion, raw::GIT_DIFF_LINE_CONTEXT_EOFNL => DiffLineType::ContextEOFNL, raw::GIT_DIFF_LINE_ADD_EOFNL => DiffLineType::AddEOFNL, raw::GIT_DIFF_LINE_DEL_EOFNL => DiffLineType::DeleteEOFNL, raw::GIT_DIFF_LINE_FILE_HDR => DiffLineType::FileHeader, raw::GIT_DIFF_LINE_HUNK_HDR => DiffLineType::HunkHeader, raw::GIT_DIFF_LINE_BINARY => DiffLineType::Binary, _ => panic!("Unknown git diff line type"), } } fn raw(&self) -> raw::git_diff_line_t { match *self { DiffLineType::Context => raw::GIT_DIFF_LINE_CONTEXT, DiffLineType::Addition => raw::GIT_DIFF_LINE_ADDITION, DiffLineType::Deletion => raw::GIT_DIFF_LINE_DELETION, DiffLineType::ContextEOFNL => raw::GIT_DIFF_LINE_CONTEXT_EOFNL, DiffLineType::AddEOFNL => raw::GIT_DIFF_LINE_ADD_EOFNL, DiffLineType::DeleteEOFNL => raw::GIT_DIFF_LINE_DEL_EOFNL, DiffLineType::FileHeader => raw::GIT_DIFF_LINE_FILE_HDR, DiffLineType::HunkHeader => raw::GIT_DIFF_LINE_HUNK_HDR, DiffLineType::Binary => raw::GIT_DIFF_LINE_BINARY, } } } impl<'a> DiffLine<'a> { /// Line number in old file or `None` for added line pub fn old_lineno(&self) -> Option { match unsafe { (*self.raw).old_lineno } { n if n < 0 => None, n => Some(n as u32), } } /// Line number in new file or `None` for deleted line pub fn new_lineno(&self) -> Option { match unsafe { (*self.raw).new_lineno } { n if n < 0 => None, n => Some(n as u32), } } /// Number of newline characters in content pub fn num_lines(&self) -> u32 { unsafe { (*self.raw).num_lines as u32 } } /// Offset in the original file to the content pub fn content_offset(&self) -> i64 { unsafe { (*self.raw).content_offset as i64 } } /// Content of this line as bytes. pub fn content(&self) -> &'a [u8] { unsafe { slice::from_raw_parts( (*self.raw).content as *const u8, (*self.raw).content_len as usize, ) } } /// origin of this `DiffLine`. /// pub fn origin_value(&self) -> DiffLineType { unsafe { Binding::from_raw((*self.raw).origin as raw::git_diff_line_t) } } /// Sigil showing the origin of this `DiffLine`. 
/// /// * ` ` - Line context /// * `+` - Line addition /// * `-` - Line deletion /// * `=` - Context (End of file) /// * `>` - Add (End of file) /// * `<` - Remove (End of file) /// * `F` - File header /// * `H` - Hunk header /// * `B` - Line binary pub fn origin(&self) -> char { match unsafe { (*self.raw).origin as raw::git_diff_line_t } { raw::GIT_DIFF_LINE_CONTEXT => ' ', raw::GIT_DIFF_LINE_ADDITION => '+', raw::GIT_DIFF_LINE_DELETION => '-', raw::GIT_DIFF_LINE_CONTEXT_EOFNL => '=', raw::GIT_DIFF_LINE_ADD_EOFNL => '>', raw::GIT_DIFF_LINE_DEL_EOFNL => '<', raw::GIT_DIFF_LINE_FILE_HDR => 'F', raw::GIT_DIFF_LINE_HUNK_HDR => 'H', raw::GIT_DIFF_LINE_BINARY => 'B', _ => ' ', } } } impl<'a> Binding for DiffLine<'a> { type Raw = *const raw::git_diff_line; unsafe fn from_raw(raw: *const raw::git_diff_line) -> DiffLine<'a> { DiffLine { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_diff_line { self.raw } } impl<'a> std::fmt::Debug for DiffLine<'a> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("DiffLine"); if let Some(old_lineno) = &self.old_lineno() { ds.field("old_lineno", old_lineno); } if let Some(new_lineno) = &self.new_lineno() { ds.field("new_lineno", new_lineno); } ds.field("num_lines", &self.num_lines()) .field("content_offset", &self.content_offset()) .field("content", &self.content()) .field("origin", &self.origin()) .finish() } } impl<'a> DiffHunk<'a> { /// Starting line number in old_file pub fn old_start(&self) -> u32 { unsafe { (*self.raw).old_start as u32 } } /// Number of lines in old_file pub fn old_lines(&self) -> u32 { unsafe { (*self.raw).old_lines as u32 } } /// Starting line number in new_file pub fn new_start(&self) -> u32 { unsafe { (*self.raw).new_start as u32 } } /// Number of lines in new_file pub fn new_lines(&self) -> u32 { unsafe { (*self.raw).new_lines as u32 } } /// Header text pub fn header(&self) -> &'a [u8] { unsafe { slice::from_raw_parts( (*self.raw).header.as_ptr() as *const u8, (*self.raw).header_len as usize, ) } } } impl<'a> Binding for DiffHunk<'a> { type Raw = *const raw::git_diff_hunk; unsafe fn from_raw(raw: *const raw::git_diff_hunk) -> DiffHunk<'a> { DiffHunk { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_diff_hunk { self.raw } } impl<'a> std::fmt::Debug for DiffHunk<'a> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("DiffHunk") .field("old_start", &self.old_start()) .field("old_lines", &self.old_lines()) .field("new_start", &self.new_start()) .field("new_lines", &self.new_lines()) .field("header", &self.header()) .finish() } } impl DiffStats { /// Get the total number of files chaned in a diff. 
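///
/// # Example
///
/// A minimal sketch, assuming `diff` was produced by one of the
/// `Repository::diff_*` methods:
///
/// ```no_run
/// # let repo = git2::Repository::open(".").unwrap();
/// # let diff = repo.diff_tree_to_workdir(None, None).unwrap();
/// let stats = diff.stats().unwrap();
/// println!(
///     "{} file(s) changed, {} insertion(s), {} deletion(s)",
///     stats.files_changed(),
///     stats.insertions(),
///     stats.deletions()
/// );
/// ```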
pub fn files_changed(&self) -> usize { unsafe { raw::git_diff_stats_files_changed(&*self.raw) as usize } } /// Get the total number of insertions in a diff pub fn insertions(&self) -> usize { unsafe { raw::git_diff_stats_insertions(&*self.raw) as usize } } /// Get the total number of deletions in a diff pub fn deletions(&self) -> usize { unsafe { raw::git_diff_stats_deletions(&*self.raw) as usize } } /// Print diff statistics to a Buf pub fn to_buf(&self, format: DiffStatsFormat, width: usize) -> Result { let buf = Buf::new(); unsafe { try_call!(raw::git_diff_stats_to_buf( buf.raw(), self.raw, format.bits(), width as size_t )); } Ok(buf) } } impl Binding for DiffStats { type Raw = *mut raw::git_diff_stats; unsafe fn from_raw(raw: *mut raw::git_diff_stats) -> DiffStats { DiffStats { raw } } fn raw(&self) -> *mut raw::git_diff_stats { self.raw } } impl Drop for DiffStats { fn drop(&mut self) { unsafe { raw::git_diff_stats_free(self.raw) } } } impl std::fmt::Debug for DiffStats { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("DiffStats") .field("files_changed", &self.files_changed()) .field("insertions", &self.insertions()) .field("deletions", &self.deletions()) .finish() } } impl<'a> DiffBinary<'a> { /// Returns whether there is data in this binary structure or not. /// /// If this is `true`, then this was produced and included binary content. /// If this is `false` then this was generated knowing only that a binary /// file changed but without providing the data, probably from a patch that /// said `Binary files a/file.txt and b/file.txt differ`. pub fn contains_data(&self) -> bool { unsafe { (*self.raw).contains_data == 1 } } /// The contents of the old file. pub fn old_file(&self) -> DiffBinaryFile<'a> { unsafe { Binding::from_raw(&(*self.raw).old_file as *const _) } } /// The contents of the new file. 
pub fn new_file(&self) -> DiffBinaryFile<'a> { unsafe { Binding::from_raw(&(*self.raw).new_file as *const _) } } } impl<'a> Binding for DiffBinary<'a> { type Raw = *const raw::git_diff_binary; unsafe fn from_raw(raw: *const raw::git_diff_binary) -> DiffBinary<'a> { DiffBinary { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_diff_binary { self.raw } } impl<'a> DiffBinaryFile<'a> { /// The type of binary data for this file pub fn kind(&self) -> DiffBinaryKind { unsafe { Binding::from_raw((*self.raw).kind) } } /// The binary data, deflated pub fn data(&self) -> &[u8] { unsafe { slice::from_raw_parts((*self.raw).data as *const u8, (*self.raw).datalen as usize) } } /// The length of the binary data after inflation pub fn inflated_len(&self) -> usize { unsafe { (*self.raw).inflatedlen as usize } } } impl<'a> Binding for DiffBinaryFile<'a> { type Raw = *const raw::git_diff_binary_file; unsafe fn from_raw(raw: *const raw::git_diff_binary_file) -> DiffBinaryFile<'a> { DiffBinaryFile { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_diff_binary_file { self.raw } } impl Binding for DiffBinaryKind { type Raw = raw::git_diff_binary_t; unsafe fn from_raw(raw: raw::git_diff_binary_t) -> DiffBinaryKind { match raw { raw::GIT_DIFF_BINARY_NONE => DiffBinaryKind::None, raw::GIT_DIFF_BINARY_LITERAL => DiffBinaryKind::Literal, raw::GIT_DIFF_BINARY_DELTA => DiffBinaryKind::Delta, _ => panic!("Unknown git diff binary kind"), } } fn raw(&self) -> raw::git_diff_binary_t { match *self { DiffBinaryKind::None => raw::GIT_DIFF_BINARY_NONE, DiffBinaryKind::Literal => raw::GIT_DIFF_BINARY_LITERAL, DiffBinaryKind::Delta => raw::GIT_DIFF_BINARY_DELTA, } } } impl Default for DiffFindOptions { fn default() -> Self { Self::new() } } impl DiffFindOptions { /// Creates a new set of empty diff find options. /// /// All flags and other options are defaulted to false or their otherwise /// zero equivalents. pub fn new() -> DiffFindOptions { let mut opts = DiffFindOptions { raw: unsafe { mem::zeroed() }, }; assert_eq!( unsafe { raw::git_diff_find_init_options(&mut opts.raw, 1) }, 0 ); opts } fn flag(&mut self, opt: u32, val: bool) -> &mut DiffFindOptions { if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Reset all flags back to their unset state, indicating that /// `diff.renames` should be used instead. This is overridden once any flag /// is set. pub fn by_config(&mut self) -> &mut DiffFindOptions { self.flag(0xffffffff, false) } /// Look for renames? pub fn renames(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_RENAMES, find) } /// Consider old side of modified for renames? pub fn renames_from_rewrites(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_RENAMES_FROM_REWRITES, find) } /// Look for copies? pub fn copies(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_COPIES, find) } /// Consider unmodified as copy sources? /// /// For this to work correctly, use `include_unmodified` when the initial /// diff is being generated. pub fn copies_from_unmodified(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_COPIES_FROM_UNMODIFIED, find) } /// Mark significant rewrites for split. 
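///
/// A sketch of enabling rename and rewrite detection on a freshly generated
/// diff; the repository path is a placeholder:
///
/// ```no_run
/// use git2::DiffFindOptions;
/// # let repo = git2::Repository::open(".").unwrap();
/// # let mut diff = repo.diff_tree_to_workdir(None, None).unwrap();
/// let mut opts = DiffFindOptions::new();
/// opts.renames(true).rewrites(true);
/// diff.find_similar(Some(&mut opts)).unwrap();
/// ```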
pub fn rewrites(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_REWRITES, find) } /// Actually split large rewrites into delete/add pairs pub fn break_rewrites(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_BREAK_REWRITES, find) } #[doc(hidden)] pub fn break_rewries(&mut self, find: bool) -> &mut DiffFindOptions { self.break_rewrites(find) } /// Find renames/copies for untracked items in working directory. /// /// For this to work correctly use the `include_untracked` option when the /// initial diff is being generated. pub fn for_untracked(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_FOR_UNTRACKED, find) } /// Turn on all finding features. pub fn all(&mut self, find: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_ALL, find) } /// Measure similarity ignoring leading whitespace (default) pub fn ignore_leading_whitespace(&mut self, ignore: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_IGNORE_LEADING_WHITESPACE, ignore) } /// Measure similarity ignoring all whitespace pub fn ignore_whitespace(&mut self, ignore: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_IGNORE_WHITESPACE, ignore) } /// Measure similarity including all data pub fn dont_ignore_whitespace(&mut self, dont: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_DONT_IGNORE_WHITESPACE, dont) } /// Measure similarity only by comparing SHAs (fast and cheap) pub fn exact_match_only(&mut self, exact: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_EXACT_MATCH_ONLY, exact) } /// Do not break rewrites unless they contribute to a rename. /// /// Normally, `break_rewrites` and `rewrites` will measure the /// self-similarity of modified files and split the ones that have changed a /// lot into a delete/add pair. Then the sides of that pair will be /// considered candidates for rename and copy detection /// /// If you add this flag in and the split pair is not used for an actual /// rename or copy, then the modified record will be restored to a regular /// modified record instead of being split. pub fn break_rewrites_for_renames_only(&mut self, b: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_BREAK_REWRITES_FOR_RENAMES_ONLY, b) } /// Remove any unmodified deltas after find_similar is done. /// /// Using `copies_from_unmodified` to emulate the `--find-copies-harder` /// behavior requires building a diff with the `include_unmodified` flag. If /// you do not want unmodified records in the final result, pas this flag to /// have them removed. 
pub fn remove_unmodified(&mut self, remove: bool) -> &mut DiffFindOptions { self.flag(raw::GIT_DIFF_FIND_REMOVE_UNMODIFIED, remove) } /// Similarity to consider a file renamed (default 50) pub fn rename_threshold(&mut self, thresh: u16) -> &mut DiffFindOptions { self.raw.rename_threshold = thresh; self } /// Similarity of modified to be glegible rename source (default 50) pub fn rename_from_rewrite_threshold(&mut self, thresh: u16) -> &mut DiffFindOptions { self.raw.rename_from_rewrite_threshold = thresh; self } /// Similarity to consider a file copy (default 50) pub fn copy_threshold(&mut self, thresh: u16) -> &mut DiffFindOptions { self.raw.copy_threshold = thresh; self } /// Similarity to split modify into delete/add pair (default 60) pub fn break_rewrite_threshold(&mut self, thresh: u16) -> &mut DiffFindOptions { self.raw.break_rewrite_threshold = thresh; self } /// Maximum similarity sources to examine for a file (somewhat like /// git-diff's `-l` option or `diff.renameLimit` config) /// /// Defaults to 200 pub fn rename_limit(&mut self, limit: usize) -> &mut DiffFindOptions { self.raw.rename_limit = limit as size_t; self } // TODO: expose git_diff_similarity_metric } impl Default for DiffFormatEmailOptions { fn default() -> Self { Self::new() } } impl DiffFormatEmailOptions { /// Creates a new set of email options, /// initialized to the default values pub fn new() -> Self { let mut opts = DiffFormatEmailOptions { raw: unsafe { mem::zeroed() }, }; assert_eq!( unsafe { raw::git_diff_format_email_options_init(&mut opts.raw, 1) }, 0 ); opts } fn flag(&mut self, opt: u32, val: bool) -> &mut Self { if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Exclude `[PATCH]` from the subject header pub fn exclude_subject_patch_header(&mut self, should_exclude: bool) -> &mut Self { self.flag( raw::GIT_DIFF_FORMAT_EMAIL_EXCLUDE_SUBJECT_PATCH_MARKER, should_exclude, ) } } impl DiffPatchidOptions { /// Creates a new set of patchid options, /// initialized to the default values pub fn new() -> Self { let mut opts = DiffPatchidOptions { raw: unsafe { mem::zeroed() }, }; assert_eq!( unsafe { raw::git_diff_patchid_options_init( &mut opts.raw, raw::GIT_DIFF_PATCHID_OPTIONS_VERSION, ) }, 0 ); opts } } #[cfg(test)] mod tests { use crate::{DiffLineType, DiffOptions, Oid, Signature, Time}; use std::borrow::Borrow; use std::fs::File; use std::io::Write; use std::path::Path; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let diff = repo.diff_tree_to_workdir(None, None).unwrap(); assert_eq!(diff.deltas().len(), 0); let stats = diff.stats().unwrap(); assert_eq!(stats.insertions(), 0); assert_eq!(stats.deletions(), 0); assert_eq!(stats.files_changed(), 0); let patchid = diff.patchid(None).unwrap(); assert_ne!(patchid, Oid::zero()); } #[test] fn foreach_smoke() { let (_td, repo) = crate::test::repo_init(); let diff = t!(repo.diff_tree_to_workdir(None, None)); let mut count = 0; t!(diff.foreach( &mut |_file, _progress| { count = count + 1; true }, None, None, None )); assert_eq!(count, 0); } #[test] fn foreach_file_only() { let path = Path::new("foo"); let (td, repo) = crate::test::repo_init(); t!(t!(File::create(&td.path().join(path))).write_all(b"bar")); let mut opts = DiffOptions::new(); opts.include_untracked(true); let diff = t!(repo.diff_tree_to_workdir(None, Some(&mut opts))); let mut count = 0; let mut result = None; t!(diff.foreach( &mut |file, _progress| { count = count + 1; result = file.new_file().path().map(ToOwned::to_owned); true }, None, None, None )); 
assert_eq!(result.as_ref().map(Borrow::borrow), Some(path)); assert_eq!(count, 1); } #[test] fn foreach_file_and_hunk() { let path = Path::new("foo"); let (td, repo) = crate::test::repo_init(); t!(t!(File::create(&td.path().join(path))).write_all(b"bar")); let mut index = t!(repo.index()); t!(index.add_path(path)); let mut opts = DiffOptions::new(); opts.include_untracked(true); let diff = t!(repo.diff_tree_to_index(None, Some(&index), Some(&mut opts))); let mut new_lines = 0; t!(diff.foreach( &mut |_file, _progress| { true }, None, Some(&mut |_file, hunk| { new_lines = hunk.new_lines(); true }), None )); assert_eq!(new_lines, 1); } #[test] fn foreach_all_callbacks() { let fib = vec![0, 1, 1, 2, 3, 5, 8]; // Verified with a node implementation of deflate, might be worth // adding a deflate lib to do this inline here. let deflated_fib = vec![120, 156, 99, 96, 100, 100, 98, 102, 229, 0, 0, 0, 53, 0, 21]; let foo_path = Path::new("foo"); let bin_path = Path::new("bin"); let (td, repo) = crate::test::repo_init(); t!(t!(File::create(&td.path().join(foo_path))).write_all(b"bar\n")); t!(t!(File::create(&td.path().join(bin_path))).write_all(&fib)); let mut index = t!(repo.index()); t!(index.add_path(foo_path)); t!(index.add_path(bin_path)); let mut opts = DiffOptions::new(); opts.include_untracked(true).show_binary(true); let diff = t!(repo.diff_tree_to_index(None, Some(&index), Some(&mut opts))); let mut bin_content = None; let mut new_lines = 0; let mut line_content = None; t!(diff.foreach( &mut |_file, _progress| { true }, Some(&mut |_file, binary| { bin_content = Some(binary.new_file().data().to_owned()); true }), Some(&mut |_file, hunk| { new_lines = hunk.new_lines(); true }), Some(&mut |_file, _hunk, line| { line_content = String::from_utf8(line.content().into()).ok(); true }) )); assert_eq!(bin_content, Some(deflated_fib)); assert_eq!(new_lines, 1); assert_eq!(line_content, Some("bar\n".to_string())); } #[test] fn format_email_simple() { let (_td, repo) = crate::test::repo_init(); const COMMIT_MESSAGE: &str = "Modify some content"; const EXPECTED_EMAIL_START: &str = concat!( "From f1234fb0588b6ed670779a34ba5c51ef962f285f Mon Sep 17 00:00:00 2001\n", "From: Techcable \n", "Date: Tue, 11 Jan 1972 17:46:40 +0000\n", "Subject: [PATCH] Modify some content\n", "\n", "---\n", " file1.txt | 8 +++++---\n", " 1 file changed, 5 insertions(+), 3 deletions(-)\n", "\n", "diff --git a/file1.txt b/file1.txt\n", "index 94aaae8..af8f41d 100644\n", "--- a/file1.txt\n", "+++ b/file1.txt\n", "@@ -1,15 +1,17 @@\n", " file1.txt\n", " file1.txt\n", "+_file1.txt_\n", " file1.txt\n", " file1.txt\n", " file1.txt\n", " file1.txt\n", "+\n", "+\n", " file1.txt\n", " file1.txt\n", " file1.txt\n", " file1.txt\n", " file1.txt\n", "-file1.txt\n", "-file1.txt\n", "-file1.txt\n", "+_file1.txt_\n", "+_file1.txt_\n", " file1.txt\n", "--\n" ); const ORIGINAL_FILE: &str = concat!( "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n" ); const UPDATED_FILE: &str = concat!( "file1.txt\n", "file1.txt\n", "_file1.txt_\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "\n", "\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "file1.txt\n", "_file1.txt_\n", "_file1.txt_\n", "file1.txt\n" ); const FILE_MODE: i32 = 0o100644; let original_file = repo.blob(ORIGINAL_FILE.as_bytes()).unwrap(); let updated_file = 
repo.blob(UPDATED_FILE.as_bytes()).unwrap(); let mut original_tree = repo.treebuilder(None).unwrap(); original_tree .insert("file1.txt", original_file, FILE_MODE) .unwrap(); let original_tree = original_tree.write().unwrap(); let mut updated_tree = repo.treebuilder(None).unwrap(); updated_tree .insert("file1.txt", updated_file, FILE_MODE) .unwrap(); let updated_tree = updated_tree.write().unwrap(); let time = Time::new(64_000_000, 0); let author = Signature::new("Techcable", "dummy@dummy.org", &time).unwrap(); let updated_commit = repo .commit( None, &author, &author, COMMIT_MESSAGE, &repo.find_tree(updated_tree).unwrap(), &[], // NOTE: Have no parents to ensure stable hash ) .unwrap(); let updated_commit = repo.find_commit(updated_commit).unwrap(); let mut diff = repo .diff_tree_to_tree( Some(&repo.find_tree(original_tree).unwrap()), Some(&repo.find_tree(updated_tree).unwrap()), None, ) .unwrap(); let actual_email = diff.format_email(1, 1, &updated_commit, None).unwrap(); let actual_email = actual_email.as_str().unwrap(); assert!( actual_email.starts_with(EXPECTED_EMAIL_START), "Unexpected email:\n{}", actual_email ); let mut remaining_lines = actual_email[EXPECTED_EMAIL_START.len()..].lines(); let version_line = remaining_lines.next(); assert!( version_line.unwrap().starts_with("libgit2"), "Invalid version line: {:?}", version_line ); while let Some(line) = remaining_lines.next() { assert_eq!(line.trim(), "") } } #[test] fn foreach_diff_line_origin_value() { let foo_path = Path::new("foo"); let (td, repo) = crate::test::repo_init(); t!(t!(File::create(&td.path().join(foo_path))).write_all(b"bar\n")); let mut index = t!(repo.index()); t!(index.add_path(foo_path)); let mut opts = DiffOptions::new(); opts.include_untracked(true); let diff = t!(repo.diff_tree_to_index(None, Some(&index), Some(&mut opts))); let mut origin_values: Vec = Vec::new(); t!(diff.foreach( &mut |_file, _progress| { true }, None, None, Some(&mut |_file, _hunk, line| { origin_values.push(line.origin_value()); true }) )); assert_eq!(origin_values.len(), 1); assert_eq!(origin_values[0], DiffLineType::Addition); } #[test] fn foreach_exits_with_euser() { let foo_path = Path::new("foo"); let bar_path = Path::new("foo"); let (td, repo) = crate::test::repo_init(); t!(t!(File::create(&td.path().join(foo_path))).write_all(b"bar\n")); let mut index = t!(repo.index()); t!(index.add_path(foo_path)); t!(index.add_path(bar_path)); let mut opts = DiffOptions::new(); opts.include_untracked(true); let diff = t!(repo.diff_tree_to_index(None, Some(&index), Some(&mut opts))); let mut calls = 0; let result = diff.foreach( &mut |_file, _progress| { calls += 1; false }, None, None, None, ); assert_eq!(result.unwrap_err().code(), crate::ErrorCode::User); } } vendor/git2/src/note.rs0000664000175000017500000001024014160055207015613 0ustar mwhudsonmwhudsonuse std::marker; use std::str; use crate::util::Binding; use crate::{raw, signature, Error, Oid, Repository, Signature}; /// A structure representing a [note][note] in git. /// /// [note]: http://alblue.bandlem.com/2011/11/git-tip-of-week-git-notes.html pub struct Note<'repo> { raw: *mut raw::git_note, // Hmm, the current libgit2 version does not have this inside of it, but // perhaps it's a good idea to keep it around? Can always remove it later I // suppose... _marker: marker::PhantomData<&'repo Repository>, } /// An iterator over all of the notes within a repository. 
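///
/// # Example
///
/// An illustrative sketch of walking all notes; passing `None` uses the
/// default notes reference, and the repository path is a placeholder:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open(".").unwrap();
/// for ids in repo.notes(None).unwrap() {
///     let (note_id, annotated_id) = ids.unwrap();
///     let note = repo.find_note(None, annotated_id).unwrap();
///     println!("{} annotates {}: {:?}", note_id, annotated_id, note.message());
/// }
/// ```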
pub struct Notes<'repo> { raw: *mut raw::git_note_iterator, _marker: marker::PhantomData<&'repo Repository>, } impl<'repo> Note<'repo> { /// Get the note author pub fn author(&self) -> Signature<'_> { unsafe { signature::from_raw_const(self, raw::git_note_author(&*self.raw)) } } /// Get the note committer pub fn committer(&self) -> Signature<'_> { unsafe { signature::from_raw_const(self, raw::git_note_committer(&*self.raw)) } } /// Get the note message, in bytes. pub fn message_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_note_message(&*self.raw)).unwrap() } } /// Get the note message as a string, returning `None` if it is not UTF-8. pub fn message(&self) -> Option<&str> { str::from_utf8(self.message_bytes()).ok() } /// Get the note object's id pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_note_id(&*self.raw)) } } } impl<'repo> Binding for Note<'repo> { type Raw = *mut raw::git_note; unsafe fn from_raw(raw: *mut raw::git_note) -> Note<'repo> { Note { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_note { self.raw } } impl<'repo> std::fmt::Debug for Note<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("Note").field("id", &self.id()).finish() } } impl<'repo> Drop for Note<'repo> { fn drop(&mut self) { unsafe { raw::git_note_free(self.raw); } } } impl<'repo> Binding for Notes<'repo> { type Raw = *mut raw::git_note_iterator; unsafe fn from_raw(raw: *mut raw::git_note_iterator) -> Notes<'repo> { Notes { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_note_iterator { self.raw } } impl<'repo> Iterator for Notes<'repo> { type Item = Result<(Oid, Oid), Error>; fn next(&mut self) -> Option> { let mut note_id = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; let mut annotated_id = note_id; unsafe { try_call_iter!(raw::git_note_next( &mut note_id, &mut annotated_id, self.raw )); Some(Ok(( Binding::from_raw(¬e_id as *const _), Binding::from_raw(&annotated_id as *const _), ))) } } } impl<'repo> Drop for Notes<'repo> { fn drop(&mut self) { unsafe { raw::git_note_iterator_free(self.raw); } } } #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); assert!(repo.notes(None).is_err()); let sig = repo.signature().unwrap(); let head = repo.head().unwrap().target().unwrap(); let note = repo.note(&sig, &sig, None, head, "foo", false).unwrap(); assert_eq!(repo.notes(None).unwrap().count(), 1); let note_obj = repo.find_note(None, head).unwrap(); assert_eq!(note_obj.id(), note); assert_eq!(note_obj.message(), Some("foo")); let (a, b) = repo.notes(None).unwrap().next().unwrap().unwrap(); assert_eq!(a, note); assert_eq!(b, head); assert_eq!(repo.note_default_ref().unwrap(), "refs/notes/commits"); assert_eq!(sig.name(), note_obj.author().name()); assert_eq!(sig.name(), note_obj.committer().name()); assert!(sig.when() == note_obj.committer().when()); } } vendor/git2/src/panic.rs0000664000175000017500000000143314160055207015744 0ustar mwhudsonmwhudsonuse std::any::Any; use std::cell::RefCell; thread_local!(static LAST_ERROR: RefCell>> = { RefCell::new(None) }); pub fn wrap T + std::panic::UnwindSafe>(f: F) -> Option { use std::panic; if LAST_ERROR.with(|slot| slot.borrow().is_some()) { return None; } match panic::catch_unwind(f) { Ok(ret) => Some(ret), Err(e) => { LAST_ERROR.with(move |slot| { *slot.borrow_mut() = Some(e); }); None } } } pub fn check() { let err = LAST_ERROR.with(|slot| slot.borrow_mut().take()); if let Some(err) = err { 
std::panic::resume_unwind(err); } } pub fn panicked() -> bool { LAST_ERROR.with(|slot| slot.borrow().is_some()) } vendor/git2/src/blame.rs0000664000175000017500000002443714160055207015743 0ustar mwhudsonmwhudsonuse crate::util::{self, Binding}; use crate::{raw, signature, Oid, Repository, Signature}; use std::marker; use std::mem; use std::ops::Range; use std::path::Path; /// Opaque structure to hold blame results. pub struct Blame<'repo> { raw: *mut raw::git_blame, _marker: marker::PhantomData<&'repo Repository>, } /// Structure that represents a blame hunk. pub struct BlameHunk<'blame> { raw: *mut raw::git_blame_hunk, _marker: marker::PhantomData<&'blame raw::git_blame>, } /// Blame options pub struct BlameOptions { raw: raw::git_blame_options, } /// An iterator over the hunks in a blame. pub struct BlameIter<'blame> { range: Range, blame: &'blame Blame<'blame>, } impl<'repo> Blame<'repo> { /// Gets the number of hunks that exist in the blame structure. pub fn len(&self) -> usize { unsafe { raw::git_blame_get_hunk_count(self.raw) as usize } } /// Return `true` is there is no hunk in the blame structure. pub fn is_empty(&self) -> bool { self.len() == 0 } /// Gets the blame hunk at the given index. pub fn get_index(&self, index: usize) -> Option> { unsafe { let ptr = raw::git_blame_get_hunk_byindex(self.raw(), index as u32); if ptr.is_null() { None } else { Some(BlameHunk::from_raw_const(ptr)) } } } /// Gets the hunk that relates to the given line number in the newest /// commit. pub fn get_line(&self, lineno: usize) -> Option> { unsafe { let ptr = raw::git_blame_get_hunk_byline(self.raw(), lineno); if ptr.is_null() { None } else { Some(BlameHunk::from_raw_const(ptr)) } } } /// Returns an iterator over the hunks in this blame. pub fn iter(&self) -> BlameIter<'_> { BlameIter { range: 0..self.len(), blame: self, } } } impl<'blame> BlameHunk<'blame> { unsafe fn from_raw_const(raw: *const raw::git_blame_hunk) -> BlameHunk<'blame> { BlameHunk { raw: raw as *mut raw::git_blame_hunk, _marker: marker::PhantomData, } } /// Returns OID of the commit where this line was last changed pub fn final_commit_id(&self) -> Oid { unsafe { Oid::from_raw(&(*self.raw).final_commit_id) } } /// Returns signature of the commit. pub fn final_signature(&self) -> Signature<'_> { unsafe { signature::from_raw_const(self, (*self.raw).final_signature) } } /// Returns line number where this hunk begins. /// /// Note that the start line is counting from 1. pub fn final_start_line(&self) -> usize { unsafe { (*self.raw).final_start_line_number } } /// Returns the OID of the commit where this hunk was found. /// /// This will usually be the same as `final_commit_id`, /// except when `BlameOptions::track_copies_any_commit_copies` has been /// turned on pub fn orig_commit_id(&self) -> Oid { unsafe { Oid::from_raw(&(*self.raw).orig_commit_id) } } /// Returns signature of the commit. pub fn orig_signature(&self) -> Signature<'_> { unsafe { signature::from_raw_const(self, (*self.raw).orig_signature) } } /// Returns line number where this hunk begins. /// /// Note that the start line is counting from 1. pub fn orig_start_line(&self) -> usize { unsafe { (*self.raw).orig_start_line_number } } /// Returns path to the file where this hunk originated. /// /// Note: `None` could be returned for non-unicode paths on Widnows. 
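    ///
    /// # Example
    ///
    /// A minimal sketch, assuming an existing repository at `/tmp/repo` with a
    /// tracked file `src/lib.rs` (both names are placeholders):
    ///
    /// ```no_run
    /// use std::path::Path;
    /// use git2::Repository;
    ///
    /// let repo = Repository::open("/tmp/repo").unwrap();
    /// let blame = repo.blame_file(Path::new("src/lib.rs"), None).unwrap();
    /// for hunk in blame.iter() {
    ///     // `path()` reports where the hunk's lines came from, which may differ
    ///     // from `src/lib.rs` once copy/move tracking options are enabled.
    ///     println!("{:?} starts at line {}", hunk.path(), hunk.final_start_line());
    /// }
    /// ```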
pub fn path(&self) -> Option<&Path> { unsafe { if let Some(bytes) = crate::opt_bytes(self, (*self.raw).orig_path) { Some(util::bytes2path(bytes)) } else { None } } } /// Tests whether this hunk has been tracked to a boundary commit /// (the root, or the commit specified in git_blame_options.oldest_commit). pub fn is_boundary(&self) -> bool { unsafe { (*self.raw).boundary == 1 } } /// Returns number of lines in this hunk. pub fn lines_in_hunk(&self) -> usize { unsafe { (*self.raw).lines_in_hunk as usize } } } impl Default for BlameOptions { fn default() -> Self { Self::new() } } impl BlameOptions { /// Initialize options pub fn new() -> BlameOptions { unsafe { let mut raw: raw::git_blame_options = mem::zeroed(); assert_eq!( raw::git_blame_init_options(&mut raw, raw::GIT_BLAME_OPTIONS_VERSION), 0 ); Binding::from_raw(&raw as *const _ as *mut _) } } fn flag(&mut self, opt: u32, val: bool) -> &mut BlameOptions { if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Track lines that have moved within a file. pub fn track_copies_same_file(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_TRACK_COPIES_SAME_FILE, opt) } /// Track lines that have moved across files in the same commit. pub fn track_copies_same_commit_moves(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_TRACK_COPIES_SAME_COMMIT_MOVES, opt) } /// Track lines that have been copied from another file that exists /// in the same commit. pub fn track_copies_same_commit_copies(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_TRACK_COPIES_SAME_COMMIT_COPIES, opt) } /// Track lines that have been copied from another file that exists /// in any commit. pub fn track_copies_any_commit_copies(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_TRACK_COPIES_ANY_COMMIT_COPIES, opt) } /// Restrict the search of commits to those reachable following only /// the first parents. pub fn first_parent(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_FIRST_PARENT, opt) } /// Use mailmap file to map author and committer names and email addresses /// to canonical real names and email addresses. The mailmap will be read /// from the working directory, or HEAD in a bare repository. pub fn use_mailmap(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_USE_MAILMAP, opt) } /// Ignore whitespace differences. pub fn ignore_whitespace(&mut self, opt: bool) -> &mut BlameOptions { self.flag(raw::GIT_BLAME_IGNORE_WHITESPACE, opt) } /// Setter for the id of the newest commit to consider. pub fn newest_commit(&mut self, id: Oid) -> &mut BlameOptions { unsafe { self.raw.newest_commit = *id.raw(); } self } /// Setter for the id of the oldest commit to consider. pub fn oldest_commit(&mut self, id: Oid) -> &mut BlameOptions { unsafe { self.raw.oldest_commit = *id.raw(); } self } /// The first line in the file to blame. pub fn min_line(&mut self, lineno: usize) -> &mut BlameOptions { self.raw.min_line = lineno; self } /// The last line in the file to blame. 
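    ///
    /// # Example
    ///
    /// A minimal sketch restricting the blame to the first ten lines of a file,
    /// assuming a repository at `/tmp/repo` containing `src/lib.rs` (both names
    /// are placeholders):
    ///
    /// ```no_run
    /// use std::path::Path;
    /// use git2::{BlameOptions, Repository};
    ///
    /// let repo = Repository::open("/tmp/repo").unwrap();
    /// let mut opts = BlameOptions::new();
    /// // Only blame lines 1 through 10 (line numbers are 1-based).
    /// opts.min_line(1).max_line(10);
    /// let blame = repo
    ///     .blame_file(Path::new("src/lib.rs"), Some(&mut opts))
    ///     .unwrap();
    /// println!("{} hunks cover the first ten lines", blame.len());
    /// ```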
pub fn max_line(&mut self, lineno: usize) -> &mut BlameOptions { self.raw.max_line = lineno; self } } impl<'repo> Binding for Blame<'repo> { type Raw = *mut raw::git_blame; unsafe fn from_raw(raw: *mut raw::git_blame) -> Blame<'repo> { Blame { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_blame { self.raw } } impl<'repo> Drop for Blame<'repo> { fn drop(&mut self) { unsafe { raw::git_blame_free(self.raw) } } } impl<'blame> Binding for BlameHunk<'blame> { type Raw = *mut raw::git_blame_hunk; unsafe fn from_raw(raw: *mut raw::git_blame_hunk) -> BlameHunk<'blame> { BlameHunk { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_blame_hunk { self.raw } } impl Binding for BlameOptions { type Raw = *mut raw::git_blame_options; unsafe fn from_raw(opts: *mut raw::git_blame_options) -> BlameOptions { BlameOptions { raw: *opts } } fn raw(&self) -> *mut raw::git_blame_options { &self.raw as *const _ as *mut _ } } impl<'blame> Iterator for BlameIter<'blame> { type Item = BlameHunk<'blame>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.blame.get_index(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'blame> DoubleEndedIterator for BlameIter<'blame> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.blame.get_index(i)) } } impl<'blame> ExactSizeIterator for BlameIter<'blame> {} #[cfg(test)] mod tests { use std::fs::{self, File}; use std::path::Path; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let mut index = repo.index().unwrap(); let root = repo.workdir().unwrap(); fs::create_dir(&root.join("foo")).unwrap(); File::create(&root.join("foo/bar")).unwrap(); index.add_path(Path::new("foo/bar")).unwrap(); let id = index.write_tree().unwrap(); let tree = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); let id = repo.refname_to_id("HEAD").unwrap(); let parent = repo.find_commit(id).unwrap(); let commit = repo .commit(Some("HEAD"), &sig, &sig, "commit", &tree, &[&parent]) .unwrap(); let blame = repo.blame_file(Path::new("foo/bar"), None).unwrap(); assert_eq!(blame.len(), 1); assert_eq!(blame.iter().count(), 1); let hunk = blame.get_index(0).unwrap(); assert_eq!(hunk.final_commit_id(), commit); assert_eq!(hunk.final_signature().name(), sig.name()); assert_eq!(hunk.final_signature().email(), sig.email()); assert_eq!(hunk.final_start_line(), 1); assert_eq!(hunk.path(), Some(Path::new("foo/bar"))); assert_eq!(hunk.lines_in_hunk(), 0); assert!(!hunk.is_boundary()) } } vendor/git2/src/call.rs0000664000175000017500000001754314160055207015576 0ustar mwhudsonmwhudson#![macro_use] use libc; use crate::Error; macro_rules! call { (raw::$p:ident ($($e:expr),*)) => ( raw::$p($(crate::call::convert(&$e)),*) ) } macro_rules! try_call { (raw::$p:ident ($($e:expr),*)) => ({ match crate::call::c_try(raw::$p($(crate::call::convert(&$e)),*)) { Ok(o) => o, Err(e) => { crate::panic::check(); return Err(e) } } }) } macro_rules! try_call_iter { ($($f:tt)*) => { match call!($($f)*) { 0 => {} raw::GIT_ITEROVER => return None, e => return Some(Err(crate::call::last_error(e))) } } } #[doc(hidden)] pub trait Convert { fn convert(&self) -> T; } pub fn convert>(u: &U) -> T { u.convert() } pub fn c_try(ret: libc::c_int) -> Result { match ret { n if n < 0 => Err(last_error(n)), n => Ok(n), } } pub fn last_error(code: libc::c_int) -> Error { // nowadays this unwrap is safe as `Error::last_error` always returns // `Some`. 
Error::last_error(code).unwrap() } mod impls { use std::ffi::CString; use std::ptr; use libc; use crate::call::Convert; use crate::{raw, BranchType, ConfigLevel, Direction, ObjectType, ResetType}; use crate::{ AutotagOption, DiffFormat, FetchPrune, FileFavor, SubmoduleIgnore, SubmoduleUpdate, }; impl Convert for T { fn convert(&self) -> T { *self } } impl Convert for bool { fn convert(&self) -> libc::c_int { *self as libc::c_int } } impl<'a, T> Convert<*const T> for &'a T { fn convert(&self) -> *const T { *self as *const T } } impl<'a, T> Convert<*mut T> for &'a mut T { fn convert(&self) -> *mut T { &**self as *const T as *mut T } } impl Convert<*const T> for *mut T { fn convert(&self) -> *const T { *self as *const T } } impl Convert<*const libc::c_char> for CString { fn convert(&self) -> *const libc::c_char { self.as_ptr() } } impl> Convert<*const T> for Option { fn convert(&self) -> *const T { self.as_ref().map(|s| s.convert()).unwrap_or(ptr::null()) } } impl> Convert<*mut T> for Option { fn convert(&self) -> *mut T { self.as_ref() .map(|s| s.convert()) .unwrap_or(ptr::null_mut()) } } impl Convert for ResetType { fn convert(&self) -> raw::git_reset_t { match *self { ResetType::Soft => raw::GIT_RESET_SOFT, ResetType::Hard => raw::GIT_RESET_HARD, ResetType::Mixed => raw::GIT_RESET_MIXED, } } } impl Convert for Direction { fn convert(&self) -> raw::git_direction { match *self { Direction::Push => raw::GIT_DIRECTION_PUSH, Direction::Fetch => raw::GIT_DIRECTION_FETCH, } } } impl Convert for ObjectType { fn convert(&self) -> raw::git_object_t { match *self { ObjectType::Any => raw::GIT_OBJECT_ANY, ObjectType::Commit => raw::GIT_OBJECT_COMMIT, ObjectType::Tree => raw::GIT_OBJECT_TREE, ObjectType::Blob => raw::GIT_OBJECT_BLOB, ObjectType::Tag => raw::GIT_OBJECT_TAG, } } } impl Convert for Option { fn convert(&self) -> raw::git_object_t { self.unwrap_or(ObjectType::Any).convert() } } impl Convert for BranchType { fn convert(&self) -> raw::git_branch_t { match *self { BranchType::Remote => raw::GIT_BRANCH_REMOTE, BranchType::Local => raw::GIT_BRANCH_LOCAL, } } } impl Convert for Option { fn convert(&self) -> raw::git_branch_t { self.map(|s| s.convert()).unwrap_or(raw::GIT_BRANCH_ALL) } } impl Convert for ConfigLevel { fn convert(&self) -> raw::git_config_level_t { match *self { ConfigLevel::ProgramData => raw::GIT_CONFIG_LEVEL_PROGRAMDATA, ConfigLevel::System => raw::GIT_CONFIG_LEVEL_SYSTEM, ConfigLevel::XDG => raw::GIT_CONFIG_LEVEL_XDG, ConfigLevel::Global => raw::GIT_CONFIG_LEVEL_GLOBAL, ConfigLevel::Local => raw::GIT_CONFIG_LEVEL_LOCAL, ConfigLevel::App => raw::GIT_CONFIG_LEVEL_APP, ConfigLevel::Highest => raw::GIT_CONFIG_HIGHEST_LEVEL, } } } impl Convert for DiffFormat { fn convert(&self) -> raw::git_diff_format_t { match *self { DiffFormat::Patch => raw::GIT_DIFF_FORMAT_PATCH, DiffFormat::PatchHeader => raw::GIT_DIFF_FORMAT_PATCH_HEADER, DiffFormat::Raw => raw::GIT_DIFF_FORMAT_RAW, DiffFormat::NameOnly => raw::GIT_DIFF_FORMAT_NAME_ONLY, DiffFormat::NameStatus => raw::GIT_DIFF_FORMAT_NAME_STATUS, DiffFormat::PatchId => raw::GIT_DIFF_FORMAT_PATCH_ID, } } } impl Convert for FileFavor { fn convert(&self) -> raw::git_merge_file_favor_t { match *self { FileFavor::Normal => raw::GIT_MERGE_FILE_FAVOR_NORMAL, FileFavor::Ours => raw::GIT_MERGE_FILE_FAVOR_OURS, FileFavor::Theirs => raw::GIT_MERGE_FILE_FAVOR_THEIRS, FileFavor::Union => raw::GIT_MERGE_FILE_FAVOR_UNION, } } } impl Convert for SubmoduleIgnore { fn convert(&self) -> raw::git_submodule_ignore_t { match *self { SubmoduleIgnore::Unspecified => 
raw::GIT_SUBMODULE_IGNORE_UNSPECIFIED, SubmoduleIgnore::None => raw::GIT_SUBMODULE_IGNORE_NONE, SubmoduleIgnore::Untracked => raw::GIT_SUBMODULE_IGNORE_UNTRACKED, SubmoduleIgnore::Dirty => raw::GIT_SUBMODULE_IGNORE_DIRTY, SubmoduleIgnore::All => raw::GIT_SUBMODULE_IGNORE_ALL, } } } impl Convert for SubmoduleUpdate { fn convert(&self) -> raw::git_submodule_update_t { match *self { SubmoduleUpdate::Checkout => raw::GIT_SUBMODULE_UPDATE_CHECKOUT, SubmoduleUpdate::Rebase => raw::GIT_SUBMODULE_UPDATE_REBASE, SubmoduleUpdate::Merge => raw::GIT_SUBMODULE_UPDATE_MERGE, SubmoduleUpdate::None => raw::GIT_SUBMODULE_UPDATE_NONE, SubmoduleUpdate::Default => raw::GIT_SUBMODULE_UPDATE_DEFAULT, } } } impl Convert for AutotagOption { fn convert(&self) -> raw::git_remote_autotag_option_t { match *self { AutotagOption::Unspecified => raw::GIT_REMOTE_DOWNLOAD_TAGS_UNSPECIFIED, AutotagOption::None => raw::GIT_REMOTE_DOWNLOAD_TAGS_NONE, AutotagOption::Auto => raw::GIT_REMOTE_DOWNLOAD_TAGS_AUTO, AutotagOption::All => raw::GIT_REMOTE_DOWNLOAD_TAGS_ALL, } } } impl Convert for FetchPrune { fn convert(&self) -> raw::git_fetch_prune_t { match *self { FetchPrune::Unspecified => raw::GIT_FETCH_PRUNE_UNSPECIFIED, FetchPrune::On => raw::GIT_FETCH_PRUNE, FetchPrune::Off => raw::GIT_FETCH_NO_PRUNE, } } } } vendor/git2/src/patch.rs0000664000175000017500000001607014160055207015754 0ustar mwhudsonmwhudsonuse libc::{c_int, c_void}; use std::marker::PhantomData; use std::path::Path; use std::ptr; use crate::diff::{print_cb, LineCb}; use crate::util::{into_opt_c_string, Binding}; use crate::{raw, Blob, Buf, Diff, DiffDelta, DiffHunk, DiffLine, DiffOptions, Error}; /// A structure representing the text changes in a single diff delta. /// /// This is an opaque structure. pub struct Patch<'buffers> { raw: *mut raw::git_patch, buffers: PhantomData<&'buffers ()>, } unsafe impl<'buffers> Send for Patch<'buffers> {} impl<'buffers> Binding for Patch<'buffers> { type Raw = *mut raw::git_patch; unsafe fn from_raw(raw: Self::Raw) -> Self { Patch { raw, buffers: PhantomData, } } fn raw(&self) -> Self::Raw { self.raw } } impl<'buffers> Drop for Patch<'buffers> { fn drop(&mut self) { unsafe { raw::git_patch_free(self.raw) } } } impl<'buffers> Patch<'buffers> { /// Return a Patch for one file in a Diff. /// /// Returns Ok(None) for an unchanged or binary file. pub fn from_diff(diff: &Diff<'buffers>, idx: usize) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_patch_from_diff(&mut ret, diff.raw(), idx)); Ok(Binding::from_raw_opt(ret)) } } /// Generate a Patch by diffing two blobs. pub fn from_blobs( old_blob: &Blob<'buffers>, old_path: Option<&Path>, new_blob: &Blob<'buffers>, new_path: Option<&Path>, opts: Option<&mut DiffOptions>, ) -> Result { let mut ret = ptr::null_mut(); let old_path = into_opt_c_string(old_path)?; let new_path = into_opt_c_string(new_path)?; unsafe { try_call!(raw::git_patch_from_blobs( &mut ret, old_blob.raw(), old_path, new_blob.raw(), new_path, opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Generate a Patch by diffing a blob and a buffer. 
pub fn from_blob_and_buffer( old_blob: &Blob<'buffers>, old_path: Option<&Path>, new_buffer: &'buffers [u8], new_path: Option<&Path>, opts: Option<&mut DiffOptions>, ) -> Result { let mut ret = ptr::null_mut(); let old_path = into_opt_c_string(old_path)?; let new_path = into_opt_c_string(new_path)?; unsafe { try_call!(raw::git_patch_from_blob_and_buffer( &mut ret, old_blob.raw(), old_path, new_buffer.as_ptr() as *const c_void, new_buffer.len(), new_path, opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Generate a Patch by diffing two buffers. pub fn from_buffers( old_buffer: &'buffers [u8], old_path: Option<&Path>, new_buffer: &'buffers [u8], new_path: Option<&Path>, opts: Option<&mut DiffOptions>, ) -> Result { crate::init(); let mut ret = ptr::null_mut(); let old_path = into_opt_c_string(old_path)?; let new_path = into_opt_c_string(new_path)?; unsafe { try_call!(raw::git_patch_from_buffers( &mut ret, old_buffer.as_ptr() as *const c_void, old_buffer.len(), old_path, new_buffer.as_ptr() as *const c_void, new_buffer.len(), new_path, opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Get the DiffDelta associated with the Patch. pub fn delta(&self) -> DiffDelta<'buffers> { unsafe { Binding::from_raw(raw::git_patch_get_delta(self.raw) as *mut _) } } /// Get the number of hunks in the Patch. pub fn num_hunks(&self) -> usize { unsafe { raw::git_patch_num_hunks(self.raw) } } /// Get the number of lines of context, additions, and deletions in the Patch. pub fn line_stats(&self) -> Result<(usize, usize, usize), Error> { let mut context = 0; let mut additions = 0; let mut deletions = 0; unsafe { try_call!(raw::git_patch_line_stats( &mut context, &mut additions, &mut deletions, self.raw )); } Ok((context, additions, deletions)) } /// Get a DiffHunk and its total line count from the Patch. pub fn hunk(&self, hunk_idx: usize) -> Result<(DiffHunk<'buffers>, usize), Error> { let mut ret = ptr::null(); let mut lines = 0; unsafe { try_call!(raw::git_patch_get_hunk( &mut ret, &mut lines, self.raw, hunk_idx )); Ok((Binding::from_raw(ret), lines)) } } /// Get the number of lines in a hunk. pub fn num_lines_in_hunk(&self, hunk_idx: usize) -> Result { unsafe { Ok(try_call!(raw::git_patch_num_lines_in_hunk(self.raw, hunk_idx)) as usize) } } /// Get a DiffLine from a hunk of the Patch. pub fn line_in_hunk( &self, hunk_idx: usize, line_of_hunk: usize, ) -> Result, Error> { let mut ret = ptr::null(); unsafe { try_call!(raw::git_patch_get_line_in_hunk( &mut ret, self.raw, hunk_idx, line_of_hunk )); Ok(Binding::from_raw(ret)) } } /// Get the size of a Patch's diff data in bytes. pub fn size( &self, include_context: bool, include_hunk_headers: bool, include_file_headers: bool, ) -> usize { unsafe { raw::git_patch_size( self.raw, include_context as c_int, include_hunk_headers as c_int, include_file_headers as c_int, ) } } /// Print the Patch to text via a callback. pub fn print(&mut self, mut line_cb: &mut LineCb<'_>) -> Result<(), Error> { let ptr = &mut line_cb as *mut _ as *mut c_void; unsafe { let cb: raw::git_diff_line_cb = Some(print_cb); try_call!(raw::git_patch_print(self.raw, cb, ptr)); Ok(()) } } /// Get the Patch text as a Buf. 
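    ///
    /// # Example
    ///
    /// A minimal sketch that diffs two in-memory buffers and renders the patch
    /// text; `greeting.txt` is only a label used in the generated headers:
    ///
    /// ```
    /// use std::path::Path;
    /// use git2::Patch;
    ///
    /// let old = b"hello\n";
    /// let new = b"hello world\n";
    /// let label = Some(Path::new("greeting.txt"));
    /// let mut patch = Patch::from_buffers(old, label, new, label, None).unwrap();
    /// let buf = patch.to_buf().unwrap();
    /// // The buffer holds the unified diff for the two buffers.
    /// assert!(buf.as_str().unwrap().contains("greeting.txt"));
    /// ```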
pub fn to_buf(&mut self) -> Result { let buf = Buf::new(); unsafe { try_call!(raw::git_patch_to_buf(buf.raw(), self.raw)); } Ok(buf) } } impl<'buffers> std::fmt::Debug for Patch<'buffers> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("Patch"); ds.field("delta", &self.delta()) .field("num_hunks", &self.num_hunks()); if let Ok(line_stats) = &self.line_stats() { ds.field("line_stats", line_stats); } ds.finish() } } vendor/git2/src/error.rs0000664000175000017500000003745514160055207016020 0ustar mwhudsonmwhudsonuse libc::c_int; use std::env::JoinPathsError; use std::error; use std::ffi::{CStr, NulError}; use std::fmt; use std::str; use crate::{raw, ErrorClass, ErrorCode}; /// A structure to represent errors coming out of libgit2. #[derive(Debug, PartialEq)] pub struct Error { code: c_int, klass: c_int, message: String, } impl Error { /// Creates a new error. /// /// This is mainly intended for implementors of custom transports or /// database backends, where it is desirable to propagate an [`Error`] /// through `libgit2`. pub fn new>(code: ErrorCode, class: ErrorClass, message: S) -> Self { let mut err = Error::from_str(message.as_ref()); err.set_code(code); err.set_class(class); err } /// Returns the last error that happened with the code specified by `code`. /// /// The `code` argument typically comes from the return value of a function /// call. This code will later be returned from the `code` function. /// /// Historically this function returned `Some` or `None` based on the return /// value of `git_error_last` but nowadays it always returns `Some` so it's /// safe to unwrap the return value. This API will change in the next major /// version. pub fn last_error(code: c_int) -> Option { crate::init(); unsafe { // Note that whenever libgit2 returns an error any negative value // indicates that an error happened. Auxiliary information is // *usually* in `git_error_last` but unfortunately that's not always // the case. Sometimes a negative error code is returned from // libgit2 *without* calling `git_error_set` internally to configure // the error. // // To handle this case and hopefully provide better error messages // on our end we unconditionally call `git_error_clear` when we're done // with an error. This is an attempt to clear it as aggressively as // possible when we can to ensure that error information from one // api invocation doesn't leak over to the next api invocation. // // Additionally if `git_error_last` returns null then we returned a // canned error out. let ptr = raw::git_error_last(); let err = if ptr.is_null() { let mut error = Error::from_str("an unknown git error occurred"); error.code = code; error } else { Error::from_raw(code, ptr) }; raw::git_error_clear(); Some(err) } } unsafe fn from_raw(code: c_int, ptr: *const raw::git_error) -> Error { let message = CStr::from_ptr((*ptr).message as *const _).to_bytes(); let message = String::from_utf8_lossy(message).into_owned(); Error { code, klass: (*ptr).klass, message, } } /// Creates a new error from the given string as the error. /// /// The error returned will have the code `GIT_ERROR` and the class /// `GIT_ERROR_NONE`. pub fn from_str(s: &str) -> Error { Error { code: raw::GIT_ERROR as c_int, klass: raw::GIT_ERROR_NONE as c_int, message: s.to_string(), } } /// Return the error code associated with this error. /// /// An error code is intended to be programmatically actionable most of the /// time. 
For example the code `GIT_EAGAIN` indicates that an error could be /// fixed by trying again, while the code `GIT_ERROR` is more bland and /// doesn't convey anything in particular. pub fn code(&self) -> ErrorCode { match self.raw_code() { raw::GIT_OK => super::ErrorCode::GenericError, raw::GIT_ERROR => super::ErrorCode::GenericError, raw::GIT_ENOTFOUND => super::ErrorCode::NotFound, raw::GIT_EEXISTS => super::ErrorCode::Exists, raw::GIT_EAMBIGUOUS => super::ErrorCode::Ambiguous, raw::GIT_EBUFS => super::ErrorCode::BufSize, raw::GIT_EUSER => super::ErrorCode::User, raw::GIT_EBAREREPO => super::ErrorCode::BareRepo, raw::GIT_EUNBORNBRANCH => super::ErrorCode::UnbornBranch, raw::GIT_EUNMERGED => super::ErrorCode::Unmerged, raw::GIT_ENONFASTFORWARD => super::ErrorCode::NotFastForward, raw::GIT_EINVALIDSPEC => super::ErrorCode::InvalidSpec, raw::GIT_ECONFLICT => super::ErrorCode::Conflict, raw::GIT_ELOCKED => super::ErrorCode::Locked, raw::GIT_EMODIFIED => super::ErrorCode::Modified, raw::GIT_PASSTHROUGH => super::ErrorCode::GenericError, raw::GIT_ITEROVER => super::ErrorCode::GenericError, raw::GIT_EAUTH => super::ErrorCode::Auth, raw::GIT_ECERTIFICATE => super::ErrorCode::Certificate, raw::GIT_EAPPLIED => super::ErrorCode::Applied, raw::GIT_EPEEL => super::ErrorCode::Peel, raw::GIT_EEOF => super::ErrorCode::Eof, raw::GIT_EINVALID => super::ErrorCode::Invalid, raw::GIT_EUNCOMMITTED => super::ErrorCode::Uncommitted, raw::GIT_EDIRECTORY => super::ErrorCode::Directory, raw::GIT_EMERGECONFLICT => super::ErrorCode::MergeConflict, raw::GIT_EMISMATCH => super::ErrorCode::HashsumMismatch, raw::GIT_EINDEXDIRTY => super::ErrorCode::IndexDirty, raw::GIT_EAPPLYFAIL => super::ErrorCode::ApplyFail, _ => super::ErrorCode::GenericError, } } /// Modify the error code associated with this error. /// /// This is mainly intended to be used by implementors of custom transports /// or database backends, and should be used with care. pub fn set_code(&mut self, code: ErrorCode) { self.code = match code { ErrorCode::GenericError => raw::GIT_ERROR, ErrorCode::NotFound => raw::GIT_ENOTFOUND, ErrorCode::Exists => raw::GIT_EEXISTS, ErrorCode::Ambiguous => raw::GIT_EAMBIGUOUS, ErrorCode::BufSize => raw::GIT_EBUFS, ErrorCode::User => raw::GIT_EUSER, ErrorCode::BareRepo => raw::GIT_EBAREREPO, ErrorCode::UnbornBranch => raw::GIT_EUNBORNBRANCH, ErrorCode::Unmerged => raw::GIT_EUNMERGED, ErrorCode::NotFastForward => raw::GIT_ENONFASTFORWARD, ErrorCode::InvalidSpec => raw::GIT_EINVALIDSPEC, ErrorCode::Conflict => raw::GIT_ECONFLICT, ErrorCode::Locked => raw::GIT_ELOCKED, ErrorCode::Modified => raw::GIT_EMODIFIED, ErrorCode::Auth => raw::GIT_EAUTH, ErrorCode::Certificate => raw::GIT_ECERTIFICATE, ErrorCode::Applied => raw::GIT_EAPPLIED, ErrorCode::Peel => raw::GIT_EPEEL, ErrorCode::Eof => raw::GIT_EEOF, ErrorCode::Invalid => raw::GIT_EINVALID, ErrorCode::Uncommitted => raw::GIT_EUNCOMMITTED, ErrorCode::Directory => raw::GIT_EDIRECTORY, ErrorCode::MergeConflict => raw::GIT_EMERGECONFLICT, ErrorCode::HashsumMismatch => raw::GIT_EMISMATCH, ErrorCode::IndexDirty => raw::GIT_EINDEXDIRTY, ErrorCode::ApplyFail => raw::GIT_EAPPLYFAIL, }; } /// Return the error class associated with this error. /// /// Error classes are in general mostly just informative. For example the /// class will show up in the error message but otherwise an error class is /// typically not directly actionable. 
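    ///
    /// # Example
    ///
    /// A minimal sketch of branching on the code while only reporting the
    /// class, assuming a repository at `/tmp/repo` (the path and submodule
    /// name are placeholders):
    ///
    /// ```no_run
    /// use git2::{ErrorCode, Repository};
    ///
    /// let repo = Repository::open("/tmp/repo").unwrap();
    /// match repo.find_submodule("does-not-exist") {
    ///     Ok(_) => println!("submodule found"),
    ///     // Branch on the code; the class mostly adds context to the message.
    ///     Err(e) if e.code() == ErrorCode::NotFound => {
    ///         println!("no such submodule (class: {:?})", e.class())
    ///     }
    ///     Err(e) => panic!("unexpected error: {}", e),
    /// }
    /// ```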
pub fn class(&self) -> ErrorClass { match self.raw_class() { raw::GIT_ERROR_NONE => super::ErrorClass::None, raw::GIT_ERROR_NOMEMORY => super::ErrorClass::NoMemory, raw::GIT_ERROR_OS => super::ErrorClass::Os, raw::GIT_ERROR_INVALID => super::ErrorClass::Invalid, raw::GIT_ERROR_REFERENCE => super::ErrorClass::Reference, raw::GIT_ERROR_ZLIB => super::ErrorClass::Zlib, raw::GIT_ERROR_REPOSITORY => super::ErrorClass::Repository, raw::GIT_ERROR_CONFIG => super::ErrorClass::Config, raw::GIT_ERROR_REGEX => super::ErrorClass::Regex, raw::GIT_ERROR_ODB => super::ErrorClass::Odb, raw::GIT_ERROR_INDEX => super::ErrorClass::Index, raw::GIT_ERROR_OBJECT => super::ErrorClass::Object, raw::GIT_ERROR_NET => super::ErrorClass::Net, raw::GIT_ERROR_TAG => super::ErrorClass::Tag, raw::GIT_ERROR_TREE => super::ErrorClass::Tree, raw::GIT_ERROR_INDEXER => super::ErrorClass::Indexer, raw::GIT_ERROR_SSL => super::ErrorClass::Ssl, raw::GIT_ERROR_SUBMODULE => super::ErrorClass::Submodule, raw::GIT_ERROR_THREAD => super::ErrorClass::Thread, raw::GIT_ERROR_STASH => super::ErrorClass::Stash, raw::GIT_ERROR_CHECKOUT => super::ErrorClass::Checkout, raw::GIT_ERROR_FETCHHEAD => super::ErrorClass::FetchHead, raw::GIT_ERROR_MERGE => super::ErrorClass::Merge, raw::GIT_ERROR_SSH => super::ErrorClass::Ssh, raw::GIT_ERROR_FILTER => super::ErrorClass::Filter, raw::GIT_ERROR_REVERT => super::ErrorClass::Revert, raw::GIT_ERROR_CALLBACK => super::ErrorClass::Callback, raw::GIT_ERROR_CHERRYPICK => super::ErrorClass::CherryPick, raw::GIT_ERROR_DESCRIBE => super::ErrorClass::Describe, raw::GIT_ERROR_REBASE => super::ErrorClass::Rebase, raw::GIT_ERROR_FILESYSTEM => super::ErrorClass::Filesystem, raw::GIT_ERROR_PATCH => super::ErrorClass::Patch, raw::GIT_ERROR_WORKTREE => super::ErrorClass::Worktree, raw::GIT_ERROR_SHA1 => super::ErrorClass::Sha1, raw::GIT_ERROR_HTTP => super::ErrorClass::Http, _ => super::ErrorClass::None, } } /// Modify the error class associated with this error. /// /// This is mainly intended to be used by implementors of custom transports /// or database backends, and should be used with care. 
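    ///
    /// # Example
    ///
    /// A minimal sketch of shaping an error to hand back to libgit2 from a
    /// custom backend; [`Error::new`] does the same thing in one call:
    ///
    /// ```
    /// use git2::{Error, ErrorClass, ErrorCode};
    ///
    /// let mut err = Error::from_str("object database is offline");
    /// err.set_code(ErrorCode::NotFound);
    /// err.set_class(ErrorClass::Odb);
    ///
    /// let err2 = Error::new(ErrorCode::NotFound, ErrorClass::Odb, "object database is offline");
    /// assert_eq!(err.code(), err2.code());
    /// assert_eq!(err.class(), err2.class());
    /// ```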
pub fn set_class(&mut self, class: ErrorClass) { self.klass = match class { ErrorClass::None => raw::GIT_ERROR_NONE, ErrorClass::NoMemory => raw::GIT_ERROR_NOMEMORY, ErrorClass::Os => raw::GIT_ERROR_OS, ErrorClass::Invalid => raw::GIT_ERROR_INVALID, ErrorClass::Reference => raw::GIT_ERROR_REFERENCE, ErrorClass::Zlib => raw::GIT_ERROR_ZLIB, ErrorClass::Repository => raw::GIT_ERROR_REPOSITORY, ErrorClass::Config => raw::GIT_ERROR_CONFIG, ErrorClass::Regex => raw::GIT_ERROR_REGEX, ErrorClass::Odb => raw::GIT_ERROR_ODB, ErrorClass::Index => raw::GIT_ERROR_INDEX, ErrorClass::Object => raw::GIT_ERROR_OBJECT, ErrorClass::Net => raw::GIT_ERROR_NET, ErrorClass::Tag => raw::GIT_ERROR_TAG, ErrorClass::Tree => raw::GIT_ERROR_TREE, ErrorClass::Indexer => raw::GIT_ERROR_INDEXER, ErrorClass::Ssl => raw::GIT_ERROR_SSL, ErrorClass::Submodule => raw::GIT_ERROR_SUBMODULE, ErrorClass::Thread => raw::GIT_ERROR_THREAD, ErrorClass::Stash => raw::GIT_ERROR_STASH, ErrorClass::Checkout => raw::GIT_ERROR_CHECKOUT, ErrorClass::FetchHead => raw::GIT_ERROR_FETCHHEAD, ErrorClass::Merge => raw::GIT_ERROR_MERGE, ErrorClass::Ssh => raw::GIT_ERROR_SSH, ErrorClass::Filter => raw::GIT_ERROR_FILTER, ErrorClass::Revert => raw::GIT_ERROR_REVERT, ErrorClass::Callback => raw::GIT_ERROR_CALLBACK, ErrorClass::CherryPick => raw::GIT_ERROR_CHERRYPICK, ErrorClass::Describe => raw::GIT_ERROR_DESCRIBE, ErrorClass::Rebase => raw::GIT_ERROR_REBASE, ErrorClass::Filesystem => raw::GIT_ERROR_FILESYSTEM, ErrorClass::Patch => raw::GIT_ERROR_PATCH, ErrorClass::Worktree => raw::GIT_ERROR_WORKTREE, ErrorClass::Sha1 => raw::GIT_ERROR_SHA1, ErrorClass::Http => raw::GIT_ERROR_HTTP, } as c_int; } /// Return the raw error code associated with this error. pub fn raw_code(&self) -> raw::git_error_code { macro_rules! check( ($($e:ident,)*) => ( $(if self.code == raw::$e as c_int { raw::$e }) else * else { raw::GIT_ERROR } ) ); check!( GIT_OK, GIT_ERROR, GIT_ENOTFOUND, GIT_EEXISTS, GIT_EAMBIGUOUS, GIT_EBUFS, GIT_EUSER, GIT_EBAREREPO, GIT_EUNBORNBRANCH, GIT_EUNMERGED, GIT_ENONFASTFORWARD, GIT_EINVALIDSPEC, GIT_ECONFLICT, GIT_ELOCKED, GIT_EMODIFIED, GIT_EAUTH, GIT_ECERTIFICATE, GIT_EAPPLIED, GIT_EPEEL, GIT_EEOF, GIT_EINVALID, GIT_EUNCOMMITTED, GIT_PASSTHROUGH, GIT_ITEROVER, GIT_RETRY, GIT_EMISMATCH, GIT_EINDEXDIRTY, GIT_EAPPLYFAIL, ) } /// Return the raw error class associated with this error. pub fn raw_class(&self) -> raw::git_error_t { macro_rules! 
check( ($($e:ident,)*) => ( $(if self.klass == raw::$e as c_int { raw::$e }) else * else { raw::GIT_ERROR_NONE } ) ); check!( GIT_ERROR_NONE, GIT_ERROR_NOMEMORY, GIT_ERROR_OS, GIT_ERROR_INVALID, GIT_ERROR_REFERENCE, GIT_ERROR_ZLIB, GIT_ERROR_REPOSITORY, GIT_ERROR_CONFIG, GIT_ERROR_REGEX, GIT_ERROR_ODB, GIT_ERROR_INDEX, GIT_ERROR_OBJECT, GIT_ERROR_NET, GIT_ERROR_TAG, GIT_ERROR_TREE, GIT_ERROR_INDEXER, GIT_ERROR_SSL, GIT_ERROR_SUBMODULE, GIT_ERROR_THREAD, GIT_ERROR_STASH, GIT_ERROR_CHECKOUT, GIT_ERROR_FETCHHEAD, GIT_ERROR_MERGE, GIT_ERROR_SSH, GIT_ERROR_FILTER, GIT_ERROR_REVERT, GIT_ERROR_CALLBACK, GIT_ERROR_CHERRYPICK, GIT_ERROR_DESCRIBE, GIT_ERROR_REBASE, GIT_ERROR_FILESYSTEM, GIT_ERROR_PATCH, GIT_ERROR_WORKTREE, GIT_ERROR_SHA1, GIT_ERROR_HTTP, ) } /// Return the message associated with this error pub fn message(&self) -> &str { &self.message } } impl error::Error for Error {} impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", self.message)?; match self.class() { ErrorClass::None => {} other => write!(f, "; class={:?} ({})", other, self.klass)?, } match self.code() { ErrorCode::GenericError => {} other => write!(f, "; code={:?} ({})", other, self.code)?, } Ok(()) } } impl From for Error { fn from(_: NulError) -> Error { Error::from_str( "data contained a nul byte that could not be \ represented as a string", ) } } impl From for Error { fn from(e: JoinPathsError) -> Error { Error::from_str(&e.to_string()) } } #[cfg(test)] mod tests { use crate::{ErrorClass, ErrorCode}; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let err = repo.find_submodule("does_not_exist").err().unwrap(); assert_eq!(err.code(), ErrorCode::NotFound); assert_eq!(err.class(), ErrorClass::Submodule); } } vendor/git2/src/revwalk.rs0000664000175000017500000002225114160055207016326 0ustar mwhudsonmwhudsonuse libc::{c_int, c_uint, c_void}; use std::ffi::CString; use std::marker; use crate::util::Binding; use crate::{panic, raw, Error, Oid, Repository, Sort}; /// A revwalk allows traversal of the commit graph defined by including one or /// more leaves and excluding one or more roots. pub struct Revwalk<'repo> { raw: *mut raw::git_revwalk, _marker: marker::PhantomData<&'repo Repository>, } /// A `Revwalk` with an assiciated "hide callback", see `with_hide_callback` pub struct RevwalkWithHideCb<'repo, 'cb, C> where C: FnMut(Oid) -> bool, { revwalk: Revwalk<'repo>, _marker: marker::PhantomData<&'cb C>, } extern "C" fn revwalk_hide_cb(commit_id: *const raw::git_oid, payload: *mut c_void) -> c_int where C: FnMut(Oid) -> bool, { panic::wrap(|| unsafe { let hide_cb = payload as *mut C; if (*hide_cb)(Oid::from_raw(commit_id)) { 1 } else { 0 } }) .unwrap_or(-1) } impl<'repo, 'cb, C: FnMut(Oid) -> bool> RevwalkWithHideCb<'repo, 'cb, C> { /// Consumes the `RevwalkWithHideCb` and returns the contained `Revwalk`. /// /// Note that this will reset the `Revwalk`. pub fn into_inner(mut self) -> Result, Error> { self.revwalk.reset()?; Ok(self.revwalk) } } impl<'repo> Revwalk<'repo> { /// Reset a revwalk to allow re-configuring it. /// /// The revwalk is automatically reset when iteration of its commits /// completes. pub fn reset(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_reset(self.raw())); } Ok(()) } /// Set the order in which commits are visited. 
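    ///
    /// # Example
    ///
    /// A minimal sketch, assuming a repository at `/tmp/repo` with at least one
    /// commit (the path is a placeholder): walk history from `HEAD`, oldest
    /// commit first.
    ///
    /// ```no_run
    /// use git2::{Repository, Sort};
    ///
    /// let repo = Repository::open("/tmp/repo").unwrap();
    /// let mut walk = repo.revwalk().unwrap();
    /// walk.push_head().unwrap();
    /// // Sort by commit time, then reverse so the oldest commit comes out first.
    /// walk.set_sorting(Sort::TIME | Sort::REVERSE).unwrap();
    /// for id in walk {
    ///     println!("{}", id.unwrap());
    /// }
    /// ```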
pub fn set_sorting(&mut self, sort_mode: Sort) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_sorting( self.raw(), sort_mode.bits() as c_uint )); } Ok(()) } /// Simplify the history by first-parent /// /// No parents other than the first for each commit will be enqueued. pub fn simplify_first_parent(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_simplify_first_parent(self.raw)); } Ok(()) } /// Mark a commit to start traversal from. /// /// The given OID must belong to a committish on the walked repository. /// /// The given commit will be used as one of the roots when starting the /// revision walk. At least one commit must be pushed onto the walker before /// a walk can be started. pub fn push(&mut self, oid: Oid) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_push(self.raw(), oid.raw())); } Ok(()) } /// Push the repository's HEAD /// /// For more information, see `push`. pub fn push_head(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_push_head(self.raw())); } Ok(()) } /// Push matching references /// /// The OIDs pointed to by the references that match the given glob pattern /// will be pushed to the revision walker. /// /// A leading 'refs/' is implied if not present as well as a trailing `/ \ /// *` if the glob lacks '?', ' \ *' or '['. /// /// Any references matching this glob which do not point to a committish /// will be ignored. pub fn push_glob(&mut self, glob: &str) -> Result<(), Error> { let glob = CString::new(glob)?; unsafe { try_call!(raw::git_revwalk_push_glob(self.raw, glob)); } Ok(()) } /// Push and hide the respective endpoints of the given range. /// /// The range should be of the form `..` where each /// `` is in the form accepted by `revparse_single`. The left-hand /// commit will be hidden and the right-hand commit pushed. pub fn push_range(&mut self, range: &str) -> Result<(), Error> { let range = CString::new(range)?; unsafe { try_call!(raw::git_revwalk_push_range(self.raw, range)); } Ok(()) } /// Push the OID pointed to by a reference /// /// The reference must point to a committish. pub fn push_ref(&mut self, reference: &str) -> Result<(), Error> { let reference = CString::new(reference)?; unsafe { try_call!(raw::git_revwalk_push_ref(self.raw, reference)); } Ok(()) } /// Mark a commit as not of interest to this revwalk. pub fn hide(&mut self, oid: Oid) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_hide(self.raw(), oid.raw())); } Ok(()) } /// Hide all commits for which the callback returns true from /// the walk. pub fn with_hide_callback<'cb, C>( self, callback: &'cb C, ) -> Result, Error> where C: FnMut(Oid) -> bool, { let r = RevwalkWithHideCb { revwalk: self, _marker: marker::PhantomData, }; unsafe { raw::git_revwalk_add_hide_cb( r.revwalk.raw(), Some(revwalk_hide_cb::), callback as *const _ as *mut c_void, ); }; Ok(r) } /// Hide the repository's HEAD /// /// For more information, see `hide`. pub fn hide_head(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_revwalk_hide_head(self.raw())); } Ok(()) } /// Hide matching references. /// /// The OIDs pointed to by the references that match the given glob pattern /// and their ancestors will be hidden from the output on the revision walk. /// /// A leading 'refs/' is implied if not present as well as a trailing `/ \ /// *` if the glob lacks '?', ' \ *' or '['. /// /// Any references matching this glob which do not point to a committish /// will be ignored. 
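    ///
    /// # Example
    ///
    /// A minimal sketch counting the commits reachable from `HEAD` but not from
    /// any tag, assuming a repository at `/tmp/repo` (the path is a
    /// placeholder):
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open("/tmp/repo").unwrap();
    /// let mut walk = repo.revwalk().unwrap();
    /// walk.push_head().unwrap();
    /// // The leading "refs/" is implied, so this hides everything under refs/tags/.
    /// walk.hide_glob("tags/*").unwrap();
    /// println!("{} commits since the last tag", walk.count());
    /// ```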
pub fn hide_glob(&mut self, glob: &str) -> Result<(), Error> { let glob = CString::new(glob)?; unsafe { try_call!(raw::git_revwalk_hide_glob(self.raw, glob)); } Ok(()) } /// Hide the OID pointed to by a reference. /// /// The reference must point to a committish. pub fn hide_ref(&mut self, reference: &str) -> Result<(), Error> { let reference = CString::new(reference)?; unsafe { try_call!(raw::git_revwalk_hide_ref(self.raw, reference)); } Ok(()) } } impl<'repo> Binding for Revwalk<'repo> { type Raw = *mut raw::git_revwalk; unsafe fn from_raw(raw: *mut raw::git_revwalk) -> Revwalk<'repo> { Revwalk { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_revwalk { self.raw } } impl<'repo> Drop for Revwalk<'repo> { fn drop(&mut self) { unsafe { raw::git_revwalk_free(self.raw) } } } impl<'repo> Iterator for Revwalk<'repo> { type Item = Result; fn next(&mut self) -> Option> { let mut out: raw::git_oid = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call_iter!(raw::git_revwalk_next(&mut out, self.raw())); Some(Ok(Binding::from_raw(&out as *const _))) } } } impl<'repo, 'cb, C: FnMut(Oid) -> bool> Iterator for RevwalkWithHideCb<'repo, 'cb, C> { type Item = Result; fn next(&mut self) -> Option> { let out = self.revwalk.next(); crate::panic::check(); out } } #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let target = head.target().unwrap(); let mut walk = repo.revwalk().unwrap(); walk.push(target).unwrap(); let oids: Vec = walk.by_ref().collect::, _>>().unwrap(); assert_eq!(oids.len(), 1); assert_eq!(oids[0], target); walk.reset().unwrap(); walk.push_head().unwrap(); assert_eq!(walk.by_ref().count(), 1); walk.reset().unwrap(); walk.push_head().unwrap(); walk.hide_head().unwrap(); assert_eq!(walk.by_ref().count(), 0); } #[test] fn smoke_hide_cb() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let target = head.target().unwrap(); let mut walk = repo.revwalk().unwrap(); walk.push(target).unwrap(); let oids: Vec = walk.by_ref().collect::, _>>().unwrap(); assert_eq!(oids.len(), 1); assert_eq!(oids[0], target); walk.reset().unwrap(); walk.push_head().unwrap(); assert_eq!(walk.by_ref().count(), 1); walk.reset().unwrap(); walk.push_head().unwrap(); let hide_cb = |oid| oid == target; let mut walk = walk.with_hide_callback(&hide_cb).unwrap(); assert_eq!(walk.by_ref().count(), 0); let mut walk = walk.into_inner().unwrap(); walk.push_head().unwrap(); assert_eq!(walk.by_ref().count(), 1); } } vendor/git2/src/config.rs0000664000175000017500000006136314160055207016127 0ustar mwhudsonmwhudsonuse libc; use std::ffi::CString; use std::marker; use std::path::{Path, PathBuf}; use std::ptr; use std::str; use crate::util::{self, Binding}; use crate::{raw, Buf, ConfigLevel, Error, IntoCString}; /// A structure representing a git configuration key/value store pub struct Config { raw: *mut raw::git_config, } /// A struct representing a certain entry owned by a `Config` instance. /// /// An entry has a name, a value, and a level it applies to. pub struct ConfigEntry<'cfg> { raw: *mut raw::git_config_entry, _marker: marker::PhantomData<&'cfg Config>, owned: bool, } /// An iterator over the `ConfigEntry` values of a `Config` structure. 
pub struct ConfigEntries<'cfg> { raw: *mut raw::git_config_iterator, _marker: marker::PhantomData<&'cfg Config>, } impl Config { /// Allocate a new configuration object /// /// This object is empty, so you have to add a file to it before you can do /// anything with it. pub fn new() -> Result { crate::init(); let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_config_new(&mut raw)); Ok(Binding::from_raw(raw)) } } /// Create a new config instance containing a single on-disk file pub fn open(path: &Path) -> Result { crate::init(); let mut raw = ptr::null_mut(); // Normal file path OK (does not need Windows conversion). let path = path.into_c_string()?; unsafe { try_call!(raw::git_config_open_ondisk(&mut raw, path)); Ok(Binding::from_raw(raw)) } } /// Open the global, XDG and system configuration files /// /// Utility wrapper that finds the global, XDG and system configuration /// files and opens them into a single prioritized config object that can /// be used when accessing default config data outside a repository. pub fn open_default() -> Result { crate::init(); let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_config_open_default(&mut raw)); Ok(Binding::from_raw(raw)) } } /// Locate the path to the global configuration file /// /// The user or global configuration file is usually located in /// `$HOME/.gitconfig`. /// /// This method will try to guess the full path to that file, if the file /// exists. The returned path may be used on any method call to load /// the global configuration file. /// /// This method will not guess the path to the xdg compatible config file /// (`.config/git/config`). pub fn find_global() -> Result { crate::init(); let buf = Buf::new(); unsafe { try_call!(raw::git_config_find_global(buf.raw())); } Ok(util::bytes2path(&buf).to_path_buf()) } /// Locate the path to the system configuration file /// /// If /etc/gitconfig doesn't exist, it will look for %PROGRAMFILES% pub fn find_system() -> Result { crate::init(); let buf = Buf::new(); unsafe { try_call!(raw::git_config_find_system(buf.raw())); } Ok(util::bytes2path(&buf).to_path_buf()) } /// Locate the path to the global xdg compatible configuration file /// /// The xdg compatible configuration file is usually located in /// `$HOME/.config/git/config`. pub fn find_xdg() -> Result { crate::init(); let buf = Buf::new(); unsafe { try_call!(raw::git_config_find_xdg(buf.raw())); } Ok(util::bytes2path(&buf).to_path_buf()) } /// Add an on-disk config file instance to an existing config /// /// The on-disk file pointed at by path will be opened and parsed; it's /// expected to be a native Git config file following the default Git config /// syntax (see man git-config). /// /// Further queries on this config object will access each of the config /// file instances in order (instances with a higher priority level will be /// accessed first). pub fn add_file(&mut self, path: &Path, level: ConfigLevel, force: bool) -> Result<(), Error> { // Normal file path OK (does not need Windows conversion). let path = path.into_c_string()?; unsafe { try_call!(raw::git_config_add_file_ondisk( self.raw, path, level, ptr::null(), force )); Ok(()) } } /// Delete a config variable from the config file with the highest level /// (usually the local one). pub fn remove(&mut self, name: &str) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_config_delete_entry(self.raw, name)); Ok(()) } } /// Remove multivar config variables in the config file with the highest level (usually the /// local one). 
/// /// The regular expression is applied case-sensitively on the value. pub fn remove_multivar(&mut self, name: &str, regexp: &str) -> Result<(), Error> { let name = CString::new(name)?; let regexp = CString::new(regexp)?; unsafe { try_call!(raw::git_config_delete_multivar(self.raw, name, regexp)); } Ok(()) } /// Get the value of a boolean config variable. /// /// All config files will be looked into, in the order of their defined /// level. A higher level means a higher priority. The first occurrence of /// the variable will be returned here. pub fn get_bool(&self, name: &str) -> Result { let mut out = 0 as libc::c_int; let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_bool(&mut out, &*self.raw, name)); } Ok(out != 0) } /// Get the value of an integer config variable. /// /// All config files will be looked into, in the order of their defined /// level. A higher level means a higher priority. The first occurrence of /// the variable will be returned here. pub fn get_i32(&self, name: &str) -> Result { let mut out = 0i32; let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_int32(&mut out, &*self.raw, name)); } Ok(out) } /// Get the value of an integer config variable. /// /// All config files will be looked into, in the order of their defined /// level. A higher level means a higher priority. The first occurrence of /// the variable will be returned here. pub fn get_i64(&self, name: &str) -> Result { let mut out = 0i64; let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_int64(&mut out, &*self.raw, name)); } Ok(out) } /// Get the value of a string config variable. /// /// This is the same as `get_bytes` except that it may return `Err` if /// the bytes are not valid utf-8. /// /// This method will return an error if this `Config` is not a snapshot. pub fn get_str(&self, name: &str) -> Result<&str, Error> { str::from_utf8(self.get_bytes(name)?) .map_err(|_| Error::from_str("configuration value is not valid utf8")) } /// Get the value of a string config variable as a byte slice. /// /// This method will return an error if this `Config` is not a snapshot. pub fn get_bytes(&self, name: &str) -> Result<&[u8], Error> { let mut ret = ptr::null(); let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_string(&mut ret, &*self.raw, name)); Ok(crate::opt_bytes(self, ret).unwrap()) } } /// Get the value of a string config variable as an owned string. /// /// All config files will be looked into, in the order of their /// defined level. A higher level means a higher priority. The /// first occurrence of the variable will be returned here. /// /// An error will be returned if the config value is not valid utf-8. pub fn get_string(&self, name: &str) -> Result { let ret = Buf::new(); let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_string_buf(ret.raw(), self.raw, name)); } str::from_utf8(&ret) .map(|s| s.to_string()) .map_err(|_| Error::from_str("configuration value is not valid utf8")) } /// Get the value of a path config variable as an owned `PathBuf`. /// /// A leading '~' will be expanded to the global search path (which /// defaults to the user's home directory but can be overridden via /// [`raw::git_libgit2_opts`]. /// /// All config files will be looked into, in the order of their /// defined level. A higher level means a higher priority. The /// first occurrence of the variable will be returned here. 
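    ///
    /// # Example
    ///
    /// A minimal sketch, assuming `core.excludesfile` is set somewhere in the
    /// configuration (the key is just an illustration):
    ///
    /// ```no_run
    /// use git2::Config;
    ///
    /// let cfg = Config::open_default().unwrap();
    /// // A value such as "~/.gitignore_global" comes back as an expanded path.
    /// let excludes = cfg.get_path("core.excludesfile").unwrap();
    /// println!("global excludes file: {}", excludes.display());
    /// ```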
pub fn get_path(&self, name: &str) -> Result { let ret = Buf::new(); let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_path(ret.raw(), self.raw, name)); } Ok(crate::util::bytes2path(&ret).to_path_buf()) } /// Get the ConfigEntry for a config variable. pub fn get_entry(&self, name: &str) -> Result, Error> { let mut ret = ptr::null_mut(); let name = CString::new(name)?; unsafe { try_call!(raw::git_config_get_entry(&mut ret, self.raw, name)); Ok(Binding::from_raw(ret)) } } /// Iterate over all the config variables /// /// If `glob` is `Some`, then the iterator will only iterate over all /// variables whose name matches the pattern. /// /// The regular expression is applied case-sensitively on the normalized form of /// the variable name: the section and variable parts are lower-cased. The /// subsection is left unchanged. /// /// # Example /// /// ``` /// # #![allow(unstable)] /// use git2::Config; /// /// let cfg = Config::new().unwrap(); /// /// for entry in &cfg.entries(None).unwrap() { /// let entry = entry.unwrap(); /// println!("{} => {}", entry.name().unwrap(), entry.value().unwrap()); /// } /// ``` pub fn entries(&self, glob: Option<&str>) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { match glob { Some(s) => { let s = CString::new(s)?; try_call!(raw::git_config_iterator_glob_new(&mut ret, &*self.raw, s)); } None => { try_call!(raw::git_config_iterator_new(&mut ret, &*self.raw)); } } Ok(Binding::from_raw(ret)) } } /// Iterate over the values of a multivar /// /// If `regexp` is `Some`, then the iterator will only iterate over all /// values which match the pattern. /// /// The regular expression is applied case-sensitively on the normalized form of /// the variable name: the section and variable parts are lower-cased. The /// subsection is left unchanged. pub fn multivar(&self, name: &str, regexp: Option<&str>) -> Result, Error> { let mut ret = ptr::null_mut(); let name = CString::new(name)?; let regexp = regexp.map(CString::new).transpose()?; unsafe { try_call!(raw::git_config_multivar_iterator_new( &mut ret, &*self.raw, name, regexp )); Ok(Binding::from_raw(ret)) } } /// Open the global/XDG configuration file according to git's rules /// /// Git allows you to store your global configuration at `$HOME/.config` or /// `$XDG_CONFIG_HOME/git/config`. For backwards compatability, the XDG file /// shouldn't be used unless the use has created it explicitly. With this /// function you'll open the correct one to write to. pub fn open_global(&mut self) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_config_open_global(&mut raw, self.raw)); Ok(Binding::from_raw(raw)) } } /// Build a single-level focused config object from a multi-level one. /// /// The returned config object can be used to perform get/set/delete /// operations on a single specific level. pub fn open_level(&self, level: ConfigLevel) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_config_open_level(&mut raw, &*self.raw, level)); Ok(Binding::from_raw(raw)) } } /// Set the value of a boolean config variable in the config file with the /// highest level (usually the local one). pub fn set_bool(&mut self, name: &str, value: bool) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_config_set_bool(self.raw, name, value)); } Ok(()) } /// Set the value of an integer config variable in the config file with the /// highest level (usually the local one). 
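    ///
    /// # Example
    ///
    /// A minimal sketch using a throw-away on-disk config file (the path and
    /// key are placeholders):
    ///
    /// ```no_run
    /// use std::path::Path;
    /// use git2::Config;
    ///
    /// let mut cfg = Config::open(Path::new("/tmp/example-gitconfig")).unwrap();
    /// cfg.set_i32("http.lowspeedlimit", 1000).unwrap();
    /// assert_eq!(cfg.get_i32("http.lowspeedlimit").unwrap(), 1000);
    /// ```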
pub fn set_i32(&mut self, name: &str, value: i32) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_config_set_int32(self.raw, name, value)); } Ok(()) } /// Set the value of an integer config variable in the config file with the /// highest level (usually the local one). pub fn set_i64(&mut self, name: &str, value: i64) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_config_set_int64(self.raw, name, value)); } Ok(()) } /// Set the value of an multivar config variable in the config file with the /// highest level (usually the local one). /// /// The regular expression is applied case-sensitively on the value. pub fn set_multivar(&mut self, name: &str, regexp: &str, value: &str) -> Result<(), Error> { let name = CString::new(name)?; let regexp = CString::new(regexp)?; let value = CString::new(value)?; unsafe { try_call!(raw::git_config_set_multivar(self.raw, name, regexp, value)); } Ok(()) } /// Set the value of a string config variable in the config file with the /// highest level (usually the local one). pub fn set_str(&mut self, name: &str, value: &str) -> Result<(), Error> { let name = CString::new(name)?; let value = CString::new(value)?; unsafe { try_call!(raw::git_config_set_string(self.raw, name, value)); } Ok(()) } /// Create a snapshot of the configuration /// /// Create a snapshot of the current state of a configuration, which allows /// you to look into a consistent view of the configuration for looking up /// complex values (e.g. a remote, submodule). pub fn snapshot(&mut self) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_config_snapshot(&mut ret, self.raw)); Ok(Binding::from_raw(ret)) } } /// Parse a string as a bool. /// /// Interprets "true", "yes", "on", 1, or any non-zero number as true. /// Interprets "false", "no", "off", 0, or an empty string as false. pub fn parse_bool(s: S) -> Result { let s = s.into_c_string()?; let mut out = 0; crate::init(); unsafe { try_call!(raw::git_config_parse_bool(&mut out, s)); } Ok(out != 0) } /// Parse a string as an i32; handles suffixes like k, M, or G, and /// multiplies by the appropriate power of 1024. pub fn parse_i32(s: S) -> Result { let s = s.into_c_string()?; let mut out = 0; crate::init(); unsafe { try_call!(raw::git_config_parse_int32(&mut out, s)); } Ok(out) } /// Parse a string as an i64; handles suffixes like k, M, or G, and /// multiplies by the appropriate power of 1024. pub fn parse_i64(s: S) -> Result { let s = s.into_c_string()?; let mut out = 0; crate::init(); unsafe { try_call!(raw::git_config_parse_int64(&mut out, s)); } Ok(out) } } impl Binding for Config { type Raw = *mut raw::git_config; unsafe fn from_raw(raw: *mut raw::git_config) -> Config { Config { raw } } fn raw(&self) -> *mut raw::git_config { self.raw } } impl Drop for Config { fn drop(&mut self) { unsafe { raw::git_config_free(self.raw) } } } impl<'cfg> ConfigEntry<'cfg> { /// Gets the name of this entry. /// /// May return `None` if the name is not valid utf-8 pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Gets the name of this entry as a byte slice. pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, (*self.raw).name).unwrap() } } /// Gets the value of this entry. /// /// May return `None` if the value is not valid utf-8 /// /// # Panics /// /// Panics when no value is defined. pub fn value(&self) -> Option<&str> { str::from_utf8(self.value_bytes()).ok() } /// Gets the value of this entry as a byte slice. 
/// /// # Panics /// /// Panics when no value is defined. pub fn value_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, (*self.raw).value).unwrap() } } /// Returns `true` when a value is defined otherwise `false`. /// /// No value defined is a short-hand to represent a Boolean `true`. pub fn has_value(&self) -> bool { unsafe { !(*self.raw).value.is_null() } } /// Gets the configuration level of this entry. pub fn level(&self) -> ConfigLevel { unsafe { ConfigLevel::from_raw((*self.raw).level) } } /// Depth of includes where this variable was found pub fn include_depth(&self) -> u32 { unsafe { (*self.raw).include_depth as u32 } } } impl<'cfg> Binding for ConfigEntry<'cfg> { type Raw = *mut raw::git_config_entry; unsafe fn from_raw(raw: *mut raw::git_config_entry) -> ConfigEntry<'cfg> { ConfigEntry { raw, _marker: marker::PhantomData, owned: true, } } fn raw(&self) -> *mut raw::git_config_entry { self.raw } } impl<'cfg> Binding for ConfigEntries<'cfg> { type Raw = *mut raw::git_config_iterator; unsafe fn from_raw(raw: *mut raw::git_config_iterator) -> ConfigEntries<'cfg> { ConfigEntries { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_config_iterator { self.raw } } // entries are only valid until the iterator is freed, so this impl is for // `&'b T` instead of `T` to have a lifetime to tie them to. // // It's also not implemented for `&'b mut T` so we can have multiple entries // (ok). impl<'cfg, 'b> Iterator for &'b ConfigEntries<'cfg> { type Item = Result, Error>; fn next(&mut self) -> Option, Error>> { let mut raw = ptr::null_mut(); unsafe { try_call_iter!(raw::git_config_next(&mut raw, self.raw)); Some(Ok(ConfigEntry { owned: false, raw, _marker: marker::PhantomData, })) } } } impl<'cfg> Drop for ConfigEntries<'cfg> { fn drop(&mut self) { unsafe { raw::git_config_iterator_free(self.raw) } } } impl<'cfg> Drop for ConfigEntry<'cfg> { fn drop(&mut self) { if self.owned { unsafe { raw::git_config_entry_free(self.raw) } } } } #[cfg(test)] mod tests { use std::fs::File; use tempfile::TempDir; use crate::Config; #[test] fn smoke() { let _cfg = Config::new().unwrap(); let _ = Config::find_global(); let _ = Config::find_system(); let _ = Config::find_xdg(); } #[test] fn persisted() { let td = TempDir::new().unwrap(); let path = td.path().join("foo"); File::create(&path).unwrap(); let mut cfg = Config::open(&path).unwrap(); assert!(cfg.get_bool("foo.bar").is_err()); cfg.set_bool("foo.k1", true).unwrap(); cfg.set_i32("foo.k2", 1).unwrap(); cfg.set_i64("foo.k3", 2).unwrap(); cfg.set_str("foo.k4", "bar").unwrap(); cfg.snapshot().unwrap(); drop(cfg); let cfg = Config::open(&path).unwrap().snapshot().unwrap(); assert_eq!(cfg.get_bool("foo.k1").unwrap(), true); assert_eq!(cfg.get_i32("foo.k2").unwrap(), 1); assert_eq!(cfg.get_i64("foo.k3").unwrap(), 2); assert_eq!(cfg.get_str("foo.k4").unwrap(), "bar"); for entry in &cfg.entries(None).unwrap() { let entry = entry.unwrap(); entry.name(); entry.value(); entry.level(); } } #[test] fn multivar() { let td = TempDir::new().unwrap(); let path = td.path().join("foo"); File::create(&path).unwrap(); let mut cfg = Config::open(&path).unwrap(); cfg.set_multivar("foo.bar", "^$", "baz").unwrap(); cfg.set_multivar("foo.bar", "^$", "qux").unwrap(); cfg.set_multivar("foo.bar", "^$", "quux").unwrap(); cfg.set_multivar("foo.baz", "^$", "oki").unwrap(); // `entries` filters by name let mut entries: Vec = cfg .entries(Some("foo.bar")) .unwrap() .into_iter() .map(|entry| entry.unwrap().value().unwrap().into()) .collect(); entries.sort(); 
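// Note on the calls above: `set_multivar` with the regex "^$" matches no
// existing value, so each call appends a new value rather than replacing one.
// Replacing or deleting values uses a regex that matches the old value; for
// example (illustrative only, not executed by this test):
//
//     cfg.set_multivar("foo.bar", "^baz$", "replacement").unwrap();
//     cfg.remove_multivar("foo.bar", ".*").unwrap();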
assert_eq!(entries, ["baz", "quux", "qux"]); // which is the same as `multivar` without a regex let mut multivals: Vec = cfg .multivar("foo.bar", None) .unwrap() .into_iter() .map(|entry| entry.unwrap().value().unwrap().into()) .collect(); multivals.sort(); assert_eq!(multivals, entries); // yet _with_ a regex, `multivar` filters by value let mut quxish: Vec = cfg .multivar("foo.bar", Some("qu.*x")) .unwrap() .into_iter() .map(|entry| entry.unwrap().value().unwrap().into()) .collect(); quxish.sort(); assert_eq!(quxish, ["quux", "qux"]); cfg.remove_multivar("foo.bar", ".*").unwrap(); assert_eq!(cfg.entries(Some("foo.bar")).unwrap().count(), 0); assert_eq!(cfg.multivar("foo.bar", None).unwrap().count(), 0); } #[test] fn parse() { assert_eq!(Config::parse_bool("").unwrap(), false); assert_eq!(Config::parse_bool("false").unwrap(), false); assert_eq!(Config::parse_bool("no").unwrap(), false); assert_eq!(Config::parse_bool("off").unwrap(), false); assert_eq!(Config::parse_bool("0").unwrap(), false); assert_eq!(Config::parse_bool("true").unwrap(), true); assert_eq!(Config::parse_bool("yes").unwrap(), true); assert_eq!(Config::parse_bool("on").unwrap(), true); assert_eq!(Config::parse_bool("1").unwrap(), true); assert_eq!(Config::parse_bool("42").unwrap(), true); assert!(Config::parse_bool(" ").is_err()); assert!(Config::parse_bool("some-string").is_err()); assert!(Config::parse_bool("-").is_err()); assert_eq!(Config::parse_i32("0").unwrap(), 0); assert_eq!(Config::parse_i32("1").unwrap(), 1); assert_eq!(Config::parse_i32("100").unwrap(), 100); assert_eq!(Config::parse_i32("-1").unwrap(), -1); assert_eq!(Config::parse_i32("-100").unwrap(), -100); assert_eq!(Config::parse_i32("1k").unwrap(), 1024); assert_eq!(Config::parse_i32("4k").unwrap(), 4096); assert_eq!(Config::parse_i32("1M").unwrap(), 1048576); assert_eq!(Config::parse_i32("1G").unwrap(), 1024 * 1024 * 1024); assert_eq!(Config::parse_i64("0").unwrap(), 0); assert_eq!(Config::parse_i64("1").unwrap(), 1); assert_eq!(Config::parse_i64("100").unwrap(), 100); assert_eq!(Config::parse_i64("-1").unwrap(), -1); assert_eq!(Config::parse_i64("-100").unwrap(), -100); assert_eq!(Config::parse_i64("1k").unwrap(), 1024); assert_eq!(Config::parse_i64("4k").unwrap(), 4096); assert_eq!(Config::parse_i64("1M").unwrap(), 1048576); assert_eq!(Config::parse_i64("1G").unwrap(), 1024 * 1024 * 1024); assert_eq!(Config::parse_i64("100G").unwrap(), 100 * 1024 * 1024 * 1024); } } vendor/git2/src/mailmap.rs0000664000175000017500000000722014160055207016272 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::ptr; use crate::util::Binding; use crate::{raw, Error, Signature}; /// A structure to represent a repository's .mailmap file. /// /// The representation cannot be written to disk. pub struct Mailmap { raw: *mut raw::git_mailmap, } impl Binding for Mailmap { type Raw = *mut raw::git_mailmap; unsafe fn from_raw(ptr: *mut raw::git_mailmap) -> Mailmap { Mailmap { raw: ptr } } fn raw(&self) -> *mut raw::git_mailmap { self.raw } } impl Drop for Mailmap { fn drop(&mut self) { unsafe { raw::git_mailmap_free(self.raw); } } } impl Mailmap { /// Creates an empty, in-memory mailmap object. pub fn new() -> Result { crate::init(); let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_mailmap_new(&mut ret)); Ok(Binding::from_raw(ret)) } } /// Creates an in-memory mailmap object representing the given buffer. 
pub fn from_buffer(buf: &str) -> Result { crate::init(); let mut ret = ptr::null_mut(); let len = buf.len(); let buf = CString::new(buf)?; unsafe { try_call!(raw::git_mailmap_from_buffer(&mut ret, buf, len)); Ok(Binding::from_raw(ret)) } } /// Adds a new entry to this in-memory mailmap object. pub fn add_entry( &mut self, real_name: Option<&str>, real_email: Option<&str>, replace_name: Option<&str>, replace_email: &str, ) -> Result<(), Error> { let real_name = crate::opt_cstr(real_name)?; let real_email = crate::opt_cstr(real_email)?; let replace_name = crate::opt_cstr(replace_name)?; let replace_email = CString::new(replace_email)?; unsafe { try_call!(raw::git_mailmap_add_entry( self.raw, real_name, real_email, replace_name, replace_email )); Ok(()) } } /// Resolves a signature to its real name and email address. pub fn resolve_signature(&self, sig: &Signature<'_>) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_mailmap_resolve_signature( &mut ret, &*self.raw, sig.raw() )); Ok(Binding::from_raw(ret)) } } } #[cfg(test)] mod tests { use super::*; #[test] fn smoke() { let sig_name = "name"; let sig_email = "email"; let sig = t!(Signature::now(sig_name, sig_email)); let mut mm = t!(Mailmap::new()); let mailmapped_sig = t!(mm.resolve_signature(&sig)); assert_eq!(mailmapped_sig.name(), Some(sig_name)); assert_eq!(mailmapped_sig.email(), Some(sig_email)); t!(mm.add_entry(None, None, None, sig_email)); t!(mm.add_entry( Some("real name"), Some("real@email"), Some(sig_name), sig_email, )); let mailmapped_sig = t!(mm.resolve_signature(&sig)); assert_eq!(mailmapped_sig.name(), Some("real name")); assert_eq!(mailmapped_sig.email(), Some("real@email")); } #[test] fn from_buffer() { let buf = " "; let mm = t!(Mailmap::from_buffer(&buf)); let sig = t!(Signature::now("name", "email")); let mailmapped_sig = t!(mm.resolve_signature(&sig)); assert_eq!(mailmapped_sig.name(), Some("name")); assert_eq!(mailmapped_sig.email(), Some("prøper@emæil")); } } vendor/git2/src/mempack.rs0000664000175000017500000000254414160055207016273 0ustar mwhudsonmwhudsonuse std::marker; use crate::util::Binding; use crate::{raw, Buf, Error, Odb, Repository}; /// A structure to represent a mempack backend for the object database. The /// Mempack is bound to the Odb that it was created from, and cannot outlive /// that Odb. pub struct Mempack<'odb> { raw: *mut raw::git_odb_backend, _marker: marker::PhantomData<&'odb Odb<'odb>>, } impl<'odb> Binding for Mempack<'odb> { type Raw = *mut raw::git_odb_backend; unsafe fn from_raw(raw: *mut raw::git_odb_backend) -> Mempack<'odb> { Mempack { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_odb_backend { self.raw } } // We don't need to implement `Drop` for Mempack because it is owned by the // odb to which it is attached, and that will take care of freeing the mempack // and associated memory. impl<'odb> Mempack<'odb> { /// Dumps the contents of the mempack into the provided buffer. pub fn dump(&self, repo: &Repository, buf: &mut Buf) -> Result<(), Error> { unsafe { try_call!(raw::git_mempack_dump(buf.raw(), repo.raw(), self.raw)); } Ok(()) } /// Clears all data in the mempack. 
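// A sketch of how the mempack backend above is typically wired up, assuming
// the `Odb::add_new_mempack_backend` accessor exposed elsewhere in this
// crate; the priority value and helper name are illustrative:
//
//     use git2::{Buf, Repository};
//
//     fn buffer_objects(repo: &Repository) -> Result<(), git2::Error> {
//         let odb = repo.odb()?;
//         // A high priority makes the in-memory backend win over the on-disk one.
//         let mempack = odb.add_new_mempack_backend(1000)?;
//         // ... objects written through `odb` now accumulate in the mempack ...
//         let mut buf = Buf::new();
//         mempack.dump(repo, &mut buf)?; // serialize the buffered objects as a packfile
//         mempack.reset()?;              // then drop them from memory
//         Ok(())
//     }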
pub fn reset(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_mempack_reset(self.raw)); } Ok(()) } } vendor/git2/src/commit.rs0000664000175000017500000003420614160055207016146 0ustar mwhudsonmwhudsonuse libc; use std::marker; use std::mem; use std::ops::Range; use std::ptr; use std::str; use crate::util::Binding; use crate::{raw, signature, Buf, Error, IntoCString, Mailmap, Object, Oid, Signature, Time, Tree}; /// A structure to represent a git [commit][1] /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-Objects pub struct Commit<'repo> { raw: *mut raw::git_commit, _marker: marker::PhantomData>, } /// An iterator over the parent commits of a commit. /// /// Aborts iteration when a commit cannot be found pub struct Parents<'commit, 'repo> { range: Range, commit: &'commit Commit<'repo>, } /// An iterator over the parent commits' ids of a commit. /// /// Aborts iteration when a commit cannot be found pub struct ParentIds<'commit> { range: Range, commit: &'commit Commit<'commit>, } impl<'repo> Commit<'repo> { /// Get the id (SHA1) of a repository commit pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_commit_id(&*self.raw)) } } /// Get the id of the tree pointed to by this commit. /// /// No attempts are made to fetch an object from the ODB. pub fn tree_id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_commit_tree_id(&*self.raw)) } } /// Get the tree pointed to by a commit. pub fn tree(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_commit_tree(&mut ret, &*self.raw)); Ok(Binding::from_raw(ret)) } } /// Get access to the underlying raw pointer. pub fn raw(&self) -> *mut raw::git_commit { self.raw } /// Get the full message of a commit. /// /// The returned message will be slightly prettified by removing any /// potential leading newlines. /// /// `None` will be returned if the message is not valid utf-8 pub fn message(&self) -> Option<&str> { str::from_utf8(self.message_bytes()).ok() } /// Get the full message of a commit as a byte slice. /// /// The returned message will be slightly prettified by removing any /// potential leading newlines. pub fn message_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_commit_message(&*self.raw)).unwrap() } } /// Get the encoding for the message of a commit, as a string representing a /// standard encoding name. /// /// `None` will be returned if the encoding is not known pub fn message_encoding(&self) -> Option<&str> { let bytes = unsafe { crate::opt_bytes(self, raw::git_commit_message_encoding(&*self.raw)) }; bytes.and_then(|b| str::from_utf8(b).ok()) } /// Get the full raw message of a commit. /// /// `None` will be returned if the message is not valid utf-8 pub fn message_raw(&self) -> Option<&str> { str::from_utf8(self.message_raw_bytes()).ok() } /// Get the full raw message of a commit. pub fn message_raw_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_commit_message_raw(&*self.raw)).unwrap() } } /// Get the full raw text of the commit header. /// /// `None` will be returned if the message is not valid utf-8 pub fn raw_header(&self) -> Option<&str> { str::from_utf8(self.raw_header_bytes()).ok() } /// Get an arbitrary header field. pub fn header_field_bytes(&self, field: T) -> Result { let buf = Buf::new(); let raw_field = field.into_c_string()?; unsafe { try_call!(raw::git_commit_header_field( buf.raw(), &*self.raw, raw_field )); } Ok(buf) } /// Get the full raw text of the commit header. 
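// For illustration, reading the accessors above off of HEAD's commit
// (`describe_head_commit` is a made-up helper name):
//
//     use git2::Repository;
//
//     fn describe_head_commit(repo: &Repository) -> Result<(), git2::Error> {
//         let commit = repo.head()?.peel_to_commit()?;
//         println!("id:      {}", commit.id());
//         println!("summary: {}", commit.summary().unwrap_or("<none>"));
//         println!("message: {}", commit.message().unwrap_or("<non-utf8>"));
//         // Individual raw header fields can be fetched by name, e.g. the tree id:
//         let tree_field = commit.header_field_bytes("tree")?;
//         println!("tree:    {}", tree_field.as_str().unwrap_or("<non-utf8>"));
//         Ok(())
//     }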
pub fn raw_header_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_commit_raw_header(&*self.raw)).unwrap() } } /// Get the short "summary" of the git commit message. /// /// The returned message is the summary of the commit, comprising the first /// paragraph of the message with whitespace trimmed and squashed. /// /// `None` may be returned if an error occurs or if the summary is not valid /// utf-8. pub fn summary(&self) -> Option<&str> { self.summary_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the short "summary" of the git commit message. /// /// The returned message is the summary of the commit, comprising the first /// paragraph of the message with whitespace trimmed and squashed. /// /// `None` may be returned if an error occurs pub fn summary_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_commit_summary(self.raw)) } } /// Get the commit time (i.e. committer time) of a commit. /// /// The first element of the tuple is the time, in seconds, since the epoch. /// The second element is the offset, in minutes, of the time zone of the /// committer's preferred time zone. pub fn time(&self) -> Time { unsafe { Time::new( raw::git_commit_time(&*self.raw) as i64, raw::git_commit_time_offset(&*self.raw) as i32, ) } } /// Creates a new iterator over the parents of this commit. pub fn parents<'a>(&'a self) -> Parents<'a, 'repo> { Parents { range: 0..self.parent_count(), commit: self, } } /// Creates a new iterator over the parents of this commit. pub fn parent_ids(&self) -> ParentIds<'_> { ParentIds { range: 0..self.parent_count(), commit: self, } } /// Get the author of this commit. pub fn author(&self) -> Signature<'_> { unsafe { let ptr = raw::git_commit_author(&*self.raw); signature::from_raw_const(self, ptr) } } /// Get the author of this commit, using the mailmap to map names and email /// addresses to canonical real names and email addresses. pub fn author_with_mailmap(&self, mailmap: &Mailmap) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_commit_author_with_mailmap( &mut ret, &*self.raw, &*mailmap.raw() )); Ok(Binding::from_raw(ret)) } } /// Get the committer of this commit. pub fn committer(&self) -> Signature<'_> { unsafe { let ptr = raw::git_commit_committer(&*self.raw); signature::from_raw_const(self, ptr) } } /// Get the committer of this commit, using the mailmap to map names and email /// addresses to canonical real names and email addresses. pub fn committer_with_mailmap(&self, mailmap: &Mailmap) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_commit_committer_with_mailmap( &mut ret, &*self.raw, &*mailmap.raw() )); Ok(Binding::from_raw(ret)) } } /// Amend this existing commit with all non-`None` values /// /// This creates a new commit that is exactly the same as the old commit, /// except that any non-`None` values will be updated. The new commit has /// the same parents as the old commit. /// /// For information about `update_ref`, see [`Repository::commit`]. 
/// /// [`Repository::commit`]: struct.Repository.html#method.commit pub fn amend( &self, update_ref: Option<&str>, author: Option<&Signature<'_>>, committer: Option<&Signature<'_>>, message_encoding: Option<&str>, message: Option<&str>, tree: Option<&Tree<'repo>>, ) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; let update_ref = crate::opt_cstr(update_ref)?; let encoding = crate::opt_cstr(message_encoding)?; let message = crate::opt_cstr(message)?; unsafe { try_call!(raw::git_commit_amend( &mut raw, self.raw(), update_ref, author.map(|s| s.raw()), committer.map(|s| s.raw()), encoding, message, tree.map(|t| t.raw()) )); Ok(Binding::from_raw(&raw as *const _)) } } /// Get the number of parents of this commit. /// /// Use the `parents` iterator to return an iterator over all parents. pub fn parent_count(&self) -> usize { unsafe { raw::git_commit_parentcount(&*self.raw) as usize } } /// Get the specified parent of the commit. /// /// Use the `parents` iterator to return an iterator over all parents. pub fn parent(&self, i: usize) -> Result, Error> { unsafe { let mut raw = ptr::null_mut(); try_call!(raw::git_commit_parent( &mut raw, &*self.raw, i as libc::c_uint )); Ok(Binding::from_raw(raw)) } } /// Get the specified parent id of the commit. /// /// This is different from `parent`, which will attempt to load the /// parent commit from the ODB. /// /// Use the `parent_ids` iterator to return an iterator over all parents. pub fn parent_id(&self, i: usize) -> Result { unsafe { let id = raw::git_commit_parent_id(self.raw, i as libc::c_uint); if id.is_null() { Err(Error::from_str("parent index out of bounds")) } else { Ok(Binding::from_raw(id)) } } } /// Casts this Commit to be usable as an `Object` pub fn as_object(&self) -> &Object<'repo> { unsafe { &*(self as *const _ as *const Object<'repo>) } } /// Consumes Commit to be returned as an `Object` pub fn into_object(self) -> Object<'repo> { assert_eq!(mem::size_of_val(&self), mem::size_of::>()); unsafe { mem::transmute(self) } } } impl<'repo> Binding for Commit<'repo> { type Raw = *mut raw::git_commit; unsafe fn from_raw(raw: *mut raw::git_commit) -> Commit<'repo> { Commit { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_commit { self.raw } } impl<'repo> std::fmt::Debug for Commit<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("Commit"); ds.field("id", &self.id()); if let Some(summary) = self.summary() { ds.field("summary", &summary); } ds.finish() } } /// Aborts iteration when a commit cannot be found impl<'repo, 'commit> Iterator for Parents<'commit, 'repo> { type Item = Commit<'repo>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.commit.parent(i).ok()) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } /// Aborts iteration when a commit cannot be found impl<'repo, 'commit> DoubleEndedIterator for Parents<'commit, 'repo> { fn next_back(&mut self) -> Option> { self.range .next_back() .and_then(|i| self.commit.parent(i).ok()) } } impl<'repo, 'commit> ExactSizeIterator for Parents<'commit, 'repo> {} /// Aborts iteration when a commit cannot be found impl<'commit> Iterator for ParentIds<'commit> { type Item = Oid; fn next(&mut self) -> Option { self.range .next() .and_then(|i| self.commit.parent_id(i).ok()) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } /// Aborts iteration when a commit cannot be found impl<'commit> DoubleEndedIterator for ParentIds<'commit> { fn 
next_back(&mut self) -> Option { self.range .next_back() .and_then(|i| self.commit.parent_id(i).ok()) } } impl<'commit> ExactSizeIterator for ParentIds<'commit> {} impl<'repo> Clone for Commit<'repo> { fn clone(&self) -> Self { self.as_object().clone().into_commit().ok().unwrap() } } impl<'repo> Drop for Commit<'repo> { fn drop(&mut self) { unsafe { raw::git_commit_free(self.raw) } } } #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let target = head.target().unwrap(); let commit = repo.find_commit(target).unwrap(); assert_eq!(commit.message(), Some("initial")); assert_eq!(commit.id(), target); commit.message_raw().unwrap(); commit.raw_header().unwrap(); commit.message_encoding(); commit.summary().unwrap(); commit.tree_id(); commit.tree().unwrap(); assert_eq!(commit.parents().count(), 0); let tree_header_bytes = commit.header_field_bytes("tree").unwrap(); assert_eq!( crate::Oid::from_str(tree_header_bytes.as_str().unwrap()).unwrap(), commit.tree_id() ); assert_eq!(commit.author().name(), Some("name")); assert_eq!(commit.author().email(), Some("email")); assert_eq!(commit.committer().name(), Some("name")); assert_eq!(commit.committer().email(), Some("email")); let sig = repo.signature().unwrap(); let tree = repo.find_tree(commit.tree_id()).unwrap(); let id = repo .commit(Some("HEAD"), &sig, &sig, "bar", &tree, &[&commit]) .unwrap(); let head = repo.find_commit(id).unwrap(); let new_head = head .amend(Some("HEAD"), None, None, None, Some("new message"), None) .unwrap(); let new_head = repo.find_commit(new_head).unwrap(); assert_eq!(new_head.message(), Some("new message")); new_head.into_object(); repo.find_object(target, None).unwrap().as_commit().unwrap(); repo.find_object(target, None) .unwrap() .into_commit() .ok() .unwrap(); } } vendor/git2/src/remote.rs0000664000175000017500000007450214160055207016154 0ustar mwhudsonmwhudsonuse libc; use raw::git_strarray; use std::marker; use std::mem; use std::ops::Range; use std::ptr; use std::slice; use std::str; use std::{ffi::CString, os::raw::c_char}; use crate::string_array::StringArray; use crate::util::Binding; use crate::{raw, Buf, Direction, Error, FetchPrune, Oid, ProxyOptions, Refspec}; use crate::{AutotagOption, Progress, RemoteCallbacks, Repository}; /// A structure representing a [remote][1] of a git repository. /// /// [1]: http://git-scm.com/book/en/Git-Basics-Working-with-Remotes /// /// The lifetime is the lifetime of the repository that it is attached to. The /// remote is used to manage fetches and pushes as well as refspecs. pub struct Remote<'repo> { raw: *mut raw::git_remote, _marker: marker::PhantomData<&'repo Repository>, } /// An iterator over the refspecs that a remote contains. pub struct Refspecs<'remote> { range: Range, remote: &'remote Remote<'remote>, } /// Description of a reference advertised by a remote server, given out on calls /// to `list`. pub struct RemoteHead<'remote> { raw: *const raw::git_remote_head, _marker: marker::PhantomData<&'remote str>, } /// Options which can be specified to various fetch operations. pub struct FetchOptions<'cb> { callbacks: Option>, proxy: Option>, prune: FetchPrune, update_fetchhead: bool, download_tags: AutotagOption, custom_headers: Vec, custom_headers_ptrs: Vec<*const c_char>, } /// Options to control the behavior of a git push. 
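// A condensed sketch of the `Commit` workflow exercised in the tests above:
// walking parents and rewording HEAD via `amend` (the helper name and the
// assumption that `HEAD` points at a commit are illustrative):
//
//     use git2::{Oid, Repository};
//
//     fn reword_head(repo: &Repository, new_message: &str) -> Result<Oid, git2::Error> {
//         let head = repo.head()?.peel_to_commit()?;
//         for parent in head.parents() {
//             println!("parent {} by {:?}", parent.id(), parent.author().name());
//         }
//         // Only the message changes; author, committer, tree, and parents are kept.
//         head.amend(Some("HEAD"), None, None, None, Some(new_message), None)
//     }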
pub struct PushOptions<'cb> { callbacks: Option>, proxy: Option>, pb_parallelism: u32, custom_headers: Vec, custom_headers_ptrs: Vec<*const c_char>, } /// Holds callbacks for a connection to a `Remote`. Disconnects when dropped pub struct RemoteConnection<'repo, 'connection, 'cb> { _callbacks: Box>, _proxy: ProxyOptions<'cb>, remote: &'connection mut Remote<'repo>, } pub fn remote_into_raw(remote: Remote<'_>) -> *mut raw::git_remote { let ret = remote.raw; mem::forget(remote); ret } impl<'repo> Remote<'repo> { /// Ensure the remote name is well-formed. pub fn is_valid_name(remote_name: &str) -> bool { crate::init(); let remote_name = CString::new(remote_name).unwrap(); unsafe { raw::git_remote_is_valid_name(remote_name.as_ptr()) == 1 } } /// Create a detached remote /// /// Create a remote with the given url in-memory. You can use this /// when you have a URL instead of a remote's name. /// Contrasted with an anonymous remote, a detached remote will not /// consider any repo configuration values. pub fn create_detached(url: &str) -> Result, Error> { crate::init(); let mut ret = ptr::null_mut(); let url = CString::new(url)?; unsafe { try_call!(raw::git_remote_create_detached(&mut ret, url)); Ok(Binding::from_raw(ret)) } } /// Get the remote's name. /// /// Returns `None` if this remote has not yet been named or if the name is /// not valid utf-8 pub fn name(&self) -> Option<&str> { self.name_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the remote's name, in bytes. /// /// Returns `None` if this remote has not yet been named pub fn name_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_remote_name(&*self.raw)) } } /// Get the remote's url. /// /// Returns `None` if the url is not valid utf-8 pub fn url(&self) -> Option<&str> { str::from_utf8(self.url_bytes()).ok() } /// Get the remote's url as a byte array. pub fn url_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_remote_url(&*self.raw)).unwrap() } } /// Get the remote's pushurl. /// /// Returns `None` if the pushurl is not valid utf-8 pub fn pushurl(&self) -> Option<&str> { self.pushurl_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the remote's pushurl as a byte array. pub fn pushurl_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_remote_pushurl(&*self.raw)) } } /// Get the remote's default branch. /// /// The remote (or more exactly its transport) must have connected to the /// remote repository. This default branch is available as soon as the /// connection to the remote is initiated and it remains available after /// disconnecting. pub fn default_branch(&self) -> Result { unsafe { let buf = Buf::new(); try_call!(raw::git_remote_default_branch(buf.raw(), self.raw)); Ok(buf) } } /// Open a connection to a remote. pub fn connect(&mut self, dir: Direction) -> Result<(), Error> { // TODO: can callbacks be exposed safely? 
unsafe { try_call!(raw::git_remote_connect( self.raw, dir, ptr::null(), ptr::null(), ptr::null() )); } Ok(()) } /// Open a connection to a remote with callbacks and proxy settings /// /// Returns a `RemoteConnection` that will disconnect once dropped pub fn connect_auth<'connection, 'cb>( &'connection mut self, dir: Direction, cb: Option>, proxy_options: Option>, ) -> Result, Error> { let cb = Box::new(cb.unwrap_or_else(RemoteCallbacks::new)); let proxy_options = proxy_options.unwrap_or_else(ProxyOptions::new); unsafe { try_call!(raw::git_remote_connect( self.raw, dir, &cb.raw(), &proxy_options.raw(), ptr::null() )); } Ok(RemoteConnection { _callbacks: cb, _proxy: proxy_options, remote: self, }) } /// Check whether the remote is connected pub fn connected(&mut self) -> bool { unsafe { raw::git_remote_connected(self.raw) == 1 } } /// Disconnect from the remote pub fn disconnect(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_remote_disconnect(self.raw)); } Ok(()) } /// Download and index the packfile /// /// Connect to the remote if it hasn't been done yet, negotiate with the /// remote git which objects are missing, download and index the packfile. /// /// The .idx file will be created and both it and the packfile with be /// renamed to their final name. /// /// The `specs` argument is a list of refspecs to use for this negotiation /// and download. Use an empty array to use the base refspecs. pub fn download + crate::IntoCString + Clone>( &mut self, specs: &[Str], opts: Option<&mut FetchOptions<'_>>, ) -> Result<(), Error> { let (_a, _b, arr) = crate::util::iter2cstrs(specs.iter())?; let raw = opts.map(|o| o.raw()); unsafe { try_call!(raw::git_remote_download(self.raw, &arr, raw.as_ref())); } Ok(()) } /// Cancel the operation /// /// At certain points in its operation, the network code checks whether the /// operation has been cancelled and if so stops the operation. pub fn stop(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_remote_stop(self.raw)); } Ok(()) } /// Get the number of refspecs for a remote pub fn refspecs(&self) -> Refspecs<'_> { let cnt = unsafe { raw::git_remote_refspec_count(&*self.raw) as usize }; Refspecs { range: 0..cnt, remote: self, } } /// Get the `nth` refspec from this remote. /// /// The `refspecs` iterator can be used to iterate over all refspecs. pub fn get_refspec(&self, i: usize) -> Option> { unsafe { let ptr = raw::git_remote_get_refspec(&*self.raw, i as libc::size_t); Binding::from_raw_opt(ptr) } } /// Download new data and update tips /// /// Convenience function to connect to a remote, download the data, /// disconnect and update the remote-tracking branches. 
/// /// # Examples /// /// Example of functionality similar to `git fetch origin/main`: /// /// ```no_run /// fn fetch_origin_main(repo: git2::Repository) -> Result<(), git2::Error> { /// repo.find_remote("origin")?.fetch(&["main"], None, None) /// } /// /// let repo = git2::Repository::discover("rust").unwrap(); /// fetch_origin_main(repo).unwrap(); /// ``` pub fn fetch + crate::IntoCString + Clone>( &mut self, refspecs: &[Str], opts: Option<&mut FetchOptions<'_>>, reflog_msg: Option<&str>, ) -> Result<(), Error> { let (_a, _b, arr) = crate::util::iter2cstrs(refspecs.iter())?; let msg = crate::opt_cstr(reflog_msg)?; let raw = opts.map(|o| o.raw()); unsafe { try_call!(raw::git_remote_fetch(self.raw, &arr, raw.as_ref(), msg)); } Ok(()) } /// Update the tips to the new state pub fn update_tips( &mut self, callbacks: Option<&mut RemoteCallbacks<'_>>, update_fetchhead: bool, download_tags: AutotagOption, msg: Option<&str>, ) -> Result<(), Error> { let msg = crate::opt_cstr(msg)?; let cbs = callbacks.map(|cb| cb.raw()); unsafe { try_call!(raw::git_remote_update_tips( self.raw, cbs.as_ref(), update_fetchhead, download_tags, msg )); } Ok(()) } /// Perform a push /// /// Perform all the steps for a push. If no refspecs are passed then the /// configured refspecs will be used. /// /// Note that you'll likely want to use `RemoteCallbacks` and set /// `push_update_reference` to test whether all the references were pushed /// successfully. pub fn push + crate::IntoCString + Clone>( &mut self, refspecs: &[Str], opts: Option<&mut PushOptions<'_>>, ) -> Result<(), Error> { let (_a, _b, arr) = crate::util::iter2cstrs(refspecs.iter())?; let raw = opts.map(|o| o.raw()); unsafe { try_call!(raw::git_remote_push(self.raw, &arr, raw.as_ref())); } Ok(()) } /// Get the statistics structure that is filled in by the fetch operation. pub fn stats(&self) -> Progress<'_> { unsafe { Binding::from_raw(raw::git_remote_stats(self.raw)) } } /// Get the remote repository's reference advertisement list. /// /// Get the list of references with which the server responds to a new /// connection. /// /// The remote (or more exactly its transport) must have connected to the /// remote repository. This list is available as soon as the connection to /// the remote is initiated and it remains available after disconnecting. 
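// Building on the fetch example above, a sketch that wires `RemoteCallbacks`
// and `FetchOptions` together (the remote name `origin` and the refspec are
// assumptions about the repository at hand):
//
//     use git2::{AutotagOption, FetchOptions, FetchPrune, RemoteCallbacks, Repository};
//
//     fn fetch_origin(repo: &Repository) -> Result<(), git2::Error> {
//         let mut callbacks = RemoteCallbacks::new();
//         callbacks.transfer_progress(|p| {
//             print!("\rreceived {}/{} objects", p.received_objects(), p.total_objects());
//             true // returning false cancels the transfer
//         });
//         let mut opts = FetchOptions::new();
//         opts.remote_callbacks(callbacks)
//             .prune(FetchPrune::On)
//             .download_tags(AutotagOption::All);
//         repo.find_remote("origin")?
//             .fetch(&["refs/heads/*:refs/remotes/origin/*"], Some(&mut opts), None)
//     }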
pub fn list(&self) -> Result<&[RemoteHead<'_>], Error> { let mut size = 0; let mut base = ptr::null_mut(); unsafe { try_call!(raw::git_remote_ls(&mut base, &mut size, self.raw)); assert_eq!( mem::size_of::>(), mem::size_of::<*const raw::git_remote_head>() ); let slice = slice::from_raw_parts(base as *const _, size as usize); Ok(mem::transmute::< &[*const raw::git_remote_head], &[RemoteHead<'_>], >(slice)) } } /// Prune tracking refs that are no longer present on remote pub fn prune(&mut self, callbacks: Option>) -> Result<(), Error> { let cbs = Box::new(callbacks.unwrap_or_else(RemoteCallbacks::new)); unsafe { try_call!(raw::git_remote_prune(self.raw, &cbs.raw())); } Ok(()) } /// Get the remote's list of fetch refspecs pub fn fetch_refspecs(&self) -> Result { unsafe { let mut raw: raw::git_strarray = mem::zeroed(); try_call!(raw::git_remote_get_fetch_refspecs(&mut raw, self.raw)); Ok(StringArray::from_raw(raw)) } } /// Get the remote's list of push refspecs pub fn push_refspecs(&self) -> Result { unsafe { let mut raw: raw::git_strarray = mem::zeroed(); try_call!(raw::git_remote_get_push_refspecs(&mut raw, self.raw)); Ok(StringArray::from_raw(raw)) } } } impl<'repo> Clone for Remote<'repo> { fn clone(&self) -> Remote<'repo> { let mut ret = ptr::null_mut(); let rc = unsafe { call!(raw::git_remote_dup(&mut ret, self.raw)) }; assert_eq!(rc, 0); Remote { raw: ret, _marker: marker::PhantomData, } } } impl<'repo> Binding for Remote<'repo> { type Raw = *mut raw::git_remote; unsafe fn from_raw(raw: *mut raw::git_remote) -> Remote<'repo> { Remote { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_remote { self.raw } } impl<'repo> Drop for Remote<'repo> { fn drop(&mut self) { unsafe { raw::git_remote_free(self.raw) } } } impl<'repo> Iterator for Refspecs<'repo> { type Item = Refspec<'repo>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.remote.get_refspec(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'repo> DoubleEndedIterator for Refspecs<'repo> { fn next_back(&mut self) -> Option> { self.range .next_back() .and_then(|i| self.remote.get_refspec(i)) } } impl<'repo> ExactSizeIterator for Refspecs<'repo> {} #[allow(missing_docs)] // not documented in libgit2 :( impl<'remote> RemoteHead<'remote> { /// Flag if this is available locally. pub fn is_local(&self) -> bool { unsafe { (*self.raw).local != 0 } } pub fn oid(&self) -> Oid { unsafe { Binding::from_raw(&(*self.raw).oid as *const _) } } pub fn loid(&self) -> Oid { unsafe { Binding::from_raw(&(*self.raw).loid as *const _) } } pub fn name(&self) -> &str { let b = unsafe { crate::opt_bytes(self, (*self.raw).name).unwrap() }; str::from_utf8(b).unwrap() } pub fn symref_target(&self) -> Option<&str> { let b = unsafe { crate::opt_bytes(self, (*self.raw).symref_target) }; b.map(|b| str::from_utf8(b).unwrap()) } } impl<'cb> Default for FetchOptions<'cb> { fn default() -> Self { Self::new() } } impl<'cb> FetchOptions<'cb> { /// Creates a new blank set of fetch options pub fn new() -> FetchOptions<'cb> { FetchOptions { callbacks: None, proxy: None, prune: FetchPrune::Unspecified, update_fetchhead: true, download_tags: AutotagOption::Unspecified, custom_headers: Vec::new(), custom_headers_ptrs: Vec::new(), } } /// Set the callbacks to use for the fetch operation. pub fn remote_callbacks(&mut self, cbs: RemoteCallbacks<'cb>) -> &mut Self { self.callbacks = Some(cbs); self } /// Set the proxy options to use for the fetch operation. 
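// The reference advertisement (`list`) above can be read without fetching;
// a short sketch, assuming a configured remote named `origin`:
//
//     use git2::{Direction, Repository};
//
//     fn ls_remote(repo: &Repository) -> Result<(), git2::Error> {
//         let mut remote = repo.find_remote("origin")?;
//         // The connection disconnects when `connection` is dropped.
//         let connection = remote.connect_auth(Direction::Fetch, None, None)?;
//         for head in connection.list()? {
//             println!("{}\t{}", head.oid(), head.name());
//         }
//         Ok(())
//     }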
pub fn proxy_options(&mut self, opts: ProxyOptions<'cb>) -> &mut Self { self.proxy = Some(opts); self } /// Set whether to perform a prune after the fetch. pub fn prune(&mut self, prune: FetchPrune) -> &mut Self { self.prune = prune; self } /// Set whether to write the results to FETCH_HEAD. /// /// Defaults to `true`. pub fn update_fetchhead(&mut self, update: bool) -> &mut Self { self.update_fetchhead = update; self } /// Set how to behave regarding tags on the remote, such as auto-downloading /// tags for objects we're downloading or downloading all of them. /// /// The default is to auto-follow tags. pub fn download_tags(&mut self, opt: AutotagOption) -> &mut Self { self.download_tags = opt; self } /// Set extra headers for this fetch operation. pub fn custom_headers(&mut self, custom_headers: &[&str]) -> &mut Self { self.custom_headers = custom_headers .iter() .map(|&s| CString::new(s).unwrap()) .collect(); self.custom_headers_ptrs = self.custom_headers.iter().map(|s| s.as_ptr()).collect(); self } } impl<'cb> Binding for FetchOptions<'cb> { type Raw = raw::git_fetch_options; unsafe fn from_raw(_raw: raw::git_fetch_options) -> FetchOptions<'cb> { panic!("unimplemented"); } fn raw(&self) -> raw::git_fetch_options { raw::git_fetch_options { version: 1, callbacks: self .callbacks .as_ref() .map(|m| m.raw()) .unwrap_or_else(|| RemoteCallbacks::new().raw()), proxy_opts: self .proxy .as_ref() .map(|m| m.raw()) .unwrap_or_else(|| ProxyOptions::new().raw()), prune: crate::call::convert(&self.prune), update_fetchhead: crate::call::convert(&self.update_fetchhead), download_tags: crate::call::convert(&self.download_tags), custom_headers: git_strarray { count: self.custom_headers_ptrs.len(), strings: self.custom_headers_ptrs.as_ptr() as *mut _, }, } } } impl<'cb> Default for PushOptions<'cb> { fn default() -> Self { Self::new() } } impl<'cb> PushOptions<'cb> { /// Creates a new blank set of push options pub fn new() -> PushOptions<'cb> { PushOptions { callbacks: None, proxy: None, pb_parallelism: 1, custom_headers: Vec::new(), custom_headers_ptrs: Vec::new(), } } /// Set the callbacks to use for the fetch operation. pub fn remote_callbacks(&mut self, cbs: RemoteCallbacks<'cb>) -> &mut Self { self.callbacks = Some(cbs); self } /// Set the proxy options to use for the fetch operation. pub fn proxy_options(&mut self, opts: ProxyOptions<'cb>) -> &mut Self { self.proxy = Some(opts); self } /// If the transport being used to push to the remote requires the creation /// of a pack file, this controls the number of worker threads used by the /// packbuilder when creating that pack file to be sent to the remote. /// /// if set to 0 the packbuilder will auto-detect the number of threads to /// create, and the default value is 1. pub fn packbuilder_parallelism(&mut self, parallel: u32) -> &mut Self { self.pb_parallelism = parallel; self } /// Set extra headers for this push operation. 
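// A sketch of a push driven by the options above, failing if the server
// rejects the reference (the remote name and refspec are assumptions):
//
//     use git2::{Error, PushOptions, RemoteCallbacks, Repository};
//
//     fn push_main(repo: &Repository) -> Result<(), Error> {
//         let mut callbacks = RemoteCallbacks::new();
//         callbacks.push_update_reference(|refname, status| {
//             // `status` is `Some(message)` when the server rejected this reference.
//             match status {
//                 None => Ok(()),
//                 Some(msg) => Err(Error::from_str(&format!("{}: {}", refname, msg))),
//             }
//         });
//         let mut opts = PushOptions::new();
//         opts.remote_callbacks(callbacks)
//             .packbuilder_parallelism(0); // 0 = let the packbuilder pick a thread count
//         repo.find_remote("origin")?
//             .push(&["refs/heads/main:refs/heads/main"], Some(&mut opts))
//     }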
pub fn custom_headers(&mut self, custom_headers: &[&str]) -> &mut Self { self.custom_headers = custom_headers .iter() .map(|&s| CString::new(s).unwrap()) .collect(); self.custom_headers_ptrs = self.custom_headers.iter().map(|s| s.as_ptr()).collect(); self } } impl<'cb> Binding for PushOptions<'cb> { type Raw = raw::git_push_options; unsafe fn from_raw(_raw: raw::git_push_options) -> PushOptions<'cb> { panic!("unimplemented"); } fn raw(&self) -> raw::git_push_options { raw::git_push_options { version: 1, callbacks: self .callbacks .as_ref() .map(|m| m.raw()) .unwrap_or_else(|| RemoteCallbacks::new().raw()), proxy_opts: self .proxy .as_ref() .map(|m| m.raw()) .unwrap_or_else(|| ProxyOptions::new().raw()), pb_parallelism: self.pb_parallelism as libc::c_uint, custom_headers: git_strarray { count: self.custom_headers_ptrs.len(), strings: self.custom_headers_ptrs.as_ptr() as *mut _, }, } } } impl<'repo, 'connection, 'cb> RemoteConnection<'repo, 'connection, 'cb> { /// Check whether the remote is (still) connected pub fn connected(&mut self) -> bool { self.remote.connected() } /// Get the remote repository's reference advertisement list. /// /// This list is available as soon as the connection to /// the remote is initiated and it remains available after disconnecting. pub fn list(&self) -> Result<&[RemoteHead<'_>], Error> { self.remote.list() } /// Get the remote's default branch. /// /// This default branch is available as soon as the connection to the remote /// is initiated and it remains available after disconnecting. pub fn default_branch(&self) -> Result { self.remote.default_branch() } /// access remote bound to this connection pub fn remote(&mut self) -> &mut Remote<'repo> { self.remote } } impl<'repo, 'connection, 'cb> Drop for RemoteConnection<'repo, 'connection, 'cb> { fn drop(&mut self) { drop(self.remote.disconnect()); } } #[cfg(test)] mod tests { use crate::{AutotagOption, PushOptions}; use crate::{Direction, FetchOptions, Remote, RemoteCallbacks, Repository}; use std::cell::Cell; use tempfile::TempDir; #[test] fn smoke() { let (td, repo) = crate::test::repo_init(); t!(repo.remote("origin", "/path/to/nowhere")); drop(repo); let repo = t!(Repository::init(td.path())); let mut origin = t!(repo.find_remote("origin")); assert_eq!(origin.name(), Some("origin")); assert_eq!(origin.url(), Some("/path/to/nowhere")); assert_eq!(origin.pushurl(), None); t!(repo.remote_set_url("origin", "/path/to/elsewhere")); t!(repo.remote_set_pushurl("origin", Some("/path/to/elsewhere"))); let stats = origin.stats(); assert_eq!(stats.total_objects(), 0); t!(origin.stop()); } #[test] fn create_remote() { let td = TempDir::new().unwrap(); let remote = td.path().join("remote"); Repository::init_bare(&remote).unwrap(); let (_td, repo) = crate::test::repo_init(); let url = if cfg!(unix) { format!("file://{}", remote.display()) } else { format!( "file:///{}", remote.display().to_string().replace("\\", "/") ) }; let mut origin = repo.remote("origin", &url).unwrap(); assert_eq!(origin.name(), Some("origin")); assert_eq!(origin.url(), Some(&url[..])); assert_eq!(origin.pushurl(), None); { let mut specs = origin.refspecs(); let spec = specs.next().unwrap(); assert!(specs.next().is_none()); assert_eq!(spec.str(), Some("+refs/heads/*:refs/remotes/origin/*")); assert_eq!(spec.dst(), Some("refs/remotes/origin/*")); assert_eq!(spec.src(), Some("refs/heads/*")); assert!(spec.is_force()); } assert!(origin.refspecs().next_back().is_some()); { let remotes = repo.remotes().unwrap(); assert_eq!(remotes.len(), 1); 
assert_eq!(remotes.get(0), Some("origin")); assert_eq!(remotes.iter().count(), 1); assert_eq!(remotes.iter().next().unwrap(), Some("origin")); } origin.connect(Direction::Push).unwrap(); assert!(origin.connected()); origin.disconnect().unwrap(); origin.connect(Direction::Fetch).unwrap(); assert!(origin.connected()); origin.download(&[] as &[&str], None).unwrap(); origin.disconnect().unwrap(); { let mut connection = origin.connect_auth(Direction::Push, None, None).unwrap(); assert!(connection.connected()); } assert!(!origin.connected()); { let mut connection = origin.connect_auth(Direction::Fetch, None, None).unwrap(); assert!(connection.connected()); } assert!(!origin.connected()); origin.fetch(&[] as &[&str], None, None).unwrap(); origin.fetch(&[] as &[&str], None, Some("foo")).unwrap(); origin .update_tips(None, true, AutotagOption::Unspecified, None) .unwrap(); origin .update_tips(None, true, AutotagOption::All, Some("foo")) .unwrap(); t!(repo.remote_add_fetch("origin", "foo")); t!(repo.remote_add_fetch("origin", "bar")); } #[test] fn rename_remote() { let (_td, repo) = crate::test::repo_init(); repo.remote("origin", "foo").unwrap(); drop(repo.remote_rename("origin", "foo")); drop(repo.remote_delete("foo")); } #[test] fn create_remote_anonymous() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let origin = repo.remote_anonymous("/path/to/nowhere").unwrap(); assert_eq!(origin.name(), None); drop(origin.clone()); } #[test] fn is_valid() { assert!(Remote::is_valid_name("foobar")); assert!(!Remote::is_valid_name("\x01")); } #[test] fn transfer_cb() { let (td, _repo) = crate::test::repo_init(); let td2 = TempDir::new().unwrap(); let url = crate::test::path2url(&td.path()); let repo = Repository::init(td2.path()).unwrap(); let progress_hit = Cell::new(false); { let mut callbacks = RemoteCallbacks::new(); let mut origin = repo.remote("origin", &url).unwrap(); callbacks.transfer_progress(|_progress| { progress_hit.set(true); true }); origin .fetch( &[] as &[&str], Some(FetchOptions::new().remote_callbacks(callbacks)), None, ) .unwrap(); let list = t!(origin.list()); assert_eq!(list.len(), 2); assert_eq!(list[0].name(), "HEAD"); assert!(!list[0].is_local()); assert_eq!(list[1].name(), "refs/heads/main"); assert!(!list[1].is_local()); } assert!(progress_hit.get()); } /// This test is meant to assure that the callbacks provided to connect will not cause /// segfaults #[test] fn connect_list() { let (td, _repo) = crate::test::repo_init(); let td2 = TempDir::new().unwrap(); let url = crate::test::path2url(&td.path()); let repo = Repository::init(td2.path()).unwrap(); let mut callbacks = RemoteCallbacks::new(); callbacks.sideband_progress(|_progress| { // no-op true }); let mut origin = repo.remote("origin", &url).unwrap(); { let mut connection = origin .connect_auth(Direction::Fetch, Some(callbacks), None) .unwrap(); assert!(connection.connected()); let list = t!(connection.list()); assert_eq!(list.len(), 2); assert_eq!(list[0].name(), "HEAD"); assert!(!list[0].is_local()); assert_eq!(list[1].name(), "refs/heads/main"); assert!(!list[1].is_local()); } assert!(!origin.connected()); } #[test] fn push() { let (_td, repo) = crate::test::repo_init(); let td2 = TempDir::new().unwrap(); let td3 = TempDir::new().unwrap(); let url = crate::test::path2url(&td2.path()); let mut opts = crate::RepositoryInitOptions::new(); opts.bare(true); opts.initial_head("main"); Repository::init_opts(td2.path(), &opts).unwrap(); // git push let mut remote = repo.remote("origin", 
&url).unwrap(); let mut updated = false; { let mut callbacks = RemoteCallbacks::new(); callbacks.push_update_reference(|refname, status| { updated = true; assert_eq!(refname, "refs/heads/main"); assert_eq!(status, None); Ok(()) }); let mut options = PushOptions::new(); options.remote_callbacks(callbacks); remote .push(&["refs/heads/main"], Some(&mut options)) .unwrap(); } assert!(updated); let repo = Repository::clone(&url, td3.path()).unwrap(); let commit = repo.head().unwrap().target().unwrap(); let commit = repo.find_commit(commit).unwrap(); assert_eq!(commit.message(), Some("initial")); } #[test] fn prune() { let (td, remote_repo) = crate::test::repo_init(); let oid = remote_repo.head().unwrap().target().unwrap(); let commit = remote_repo.find_commit(oid).unwrap(); remote_repo.branch("stale", &commit, true).unwrap(); let td2 = TempDir::new().unwrap(); let url = crate::test::path2url(&td.path()); let repo = Repository::clone(&url, &td2).unwrap(); fn assert_branch_count(repo: &Repository, count: usize) { assert_eq!( repo.branches(Some(crate::BranchType::Remote)) .unwrap() .filter(|b| b.as_ref().unwrap().0.name().unwrap() == Some("origin/stale")) .count(), count, ); } assert_branch_count(&repo, 1); // delete `stale` branch on remote repo let mut stale_branch = remote_repo .find_branch("stale", crate::BranchType::Local) .unwrap(); stale_branch.delete().unwrap(); // prune let mut remote = repo.find_remote("origin").unwrap(); remote.connect(Direction::Push).unwrap(); let mut callbacks = RemoteCallbacks::new(); callbacks.update_tips(|refname, _a, b| { assert_eq!(refname, "refs/remotes/origin/stale"); assert!(b.is_zero()); true }); remote.prune(Some(callbacks)).unwrap(); assert_branch_count(&repo, 0); } } vendor/git2/src/describe.rs0000664000175000017500000001372514160055207016441 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::marker; use std::mem; use std::ptr; use libc::{c_int, c_uint}; use crate::util::Binding; use crate::{raw, Buf, Error, Repository}; /// The result of a `describe` operation on either an `Describe` or a /// `Repository`. pub struct Describe<'repo> { raw: *mut raw::git_describe_result, _marker: marker::PhantomData<&'repo Repository>, } /// Options which indicate how a `Describe` is created. pub struct DescribeOptions { raw: raw::git_describe_options, pattern: CString, } /// Options which can be used to customize how a description is formatted. pub struct DescribeFormatOptions { raw: raw::git_describe_format_options, dirty_suffix: CString, } impl<'repo> Describe<'repo> { /// Prints this describe result, returning the result as a string. pub fn format(&self, opts: Option<&DescribeFormatOptions>) -> Result { let buf = Buf::new(); let opts = opts.map(|o| &o.raw as *const _).unwrap_or(ptr::null()); unsafe { try_call!(raw::git_describe_format(buf.raw(), self.raw, opts)); } Ok(String::from_utf8(buf.to_vec()).unwrap()) } } impl<'repo> Binding for Describe<'repo> { type Raw = *mut raw::git_describe_result; unsafe fn from_raw(raw: *mut raw::git_describe_result) -> Describe<'repo> { Describe { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_describe_result { self.raw } } impl<'repo> Drop for Describe<'repo> { fn drop(&mut self) { unsafe { raw::git_describe_result_free(self.raw) } } } impl Default for DescribeFormatOptions { fn default() -> Self { Self::new() } } impl DescribeFormatOptions { /// Creates a new blank set of formatting options for a description. 
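// A sketch combining `DescribeOptions` and `DescribeFormatOptions` to produce
// a `git describe`-style string for HEAD (the helper name is illustrative):
//
//     use git2::{DescribeFormatOptions, DescribeOptions, Repository};
//
//     fn describe_head(repo: &Repository) -> Result<String, git2::Error> {
//         let mut opts = DescribeOptions::new();
//         opts.describe_tags().show_commit_oid_as_fallback(true);
//         let describe = repo.describe(&opts)?;
//
//         let mut fmt = DescribeFormatOptions::new();
//         fmt.abbreviated_size(10).dirty_suffix("-dirty");
//         describe.format(Some(&fmt))
//     }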
pub fn new() -> DescribeFormatOptions { let mut opts = DescribeFormatOptions { raw: unsafe { mem::zeroed() }, dirty_suffix: CString::new(Vec::new()).unwrap(), }; opts.raw.version = 1; opts.raw.abbreviated_size = 7; opts } /// Sets the size of the abbreviated commit id to use. /// /// The value is the lower bound for the length of the abbreviated string, /// and the default is 7. pub fn abbreviated_size(&mut self, size: u32) -> &mut Self { self.raw.abbreviated_size = size as c_uint; self } /// Sets whether or not the long format is used even when a shorter name /// could be used. pub fn always_use_long_format(&mut self, long: bool) -> &mut Self { self.raw.always_use_long_format = long as c_int; self } /// If the workdir is dirty and this is set, this string will be appended to /// the description string. pub fn dirty_suffix(&mut self, suffix: &str) -> &mut Self { self.dirty_suffix = CString::new(suffix).unwrap(); self.raw.dirty_suffix = self.dirty_suffix.as_ptr(); self } } impl Default for DescribeOptions { fn default() -> Self { Self::new() } } impl DescribeOptions { /// Creates a new blank set of formatting options for a description. pub fn new() -> DescribeOptions { let mut opts = DescribeOptions { raw: unsafe { mem::zeroed() }, pattern: CString::new(Vec::new()).unwrap(), }; opts.raw.version = 1; opts.raw.max_candidates_tags = 10; opts } #[allow(missing_docs)] pub fn max_candidates_tags(&mut self, max: u32) -> &mut Self { self.raw.max_candidates_tags = max as c_uint; self } /// Sets the reference lookup strategy /// /// This behaves like the `--tags` option to git-describe. pub fn describe_tags(&mut self) -> &mut Self { self.raw.describe_strategy = raw::GIT_DESCRIBE_TAGS as c_uint; self } /// Sets the reference lookup strategy /// /// This behaves like the `--all` option to git-describe. pub fn describe_all(&mut self) -> &mut Self { self.raw.describe_strategy = raw::GIT_DESCRIBE_ALL as c_uint; self } /// Indicates when calculating the distance from the matching tag or /// reference whether to only walk down the first-parent ancestry. pub fn only_follow_first_parent(&mut self, follow: bool) -> &mut Self { self.raw.only_follow_first_parent = follow as c_int; self } /// If no matching tag or reference is found whether a describe option would /// normally fail. This option indicates, however, that it will instead fall /// back to showing the full id of the commit. 
pub fn show_commit_oid_as_fallback(&mut self, show: bool) -> &mut Self { self.raw.show_commit_oid_as_fallback = show as c_int; self } #[allow(missing_docs)] pub fn pattern(&mut self, pattern: &str) -> &mut Self { self.pattern = CString::new(pattern).unwrap(); self.raw.pattern = self.pattern.as_ptr(); self } } impl Binding for DescribeOptions { type Raw = *mut raw::git_describe_options; unsafe fn from_raw(_raw: *mut raw::git_describe_options) -> DescribeOptions { panic!("unimplemened") } fn raw(&self) -> *mut raw::git_describe_options { &self.raw as *const _ as *mut _ } } #[cfg(test)] mod tests { use crate::DescribeOptions; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = t!(repo.head()).target().unwrap(); let d = t!(repo.describe(DescribeOptions::new().show_commit_oid_as_fallback(true))); let id = head.to_string(); assert_eq!(t!(d.format(None)), &id[..7]); let obj = t!(repo.find_object(head, None)); let sig = t!(repo.signature()); t!(repo.tag("foo", &obj, &sig, "message", true)); let d = t!(repo.describe(&DescribeOptions::new())); assert_eq!(t!(d.format(None)), "foo"); let d = t!(obj.describe(&DescribeOptions::new())); assert_eq!(t!(d.format(None)), "foo"); } } vendor/git2/src/indexer.rs0000664000175000017500000000627314160055207016317 0ustar mwhudsonmwhudsonuse std::marker; use crate::raw; use crate::util::Binding; /// Struct representing the progress by an in-flight transfer. pub struct Progress<'a> { pub(crate) raw: ProgressState, pub(crate) _marker: marker::PhantomData<&'a raw::git_indexer_progress>, } pub(crate) enum ProgressState { Borrowed(*const raw::git_indexer_progress), Owned(raw::git_indexer_progress), } /// Callback to be invoked while indexing is in progress. /// /// This callback will be periodically called with updates to the progress of /// the indexing so far. The return value indicates whether the indexing or /// transfer should continue. A return value of `false` will cancel the /// indexing or transfer. /// /// * `progress` - the progress being made so far. pub type IndexerProgress<'a> = dyn FnMut(Progress<'_>) -> bool + 'a; impl<'a> Progress<'a> { /// Number of objects in the packfile being downloaded pub fn total_objects(&self) -> usize { unsafe { (*self.raw()).total_objects as usize } } /// Received objects that have been hashed pub fn indexed_objects(&self) -> usize { unsafe { (*self.raw()).indexed_objects as usize } } /// Objects which have been downloaded pub fn received_objects(&self) -> usize { unsafe { (*self.raw()).received_objects as usize } } /// Locally-available objects that have been injected in order to fix a thin /// pack. pub fn local_objects(&self) -> usize { unsafe { (*self.raw()).local_objects as usize } } /// Number of deltas in the packfile being downloaded pub fn total_deltas(&self) -> usize { unsafe { (*self.raw()).total_deltas as usize } } /// Received deltas that have been hashed. pub fn indexed_deltas(&self) -> usize { unsafe { (*self.raw()).indexed_deltas as usize } } /// Size of the packfile received up to now pub fn received_bytes(&self) -> usize { unsafe { (*self.raw()).received_bytes as usize } } /// Convert this to an owned version of `Progress`. 
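// `Progress` values handed to callbacks only borrow the underlying indexer
// state; `to_owned` (below) makes a snapshot that can outlive the callback.
// A sketch, assuming a remote named `origin` with a `main` branch:
//
//     use std::cell::RefCell;
//     use git2::{FetchOptions, Progress, RemoteCallbacks, Repository};
//
//     fn fetch_with_stats(repo: &Repository) -> Result<(), git2::Error> {
//         let last: RefCell<Option<Progress<'static>>> = RefCell::new(None);
//         let mut callbacks = RemoteCallbacks::new();
//         callbacks.transfer_progress(|p| {
//             *last.borrow_mut() = Some(p.to_owned());
//             true
//         });
//         let mut opts = FetchOptions::new();
//         opts.remote_callbacks(callbacks);
//         repo.find_remote("origin")?.fetch(&["main"], Some(&mut opts), None)?;
//         if let Some(p) = last.borrow().as_ref() {
//             println!("{} objects, {} bytes", p.received_objects(), p.received_bytes());
//         }
//         Ok(())
//     }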
pub fn to_owned(&self) -> Progress<'static> { Progress { raw: ProgressState::Owned(unsafe { *self.raw() }), _marker: marker::PhantomData, } } } impl<'a> Binding for Progress<'a> { type Raw = *const raw::git_indexer_progress; unsafe fn from_raw(raw: *const raw::git_indexer_progress) -> Progress<'a> { Progress { raw: ProgressState::Borrowed(raw), _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_indexer_progress { match self.raw { ProgressState::Borrowed(raw) => raw, ProgressState::Owned(ref raw) => raw as *const _, } } } /// Callback to be invoked while a transfer is in progress. /// /// This callback will be periodically called with updates to the progress of /// the transfer so far. The return value indicates whether the transfer should /// continue. A return value of `false` will cancel the transfer. /// /// * `progress` - the progress being made so far. #[deprecated( since = "0.11.0", note = "renamed to `IndexerProgress` to match upstream" )] #[allow(dead_code)] pub type TransportProgress<'a> = IndexerProgress<'a>; vendor/git2/src/tree.rs0000664000175000017500000003744414160055207015624 0ustar mwhudsonmwhudsonuse libc::{self, c_char, c_int, c_void}; use std::cmp::Ordering; use std::ffi::{CStr, CString}; use std::marker; use std::mem; use std::ops::Range; use std::path::Path; use std::ptr; use std::str; use crate::util::{c_cmp_to_ordering, path_to_repo_path, Binding}; use crate::{panic, raw, Error, Object, ObjectType, Oid, Repository}; /// A structure to represent a git [tree][1] /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-Objects pub struct Tree<'repo> { raw: *mut raw::git_tree, _marker: marker::PhantomData>, } /// A structure representing an entry inside of a tree. An entry is borrowed /// from a tree. pub struct TreeEntry<'tree> { raw: *mut raw::git_tree_entry, owned: bool, _marker: marker::PhantomData<&'tree raw::git_tree_entry>, } /// An iterator over the entries in a tree. pub struct TreeIter<'tree> { range: Range, tree: &'tree Tree<'tree>, } /// A binary indicator of whether a tree walk should be performed in pre-order /// or post-order. pub enum TreeWalkMode { /// Runs the traversal in pre order. PreOrder = 0, /// Runs the traversal in post order. PostOrder = 1, } /// Possible return codes for tree walking callback functions. #[repr(i32)] pub enum TreeWalkResult { /// Continue with the traversal as normal. Ok = 0, /// Skip the current node (in pre-order mode). Skip = 1, /// Completely stop the traversal. Abort = raw::GIT_EUSER, } impl Into for TreeWalkResult { fn into(self) -> i32 { self as i32 } } impl Into for TreeWalkMode { #[cfg(target_env = "msvc")] fn into(self) -> raw::git_treewalk_mode { self as i32 } #[cfg(not(target_env = "msvc"))] fn into(self) -> raw::git_treewalk_mode { self as u32 } } impl<'repo> Tree<'repo> { /// Get the id (SHA1) of a repository object pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_tree_id(&*self.raw)) } } /// Get the number of entries listed in this tree. pub fn len(&self) -> usize { unsafe { raw::git_tree_entrycount(&*self.raw) as usize } } /// Return `true` if there is not entry pub fn is_empty(&self) -> bool { self.len() == 0 } /// Returns an iterator over the entries in this tree. pub fn iter(&self) -> TreeIter<'_> { TreeIter { range: 0..self.len(), tree: self, } } /// Traverse the entries in a tree and its subtrees in post or pre order. /// The callback function will be run on each node of the tree that's /// walked. The return code of this function will determine how the walk /// continues. 
/// /// libgit requires that the callback be an integer, where 0 indicates a /// successful visit, 1 skips the node, and -1 aborts the traversal completely. /// You may opt to use the enum [`TreeWalkResult`](TreeWalkResult) instead. /// /// ```ignore /// let mut ct = 0; /// tree.walk(TreeWalkMode::PreOrder, |_, entry| { /// assert_eq!(entry.name(), Some("foo")); /// ct += 1; /// TreeWalkResult::Ok /// }).unwrap(); /// assert_eq!(ct, 1); /// ``` /// /// See [libgit documentation][1] for more information. /// /// [1]: https://libgit2.org/libgit2/#HEAD/group/tree/git_tree_walk pub fn walk(&self, mode: TreeWalkMode, mut callback: C) -> Result<(), Error> where C: FnMut(&str, &TreeEntry<'_>) -> T, T: Into, { #[allow(unused)] struct TreeWalkCbData<'a, T> { pub callback: &'a mut TreeWalkCb<'a, T>, } unsafe { let mut data = TreeWalkCbData { callback: &mut callback, }; raw::git_tree_walk( self.raw(), mode.into(), Some(treewalk_cb::), &mut data as *mut _ as *mut c_void, ); Ok(()) } } /// Lookup a tree entry by SHA value. pub fn get_id(&self, id: Oid) -> Option> { unsafe { let ptr = raw::git_tree_entry_byid(&*self.raw(), &*id.raw()); if ptr.is_null() { None } else { Some(entry_from_raw_const(ptr)) } } } /// Lookup a tree entry by its position in the tree pub fn get(&self, n: usize) -> Option> { unsafe { let ptr = raw::git_tree_entry_byindex(&*self.raw(), n as libc::size_t); if ptr.is_null() { None } else { Some(entry_from_raw_const(ptr)) } } } /// Lookup a tree entry by its filename pub fn get_name(&self, filename: &str) -> Option> { let filename = CString::new(filename).unwrap(); unsafe { let ptr = call!(raw::git_tree_entry_byname(&*self.raw(), filename)); if ptr.is_null() { None } else { Some(entry_from_raw_const(ptr)) } } } /// Retrieve a tree entry contained in a tree or in any of its subtrees, /// given its relative path. 
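// A sketch of a pre-order walk that prunes one subtree via
// `TreeWalkResult::Skip`; the directory name "vendor" is only an example and
// need not exist in a given tree.
fn walk_skipping_vendor(tree: &git2::Tree<'_>) -> Result<(), git2::Error> {
    use git2::{ObjectType, TreeWalkMode, TreeWalkResult};
    tree.walk(TreeWalkMode::PreOrder, |root, entry| {
        if entry.kind() == Some(ObjectType::Tree) && entry.name() == Some("vendor") {
            // In pre-order mode, `Skip` prevents descending into this subtree.
            return TreeWalkResult::Skip;
        }
        println!("{}{}", root, entry.name().unwrap_or("<non-utf8>"));
        TreeWalkResult::Ok
    })
}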
pub fn get_path(&self, path: &Path) -> Result, Error> { let path = path_to_repo_path(path)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_tree_entry_bypath(&mut ret, &*self.raw(), path)); Ok(Binding::from_raw(ret)) } } /// Casts this Tree to be usable as an `Object` pub fn as_object(&self) -> &Object<'repo> { unsafe { &*(self as *const _ as *const Object<'repo>) } } /// Consumes Commit to be returned as an `Object` pub fn into_object(self) -> Object<'repo> { assert_eq!(mem::size_of_val(&self), mem::size_of::>()); unsafe { mem::transmute(self) } } } type TreeWalkCb<'a, T> = dyn FnMut(&str, &TreeEntry<'_>) -> T + 'a; extern "C" fn treewalk_cb>( root: *const c_char, entry: *const raw::git_tree_entry, payload: *mut c_void, ) -> c_int { match panic::wrap(|| unsafe { let root = match CStr::from_ptr(root).to_str() { Ok(value) => value, _ => return -1, }; let entry = entry_from_raw_const(entry); let payload = payload as *mut &mut TreeWalkCb<'_, T>; (*payload)(root, &entry).into() }) { Some(value) => value, None => -1, } } impl<'repo> Binding for Tree<'repo> { type Raw = *mut raw::git_tree; unsafe fn from_raw(raw: *mut raw::git_tree) -> Tree<'repo> { Tree { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_tree { self.raw } } impl<'repo> std::fmt::Debug for Tree<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("Tree").field("id", &self.id()).finish() } } impl<'repo> Clone for Tree<'repo> { fn clone(&self) -> Self { self.as_object().clone().into_tree().ok().unwrap() } } impl<'repo> Drop for Tree<'repo> { fn drop(&mut self) { unsafe { raw::git_tree_free(self.raw) } } } impl<'repo, 'iter> IntoIterator for &'iter Tree<'repo> { type Item = TreeEntry<'iter>; type IntoIter = TreeIter<'iter>; fn into_iter(self) -> Self::IntoIter { self.iter() } } /// Create a new tree entry from the raw pointer provided. /// /// The lifetime of the entry is tied to the tree provided and the function /// is unsafe because the validity of the pointer cannot be guaranteed. pub unsafe fn entry_from_raw_const<'tree>(raw: *const raw::git_tree_entry) -> TreeEntry<'tree> { TreeEntry { raw: raw as *mut raw::git_tree_entry, owned: false, _marker: marker::PhantomData, } } impl<'tree> TreeEntry<'tree> { /// Get the id of the object pointed by the entry pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_tree_entry_id(&*self.raw)) } } /// Get the filename of a tree entry /// /// Returns `None` if the name is not valid utf-8 pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Get the filename of a tree entry pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_tree_entry_name(&*self.raw())).unwrap() } } /// Convert a tree entry to the object it points to. 
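// A sketch of looking up an entry by relative path and reading its blob
// contents; the path "README.md" is illustrative and may not exist in a
// given tree.
fn read_blob_at_path(
    repo: &git2::Repository,
    tree: &git2::Tree<'_>,
) -> Result<Vec<u8>, git2::Error> {
    let entry = tree.get_path(std::path::Path::new("README.md"))?;
    // `to_object` resolves the entry against the repository it came from.
    let object = entry.to_object(repo)?;
    let blob = object
        .into_blob()
        .map_err(|_| git2::Error::from_str("entry does not point to a blob"))?;
    Ok(blob.content().to_vec())
}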
pub fn to_object<'a>(&self, repo: &'a Repository) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_tree_entry_to_object( &mut ret, repo.raw(), &*self.raw() )); Ok(Binding::from_raw(ret)) } } /// Get the type of the object pointed by the entry pub fn kind(&self) -> Option { ObjectType::from_raw(unsafe { raw::git_tree_entry_type(&*self.raw) }) } /// Get the UNIX file attributes of a tree entry pub fn filemode(&self) -> i32 { unsafe { raw::git_tree_entry_filemode(&*self.raw) as i32 } } /// Get the raw UNIX file attributes of a tree entry pub fn filemode_raw(&self) -> i32 { unsafe { raw::git_tree_entry_filemode_raw(&*self.raw) as i32 } } /// Convert this entry of any lifetime into an owned signature with a static /// lifetime. /// /// This will use the `Clone::clone` implementation under the hood. pub fn to_owned(&self) -> TreeEntry<'static> { unsafe { let me = mem::transmute::<&TreeEntry<'tree>, &TreeEntry<'static>>(self); me.clone() } } } impl<'a> Binding for TreeEntry<'a> { type Raw = *mut raw::git_tree_entry; unsafe fn from_raw(raw: *mut raw::git_tree_entry) -> TreeEntry<'a> { TreeEntry { raw, owned: true, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_tree_entry { self.raw } } impl<'a> Clone for TreeEntry<'a> { fn clone(&self) -> TreeEntry<'a> { let mut ret = ptr::null_mut(); unsafe { assert_eq!(raw::git_tree_entry_dup(&mut ret, &*self.raw()), 0); Binding::from_raw(ret) } } } impl<'a> PartialOrd for TreeEntry<'a> { fn partial_cmp(&self, other: &TreeEntry<'a>) -> Option { Some(self.cmp(other)) } } impl<'a> Ord for TreeEntry<'a> { fn cmp(&self, other: &TreeEntry<'a>) -> Ordering { c_cmp_to_ordering(unsafe { raw::git_tree_entry_cmp(&*self.raw(), &*other.raw()) }) } } impl<'a> PartialEq for TreeEntry<'a> { fn eq(&self, other: &TreeEntry<'a>) -> bool { self.cmp(other) == Ordering::Equal } } impl<'a> Eq for TreeEntry<'a> {} impl<'a> Drop for TreeEntry<'a> { fn drop(&mut self) { if self.owned { unsafe { raw::git_tree_entry_free(self.raw) } } } } impl<'tree> Iterator for TreeIter<'tree> { type Item = TreeEntry<'tree>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.tree.get(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'tree> DoubleEndedIterator for TreeIter<'tree> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.tree.get(i)) } } impl<'tree> ExactSizeIterator for TreeIter<'tree> {} #[cfg(test)] mod tests { use super::{TreeWalkMode, TreeWalkResult}; use crate::{Object, ObjectType, Repository, Tree, TreeEntry}; use std::fs::File; use std::io::prelude::*; use std::path::Path; use tempfile::TempDir; pub struct TestTreeIter<'a> { entries: Vec>, repo: &'a Repository, } impl<'a> Iterator for TestTreeIter<'a> { type Item = TreeEntry<'a>; fn next(&mut self) -> Option> { if self.entries.is_empty() { None } else { let entry = self.entries.remove(0); match entry.kind() { Some(ObjectType::Tree) => { let obj: Object<'a> = entry.to_object(self.repo).unwrap(); let tree: &Tree<'a> = obj.as_tree().unwrap(); for entry in tree.iter() { self.entries.push(entry.to_owned()); } } _ => {} } Some(entry) } } } fn tree_iter<'repo>(tree: &Tree<'repo>, repo: &'repo Repository) -> TestTreeIter<'repo> { let mut initial = vec![]; for entry in tree.iter() { initial.push(entry.to_owned()); } TestTreeIter { entries: initial, repo: repo, } } #[test] fn smoke_tree_iter() { let (td, repo) = crate::test::repo_init(); setup_repo(&td, &repo); let head = repo.head().unwrap(); let target = 
head.target().unwrap(); let commit = repo.find_commit(target).unwrap(); let tree = repo.find_tree(commit.tree_id()).unwrap(); assert_eq!(tree.id(), commit.tree_id()); assert_eq!(tree.len(), 1); for entry in tree_iter(&tree, &repo) { println!("iter entry {:?}", entry.name()); } } fn setup_repo(td: &TempDir, repo: &Repository) { let mut index = repo.index().unwrap(); File::create(&td.path().join("foo")) .unwrap() .write_all(b"foo") .unwrap(); index.add_path(Path::new("foo")).unwrap(); let id = index.write_tree().unwrap(); let sig = repo.signature().unwrap(); let tree = repo.find_tree(id).unwrap(); let parent = repo .find_commit(repo.head().unwrap().target().unwrap()) .unwrap(); repo.commit( Some("HEAD"), &sig, &sig, "another commit", &tree, &[&parent], ) .unwrap(); } #[test] fn smoke() { let (td, repo) = crate::test::repo_init(); setup_repo(&td, &repo); let head = repo.head().unwrap(); let target = head.target().unwrap(); let commit = repo.find_commit(target).unwrap(); let tree = repo.find_tree(commit.tree_id()).unwrap(); assert_eq!(tree.id(), commit.tree_id()); assert_eq!(tree.len(), 1); { let e1 = tree.get(0).unwrap(); assert!(e1 == tree.get_id(e1.id()).unwrap()); assert!(e1 == tree.get_name("foo").unwrap()); assert!(e1 == tree.get_path(Path::new("foo")).unwrap()); assert_eq!(e1.name(), Some("foo")); e1.to_object(&repo).unwrap(); } tree.into_object(); repo.find_object(commit.tree_id(), None) .unwrap() .as_tree() .unwrap(); repo.find_object(commit.tree_id(), None) .unwrap() .into_tree() .ok() .unwrap(); } #[test] fn tree_walk() { let (td, repo) = crate::test::repo_init(); setup_repo(&td, &repo); let head = repo.head().unwrap(); let target = head.target().unwrap(); let commit = repo.find_commit(target).unwrap(); let tree = repo.find_tree(commit.tree_id()).unwrap(); let mut ct = 0; tree.walk(TreeWalkMode::PreOrder, |_, entry| { assert_eq!(entry.name(), Some("foo")); ct += 1; 0 }) .unwrap(); assert_eq!(ct, 1); let mut ct = 0; tree.walk(TreeWalkMode::PreOrder, |_, entry| { assert_eq!(entry.name(), Some("foo")); ct += 1; TreeWalkResult::Ok }) .unwrap(); assert_eq!(ct, 1); } } vendor/git2/src/reference.rs0000664000175000017500000004336714160055207016624 0ustar mwhudsonmwhudsonuse std::cmp::Ordering; use std::ffi::CString; use std::marker; use std::mem; use std::ptr; use std::str; use crate::object::CastOrPanic; use crate::util::{c_cmp_to_ordering, Binding}; use crate::{ raw, Blob, Commit, Error, Object, ObjectType, Oid, ReferenceFormat, ReferenceType, Repository, Tag, Tree, }; // Not in the public header files (yet?), but a hard limit used by libgit2 // internally const GIT_REFNAME_MAX: usize = 1024; struct Refdb<'repo>(&'repo Repository); /// A structure to represent a git [reference][1]. /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-References pub struct Reference<'repo> { raw: *mut raw::git_reference, _marker: marker::PhantomData>, } /// An iterator over the references in a repository. pub struct References<'repo> { raw: *mut raw::git_reference_iterator, _marker: marker::PhantomData>, } /// An iterator over the names of references in a repository. pub struct ReferenceNames<'repo, 'references> { inner: &'references mut References<'repo>, } impl<'repo> Reference<'repo> { /// Ensure the reference name is well-formed. /// /// Validation is performed as if [`ReferenceFormat::ALLOW_ONELEVEL`] /// was given to [`Reference::normalize_name`]. No normalization is /// performed, however. 
/// /// ```rust /// use git2::Reference; /// /// assert!(Reference::is_valid_name("HEAD")); /// assert!(Reference::is_valid_name("refs/heads/main")); /// /// // But: /// assert!(!Reference::is_valid_name("main")); /// assert!(!Reference::is_valid_name("refs/heads/*")); /// assert!(!Reference::is_valid_name("foo//bar")); /// ``` /// /// [`ReferenceFormat::ALLOW_ONELEVEL`]: /// struct.ReferenceFormat#associatedconstant.ALLOW_ONELEVEL /// [`Reference::normalize_name`]: struct.Reference#method.normalize_name pub fn is_valid_name(refname: &str) -> bool { crate::init(); let refname = CString::new(refname).unwrap(); unsafe { raw::git_reference_is_valid_name(refname.as_ptr()) == 1 } } /// Normalize reference name and check validity. /// /// This will normalize the reference name by collapsing runs of adjacent /// slashes between name components into a single slash. It also validates /// the name according to the following rules: /// /// 1. If [`ReferenceFormat::ALLOW_ONELEVEL`] is given, the name may /// contain only capital letters and underscores, and must begin and end /// with a letter. (e.g. "HEAD", "ORIG_HEAD"). /// 2. The flag [`ReferenceFormat::REFSPEC_SHORTHAND`] has an effect /// only when combined with [`ReferenceFormat::ALLOW_ONELEVEL`]. If /// it is given, "shorthand" branch names (i.e. those not prefixed by /// `refs/`, but consisting of a single word without `/` separators) /// become valid. For example, "main" would be accepted. /// 3. If [`ReferenceFormat::REFSPEC_PATTERN`] is given, the name may /// contain a single `*` in place of a full pathname component (e.g. /// `foo/*/bar`, `foo/bar*`). /// 4. Names prefixed with "refs/" can be almost anything. You must avoid /// the characters '~', '^', ':', '\\', '?', '[', and '*', and the /// sequences ".." and "@{" which have special meaning to revparse. /// /// If the reference passes validation, it is returned in normalized form, /// otherwise an [`Error`] with [`ErrorCode::InvalidSpec`] is returned. 
/// /// ```rust /// use git2::{Reference, ReferenceFormat}; /// /// assert_eq!( /// Reference::normalize_name( /// "foo//bar", /// ReferenceFormat::NORMAL /// ) /// .unwrap(), /// "foo/bar".to_owned() /// ); /// /// assert_eq!( /// Reference::normalize_name( /// "HEAD", /// ReferenceFormat::ALLOW_ONELEVEL /// ) /// .unwrap(), /// "HEAD".to_owned() /// ); /// /// assert_eq!( /// Reference::normalize_name( /// "refs/heads/*", /// ReferenceFormat::REFSPEC_PATTERN /// ) /// .unwrap(), /// "refs/heads/*".to_owned() /// ); /// /// assert_eq!( /// Reference::normalize_name( /// "main", /// ReferenceFormat::ALLOW_ONELEVEL | ReferenceFormat::REFSPEC_SHORTHAND /// ) /// .unwrap(), /// "main".to_owned() /// ); /// ``` /// /// [`ReferenceFormat::ALLOW_ONELEVEL`]: /// struct.ReferenceFormat#associatedconstant.ALLOW_ONELEVEL /// [`ReferenceFormat::REFSPEC_SHORTHAND`]: /// struct.ReferenceFormat#associatedconstant.REFSPEC_SHORTHAND /// [`ReferenceFormat::REFSPEC_PATTERN`]: /// struct.ReferenceFormat#associatedconstant.REFSPEC_PATTERN /// [`Error`]: struct.Error /// [`ErrorCode::InvalidSpec`]: enum.ErrorCode#variant.InvalidSpec pub fn normalize_name(refname: &str, flags: ReferenceFormat) -> Result { crate::init(); let mut dst = [0u8; GIT_REFNAME_MAX]; let refname = CString::new(refname)?; unsafe { try_call!(raw::git_reference_normalize_name( dst.as_mut_ptr() as *mut libc::c_char, dst.len() as libc::size_t, refname, flags.bits() )); let s = &dst[..dst.iter().position(|&a| a == 0).unwrap()]; Ok(str::from_utf8(s).unwrap().to_owned()) } } /// Get access to the underlying raw pointer. pub fn raw(&self) -> *mut raw::git_reference { self.raw } /// Delete an existing reference. /// /// This method works for both direct and symbolic references. The reference /// will be immediately removed on disk. /// /// This function will return an error if the reference has changed from the /// time it was looked up. pub fn delete(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_reference_delete(self.raw)); } Ok(()) } /// Check if a reference is a local branch. pub fn is_branch(&self) -> bool { unsafe { raw::git_reference_is_branch(&*self.raw) == 1 } } /// Check if a reference is a note. pub fn is_note(&self) -> bool { unsafe { raw::git_reference_is_note(&*self.raw) == 1 } } /// Check if a reference is a remote tracking branch pub fn is_remote(&self) -> bool { unsafe { raw::git_reference_is_remote(&*self.raw) == 1 } } /// Check if a reference is a tag pub fn is_tag(&self) -> bool { unsafe { raw::git_reference_is_tag(&*self.raw) == 1 } } /// Get the reference type of a reference. /// /// If the type is unknown, then `None` is returned. pub fn kind(&self) -> Option { ReferenceType::from_raw(unsafe { raw::git_reference_type(&*self.raw) }) } /// Get the full name of a reference. /// /// Returns `None` if the name is not valid utf-8. pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Get the full name of a reference. pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_reference_name(&*self.raw)).unwrap() } } /// Get the full shorthand of a reference. /// /// This will transform the reference name into a name "human-readable" /// version. If no shortname is appropriate, it will return the full name. /// /// Returns `None` if the shorthand is not valid utf-8. pub fn shorthand(&self) -> Option<&str> { str::from_utf8(self.shorthand_bytes()).ok() } /// Get the full shorthand of a reference. 
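// A small sketch of inspecting HEAD through the accessors above; what gets
// printed depends entirely on the state of the repository at hand.
fn describe_head(repo: &git2::Repository) -> Result<(), git2::Error> {
    let head = repo.head()?;
    println!(
        "name={:?} shorthand={:?} kind={:?} is_branch={}",
        head.name(),
        head.shorthand(),
        head.kind(),
        head.is_branch()
    );
    if let Some(oid) = head.target() {
        println!("HEAD points at {}", oid);
    }
    Ok(())
}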
pub fn shorthand_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_reference_shorthand(&*self.raw)).unwrap() } } /// Get the OID pointed to by a direct reference. /// /// Only available if the reference is direct (i.e. an object id reference, /// not a symbolic one). pub fn target(&self) -> Option { unsafe { Binding::from_raw_opt(raw::git_reference_target(&*self.raw)) } } /// Return the peeled OID target of this reference. /// /// This peeled OID only applies to direct references that point to a hard /// Tag object: it is the result of peeling such Tag. pub fn target_peel(&self) -> Option { unsafe { Binding::from_raw_opt(raw::git_reference_target_peel(&*self.raw)) } } /// Get full name to the reference pointed to by a symbolic reference. /// /// May return `None` if the reference is either not symbolic or not a /// valid utf-8 string. pub fn symbolic_target(&self) -> Option<&str> { self.symbolic_target_bytes() .and_then(|s| str::from_utf8(s).ok()) } /// Get full name to the reference pointed to by a symbolic reference. /// /// Only available if the reference is symbolic. pub fn symbolic_target_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_reference_symbolic_target(&*self.raw)) } } /// Resolve a symbolic reference to a direct reference. /// /// This method iteratively peels a symbolic reference until it resolves to /// a direct reference to an OID. /// /// If a direct reference is passed as an argument, a copy of that /// reference is returned. pub fn resolve(&self) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_resolve(&mut raw, &*self.raw)); Ok(Binding::from_raw(raw)) } } /// Peel a reference to an object /// /// This method recursively peels the reference until it reaches /// an object of the specified type. pub fn peel(&self, kind: ObjectType) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_peel(&mut raw, self.raw, kind)); Ok(Binding::from_raw(raw)) } } /// Peel a reference to a blob /// /// This method recursively peels the reference until it reaches /// a blob. pub fn peel_to_blob(&self) -> Result, Error> { Ok(self.peel(ObjectType::Blob)?.cast_or_panic(ObjectType::Blob)) } /// Peel a reference to a commit /// /// This method recursively peels the reference until it reaches /// a commit. pub fn peel_to_commit(&self) -> Result, Error> { Ok(self .peel(ObjectType::Commit)? .cast_or_panic(ObjectType::Commit)) } /// Peel a reference to a tree /// /// This method recursively peels the reference until it reaches /// a tree. pub fn peel_to_tree(&self) -> Result, Error> { Ok(self.peel(ObjectType::Tree)?.cast_or_panic(ObjectType::Tree)) } /// Peel a reference to a tag /// /// This method recursively peels the reference until it reaches /// a tag. pub fn peel_to_tag(&self) -> Result, Error> { Ok(self.peel(ObjectType::Tag)?.cast_or_panic(ObjectType::Tag)) } /// Rename an existing reference. /// /// This method works for both direct and symbolic references. /// /// If the force flag is not enabled, and there's already a reference with /// the given name, the renaming will fail. 
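// A sketch of resolving the symbolic "HEAD" reference to a direct reference
// and peeling it to a commit, assuming the repository is not empty.
fn head_commit_id(repo: &git2::Repository) -> Result<git2::Oid, git2::Error> {
    let head = repo.find_reference("HEAD")?;
    // `resolve` follows symbolic references until it reaches a direct one.
    let direct = head.resolve()?;
    let commit = direct.peel_to_commit()?;
    Ok(commit.id())
}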
pub fn rename( &mut self, new_name: &str, force: bool, msg: &str, ) -> Result, Error> { let mut raw = ptr::null_mut(); let new_name = CString::new(new_name)?; let msg = CString::new(msg)?; unsafe { try_call!(raw::git_reference_rename( &mut raw, self.raw, new_name, force, msg )); Ok(Binding::from_raw(raw)) } } /// Conditionally create a new reference with the same name as the given /// reference but a different OID target. The reference must be a direct /// reference, otherwise this will fail. /// /// The new reference will be written to disk, overwriting the given /// reference. pub fn set_target(&mut self, id: Oid, reflog_msg: &str) -> Result, Error> { let mut raw = ptr::null_mut(); let msg = CString::new(reflog_msg)?; unsafe { try_call!(raw::git_reference_set_target( &mut raw, self.raw, id.raw(), msg )); Ok(Binding::from_raw(raw)) } } } impl<'repo> PartialOrd for Reference<'repo> { fn partial_cmp(&self, other: &Reference<'repo>) -> Option { Some(self.cmp(other)) } } impl<'repo> Ord for Reference<'repo> { fn cmp(&self, other: &Reference<'repo>) -> Ordering { c_cmp_to_ordering(unsafe { raw::git_reference_cmp(&*self.raw, &*other.raw) }) } } impl<'repo> PartialEq for Reference<'repo> { fn eq(&self, other: &Reference<'repo>) -> bool { self.cmp(other) == Ordering::Equal } } impl<'repo> Eq for Reference<'repo> {} impl<'repo> Binding for Reference<'repo> { type Raw = *mut raw::git_reference; unsafe fn from_raw(raw: *mut raw::git_reference) -> Reference<'repo> { Reference { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_reference { self.raw } } impl<'repo> Drop for Reference<'repo> { fn drop(&mut self) { unsafe { raw::git_reference_free(self.raw) } } } impl<'repo> References<'repo> { /// Consumes a `References` iterator to create an iterator over just the /// name of some references. /// /// This is more efficient if only the names are desired of references as /// the references themselves don't have to be allocated and deallocated. /// /// The returned iterator will yield strings as opposed to a `Reference`. 
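// A sketch of listing reference names without materializing `Reference`
// values, using the `names()` iterator described above.
fn list_reference_names(repo: &git2::Repository) -> Result<(), git2::Error> {
    let mut refs = repo.references()?;
    for name in refs.names() {
        // Each item is a `Result<&str, Error>`.
        println!("{}", name?);
    }
    Ok(())
}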
pub fn names<'a>(&'a mut self) -> ReferenceNames<'repo, 'a> { ReferenceNames { inner: self } } } impl<'repo> Binding for References<'repo> { type Raw = *mut raw::git_reference_iterator; unsafe fn from_raw(raw: *mut raw::git_reference_iterator) -> References<'repo> { References { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_reference_iterator { self.raw } } impl<'repo> Iterator for References<'repo> { type Item = Result, Error>; fn next(&mut self) -> Option, Error>> { let mut out = ptr::null_mut(); unsafe { try_call_iter!(raw::git_reference_next(&mut out, self.raw)); Some(Ok(Binding::from_raw(out))) } } } impl<'repo> Drop for References<'repo> { fn drop(&mut self) { unsafe { raw::git_reference_iterator_free(self.raw) } } } impl<'repo, 'references> Iterator for ReferenceNames<'repo, 'references> { type Item = Result<&'references str, Error>; fn next(&mut self) -> Option> { let mut out = ptr::null(); unsafe { try_call_iter!(raw::git_reference_next_name(&mut out, self.inner.raw)); let bytes = crate::opt_bytes(self, out).unwrap(); let s = str::from_utf8(bytes).unwrap(); Some(Ok(mem::transmute::<&str, &'references str>(s))) } } } #[cfg(test)] mod tests { use crate::{ObjectType, Reference, ReferenceType}; #[test] fn smoke() { assert!(Reference::is_valid_name("refs/foo")); assert!(!Reference::is_valid_name("foo")); } #[test] fn smoke2() { let (_td, repo) = crate::test::repo_init(); let mut head = repo.head().unwrap(); assert!(head.is_branch()); assert!(!head.is_remote()); assert!(!head.is_tag()); assert!(!head.is_note()); // HEAD is a symbolic reference but git_repository_head resolves it // so it is a GIT_REFERENCE_DIRECT. assert_eq!(head.kind().unwrap(), ReferenceType::Direct); assert!(head == repo.head().unwrap()); assert_eq!(head.name(), Some("refs/heads/main")); assert!(head == repo.find_reference("refs/heads/main").unwrap()); assert_eq!( repo.refname_to_id("refs/heads/main").unwrap(), head.target().unwrap() ); assert!(head.symbolic_target().is_none()); assert!(head.target_peel().is_none()); assert_eq!(head.shorthand(), Some("main")); assert!(head.resolve().unwrap() == head); let mut tag1 = repo .reference("refs/tags/tag1", head.target().unwrap(), false, "test") .unwrap(); assert!(tag1.is_tag()); assert_eq!(tag1.kind().unwrap(), ReferenceType::Direct); let peeled_commit = tag1.peel(ObjectType::Commit).unwrap(); assert_eq!(ObjectType::Commit, peeled_commit.kind().unwrap()); assert_eq!(tag1.target().unwrap(), peeled_commit.id()); tag1.delete().unwrap(); let mut sym1 = repo .reference_symbolic("refs/tags/tag1", "refs/heads/main", false, "test") .unwrap(); assert_eq!(sym1.kind().unwrap(), ReferenceType::Symbolic); sym1.delete().unwrap(); { assert!(repo.references().unwrap().count() == 1); assert!(repo.references().unwrap().next().unwrap().unwrap() == head); let mut names = repo.references().unwrap(); let mut names = names.names(); assert_eq!(names.next().unwrap().unwrap(), "refs/heads/main"); assert!(names.next().is_none()); assert!(repo.references_glob("foo").unwrap().count() == 0); assert!(repo.references_glob("refs/heads/*").unwrap().count() == 1); } let mut head = head.rename("refs/foo", true, "test").unwrap(); head.delete().unwrap(); } } vendor/git2/src/revspec.rs0000664000175000017500000000155514160055207016326 0ustar mwhudsonmwhudsonuse crate::{Object, RevparseMode}; /// A revspec represents a range of revisions within a repository. 
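// A sketch of parsing a revision range into a `Revspec`; the spec
// "HEAD~3..HEAD" is illustrative and only resolves when enough history exists.
fn show_range(repo: &git2::Repository) -> Result<(), git2::Error> {
    let spec = repo.revparse("HEAD~3..HEAD")?;
    if spec.mode().contains(git2::RevparseMode::RANGE) {
        println!(
            "range {:?}..{:?}",
            spec.from().map(|o| o.id()),
            spec.to().map(|o| o.id())
        );
    }
    Ok(())
}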
pub struct Revspec<'repo> { from: Option>, to: Option>, mode: RevparseMode, } impl<'repo> Revspec<'repo> { /// Assembles a new revspec from the from/to components. pub fn from_objects( from: Option>, to: Option>, mode: RevparseMode, ) -> Revspec<'repo> { Revspec { from, to, mode } } /// Access the `from` range of this revspec. pub fn from(&self) -> Option<&Object<'repo>> { self.from.as_ref() } /// Access the `to` range of this revspec. pub fn to(&self) -> Option<&Object<'repo>> { self.to.as_ref() } /// Returns the intent of the revspec. pub fn mode(&self) -> RevparseMode { self.mode } } vendor/git2/src/buf.rs0000664000175000017500000000312514160055207015426 0ustar mwhudsonmwhudsonuse std::ops::{Deref, DerefMut}; use std::ptr; use std::slice; use std::str; use crate::raw; use crate::util::Binding; /// A structure to wrap an intermediate buffer used by libgit2. /// /// A buffer can be thought of a `Vec`, but the `Vec` type is not used to /// avoid copying data back and forth. pub struct Buf { raw: raw::git_buf, } impl Default for Buf { fn default() -> Self { Self::new() } } impl Buf { /// Creates a new empty buffer. pub fn new() -> Buf { crate::init(); unsafe { Binding::from_raw(&mut raw::git_buf { ptr: ptr::null_mut(), size: 0, asize: 0, } as *mut _) } } /// Attempt to view this buffer as a string slice. /// /// Returns `None` if the buffer is not valid utf-8. pub fn as_str(&self) -> Option<&str> { str::from_utf8(&**self).ok() } } impl Deref for Buf { type Target = [u8]; fn deref(&self) -> &[u8] { unsafe { slice::from_raw_parts(self.raw.ptr as *const u8, self.raw.size as usize) } } } impl DerefMut for Buf { fn deref_mut(&mut self) -> &mut [u8] { unsafe { slice::from_raw_parts_mut(self.raw.ptr as *mut u8, self.raw.size as usize) } } } impl Binding for Buf { type Raw = *mut raw::git_buf; unsafe fn from_raw(raw: *mut raw::git_buf) -> Buf { Buf { raw: *raw } } fn raw(&self) -> *mut raw::git_buf { &self.raw as *const _ as *mut _ } } impl Drop for Buf { fn drop(&mut self) { unsafe { raw::git_buf_dispose(&mut self.raw) } } } vendor/git2/src/attr.rs0000664000175000017500000001543314160055207015631 0ustar mwhudsonmwhudsonuse crate::raw; use std::ptr; use std::str; /// All possible states of an attribute. /// /// This enum is used to interpret the value returned by /// [`Repository::get_attr`](crate::Repository::get_attr) and /// [`Repository::get_attr_bytes`](crate::Repository::get_attr_bytes). #[derive(Debug, Clone, Copy, Eq)] pub enum AttrValue<'string> { /// The attribute is set to true. True, /// The attribute is unset (set to false). False, /// The attribute is set to a [valid UTF-8 string](prim@str). String(&'string str), /// The attribute is set to a string that might not be [valid UTF-8](prim@str). Bytes(&'string [u8]), /// The attribute is not specified. Unspecified, } macro_rules! from_value { ($value:expr => $string:expr) => { match unsafe { raw::git_attr_value($value.map_or(ptr::null(), |v| v.as_ptr().cast())) } { raw::GIT_ATTR_VALUE_TRUE => Self::True, raw::GIT_ATTR_VALUE_FALSE => Self::False, raw::GIT_ATTR_VALUE_STRING => $string, raw::GIT_ATTR_VALUE_UNSPECIFIED => Self::Unspecified, _ => unreachable!(), } }; } impl<'string> AttrValue<'string> { /// Returns the state of an attribute by inspecting its [value](crate::Repository::get_attr) /// by a [string](prim@str). /// /// This function always returns [`AttrValue::String`] and never returns [`AttrValue::Bytes`] /// when the attribute is set to a string. 
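// A sketch of classifying an attribute with `AttrValue::from_string`; the
// attribute name "text", the path "README.md", and the check flags are all
// assumptions for illustration, and the outcome depends on the repository's
// .gitattributes files.
fn classify_text_attr(repo: &git2::Repository) -> Result<(), git2::Error> {
    use git2::{AttrCheckFlags, AttrValue};
    let raw = repo.get_attr(
        std::path::Path::new("README.md"),
        "text",
        AttrCheckFlags::FILE_THEN_INDEX,
    )?;
    match AttrValue::from_string(raw) {
        AttrValue::True => println!("text is set"),
        AttrValue::False => println!("text is unset"),
        AttrValue::String(v) => println!("text = {}", v),
        AttrValue::Bytes(v) => println!("text = {:?} (non-UTF-8)", v),
        AttrValue::Unspecified => println!("text is unspecified"),
    }
    Ok(())
}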
pub fn from_string(value: Option<&'string str>) -> Self { from_value!(value => Self::String(value.unwrap())) } /// Returns the state of an attribute by inspecting its [value](crate::Repository::get_attr_bytes) /// by a [byte](u8) [slice]. /// /// This function will perform UTF-8 validation when the attribute is set to a string, returns /// [`AttrValue::String`] if it's valid UTF-8 and [`AttrValue::Bytes`] otherwise. pub fn from_bytes(value: Option<&'string [u8]>) -> Self { let mut value = Self::always_bytes(value); if let Self::Bytes(bytes) = value { if let Ok(string) = str::from_utf8(bytes) { value = Self::String(string); } } value } /// Returns the state of an attribute just like [`AttrValue::from_bytes`], but skips UTF-8 /// validation and always returns [`AttrValue::Bytes`] when it's set to a string. pub fn always_bytes(value: Option<&'string [u8]>) -> Self { from_value!(value => Self::Bytes(value.unwrap())) } } /// Compare two [`AttrValue`]s. /// /// Note that this implementation does not differentiate between [`AttrValue::String`] and /// [`AttrValue::Bytes`]. impl PartialEq for AttrValue<'_> { fn eq(&self, other: &AttrValue<'_>) -> bool { match (self, other) { (Self::True, AttrValue::True) | (Self::False, AttrValue::False) | (Self::Unspecified, AttrValue::Unspecified) => true, (AttrValue::String(string), AttrValue::Bytes(bytes)) | (AttrValue::Bytes(bytes), AttrValue::String(string)) => string.as_bytes() == *bytes, (AttrValue::String(left), AttrValue::String(right)) => left == right, (AttrValue::Bytes(left), AttrValue::Bytes(right)) => left == right, _ => false, } } } #[cfg(test)] mod tests { use super::AttrValue; macro_rules! test_attr_value { ($function:ident, $variant:ident) => { const ATTR_TRUE: &str = "[internal]__TRUE__"; const ATTR_FALSE: &str = "[internal]__FALSE__"; const ATTR_UNSET: &str = "[internal]__UNSET__"; let as_bytes = AsRef::<[u8]>::as_ref; // Use `matches!` here since the `PartialEq` implementation does not differentiate // between `String` and `Bytes`. 
assert!(matches!( AttrValue::$function(Some(ATTR_TRUE.as_ref())), AttrValue::$variant(s) if as_bytes(s) == ATTR_TRUE.as_bytes() )); assert!(matches!( AttrValue::$function(Some(ATTR_FALSE.as_ref())), AttrValue::$variant(s) if as_bytes(s) == ATTR_FALSE.as_bytes() )); assert!(matches!( AttrValue::$function(Some(ATTR_UNSET.as_ref())), AttrValue::$variant(s) if as_bytes(s) == ATTR_UNSET.as_bytes() )); assert!(matches!( AttrValue::$function(Some("foo".as_ref())), AttrValue::$variant(s) if as_bytes(s) == b"foo" )); assert!(matches!( AttrValue::$function(Some("bar".as_ref())), AttrValue::$variant(s) if as_bytes(s) == b"bar" )); assert_eq!(AttrValue::$function(None), AttrValue::Unspecified); }; } #[test] fn attr_value_from_string() { test_attr_value!(from_string, String); } #[test] fn attr_value_from_bytes() { test_attr_value!(from_bytes, String); assert!(matches!( AttrValue::from_bytes(Some(&[0xff])), AttrValue::Bytes(&[0xff]) )); assert!(matches!( AttrValue::from_bytes(Some(b"\xffoobar")), AttrValue::Bytes(b"\xffoobar") )); } #[test] fn attr_value_always_bytes() { test_attr_value!(always_bytes, Bytes); assert!(matches!( AttrValue::always_bytes(Some(&[0xff; 2])), AttrValue::Bytes(&[0xff, 0xff]) )); assert!(matches!( AttrValue::always_bytes(Some(b"\xffoo")), AttrValue::Bytes(b"\xffoo") )); } #[test] fn attr_value_partial_eq() { assert_eq!(AttrValue::True, AttrValue::True); assert_eq!(AttrValue::False, AttrValue::False); assert_eq!(AttrValue::String("foo"), AttrValue::String("foo")); assert_eq!(AttrValue::Bytes(b"foo"), AttrValue::Bytes(b"foo")); assert_eq!(AttrValue::String("bar"), AttrValue::Bytes(b"bar")); assert_eq!(AttrValue::Bytes(b"bar"), AttrValue::String("bar")); assert_eq!(AttrValue::Unspecified, AttrValue::Unspecified); assert_ne!(AttrValue::True, AttrValue::False); assert_ne!(AttrValue::False, AttrValue::Unspecified); assert_ne!(AttrValue::Unspecified, AttrValue::True); assert_ne!(AttrValue::True, AttrValue::String("true")); assert_ne!(AttrValue::Unspecified, AttrValue::Bytes(b"unspecified")); assert_ne!(AttrValue::Bytes(b"false"), AttrValue::False); assert_ne!(AttrValue::String("unspecified"), AttrValue::Unspecified); assert_ne!(AttrValue::String("foo"), AttrValue::String("bar")); assert_ne!(AttrValue::Bytes(b"foo"), AttrValue::Bytes(b"bar")); assert_ne!(AttrValue::String("foo"), AttrValue::Bytes(b"bar")); assert_ne!(AttrValue::Bytes(b"foo"), AttrValue::String("bar")); } } vendor/git2/src/build.rs0000664000175000017500000006710714160055207015763 0ustar mwhudsonmwhudson//! Builder-pattern objects for configuration various git operations. use libc::{c_char, c_int, c_uint, c_void, size_t}; use std::ffi::{CStr, CString}; use std::mem; use std::path::Path; use std::ptr; use crate::util::{self, Binding}; use crate::{panic, raw, Error, FetchOptions, IntoCString, Oid, Repository, Tree}; use crate::{CheckoutNotificationType, DiffFile, FileMode, Remote}; /// A builder struct which is used to build configuration for cloning a new git /// repository. /// /// # Example /// /// Cloning using SSH: /// /// ```no_run /// use git2::{Cred, Error, RemoteCallbacks}; /// use std::env; /// use std::path::Path; /// /// // Prepare callbacks. /// let mut callbacks = RemoteCallbacks::new(); /// callbacks.credentials(|_url, username_from_url, _allowed_types| { /// Cred::ssh_key( /// username_from_url.unwrap(), /// None, /// std::path::Path::new(&format!("{}/.ssh/id_rsa", env::var("HOME").unwrap())), /// None, /// ) /// }); /// /// // Prepare fetch options. 
/// let mut fo = git2::FetchOptions::new(); /// fo.remote_callbacks(callbacks); /// /// // Prepare builder. /// let mut builder = git2::build::RepoBuilder::new(); /// builder.fetch_options(fo); /// /// // Clone the project. /// builder.clone( /// "git@github.com:rust-lang/git2-rs.git", /// Path::new("/tmp/git2-rs"), /// ); /// ``` pub struct RepoBuilder<'cb> { bare: bool, branch: Option, local: bool, hardlinks: bool, checkout: Option>, fetch_opts: Option>, clone_local: Option, remote_create: Option>>, } /// Type of callback passed to `RepoBuilder::remote_create`. /// /// The second and third arguments are the remote's name and the remote's url. pub type RemoteCreate<'cb> = dyn for<'a> FnMut(&'a Repository, &str, &str) -> Result, Error> + 'cb; /// A builder struct for git tree updates, for use with `git_tree_create_updated`. pub struct TreeUpdateBuilder { updates: Vec, paths: Vec, } /// A builder struct for configuring checkouts of a repository. pub struct CheckoutBuilder<'cb> { their_label: Option, our_label: Option, ancestor_label: Option, target_dir: Option, paths: Vec, path_ptrs: Vec<*const c_char>, file_perm: Option, dir_perm: Option, disable_filters: bool, checkout_opts: u32, progress: Option>>, notify: Option>>, notify_flags: CheckoutNotificationType, } /// Checkout progress notification callback. /// /// The first argument is the path for the notification, the next is the numver /// of completed steps so far, and the final is the total number of steps. pub type Progress<'a> = dyn FnMut(Option<&Path>, usize, usize) + 'a; /// Checkout notifications callback. /// /// The first argument is the notification type, the next is the path for the /// the notification, followed by the baseline diff, target diff, and workdir diff. /// /// The callback must return a bool specifying whether the checkout should /// continue. pub type Notify<'a> = dyn FnMut( CheckoutNotificationType, Option<&Path>, Option>, Option>, Option>, ) -> bool + 'a; impl<'cb> Default for RepoBuilder<'cb> { fn default() -> Self { Self::new() } } /// Options that can be passed to `RepoBuilder::clone_local`. #[derive(Clone, Copy)] pub enum CloneLocal { /// Auto-detect (default) /// /// Here libgit2 will bypass the git-aware transport for local paths, but /// use a normal fetch for `file://` urls. Auto = raw::GIT_CLONE_LOCAL_AUTO as isize, /// Bypass the git-aware transport even for `file://` urls. Local = raw::GIT_CLONE_LOCAL as isize, /// Never bypass the git-aware transport None = raw::GIT_CLONE_NO_LOCAL as isize, /// Bypass the git-aware transport, but don't try to use hardlinks. NoLinks = raw::GIT_CLONE_LOCAL_NO_LINKS as isize, #[doc(hidden)] __Nonexhaustive = 0xff, } impl<'cb> RepoBuilder<'cb> { /// Creates a new repository builder with all of the default configuration. /// /// When ready, the `clone()` method can be used to clone a new repository /// using this configuration. pub fn new() -> RepoBuilder<'cb> { crate::init(); RepoBuilder { bare: false, branch: None, local: true, clone_local: None, hardlinks: true, checkout: None, fetch_opts: None, remote_create: None, } } /// Indicate whether the repository will be cloned as a bare repository or /// not. pub fn bare(&mut self, bare: bool) -> &mut RepoBuilder<'cb> { self.bare = bare; self } /// Specify the name of the branch to check out after the clone. /// /// If not specified, the remote's default branch will be used. 
pub fn branch(&mut self, branch: &str) -> &mut RepoBuilder<'cb> { self.branch = Some(CString::new(branch).unwrap()); self } /// Configures options for bypassing the git-aware transport on clone. /// /// Bypassing it means that instead of a fetch libgit2 will copy the object /// database directory instead of figuring out what it needs, which is /// faster. If possible, it will hardlink the files to save space. pub fn clone_local(&mut self, clone_local: CloneLocal) -> &mut RepoBuilder<'cb> { self.clone_local = Some(clone_local); self } /// Set the flag for bypassing the git aware transport mechanism for local /// paths. /// /// If `true`, the git-aware transport will be bypassed for local paths. If /// `false`, the git-aware transport will not be bypassed. #[deprecated(note = "use `clone_local` instead")] #[doc(hidden)] pub fn local(&mut self, local: bool) -> &mut RepoBuilder<'cb> { self.local = local; self } /// Set the flag for whether hardlinks are used when using a local git-aware /// transport mechanism. #[deprecated(note = "use `clone_local` instead")] #[doc(hidden)] pub fn hardlinks(&mut self, links: bool) -> &mut RepoBuilder<'cb> { self.hardlinks = links; self } /// Configure the checkout which will be performed by consuming a checkout /// builder. pub fn with_checkout(&mut self, checkout: CheckoutBuilder<'cb>) -> &mut RepoBuilder<'cb> { self.checkout = Some(checkout); self } /// Options which control the fetch, including callbacks. /// /// The callbacks are used for reporting fetch progress, and for acquiring /// credentials in the event they are needed. pub fn fetch_options(&mut self, fetch_opts: FetchOptions<'cb>) -> &mut RepoBuilder<'cb> { self.fetch_opts = Some(fetch_opts); self } /// Configures a callback used to create the git remote, prior to its being /// used to perform the clone operation. pub fn remote_create(&mut self, f: F) -> &mut RepoBuilder<'cb> where F: for<'a> FnMut(&'a Repository, &str, &str) -> Result, Error> + 'cb, { self.remote_create = Some(Box::new(f)); self } /// Clone a remote repository. /// /// This will use the options configured so far to clone the specified url /// into the specified local path. pub fn clone(&mut self, url: &str, into: &Path) -> Result { let mut opts: raw::git_clone_options = unsafe { mem::zeroed() }; unsafe { try_call!(raw::git_clone_init_options( &mut opts, raw::GIT_CLONE_OPTIONS_VERSION )); } opts.bare = self.bare as c_int; opts.checkout_branch = self .branch .as_ref() .map(|s| s.as_ptr()) .unwrap_or(ptr::null()); if let Some(ref local) = self.clone_local { opts.local = *local as raw::git_clone_local_t; } else { opts.local = match (self.local, self.hardlinks) { (true, false) => raw::GIT_CLONE_LOCAL_NO_LINKS, (false, _) => raw::GIT_CLONE_NO_LOCAL, (true, _) => raw::GIT_CLONE_LOCAL_AUTO, }; } if let Some(ref mut cbs) = self.fetch_opts { opts.fetch_opts = cbs.raw(); } if let Some(ref mut c) = self.checkout { unsafe { c.configure(&mut opts.checkout_opts); } } if let Some(ref mut callback) = self.remote_create { opts.remote_cb = Some(remote_create_cb); opts.remote_cb_payload = callback as *mut _ as *mut _; } let url = CString::new(url)?; // Normal file path OK (does not need Windows conversion). 
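// A sketch of customizing remote creation during a clone via
// `remote_create`; the fetch refspec below is an assumption for illustration,
// not a recommended default.
fn clone_with_custom_refspec(
    url: &str,
    into: &std::path::Path,
) -> Result<git2::Repository, git2::Error> {
    let mut builder = git2::build::RepoBuilder::new();
    builder.remote_create(|repo, name, remote_url| {
        // Create the remote ourselves so we can pick the fetch refspec.
        repo.remote_with_fetch(name, remote_url, "+refs/heads/*:refs/remotes/origin/*")
    });
    builder.clone(url, into)
}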
let into = into.into_c_string()?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_clone(&mut raw, url, into, &opts)); Ok(Binding::from_raw(raw)) } } } extern "C" fn remote_create_cb( out: *mut *mut raw::git_remote, repo: *mut raw::git_repository, name: *const c_char, url: *const c_char, payload: *mut c_void, ) -> c_int { unsafe { let repo = Repository::from_raw(repo); let code = panic::wrap(|| { let name = CStr::from_ptr(name).to_str().unwrap(); let url = CStr::from_ptr(url).to_str().unwrap(); let f = payload as *mut Box>; match (*f)(&repo, name, url) { Ok(remote) => { *out = crate::remote::remote_into_raw(remote); 0 } Err(e) => e.raw_code(), } }); mem::forget(repo); code.unwrap_or(-1) } } impl<'cb> Default for CheckoutBuilder<'cb> { fn default() -> Self { Self::new() } } impl<'cb> CheckoutBuilder<'cb> { /// Creates a new builder for checkouts with all of its default /// configuration. pub fn new() -> CheckoutBuilder<'cb> { crate::init(); CheckoutBuilder { disable_filters: false, dir_perm: None, file_perm: None, path_ptrs: Vec::new(), paths: Vec::new(), target_dir: None, ancestor_label: None, our_label: None, their_label: None, checkout_opts: raw::GIT_CHECKOUT_SAFE as u32, progress: None, notify: None, notify_flags: CheckoutNotificationType::empty(), } } /// Indicate that this checkout should perform a dry run by checking for /// conflicts but not make any actual changes. pub fn dry_run(&mut self) -> &mut CheckoutBuilder<'cb> { self.checkout_opts &= !((1 << 4) - 1); self.checkout_opts |= raw::GIT_CHECKOUT_NONE as u32; self } /// Take any action necessary to get the working directory to match the /// target including potentially discarding modified files. pub fn force(&mut self) -> &mut CheckoutBuilder<'cb> { self.checkout_opts &= !((1 << 4) - 1); self.checkout_opts |= raw::GIT_CHECKOUT_FORCE as u32; self } /// Indicate that the checkout should be performed safely, allowing new /// files to be created but not overwriting extisting files or changes. /// /// This is the default. pub fn safe(&mut self) -> &mut CheckoutBuilder<'cb> { self.checkout_opts &= !((1 << 4) - 1); self.checkout_opts |= raw::GIT_CHECKOUT_SAFE as u32; self } fn flag(&mut self, bit: raw::git_checkout_strategy_t, on: bool) -> &mut CheckoutBuilder<'cb> { if on { self.checkout_opts |= bit as u32; } else { self.checkout_opts &= !(bit as u32); } self } /// In safe mode, create files that don't exist. /// /// Defaults to false. pub fn recreate_missing(&mut self, allow: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_RECREATE_MISSING, allow) } /// In safe mode, apply safe file updates even when there are conflicts /// instead of canceling the checkout. /// /// Defaults to false. pub fn allow_conflicts(&mut self, allow: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_ALLOW_CONFLICTS, allow) } /// Remove untracked files from the working dir. /// /// Defaults to false. pub fn remove_untracked(&mut self, remove: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_REMOVE_UNTRACKED, remove) } /// Remove ignored files from the working dir. /// /// Defaults to false. pub fn remove_ignored(&mut self, remove: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_REMOVE_IGNORED, remove) } /// Only update the contents of files that already exist. /// /// If set, files will not be created or deleted. /// /// Defaults to false. 
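// A sketch of a forced checkout of HEAD that also removes untracked files.
// This discards local modifications, so it is shown purely to illustrate how
// the strategy flags above combine.
fn hard_reset_worktree(repo: &git2::Repository) -> Result<(), git2::Error> {
    let mut opts = git2::build::CheckoutBuilder::new();
    opts.force().remove_untracked(true);
    repo.checkout_head(Some(&mut opts))
}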
pub fn update_only(&mut self, update: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_UPDATE_ONLY, update) } /// Prevents checkout from writing the updated files' information to the /// index. /// /// Defaults to true. pub fn update_index(&mut self, update: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_DONT_UPDATE_INDEX, !update) } /// Indicate whether the index and git attributes should be refreshed from /// disk before any operations. /// /// Defaults to true, pub fn refresh(&mut self, refresh: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_NO_REFRESH, !refresh) } /// Skip files with unmerged index entries. /// /// Defaults to false. pub fn skip_unmerged(&mut self, skip: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_SKIP_UNMERGED, skip) } /// Indicate whether the checkout should proceed on conflicts by using the /// stage 2 version of the file ("ours"). /// /// Defaults to false. pub fn use_ours(&mut self, ours: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_USE_OURS, ours) } /// Indicate whether the checkout should proceed on conflicts by using the /// stage 3 version of the file ("theirs"). /// /// Defaults to false. pub fn use_theirs(&mut self, theirs: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_USE_THEIRS, theirs) } /// Indicate whether ignored files should be overwritten during the checkout. /// /// Defaults to true. pub fn overwrite_ignored(&mut self, overwrite: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_DONT_OVERWRITE_IGNORED, !overwrite) } /// Indicate whether a normal merge file should be written for conflicts. /// /// Defaults to false. pub fn conflict_style_merge(&mut self, on: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_CONFLICT_STYLE_MERGE, on) } /// Specify for which notification types to invoke the notification /// callback. /// /// Defaults to none. pub fn notify_on( &mut self, notification_types: CheckoutNotificationType, ) -> &mut CheckoutBuilder<'cb> { self.notify_flags = notification_types; self } /// Indicates whether to include common ancestor data in diff3 format files /// for conflicts. /// /// Defaults to false. pub fn conflict_style_diff3(&mut self, on: bool) -> &mut CheckoutBuilder<'cb> { self.flag(raw::GIT_CHECKOUT_CONFLICT_STYLE_DIFF3, on) } /// Indicate whether to apply filters like CRLF conversion. pub fn disable_filters(&mut self, disable: bool) -> &mut CheckoutBuilder<'cb> { self.disable_filters = disable; self } /// Set the mode with which new directories are created. /// /// Default is 0755 pub fn dir_perm(&mut self, perm: i32) -> &mut CheckoutBuilder<'cb> { self.dir_perm = Some(perm); self } /// Set the mode with which new files are created. /// /// The default is 0644 or 0755 as dictated by the blob. pub fn file_perm(&mut self, perm: i32) -> &mut CheckoutBuilder<'cb> { self.file_perm = Some(perm); self } /// Add a path to be checked out. /// /// If no paths are specified, then all files are checked out. Otherwise /// only these specified paths are checked out. pub fn path(&mut self, path: T) -> &mut CheckoutBuilder<'cb> { let path = util::cstring_to_repo_path(path).unwrap(); self.path_ptrs.push(path.as_ptr()); self.paths.push(path); self } /// Set the directory to check out to pub fn target_dir(&mut self, dst: &Path) -> &mut CheckoutBuilder<'cb> { // Normal file path OK (does not need Windows conversion). 
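// A sketch of checking a tree out into a separate directory with a progress
// callback; the destination "/tmp/export" is a hypothetical path.
fn export_tree(repo: &git2::Repository, tree: &git2::Tree<'_>) -> Result<(), git2::Error> {
    let mut opts = git2::build::CheckoutBuilder::new();
    opts.target_dir(std::path::Path::new("/tmp/export"))
        .recreate_missing(true)
        .progress(|path, completed, total| {
            println!("{}/{} {:?}", completed, total, path);
        });
    repo.checkout_tree(tree.as_object(), Some(&mut opts))
}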
self.target_dir = Some(dst.into_c_string().unwrap()); self } /// The name of the common ancestor side of conflicts pub fn ancestor_label(&mut self, label: &str) -> &mut CheckoutBuilder<'cb> { self.ancestor_label = Some(CString::new(label).unwrap()); self } /// The name of the common our side of conflicts pub fn our_label(&mut self, label: &str) -> &mut CheckoutBuilder<'cb> { self.our_label = Some(CString::new(label).unwrap()); self } /// The name of the common their side of conflicts pub fn their_label(&mut self, label: &str) -> &mut CheckoutBuilder<'cb> { self.their_label = Some(CString::new(label).unwrap()); self } /// Set a callback to receive notifications of checkout progress. pub fn progress(&mut self, cb: F) -> &mut CheckoutBuilder<'cb> where F: FnMut(Option<&Path>, usize, usize) + 'cb, { self.progress = Some(Box::new(cb) as Box>); self } /// Set a callback to receive checkout notifications. /// /// Callbacks are invoked prior to modifying any files on disk. /// Returning `false` from the callback will cancel the checkout. pub fn notify(&mut self, cb: F) -> &mut CheckoutBuilder<'cb> where F: FnMut( CheckoutNotificationType, Option<&Path>, Option>, Option>, Option>, ) -> bool + 'cb, { self.notify = Some(Box::new(cb) as Box>); self } /// Configure a raw checkout options based on this configuration. /// /// This method is unsafe as there is no guarantee that this structure will /// outlive the provided checkout options. pub unsafe fn configure(&mut self, opts: &mut raw::git_checkout_options) { opts.version = raw::GIT_CHECKOUT_OPTIONS_VERSION; opts.disable_filters = self.disable_filters as c_int; opts.dir_mode = self.dir_perm.unwrap_or(0) as c_uint; opts.file_mode = self.file_perm.unwrap_or(0) as c_uint; if !self.path_ptrs.is_empty() { opts.paths.strings = self.path_ptrs.as_ptr() as *mut _; opts.paths.count = self.path_ptrs.len() as size_t; } if let Some(ref c) = self.target_dir { opts.target_directory = c.as_ptr(); } if let Some(ref c) = self.ancestor_label { opts.ancestor_label = c.as_ptr(); } if let Some(ref c) = self.our_label { opts.our_label = c.as_ptr(); } if let Some(ref c) = self.their_label { opts.their_label = c.as_ptr(); } if self.progress.is_some() { opts.progress_cb = Some(progress_cb); opts.progress_payload = self as *mut _ as *mut _; } if self.notify.is_some() { opts.notify_cb = Some(notify_cb); opts.notify_payload = self as *mut _ as *mut _; opts.notify_flags = self.notify_flags.bits() as c_uint; } opts.checkout_strategy = self.checkout_opts as c_uint; } } extern "C" fn progress_cb( path: *const c_char, completed: size_t, total: size_t, data: *mut c_void, ) { panic::wrap(|| unsafe { let payload = &mut *(data as *mut CheckoutBuilder<'_>); let callback = match payload.progress { Some(ref mut c) => c, None => return, }; let path = if path.is_null() { None } else { Some(util::bytes2path(CStr::from_ptr(path).to_bytes())) }; callback(path, completed as usize, total as usize) }); } extern "C" fn notify_cb( why: raw::git_checkout_notify_t, path: *const c_char, baseline: *const raw::git_diff_file, target: *const raw::git_diff_file, workdir: *const raw::git_diff_file, data: *mut c_void, ) -> c_int { // pack callback etc panic::wrap(|| unsafe { let payload = &mut *(data as *mut CheckoutBuilder<'_>); let callback = match payload.notify { Some(ref mut c) => c, None => return 0, }; let path = if path.is_null() { None } else { Some(util::bytes2path(CStr::from_ptr(path).to_bytes())) }; let baseline = if baseline.is_null() { None } else { Some(DiffFile::from_raw(baseline)) }; let target 
= if target.is_null() { None } else { Some(DiffFile::from_raw(target)) }; let workdir = if workdir.is_null() { None } else { Some(DiffFile::from_raw(workdir)) }; let why = CheckoutNotificationType::from_bits_truncate(why as u32); let keep_going = callback(why, path, baseline, target, workdir); if keep_going { 0 } else { 1 } }) .unwrap_or(2) } impl Default for TreeUpdateBuilder { fn default() -> Self { Self::new() } } impl TreeUpdateBuilder { /// Create a new empty series of updates. pub fn new() -> Self { Self { updates: Vec::new(), paths: Vec::new(), } } /// Add an update removing the specified `path` from a tree. pub fn remove(&mut self, path: T) -> &mut Self { let path = util::cstring_to_repo_path(path).unwrap(); let path_ptr = path.as_ptr(); self.paths.push(path); self.updates.push(raw::git_tree_update { action: raw::GIT_TREE_UPDATE_REMOVE, id: raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }, filemode: raw::GIT_FILEMODE_UNREADABLE, path: path_ptr, }); self } /// Add an update setting the specified `path` to a specific Oid, whether it currently exists /// or not. /// /// Note that libgit2 does not support an upsert of a previously removed path, or an upsert /// that changes the type of an object (such as from tree to blob or vice versa). pub fn upsert(&mut self, path: T, id: Oid, filemode: FileMode) -> &mut Self { let path = util::cstring_to_repo_path(path).unwrap(); let path_ptr = path.as_ptr(); self.paths.push(path); self.updates.push(raw::git_tree_update { action: raw::GIT_TREE_UPDATE_UPSERT, id: unsafe { *id.raw() }, filemode: u32::from(filemode) as raw::git_filemode_t, path: path_ptr, }); self } /// Create a new tree from the specified baseline and this series of updates. /// /// The baseline tree must exist in the specified repository. pub fn create_updated(&mut self, repo: &Repository, baseline: &Tree<'_>) -> Result { let mut ret = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_tree_create_updated( &mut ret, repo.raw(), baseline.raw(), self.updates.len(), self.updates.as_ptr() )); Ok(Binding::from_raw(&ret as *const _)) } } } #[cfg(test)] mod tests { use super::{CheckoutBuilder, RepoBuilder, TreeUpdateBuilder}; use crate::{CheckoutNotificationType, FileMode, Repository}; use std::fs; use std::path::Path; use tempfile::TempDir; #[test] fn smoke() { let r = RepoBuilder::new().clone("/path/to/nowhere", Path::new("foo")); assert!(r.is_err()); } #[test] fn smoke2() { let td = TempDir::new().unwrap(); Repository::init_bare(&td.path().join("bare")).unwrap(); let url = if cfg!(unix) { format!("file://{}/bare", td.path().display()) } else { format!( "file:///{}/bare", td.path().display().to_string().replace("\\", "/") ) }; let dst = td.path().join("foo"); RepoBuilder::new().clone(&url, &dst).unwrap(); fs::remove_dir_all(&dst).unwrap(); assert!(RepoBuilder::new().branch("foo").clone(&url, &dst).is_err()); } #[test] fn smoke_tree_create_updated() { let (_tempdir, repo) = crate::test::repo_init(); let (_, tree_id) = crate::test::commit(&repo); let tree = t!(repo.find_tree(tree_id)); assert!(tree.get_name("bar").is_none()); let foo_id = tree.get_name("foo").unwrap().id(); let tree2_id = t!(TreeUpdateBuilder::new() .remove("foo") .upsert("bar/baz", foo_id, FileMode::Blob) .create_updated(&repo, &tree)); let tree2 = t!(repo.find_tree(tree2_id)); assert!(tree2.get_name("foo").is_none()); let baz_id = tree2.get_path(Path::new("bar/baz")).unwrap().id(); assert_eq!(foo_id, baz_id); } /// Issue regression test #365 #[test] fn notify_callback() { let td = 
TempDir::new().unwrap(); let cd = TempDir::new().unwrap(); { let mut opts = crate::RepositoryInitOptions::new(); opts.initial_head("main"); let repo = Repository::init_opts(&td.path(), &opts).unwrap(); let mut config = repo.config().unwrap(); config.set_str("user.name", "name").unwrap(); config.set_str("user.email", "email").unwrap(); let mut index = repo.index().unwrap(); let p = Path::new(td.path()).join("file"); println!("using path {:?}", p); fs::File::create(&p).unwrap(); index.add_path(&Path::new("file")).unwrap(); let id = index.write_tree().unwrap(); let tree = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); repo.commit(Some("HEAD"), &sig, &sig, "initial", &tree, &[]) .unwrap(); } let repo = Repository::open_bare(&td.path().join(".git")).unwrap(); let tree = repo .revparse_single(&"main") .unwrap() .peel_to_tree() .unwrap(); let mut index = repo.index().unwrap(); index.read_tree(&tree).unwrap(); let mut checkout_opts = CheckoutBuilder::new(); checkout_opts.target_dir(&cd.path()); checkout_opts.notify_on(CheckoutNotificationType::all()); checkout_opts.notify(|_notif, _path, baseline, target, workdir| { assert!(baseline.is_none()); assert_eq!(target.unwrap().path(), Some(Path::new("file"))); assert!(workdir.is_none()); true }); repo.checkout_index(Some(&mut index), Some(&mut checkout_opts)) .unwrap(); } } vendor/git2/src/pathspec.rs0000664000175000017500000002652114160055207016466 0ustar mwhudsonmwhudsonuse libc::size_t; use std::iter::IntoIterator; use std::marker; use std::ops::Range; use std::path::Path; use std::ptr; use crate::util::{path_to_repo_path, Binding}; use crate::{raw, Diff, DiffDelta, Error, Index, IntoCString, PathspecFlags, Repository, Tree}; /// Structure representing a compiled pathspec used for matching against various /// structures. pub struct Pathspec { raw: *mut raw::git_pathspec, } /// List of filenames matching a pathspec. pub struct PathspecMatchList<'ps> { raw: *mut raw::git_pathspec_match_list, _marker: marker::PhantomData<&'ps Pathspec>, } /// Iterator over the matched paths in a pathspec. pub struct PathspecEntries<'list> { range: Range, list: &'list PathspecMatchList<'list>, } /// Iterator over the matching diff deltas. pub struct PathspecDiffEntries<'list> { range: Range, list: &'list PathspecMatchList<'list>, } /// Iterator over the failed list of pathspec items that did not match. pub struct PathspecFailedEntries<'list> { range: Range, list: &'list PathspecMatchList<'list>, } impl Pathspec { /// Creates a new pathspec from a list of specs to match against. pub fn new(specs: I) -> Result where T: IntoCString, I: IntoIterator, { crate::init(); let (_a, _b, arr) = crate::util::iter2cstrs_paths(specs)?; unsafe { let mut ret = ptr::null_mut(); try_call!(raw::git_pathspec_new(&mut ret, &arr)); Ok(Binding::from_raw(ret)) } } /// Match a pathspec against files in a diff. /// /// The list returned contains the list of all matched filenames (unless you /// pass `PATHSPEC_FAILURES_ONLY` in the flags) and may also contain the /// list of pathspecs with no match if the `PATHSPEC_FIND_FAILURES` flag is /// specified. pub fn match_diff( &self, diff: &Diff<'_>, flags: PathspecFlags, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_pathspec_match_diff( &mut ret, diff.raw(), flags.bits(), self.raw )); Ok(Binding::from_raw(ret)) } } /// Match a pathspec against files in a tree. 
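// A sketch of compiling a pathspec and matching individual paths against it
// in memory; the patterns and candidate paths are illustrative.
fn match_some_paths() -> Result<(), git2::Error> {
    use git2::{Pathspec, PathspecFlags};
    use std::path::Path;
    let ps = Pathspec::new(["src/*.rs", "docs"].iter())?;
    for candidate in &["src/lib.rs", "docs/index.md", "Cargo.toml"] {
        let hit = ps.matches_path(Path::new(candidate), PathspecFlags::DEFAULT);
        println!("{}: {}", candidate, hit);
    }
    Ok(())
}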
/// /// The list returned contains the list of all matched filenames (unless you /// pass `PATHSPEC_FAILURES_ONLY` in the flags) and may also contain the /// list of pathspecs with no match if the `PATHSPEC_FIND_FAILURES` flag is /// specified. pub fn match_tree( &self, tree: &Tree<'_>, flags: PathspecFlags, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_pathspec_match_tree( &mut ret, tree.raw(), flags.bits(), self.raw )); Ok(Binding::from_raw(ret)) } } /// This matches the pathspec against the files in the repository index. /// /// The list returned contains the list of all matched filenames (unless you /// pass `PATHSPEC_FAILURES_ONLY` in the flags) and may also contain the /// list of pathspecs with no match if the `PATHSPEC_FIND_FAILURES` flag is /// specified. pub fn match_index( &self, index: &Index, flags: PathspecFlags, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_pathspec_match_index( &mut ret, index.raw(), flags.bits(), self.raw )); Ok(Binding::from_raw(ret)) } } /// Match a pathspec against the working directory of a repository. /// /// This matches the pathspec against the current files in the working /// directory of the repository. It is an error to invoke this on a bare /// repo. This handles git ignores (i.e. ignored files will not be /// considered to match the pathspec unless the file is tracked in the /// index). /// /// The list returned contains the list of all matched filenames (unless you /// pass `PATHSPEC_FAILURES_ONLY` in the flags) and may also contain the /// list of pathspecs with no match if the `PATHSPEC_FIND_FAILURES` flag is /// specified. pub fn match_workdir( &self, repo: &Repository, flags: PathspecFlags, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_pathspec_match_workdir( &mut ret, repo.raw(), flags.bits(), self.raw )); Ok(Binding::from_raw(ret)) } } /// Try to match a path against a pathspec /// /// Unlike most of the other pathspec matching functions, this will not fall /// back on the native case-sensitivity for your platform. You must /// explicitly pass flags to control case sensitivity or else this will fall /// back on being case sensitive. pub fn matches_path(&self, path: &Path, flags: PathspecFlags) -> bool { let path = path_to_repo_path(path).unwrap(); unsafe { raw::git_pathspec_matches_path(&*self.raw, flags.bits(), path.as_ptr()) == 1 } } } impl Binding for Pathspec { type Raw = *mut raw::git_pathspec; unsafe fn from_raw(raw: *mut raw::git_pathspec) -> Pathspec { Pathspec { raw } } fn raw(&self) -> *mut raw::git_pathspec { self.raw } } impl Drop for Pathspec { fn drop(&mut self) { unsafe { raw::git_pathspec_free(self.raw) } } } impl<'ps> PathspecMatchList<'ps> { fn entrycount(&self) -> usize { unsafe { raw::git_pathspec_match_list_entrycount(&*self.raw) as usize } } fn failed_entrycount(&self) -> usize { unsafe { raw::git_pathspec_match_list_failed_entrycount(&*self.raw) as usize } } /// Returns an iterator over the matching filenames in this list. pub fn entries(&self) -> PathspecEntries<'_> { let n = self.entrycount(); let n = if n > 0 && self.entry(0).is_none() { 0 } else { n }; PathspecEntries { range: 0..n, list: self, } } /// Get a matching filename by position. /// /// If this list was generated from a diff, then the return value will /// always be `None. 
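///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation): it assumes a
/// repository already exists at `"."` and unwraps errors for brevity. It
/// matches the index and prints each matched filename by position.
///
/// ```no_run
/// use git2::{Pathspec, PathspecFlags, Repository};
///
/// let repo = Repository::open(".").unwrap();
/// let index = repo.index().unwrap();
/// // "*.rs" is an illustrative pattern, not anything mandated by the API.
/// let ps = Pathspec::new(["*.rs"].iter()).unwrap();
/// let list = ps.match_index(&index, PathspecFlags::DEFAULT).unwrap();
/// for i in 0..list.entries().len() {
///     if let Some(bytes) = list.entry(i) {
///         println!("{}", String::from_utf8_lossy(bytes));
///     }
/// }
/// ```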
pub fn entry(&self, i: usize) -> Option<&[u8]> { unsafe { let ptr = raw::git_pathspec_match_list_entry(&*self.raw, i as size_t); crate::opt_bytes(self, ptr) } } /// Returns an iterator over the matching diff entries in this list. pub fn diff_entries(&self) -> PathspecDiffEntries<'_> { let n = self.entrycount(); let n = if n > 0 && self.diff_entry(0).is_none() { 0 } else { n }; PathspecDiffEntries { range: 0..n, list: self, } } /// Get a matching diff delta by position. /// /// If the list was not generated from a diff, then the return value will /// always be `None`. pub fn diff_entry(&self, i: usize) -> Option> { unsafe { let ptr = raw::git_pathspec_match_list_diff_entry(&*self.raw, i as size_t); Binding::from_raw_opt(ptr as *mut _) } } /// Returns an iterator over the non-matching entries in this list. pub fn failed_entries(&self) -> PathspecFailedEntries<'_> { let n = self.failed_entrycount(); let n = if n > 0 && self.failed_entry(0).is_none() { 0 } else { n }; PathspecFailedEntries { range: 0..n, list: self, } } /// Get an original pathspec string that had no matches. pub fn failed_entry(&self, i: usize) -> Option<&[u8]> { unsafe { let ptr = raw::git_pathspec_match_list_failed_entry(&*self.raw, i as size_t); crate::opt_bytes(self, ptr) } } } impl<'ps> Binding for PathspecMatchList<'ps> { type Raw = *mut raw::git_pathspec_match_list; unsafe fn from_raw(raw: *mut raw::git_pathspec_match_list) -> PathspecMatchList<'ps> { PathspecMatchList { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_pathspec_match_list { self.raw } } impl<'ps> Drop for PathspecMatchList<'ps> { fn drop(&mut self) { unsafe { raw::git_pathspec_match_list_free(self.raw) } } } impl<'list> Iterator for PathspecEntries<'list> { type Item = &'list [u8]; fn next(&mut self) -> Option<&'list [u8]> { self.range.next().and_then(|i| self.list.entry(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'list> DoubleEndedIterator for PathspecEntries<'list> { fn next_back(&mut self) -> Option<&'list [u8]> { self.range.next_back().and_then(|i| self.list.entry(i)) } } impl<'list> ExactSizeIterator for PathspecEntries<'list> {} impl<'list> Iterator for PathspecDiffEntries<'list> { type Item = DiffDelta<'list>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.list.diff_entry(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'list> DoubleEndedIterator for PathspecDiffEntries<'list> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.list.diff_entry(i)) } } impl<'list> ExactSizeIterator for PathspecDiffEntries<'list> {} impl<'list> Iterator for PathspecFailedEntries<'list> { type Item = &'list [u8]; fn next(&mut self) -> Option<&'list [u8]> { self.range.next().and_then(|i| self.list.failed_entry(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'list> DoubleEndedIterator for PathspecFailedEntries<'list> { fn next_back(&mut self) -> Option<&'list [u8]> { self.range .next_back() .and_then(|i| self.list.failed_entry(i)) } } impl<'list> ExactSizeIterator for PathspecFailedEntries<'list> {} #[cfg(test)] mod tests { use super::Pathspec; use crate::PathspecFlags; use std::fs::File; use std::path::Path; #[test] fn smoke() { let ps = Pathspec::new(["a"].iter()).unwrap(); assert!(ps.matches_path(Path::new("a"), PathspecFlags::DEFAULT)); assert!(ps.matches_path(Path::new("a/b"), PathspecFlags::DEFAULT)); assert!(!ps.matches_path(Path::new("b"), PathspecFlags::DEFAULT)); 
assert!(!ps.matches_path(Path::new("ab/c"), PathspecFlags::DEFAULT)); let (td, repo) = crate::test::repo_init(); let list = ps.match_workdir(&repo, PathspecFlags::DEFAULT).unwrap(); assert_eq!(list.entries().len(), 0); assert_eq!(list.diff_entries().len(), 0); assert_eq!(list.failed_entries().len(), 0); File::create(&td.path().join("a")).unwrap(); let list = ps .match_workdir(&repo, crate::PathspecFlags::FIND_FAILURES) .unwrap(); assert_eq!(list.entries().len(), 1); assert_eq!(list.entries().next(), Some("a".as_bytes())); } } vendor/git2/src/cherrypick.rs0000664000175000017500000000435414160055207017022 0ustar mwhudsonmwhudsonuse std::mem; use crate::build::CheckoutBuilder; use crate::merge::MergeOptions; use crate::raw; use std::ptr; /// Options to specify when cherry picking pub struct CherrypickOptions<'cb> { mainline: u32, checkout_builder: Option>, merge_opts: Option, } impl<'cb> CherrypickOptions<'cb> { /// Creates a default set of cherrypick options pub fn new() -> CherrypickOptions<'cb> { CherrypickOptions { mainline: 0, checkout_builder: None, merge_opts: None, } } /// Set the mainline value /// /// For merge commits, the "mainline" is treated as the parent. pub fn mainline(&mut self, mainline: u32) -> &mut Self { self.mainline = mainline; self } /// Set the checkout builder pub fn checkout_builder(&mut self, cb: CheckoutBuilder<'cb>) -> &mut Self { self.checkout_builder = Some(cb); self } /// Set the merge options pub fn merge_opts(&mut self, merge_opts: MergeOptions) -> &mut Self { self.merge_opts = Some(merge_opts); self } /// Obtain the raw struct pub fn raw(&mut self) -> raw::git_cherrypick_options { unsafe { let mut checkout_opts: raw::git_checkout_options = mem::zeroed(); raw::git_checkout_init_options(&mut checkout_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION); if let Some(ref mut cb) = self.checkout_builder { cb.configure(&mut checkout_opts); } let mut merge_opts: raw::git_merge_options = mem::zeroed(); raw::git_merge_init_options(&mut merge_opts, raw::GIT_MERGE_OPTIONS_VERSION); if let Some(ref opts) = self.merge_opts { ptr::copy(opts.raw(), &mut merge_opts, 1); } let mut cherrypick_opts: raw::git_cherrypick_options = mem::zeroed(); raw::git_cherrypick_init_options( &mut cherrypick_opts, raw::GIT_CHERRYPICK_OPTIONS_VERSION, ); cherrypick_opts.mainline = self.mainline; cherrypick_opts.checkout_opts = checkout_opts; cherrypick_opts.merge_opts = merge_opts; cherrypick_opts } } } vendor/git2/src/string_array.rs0000664000175000017500000000701514160055207017360 0ustar mwhudsonmwhudson//! Bindings to libgit2's raw `git_strarray` type use std::ops::Range; use std::str; use crate::raw; use crate::util::Binding; /// A string array structure used by libgit2 /// /// Some apis return arrays of strings which originate from libgit2. This /// wrapper type behaves a little like `Vec<&str>` but does so without copying /// the underlying strings until necessary. pub struct StringArray { raw: raw::git_strarray, } /// A forward iterator over the strings of an array, casted to `&str`. pub struct Iter<'a> { range: Range, arr: &'a StringArray, } /// A forward iterator over the strings of an array, casted to `&[u8]`. pub struct IterBytes<'a> { range: Range, arr: &'a StringArray, } impl StringArray { /// Returns None if the i'th string is not utf8 or if i is out of bounds. pub fn get(&self, i: usize) -> Option<&str> { self.get_bytes(i).and_then(|s| str::from_utf8(s).ok()) } /// Returns None if `i` is out of bounds. 
pub fn get_bytes(&self, i: usize) -> Option<&[u8]> { if i < self.raw.count as usize { unsafe { let ptr = *self.raw.strings.add(i) as *const _; Some(crate::opt_bytes(self, ptr).unwrap()) } } else { None } } /// Returns an iterator over the strings contained within this array. /// /// The iterator yields `Option<&str>` as it is unknown whether the contents /// are utf-8 or not. pub fn iter(&self) -> Iter<'_> { Iter { range: 0..self.len(), arr: self, } } /// Returns an iterator over the strings contained within this array, /// yielding byte slices. pub fn iter_bytes(&self) -> IterBytes<'_> { IterBytes { range: 0..self.len(), arr: self, } } /// Returns the number of strings in this array. pub fn len(&self) -> usize { self.raw.count as usize } /// Return `true` if this array is empty. pub fn is_empty(&self) -> bool { self.len() == 0 } } impl Binding for StringArray { type Raw = raw::git_strarray; unsafe fn from_raw(raw: raw::git_strarray) -> StringArray { StringArray { raw } } fn raw(&self) -> raw::git_strarray { self.raw } } impl<'a> IntoIterator for &'a StringArray { type Item = Option<&'a str>; type IntoIter = Iter<'a>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a> Iterator for Iter<'a> { type Item = Option<&'a str>; fn next(&mut self) -> Option> { self.range.next().map(|i| self.arr.get(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'a> DoubleEndedIterator for Iter<'a> { fn next_back(&mut self) -> Option> { self.range.next_back().map(|i| self.arr.get(i)) } } impl<'a> ExactSizeIterator for Iter<'a> {} impl<'a> Iterator for IterBytes<'a> { type Item = &'a [u8]; fn next(&mut self) -> Option<&'a [u8]> { self.range.next().and_then(|i| self.arr.get_bytes(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'a> DoubleEndedIterator for IterBytes<'a> { fn next_back(&mut self) -> Option<&'a [u8]> { self.range.next_back().and_then(|i| self.arr.get_bytes(i)) } } impl<'a> ExactSizeIterator for IterBytes<'a> {} impl Drop for StringArray { fn drop(&mut self) { unsafe { raw::git_strarray_free(&mut self.raw) } } } vendor/git2/src/tracing.rs0000664000175000017500000000450114160055207016300 0ustar mwhudsonmwhudsonuse std::sync::atomic::{AtomicUsize, Ordering}; use libc::c_char; use crate::{panic, raw, util::Binding}; /// Available tracing levels. When tracing is set to a particular level, /// callers will be provided tracing at the given level and all lower levels. #[derive(Copy, Clone, Debug)] pub enum TraceLevel { /// No tracing will be performed. 
None, /// Severe errors that may impact the program's execution Fatal, /// Errors that do not impact the program's execution Error, /// Warnings that suggest abnormal data Warn, /// Informational messages about program execution Info, /// Detailed data that allows for debugging Debug, /// Exceptionally detailed debugging data Trace, } impl Binding for TraceLevel { type Raw = raw::git_trace_level_t; unsafe fn from_raw(raw: raw::git_trace_level_t) -> Self { match raw { raw::GIT_TRACE_NONE => Self::None, raw::GIT_TRACE_FATAL => Self::Fatal, raw::GIT_TRACE_ERROR => Self::Error, raw::GIT_TRACE_WARN => Self::Warn, raw::GIT_TRACE_INFO => Self::Info, raw::GIT_TRACE_DEBUG => Self::Debug, raw::GIT_TRACE_TRACE => Self::Trace, _ => panic!("Unknown git trace level"), } } fn raw(&self) -> raw::git_trace_level_t { match *self { Self::None => raw::GIT_TRACE_NONE, Self::Fatal => raw::GIT_TRACE_FATAL, Self::Error => raw::GIT_TRACE_ERROR, Self::Warn => raw::GIT_TRACE_WARN, Self::Info => raw::GIT_TRACE_INFO, Self::Debug => raw::GIT_TRACE_DEBUG, Self::Trace => raw::GIT_TRACE_TRACE, } } } pub type TracingCb = fn(TraceLevel, &str); static CALLBACK: AtomicUsize = AtomicUsize::new(0); /// pub fn trace_set(level: TraceLevel, cb: TracingCb) -> bool { CALLBACK.store(cb as usize, Ordering::SeqCst); unsafe { raw::git_trace_set(level.raw(), Some(tracing_cb_c)); } return true; } extern "C" fn tracing_cb_c(level: raw::git_trace_level_t, msg: *const c_char) { let cb = CALLBACK.load(Ordering::SeqCst); panic::wrap(|| unsafe { let cb: TracingCb = std::mem::transmute(cb); let msg = std::ffi::CStr::from_ptr(msg).to_str().unwrap(); cb(Binding::from_raw(level), msg); }); } vendor/git2/src/odb.rs0000664000175000017500000005325614160055207015430 0ustar mwhudsonmwhudsonuse std::io; use std::marker; use std::mem::MaybeUninit; use std::ptr; use std::slice; use std::ffi::CString; use libc::{c_char, c_int, c_void, size_t}; use crate::panic; use crate::util::Binding; use crate::{raw, Error, IndexerProgress, Mempack, Object, ObjectType, Oid, Progress}; /// A structure to represent a git object database pub struct Odb<'repo> { raw: *mut raw::git_odb, _marker: marker::PhantomData>, } impl<'repo> Binding for Odb<'repo> { type Raw = *mut raw::git_odb; unsafe fn from_raw(raw: *mut raw::git_odb) -> Odb<'repo> { Odb { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_odb { self.raw } } impl<'repo> Drop for Odb<'repo> { fn drop(&mut self) { unsafe { raw::git_odb_free(self.raw) } } } impl<'repo> Odb<'repo> { /// Creates an object database without any backends. pub fn new<'a>() -> Result, Error> { crate::init(); unsafe { let mut out = ptr::null_mut(); try_call!(raw::git_odb_new(&mut out)); Ok(Odb::from_raw(out)) } } /// Create object database reading stream. /// /// Note that most backends do not support streaming reads because they store their objects as compressed/delta'ed blobs. /// If the backend does not support streaming reads, use the `read` method instead. pub fn reader(&self, oid: Oid) -> Result<(OdbReader<'_>, usize, ObjectType), Error> { let mut out = ptr::null_mut(); let mut size = 0usize; let mut otype: raw::git_object_t = ObjectType::Any.raw(); unsafe { try_call!(raw::git_odb_open_rstream( &mut out, &mut size, &mut otype, self.raw, oid.raw() )); Ok(( OdbReader::from_raw(out), size, ObjectType::from_raw(otype).unwrap(), )) } } /// Create object database writing stream. /// /// The type and final length of the object must be specified when opening the stream. 
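///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation): it assumes an
/// existing repository at `"."` and unwraps errors for brevity.
///
/// ```no_run
/// use std::io::Write;
///
/// let repo = git2::Repository::open(".").unwrap();
/// let db = repo.odb().unwrap();
/// let data = b"example blob contents"; // illustrative payload
/// let mut ws = db.writer(data.len(), git2::ObjectType::Blob).unwrap();
/// ws.write_all(data).unwrap();
/// let oid = ws.finalize().unwrap();
/// println!("wrote object {}", oid);
/// ```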
/// If the backend does not support streaming writes, use the `write` method instead. pub fn writer(&self, size: usize, obj_type: ObjectType) -> Result, Error> { let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_odb_open_wstream( &mut out, self.raw, size as raw::git_object_size_t, obj_type.raw() )); Ok(OdbWriter::from_raw(out)) } } /// Iterate over all objects in the object database.s pub fn foreach(&self, mut callback: C) -> Result<(), Error> where C: FnMut(&Oid) -> bool, { unsafe { let mut data = ForeachCbData { callback: &mut callback, }; let cb: raw::git_odb_foreach_cb = Some(foreach_cb); try_call!(raw::git_odb_foreach( self.raw(), cb, &mut data as *mut _ as *mut _ )); Ok(()) } } /// Read an object from the database. pub fn read(&self, oid: Oid) -> Result, Error> { let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_odb_read(&mut out, self.raw, oid.raw())); Ok(OdbObject::from_raw(out)) } } /// Reads the header of an object from the database /// without reading the full content. pub fn read_header(&self, oid: Oid) -> Result<(usize, ObjectType), Error> { let mut size: usize = 0; let mut kind_id: i32 = ObjectType::Any.raw(); unsafe { try_call!(raw::git_odb_read_header( &mut size as *mut size_t, &mut kind_id as *mut raw::git_object_t, self.raw, oid.raw() )); Ok((size, ObjectType::from_raw(kind_id).unwrap())) } } /// Write an object to the database. pub fn write(&self, kind: ObjectType, data: &[u8]) -> Result { unsafe { let mut out = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; try_call!(raw::git_odb_write( &mut out, self.raw, data.as_ptr() as *const c_void, data.len(), kind.raw() )); Ok(Oid::from_raw(&mut out)) } } /// Create stream for writing a pack file to the ODB pub fn packwriter(&self) -> Result, Error> { let mut out = ptr::null_mut(); let progress = MaybeUninit::uninit(); let progress_cb: raw::git_indexer_progress_cb = Some(write_pack_progress_cb); let progress_payload = Box::new(OdbPackwriterCb { cb: None }); let progress_payload_ptr = Box::into_raw(progress_payload); unsafe { try_call!(raw::git_odb_write_pack( &mut out, self.raw, progress_cb, progress_payload_ptr as *mut c_void )); } Ok(OdbPackwriter { raw: out, progress, progress_payload_ptr, }) } /// Checks if the object database has an object. pub fn exists(&self, oid: Oid) -> bool { unsafe { raw::git_odb_exists(self.raw, oid.raw()) != 0 } } /// Potentially finds an object that starts with the given prefix. pub fn exists_prefix(&self, short_oid: Oid, len: usize) -> Result { unsafe { let mut out = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; try_call!(raw::git_odb_exists_prefix( &mut out, self.raw, short_oid.raw(), len )); Ok(Oid::from_raw(&out)) } } /// Refresh the object database. /// This should never be needed, and is /// provided purely for convenience. /// The object database will automatically /// refresh when an object is not found when /// requested. pub fn refresh(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_odb_refresh(self.raw)); Ok(()) } } /// Adds an alternate disk backend to the object database. pub fn add_disk_alternate(&self, path: &str) -> Result<(), Error> { unsafe { let path = CString::new(path)?; try_call!(raw::git_odb_add_disk_alternate(self.raw, path)); Ok(()) } } /// Create a new mempack backend, and add it to this odb with the given /// priority. Higher values give the backend higher precedence. The default /// loose and pack backends have priorities 1 and 2 respectively (hard-coded /// in libgit2). A reference to the new mempack backend is returned on /// success. 
The lifetime of the backend must be contained within the /// lifetime of this odb, since deletion of the odb will also result in /// deletion of the mempack backend. /// /// Here is an example that fails to compile because it tries to hold the /// mempack reference beyond the odb's lifetime: /// /// ```compile_fail /// use git2::Odb; /// let mempack = { /// let odb = Odb::new().unwrap(); /// odb.add_new_mempack_backend(1000).unwrap() /// }; /// ``` pub fn add_new_mempack_backend<'odb>( &'odb self, priority: i32, ) -> Result, Error> { unsafe { let mut mempack = ptr::null_mut(); // The mempack backend object in libgit2 is only ever freed by an // odb that has the backend in its list. So to avoid potentially // leaking the mempack backend, this API ensures that the backend // is added to the odb before returning it. The lifetime of the // mempack is also bound to the lifetime of the odb, so that users // can't end up with a dangling reference to a mempack object that // was actually freed when the odb was destroyed. try_call!(raw::git_mempack_new(&mut mempack)); try_call!(raw::git_odb_add_backend( self.raw, mempack, priority as c_int )); Ok(Mempack::from_raw(mempack)) } } } /// An object from the Object Database. pub struct OdbObject<'a> { raw: *mut raw::git_odb_object, _marker: marker::PhantomData>, } impl<'a> Binding for OdbObject<'a> { type Raw = *mut raw::git_odb_object; unsafe fn from_raw(raw: *mut raw::git_odb_object) -> OdbObject<'a> { OdbObject { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_odb_object { self.raw } } impl<'a> Drop for OdbObject<'a> { fn drop(&mut self) { unsafe { raw::git_odb_object_free(self.raw) } } } impl<'a> OdbObject<'a> { /// Get the object type. pub fn kind(&self) -> ObjectType { unsafe { ObjectType::from_raw(raw::git_odb_object_type(self.raw)).unwrap() } } /// Get the object size. pub fn len(&self) -> usize { unsafe { raw::git_odb_object_size(self.raw) } } /// Get the object data. pub fn data(&self) -> &[u8] { unsafe { let size = self.len(); let ptr: *const u8 = raw::git_odb_object_data(self.raw) as *const u8; let buffer = slice::from_raw_parts(ptr, size); return buffer; } } /// Get the object id. pub fn id(&self) -> Oid { unsafe { Oid::from_raw(raw::git_odb_object_id(self.raw)) } } } /// A structure to represent a git ODB rstream pub struct OdbReader<'repo> { raw: *mut raw::git_odb_stream, _marker: marker::PhantomData>, } impl<'repo> Binding for OdbReader<'repo> { type Raw = *mut raw::git_odb_stream; unsafe fn from_raw(raw: *mut raw::git_odb_stream) -> OdbReader<'repo> { OdbReader { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_odb_stream { self.raw } } impl<'repo> Drop for OdbReader<'repo> { fn drop(&mut self) { unsafe { raw::git_odb_stream_free(self.raw) } } } impl<'repo> io::Read for OdbReader<'repo> { fn read(&mut self, buf: &mut [u8]) -> io::Result { unsafe { let ptr = buf.as_ptr() as *mut c_char; let len = buf.len(); let res = raw::git_odb_stream_read(self.raw, ptr, len); if res < 0 { Err(io::Error::new(io::ErrorKind::Other, "Read error")) } else { Ok(len) } } } } /// A structure to represent a git ODB wstream pub struct OdbWriter<'repo> { raw: *mut raw::git_odb_stream, _marker: marker::PhantomData>, } impl<'repo> OdbWriter<'repo> { /// Finish writing to an ODB stream /// /// This method can be used to finalize writing object to the database and get an identifier. /// The object will take its final name and will be available to the odb. 
/// This method will fail if the total number of received bytes differs from the size declared with odb_writer() /// Attepting write after finishing will be ignored. pub fn finalize(&mut self) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_odb_stream_finalize_write(&mut raw, self.raw)); Ok(Binding::from_raw(&raw as *const _)) } } } impl<'repo> Binding for OdbWriter<'repo> { type Raw = *mut raw::git_odb_stream; unsafe fn from_raw(raw: *mut raw::git_odb_stream) -> OdbWriter<'repo> { OdbWriter { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_odb_stream { self.raw } } impl<'repo> Drop for OdbWriter<'repo> { fn drop(&mut self) { unsafe { raw::git_odb_stream_free(self.raw) } } } impl<'repo> io::Write for OdbWriter<'repo> { fn write(&mut self, buf: &[u8]) -> io::Result { unsafe { let ptr = buf.as_ptr() as *const c_char; let len = buf.len(); let res = raw::git_odb_stream_write(self.raw, ptr, len); if res < 0 { Err(io::Error::new(io::ErrorKind::Other, "Write error")) } else { Ok(buf.len()) } } } fn flush(&mut self) -> io::Result<()> { Ok(()) } } struct OdbPackwriterCb<'repo> { cb: Option>>, } /// A stream to write a packfile to the ODB pub struct OdbPackwriter<'repo> { raw: *mut raw::git_odb_writepack, progress: MaybeUninit, progress_payload_ptr: *mut OdbPackwriterCb<'repo>, } impl<'repo> OdbPackwriter<'repo> { /// Finish writing the packfile pub fn commit(&mut self) -> Result { unsafe { let writepack = &*self.raw; let res = match writepack.commit { Some(commit) => commit(self.raw, self.progress.as_mut_ptr()), None => -1, }; if res < 0 { Err(Error::last_error(res).unwrap()) } else { Ok(res) } } } /// The callback through which progress is monitored. Be aware that this is /// called inline, so performance may be affected. 
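///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation): `pack_bytes`
/// stands in for packfile data obtained elsewhere, and errors are unwrapped
/// for brevity.
///
/// ```no_run
/// use std::io::Write;
///
/// let repo = git2::Repository::open(".").unwrap();
/// let odb = repo.odb().unwrap();
/// let mut packwriter = odb.packwriter().unwrap();
/// packwriter.progress(|stats| {
///     println!("{}/{} objects", stats.received_objects(), stats.total_objects());
///     true // keep going
/// });
/// let pack_bytes: Vec<u8> = Vec::new(); // placeholder for real pack data
/// packwriter.write(&pack_bytes).unwrap();
/// packwriter.commit().unwrap();
/// ```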
pub fn progress(&mut self, cb: F) -> &mut OdbPackwriter<'repo> where F: FnMut(Progress<'_>) -> bool + 'repo, { let progress_payload = unsafe { &mut *(self.progress_payload_ptr as *mut OdbPackwriterCb<'_>) }; progress_payload.cb = Some(Box::new(cb) as Box>); self } } impl<'repo> io::Write for OdbPackwriter<'repo> { fn write(&mut self, buf: &[u8]) -> io::Result { unsafe { let ptr = buf.as_ptr() as *mut c_void; let len = buf.len(); let writepack = &*self.raw; let res = match writepack.append { Some(append) => append(self.raw, ptr, len, self.progress.as_mut_ptr()), None => -1, }; if res < 0 { Err(io::Error::new(io::ErrorKind::Other, "Write error")) } else { Ok(buf.len()) } } } fn flush(&mut self) -> io::Result<()> { Ok(()) } } impl<'repo> Drop for OdbPackwriter<'repo> { fn drop(&mut self) { unsafe { let writepack = &*self.raw; match writepack.free { Some(free) => free(self.raw), None => (), }; Box::from_raw(self.progress_payload_ptr); } } } pub type ForeachCb<'a> = dyn FnMut(&Oid) -> bool + 'a; struct ForeachCbData<'a> { pub callback: &'a mut ForeachCb<'a>, } extern "C" fn foreach_cb(id: *const raw::git_oid, payload: *mut c_void) -> c_int { panic::wrap(|| unsafe { let data = &mut *(payload as *mut ForeachCbData<'_>); let res = { let callback = &mut data.callback; callback(&Binding::from_raw(id)) }; if res { 0 } else { 1 } }) .unwrap_or(1) } extern "C" fn write_pack_progress_cb( stats: *const raw::git_indexer_progress, payload: *mut c_void, ) -> c_int { let ok = panic::wrap(|| unsafe { let payload = &mut *(payload as *mut OdbPackwriterCb<'_>); let callback = match payload.cb { Some(ref mut cb) => cb, None => return true, }; let progress: Progress<'_> = Binding::from_raw(stats); callback(progress) }); if ok == Some(true) { 0 } else { -1 } } #[cfg(test)] mod tests { use crate::{Buf, ObjectType, Oid, Repository}; use std::io::prelude::*; use tempfile::TempDir; #[test] fn read() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let id = repo.blob(&dat).unwrap(); let db = repo.odb().unwrap(); let obj = db.read(id).unwrap(); let data = obj.data(); let size = obj.len(); assert_eq!(size, 5); assert_eq!(dat, data); assert_eq!(id, obj.id()); } #[test] fn read_header() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let id = repo.blob(&dat).unwrap(); let db = repo.odb().unwrap(); let (size, kind) = db.read_header(id).unwrap(); assert_eq!(size, 5); assert_eq!(kind, ObjectType::Blob); } #[test] fn write() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let db = repo.odb().unwrap(); let id = db.write(ObjectType::Blob, &dat).unwrap(); let blob = repo.find_blob(id).unwrap(); assert_eq!(blob.content(), dat); } #[test] fn writer() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let db = repo.odb().unwrap(); let mut ws = db.writer(dat.len(), ObjectType::Blob).unwrap(); let wl = ws.write(&dat[0..3]).unwrap(); assert_eq!(wl, 3); let wl = ws.write(&dat[3..5]).unwrap(); assert_eq!(wl, 2); let id = ws.finalize().unwrap(); let blob = repo.find_blob(id).unwrap(); assert_eq!(blob.content(), dat); } #[test] fn exists() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let db = repo.odb().unwrap(); let id = db.write(ObjectType::Blob, &dat).unwrap(); assert!(db.exists(id)); } #[test] fn exists_prefix() { let td = 
TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let dat = [4, 3, 5, 6, 9]; let db = repo.odb().unwrap(); let id = db.write(ObjectType::Blob, &dat).unwrap(); let id_prefix_str = &id.to_string()[0..10]; let id_prefix = Oid::from_str(id_prefix_str).unwrap(); let found_oid = db.exists_prefix(id_prefix, 10).unwrap(); assert_eq!(found_oid, id); } #[test] fn packwriter() { let (_td, repo_source) = crate::test::repo_init(); let (_td, repo_target) = crate::test::repo_init(); let mut builder = t!(repo_source.packbuilder()); let mut buf = Buf::new(); let (commit_source_id, _tree) = crate::test::commit(&repo_source); t!(builder.insert_object(commit_source_id, None)); t!(builder.write_buf(&mut buf)); let db = repo_target.odb().unwrap(); let mut packwriter = db.packwriter().unwrap(); packwriter.write(&buf).unwrap(); packwriter.commit().unwrap(); let commit_target = repo_target.find_commit(commit_source_id).unwrap(); assert_eq!(commit_target.id(), commit_source_id); } #[test] fn packwriter_progress() { let mut progress_called = false; { let (_td, repo_source) = crate::test::repo_init(); let (_td, repo_target) = crate::test::repo_init(); let mut builder = t!(repo_source.packbuilder()); let mut buf = Buf::new(); let (commit_source_id, _tree) = crate::test::commit(&repo_source); t!(builder.insert_object(commit_source_id, None)); t!(builder.write_buf(&mut buf)); let db = repo_target.odb().unwrap(); let mut packwriter = db.packwriter().unwrap(); packwriter.progress(|_| { progress_called = true; true }); packwriter.write(&buf).unwrap(); packwriter.commit().unwrap(); } assert_eq!(progress_called, true); } #[test] fn write_with_mempack() { use crate::{Buf, ResetType}; use std::io::Write; use std::path::Path; // Create a repo, add a mempack backend let (_td, repo) = crate::test::repo_init(); let odb = repo.odb().unwrap(); let mempack = odb.add_new_mempack_backend(1000).unwrap(); // Sanity check that foo doesn't exist initially let foo_file = Path::new(repo.workdir().unwrap()).join("foo"); assert!(!foo_file.exists()); // Make a commit that adds foo. This writes new stuff into the mempack // backend. let (oid1, _id) = crate::test::commit(&repo); let commit1 = repo.find_commit(oid1).unwrap(); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); assert!(foo_file.exists()); // Dump the mempack modifications into a buf, and reset it. This "erases" // commit-related objects from the repository. Ensure the commit appears // to have become invalid, by checking for failure in `reset --hard`. let mut buf = Buf::new(); mempack.dump(&repo, &mut buf).unwrap(); mempack.reset().unwrap(); assert!(repo .reset(commit1.as_object(), ResetType::Hard, None) .is_err()); // Write the buf into a packfile in the repo. This brings back the // missing objects, and we verify everything is good again. 
let mut packwriter = odb.packwriter().unwrap(); packwriter.write(&buf).unwrap(); packwriter.commit().unwrap(); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); assert!(foo_file.exists()); } } vendor/git2/src/revert.rs0000664000175000017500000000417414160055207016166 0ustar mwhudsonmwhudsonuse std::mem; use crate::build::CheckoutBuilder; use crate::merge::MergeOptions; use crate::raw; use std::ptr; /// Options to specify when reverting pub struct RevertOptions<'cb> { mainline: u32, checkout_builder: Option>, merge_opts: Option, } impl<'cb> RevertOptions<'cb> { /// Creates a default set of revert options pub fn new() -> RevertOptions<'cb> { RevertOptions { mainline: 0, checkout_builder: None, merge_opts: None, } } /// Set the mainline value /// /// For merge commits, the "mainline" is treated as the parent. pub fn mainline(&mut self, mainline: u32) -> &mut Self { self.mainline = mainline; self } /// Set the checkout builder pub fn checkout_builder(&mut self, cb: CheckoutBuilder<'cb>) -> &mut Self { self.checkout_builder = Some(cb); self } /// Set the merge options pub fn merge_opts(&mut self, merge_opts: MergeOptions) -> &mut Self { self.merge_opts = Some(merge_opts); self } /// Obtain the raw struct pub fn raw(&mut self) -> raw::git_revert_options { unsafe { let mut checkout_opts: raw::git_checkout_options = mem::zeroed(); raw::git_checkout_init_options(&mut checkout_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION); if let Some(ref mut cb) = self.checkout_builder { cb.configure(&mut checkout_opts); } let mut merge_opts: raw::git_merge_options = mem::zeroed(); raw::git_merge_init_options(&mut merge_opts, raw::GIT_MERGE_OPTIONS_VERSION); if let Some(ref opts) = self.merge_opts { ptr::copy(opts.raw(), &mut merge_opts, 1); } let mut revert_opts: raw::git_revert_options = mem::zeroed(); raw::git_revert_options_init(&mut revert_opts, raw::GIT_REVERT_OPTIONS_VERSION); revert_opts.mainline = self.mainline; revert_opts.checkout_opts = checkout_opts; revert_opts.merge_opts = merge_opts; revert_opts } } } vendor/git2/src/stash.rs0000664000175000017500000001673414160055207016006 0ustar mwhudsonmwhudsonuse crate::build::CheckoutBuilder; use crate::util::Binding; use crate::{panic, raw, Oid, StashApplyProgress}; use libc::{c_char, c_int, c_void, size_t}; use std::ffi::CStr; use std::mem; /// Stash application progress notification function. /// /// Return `true` to continue processing, or `false` to /// abort the stash application. pub type StashApplyProgressCb<'a> = dyn FnMut(StashApplyProgress) -> bool + 'a; /// This is a callback function you can provide to iterate over all the /// stashed states that will be invoked per entry. pub type StashCb<'a> = dyn FnMut(usize, &str, &Oid) -> bool + 'a; #[allow(unused)] /// Stash application options structure pub struct StashApplyOptions<'cb> { progress: Option>>, checkout_options: Option>, raw_opts: raw::git_stash_apply_options, } impl<'cb> Default for StashApplyOptions<'cb> { fn default() -> Self { Self::new() } } impl<'cb> StashApplyOptions<'cb> { /// Creates a default set of merge options. 
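///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation): it assumes an
/// already-opened repository with at least one stash entry and unwraps errors
/// for brevity.
///
/// ```no_run
/// let mut repo = git2::Repository::open(".").unwrap(); // "." is illustrative
/// let mut options = git2::StashApplyOptions::new();
/// options.progress_cb(|progress| {
///     println!("{:?}", progress);
///     true // continue applying
/// });
/// repo.stash_apply(0, Some(&mut options)).unwrap();
/// ```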
pub fn new() -> StashApplyOptions<'cb> { let mut opts = StashApplyOptions { progress: None, checkout_options: None, raw_opts: unsafe { mem::zeroed() }, }; assert_eq!( unsafe { raw::git_stash_apply_init_options(&mut opts.raw_opts, 1) }, 0 ); opts } /// Set stash application flag to GIT_STASH_APPLY_REINSTATE_INDEX pub fn reinstantiate_index(&mut self) -> &mut StashApplyOptions<'cb> { self.raw_opts.flags = raw::GIT_STASH_APPLY_REINSTATE_INDEX as u32; self } /// Options to use when writing files to the working directory pub fn checkout_options(&mut self, opts: CheckoutBuilder<'cb>) -> &mut StashApplyOptions<'cb> { self.checkout_options = Some(opts); self } /// Optional callback to notify the consumer of application progress. /// /// Return `true` to continue processing, or `false` to /// abort the stash application. pub fn progress_cb(&mut self, callback: C) -> &mut StashApplyOptions<'cb> where C: FnMut(StashApplyProgress) -> bool + 'cb, { self.progress = Some(Box::new(callback) as Box>); self.raw_opts.progress_cb = Some(stash_apply_progress_cb); self.raw_opts.progress_payload = self as *mut _ as *mut _; self } /// Pointer to a raw git_stash_apply_options pub fn raw(&mut self) -> &raw::git_stash_apply_options { unsafe { if let Some(opts) = self.checkout_options.as_mut() { opts.configure(&mut self.raw_opts.checkout_options); } } &self.raw_opts } } #[allow(unused)] pub struct StashCbData<'a> { pub callback: &'a mut StashCb<'a>, } #[allow(unused)] pub extern "C" fn stash_cb( index: size_t, message: *const c_char, stash_id: *const raw::git_oid, payload: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let mut data = &mut *(payload as *mut StashCbData<'_>); let res = { let mut callback = &mut data.callback; callback( index, CStr::from_ptr(message).to_str().unwrap(), &Binding::from_raw(stash_id), ) }; if res { 0 } else { 1 } }) .unwrap_or(1) } fn convert_progress(progress: raw::git_stash_apply_progress_t) -> StashApplyProgress { match progress { raw::GIT_STASH_APPLY_PROGRESS_NONE => StashApplyProgress::None, raw::GIT_STASH_APPLY_PROGRESS_LOADING_STASH => StashApplyProgress::LoadingStash, raw::GIT_STASH_APPLY_PROGRESS_ANALYZE_INDEX => StashApplyProgress::AnalyzeIndex, raw::GIT_STASH_APPLY_PROGRESS_ANALYZE_MODIFIED => StashApplyProgress::AnalyzeModified, raw::GIT_STASH_APPLY_PROGRESS_ANALYZE_UNTRACKED => StashApplyProgress::AnalyzeUntracked, raw::GIT_STASH_APPLY_PROGRESS_CHECKOUT_UNTRACKED => StashApplyProgress::CheckoutUntracked, raw::GIT_STASH_APPLY_PROGRESS_CHECKOUT_MODIFIED => StashApplyProgress::CheckoutModified, raw::GIT_STASH_APPLY_PROGRESS_DONE => StashApplyProgress::Done, _ => StashApplyProgress::None, } } #[allow(unused)] extern "C" fn stash_apply_progress_cb( progress: raw::git_stash_apply_progress_t, payload: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let mut options = &mut *(payload as *mut StashApplyOptions<'_>); let res = { let mut callback = options.progress.as_mut().unwrap(); callback(convert_progress(progress)) }; if res { 0 } else { -1 } }) .unwrap_or(-1) } #[cfg(test)] mod tests { use crate::stash::StashApplyOptions; use crate::test::repo_init; use crate::{Repository, StashFlags, Status}; use std::fs; use std::io::Write; use std::path::Path; fn make_stash(next: C) where C: FnOnce(&mut Repository), { let (_td, mut repo) = repo_init(); let signature = repo.signature().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_b.txt"); println!("using path {:?}", p); fs::File::create(&p) .unwrap() .write("data".as_bytes()) .unwrap(); let rel_p = Path::new("file_b.txt"); 
assert!(repo.status_file(&rel_p).unwrap() == Status::WT_NEW); repo.stash_save(&signature, "msg1", Some(StashFlags::INCLUDE_UNTRACKED)) .unwrap(); assert!(repo.status_file(&rel_p).is_err()); let mut count = 0; repo.stash_foreach(|index, name, _oid| { count += 1; assert!(index == 0); assert!(name == "On main: msg1"); true }) .unwrap(); assert!(count == 1); next(&mut repo); } fn count_stash(repo: &mut Repository) -> usize { let mut count = 0; repo.stash_foreach(|_, _, _| { count += 1; true }) .unwrap(); count } #[test] fn smoke_stash_save_drop() { make_stash(|repo| { repo.stash_drop(0).unwrap(); assert!(count_stash(repo) == 0) }) } #[test] fn smoke_stash_save_pop() { make_stash(|repo| { repo.stash_pop(0, None).unwrap(); assert!(count_stash(repo) == 0) }) } #[test] fn smoke_stash_save_apply() { make_stash(|repo| { let mut options = StashApplyOptions::new(); options.progress_cb(|progress| { println!("{:?}", progress); true }); repo.stash_apply(0, Some(&mut options)).unwrap(); assert!(count_stash(repo) == 1) }) } #[test] fn test_stash_save2_msg_none() { let (_td, mut repo) = repo_init(); let signature = repo.signature().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_b.txt"); fs::File::create(&p) .unwrap() .write("data".as_bytes()) .unwrap(); repo.stash_save2(&signature, None, Some(StashFlags::INCLUDE_UNTRACKED)) .unwrap(); let mut stash_name = String::new(); repo.stash_foreach(|index, name, _oid| { assert_eq!(index, 0); stash_name = name.to_string(); true }) .unwrap(); assert!(stash_name.starts_with("WIP on main:")); } } vendor/git2/src/tagforeach.rs0000664000175000017500000000350514160055207016757 0ustar mwhudsonmwhudson//! git_tag_foreach support //! see original: use crate::{panic, raw, util::Binding, Oid}; use libc::{c_char, c_int}; use raw::git_oid; use std::ffi::{c_void, CStr}; /// boxed callback type pub(crate) type TagForeachCB<'a> = Box bool + 'a>; /// helper type to be able to pass callback to payload pub(crate) struct TagForeachData<'a> { /// callback pub(crate) cb: TagForeachCB<'a>, } /// c callback forwarding to rust callback inside `TagForeachData` /// see original: pub(crate) extern "C" fn tag_foreach_cb( name: *const c_char, oid: *mut git_oid, payload: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let id: Oid = Binding::from_raw(oid as *const _); let name = CStr::from_ptr(name); let name = name.to_bytes(); let payload = &mut *(payload as *mut TagForeachData<'_>); let cb = &mut payload.cb; let res = cb(id, name); if res { 0 } else { -1 } }) .unwrap_or(-1) } #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let id = head.target().unwrap(); assert!(repo.find_tag(id).is_err()); let obj = repo.find_object(id, None).unwrap(); let sig = repo.signature().unwrap(); let tag_id = repo.tag("foo", &obj, &sig, "msg", false).unwrap(); let mut tags = Vec::new(); repo.tag_foreach(|id, name| { tags.push((id, String::from_utf8(name.into()).unwrap())); true }) .unwrap(); assert_eq!(tags[0].0, tag_id); assert_eq!(tags[0].1, "refs/tags/foo"); } } vendor/git2/src/cert.rs0000664000175000017500000000576414160055207015622 0ustar mwhudsonmwhudson//! Certificate types which are passed to `CertificateCheck` in //! `RemoteCallbacks`. use std::marker; use std::mem; use std::slice; use crate::raw; use crate::util::Binding; /// A certificate for a remote connection, viewable as one of `CertHostkey` or /// `CertX509` currently. 
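///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation) of inspecting a
/// certificate, e.g. one handed to a `RemoteCallbacks::certificate_check`
/// callback; the helper name `describe` is made up for illustration.
///
/// ```no_run
/// fn describe(cert: &git2::cert::Cert<'_>) {
///     if let Some(hostkey) = cert.as_hostkey() {
///         if let Some(hash) = hostkey.hash_sha256() {
///             println!("ssh hostkey sha256: {:02x?}", hash);
///         }
///     } else if let Some(x509) = cert.as_x509() {
///         println!("x509 certificate, {} bytes", x509.data().len());
///     }
/// }
/// ```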
pub struct Cert<'a> { raw: *mut raw::git_cert, _marker: marker::PhantomData<&'a raw::git_cert>, } /// Hostkey information taken from libssh2 pub struct CertHostkey<'a> { raw: *mut raw::git_cert_hostkey, _marker: marker::PhantomData<&'a raw::git_cert>, } /// X.509 certificate information pub struct CertX509<'a> { raw: *mut raw::git_cert_x509, _marker: marker::PhantomData<&'a raw::git_cert>, } impl<'a> Cert<'a> { /// Attempt to view this certificate as an SSH hostkey. /// /// Returns `None` if this is not actually an SSH hostkey. pub fn as_hostkey(&self) -> Option<&CertHostkey<'a>> { self.cast(raw::GIT_CERT_HOSTKEY_LIBSSH2) } /// Attempt to view this certificate as an X.509 certificate. /// /// Returns `None` if this is not actually an X.509 certificate. pub fn as_x509(&self) -> Option<&CertX509<'a>> { self.cast(raw::GIT_CERT_X509) } fn cast(&self, kind: raw::git_cert_t) -> Option<&T> { assert_eq!(mem::size_of::>(), mem::size_of::()); unsafe { if kind == (*self.raw).cert_type { Some(&*(self as *const Cert<'a> as *const T)) } else { None } } } } impl<'a> CertHostkey<'a> { /// Returns the md5 hash of the hostkey, if available. pub fn hash_md5(&self) -> Option<&[u8; 16]> { unsafe { if (*self.raw).kind as u32 & raw::GIT_CERT_SSH_MD5 as u32 == 0 { None } else { Some(&(*self.raw).hash_md5) } } } /// Returns the SHA-1 hash of the hostkey, if available. pub fn hash_sha1(&self) -> Option<&[u8; 20]> { unsafe { if (*self.raw).kind as u32 & raw::GIT_CERT_SSH_SHA1 as u32 == 0 { None } else { Some(&(*self.raw).hash_sha1) } } } /// Returns the SHA-256 hash of the hostkey, if available. pub fn hash_sha256(&self) -> Option<&[u8; 32]> { unsafe { if (*self.raw).kind as u32 & raw::GIT_CERT_SSH_SHA256 as u32 == 0 { None } else { Some(&(*self.raw).hash_sha256) } } } } impl<'a> CertX509<'a> { /// Return the X.509 certificate data as a byte slice pub fn data(&self) -> &[u8] { unsafe { slice::from_raw_parts((*self.raw).data as *const u8, (*self.raw).len as usize) } } } impl<'a> Binding for Cert<'a> { type Raw = *mut raw::git_cert; unsafe fn from_raw(raw: *mut raw::git_cert) -> Cert<'a> { Cert { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_cert { self.raw } } vendor/git2/src/treebuilder.rs0000664000175000017500000001527214160055207017166 0ustar mwhudsonmwhudsonuse std::marker; use std::ptr; use libc::{c_int, c_void}; use crate::util::{Binding, IntoCString}; use crate::{panic, raw, tree, Error, Oid, Repository, TreeEntry}; /// Constructor for in-memory trees pub struct TreeBuilder<'repo> { raw: *mut raw::git_treebuilder, _marker: marker::PhantomData<&'repo Repository>, } impl<'repo> TreeBuilder<'repo> { /// Clear all the entries in the builder pub fn clear(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_treebuilder_clear(self.raw)); } Ok(()) } /// Get the number of entries pub fn len(&self) -> usize { unsafe { raw::git_treebuilder_entrycount(self.raw) as usize } } /// Return `true` if there is no entry pub fn is_empty(&self) -> bool { self.len() == 0 } /// Get en entry from the builder from its filename pub fn get

(&self, filename: P) -> Result>, Error> where P: IntoCString, { let filename = filename.into_c_string()?; unsafe { let ret = raw::git_treebuilder_get(self.raw, filename.as_ptr()); if ret.is_null() { Ok(None) } else { Ok(Some(tree::entry_from_raw_const(ret))) } } } /// Add or update an entry in the builder /// /// No attempt is made to ensure that the provided Oid points to /// an object of a reasonable type (or any object at all). /// /// The mode given must be one of 0o040000, 0o100644, 0o100755, 0o120000 or /// 0o160000 currently. pub fn insert( &mut self, filename: P, oid: Oid, filemode: i32, ) -> Result, Error> { let filename = filename.into_c_string()?; let filemode = filemode as raw::git_filemode_t; let mut ret = ptr::null(); unsafe { try_call!(raw::git_treebuilder_insert( &mut ret, self.raw, filename, oid.raw(), filemode )); Ok(tree::entry_from_raw_const(ret)) } } /// Remove an entry from the builder by its filename pub fn remove(&mut self, filename: P) -> Result<(), Error> { let filename = filename.into_c_string()?; unsafe { try_call!(raw::git_treebuilder_remove(self.raw, filename)); } Ok(()) } /// Selectively remove entries from the tree /// /// Values for which the filter returns `true` will be kept. Note /// that this behavior is different from the libgit2 C interface. pub fn filter(&mut self, mut filter: F) -> Result<(), Error> where F: FnMut(&TreeEntry<'_>) -> bool, { let mut cb: &mut FilterCb<'_> = &mut filter; let ptr = &mut cb as *mut _; let cb: raw::git_treebuilder_filter_cb = Some(filter_cb); unsafe { try_call!(raw::git_treebuilder_filter(self.raw, cb, ptr as *mut _)); panic::check(); } Ok(()) } /// Write the contents of the TreeBuilder as a Tree object and /// return its Oid pub fn write(&self) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_treebuilder_write(&mut raw, self.raw())); Ok(Binding::from_raw(&raw as *const _)) } } } type FilterCb<'a> = dyn FnMut(&TreeEntry<'_>) -> bool + 'a; extern "C" fn filter_cb(entry: *const raw::git_tree_entry, payload: *mut c_void) -> c_int { let ret = panic::wrap(|| unsafe { // There's no way to return early from git_treebuilder_filter. 
if panic::panicked() { true } else { let entry = tree::entry_from_raw_const(entry); let payload = payload as *mut &mut FilterCb<'_>; (*payload)(&entry) } }); if ret == Some(false) { 1 } else { 0 } } impl<'repo> Binding for TreeBuilder<'repo> { type Raw = *mut raw::git_treebuilder; unsafe fn from_raw(raw: *mut raw::git_treebuilder) -> TreeBuilder<'repo> { TreeBuilder { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_treebuilder { self.raw } } impl<'repo> Drop for TreeBuilder<'repo> { fn drop(&mut self) { unsafe { raw::git_treebuilder_free(self.raw) } } } #[cfg(test)] mod tests { use crate::ObjectType; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let mut builder = repo.treebuilder(None).unwrap(); assert_eq!(builder.len(), 0); let blob = repo.blob(b"data").unwrap(); { let entry = builder.insert("a", blob, 0o100644).unwrap(); assert_eq!(entry.kind(), Some(ObjectType::Blob)); } builder.insert("b", blob, 0o100644).unwrap(); assert_eq!(builder.len(), 2); builder.remove("a").unwrap(); assert_eq!(builder.len(), 1); assert_eq!(builder.get("b").unwrap().unwrap().id(), blob); builder.clear().unwrap(); assert_eq!(builder.len(), 0); } #[test] fn write() { let (_td, repo) = crate::test::repo_init(); let mut builder = repo.treebuilder(None).unwrap(); let data = repo.blob(b"data").unwrap(); builder.insert("name", data, 0o100644).unwrap(); let tree = builder.write().unwrap(); let tree = repo.find_tree(tree).unwrap(); let entry = tree.get(0).unwrap(); assert_eq!(entry.name(), Some("name")); let blob = entry.to_object(&repo).unwrap(); let blob = blob.as_blob().unwrap(); assert_eq!(blob.content(), b"data"); let builder = repo.treebuilder(Some(&tree)).unwrap(); assert_eq!(builder.len(), 1); } #[test] fn filter() { let (_td, repo) = crate::test::repo_init(); let mut builder = repo.treebuilder(None).unwrap(); let blob = repo.blob(b"data").unwrap(); let tree = { let head = repo.head().unwrap().peel(ObjectType::Commit).unwrap(); let head = head.as_commit().unwrap(); head.tree_id() }; builder.insert("blob", blob, 0o100644).unwrap(); builder.insert("dir", tree, 0o040000).unwrap(); builder.insert("dir2", tree, 0o040000).unwrap(); builder.filter(|_| true).unwrap(); assert_eq!(builder.len(), 3); builder .filter(|e| e.kind().unwrap() != ObjectType::Blob) .unwrap(); assert_eq!(builder.len(), 2); builder.filter(|_| false).unwrap(); assert_eq!(builder.len(), 0); } } vendor/git2/src/repo.rs0000664000175000017500000042577014160055207015635 0ustar mwhudsonmwhudsonuse libc::{c_char, c_int, c_uint, c_void, size_t}; use std::env; use std::ffi::{CStr, CString, OsStr}; use std::iter::IntoIterator; use std::mem; use std::path::Path; use std::ptr; use std::str; use crate::build::{CheckoutBuilder, RepoBuilder}; use crate::diff::{ binary_cb_c, file_cb_c, hunk_cb_c, line_cb_c, BinaryCb, DiffCallbacks, FileCb, HunkCb, LineCb, }; use crate::oid_array::OidArray; use crate::stash::{stash_cb, StashApplyOptions, StashCbData}; use crate::string_array::StringArray; use crate::tagforeach::{tag_foreach_cb, TagForeachCB, TagForeachData}; use crate::util::{self, path_to_repo_path, Binding}; use crate::worktree::{Worktree, WorktreeAddOptions}; use crate::CherrypickOptions; use crate::RevertOptions; use crate::{mailmap::Mailmap, panic}; use crate::{ raw, AttrCheckFlags, Buf, Error, Object, Remote, RepositoryOpenFlags, RepositoryState, Revspec, StashFlags, }; use crate::{ AnnotatedCommit, MergeAnalysis, MergeOptions, MergePreference, SubmoduleIgnore, SubmoduleStatus, SubmoduleUpdate, }; use 
crate::{ApplyLocation, ApplyOptions, Rebase, RebaseOptions}; use crate::{Blame, BlameOptions, Reference, References, ResetType, Signature, Submodule}; use crate::{Blob, BlobWriter, Branch, BranchType, Branches, Commit, Config, Index, Oid, Tree}; use crate::{Describe, IntoCString, Reflog, RepositoryInitMode, RevparseMode}; use crate::{DescribeOptions, Diff, DiffOptions, Odb, PackBuilder, TreeBuilder}; use crate::{Note, Notes, ObjectType, Revwalk, Status, StatusOptions, Statuses, Tag, Transaction}; type MergeheadForeachCb<'a> = dyn FnMut(&Oid) -> bool + 'a; type FetchheadForeachCb<'a> = dyn FnMut(&str, &[u8], &Oid, bool) -> bool + 'a; struct FetchheadForeachCbData<'a> { callback: &'a mut FetchheadForeachCb<'a>, } struct MergeheadForeachCbData<'a> { callback: &'a mut MergeheadForeachCb<'a>, } extern "C" fn mergehead_foreach_cb(oid: *const raw::git_oid, payload: *mut c_void) -> c_int { panic::wrap(|| unsafe { let data = &mut *(payload as *mut MergeheadForeachCbData<'_>); let res = { let callback = &mut data.callback; callback(&Binding::from_raw(oid)) }; if res { 0 } else { 1 } }) .unwrap_or(1) } extern "C" fn fetchhead_foreach_cb( ref_name: *const c_char, remote_url: *const c_char, oid: *const raw::git_oid, is_merge: c_uint, payload: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let data = &mut *(payload as *mut FetchheadForeachCbData<'_>); let res = { let callback = &mut data.callback; assert!(!ref_name.is_null()); assert!(!remote_url.is_null()); assert!(!oid.is_null()); let ref_name = str::from_utf8(CStr::from_ptr(ref_name).to_bytes()).unwrap(); let remote_url = CStr::from_ptr(remote_url).to_bytes(); let oid = Binding::from_raw(oid); let is_merge = is_merge == 1; callback(&ref_name, remote_url, &oid, is_merge) }; if res { 0 } else { 1 } }) .unwrap_or(1) } /// An owned git repository, representing all state associated with the /// underlying filesystem. /// /// This structure corresponds to a `git_repository` in libgit2. Many other /// types in git2-rs are derivative from this structure and are attached to its /// lifetime. /// /// When a repository goes out of scope it is freed in memory but not deleted /// from the filesystem. pub struct Repository { raw: *mut raw::git_repository, } // It is the current belief that a `Repository` can be sent among threads, or // even shared among threads in a mutex. unsafe impl Send for Repository {} /// Options which can be used to configure how a repository is initialized pub struct RepositoryInitOptions { flags: u32, mode: u32, workdir_path: Option, description: Option, template_path: Option, initial_head: Option, origin_url: Option, } impl Repository { /// Attempt to open an already-existing repository at `path`. /// /// The path can point to either a normal or bare repository. pub fn open>(path: P) -> Result { crate::init(); // Normal file path OK (does not need Windows conversion). let path = path.as_ref().into_c_string()?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_open(&mut ret, path)); Ok(Binding::from_raw(ret)) } } /// Attempt to open an already-existing bare repository at `path`. /// /// The path can point to only a bare repository. pub fn open_bare>(path: P) -> Result { crate::init(); // Normal file path OK (does not need Windows conversion). let path = path.as_ref().into_c_string()?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_open_bare(&mut ret, path)); Ok(Binding::from_raw(ret)) } } /// Find and open an existing repository, respecting git environment /// variables. 
This acts like `open_ext` with the /// `REPOSITORY_OPEN_FROM_ENV` flag, but additionally respects `$GIT_DIR`. /// With `$GIT_DIR` unset, this will search for a repository starting in /// the current directory. pub fn open_from_env() -> Result { crate::init(); let mut ret = ptr::null_mut(); let flags = raw::GIT_REPOSITORY_OPEN_FROM_ENV; unsafe { try_call!(raw::git_repository_open_ext( &mut ret, ptr::null(), flags as c_uint, ptr::null() )); Ok(Binding::from_raw(ret)) } } /// Find and open an existing repository, with additional options. /// /// If flags contains REPOSITORY_OPEN_NO_SEARCH, the path must point /// directly to a repository; otherwise, this may point to a subdirectory /// of a repository, and `open_ext` will search up through parent /// directories. /// /// If flags contains REPOSITORY_OPEN_CROSS_FS, the search through parent /// directories will not cross a filesystem boundary (detected when the /// stat st_dev field changes). /// /// If flags contains REPOSITORY_OPEN_BARE, force opening the repository as /// bare even if it isn't, ignoring any working directory, and defer /// loading the repository configuration for performance. /// /// If flags contains REPOSITORY_OPEN_NO_DOTGIT, don't try appending /// `/.git` to `path`. /// /// If flags contains REPOSITORY_OPEN_FROM_ENV, `open_ext` will ignore /// other flags and `ceiling_dirs`, and respect the same environment /// variables git does. Note, however, that `path` overrides `$GIT_DIR`; to /// respect `$GIT_DIR` as well, use `open_from_env`. /// /// ceiling_dirs specifies a list of paths that the search through parent /// directories will stop before entering. Use the functions in std::env /// to construct or manipulate such a path list. pub fn open_ext( path: P, flags: RepositoryOpenFlags, ceiling_dirs: I, ) -> Result where P: AsRef, O: AsRef, I: IntoIterator, { crate::init(); // Normal file path OK (does not need Windows conversion). let path = path.as_ref().into_c_string()?; let ceiling_dirs_os = env::join_paths(ceiling_dirs)?; let ceiling_dirs = ceiling_dirs_os.into_c_string()?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_open_ext( &mut ret, path, flags.bits() as c_uint, ceiling_dirs )); Ok(Binding::from_raw(ret)) } } /// Attempt to open an already-existing repository from a worktree. pub fn open_from_worktree(worktree: &Worktree) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_open_from_worktree( &mut ret, worktree.raw() )); Ok(Binding::from_raw(ret)) } } /// Attempt to open an already-existing repository at or above `path` /// /// This starts at `path` and looks up the filesystem hierarchy /// until it finds a repository. pub fn discover>(path: P) -> Result { // TODO: this diverges significantly from the libgit2 API crate::init(); let buf = Buf::new(); // Normal file path OK (does not need Windows conversion). let path = path.as_ref().into_c_string()?; unsafe { try_call!(raw::git_repository_discover( buf.raw(), path, 1, ptr::null() )); } Repository::open(util::bytes2path(&*buf)) } /// Creates a new repository in the specified folder. /// /// This by default will create any necessary directories to create the /// repository, and it will read any user-specified templates when creating /// the repository. This behavior can be configured through `init_opts`. pub fn init>(path: P) -> Result { Repository::init_opts(path, &RepositoryInitOptions::new()) } /// Creates a new `--bare` repository in the specified folder. 
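///
/// # Example
///
/// A minimal sketch (not part of the upstream documentation); the path is
/// illustrative and is assumed to already exist:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::init_bare("/tmp/example-bare.git").unwrap();
/// assert!(repo.is_bare());
/// ```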
/// /// The folder must exist prior to invoking this function. pub fn init_bare>(path: P) -> Result { Repository::init_opts(path, RepositoryInitOptions::new().bare(true)) } /// Creates a new repository in the specified folder with the given options. /// /// See `RepositoryInitOptions` struct for more information. pub fn init_opts>( path: P, opts: &RepositoryInitOptions, ) -> Result { crate::init(); // Normal file path OK (does not need Windows conversion). let path = path.as_ref().into_c_string()?; let mut ret = ptr::null_mut(); unsafe { let mut opts = opts.raw(); try_call!(raw::git_repository_init_ext(&mut ret, path, &mut opts)); Ok(Binding::from_raw(ret)) } } /// Clone a remote repository. /// /// See the `RepoBuilder` struct for more information. This function will /// delegate to a fresh `RepoBuilder` pub fn clone>(url: &str, into: P) -> Result { crate::init(); RepoBuilder::new().clone(url, into.as_ref()) } /// Clone a remote repository, initialize and update its submodules /// recursively. /// /// This is similar to `git clone --recursive`. pub fn clone_recurse>(url: &str, into: P) -> Result { let repo = Repository::clone(url, into)?; repo.update_submodules()?; Ok(repo) } /// Attempt to wrap an object database as a repository. pub fn from_odb(odb: Odb<'_>) -> Result { crate::init(); let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_wrap_odb(&mut ret, odb.raw())); Ok(Binding::from_raw(ret)) } } /// Update submodules recursively. /// /// Uninitialized submodules will be initialized. fn update_submodules(&self) -> Result<(), Error> { fn add_subrepos(repo: &Repository, list: &mut Vec) -> Result<(), Error> { for mut subm in repo.submodules()? { subm.update(true, None)?; list.push(subm.open()?); } Ok(()) } let mut repos = Vec::new(); add_subrepos(self, &mut repos)?; while let Some(repo) = repos.pop() { add_subrepos(&repo, &mut repos)?; } Ok(()) } /// Execute a rev-parse operation against the `spec` listed. /// /// The resulting revision specification is returned, or an error is /// returned if one occurs. pub fn revparse(&self, spec: &str) -> Result, Error> { let mut raw = raw::git_revspec { from: ptr::null_mut(), to: ptr::null_mut(), flags: 0, }; let spec = CString::new(spec)?; unsafe { try_call!(raw::git_revparse(&mut raw, self.raw, spec)); let to = Binding::from_raw_opt(raw.to); let from = Binding::from_raw_opt(raw.from); let mode = RevparseMode::from_bits_truncate(raw.flags as u32); Ok(Revspec::from_objects(from, to, mode)) } } /// Find a single object, as specified by a revision string. pub fn revparse_single(&self, spec: &str) -> Result, Error> { let spec = CString::new(spec)?; let mut obj = ptr::null_mut(); unsafe { try_call!(raw::git_revparse_single(&mut obj, self.raw, spec)); assert!(!obj.is_null()); Ok(Binding::from_raw(obj)) } } /// Find a single object and intermediate reference by a revision string. /// /// See `man gitrevisions`, or /// for /// information on the syntax accepted. /// /// In some cases (`@{<-n>}` or `@{upstream}`), the expression /// may point to an intermediate reference. When such expressions are being /// passed in, this intermediate reference is returned. 
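    ///
    /// # Example
    ///
    /// A minimal sketch of resolving a revision string; it assumes an
    /// existing repository in the current directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let (object, reference) = repo.revparse_ext("HEAD").unwrap();
    /// println!("object: {}", object.id());
    /// if let Some(r) = reference {
    ///     println!("intermediate reference: {:?}", r.name());
    /// }
    /// ```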
pub fn revparse_ext(&self, spec: &str) -> Result<(Object<'_>, Option>), Error> { let spec = CString::new(spec)?; let mut git_obj = ptr::null_mut(); let mut git_ref = ptr::null_mut(); unsafe { try_call!(raw::git_revparse_ext( &mut git_obj, &mut git_ref, self.raw, spec )); assert!(!git_obj.is_null()); Ok((Binding::from_raw(git_obj), Binding::from_raw_opt(git_ref))) } } /// Tests whether this repository is a bare repository or not. pub fn is_bare(&self) -> bool { unsafe { raw::git_repository_is_bare(self.raw) == 1 } } /// Tests whether this repository is a shallow clone. pub fn is_shallow(&self) -> bool { unsafe { raw::git_repository_is_shallow(self.raw) == 1 } } /// Tests whether this repository is a worktree. pub fn is_worktree(&self) -> bool { unsafe { raw::git_repository_is_worktree(self.raw) == 1 } } /// Tests whether this repository is empty. pub fn is_empty(&self) -> Result { let empty = unsafe { try_call!(raw::git_repository_is_empty(self.raw)) }; Ok(empty == 1) } /// Returns the path to the `.git` folder for normal repositories or the /// repository itself for bare repositories. pub fn path(&self) -> &Path { unsafe { let ptr = raw::git_repository_path(self.raw); util::bytes2path(crate::opt_bytes(self, ptr).unwrap()) } } /// Returns the current state of this repository pub fn state(&self) -> RepositoryState { let state = unsafe { raw::git_repository_state(self.raw) }; macro_rules! check( ($($raw:ident => $real:ident),*) => ( $(if state == raw::$raw as c_int { super::RepositoryState::$real }) else * else { panic!("unknown repository state: {}", state) } ) ); check!( GIT_REPOSITORY_STATE_NONE => Clean, GIT_REPOSITORY_STATE_MERGE => Merge, GIT_REPOSITORY_STATE_REVERT => Revert, GIT_REPOSITORY_STATE_REVERT_SEQUENCE => RevertSequence, GIT_REPOSITORY_STATE_CHERRYPICK => CherryPick, GIT_REPOSITORY_STATE_CHERRYPICK_SEQUENCE => CherryPickSequence, GIT_REPOSITORY_STATE_BISECT => Bisect, GIT_REPOSITORY_STATE_REBASE => Rebase, GIT_REPOSITORY_STATE_REBASE_INTERACTIVE => RebaseInteractive, GIT_REPOSITORY_STATE_REBASE_MERGE => RebaseMerge, GIT_REPOSITORY_STATE_APPLY_MAILBOX => ApplyMailbox, GIT_REPOSITORY_STATE_APPLY_MAILBOX_OR_REBASE => ApplyMailboxOrRebase ) } /// Get the path of the working directory for this repository. /// /// If this repository is bare, then `None` is returned. pub fn workdir(&self) -> Option<&Path> { unsafe { let ptr = raw::git_repository_workdir(self.raw); if ptr.is_null() { None } else { Some(util::bytes2path(CStr::from_ptr(ptr).to_bytes())) } } } /// Set the path to the working directory for this repository. /// /// If `update_link` is true, create/update the gitlink file in the workdir /// and set config "core.worktree" (if workdir is not the parent of the .git /// directory). pub fn set_workdir(&self, path: &Path, update_gitlink: bool) -> Result<(), Error> { // Normal file path OK (does not need Windows conversion). let path = path.into_c_string()?; unsafe { try_call!(raw::git_repository_set_workdir( self.raw(), path, update_gitlink )); } Ok(()) } /// Get the currently active namespace for this repository. /// /// If there is no namespace, or the namespace is not a valid utf8 string, /// `None` is returned. pub fn namespace(&self) -> Option<&str> { self.namespace_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the currently active namespace for this repository as a byte array. /// /// If there is no namespace, `None` is returned. 
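    ///
    /// # Example
    ///
    /// A small sketch of setting and then reading back a namespace; the
    /// namespace name is arbitrary and the repository is assumed to live in
    /// the current directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// repo.set_namespace("my-namespace").unwrap();
    /// if let Some(ns) = repo.namespace_bytes() {
    ///     println!("active namespace: {}", String::from_utf8_lossy(ns));
    /// }
    /// ```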
pub fn namespace_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_repository_get_namespace(self.raw)) } } /// Set the active namespace for this repository. pub fn set_namespace(&self, namespace: &str) -> Result<(), Error> { self.set_namespace_bytes(namespace.as_bytes()) } /// Set the active namespace for this repository as a byte array. pub fn set_namespace_bytes(&self, namespace: &[u8]) -> Result<(), Error> { unsafe { let namespace = CString::new(namespace)?; try_call!(raw::git_repository_set_namespace(self.raw, namespace)); Ok(()) } } /// Remove the active namespace for this repository. pub fn remove_namespace(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_set_namespace(self.raw, ptr::null())); Ok(()) } } /// Retrieves the Git merge message. /// Remember to remove the message when finished. pub fn message(&self) -> Result { unsafe { let buf = Buf::new(); try_call!(raw::git_repository_message(buf.raw(), self.raw)); Ok(str::from_utf8(&buf).unwrap().to_string()) } } /// Remove the Git merge message. pub fn remove_message(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_message_remove(self.raw)); Ok(()) } } /// List all remotes for a given repository pub fn remotes(&self) -> Result { let mut arr = raw::git_strarray { strings: ptr::null_mut(), count: 0, }; unsafe { try_call!(raw::git_remote_list(&mut arr, self.raw)); Ok(Binding::from_raw(arr)) } } /// Get the information for a particular remote pub fn find_remote(&self, name: &str) -> Result, Error> { let mut ret = ptr::null_mut(); let name = CString::new(name)?; unsafe { try_call!(raw::git_remote_lookup(&mut ret, self.raw, name)); Ok(Binding::from_raw(ret)) } } /// Add a remote with the default fetch refspec to the repository's /// configuration. pub fn remote(&self, name: &str, url: &str) -> Result, Error> { let mut ret = ptr::null_mut(); let name = CString::new(name)?; let url = CString::new(url)?; unsafe { try_call!(raw::git_remote_create(&mut ret, self.raw, name, url)); Ok(Binding::from_raw(ret)) } } /// Add a remote with the provided fetch refspec to the repository's /// configuration. pub fn remote_with_fetch( &self, name: &str, url: &str, fetch: &str, ) -> Result, Error> { let mut ret = ptr::null_mut(); let name = CString::new(name)?; let url = CString::new(url)?; let fetch = CString::new(fetch)?; unsafe { try_call!(raw::git_remote_create_with_fetchspec( &mut ret, self.raw, name, url, fetch )); Ok(Binding::from_raw(ret)) } } /// Create an anonymous remote /// /// Create a remote with the given url and refspec in memory. You can use /// this when you have a URL instead of a remote's name. Note that anonymous /// remotes cannot be converted to persisted remotes. pub fn remote_anonymous(&self, url: &str) -> Result, Error> { let mut ret = ptr::null_mut(); let url = CString::new(url)?; unsafe { try_call!(raw::git_remote_create_anonymous(&mut ret, self.raw, url)); Ok(Binding::from_raw(ret)) } } /// Give a remote a new name /// /// All remote-tracking branches and configuration settings for the remote /// are updated. /// /// A temporary in-memory remote cannot be given a name with this method. /// /// No loaded instances of the remote with the old name will change their /// name or their list of refspecs. /// /// The returned array of strings is a list of the non-default refspecs /// which cannot be renamed and are returned for further processing by the /// caller. 
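    ///
    /// # Example
    ///
    /// A sketch of renaming a remote and inspecting the refspecs that could
    /// not be renamed automatically; it assumes a remote named `origin`
    /// already exists.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let problems = repo.remote_rename("origin", "upstream").unwrap();
    /// for spec in problems.iter() {
    ///     println!("refspec left for the caller to handle: {:?}", spec);
    /// }
    /// ```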
pub fn remote_rename(&self, name: &str, new_name: &str) -> Result { let name = CString::new(name)?; let new_name = CString::new(new_name)?; let mut problems = raw::git_strarray { count: 0, strings: ptr::null_mut(), }; unsafe { try_call!(raw::git_remote_rename( &mut problems, self.raw, name, new_name )); Ok(Binding::from_raw(problems)) } } /// Delete an existing persisted remote. /// /// All remote-tracking branches and configuration settings for the remote /// will be removed. pub fn remote_delete(&self, name: &str) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_remote_delete(self.raw, name)); } Ok(()) } /// Add a fetch refspec to the remote's configuration /// /// Add the given refspec to the fetch list in the configuration. No loaded /// remote instances will be affected. pub fn remote_add_fetch(&self, name: &str, spec: &str) -> Result<(), Error> { let name = CString::new(name)?; let spec = CString::new(spec)?; unsafe { try_call!(raw::git_remote_add_fetch(self.raw, name, spec)); } Ok(()) } /// Add a push refspec to the remote's configuration. /// /// Add the given refspec to the push list in the configuration. No /// loaded remote instances will be affected. pub fn remote_add_push(&self, name: &str, spec: &str) -> Result<(), Error> { let name = CString::new(name)?; let spec = CString::new(spec)?; unsafe { try_call!(raw::git_remote_add_push(self.raw, name, spec)); } Ok(()) } /// Set the remote's url in the configuration /// /// Remote objects already in memory will not be affected. This assumes /// the common case of a single-url remote and will otherwise return an /// error. pub fn remote_set_url(&self, name: &str, url: &str) -> Result<(), Error> { let name = CString::new(name)?; let url = CString::new(url)?; unsafe { try_call!(raw::git_remote_set_url(self.raw, name, url)); } Ok(()) } /// Set the remote's url for pushing in the configuration. /// /// Remote objects already in memory will not be affected. This assumes /// the common case of a single-url remote and will otherwise return an /// error. /// /// `None` indicates that it should be cleared. pub fn remote_set_pushurl(&self, name: &str, pushurl: Option<&str>) -> Result<(), Error> { let name = CString::new(name)?; let pushurl = crate::opt_cstr(pushurl)?; unsafe { try_call!(raw::git_remote_set_pushurl(self.raw, name, pushurl)); } Ok(()) } /// Sets the current head to the specified object and optionally resets /// the index and working tree to match. /// /// A soft reset means the head will be moved to the commit. /// /// A mixed reset will trigger a soft reset, plus the index will be /// replaced with the content of the commit tree. /// /// A hard reset will trigger a mixed reset and the working directory will /// be replaced with the content of the index. (Untracked and ignored files /// will be left alone, however.) /// /// The `target` is a commit-ish to which the head should be moved to. The /// object can either be a commit or a tag, but tags must be dereferenceable /// to a commit. /// /// The `checkout` options will only be used for a hard reset. 
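    ///
    /// # Example
    ///
    /// A sketch of a hard reset back to the current HEAD commit, discarding
    /// index and working-tree changes; it assumes a repository in the
    /// current directory.
    ///
    /// ```no_run
    /// use git2::{Repository, ResetType};
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let head = repo.revparse_single("HEAD").unwrap();
    /// repo.reset(&head, ResetType::Hard, None).unwrap();
    /// ```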
pub fn reset( &self, target: &Object<'_>, kind: ResetType, checkout: Option<&mut CheckoutBuilder<'_>>, ) -> Result<(), Error> { unsafe { let mut opts: raw::git_checkout_options = mem::zeroed(); try_call!(raw::git_checkout_init_options( &mut opts, raw::GIT_CHECKOUT_OPTIONS_VERSION )); let opts = checkout.map(|c| { c.configure(&mut opts); &mut opts }); try_call!(raw::git_reset(self.raw, target.raw(), kind, opts)); } Ok(()) } /// Updates some entries in the index from the target commit tree. /// /// The scope of the updated entries is determined by the paths being /// in the iterator provided. /// /// Passing a `None` target will result in removing entries in the index /// matching the provided pathspecs. pub fn reset_default(&self, target: Option<&Object<'_>>, paths: I) -> Result<(), Error> where T: IntoCString, I: IntoIterator, { let (_a, _b, mut arr) = crate::util::iter2cstrs_paths(paths)?; let target = target.map(|t| t.raw()); unsafe { try_call!(raw::git_reset_default(self.raw, target, &mut arr)); } Ok(()) } /// Retrieve and resolve the reference pointed at by HEAD. pub fn head(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_repository_head(&mut ret, self.raw)); Ok(Binding::from_raw(ret)) } } /// Make the repository HEAD point to the specified reference. /// /// If the provided reference points to a tree or a blob, the HEAD is /// unaltered and an error is returned. /// /// If the provided reference points to a branch, the HEAD will point to /// that branch, staying attached, or become attached if it isn't yet. If /// the branch doesn't exist yet, no error will be returned. The HEAD will /// then be attached to an unborn branch. /// /// Otherwise, the HEAD will be detached and will directly point to the /// commit. pub fn set_head(&self, refname: &str) -> Result<(), Error> { let refname = CString::new(refname)?; unsafe { try_call!(raw::git_repository_set_head(self.raw, refname)); } Ok(()) } /// Determines whether the repository HEAD is detached. pub fn head_detached(&self) -> Result { unsafe { let value = raw::git_repository_head_detached(self.raw); match value { 0 => Ok(false), 1 => Ok(true), _ => Err(Error::last_error(value).unwrap()), } } } /// Make the repository HEAD directly point to the commit. /// /// If the provided committish cannot be found in the repository, the HEAD /// is unaltered and an error is returned. /// /// If the provided commitish cannot be peeled into a commit, the HEAD is /// unaltered and an error is returned. /// /// Otherwise, the HEAD will eventually be detached and will directly point /// to the peeled commit. pub fn set_head_detached(&self, commitish: Oid) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_set_head_detached( self.raw, commitish.raw() )); } Ok(()) } /// Make the repository HEAD directly point to the commit. /// /// If the provided committish cannot be found in the repository, the HEAD /// is unaltered and an error is returned. /// If the provided commitish cannot be peeled into a commit, the HEAD is /// unaltered and an error is returned. /// Otherwise, the HEAD will eventually be detached and will directly point /// to the peeled commit. 
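    ///
    /// # Example
    ///
    /// A sketch of detaching HEAD at the tip of a branch; the branch name
    /// `refs/heads/main` is an assumption about the repository's layout.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let oid = repo.refname_to_id("refs/heads/main").unwrap();
    /// let annotated = repo.find_annotated_commit(oid).unwrap();
    /// repo.set_head_detached_from_annotated(annotated).unwrap();
    /// ```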
pub fn set_head_detached_from_annotated( &self, commitish: AnnotatedCommit<'_>, ) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_set_head_detached_from_annotated( self.raw, commitish.raw() )); } Ok(()) } /// Create an iterator for the repo's references pub fn references(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_reference_iterator_new(&mut ret, self.raw)); Ok(Binding::from_raw(ret)) } } /// Create an iterator for the repo's references that match the specified /// glob pub fn references_glob(&self, glob: &str) -> Result, Error> { let mut ret = ptr::null_mut(); let glob = CString::new(glob)?; unsafe { try_call!(raw::git_reference_iterator_glob_new( &mut ret, self.raw, glob )); Ok(Binding::from_raw(ret)) } } /// Load all submodules for this repository and return them. pub fn submodules(&self) -> Result>, Error> { struct Data<'a, 'b> { repo: &'b Repository, ret: &'a mut Vec>, } let mut ret = Vec::new(); unsafe { let mut data = Data { repo: self, ret: &mut ret, }; let cb: raw::git_submodule_cb = Some(append); try_call!(raw::git_submodule_foreach( self.raw, cb, &mut data as *mut _ as *mut c_void )); } return Ok(ret); extern "C" fn append( _repo: *mut raw::git_submodule, name: *const c_char, data: *mut c_void, ) -> c_int { unsafe { let data = &mut *(data as *mut Data<'_, '_>); let mut raw = ptr::null_mut(); let rc = raw::git_submodule_lookup(&mut raw, data.repo.raw(), name); assert_eq!(rc, 0); data.ret.push(Binding::from_raw(raw)); } 0 } } /// Gather file status information and populate the returned structure. /// /// Note that if a pathspec is given in the options to filter the /// status, then the results from rename detection (if you enable it) may /// not be accurate. To do rename detection properly, this must be called /// with no pathspec so that all files can be considered. pub fn statuses(&self, options: Option<&mut StatusOptions>) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_status_list_new( &mut ret, self.raw, options.map(|s| s.raw()).unwrap_or(ptr::null()) )); Ok(Binding::from_raw(ret)) } } /// Test if the ignore rules apply to a given file. /// /// This function checks the ignore rules to see if they would apply to the /// given file. This indicates if the file would be ignored regardless of /// whether the file is already in the index or committed to the repository. /// /// One way to think of this is if you were to do "git add ." on the /// directory containing the file, would it be added or not? pub fn status_should_ignore(&self, path: &Path) -> Result { let mut ret = 0 as c_int; let path = util::cstring_to_repo_path(path)?; unsafe { try_call!(raw::git_status_should_ignore(&mut ret, self.raw, path)); } Ok(ret != 0) } /// Get file status for a single file. /// /// This tries to get status for the filename that you give. If no files /// match that name (in either the HEAD, index, or working directory), this /// returns NotFound. /// /// If the name matches multiple files (for example, if the path names a /// directory or if running on a case- insensitive filesystem and yet the /// HEAD has two entries that both match the path), then this returns /// Ambiguous because it cannot give correct results. /// /// This does not do any sort of rename detection. Renames require a set of /// targets and because of the path filtering, there is not enough /// information to check renames correctly. 
To check file status with rename /// detection, there is no choice but to do a full `statuses` and scan /// through looking for the path that you are interested in. pub fn status_file(&self, path: &Path) -> Result { let mut ret = 0 as c_uint; let path = path_to_repo_path(path)?; unsafe { try_call!(raw::git_status_file(&mut ret, self.raw, path)); } Ok(Status::from_bits_truncate(ret as u32)) } /// Create an iterator which loops over the requested branches. pub fn branches(&self, filter: Option) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_branch_iterator_new(&mut raw, self.raw(), filter)); Ok(Branches::from_raw(raw)) } } /// Get the Index file for this repository. /// /// If a custom index has not been set, the default index for the repository /// will be returned (the one located in .git/index). pub fn index(&self) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_repository_index(&mut raw, self.raw())); Ok(Binding::from_raw(raw)) } } /// Set the Index file for this repository. pub fn set_index(&self, index: &mut Index) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_set_index(self.raw(), index.raw())); } Ok(()) } /// Get the configuration file for this repository. /// /// If a configuration file has not been set, the default config set for the /// repository will be returned, including global and system configurations /// (if they are available). pub fn config(&self) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_repository_config(&mut raw, self.raw())); Ok(Binding::from_raw(raw)) } } /// Get the value of a git attribute for a path as a string. /// /// This function will return a special string if the attribute is set to a special value. /// Interpreting the special string is discouraged. You should always use /// [`AttrValue::from_string`](crate::AttrValue::from_string) to interpret the return value /// and avoid the special string. /// /// As such, the return type of this function will probably be changed in the next major version /// to prevent interpreting the returned string without checking whether it's special. pub fn get_attr( &self, path: &Path, name: &str, flags: AttrCheckFlags, ) -> Result, Error> { Ok(self .get_attr_bytes(path, name, flags)? .and_then(|a| str::from_utf8(a).ok())) } /// Get the value of a git attribute for a path as a byte slice. /// /// This function will return a special byte slice if the attribute is set to a special value. /// Interpreting the special byte slice is discouraged. You should always use /// [`AttrValue::from_bytes`](crate::AttrValue::from_bytes) to interpret the return value and /// avoid the special string. /// /// As such, the return type of this function will probably be changed in the next major version /// to prevent interpreting the returned byte slice without checking whether it's special. pub fn get_attr_bytes( &self, path: &Path, name: &str, flags: AttrCheckFlags, ) -> Result, Error> { let mut ret = ptr::null(); let path = util::cstring_to_repo_path(path)?; let name = CString::new(name)?; unsafe { try_call!(raw::git_attr_get( &mut ret, self.raw(), flags.bits(), path, name )); Ok(crate::opt_bytes(self, ret)) } } /// Write an in-memory buffer to the ODB as a blob. /// /// The Oid returned can in turn be passed to `find_blob` to get a handle to /// the blob. 
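    ///
    /// # Example
    ///
    /// A sketch of writing a buffer as a blob and reading it back; it
    /// assumes a repository in the current directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let oid = repo.blob(b"hello, world\n").unwrap();
    /// let blob = repo.find_blob(oid).unwrap();
    /// assert_eq!(blob.content(), b"hello, world\n");
    /// ```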
pub fn blob(&self, data: &[u8]) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { let ptr = data.as_ptr() as *const c_void; let len = data.len() as size_t; try_call!(raw::git_blob_create_frombuffer( &mut raw, self.raw(), ptr, len )); Ok(Binding::from_raw(&raw as *const _)) } } /// Read a file from the filesystem and write its content to the Object /// Database as a loose blob /// /// The Oid returned can in turn be passed to `find_blob` to get a handle to /// the blob. pub fn blob_path(&self, path: &Path) -> Result { // Normal file path OK (does not need Windows conversion). let path = path.into_c_string()?; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_blob_create_fromdisk(&mut raw, self.raw(), path)); Ok(Binding::from_raw(&raw as *const _)) } } /// Create a stream to write blob /// /// This function may need to buffer the data on disk and will in general /// not be the right choice if you know the size of the data to write. /// /// Use `BlobWriter::commit()` to commit the write to the object db /// and get the object id. /// /// If the `hintpath` parameter is filled, it will be used to determine /// what git filters should be applied to the object before it is written /// to the object database. pub fn blob_writer(&self, hintpath: Option<&Path>) -> Result, Error> { let path_str = match hintpath { Some(path) => Some(path.into_c_string()?), None => None, }; let path = match path_str { Some(ref path) => path.as_ptr(), None => ptr::null(), }; let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_blob_create_fromstream(&mut out, self.raw(), path)); Ok(BlobWriter::from_raw(out)) } } /// Lookup a reference to one of the objects in a repository. pub fn find_blob(&self, oid: Oid) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_blob_lookup(&mut raw, self.raw(), oid.raw())); Ok(Binding::from_raw(raw)) } } /// Get the object database for this repository pub fn odb(&self) -> Result, Error> { let mut odb = ptr::null_mut(); unsafe { try_call!(raw::git_repository_odb(&mut odb, self.raw())); Ok(Odb::from_raw(odb)) } } /// Override the object database for this repository pub fn set_odb(&self, odb: &Odb<'_>) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_set_odb(self.raw(), odb.raw())); } Ok(()) } /// Create a new branch pointing at a target commit /// /// A new direct reference will be created pointing to this target commit. /// If `force` is true and a reference already exists with the given name, /// it'll be replaced. pub fn branch( &self, branch_name: &str, target: &Commit<'_>, force: bool, ) -> Result, Error> { let branch_name = CString::new(branch_name)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_branch_create( &mut raw, self.raw(), branch_name, target.raw(), force )); Ok(Branch::wrap(Binding::from_raw(raw))) } } /// Create a new branch pointing at a target commit /// /// This behaves like `Repository::branch()` but takes /// an annotated commit, which lets you specify which /// extended sha syntax string was specified by a user, /// allowing for more exact reflog messages. 
/// /// See the documentation for `Repository::branch()` pub fn branch_from_annotated_commit( &self, branch_name: &str, target: &AnnotatedCommit<'_>, force: bool, ) -> Result, Error> { let branch_name = CString::new(branch_name)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_branch_create_from_annotated( &mut raw, self.raw(), branch_name, target.raw(), force )); Ok(Branch::wrap(Binding::from_raw(raw))) } } /// Lookup a branch by its name in a repository. pub fn find_branch(&self, name: &str, branch_type: BranchType) -> Result, Error> { let name = CString::new(name)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_branch_lookup( &mut ret, self.raw(), name, branch_type )); Ok(Branch::wrap(Binding::from_raw(ret))) } } /// Create new commit in the repository /// /// If the `update_ref` is not `None`, name of the reference that will be /// updated to point to this commit. If the reference is not direct, it will /// be resolved to a direct reference. Use "HEAD" to update the HEAD of the /// current branch and make it point to this commit. If the reference /// doesn't exist yet, it will be created. If it does exist, the first /// parent must be the tip of this branch. pub fn commit( &self, update_ref: Option<&str>, author: &Signature<'_>, committer: &Signature<'_>, message: &str, tree: &Tree<'_>, parents: &[&Commit<'_>], ) -> Result { let update_ref = crate::opt_cstr(update_ref)?; let mut parent_ptrs = parents .iter() .map(|p| p.raw() as *const raw::git_commit) .collect::>(); let message = CString::new(message)?; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_commit_create( &mut raw, self.raw(), update_ref, author.raw(), committer.raw(), ptr::null(), message, tree.raw(), parents.len() as size_t, parent_ptrs.as_mut_ptr() )); Ok(Binding::from_raw(&raw as *const _)) } } /// Create a commit object and return that as a Buf. /// /// That can be converted to a string like this `str::from_utf8(&buf).unwrap().to_string()`. /// And that string can be passed to the `commit_signed` function, /// the arguments behave the same as in the `commit` function. pub fn commit_create_buffer( &self, author: &Signature<'_>, committer: &Signature<'_>, message: &str, tree: &Tree<'_>, parents: &[&Commit<'_>], ) -> Result { let mut parent_ptrs = parents .iter() .map(|p| p.raw() as *const raw::git_commit) .collect::>(); let message = CString::new(message)?; let buf = Buf::new(); unsafe { try_call!(raw::git_commit_create_buffer( buf.raw(), self.raw(), author.raw(), committer.raw(), ptr::null(), message, tree.raw(), parents.len() as size_t, parent_ptrs.as_mut_ptr() )); Ok(buf) } } /// Create a commit object from the given buffer and signature /// /// Given the unsigned commit object's contents, its signature and the /// header field in which to store the signature, attach the signature to /// the commit and write it into the given repository. /// /// Use `None` in `signature_field` to use the default of `gpgsig`, which is /// almost certainly what you want. /// /// Returns the resulting (signed) commit id. 
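    ///
    /// # Example
    ///
    /// A sketch of the buffer-then-sign flow. The signature string here is a
    /// placeholder for a detached signature produced out of band (for
    /// example by running gpg over the buffer); producing it is outside the
    /// scope of this crate.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let head = repo.head().unwrap().peel_to_commit().unwrap();
    /// let tree = head.tree().unwrap();
    /// let sig = repo.signature().unwrap();
    /// let buf = repo
    ///     .commit_create_buffer(&sig, &sig, "a signed commit", &tree, &[&head])
    ///     .unwrap();
    /// let contents = std::str::from_utf8(&buf).unwrap();
    /// // Placeholder: an ASCII-armored signature over `contents`.
    /// let detached_signature = "...";
    /// let oid = repo.commit_signed(contents, detached_signature, None).unwrap();
    /// println!("signed commit: {}", oid);
    /// ```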
pub fn commit_signed( &self, commit_content: &str, signature: &str, signature_field: Option<&str>, ) -> Result { let commit_content = CString::new(commit_content)?; let signature = CString::new(signature)?; let signature_field = crate::opt_cstr(signature_field)?; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_commit_create_with_signature( &mut raw, self.raw(), commit_content, signature, signature_field )); Ok(Binding::from_raw(&raw as *const _)) } } /// Extract the signature from a commit /// /// Returns a tuple containing the signature in the first value and the /// signed data in the second. pub fn extract_signature( &self, commit_id: &Oid, signature_field: Option<&str>, ) -> Result<(Buf, Buf), Error> { let signature_field = crate::opt_cstr(signature_field)?; let signature = Buf::new(); let content = Buf::new(); unsafe { try_call!(raw::git_commit_extract_signature( signature.raw(), content.raw(), self.raw(), commit_id.raw() as *mut _, signature_field )); Ok((signature, content)) } } /// Lookup a reference to one of the commits in a repository. pub fn find_commit(&self, oid: Oid) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_commit_lookup(&mut raw, self.raw(), oid.raw())); Ok(Binding::from_raw(raw)) } } /// Creates an `AnnotatedCommit` from the given commit id. pub fn find_annotated_commit(&self, id: Oid) -> Result, Error> { unsafe { let mut raw = ptr::null_mut(); try_call!(raw::git_annotated_commit_lookup( &mut raw, self.raw(), id.raw() )); Ok(Binding::from_raw(raw)) } } /// Lookup a reference to one of the objects in a repository. pub fn find_object(&self, oid: Oid, kind: Option) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_object_lookup( &mut raw, self.raw(), oid.raw(), kind )); Ok(Binding::from_raw(raw)) } } /// Create a new direct reference. /// /// This function will return an error if a reference already exists with /// the given name unless force is true, in which case it will be /// overwritten. pub fn reference( &self, name: &str, id: Oid, force: bool, log_message: &str, ) -> Result, Error> { let name = CString::new(name)?; let log_message = CString::new(log_message)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_create( &mut raw, self.raw(), name, id.raw(), force, log_message )); Ok(Binding::from_raw(raw)) } } /// Conditionally create new direct reference. /// /// A direct reference (also called an object id reference) refers directly /// to a specific object id (a.k.a. OID or SHA) in the repository. The id /// permanently refers to the object (although the reference itself can be /// moved). For example, in libgit2 the direct ref "refs/tags/v0.17.0" /// refers to OID 5b9fac39d8a76b9139667c26a63e6b3f204b3977. /// /// The direct reference will be created in the repository and written to /// the disk. /// /// Valid reference names must follow one of two patterns: /// /// 1. Top-level names must contain only capital letters and underscores, /// and must begin and end with a letter. (e.g. "HEAD", "ORIG_HEAD"). /// 2. Names prefixed with "refs/" can be almost anything. You must avoid /// the characters `~`, `^`, `:`, `\\`, `?`, `[`, and `*`, and the /// sequences ".." and "@{" which have special meaning to revparse. /// /// This function will return an error if a reference already exists with /// the given name unless `force` is true, in which case it will be /// overwritten. 
/// /// The message for the reflog will be ignored if the reference does not /// belong in the standard set (HEAD, branches and remote-tracking /// branches) and it does not have a reflog. /// /// It will return GIT_EMODIFIED if the reference's value at the time of /// updating does not match the one passed through `current_id` (i.e. if the /// ref has changed since the user read it). pub fn reference_matching( &self, name: &str, id: Oid, force: bool, current_id: Oid, log_message: &str, ) -> Result, Error> { let name = CString::new(name)?; let log_message = CString::new(log_message)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_create_matching( &mut raw, self.raw(), name, id.raw(), force, current_id.raw(), log_message )); Ok(Binding::from_raw(raw)) } } /// Create a new symbolic reference. /// /// This function will return an error if a reference already exists with /// the given name unless force is true, in which case it will be /// overwritten. pub fn reference_symbolic( &self, name: &str, target: &str, force: bool, log_message: &str, ) -> Result, Error> { let name = CString::new(name)?; let target = CString::new(target)?; let log_message = CString::new(log_message)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_symbolic_create( &mut raw, self.raw(), name, target, force, log_message )); Ok(Binding::from_raw(raw)) } } /// Create a new symbolic reference. /// /// This function will return an error if a reference already exists with /// the given name unless force is true, in which case it will be /// overwritten. /// /// It will return GIT_EMODIFIED if the reference's value at the time of /// updating does not match the one passed through current_value (i.e. if /// the ref has changed since the user read it). pub fn reference_symbolic_matching( &self, name: &str, target: &str, force: bool, current_value: &str, log_message: &str, ) -> Result, Error> { let name = CString::new(name)?; let target = CString::new(target)?; let current_value = CString::new(current_value)?; let log_message = CString::new(log_message)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_symbolic_create_matching( &mut raw, self.raw(), name, target, force, current_value, log_message )); Ok(Binding::from_raw(raw)) } } /// Lookup a reference to one of the objects in a repository. pub fn find_reference(&self, name: &str) -> Result, Error> { let name = CString::new(name)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_lookup(&mut raw, self.raw(), name)); Ok(Binding::from_raw(raw)) } } /// Lookup a reference to one of the objects in a repository. /// `Repository::find_reference` with teeth; give the method your reference in /// human-readable format e.g. 'main' instead of 'refs/heads/main', and it /// will do-what-you-mean, returning the `Reference`. pub fn resolve_reference_from_short_name(&self, refname: &str) -> Result, Error> { let refname = CString::new(refname)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_reference_dwim(&mut raw, self.raw(), refname)); Ok(Binding::from_raw(raw)) } } /// Lookup a reference by name and resolve immediately to OID. /// /// This function provides a quick way to resolve a reference name straight /// through to the object id that it refers to. This avoids having to /// allocate or free any `Reference` objects for simple situations. 
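    ///
    /// # Example
    ///
    /// A sketch of resolving a fully qualified reference name to its object
    /// id; the branch name `refs/heads/main` is assumed to exist.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let oid = repo.refname_to_id("refs/heads/main").unwrap();
    /// println!("refs/heads/main -> {}", oid);
    /// ```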
pub fn refname_to_id(&self, name: &str) -> Result { let name = CString::new(name)?; let mut ret = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_reference_name_to_id(&mut ret, self.raw(), name)); Ok(Binding::from_raw(&ret as *const _)) } } /// Creates a git_annotated_commit from the given reference. pub fn reference_to_annotated_commit( &self, reference: &Reference<'_>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_annotated_commit_from_ref( &mut ret, self.raw(), reference.raw() )); Ok(AnnotatedCommit::from_raw(ret)) } } /// Creates a git_annotated_commit from FETCH_HEAD. pub fn annotated_commit_from_fetchhead( &self, branch_name: &str, remote_url: &str, id: &Oid, ) -> Result, Error> { let branch_name = CString::new(branch_name)?; let remote_url = CString::new(remote_url)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_annotated_commit_from_fetchhead( &mut ret, self.raw(), branch_name, remote_url, id.raw() )); Ok(AnnotatedCommit::from_raw(ret)) } } /// Create a new action signature with default user and now timestamp. /// /// This looks up the user.name and user.email from the configuration and /// uses the current time as the timestamp, and creates a new signature /// based on that information. It will return `NotFound` if either the /// user.name or user.email are not set. pub fn signature(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_signature_default(&mut ret, self.raw())); Ok(Binding::from_raw(ret)) } } /// Set up a new git submodule for checkout. /// /// This does "git submodule add" up to the fetch and checkout of the /// submodule contents. It preps a new submodule, creates an entry in /// `.gitmodules` and creates an empty initialized repository either at the /// given path in the working directory or in `.git/modules` with a gitlink /// from the working directory to the new repo. /// /// To fully emulate "git submodule add" call this function, then `open()` /// the submodule repo and perform the clone step as needed. Lastly, call /// `add_finalize()` to wrap up adding the new submodule and `.gitmodules` /// to the index to be ready to commit. pub fn submodule( &self, url: &str, path: &Path, use_gitlink: bool, ) -> Result, Error> { let url = CString::new(url)?; let path = path_to_repo_path(path)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_submodule_add_setup( &mut raw, self.raw(), url, path, use_gitlink )); Ok(Binding::from_raw(raw)) } } /// Lookup submodule information by name or path. /// /// Given either the submodule name or path (they are usually the same), /// this returns a structure describing the submodule. pub fn find_submodule(&self, name: &str) -> Result, Error> { let name = CString::new(name)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_submodule_lookup(&mut raw, self.raw(), name)); Ok(Binding::from_raw(raw)) } } /// Get the status for a submodule. /// /// This looks at a submodule and tries to determine the status. It /// will return a combination of the `SubmoduleStatus` values. pub fn submodule_status( &self, name: &str, ignore: SubmoduleIgnore, ) -> Result { let mut ret = 0; let name = CString::new(name)?; unsafe { try_call!(raw::git_submodule_status(&mut ret, self.raw, name, ignore)); } Ok(SubmoduleStatus::from_bits_truncate(ret as u32)) } /// Set the ignore rule for the submodule in the configuration /// /// This does not affect any currently-loaded instances. 
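    ///
    /// # Example
    ///
    /// A sketch of changing the ignore rule for one submodule; the submodule
    /// name `vendor/dep` is hypothetical.
    ///
    /// ```no_run
    /// use git2::{Repository, SubmoduleIgnore};
    ///
    /// let mut repo = Repository::open(".").unwrap();
    /// repo.submodule_set_ignore("vendor/dep", SubmoduleIgnore::Dirty)
    ///     .unwrap();
    /// ```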
pub fn submodule_set_ignore( &mut self, name: &str, ignore: SubmoduleIgnore, ) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_submodule_set_ignore(self.raw(), name, ignore)); } Ok(()) } /// Set the update rule for the submodule in the configuration /// /// This setting won't affect any existing instances. pub fn submodule_set_update( &mut self, name: &str, update: SubmoduleUpdate, ) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_submodule_set_update(self.raw(), name, update)); } Ok(()) } /// Set the URL for the submodule in the configuration /// /// After calling this, you may wish to call [`Submodule::sync`] to write /// the changes to the checked out submodule repository. pub fn submodule_set_url(&mut self, name: &str, url: &str) -> Result<(), Error> { let name = CString::new(name)?; let url = CString::new(url)?; unsafe { try_call!(raw::git_submodule_set_url(self.raw(), name, url)); } Ok(()) } /// Set the branch for the submodule in the configuration /// /// After calling this, you may wish to call [`Submodule::sync`] to write /// the changes to the checked out submodule repository. pub fn submodule_set_branch(&mut self, name: &str, branch_name: &str) -> Result<(), Error> { let name = CString::new(name)?; let branch_name = CString::new(branch_name)?; unsafe { try_call!(raw::git_submodule_set_branch(self.raw(), name, branch_name)); } Ok(()) } /// Lookup a reference to one of the objects in a repository. pub fn find_tree(&self, oid: Oid) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_tree_lookup(&mut raw, self.raw(), oid.raw())); Ok(Binding::from_raw(raw)) } } /// Create a new TreeBuilder, optionally initialized with the /// entries of the given Tree. /// /// The tree builder can be used to create or modify trees in memory and /// write them as tree objects to the database. pub fn treebuilder(&self, tree: Option<&Tree<'_>>) -> Result, Error> { unsafe { let mut ret = ptr::null_mut(); let tree = match tree { Some(tree) => tree.raw(), None => ptr::null_mut(), }; try_call!(raw::git_treebuilder_new(&mut ret, self.raw, tree)); Ok(Binding::from_raw(ret)) } } /// Create a new tag in the repository from an object /// /// A new reference will also be created pointing to this tag object. If /// `force` is true and a reference already exists with the given name, /// it'll be replaced. /// /// The message will not be cleaned up. /// /// The tag name will be checked for validity. You must avoid the characters /// '~', '^', ':', ' \ ', '?', '[', and '*', and the sequences ".." and " @ /// {" which have special meaning to revparse. pub fn tag( &self, name: &str, target: &Object<'_>, tagger: &Signature<'_>, message: &str, force: bool, ) -> Result { let name = CString::new(name)?; let message = CString::new(message)?; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_tag_create( &mut raw, self.raw, name, target.raw(), tagger.raw(), message, force )); Ok(Binding::from_raw(&raw as *const _)) } } /// Create a new lightweight tag pointing at a target object /// /// A new direct reference will be created pointing to this target object. /// If force is true and a reference already exists with the given name, /// it'll be replaced. 
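    ///
    /// # Example
    ///
    /// A sketch of tagging the current HEAD commit with a lightweight tag;
    /// the tag name is arbitrary and the repository is assumed to be in the
    /// current directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let target = repo.revparse_single("HEAD").unwrap();
    /// let tag_oid = repo.tag_lightweight("v0.1.0-preview", &target, false).unwrap();
    /// println!("created tag {}", tag_oid);
    /// ```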
pub fn tag_lightweight( &self, name: &str, target: &Object<'_>, force: bool, ) -> Result { let name = CString::new(name)?; let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_tag_create_lightweight( &mut raw, self.raw, name, target.raw(), force )); Ok(Binding::from_raw(&raw as *const _)) } } /// Lookup a tag object from the repository. pub fn find_tag(&self, id: Oid) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_tag_lookup(&mut raw, self.raw, id.raw())); Ok(Binding::from_raw(raw)) } } /// Delete an existing tag reference. /// /// The tag name will be checked for validity, see `tag` for some rules /// about valid names. pub fn tag_delete(&self, name: &str) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_tag_delete(self.raw, name)); Ok(()) } } /// Get a list with all the tags in the repository. /// /// An optional fnmatch pattern can also be specified. pub fn tag_names(&self, pattern: Option<&str>) -> Result { let mut arr = raw::git_strarray { strings: ptr::null_mut(), count: 0, }; unsafe { match pattern { Some(s) => { let s = CString::new(s)?; try_call!(raw::git_tag_list_match(&mut arr, s, self.raw)); } None => { try_call!(raw::git_tag_list(&mut arr, self.raw)); } } Ok(Binding::from_raw(arr)) } } /// iterate over all tags calling `cb` on each. /// the callback is provided the tag id and name pub fn tag_foreach(&self, cb: T) -> Result<(), Error> where T: FnMut(Oid, &[u8]) -> bool, { let mut data = TagForeachData { cb: Box::new(cb) as TagForeachCB<'_>, }; unsafe { raw::git_tag_foreach( self.raw, Some(tag_foreach_cb), (&mut data) as *mut _ as *mut _, ); } Ok(()) } /// Updates files in the index and the working tree to match the content of /// the commit pointed at by HEAD. pub fn checkout_head(&self, opts: Option<&mut CheckoutBuilder<'_>>) -> Result<(), Error> { unsafe { let mut raw_opts = mem::zeroed(); try_call!(raw::git_checkout_init_options( &mut raw_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION )); if let Some(c) = opts { c.configure(&mut raw_opts); } try_call!(raw::git_checkout_head(self.raw, &raw_opts)); } Ok(()) } /// Updates files in the working tree to match the content of the index. /// /// If the index is `None`, the repository's index will be used. pub fn checkout_index( &self, index: Option<&mut Index>, opts: Option<&mut CheckoutBuilder<'_>>, ) -> Result<(), Error> { unsafe { let mut raw_opts = mem::zeroed(); try_call!(raw::git_checkout_init_options( &mut raw_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION )); if let Some(c) = opts { c.configure(&mut raw_opts); } try_call!(raw::git_checkout_index( self.raw, index.map(|i| &mut *i.raw()), &raw_opts )); } Ok(()) } /// Updates files in the index and working tree to match the content of the /// tree pointed at by the treeish. pub fn checkout_tree( &self, treeish: &Object<'_>, opts: Option<&mut CheckoutBuilder<'_>>, ) -> Result<(), Error> { unsafe { let mut raw_opts = mem::zeroed(); try_call!(raw::git_checkout_init_options( &mut raw_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION )); if let Some(c) = opts { c.configure(&mut raw_opts); } try_call!(raw::git_checkout_tree(self.raw, &*treeish.raw(), &raw_opts)); } Ok(()) } /// Merges the given commit(s) into HEAD, writing the results into the /// working directory. Any changes are staged for commit and any conflicts /// are written to the index. Callers should inspect the repository's index /// after this completes, resolve any conflicts and prepare a commit. 
/// /// For compatibility with git, the repository is put into a merging state. /// Once the commit is done (or if the user wishes to abort), you should /// clear this state by calling cleanup_state(). pub fn merge( &self, annotated_commits: &[&AnnotatedCommit<'_>], merge_opts: Option<&mut MergeOptions>, checkout_opts: Option<&mut CheckoutBuilder<'_>>, ) -> Result<(), Error> { unsafe { let mut raw_checkout_opts = mem::zeroed(); try_call!(raw::git_checkout_init_options( &mut raw_checkout_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION )); if let Some(c) = checkout_opts { c.configure(&mut raw_checkout_opts); } let mut commit_ptrs = annotated_commits .iter() .map(|c| c.raw() as *const raw::git_annotated_commit) .collect::>(); try_call!(raw::git_merge( self.raw, commit_ptrs.as_mut_ptr(), annotated_commits.len() as size_t, merge_opts.map(|o| o.raw()).unwrap_or(ptr::null()), &raw_checkout_opts )); } Ok(()) } /// Merge two commits, producing an index that reflects the result of /// the merge. The index may be written as-is to the working directory or /// checked out. If the index is to be converted to a tree, the caller /// should resolve any conflicts that arose as part of the merge. pub fn merge_commits( &self, our_commit: &Commit<'_>, their_commit: &Commit<'_>, opts: Option<&MergeOptions>, ) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_merge_commits( &mut raw, self.raw, our_commit.raw(), their_commit.raw(), opts.map(|o| o.raw()) )); Ok(Binding::from_raw(raw)) } } /// Merge two trees, producing an index that reflects the result of /// the merge. The index may be written as-is to the working directory or /// checked out. If the index is to be converted to a tree, the caller /// should resolve any conflicts that arose as part of the merge. pub fn merge_trees( &self, ancestor_tree: &Tree<'_>, our_tree: &Tree<'_>, their_tree: &Tree<'_>, opts: Option<&MergeOptions>, ) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_merge_trees( &mut raw, self.raw, ancestor_tree.raw(), our_tree.raw(), their_tree.raw(), opts.map(|o| o.raw()) )); Ok(Binding::from_raw(raw)) } } /// Remove all the metadata associated with an ongoing command like merge, /// revert, cherry-pick, etc. For example: MERGE_HEAD, MERGE_MSG, etc. pub fn cleanup_state(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_repository_state_cleanup(self.raw)); } Ok(()) } /// Analyzes the given branch(es) and determines the opportunities for /// merging them into the HEAD of the repository. pub fn merge_analysis( &self, their_heads: &[&AnnotatedCommit<'_>], ) -> Result<(MergeAnalysis, MergePreference), Error> { unsafe { let mut raw_merge_analysis = 0 as raw::git_merge_analysis_t; let mut raw_merge_preference = 0 as raw::git_merge_preference_t; let mut their_heads = their_heads .iter() .map(|v| v.raw() as *const _) .collect::>(); try_call!(raw::git_merge_analysis( &mut raw_merge_analysis, &mut raw_merge_preference, self.raw, their_heads.as_mut_ptr() as *mut _, their_heads.len() )); Ok(( MergeAnalysis::from_bits_truncate(raw_merge_analysis as u32), MergePreference::from_bits_truncate(raw_merge_preference as u32), )) } } /// Analyzes the given branch(es) and determines the opportunities for /// merging them into a reference. 
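    ///
    /// # Example
    ///
    /// A sketch of checking whether a fetched branch could be fast-forwarded
    /// into a local one; the reference names `refs/heads/main` and
    /// `refs/remotes/origin/main` are assumptions about the repository.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let our_ref = repo.find_reference("refs/heads/main").unwrap();
    /// let their_oid = repo.refname_to_id("refs/remotes/origin/main").unwrap();
    /// let their_head = repo.find_annotated_commit(their_oid).unwrap();
    /// let (analysis, _preference) = repo
    ///     .merge_analysis_for_ref(&our_ref, &[&their_head])
    ///     .unwrap();
    /// if analysis.is_fast_forward() {
    ///     println!("a fast-forward merge is possible");
    /// }
    /// ```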
pub fn merge_analysis_for_ref( &self, our_ref: &Reference<'_>, their_heads: &[&AnnotatedCommit<'_>], ) -> Result<(MergeAnalysis, MergePreference), Error> { unsafe { let mut raw_merge_analysis = 0 as raw::git_merge_analysis_t; let mut raw_merge_preference = 0 as raw::git_merge_preference_t; let mut their_heads = their_heads .iter() .map(|v| v.raw() as *const _) .collect::>(); try_call!(raw::git_merge_analysis_for_ref( &mut raw_merge_analysis, &mut raw_merge_preference, self.raw, our_ref.raw(), their_heads.as_mut_ptr() as *mut _, their_heads.len() )); Ok(( MergeAnalysis::from_bits_truncate(raw_merge_analysis as u32), MergePreference::from_bits_truncate(raw_merge_preference as u32), )) } } /// Initializes a rebase operation to rebase the changes in `branch` /// relative to `upstream` onto another branch. To begin the rebase process, /// call `next()`. pub fn rebase( &self, branch: Option<&AnnotatedCommit<'_>>, upstream: Option<&AnnotatedCommit<'_>>, onto: Option<&AnnotatedCommit<'_>>, opts: Option<&mut RebaseOptions<'_>>, ) -> Result, Error> { let mut rebase: *mut raw::git_rebase = ptr::null_mut(); unsafe { try_call!(raw::git_rebase_init( &mut rebase, self.raw(), branch.map(|c| c.raw()), upstream.map(|c| c.raw()), onto.map(|c| c.raw()), opts.map(|o| o.raw()).unwrap_or(ptr::null()) )); Ok(Rebase::from_raw(rebase)) } } /// Opens an existing rebase that was previously started by either an /// invocation of `rebase()` or by another client. pub fn open_rebase(&self, opts: Option<&mut RebaseOptions<'_>>) -> Result, Error> { let mut rebase: *mut raw::git_rebase = ptr::null_mut(); unsafe { try_call!(raw::git_rebase_open( &mut rebase, self.raw(), opts.map(|o| o.raw()).unwrap_or(ptr::null()) )); Ok(Rebase::from_raw(rebase)) } } /// Add a note for an object /// /// The `notes_ref` argument is the canonical name of the reference to use, /// defaulting to "refs/notes/commits". If `force` is specified then /// previous notes are overwritten. pub fn note( &self, author: &Signature<'_>, committer: &Signature<'_>, notes_ref: Option<&str>, oid: Oid, note: &str, force: bool, ) -> Result { let notes_ref = crate::opt_cstr(notes_ref)?; let note = CString::new(note)?; let mut ret = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_note_create( &mut ret, self.raw, notes_ref, author.raw(), committer.raw(), oid.raw(), note, force )); Ok(Binding::from_raw(&ret as *const _)) } } /// Get the default notes reference for this repository pub fn note_default_ref(&self) -> Result { let ret = Buf::new(); unsafe { try_call!(raw::git_note_default_ref(ret.raw(), self.raw)); } Ok(str::from_utf8(&ret).unwrap().to_string()) } /// Creates a new iterator for notes in this repository. /// /// The `notes_ref` argument is the canonical name of the reference to use, /// defaulting to "refs/notes/commits". /// /// The iterator returned yields pairs of (Oid, Oid) where the first element /// is the id of the note and the second id is the id the note is /// annotating. pub fn notes(&self, notes_ref: Option<&str>) -> Result, Error> { let notes_ref = crate::opt_cstr(notes_ref)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_note_iterator_new(&mut ret, self.raw, notes_ref)); Ok(Binding::from_raw(ret)) } } /// Read the note for an object. /// /// The `notes_ref` argument is the canonical name of the reference to use, /// defaulting to "refs/notes/commits". /// /// The id specified is the Oid of the git object to read the note from. 
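    ///
    /// # Example
    ///
    /// A sketch of reading the note attached to the commit at HEAD from the
    /// default notes reference; it assumes a repository in the current
    /// directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let annotated_oid = repo.refname_to_id("HEAD").unwrap();
    /// if let Ok(note) = repo.find_note(None, annotated_oid) {
    ///     println!("note: {}", note.message().unwrap_or(""));
    /// }
    /// ```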
pub fn find_note(&self, notes_ref: Option<&str>, id: Oid) -> Result, Error> { let notes_ref = crate::opt_cstr(notes_ref)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_note_read(&mut ret, self.raw, notes_ref, id.raw())); Ok(Binding::from_raw(ret)) } } /// Remove the note for an object. /// /// The `notes_ref` argument is the canonical name of the reference to use, /// defaulting to "refs/notes/commits". /// /// The id specified is the Oid of the git object to remove the note from. pub fn note_delete( &self, id: Oid, notes_ref: Option<&str>, author: &Signature<'_>, committer: &Signature<'_>, ) -> Result<(), Error> { let notes_ref = crate::opt_cstr(notes_ref)?; unsafe { try_call!(raw::git_note_remove( self.raw, notes_ref, author.raw(), committer.raw(), id.raw() )); Ok(()) } } /// Create a revwalk that can be used to traverse the commit graph. pub fn revwalk(&self) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_revwalk_new(&mut raw, self.raw())); Ok(Binding::from_raw(raw)) } } /// Get the blame for a single file. pub fn blame_file( &self, path: &Path, opts: Option<&mut BlameOptions>, ) -> Result, Error> { let path = path_to_repo_path(path)?; let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_blame_file( &mut raw, self.raw(), path, opts.map(|s| s.raw()) )); Ok(Binding::from_raw(raw)) } } /// Find a merge base between two commits pub fn merge_base(&self, one: Oid, two: Oid) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_merge_base( &mut raw, self.raw, one.raw(), two.raw() )); Ok(Binding::from_raw(&raw as *const _)) } } /// Find a merge base given a list of commits pub fn merge_base_many(&self, oids: &[Oid]) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_merge_base_many( &mut raw, self.raw, oids.len() as size_t, oids.as_ptr() as *const raw::git_oid )); Ok(Binding::from_raw(&raw as *const _)) } } /// Find all merge bases between two commits pub fn merge_bases(&self, one: Oid, two: Oid) -> Result { let mut arr = raw::git_oidarray { ids: ptr::null_mut(), count: 0, }; unsafe { try_call!(raw::git_merge_bases( &mut arr, self.raw, one.raw(), two.raw() )); Ok(Binding::from_raw(arr)) } } /// Find all merge bases given a list of commits pub fn merge_bases_many(&self, oids: &[Oid]) -> Result { let mut arr = raw::git_oidarray { ids: ptr::null_mut(), count: 0, }; unsafe { try_call!(raw::git_merge_bases_many( &mut arr, self.raw, oids.len() as size_t, oids.as_ptr() as *const raw::git_oid )); Ok(Binding::from_raw(arr)) } } /// Count the number of unique commits between two commit objects /// /// There is no need for branches containing the commits to have any /// upstream relationship, but it helps to think of one as a branch and the /// other as its upstream, the ahead and behind values will be what git /// would report for the branches. 
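    ///
    /// # Example
    ///
    /// A sketch of comparing a local branch with its remote-tracking branch;
    /// both reference names are assumptions about the repository.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let local = repo.refname_to_id("refs/heads/main").unwrap();
    /// let upstream = repo.refname_to_id("refs/remotes/origin/main").unwrap();
    /// let (ahead, behind) = repo.graph_ahead_behind(local, upstream).unwrap();
    /// println!("{} ahead, {} behind", ahead, behind);
    /// ```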
pub fn graph_ahead_behind(&self, local: Oid, upstream: Oid) -> Result<(usize, usize), Error> { unsafe { let mut ahead: size_t = 0; let mut behind: size_t = 0; try_call!(raw::git_graph_ahead_behind( &mut ahead, &mut behind, self.raw(), local.raw(), upstream.raw() )); Ok((ahead as usize, behind as usize)) } } /// Determine if a commit is the descendant of another commit pub fn graph_descendant_of(&self, commit: Oid, ancestor: Oid) -> Result { unsafe { let rv = try_call!(raw::git_graph_descendant_of( self.raw(), commit.raw(), ancestor.raw() )); Ok(rv != 0) } } /// Read the reflog for the given reference /// /// If there is no reflog file for the given reference yet, an empty reflog /// object will be returned. pub fn reflog(&self, name: &str) -> Result { let name = CString::new(name)?; let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_reflog_read(&mut ret, self.raw, name)); Ok(Binding::from_raw(ret)) } } /// Delete the reflog for the given reference pub fn reflog_delete(&self, name: &str) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_reflog_delete(self.raw, name)); } Ok(()) } /// Rename a reflog /// /// The reflog to be renamed is expected to already exist. pub fn reflog_rename(&self, old_name: &str, new_name: &str) -> Result<(), Error> { let old_name = CString::new(old_name)?; let new_name = CString::new(new_name)?; unsafe { try_call!(raw::git_reflog_rename(self.raw, old_name, new_name)); } Ok(()) } /// Check if the given reference has a reflog. pub fn reference_has_log(&self, name: &str) -> Result { let name = CString::new(name)?; let ret = unsafe { try_call!(raw::git_reference_has_log(self.raw, name)) }; Ok(ret != 0) } /// Ensure that the given reference has a reflog. pub fn reference_ensure_log(&self, name: &str) -> Result<(), Error> { let name = CString::new(name)?; unsafe { try_call!(raw::git_reference_ensure_log(self.raw, name)); } Ok(()) } /// Describes a commit /// /// Performs a describe operation on the current commit and the worktree. /// After performing a describe on HEAD, a status is run and description is /// considered to be dirty if there are. pub fn describe(&self, opts: &DescribeOptions) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_describe_workdir(&mut ret, self.raw, opts.raw())); Ok(Binding::from_raw(ret)) } } /// Directly run a diff on two blobs. /// /// Compared to a file, a blob lacks some contextual information. As such, the /// `DiffFile` given to the callback will have some fake data; i.e. mode will be /// 0 and path will be `None`. /// /// `None` is allowed for either `old_blob` or `new_blob` and will be treated /// as an empty blob, with the oid set to zero in the `DiffFile`. Passing `None` /// for both blobs is a noop; no callbacks will be made at all. /// /// We do run a binary content check on the blob content and if either blob looks /// like binary data, the `DiffFile` binary attribute will be set to 1 and no call to /// the `hunk_cb` nor `line_cb` will be made (unless you set the `force_text` /// option). 
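    ///
    /// # Example
    ///
    /// A sketch that diffs two freshly written blobs and prints each diff
    /// line through `line_cb`; the blob contents are arbitrary and the
    /// repository is assumed to be in the current directory.
    ///
    /// ```no_run
    /// use git2::Repository;
    ///
    /// let repo = Repository::open(".").unwrap();
    /// let old = repo.find_blob(repo.blob(b"hello\n").unwrap()).unwrap();
    /// let new = repo.find_blob(repo.blob(b"hello, world\n").unwrap()).unwrap();
    /// repo.diff_blobs(
    ///     Some(&old),
    ///     None,
    ///     Some(&new),
    ///     None,
    ///     None,
    ///     None,
    ///     None,
    ///     None,
    ///     Some(&mut |_delta, _hunk, line| {
    ///         print!("{}{}", line.origin(), String::from_utf8_lossy(line.content()));
    ///         true
    ///     }),
    /// )
    /// .unwrap();
    /// ```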
pub fn diff_blobs( &self, old_blob: Option<&Blob<'_>>, old_as_path: Option<&str>, new_blob: Option<&Blob<'_>>, new_as_path: Option<&str>, opts: Option<&mut DiffOptions>, file_cb: Option<&mut FileCb<'_>>, binary_cb: Option<&mut BinaryCb<'_>>, hunk_cb: Option<&mut HunkCb<'_>>, line_cb: Option<&mut LineCb<'_>>, ) -> Result<(), Error> { let old_as_path = crate::opt_cstr(old_as_path)?; let new_as_path = crate::opt_cstr(new_as_path)?; let mut cbs = DiffCallbacks { file: file_cb, binary: binary_cb, hunk: hunk_cb, line: line_cb, }; let ptr = &mut cbs as *mut _; unsafe { let file_cb_c: raw::git_diff_file_cb = if cbs.file.is_some() { Some(file_cb_c) } else { None }; let binary_cb_c: raw::git_diff_binary_cb = if cbs.binary.is_some() { Some(binary_cb_c) } else { None }; let hunk_cb_c: raw::git_diff_hunk_cb = if cbs.hunk.is_some() { Some(hunk_cb_c) } else { None }; let line_cb_c: raw::git_diff_line_cb = if cbs.line.is_some() { Some(line_cb_c) } else { None }; try_call!(raw::git_diff_blobs( old_blob.map(|s| s.raw()), old_as_path, new_blob.map(|s| s.raw()), new_as_path, opts.map(|s| s.raw()), file_cb_c, binary_cb_c, hunk_cb_c, line_cb_c, ptr as *mut _ )); Ok(()) } } /// Create a diff with the difference between two tree objects. /// /// This is equivalent to `git diff ` /// /// The first tree will be used for the "old_file" side of the delta and the /// second tree will be used for the "new_file" side of the delta. You can /// pass `None` to indicate an empty tree, although it is an error to pass /// `None` for both the `old_tree` and `new_tree`. pub fn diff_tree_to_tree( &self, old_tree: Option<&Tree<'_>>, new_tree: Option<&Tree<'_>>, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_tree_to_tree( &mut ret, self.raw(), old_tree.map(|s| s.raw()), new_tree.map(|s| s.raw()), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a diff between a tree and repository index. /// /// This is equivalent to `git diff --cached ` or if you pass /// the HEAD tree, then like `git diff --cached`. /// /// The tree you pass will be used for the "old_file" side of the delta, and /// the index will be used for the "new_file" side of the delta. /// /// If you pass `None` for the index, then the existing index of the `repo` /// will be used. In this case, the index will be refreshed from disk /// (if it has changed) before the diff is generated. /// /// If the tree is `None`, then it is considered an empty tree. pub fn diff_tree_to_index( &self, old_tree: Option<&Tree<'_>>, index: Option<&Index>, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_tree_to_index( &mut ret, self.raw(), old_tree.map(|s| s.raw()), index.map(|s| s.raw()), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a diff between two index objects. /// /// The first index will be used for the "old_file" side of the delta, and /// the second index will be used for the "new_file" side of the delta. pub fn diff_index_to_index( &self, old_index: &Index, new_index: &Index, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_index_to_index( &mut ret, self.raw(), old_index.raw(), new_index.raw(), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a diff between the repository index and the workdir directory. /// /// This matches the `git diff` command. 
See the note below on /// `tree_to_workdir` for a discussion of the difference between /// `git diff` and `git diff HEAD` and how to emulate a `git diff ` /// using libgit2. /// /// The index will be used for the "old_file" side of the delta, and the /// working directory will be used for the "new_file" side of the delta. /// /// If you pass `None` for the index, then the existing index of the `repo` /// will be used. In this case, the index will be refreshed from disk /// (if it has changed) before the diff is generated. pub fn diff_index_to_workdir( &self, index: Option<&Index>, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_index_to_workdir( &mut ret, self.raw(), index.map(|s| s.raw()), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a diff between a tree and the working directory. /// /// The tree you provide will be used for the "old_file" side of the delta, /// and the working directory will be used for the "new_file" side. /// /// This is not the same as `git diff ` or `git diff-index /// `. Those commands use information from the index, whereas this /// function strictly returns the differences between the tree and the files /// in the working directory, regardless of the state of the index. Use /// `tree_to_workdir_with_index` to emulate those commands. /// /// To see difference between this and `tree_to_workdir_with_index`, /// consider the example of a staged file deletion where the file has then /// been put back into the working dir and further modified. The /// tree-to-workdir diff for that file is 'modified', but `git diff` would /// show status 'deleted' since there is a staged delete. /// /// If `None` is passed for `tree`, then an empty tree is used. pub fn diff_tree_to_workdir( &self, old_tree: Option<&Tree<'_>>, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_tree_to_workdir( &mut ret, self.raw(), old_tree.map(|s| s.raw()), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a diff between a tree and the working directory using index data /// to account for staged deletes, tracked files, etc. /// /// This emulates `git diff ` by diffing the tree to the index and /// the index to the working directory and blending the results into a /// single diff that includes staged deleted, etc. pub fn diff_tree_to_workdir_with_index( &self, old_tree: Option<&Tree<'_>>, opts: Option<&mut DiffOptions>, ) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_diff_tree_to_workdir_with_index( &mut ret, self.raw(), old_tree.map(|s| s.raw()), opts.map(|s| s.raw()) )); Ok(Binding::from_raw(ret)) } } /// Create a PackBuilder pub fn packbuilder(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_packbuilder_new(&mut ret, self.raw())); Ok(Binding::from_raw(ret)) } } /// Save the local modifications to a new stash. pub fn stash_save( &mut self, stasher: &Signature<'_>, message: &str, flags: Option, ) -> Result { self.stash_save2(stasher, Some(message), flags) } /// Save the local modifications to a new stash. 
/// unlike `stash_save` it allows to pass a null `message` pub fn stash_save2( &mut self, stasher: &Signature<'_>, message: Option<&str>, flags: Option, ) -> Result { unsafe { let mut raw_oid = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; let message = crate::opt_cstr(message)?; let flags = flags.unwrap_or_else(StashFlags::empty); try_call!(raw::git_stash_save( &mut raw_oid, self.raw(), stasher.raw(), message, flags.bits() as c_uint )); Ok(Binding::from_raw(&raw_oid as *const _)) } } /// Apply a single stashed state from the stash list. pub fn stash_apply( &mut self, index: usize, opts: Option<&mut StashApplyOptions<'_>>, ) -> Result<(), Error> { unsafe { let opts = opts.map(|opts| opts.raw()); try_call!(raw::git_stash_apply(self.raw(), index, opts)); Ok(()) } } /// Loop over all the stashed states and issue a callback for each one. /// /// Return `true` to continue iterating or `false` to stop. pub fn stash_foreach(&mut self, mut callback: C) -> Result<(), Error> where C: FnMut(usize, &str, &Oid) -> bool, { unsafe { let mut data = StashCbData { callback: &mut callback, }; let cb: raw::git_stash_cb = Some(stash_cb); try_call!(raw::git_stash_foreach( self.raw(), cb, &mut data as *mut _ as *mut _ )); Ok(()) } } /// Remove a single stashed state from the stash list. pub fn stash_drop(&mut self, index: usize) -> Result<(), Error> { unsafe { try_call!(raw::git_stash_drop(self.raw(), index)); Ok(()) } } /// Apply a single stashed state from the stash list and remove it from the list if successful. pub fn stash_pop( &mut self, index: usize, opts: Option<&mut StashApplyOptions<'_>>, ) -> Result<(), Error> { unsafe { let opts = opts.map(|opts| opts.raw()); try_call!(raw::git_stash_pop(self.raw(), index, opts)); Ok(()) } } /// Add ignore rules for a repository. /// /// The format of the rules is the same one of the .gitignore file. pub fn add_ignore_rule(&self, rules: &str) -> Result<(), Error> { let rules = CString::new(rules)?; unsafe { try_call!(raw::git_ignore_add_rule(self.raw, rules)); } Ok(()) } /// Clear ignore rules that were explicitly added. pub fn clear_ignore_rules(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_ignore_clear_internal_rules(self.raw)); } Ok(()) } /// Test if the ignore rules apply to a given path. pub fn is_path_ignored>(&self, path: P) -> Result { let path = util::cstring_to_repo_path(path.as_ref())?; let mut ignored: c_int = 0; unsafe { try_call!(raw::git_ignore_path_is_ignored( &mut ignored, self.raw, path )); } Ok(ignored == 1) } /// Perform a cherrypick pub fn cherrypick( &self, commit: &Commit<'_>, options: Option<&mut CherrypickOptions<'_>>, ) -> Result<(), Error> { let raw_opts = options.map(|o| o.raw()); let ptr_raw_opts = match raw_opts.as_ref() { Some(v) => v, None => std::ptr::null(), }; unsafe { try_call!(raw::git_cherrypick(self.raw(), commit.raw(), ptr_raw_opts)); Ok(()) } } /// Create an index of uncommitted changes, representing the result of /// cherry-picking. 
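///
/// # Example
///
/// A minimal sketch; the branch name is a placeholder and the commit being
/// picked is assumed not to be a merge commit (hence a `mainline` of 0).
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/repo")?;
/// let to_pick = repo.find_commit(repo.refname_to_id("refs/heads/topic")?)?;
/// let ours = repo.head()?.peel_to_commit()?;
/// let index = repo.cherrypick_commit(&to_pick, &ours, 0, None)?;
/// if index.has_conflicts() {
///     println!("cherry-pick would produce conflicts");
/// }
/// # Ok::<(), git2::Error>(())
/// ```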
pub fn cherrypick_commit( &self, cherrypick_commit: &Commit<'_>, our_commit: &Commit<'_>, mainline: u32, options: Option<&MergeOptions>, ) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_cherrypick_commit( &mut ret, self.raw(), cherrypick_commit.raw(), our_commit.raw(), mainline, options.map(|o| o.raw()) )); Ok(Binding::from_raw(ret)) } } /// Find the remote name of a remote-tracking branch pub fn branch_remote_name(&self, refname: &str) -> Result { let refname = CString::new(refname)?; unsafe { let buf = Buf::new(); try_call!(raw::git_branch_remote_name(buf.raw(), self.raw, refname)); Ok(buf) } } /// Retrieves the name of the reference supporting the remote tracking branch, /// given the name of a local branch reference. pub fn branch_upstream_name(&self, refname: &str) -> Result { let refname = CString::new(refname)?; unsafe { let buf = Buf::new(); try_call!(raw::git_branch_upstream_name(buf.raw(), self.raw, refname)); Ok(buf) } } /// Retrieve the name of the upstream remote of a local branch. pub fn branch_upstream_remote(&self, refname: &str) -> Result { let refname = CString::new(refname)?; unsafe { let buf = Buf::new(); try_call!(raw::git_branch_upstream_remote( buf.raw(), self.raw, refname )); Ok(buf) } } /// Apply a Diff to the given repo, making changes directly in the working directory, the index, or both. pub fn apply( &self, diff: &Diff<'_>, location: ApplyLocation, options: Option<&mut ApplyOptions<'_>>, ) -> Result<(), Error> { unsafe { try_call!(raw::git_apply( self.raw, diff.raw(), location.raw(), options.map(|s| s.raw()).unwrap_or(ptr::null()) )); Ok(()) } } /// Apply a Diff to the provided tree, and return the resulting Index. pub fn apply_to_tree( &self, tree: &Tree<'_>, diff: &Diff<'_>, options: Option<&mut ApplyOptions<'_>>, ) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_apply_to_tree( &mut ret, self.raw, tree.raw(), diff.raw(), options.map(|s| s.raw()).unwrap_or(ptr::null()) )); Ok(Binding::from_raw(ret)) } } /// Reverts the given commit, producing changes in the index and working directory. pub fn revert( &self, commit: &Commit<'_>, options: Option<&mut RevertOptions<'_>>, ) -> Result<(), Error> { let raw_opts = options.map(|o| o.raw()); let ptr_raw_opts = match raw_opts.as_ref() { Some(v) => v, None => 0 as *const _, }; unsafe { try_call!(raw::git_revert(self.raw(), commit.raw(), ptr_raw_opts)); Ok(()) } } /// Reverts the given commit against the given "our" commit, /// producing an index that reflects the result of the revert. pub fn revert_commit( &self, revert_commit: &Commit<'_>, our_commit: &Commit<'_>, mainline: u32, options: Option<&MergeOptions>, ) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_revert_commit( &mut ret, self.raw(), revert_commit.raw(), our_commit.raw(), mainline, options.map(|o| o.raw()) )); Ok(Binding::from_raw(ret)) } } /// Lists all the worktrees for the repository pub fn worktrees(&self) -> Result { let mut arr = raw::git_strarray { strings: ptr::null_mut(), count: 0, }; unsafe { try_call!(raw::git_worktree_list(&mut arr, self.raw)); Ok(Binding::from_raw(arr)) } } /// Opens a worktree by name for the given repository /// /// This can open any worktree that the worktrees method returns. 
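///
/// # Example
///
/// A minimal sketch that looks up every worktree reported by `worktrees()`;
/// the repository path is a placeholder.
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/repo")?;
/// let names = repo.worktrees()?;
/// for name in names.iter().flatten() {
///     let worktree = repo.find_worktree(name)?;
///     println!("{} -> {}", name, worktree.path().display());
/// }
/// # Ok::<(), git2::Error>(())
/// ```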
pub fn find_worktree(&self, name: &str) -> Result { let mut raw = ptr::null_mut(); let raw_name = CString::new(name)?; unsafe { try_call!(raw::git_worktree_lookup(&mut raw, self.raw, raw_name)); Ok(Binding::from_raw(raw)) } } /// Creates a new worktree for the repository pub fn worktree<'a>( &'a self, name: &str, path: &Path, opts: Option<&WorktreeAddOptions<'a>>, ) -> Result { let mut raw = ptr::null_mut(); let raw_name = CString::new(name)?; let raw_path = path.into_c_string()?; unsafe { try_call!(raw::git_worktree_add( &mut raw, self.raw, raw_name, raw_path, opts.map(|o| o.raw()) )); Ok(Binding::from_raw(raw)) } } /// Create a new transaction pub fn transaction<'a>(&'a self) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_transaction_new(&mut raw, self.raw)); Ok(Binding::from_raw(raw)) } } /// Gets this repository's mailmap. pub fn mailmap(&self) -> Result { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_mailmap_from_repository(&mut ret, self.raw)); Ok(Binding::from_raw(ret)) } } /// If a merge is in progress, invoke 'callback' for each commit ID in the /// MERGE_HEAD file. pub fn mergehead_foreach(&mut self, mut callback: C) -> Result<(), Error> where C: FnMut(&Oid) -> bool, { unsafe { let mut data = MergeheadForeachCbData { callback: &mut callback, }; let cb: raw::git_repository_mergehead_foreach_cb = Some(mergehead_foreach_cb); try_call!(raw::git_repository_mergehead_foreach( self.raw(), cb, &mut data as *mut _ as *mut _ )); Ok(()) } } /// Invoke 'callback' for each entry in the given FETCH_HEAD file. /// /// `callback` will be called with with following arguments: /// /// - `&str`: the reference name /// - `&[u8]`: the remote url /// - `&Oid`: the reference target OID /// - `bool`: was the reference the result of a merge pub fn fetchhead_foreach(&self, mut callback: C) -> Result<(), Error> where C: FnMut(&str, &[u8], &Oid, bool) -> bool, { unsafe { let mut data = FetchheadForeachCbData { callback: &mut callback, }; let cb: raw::git_repository_fetchhead_foreach_cb = Some(fetchhead_foreach_cb); try_call!(raw::git_repository_fetchhead_foreach( self.raw(), cb, &mut data as *mut _ as *mut _ )); Ok(()) } } } impl Binding for Repository { type Raw = *mut raw::git_repository; unsafe fn from_raw(ptr: *mut raw::git_repository) -> Repository { Repository { raw: ptr } } fn raw(&self) -> *mut raw::git_repository { self.raw } } impl Drop for Repository { fn drop(&mut self) { unsafe { raw::git_repository_free(self.raw) } } } impl RepositoryInitOptions { /// Creates a default set of initialization options. /// /// By default this will set flags for creating all necessary directories /// and initializing a directory from the user-configured templates path. pub fn new() -> RepositoryInitOptions { RepositoryInitOptions { flags: raw::GIT_REPOSITORY_INIT_MKDIR as u32 | raw::GIT_REPOSITORY_INIT_MKPATH as u32 | raw::GIT_REPOSITORY_INIT_EXTERNAL_TEMPLATE as u32, mode: 0, workdir_path: None, description: None, template_path: None, initial_head: None, origin_url: None, } } /// Create a bare repository with no working directory. /// /// Defaults to false. pub fn bare(&mut self, bare: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_BARE, bare) } /// Return an error if the repository path appears to already be a git /// repository. /// /// Defaults to false. 
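///
/// # Example
///
/// A minimal sketch; the target path is a placeholder.
///
/// ```no_run
/// use git2::{Repository, RepositoryInitOptions};
///
/// let mut opts = RepositoryInitOptions::new();
/// // Error out rather than "re-initializing" an existing repository.
/// opts.no_reinit(true);
/// let _repo = Repository::init_opts("/path/to/new/repo", &opts)?;
/// # Ok::<(), git2::Error>(())
/// ```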
pub fn no_reinit(&mut self, enabled: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_NO_REINIT, enabled) } /// Normally a '/.git/' will be appended to the repo path for non-bare repos /// (if it is not already there), but passing this flag prevents that /// behavior. /// /// Defaults to false. pub fn no_dotgit_dir(&mut self, enabled: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_NO_DOTGIT_DIR, enabled) } /// Make the repo path (and workdir path) as needed. The ".git" directory /// will always be created regardless of this flag. /// /// Defaults to true. pub fn mkdir(&mut self, enabled: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_MKDIR, enabled) } /// Recursively make all components of the repo and workdir path as /// necessary. /// /// Defaults to true. pub fn mkpath(&mut self, enabled: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_MKPATH, enabled) } /// Set to one of the `RepositoryInit` constants, or a custom value. pub fn mode(&mut self, mode: RepositoryInitMode) -> &mut RepositoryInitOptions { self.mode = mode.bits(); self } /// Enable or disable using external templates. /// /// If enabled, then the `template_path` option will be queried first, then /// `init.templatedir` from the global config, and finally /// `/usr/share/git-core-templates` will be used (if it exists). /// /// Defaults to true. pub fn external_template(&mut self, enabled: bool) -> &mut RepositoryInitOptions { self.flag(raw::GIT_REPOSITORY_INIT_EXTERNAL_TEMPLATE, enabled) } fn flag( &mut self, flag: raw::git_repository_init_flag_t, on: bool, ) -> &mut RepositoryInitOptions { if on { self.flags |= flag as u32; } else { self.flags &= !(flag as u32); } self } /// The path to the working directory. /// /// If this is a relative path it will be evaulated relative to the repo /// path. If this is not the "natural" working directory, a .git gitlink /// file will be created here linking to the repo path. pub fn workdir_path(&mut self, path: &Path) -> &mut RepositoryInitOptions { // Normal file path OK (does not need Windows conversion). self.workdir_path = Some(path.into_c_string().unwrap()); self } /// If set, this will be used to initialize the "description" file in the /// repository instead of using the template content. pub fn description(&mut self, desc: &str) -> &mut RepositoryInitOptions { self.description = Some(CString::new(desc).unwrap()); self } /// When the `external_template` option is set, this is the first location /// to check for the template directory. /// /// If this is not configured, then the default locations will be searched /// instead. pub fn template_path(&mut self, path: &Path) -> &mut RepositoryInitOptions { // Normal file path OK (does not need Windows conversion). self.template_path = Some(path.into_c_string().unwrap()); self } /// The name of the head to point HEAD at. /// /// If not configured, this will be taken from your git configuration. /// If this begins with `refs/` it will be used verbatim; /// otherwise `refs/heads/` will be prefixed pub fn initial_head(&mut self, head: &str) -> &mut RepositoryInitOptions { self.initial_head = Some(CString::new(head).unwrap()); self } /// If set, then after the rest of the repository initialization is /// completed an `origin` remote will be added pointing to this URL. 
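///
/// # Example
///
/// A minimal sketch; the path, head name, and URL are placeholders.
///
/// ```no_run
/// use git2::{Repository, RepositoryInitOptions};
///
/// let mut opts = RepositoryInitOptions::new();
/// opts.initial_head("main")
///     .origin_url("https://example.com/example/repo.git");
/// let repo = Repository::init_opts("/path/to/new/repo", &opts)?;
/// assert!(repo.find_remote("origin").is_ok());
/// # Ok::<(), git2::Error>(())
/// ```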
pub fn origin_url(&mut self, url: &str) -> &mut RepositoryInitOptions { self.origin_url = Some(CString::new(url).unwrap()); self } /// Creates a set of raw init options to be used with /// `git_repository_init_ext`. /// /// This method is unsafe as the returned value may have pointers to the /// interior of this structure. pub unsafe fn raw(&self) -> raw::git_repository_init_options { let mut opts = mem::zeroed(); assert_eq!( raw::git_repository_init_init_options( &mut opts, raw::GIT_REPOSITORY_INIT_OPTIONS_VERSION ), 0 ); opts.flags = self.flags; opts.mode = self.mode; opts.workdir_path = crate::call::convert(&self.workdir_path); opts.description = crate::call::convert(&self.description); opts.template_path = crate::call::convert(&self.template_path); opts.initial_head = crate::call::convert(&self.initial_head); opts.origin_url = crate::call::convert(&self.origin_url); opts } } #[cfg(test)] mod tests { use crate::build::CheckoutBuilder; use crate::CherrypickOptions; use crate::{ ObjectType, Oid, Repository, ResetType, Signature, SubmoduleIgnore, SubmoduleUpdate, }; use std::ffi::OsStr; use std::fs; use std::path::Path; use tempfile::TempDir; #[test] fn smoke_init() { let td = TempDir::new().unwrap(); let path = td.path(); let repo = Repository::init(path).unwrap(); assert!(!repo.is_bare()); } #[test] fn smoke_init_bare() { let td = TempDir::new().unwrap(); let path = td.path(); let repo = Repository::init_bare(path).unwrap(); assert!(repo.is_bare()); assert!(repo.namespace().is_none()); } #[test] fn smoke_open() { let td = TempDir::new().unwrap(); let path = td.path(); Repository::init(td.path()).unwrap(); let repo = Repository::open(path).unwrap(); assert!(!repo.is_bare()); assert!(!repo.is_shallow()); assert!(repo.is_empty().unwrap()); assert_eq!( crate::test::realpath(&repo.path()).unwrap(), crate::test::realpath(&td.path().join(".git/")).unwrap() ); assert_eq!(repo.state(), crate::RepositoryState::Clean); } #[test] fn smoke_open_bare() { let td = TempDir::new().unwrap(); let path = td.path(); Repository::init_bare(td.path()).unwrap(); let repo = Repository::open(path).unwrap(); assert!(repo.is_bare()); assert_eq!( crate::test::realpath(&repo.path()).unwrap(), crate::test::realpath(&td.path().join("")).unwrap() ); } #[test] fn smoke_checkout() { let (_td, repo) = crate::test::repo_init(); repo.checkout_head(None).unwrap(); } #[test] fn smoke_revparse() { let (_td, repo) = crate::test::repo_init(); let rev = repo.revparse("HEAD").unwrap(); assert!(rev.to().is_none()); let from = rev.from().unwrap(); assert!(rev.from().is_some()); assert_eq!(repo.revparse_single("HEAD").unwrap().id(), from.id()); let obj = repo.find_object(from.id(), None).unwrap().clone(); obj.peel(ObjectType::Any).unwrap(); obj.short_id().unwrap(); repo.reset(&obj, ResetType::Hard, None).unwrap(); let mut opts = CheckoutBuilder::new(); t!(repo.reset(&obj, ResetType::Soft, Some(&mut opts))); } #[test] fn makes_dirs() { let td = TempDir::new().unwrap(); Repository::init(&td.path().join("a/b/c/d")).unwrap(); } #[test] fn smoke_discover() { let td = TempDir::new().unwrap(); let subdir = td.path().join("subdi"); fs::create_dir(&subdir).unwrap(); Repository::init_bare(td.path()).unwrap(); let repo = Repository::discover(&subdir).unwrap(); assert_eq!( crate::test::realpath(&repo.path()).unwrap(), crate::test::realpath(&td.path().join("")).unwrap() ); } #[test] fn smoke_open_ext() { let td = TempDir::new().unwrap(); let subdir = td.path().join("subdir"); fs::create_dir(&subdir).unwrap(); Repository::init(td.path()).unwrap(); 
let repo = Repository::open_ext( &subdir, crate::RepositoryOpenFlags::empty(), &[] as &[&OsStr], ) .unwrap(); assert!(!repo.is_bare()); assert_eq!( crate::test::realpath(&repo.path()).unwrap(), crate::test::realpath(&td.path().join(".git")).unwrap() ); let repo = Repository::open_ext(&subdir, crate::RepositoryOpenFlags::BARE, &[] as &[&OsStr]) .unwrap(); assert!(repo.is_bare()); assert_eq!( crate::test::realpath(&repo.path()).unwrap(), crate::test::realpath(&td.path().join(".git")).unwrap() ); let err = Repository::open_ext( &subdir, crate::RepositoryOpenFlags::NO_SEARCH, &[] as &[&OsStr], ) .err() .unwrap(); assert_eq!(err.code(), crate::ErrorCode::NotFound); assert!( Repository::open_ext(&subdir, crate::RepositoryOpenFlags::empty(), &[&subdir]).is_ok() ); } fn graph_repo_init() -> (TempDir, Repository) { let (_td, repo) = crate::test::repo_init(); { let head = repo.head().unwrap().target().unwrap(); let head = repo.find_commit(head).unwrap(); let mut index = repo.index().unwrap(); let id = index.write_tree().unwrap(); let tree = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); repo.commit(Some("HEAD"), &sig, &sig, "second", &tree, &[&head]) .unwrap(); } (_td, repo) } #[test] fn smoke_graph_ahead_behind() { let (_td, repo) = graph_repo_init(); let head = repo.head().unwrap().target().unwrap(); let head = repo.find_commit(head).unwrap(); let head_id = head.id(); let head_parent_id = head.parent(0).unwrap().id(); let (ahead, behind) = repo.graph_ahead_behind(head_id, head_parent_id).unwrap(); assert_eq!(ahead, 1); assert_eq!(behind, 0); let (ahead, behind) = repo.graph_ahead_behind(head_parent_id, head_id).unwrap(); assert_eq!(ahead, 0); assert_eq!(behind, 1); } #[test] fn smoke_graph_descendant_of() { let (_td, repo) = graph_repo_init(); let head = repo.head().unwrap().target().unwrap(); let head = repo.find_commit(head).unwrap(); let head_id = head.id(); let head_parent_id = head.parent(0).unwrap().id(); assert!(repo.graph_descendant_of(head_id, head_parent_id).unwrap()); assert!(!repo.graph_descendant_of(head_parent_id, head_id).unwrap()); } #[test] fn smoke_reference_has_log_ensure_log() { let (_td, repo) = crate::test::repo_init(); assert_eq!(repo.reference_has_log("HEAD").unwrap(), true); assert_eq!(repo.reference_has_log("refs/heads/main").unwrap(), true); assert_eq!(repo.reference_has_log("NOT_HEAD").unwrap(), false); let main_oid = repo.revparse_single("main").unwrap().id(); assert!(repo .reference("NOT_HEAD", main_oid, false, "creating a new branch") .is_ok()); assert_eq!(repo.reference_has_log("NOT_HEAD").unwrap(), false); assert!(repo.reference_ensure_log("NOT_HEAD").is_ok()); assert_eq!(repo.reference_has_log("NOT_HEAD").unwrap(), true); } #[test] fn smoke_set_head() { let (_td, repo) = crate::test::repo_init(); assert!(repo.set_head("refs/heads/does-not-exist").is_ok()); assert!(repo.head().is_err()); assert!(repo.set_head("refs/heads/main").is_ok()); assert!(repo.head().is_ok()); assert!(repo.set_head("*").is_err()); } #[test] fn smoke_set_head_detached() { let (_td, repo) = crate::test::repo_init(); let void_oid = Oid::from_bytes(b"00000000000000000000").unwrap(); assert!(repo.set_head_detached(void_oid).is_err()); let main_oid = repo.revparse_single("main").unwrap().id(); assert!(repo.set_head_detached(main_oid).is_ok()); assert_eq!(repo.head().unwrap().target().unwrap(), main_oid); } /// create the following: /// /---o4 /// /---o3 /// o1---o2 #[test] fn smoke_merge_base() { let (_td, repo) = graph_repo_init(); let sig = repo.signature().unwrap(); // let 
oid1 = head let oid1 = repo.head().unwrap().target().unwrap(); let commit1 = repo.find_commit(oid1).unwrap(); println!("created oid1 {:?}", oid1); repo.branch("branch_a", &commit1, true).unwrap(); repo.branch("branch_b", &commit1, true).unwrap(); repo.branch("branch_c", &commit1, true).unwrap(); // create commit oid2 on branch_a let mut index = repo.index().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_a"); println!("using path {:?}", p); fs::File::create(&p).unwrap(); index.add_path(Path::new("file_a")).unwrap(); let id_a = index.write_tree().unwrap(); let tree_a = repo.find_tree(id_a).unwrap(); let oid2 = repo .commit( Some("refs/heads/branch_a"), &sig, &sig, "commit 2", &tree_a, &[&commit1], ) .unwrap(); repo.find_commit(oid2).unwrap(); println!("created oid2 {:?}", oid2); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); // create commit oid3 on branch_b let mut index = repo.index().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_b"); fs::File::create(&p).unwrap(); index.add_path(Path::new("file_b")).unwrap(); let id_b = index.write_tree().unwrap(); let tree_b = repo.find_tree(id_b).unwrap(); let oid3 = repo .commit( Some("refs/heads/branch_b"), &sig, &sig, "commit 3", &tree_b, &[&commit1], ) .unwrap(); repo.find_commit(oid3).unwrap(); println!("created oid3 {:?}", oid3); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); // create commit oid4 on branch_c let mut index = repo.index().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_c"); fs::File::create(&p).unwrap(); index.add_path(Path::new("file_c")).unwrap(); let id_c = index.write_tree().unwrap(); let tree_c = repo.find_tree(id_c).unwrap(); let oid4 = repo .commit( Some("refs/heads/branch_c"), &sig, &sig, "commit 3", &tree_c, &[&commit1], ) .unwrap(); repo.find_commit(oid4).unwrap(); println!("created oid4 {:?}", oid4); // the merge base of (oid2,oid3) should be oid1 let merge_base = repo.merge_base(oid2, oid3).unwrap(); assert_eq!(merge_base, oid1); // the merge base of (oid2,oid3,oid4) should be oid1 let merge_base = repo.merge_base_many(&[oid2, oid3, oid4]).unwrap(); assert_eq!(merge_base, oid1); } /// create an octopus: /// /---o2-o4 /// o1 X /// \---o3-o5 /// and checks that the merge bases of (o4,o5) are (o2,o3) #[test] fn smoke_merge_bases() { let (_td, repo) = graph_repo_init(); let sig = repo.signature().unwrap(); // let oid1 = head let oid1 = repo.head().unwrap().target().unwrap(); let commit1 = repo.find_commit(oid1).unwrap(); println!("created oid1 {:?}", oid1); repo.branch("branch_a", &commit1, true).unwrap(); repo.branch("branch_b", &commit1, true).unwrap(); // create commit oid2 on branchA let mut index = repo.index().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_a"); println!("using path {:?}", p); fs::File::create(&p).unwrap(); index.add_path(Path::new("file_a")).unwrap(); let id_a = index.write_tree().unwrap(); let tree_a = repo.find_tree(id_a).unwrap(); let oid2 = repo .commit( Some("refs/heads/branch_a"), &sig, &sig, "commit 2", &tree_a, &[&commit1], ) .unwrap(); let commit2 = repo.find_commit(oid2).unwrap(); println!("created oid2 {:?}", oid2); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); // create commit oid3 on branchB let mut index = repo.index().unwrap(); let p = Path::new(repo.workdir().unwrap()).join("file_b"); fs::File::create(&p).unwrap(); index.add_path(Path::new("file_b")).unwrap(); let id_b = index.write_tree().unwrap(); let tree_b = repo.find_tree(id_b).unwrap(); let oid3 = repo .commit( 
Some("refs/heads/branch_b"), &sig, &sig, "commit 3", &tree_b, &[&commit1], ) .unwrap(); let commit3 = repo.find_commit(oid3).unwrap(); println!("created oid3 {:?}", oid3); // create merge commit oid4 on branchA with parents oid2 and oid3 //let mut index4 = repo.merge_commits(&commit2, &commit3, None).unwrap(); repo.set_head("refs/heads/branch_a").unwrap(); repo.checkout_head(None).unwrap(); let oid4 = repo .commit( Some("refs/heads/branch_a"), &sig, &sig, "commit 4", &tree_a, &[&commit2, &commit3], ) .unwrap(); //index4.write_tree_to(&repo).unwrap(); println!("created oid4 {:?}", oid4); // create merge commit oid5 on branchB with parents oid2 and oid3 //let mut index5 = repo.merge_commits(&commit3, &commit2, None).unwrap(); repo.set_head("refs/heads/branch_b").unwrap(); repo.checkout_head(None).unwrap(); let oid5 = repo .commit( Some("refs/heads/branch_b"), &sig, &sig, "commit 5", &tree_a, &[&commit3, &commit2], ) .unwrap(); //index5.write_tree_to(&repo).unwrap(); println!("created oid5 {:?}", oid5); // merge bases of (oid4,oid5) should be (oid2,oid3) let merge_bases = repo.merge_bases(oid4, oid5).unwrap(); let mut found_oid2 = false; let mut found_oid3 = false; for mg in merge_bases.iter() { println!("found merge base {:?}", mg); if mg == &oid2 { found_oid2 = true; } else if mg == &oid3 { found_oid3 = true; } else { assert!(false); } } assert!(found_oid2); assert!(found_oid3); assert_eq!(merge_bases.len(), 2); // merge bases of (oid4,oid5) should be (oid2,oid3) let merge_bases = repo.merge_bases_many(&[oid4, oid5]).unwrap(); let mut found_oid2 = false; let mut found_oid3 = false; for mg in merge_bases.iter() { println!("found merge base {:?}", mg); if mg == &oid2 { found_oid2 = true; } else if mg == &oid3 { found_oid3 = true; } else { assert!(false); } } assert!(found_oid2); assert!(found_oid3); assert_eq!(merge_bases.len(), 2); } #[test] fn smoke_revparse_ext() { let (_td, repo) = graph_repo_init(); { let short_refname = "main"; let expected_refname = "refs/heads/main"; let (obj, reference) = repo.revparse_ext(short_refname).unwrap(); let expected_obj = repo.revparse_single(expected_refname).unwrap(); assert_eq!(obj.id(), expected_obj.id()); assert_eq!(reference.unwrap().name().unwrap(), expected_refname); } { let missing_refname = "refs/heads/does-not-exist"; assert!(repo.revparse_ext(missing_refname).is_err()); } { let (_obj, reference) = repo.revparse_ext("HEAD^").unwrap(); assert!(reference.is_none()); } } #[test] fn smoke_is_path_ignored() { let (_td, repo) = graph_repo_init(); assert!(!repo.is_path_ignored(Path::new("foo")).unwrap()); let _ = repo.add_ignore_rule("/foo"); assert!(repo.is_path_ignored(Path::new("foo")).unwrap()); if cfg!(windows) { assert!(repo.is_path_ignored(Path::new("foo\\thing")).unwrap()); } let _ = repo.clear_ignore_rules(); assert!(!repo.is_path_ignored(Path::new("foo")).unwrap()); if cfg!(windows) { assert!(!repo.is_path_ignored(Path::new("foo\\thing")).unwrap()); } } #[test] fn smoke_cherrypick() { let (_td, repo) = crate::test::repo_init(); let sig = repo.signature().unwrap(); let oid1 = repo.head().unwrap().target().unwrap(); let commit1 = repo.find_commit(oid1).unwrap(); repo.branch("branch_a", &commit1, true).unwrap(); // Add 2 commits on top of the initial one in branch_a let mut index = repo.index().unwrap(); let p1 = Path::new(repo.workdir().unwrap()).join("file_c"); fs::File::create(&p1).unwrap(); index.add_path(Path::new("file_c")).unwrap(); let id = index.write_tree().unwrap(); let tree_c = repo.find_tree(id).unwrap(); let oid2 = repo .commit( 
Some("refs/heads/branch_a"), &sig, &sig, "commit 2", &tree_c, &[&commit1], ) .unwrap(); let commit2 = repo.find_commit(oid2).unwrap(); println!("created oid2 {:?}", oid2); assert!(p1.exists()); let mut index = repo.index().unwrap(); let p2 = Path::new(repo.workdir().unwrap()).join("file_d"); fs::File::create(&p2).unwrap(); index.add_path(Path::new("file_d")).unwrap(); let id = index.write_tree().unwrap(); let tree_d = repo.find_tree(id).unwrap(); let oid3 = repo .commit( Some("refs/heads/branch_a"), &sig, &sig, "commit 3", &tree_d, &[&commit2], ) .unwrap(); let commit3 = repo.find_commit(oid3).unwrap(); println!("created oid3 {:?}", oid3); assert!(p1.exists()); assert!(p2.exists()); // cherry-pick commit3 on top of commit1 in branch b repo.reset(commit1.as_object(), ResetType::Hard, None) .unwrap(); let mut cherrypick_opts = CherrypickOptions::new(); repo.cherrypick(&commit3, Some(&mut cherrypick_opts)) .unwrap(); let id = repo.index().unwrap().write_tree().unwrap(); let tree_d = repo.find_tree(id).unwrap(); let oid4 = repo .commit(Some("HEAD"), &sig, &sig, "commit 4", &tree_d, &[&commit1]) .unwrap(); let commit4 = repo.find_commit(oid4).unwrap(); // should have file from commit3, but not the file from commit2 assert_eq!(commit4.parent(0).unwrap().id(), commit1.id()); assert!(!p1.exists()); assert!(p2.exists()); } #[test] fn smoke_revert() { let (_td, repo) = crate::test::repo_init(); let foo_file = Path::new(repo.workdir().unwrap()).join("foo"); assert!(!foo_file.exists()); let (oid1, _id) = crate::test::commit(&repo); let commit1 = repo.find_commit(oid1).unwrap(); t!(repo.reset(commit1.as_object(), ResetType::Hard, None)); assert!(foo_file.exists()); repo.revert(&commit1, None).unwrap(); let id = repo.index().unwrap().write_tree().unwrap(); let tree2 = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); repo.commit(Some("HEAD"), &sig, &sig, "commit 1", &tree2, &[&commit1]) .unwrap(); // reverting once removes `foo` file assert!(!foo_file.exists()); let oid2 = repo.head().unwrap().target().unwrap(); let commit2 = repo.find_commit(oid2).unwrap(); repo.revert(&commit2, None).unwrap(); let id = repo.index().unwrap().write_tree().unwrap(); let tree3 = repo.find_tree(id).unwrap(); repo.commit(Some("HEAD"), &sig, &sig, "commit 2", &tree3, &[&commit2]) .unwrap(); // reverting twice restores `foo` file assert!(foo_file.exists()); } #[test] fn smoke_config_write_and_read() { let (td, repo) = crate::test::repo_init(); let mut config = repo.config().unwrap(); config.set_bool("commit.gpgsign", false).unwrap(); let c = fs::read_to_string(td.path().join(".git").join("config")).unwrap(); assert!(c.contains("[commit]")); assert!(c.contains("gpgsign = false")); let config = repo.config().unwrap(); assert!(!config.get_bool("commit.gpgsign").unwrap()); } #[test] fn smoke_merge_analysis_for_ref() -> Result<(), crate::Error> { let (_td, repo) = graph_repo_init(); // Set up this repo state: // * second (their-branch) // * initial (HEAD -> main) // // We expect that their-branch can be fast-forward merged into main. 
// git checkout --detach HEAD let head_commit = repo.head()?.peel_to_commit()?; repo.set_head_detached(head_commit.id())?; // git branch their-branch HEAD let their_branch = repo.branch("their-branch", &head_commit, false)?; // git branch -f main HEAD~ let mut parents_iter = head_commit.parents(); let parent = parents_iter.next().unwrap(); assert!(parents_iter.next().is_none()); let main = repo.branch("main", &parent, true)?; // git checkout main repo.set_head(main.get().name().expect("should be utf-8"))?; let (merge_analysis, _merge_preference) = repo.merge_analysis_for_ref( main.get(), &[&repo.reference_to_annotated_commit(their_branch.get())?], )?; assert!(merge_analysis.contains(crate::MergeAnalysis::ANALYSIS_FASTFORWARD)); Ok(()) } #[test] fn smoke_submodule_set() -> Result<(), crate::Error> { let (td1, _repo) = crate::test::repo_init(); let (td2, mut repo2) = crate::test::repo_init(); let url = crate::test::path2url(td1.path()); let name = "bar"; { let mut s = repo2.submodule(&url, Path::new(name), true)?; fs::remove_dir_all(td2.path().join("bar")).unwrap(); Repository::clone(&url, td2.path().join("bar"))?; s.add_to_index(false)?; s.add_finalize()?; } // update strategy repo2.submodule_set_update(name, SubmoduleUpdate::None)?; assert!(matches!( repo2.find_submodule(name)?.update_strategy(), SubmoduleUpdate::None )); repo2.submodule_set_update(name, SubmoduleUpdate::Rebase)?; assert!(matches!( repo2.find_submodule(name)?.update_strategy(), SubmoduleUpdate::Rebase )); // ignore rule repo2.submodule_set_ignore(name, SubmoduleIgnore::Untracked)?; assert!(matches!( repo2.find_submodule(name)?.ignore_rule(), SubmoduleIgnore::Untracked )); repo2.submodule_set_ignore(name, SubmoduleIgnore::Dirty)?; assert!(matches!( repo2.find_submodule(name)?.ignore_rule(), SubmoduleIgnore::Dirty )); // url repo2.submodule_set_url(name, "fake-url")?; assert_eq!(repo2.find_submodule(name)?.url(), Some("fake-url")); // branch repo2.submodule_set_branch(name, "fake-branch")?; assert_eq!(repo2.find_submodule(name)?.branch(), Some("fake-branch")); Ok(()) } #[test] fn smoke_mailmap_from_repository() { let (_td, repo) = crate::test::repo_init(); let commit = { let head = t!(repo.head()).target().unwrap(); t!(repo.find_commit(head)) }; // This is our baseline for HEAD. let author = commit.author(); let committer = commit.committer(); assert_eq!(author.name(), Some("name")); assert_eq!(author.email(), Some("email")); assert_eq!(committer.name(), Some("name")); assert_eq!(committer.email(), Some("email")); // There is no .mailmap file in the test repo so all signature identities are equal. let mailmap = t!(repo.mailmap()); let mailmapped_author = t!(commit.author_with_mailmap(&mailmap)); let mailmapped_committer = t!(commit.committer_with_mailmap(&mailmap)); assert_eq!(mailmapped_author.name(), author.name()); assert_eq!(mailmapped_author.email(), author.email()); assert_eq!(mailmapped_committer.name(), committer.name()); assert_eq!(mailmapped_committer.email(), committer.email()); let commit = { // - Add a .mailmap file to the repository. // - Commit with a signature identity different from the author's. // - Include entries for both author and committer to prove we call // the right raw functions. 
let mailmap_file = Path::new(".mailmap"); let p = Path::new(repo.workdir().unwrap()).join(&mailmap_file); t!(fs::write( p, r#" Author Name name Committer Name "#, )); let mut index = t!(repo.index()); t!(index.add_path(&mailmap_file)); let id_mailmap = t!(index.write_tree()); let tree_mailmap = t!(repo.find_tree(id_mailmap)); let head = t!(repo.commit( Some("HEAD"), &author, t!(&Signature::now("committer", "committer@email")), "Add mailmap", &tree_mailmap, &[&commit], )); t!(repo.find_commit(head)) }; // Sanity check that we're working with the right commit and that its // author and committer identities differ. let author = commit.author(); let committer = commit.committer(); assert_ne!(author.name(), committer.name()); assert_ne!(author.email(), committer.email()); assert_eq!(author.name(), Some("name")); assert_eq!(author.email(), Some("email")); assert_eq!(committer.name(), Some("committer")); assert_eq!(committer.email(), Some("committer@email")); // Fetch the newly added .mailmap from the repository. let mailmap = t!(repo.mailmap()); let mailmapped_author = t!(commit.author_with_mailmap(&mailmap)); let mailmapped_committer = t!(commit.committer_with_mailmap(&mailmap)); let mm_resolve_author = t!(mailmap.resolve_signature(&author)); let mm_resolve_committer = t!(mailmap.resolve_signature(&committer)); // Mailmap Signature lifetime is independent of Commit lifetime. drop(author); drop(committer); drop(commit); // author_with_mailmap() + committer_with_mailmap() work assert_eq!(mailmapped_author.name(), Some("Author Name")); assert_eq!(mailmapped_author.email(), Some("author.proper@email")); assert_eq!(mailmapped_committer.name(), Some("Committer Name")); assert_eq!(mailmapped_committer.email(), Some("committer.proper@email")); // resolve_signature() works assert_eq!(mm_resolve_author.email(), mailmapped_author.email()); assert_eq!(mm_resolve_committer.email(), mailmapped_committer.email()); } } vendor/git2/src/apply.rs0000664000175000017500000001400014160055207015771 0ustar mwhudsonmwhudson//! git_apply support //! 
see original: use crate::{panic, raw, util::Binding, DiffDelta, DiffHunk}; use libc::c_int; use std::{ffi::c_void, mem}; /// Possible application locations for git_apply /// see #[derive(Copy, Clone, Debug)] pub enum ApplyLocation { /// Apply the patch to the workdir WorkDir, /// Apply the patch to the index Index, /// Apply the patch to both the working directory and the index Both, } impl Binding for ApplyLocation { type Raw = raw::git_apply_location_t; unsafe fn from_raw(raw: raw::git_apply_location_t) -> Self { match raw { raw::GIT_APPLY_LOCATION_WORKDIR => Self::WorkDir, raw::GIT_APPLY_LOCATION_INDEX => Self::Index, raw::GIT_APPLY_LOCATION_BOTH => Self::Both, _ => panic!("Unknown git diff binary kind"), } } fn raw(&self) -> raw::git_apply_location_t { match *self { Self::WorkDir => raw::GIT_APPLY_LOCATION_WORKDIR, Self::Index => raw::GIT_APPLY_LOCATION_INDEX, Self::Both => raw::GIT_APPLY_LOCATION_BOTH, } } } /// Options to specify when applying a diff pub struct ApplyOptions<'cb> { raw: raw::git_apply_options, hunk_cb: Option>>, delta_cb: Option>>, } type HunkCB<'a> = dyn FnMut(Option>) -> bool + 'a; type DeltaCB<'a> = dyn FnMut(Option>) -> bool + 'a; extern "C" fn delta_cb_c(delta: *const raw::git_diff_delta, data: *mut c_void) -> c_int { panic::wrap(|| unsafe { let delta = Binding::from_raw_opt(delta as *mut _); let payload = &mut *(data as *mut ApplyOptions<'_>); let callback = match payload.delta_cb { Some(ref mut c) => c, None => return -1, }; let apply = callback(delta); if apply { 0 } else { 1 } }) .unwrap_or(-1) } extern "C" fn hunk_cb_c(hunk: *const raw::git_diff_hunk, data: *mut c_void) -> c_int { panic::wrap(|| unsafe { let hunk = Binding::from_raw_opt(hunk); let payload = &mut *(data as *mut ApplyOptions<'_>); let callback = match payload.hunk_cb { Some(ref mut c) => c, None => return -1, }; let apply = callback(hunk); if apply { 0 } else { 1 } }) .unwrap_or(-1) } impl<'cb> ApplyOptions<'cb> { /// Creates a new set of empty options (zeroed). pub fn new() -> Self { let mut opts = Self { raw: unsafe { mem::zeroed() }, hunk_cb: None, delta_cb: None, }; assert_eq!( unsafe { raw::git_apply_options_init(&mut opts.raw, raw::GIT_APPLY_OPTIONS_VERSION) }, 0 ); opts } fn flag(&mut self, opt: raw::git_apply_flags_t, val: bool) -> &mut Self { let opt = opt as u32; if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Don't actually make changes, just test that the patch applies. pub fn check(&mut self, check: bool) -> &mut Self { self.flag(raw::GIT_APPLY_CHECK, check) } /// When applying a patch, callback that will be made per hunk. pub fn hunk_callback(&mut self, cb: F) -> &mut Self where F: FnMut(Option>) -> bool + 'cb, { self.hunk_cb = Some(Box::new(cb) as Box>); self.raw.hunk_cb = Some(hunk_cb_c); self.raw.payload = self as *mut _ as *mut _; self } /// When applying a patch, callback that will be made per delta (file). 
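///
/// # Example
///
/// A minimal sketch that skips one (hypothetical) file while applying the
/// index-to-workdir diff onto the index; returning `false` from the callback
/// skips that delta, returning `true` applies it.
///
/// ```no_run
/// use std::path::Path;
/// use git2::{ApplyLocation, ApplyOptions, Repository};
///
/// let repo = Repository::open("/path/to/repo")?;
/// let diff = repo.diff_index_to_workdir(None, None)?;
/// let mut opts = ApplyOptions::new();
/// opts.delta_callback(|delta| {
///     delta.map_or(true, |d| d.new_file().path() != Some(Path::new("generated.rs")))
/// });
/// repo.apply(&diff, ApplyLocation::Index, Some(&mut opts))?;
/// # Ok::<(), git2::Error>(())
/// ```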
pub fn delta_callback(&mut self, cb: F) -> &mut Self where F: FnMut(Option>) -> bool + 'cb, { self.delta_cb = Some(Box::new(cb) as Box>); self.raw.delta_cb = Some(delta_cb_c); self.raw.payload = self as *mut _ as *mut _; self } /// Pointer to a raw git_stash_apply_options pub unsafe fn raw(&mut self) -> *const raw::git_apply_options { &self.raw as *const _ } } #[cfg(test)] mod tests { use super::*; use std::{fs::File, io::Write, path::Path}; #[test] fn smoke_test() { let (_td, repo) = crate::test::repo_init(); let diff = t!(repo.diff_tree_to_workdir(None, None)); let mut count_hunks = 0; let mut count_delta = 0; { let mut opts = ApplyOptions::new(); opts.hunk_callback(|_hunk| { count_hunks += 1; true }); opts.delta_callback(|_delta| { count_delta += 1; true }); t!(repo.apply(&diff, ApplyLocation::Both, Some(&mut opts))); } assert_eq!(count_hunks, 0); assert_eq!(count_delta, 0); } #[test] fn apply_hunks_and_delta() { let file_path = Path::new("foo.txt"); let (td, repo) = crate::test::repo_init(); // create new file t!(t!(File::create(&td.path().join(file_path))).write_all(b"bar")); // stage the new file t!(t!(repo.index()).add_path(file_path)); // now change workdir version t!(t!(File::create(&td.path().join(file_path))).write_all(b"foo\nbar")); let diff = t!(repo.diff_index_to_workdir(None, None)); assert_eq!(diff.deltas().len(), 1); let mut count_hunks = 0; let mut count_delta = 0; { let mut opts = ApplyOptions::new(); opts.hunk_callback(|_hunk| { count_hunks += 1; true }); opts.delta_callback(|_delta| { count_delta += 1; true }); t!(repo.apply(&diff, ApplyLocation::Index, Some(&mut opts))); } assert_eq!(count_delta, 1); assert_eq!(count_hunks, 1); } } vendor/git2/src/remote_callbacks.rs0000664000175000017500000003474114160055207020154 0ustar mwhudsonmwhudsonuse libc::{c_char, c_int, c_uint, c_void, size_t}; use std::ffi::{CStr, CString}; use std::mem; use std::ptr; use std::slice; use std::str; use crate::cert::Cert; use crate::util::Binding; use crate::{ panic, raw, Cred, CredentialType, Error, IndexerProgress, Oid, PackBuilderStage, Progress, }; /// A structure to contain the callbacks which are invoked when a repository is /// being updated or downloaded. /// /// These callbacks are used to manage facilities such as authentication, /// transfer progress, etc. pub struct RemoteCallbacks<'a> { push_progress: Option>>, progress: Option>>, pack_progress: Option>>, credentials: Option>>, sideband_progress: Option>>, update_tips: Option>>, certificate_check: Option>>, push_update_reference: Option>>, } /// Callback used to acquire credentials for when a remote is fetched. /// /// * `url` - the resource for which the credentials are required. /// * `username_from_url` - the username that was embedded in the url, or `None` /// if it was not included. /// * `allowed_types` - a bitmask stating which cred types are ok to return. pub type Credentials<'a> = dyn FnMut(&str, Option<&str>, CredentialType) -> Result + 'a; /// Callback for receiving messages delivered by the transport. /// /// The return value indicates whether the network operation should continue. pub type TransportMessage<'a> = dyn FnMut(&[u8]) -> bool + 'a; /// Callback for whenever a reference is updated locally. pub type UpdateTips<'a> = dyn FnMut(&str, Oid, Oid) -> bool + 'a; /// Callback for a custom certificate check. /// /// The first argument is the certificate receved on the connection. /// Certificates are typically either an SSH or X509 certificate. 
/// /// The second argument is the hostname for the connection is passed as the last /// argument. pub type CertificateCheck<'a> = dyn FnMut(&Cert<'_>, &str) -> bool + 'a; /// Callback for each updated reference on push. /// /// The first argument here is the `refname` of the reference, and the second is /// the status message sent by a server. If the status is `Some` then the update /// was rejected by the remote server with a reason why. pub type PushUpdateReference<'a> = dyn FnMut(&str, Option<&str>) -> Result<(), Error> + 'a; /// Callback for push transfer progress /// /// Parameters: /// * current /// * total /// * bytes pub type PushTransferProgress<'a> = dyn FnMut(usize, usize, usize) + 'a; /// Callback for pack progress /// /// Parameters: /// * stage /// * current /// * total pub type PackProgress<'a> = dyn FnMut(PackBuilderStage, usize, usize) + 'a; impl<'a> Default for RemoteCallbacks<'a> { fn default() -> Self { Self::new() } } impl<'a> RemoteCallbacks<'a> { /// Creates a new set of empty callbacks pub fn new() -> RemoteCallbacks<'a> { RemoteCallbacks { credentials: None, progress: None, pack_progress: None, sideband_progress: None, update_tips: None, certificate_check: None, push_update_reference: None, push_progress: None, } } /// The callback through which to fetch credentials if required. /// /// # Example /// /// Prepare a callback to authenticate using the `$HOME/.ssh/id_rsa` SSH key, and /// extracting the username from the URL (i.e. git@github.com:rust-lang/git2-rs.git): /// /// ```no_run /// use git2::{Cred, RemoteCallbacks}; /// use std::env; /// /// let mut callbacks = RemoteCallbacks::new(); /// callbacks.credentials(|_url, username_from_url, _allowed_types| { /// Cred::ssh_key( /// username_from_url.unwrap(), /// None, /// std::path::Path::new(&format!("{}/.ssh/id_rsa", env::var("HOME").unwrap())), /// None, /// ) /// }); /// ``` pub fn credentials(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(&str, Option<&str>, CredentialType) -> Result + 'a, { self.credentials = Some(Box::new(cb) as Box>); self } /// The callback through which progress is monitored. pub fn transfer_progress(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(Progress<'_>) -> bool + 'a, { self.progress = Some(Box::new(cb) as Box>); self } /// Textual progress from the remote. /// /// Text sent over the progress side-band will be passed to this function /// (this is the 'counting objects' output). pub fn sideband_progress(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(&[u8]) -> bool + 'a, { self.sideband_progress = Some(Box::new(cb) as Box>); self } /// Each time a reference is updated locally, the callback will be called /// with information about it. pub fn update_tips(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(&str, Oid, Oid) -> bool + 'a, { self.update_tips = Some(Box::new(cb) as Box>); self } /// If certificate verification fails, then this callback will be invoked to /// let the caller make the final decision of whether to allow the /// connection to proceed. pub fn certificate_check(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(&Cert<'_>, &str) -> bool + 'a, { self.certificate_check = Some(Box::new(cb) as Box>); self } /// Set a callback to get invoked for each updated reference on a push. /// /// The first argument to the callback is the name of the reference and the /// second is a status message sent by the server. If the status is `Some` /// then the push was rejected. 
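///
/// # Example
///
/// A minimal sketch that turns a server-side rejection into an error; the
/// error message wording is only an illustration.
///
/// ```no_run
/// use git2::{Error, RemoteCallbacks};
///
/// let mut callbacks = RemoteCallbacks::new();
/// callbacks.push_update_reference(|refname, status| {
///     match status {
///         // A status message means the server rejected this reference.
///         Some(msg) => Err(Error::from_str(&format!("push to {} rejected: {}", refname, msg))),
///         None => Ok(()),
///     }
/// });
/// ```
///
/// The configured callbacks are then typically handed to
/// `PushOptions::remote_callbacks` before calling `Remote::push`.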
pub fn push_update_reference(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(&str, Option<&str>) -> Result<(), Error> + 'a, { self.push_update_reference = Some(Box::new(cb) as Box>); self } /// The callback through which progress of push transfer is monitored pub fn push_transfer_progress(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(usize, usize, usize) + 'a, { self.push_progress = Some(Box::new(cb) as Box>); self } /// Function to call with progress information during pack building. /// Be aware that this is called inline with pack building operations, /// so performance may be affected. pub fn pack_progress(&mut self, cb: F) -> &mut RemoteCallbacks<'a> where F: FnMut(PackBuilderStage, usize, usize) + 'a, { self.pack_progress = Some(Box::new(cb) as Box>); self } } impl<'a> Binding for RemoteCallbacks<'a> { type Raw = raw::git_remote_callbacks; unsafe fn from_raw(_raw: raw::git_remote_callbacks) -> RemoteCallbacks<'a> { panic!("unimplemented"); } fn raw(&self) -> raw::git_remote_callbacks { unsafe { let mut callbacks: raw::git_remote_callbacks = mem::zeroed(); assert_eq!( raw::git_remote_init_callbacks(&mut callbacks, raw::GIT_REMOTE_CALLBACKS_VERSION), 0 ); if self.progress.is_some() { callbacks.transfer_progress = Some(transfer_progress_cb); } if self.credentials.is_some() { callbacks.credentials = Some(credentials_cb); } if self.sideband_progress.is_some() { callbacks.sideband_progress = Some(sideband_progress_cb); } if self.certificate_check.is_some() { callbacks.certificate_check = Some(certificate_check_cb); } if self.push_update_reference.is_some() { callbacks.push_update_reference = Some(push_update_reference_cb); } if self.push_progress.is_some() { callbacks.push_transfer_progress = Some(push_transfer_progress_cb); } if self.pack_progress.is_some() { callbacks.pack_progress = Some(pack_progress_cb); } if self.update_tips.is_some() { let f: extern "C" fn( *const c_char, *const raw::git_oid, *const raw::git_oid, *mut c_void, ) -> c_int = update_tips_cb; callbacks.update_tips = Some(f); } callbacks.payload = self as *const _ as *mut _; callbacks } } } extern "C" fn credentials_cb( ret: *mut *mut raw::git_cred, url: *const c_char, username_from_url: *const c_char, allowed_types: c_uint, payload: *mut c_void, ) -> c_int { unsafe { let ok = panic::wrap(|| { let payload = &mut *(payload as *mut RemoteCallbacks<'_>); let callback = payload .credentials .as_mut() .ok_or(raw::GIT_PASSTHROUGH as c_int)?; *ret = ptr::null_mut(); let url = str::from_utf8(CStr::from_ptr(url).to_bytes()) .map_err(|_| raw::GIT_PASSTHROUGH as c_int)?; let username_from_url = match crate::opt_bytes(&url, username_from_url) { Some(username) => { Some(str::from_utf8(username).map_err(|_| raw::GIT_PASSTHROUGH as c_int)?) 
} None => None, }; let cred_type = CredentialType::from_bits_truncate(allowed_types as u32); callback(url, username_from_url, cred_type).map_err(|e| { let s = CString::new(e.to_string()).unwrap(); raw::git_error_set_str(e.raw_code() as c_int, s.as_ptr()); e.raw_code() as c_int }) }); match ok { Some(Ok(cred)) => { // Turns out it's a memory safety issue if we pass through any // and all credentials into libgit2 if allowed_types & (cred.credtype() as c_uint) != 0 { *ret = cred.unwrap(); 0 } else { raw::GIT_PASSTHROUGH as c_int } } Some(Err(e)) => e, None => -1, } } } extern "C" fn transfer_progress_cb( stats: *const raw::git_indexer_progress, payload: *mut c_void, ) -> c_int { let ok = panic::wrap(|| unsafe { let payload = &mut *(payload as *mut RemoteCallbacks<'_>); let callback = match payload.progress { Some(ref mut c) => c, None => return true, }; let progress = Binding::from_raw(stats); callback(progress) }); if ok == Some(true) { 0 } else { -1 } } extern "C" fn sideband_progress_cb(str: *const c_char, len: c_int, payload: *mut c_void) -> c_int { let ok = panic::wrap(|| unsafe { let payload = &mut *(payload as *mut RemoteCallbacks<'_>); let callback = match payload.sideband_progress { Some(ref mut c) => c, None => return true, }; let buf = slice::from_raw_parts(str as *const u8, len as usize); callback(buf) }); if ok == Some(true) { 0 } else { -1 } } extern "C" fn update_tips_cb( refname: *const c_char, a: *const raw::git_oid, b: *const raw::git_oid, data: *mut c_void, ) -> c_int { let ok = panic::wrap(|| unsafe { let payload = &mut *(data as *mut RemoteCallbacks<'_>); let callback = match payload.update_tips { Some(ref mut c) => c, None => return true, }; let refname = str::from_utf8(CStr::from_ptr(refname).to_bytes()).unwrap(); let a = Binding::from_raw(a); let b = Binding::from_raw(b); callback(refname, a, b) }); if ok == Some(true) { 0 } else { -1 } } extern "C" fn certificate_check_cb( cert: *mut raw::git_cert, _valid: c_int, hostname: *const c_char, data: *mut c_void, ) -> c_int { let ok = panic::wrap(|| unsafe { let payload = &mut *(data as *mut RemoteCallbacks<'_>); let callback = match payload.certificate_check { Some(ref mut c) => c, None => return true, }; let cert = Binding::from_raw(cert); let hostname = str::from_utf8(CStr::from_ptr(hostname).to_bytes()).unwrap(); callback(&cert, hostname) }); if ok == Some(true) { 0 } else { -1 } } extern "C" fn push_update_reference_cb( refname: *const c_char, status: *const c_char, data: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let payload = &mut *(data as *mut RemoteCallbacks<'_>); let callback = match payload.push_update_reference { Some(ref mut c) => c, None => return 0, }; let refname = str::from_utf8(CStr::from_ptr(refname).to_bytes()).unwrap(); let status = if status.is_null() { None } else { Some(str::from_utf8(CStr::from_ptr(status).to_bytes()).unwrap()) }; match callback(refname, status) { Ok(()) => 0, Err(e) => e.raw_code(), } }) .unwrap_or(-1) } extern "C" fn push_transfer_progress_cb( progress: c_uint, total: c_uint, bytes: size_t, data: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let payload = &mut *(data as *mut RemoteCallbacks<'_>); let callback = match payload.push_progress { Some(ref mut c) => c, None => return 0, }; callback(progress as usize, total as usize, bytes as usize); 0 }) .unwrap_or(-1) } extern "C" fn pack_progress_cb( stage: raw::git_packbuilder_stage_t, current: c_uint, total: c_uint, data: *mut c_void, ) -> c_int { panic::wrap(|| unsafe { let payload = &mut *(data as *mut 
RemoteCallbacks<'_>); let callback = match payload.pack_progress { Some(ref mut c) => c, None => return 0, }; let stage = Binding::from_raw(stage); callback(stage, current as usize, total as usize); 0 }) .unwrap_or(-1) } vendor/git2/src/index.rs0000664000175000017500000007471614160055207015777 0ustar mwhudsonmwhudsonuse std::ffi::{CStr, CString}; use std::marker; use std::ops::Range; use std::path::Path; use std::ptr; use std::slice; use libc::{c_char, c_int, c_uint, c_void, size_t}; use crate::util::{self, path_to_repo_path, Binding}; use crate::IntoCString; use crate::{panic, raw, Error, IndexAddOption, IndexTime, Oid, Repository, Tree}; /// A structure to represent a git [index][1] /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-Objects pub struct Index { raw: *mut raw::git_index, } /// An iterator over the entries in an index pub struct IndexEntries<'index> { range: Range, index: &'index Index, } /// An iterator over the conflicting entries in an index pub struct IndexConflicts<'index> { conflict_iter: *mut raw::git_index_conflict_iterator, _marker: marker::PhantomData<&'index Index>, } /// A structure to represent the information returned when a conflict is detected in an index entry pub struct IndexConflict { /// The ancestor index entry of the two conflicting index entries pub ancestor: Option, /// The index entry originating from the user's copy of the repository. /// Its contents conflict with 'their' index entry pub our: Option, /// The index entry originating from the external repository. /// Its contents conflict with 'our' index entry pub their: Option, } /// A callback function to filter index matches. /// /// Used by `Index::{add_all,remove_all,update_all}`. The first argument is the /// path, and the second is the patchspec that matched it. Return 0 to confirm /// the operation on the item, > 0 to skip the item, and < 0 to abort the scan. pub type IndexMatchedPath<'a> = dyn FnMut(&Path, &[u8]) -> i32 + 'a; /// A structure to represent an entry or a file inside of an index. /// /// All fields of an entry are public for modification and inspection. This is /// also how a new index entry is created. #[allow(missing_docs)] pub struct IndexEntry { pub ctime: IndexTime, pub mtime: IndexTime, pub dev: u32, pub ino: u32, pub mode: u32, pub uid: u32, pub gid: u32, pub file_size: u32, pub id: Oid, pub flags: u16, pub flags_extended: u16, /// The path of this index entry as a byte vector. Regardless of the /// current platform, the directory separator is an ASCII forward slash /// (`0x2F`). There are no terminating or internal NUL characters, and no /// trailing slashes. Most of the time, paths will be valid utf-8 — but /// not always. For more information on the path storage format, see /// [these git docs][git-index-docs]. Note that libgit2 will take care of /// handling the prefix compression mentioned there. /// /// [git-index-docs]: https://github.com/git/git/blob/a08a83db2bf27f015bec9a435f6d73e223c21c5e/Documentation/technical/index-format.txt#L107-L124 /// /// You can turn this value into a `std::ffi::CString` with /// `CString::new(&entry.path[..]).unwrap()`. To turn a reference into a /// `&std::path::Path`, see the `bytes2path()` function in the private, /// internal `util` module in this crate’s source code. pub path: Vec, } impl Index { /// Creates a new in-memory index. /// /// This index object cannot be read/written to the filesystem, but may be /// used to perform in-memory index operations. 
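///
/// # Example
///
/// A minimal sketch of creating and inspecting an in-memory index:
///
/// ```no_run
/// use git2::Index;
///
/// let index = Index::new().expect("failed to create in-memory index");
/// assert!(index.is_empty());
/// assert!(index.path().is_none()); // an in-memory index has no on-disk path
/// ```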
pub fn new() -> Result { crate::init(); let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_index_new(&mut raw)); Ok(Binding::from_raw(raw)) } } /// Create a new bare Git index object as a memory representation of the Git /// index file in 'index_path', without a repository to back it. /// /// Since there is no ODB or working directory behind this index, any Index /// methods which rely on these (e.g. add_path) will fail. /// /// If you need an index attached to a repository, use the `index()` method /// on `Repository`. pub fn open(index_path: &Path) -> Result { crate::init(); let mut raw = ptr::null_mut(); // Normal file path OK (does not need Windows conversion). let index_path = index_path.into_c_string()?; unsafe { try_call!(raw::git_index_open(&mut raw, index_path)); Ok(Binding::from_raw(raw)) } } /// Get index on-disk version. /// /// Valid return values are 2, 3, or 4. If 3 is returned, an index /// with version 2 may be written instead, if the extension data in /// version 3 is not necessary. pub fn version(&self) -> u32 { unsafe { raw::git_index_version(self.raw) } } /// Set index on-disk version. /// /// Valid values are 2, 3, or 4. If 2 is given, git_index_write may /// write an index with version 3 instead, if necessary to accurately /// represent the index. pub fn set_version(&mut self, version: u32) -> Result<(), Error> { unsafe { try_call!(raw::git_index_set_version(self.raw, version)); } Ok(()) } /// Add or update an index entry from an in-memory struct /// /// If a previous index entry exists that has the same path and stage as the /// given 'source_entry', it will be replaced. Otherwise, the 'source_entry' /// will be added. pub fn add(&mut self, entry: &IndexEntry) -> Result<(), Error> { let path = CString::new(&entry.path[..])?; // libgit2 encodes the length of the path in the lower bits of the // `flags` entry, so mask those out and recalculate here to ensure we // don't corrupt anything. let mut flags = entry.flags & !raw::GIT_INDEX_ENTRY_NAMEMASK; if entry.path.len() < raw::GIT_INDEX_ENTRY_NAMEMASK as usize { flags |= entry.path.len() as u16; } else { flags |= raw::GIT_INDEX_ENTRY_NAMEMASK; } unsafe { let raw = raw::git_index_entry { dev: entry.dev, ino: entry.ino, mode: entry.mode, uid: entry.uid, gid: entry.gid, file_size: entry.file_size, id: *entry.id.raw(), flags, flags_extended: entry.flags_extended, path: path.as_ptr(), mtime: raw::git_index_time { seconds: entry.mtime.seconds(), nanoseconds: entry.mtime.nanoseconds(), }, ctime: raw::git_index_time { seconds: entry.ctime.seconds(), nanoseconds: entry.ctime.nanoseconds(), }, }; try_call!(raw::git_index_add(self.raw, &raw)); Ok(()) } } /// Add or update an index entry from a buffer in memory /// /// This method will create a blob in the repository that owns the index and /// then add the index entry to the index. The path of the entry represents /// the position of the blob relative to the repository's root folder. /// /// If a previous index entry exists that has the same path as the given /// 'entry', it will be replaced. Otherwise, the 'entry' will be added. /// The id and the file_size of the 'entry' are updated with the real value /// of the blob. /// /// This forces the file to be added to the index, not looking at gitignore /// rules. /// /// If this file currently is the result of a merge conflict, this file will /// no longer be marked as conflicting. The data about the conflict will be /// moved to the "resolve undo" (REUC) section. 
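///
/// # Example
///
/// A hedged sketch mirroring this crate's own tests; the repository path,
/// entry path, and buffer contents are placeholders:
///
/// ```no_run
/// use git2::{IndexEntry, IndexTime, Oid, Repository};
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let mut index = repo.index().expect("cannot get the Index file");
/// // All fields are public; `id` and `file_size` are recomputed from the buffer.
/// let entry = IndexEntry {
///     ctime: IndexTime::new(0, 0),
///     mtime: IndexTime::new(0, 0),
///     dev: 0,
///     ino: 0,
///     mode: 0o100644,
///     uid: 0,
///     gid: 0,
///     file_size: 0,
///     id: Oid::from_bytes(&[0; 20]).unwrap(),
///     flags: 0,
///     flags_extended: 0,
///     path: b"hello.txt".to_vec(),
/// };
/// index
///     .add_frombuffer(&entry, b"hello world\n")
///     .expect("failed to add buffer to index");
/// ```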
pub fn add_frombuffer(&mut self, entry: &IndexEntry, data: &[u8]) -> Result<(), Error> { let path = CString::new(&entry.path[..])?; // libgit2 encodes the length of the path in the lower bits of the // `flags` entry, so mask those out and recalculate here to ensure we // don't corrupt anything. let mut flags = entry.flags & !raw::GIT_INDEX_ENTRY_NAMEMASK; if entry.path.len() < raw::GIT_INDEX_ENTRY_NAMEMASK as usize { flags |= entry.path.len() as u16; } else { flags |= raw::GIT_INDEX_ENTRY_NAMEMASK; } unsafe { let raw = raw::git_index_entry { dev: entry.dev, ino: entry.ino, mode: entry.mode, uid: entry.uid, gid: entry.gid, file_size: entry.file_size, id: *entry.id.raw(), flags, flags_extended: entry.flags_extended, path: path.as_ptr(), mtime: raw::git_index_time { seconds: entry.mtime.seconds(), nanoseconds: entry.mtime.nanoseconds(), }, ctime: raw::git_index_time { seconds: entry.ctime.seconds(), nanoseconds: entry.ctime.nanoseconds(), }, }; let ptr = data.as_ptr() as *const c_void; let len = data.len() as size_t; try_call!(raw::git_index_add_frombuffer(self.raw, &raw, ptr, len)); Ok(()) } } /// Add or update an index entry from a file on disk /// /// The file path must be relative to the repository's working folder and /// must be readable. /// /// This method will fail in bare index instances. /// /// This forces the file to be added to the index, not looking at gitignore /// rules. /// /// If this file currently is the result of a merge conflict, this file will /// no longer be marked as conflicting. The data about the conflict will be /// moved to the "resolve undo" (REUC) section. pub fn add_path(&mut self, path: &Path) -> Result<(), Error> { let posix_path = path_to_repo_path(path)?; unsafe { try_call!(raw::git_index_add_bypath(self.raw, posix_path)); Ok(()) } } /// Add or update index entries matching files in the working directory. /// /// This method will fail in bare index instances. /// /// The `pathspecs` are a list of file names or shell glob patterns that /// will matched against files in the repository's working directory. Each /// file that matches will be added to the index (either updating an /// existing entry or adding a new entry). You can disable glob expansion /// and force exact matching with the `AddDisablePathspecMatch` flag. /// /// Files that are ignored will be skipped (unlike `add_path`). If a file is /// already tracked in the index, then it will be updated even if it is /// ignored. Pass the `AddForce` flag to skip the checking of ignore rules. /// /// To emulate `git add -A` and generate an error if the pathspec contains /// the exact path of an ignored file (when not using `AddForce`), add the /// `AddCheckPathspec` flag. This checks that each entry in `pathspecs` /// that is an exact match to a filename on disk is either not ignored or /// already in the index. If this check fails, the function will return /// an error. /// /// To emulate `git add -A` with the "dry-run" option, just use a callback /// function that always returns a positive value. See below for details. /// /// If any files are currently the result of a merge conflict, those files /// will no longer be marked as conflicting. The data about the conflicts /// will be moved to the "resolve undo" (REUC) section. /// /// If you provide a callback function, it will be invoked on each matching /// item in the working directory immediately before it is added to / /// updated in the index. 
Returning zero will add the item to the index, /// greater than zero will skip the item, and less than zero will abort the /// scan an return an error to the caller. /// /// # Example /// /// Emulate `git add *`: /// /// ```no_run /// use git2::{Index, IndexAddOption, Repository}; /// /// let repo = Repository::open("/path/to/a/repo").expect("failed to open"); /// let mut index = repo.index().expect("cannot get the Index file"); /// index.add_all(["*"].iter(), IndexAddOption::DEFAULT, None); /// index.write(); /// ``` pub fn add_all( &mut self, pathspecs: I, flag: IndexAddOption, mut cb: Option<&mut IndexMatchedPath<'_>>, ) -> Result<(), Error> where T: IntoCString, I: IntoIterator, { let (_a, _b, raw_strarray) = crate::util::iter2cstrs_paths(pathspecs)?; let ptr = cb.as_mut(); let callback = ptr .as_ref() .map(|_| index_matched_path_cb as extern "C" fn(_, _, _) -> _); unsafe { try_call!(raw::git_index_add_all( self.raw, &raw_strarray, flag.bits() as c_uint, callback, ptr.map(|p| p as *mut _).unwrap_or(ptr::null_mut()) as *mut c_void )); } Ok(()) } /// Clear the contents (all the entries) of an index object. /// /// This clears the index object in memory; changes must be explicitly /// written to disk for them to take effect persistently via `write_*`. pub fn clear(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_index_clear(self.raw)); } Ok(()) } /// Get the count of entries currently in the index pub fn len(&self) -> usize { unsafe { raw::git_index_entrycount(&*self.raw) as usize } } /// Return `true` is there is no entry in the index pub fn is_empty(&self) -> bool { self.len() == 0 } /// Get one of the entries in the index by its position. pub fn get(&self, n: usize) -> Option { unsafe { let ptr = raw::git_index_get_byindex(self.raw, n as size_t); if ptr.is_null() { None } else { Some(Binding::from_raw(*ptr)) } } } /// Get an iterator over the entries in this index. pub fn iter(&self) -> IndexEntries<'_> { IndexEntries { range: 0..self.len(), index: self, } } /// Get an iterator over the index entries that have conflicts pub fn conflicts(&self) -> Result, Error> { crate::init(); let mut conflict_iter = ptr::null_mut(); unsafe { try_call!(raw::git_index_conflict_iterator_new( &mut conflict_iter, self.raw )); Ok(Binding::from_raw(conflict_iter)) } } /// Get one of the entries in the index by its path. pub fn get_path(&self, path: &Path, stage: i32) -> Option { let path = path_to_repo_path(path).unwrap(); unsafe { let ptr = call!(raw::git_index_get_bypath(self.raw, path, stage as c_int)); if ptr.is_null() { None } else { Some(Binding::from_raw(*ptr)) } } } /// Does this index have conflicts? /// /// Returns `true` if the index contains conflicts, `false` if it does not. pub fn has_conflicts(&self) -> bool { unsafe { raw::git_index_has_conflicts(self.raw) == 1 } } /// Get the full path to the index file on disk. /// /// Returns `None` if this is an in-memory index. pub fn path(&self) -> Option<&Path> { unsafe { crate::opt_bytes(self, raw::git_index_path(&*self.raw)).map(util::bytes2path) } } /// Update the contents of an existing index object in memory by reading /// from the hard disk. /// /// If force is true, this performs a "hard" read that discards in-memory /// changes and always reloads the on-disk index data. If there is no /// on-disk version, the index will be cleared. /// /// If force is false, this does a "soft" read that reloads the index data /// from disk only if it has changed since the last time it was loaded. /// Purely in-memory index data will be untouched. 
Be aware: if there are /// changes on disk, unwritten in-memory changes are discarded. pub fn read(&mut self, force: bool) -> Result<(), Error> { unsafe { try_call!(raw::git_index_read(self.raw, force)); } Ok(()) } /// Read a tree into the index file with stats /// /// The current index contents will be replaced by the specified tree. pub fn read_tree(&mut self, tree: &Tree<'_>) -> Result<(), Error> { unsafe { try_call!(raw::git_index_read_tree(self.raw, &*tree.raw())); } Ok(()) } /// Remove an entry from the index pub fn remove(&mut self, path: &Path, stage: i32) -> Result<(), Error> { let path = path_to_repo_path(path)?; unsafe { try_call!(raw::git_index_remove(self.raw, path, stage as c_int)); } Ok(()) } /// Remove an index entry corresponding to a file on disk. /// /// The file path must be relative to the repository's working folder. It /// may exist. /// /// If this file currently is the result of a merge conflict, this file will /// no longer be marked as conflicting. The data about the conflict will be /// moved to the "resolve undo" (REUC) section. pub fn remove_path(&mut self, path: &Path) -> Result<(), Error> { let path = path_to_repo_path(path)?; unsafe { try_call!(raw::git_index_remove_bypath(self.raw, path)); } Ok(()) } /// Remove all entries from the index under a given directory. pub fn remove_dir(&mut self, path: &Path, stage: i32) -> Result<(), Error> { let path = path_to_repo_path(path)?; unsafe { try_call!(raw::git_index_remove_directory( self.raw, path, stage as c_int )); } Ok(()) } /// Remove all matching index entries. /// /// If you provide a callback function, it will be invoked on each matching /// item in the index immediately before it is removed. Return 0 to remove /// the item, > 0 to skip the item, and < 0 to abort the scan. pub fn remove_all( &mut self, pathspecs: I, mut cb: Option<&mut IndexMatchedPath<'_>>, ) -> Result<(), Error> where T: IntoCString, I: IntoIterator, { let (_a, _b, raw_strarray) = crate::util::iter2cstrs_paths(pathspecs)?; let ptr = cb.as_mut(); let callback = ptr .as_ref() .map(|_| index_matched_path_cb as extern "C" fn(_, _, _) -> _); unsafe { try_call!(raw::git_index_remove_all( self.raw, &raw_strarray, callback, ptr.map(|p| p as *mut _).unwrap_or(ptr::null_mut()) as *mut c_void )); } Ok(()) } /// Update all index entries to match the working directory /// /// This method will fail in bare index instances. /// /// This scans the existing index entries and synchronizes them with the /// working directory, deleting them if the corresponding working directory /// file no longer exists otherwise updating the information (including /// adding the latest version of file to the ODB if needed). /// /// If you provide a callback function, it will be invoked on each matching /// item in the index immediately before it is updated (either refreshed or /// removed depending on working directory state). Return 0 to proceed with /// updating the item, > 0 to skip the item, and < 0 to abort the scan. 
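///
/// # Example
///
/// A minimal sketch (the repository path and pathspec are placeholders) that
/// refreshes tracked entries and reports each visited path via the callback:
///
/// ```no_run
/// use std::path::Path;
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let mut index = repo.index().expect("cannot get the Index file");
/// index
///     .update_all(
///         ["*"].iter(),
///         Some(&mut |path: &Path, _spec: &[u8]| {
///             println!("updating {}", path.display());
///             0 // proceed with updating this entry
///         }),
///     )
///     .expect("update_all failed");
/// index.write().expect("failed to write index");
/// ```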
pub fn update_all( &mut self, pathspecs: I, mut cb: Option<&mut IndexMatchedPath<'_>>, ) -> Result<(), Error> where T: IntoCString, I: IntoIterator, { let (_a, _b, raw_strarray) = crate::util::iter2cstrs_paths(pathspecs)?; let ptr = cb.as_mut(); let callback = ptr .as_ref() .map(|_| index_matched_path_cb as extern "C" fn(_, _, _) -> _); unsafe { try_call!(raw::git_index_update_all( self.raw, &raw_strarray, callback, ptr.map(|p| p as *mut _).unwrap_or(ptr::null_mut()) as *mut c_void )); } Ok(()) } /// Write an existing index object from memory back to disk using an atomic /// file lock. pub fn write(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_index_write(self.raw)); } Ok(()) } /// Write the index as a tree. /// /// This method will scan the index and write a representation of its /// current state back to disk; it recursively creates tree objects for each /// of the subtrees stored in the index, but only returns the OID of the /// root tree. This is the OID that can be used e.g. to create a commit. /// /// The index instance cannot be bare, and needs to be associated to an /// existing repository. /// /// The index must not contain any file in conflict. pub fn write_tree(&mut self) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_index_write_tree(&mut raw, self.raw)); Ok(Binding::from_raw(&raw as *const _)) } } /// Write the index as a tree to the given repository /// /// This is the same as `write_tree` except that the destination repository /// can be chosen. pub fn write_tree_to(&mut self, repo: &Repository) -> Result { let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_index_write_tree_to(&mut raw, self.raw, repo.raw())); Ok(Binding::from_raw(&raw as *const _)) } } } impl Binding for Index { type Raw = *mut raw::git_index; unsafe fn from_raw(raw: *mut raw::git_index) -> Index { Index { raw } } fn raw(&self) -> *mut raw::git_index { self.raw } } impl<'index> Binding for IndexConflicts<'index> { type Raw = *mut raw::git_index_conflict_iterator; unsafe fn from_raw(raw: *mut raw::git_index_conflict_iterator) -> IndexConflicts<'index> { IndexConflicts { conflict_iter: raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_index_conflict_iterator { self.conflict_iter } } extern "C" fn index_matched_path_cb( path: *const c_char, matched_pathspec: *const c_char, payload: *mut c_void, ) -> c_int { unsafe { let path = CStr::from_ptr(path).to_bytes(); let matched_pathspec = CStr::from_ptr(matched_pathspec).to_bytes(); panic::wrap(|| { let payload = payload as *mut &mut IndexMatchedPath<'_>; (*payload)(util::bytes2path(path), matched_pathspec) as c_int }) .unwrap_or(-1) } } impl Drop for Index { fn drop(&mut self) { unsafe { raw::git_index_free(self.raw) } } } impl<'index> Drop for IndexConflicts<'index> { fn drop(&mut self) { unsafe { raw::git_index_conflict_iterator_free(self.conflict_iter) } } } impl<'index> Iterator for IndexEntries<'index> { type Item = IndexEntry; fn next(&mut self) -> Option { self.range.next().map(|i| self.index.get(i).unwrap()) } } impl<'index> Iterator for IndexConflicts<'index> { type Item = Result; fn next(&mut self) -> Option> { let mut ancestor = ptr::null(); let mut our = ptr::null(); let mut their = ptr::null(); unsafe { try_call_iter!(raw::git_index_conflict_next( &mut ancestor, &mut our, &mut their, self.conflict_iter )); Some(Ok(IndexConflict { ancestor: match ancestor.is_null() { false => Some(IndexEntry::from_raw(*ancestor)), true => None, 
}, our: match our.is_null() { false => Some(IndexEntry::from_raw(*our)), true => None, }, their: match their.is_null() { false => Some(IndexEntry::from_raw(*their)), true => None, }, })) } } } impl Binding for IndexEntry { type Raw = raw::git_index_entry; unsafe fn from_raw(raw: raw::git_index_entry) -> IndexEntry { let raw::git_index_entry { ctime, mtime, dev, ino, mode, uid, gid, file_size, id, flags, flags_extended, path, } = raw; // libgit2 encodes the length of the path in the lower bits of `flags`, // but if the length exceeds the number of bits then the path is // nul-terminated. let mut pathlen = (flags & raw::GIT_INDEX_ENTRY_NAMEMASK) as usize; if pathlen == raw::GIT_INDEX_ENTRY_NAMEMASK as usize { pathlen = CStr::from_ptr(path).to_bytes().len(); } let path = slice::from_raw_parts(path as *const u8, pathlen); IndexEntry { dev, ino, mode, uid, gid, file_size, id: Binding::from_raw(&id as *const _), flags, flags_extended, path: path.to_vec(), mtime: Binding::from_raw(mtime), ctime: Binding::from_raw(ctime), } } fn raw(&self) -> raw::git_index_entry { // not implemented, may require a CString in storage panic!() } } #[cfg(test)] mod tests { use std::fs::{self, File}; use std::path::Path; use tempfile::TempDir; use crate::{Index, IndexEntry, IndexTime, Oid, Repository, ResetType}; #[test] fn smoke() { let mut index = Index::new().unwrap(); assert!(index.add_path(&Path::new(".")).is_err()); index.clear().unwrap(); assert_eq!(index.len(), 0); assert!(index.get(0).is_none()); assert!(index.path().is_none()); assert!(index.read(true).is_err()); } #[test] fn smoke_from_repo() { let (_td, repo) = crate::test::repo_init(); let mut index = repo.index().unwrap(); assert_eq!( index.path().map(|s| s.to_path_buf()), Some(repo.path().join("index")) ); Index::open(&repo.path().join("index")).unwrap(); index.clear().unwrap(); index.read(true).unwrap(); index.write().unwrap(); index.write_tree().unwrap(); index.write_tree_to(&repo).unwrap(); } #[test] fn add_all() { let (_td, repo) = crate::test::repo_init(); let mut index = repo.index().unwrap(); let root = repo.path().parent().unwrap(); fs::create_dir(&root.join("foo")).unwrap(); File::create(&root.join("foo/bar")).unwrap(); let mut called = false; index .add_all( ["foo"].iter(), crate::IndexAddOption::DEFAULT, Some(&mut |a: &Path, b: &[u8]| { assert!(!called); called = true; assert_eq!(b, b"foo"); assert_eq!(a, Path::new("foo/bar")); 0 }), ) .unwrap(); assert!(called); called = false; index .remove_all( ["."].iter(), Some(&mut |a: &Path, b: &[u8]| { assert!(!called); called = true; assert_eq!(b, b"."); assert_eq!(a, Path::new("foo/bar")); 0 }), ) .unwrap(); assert!(called); } #[test] fn smoke_add() { let (_td, repo) = crate::test::repo_init(); let mut index = repo.index().unwrap(); let root = repo.path().parent().unwrap(); fs::create_dir(&root.join("foo")).unwrap(); File::create(&root.join("foo/bar")).unwrap(); index.add_path(Path::new("foo/bar")).unwrap(); index.write().unwrap(); assert_eq!(index.iter().count(), 1); // Make sure we can use this repo somewhere else now. 
let id = index.write_tree().unwrap(); let tree = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); let id = repo.refname_to_id("HEAD").unwrap(); let parent = repo.find_commit(id).unwrap(); let commit = repo .commit(Some("HEAD"), &sig, &sig, "commit", &tree, &[&parent]) .unwrap(); let obj = repo.find_object(commit, None).unwrap(); repo.reset(&obj, ResetType::Hard, None).unwrap(); let td2 = TempDir::new().unwrap(); let url = crate::test::path2url(&root); let repo = Repository::clone(&url, td2.path()).unwrap(); let obj = repo.find_object(commit, None).unwrap(); repo.reset(&obj, ResetType::Hard, None).unwrap(); } #[test] fn add_then_read() { let mut index = Index::new().unwrap(); assert!(index.add(&entry()).is_err()); let mut index = Index::new().unwrap(); let mut e = entry(); e.path = b"foobar".to_vec(); index.add(&e).unwrap(); let e = index.get(0).unwrap(); assert_eq!(e.path.len(), 6); } #[test] fn add_frombuffer_then_read() { let (_td, repo) = crate::test::repo_init(); let mut index = repo.index().unwrap(); let mut e = entry(); e.path = b"foobar".to_vec(); let content = b"the contents"; index.add_frombuffer(&e, content).unwrap(); let e = index.get(0).unwrap(); assert_eq!(e.path.len(), 6); let b = repo.find_blob(e.id).unwrap(); assert_eq!(b.content(), content); } fn entry() -> IndexEntry { IndexEntry { ctime: IndexTime::new(0, 0), mtime: IndexTime::new(0, 0), dev: 0, ino: 0, mode: 0o100644, uid: 0, gid: 0, file_size: 0, id: Oid::from_bytes(&[0; 20]).unwrap(), flags: 0, flags_extended: 0, path: Vec::new(), } } } vendor/git2/src/util.rs0000664000175000017500000002224314160055207015631 0ustar mwhudsonmwhudsonuse libc::{c_char, c_int, size_t}; use std::cmp::Ordering; use std::ffi::{CString, OsStr, OsString}; use std::iter::IntoIterator; use std::path::{Component, Path, PathBuf}; use crate::{raw, Error}; #[doc(hidden)] pub trait IsNull { fn is_ptr_null(&self) -> bool; } impl IsNull for *const T { fn is_ptr_null(&self) -> bool { self.is_null() } } impl IsNull for *mut T { fn is_ptr_null(&self) -> bool { self.is_null() } } #[doc(hidden)] pub trait Binding: Sized { type Raw; unsafe fn from_raw(raw: Self::Raw) -> Self; fn raw(&self) -> Self::Raw; unsafe fn from_raw_opt(raw: T) -> Option where T: Copy + IsNull, Self: Binding, { if raw.is_ptr_null() { None } else { Some(Binding::from_raw(raw)) } } } /// Converts an iterator of repo paths into a git2-compatible array of cstrings. /// /// Only use this for repo-relative paths or pathspecs. /// /// See `iter2cstrs` for more details. pub fn iter2cstrs_paths( iter: I, ) -> Result<(Vec, Vec<*const c_char>, raw::git_strarray), Error> where T: IntoCString, I: IntoIterator, { let cstrs = iter .into_iter() .map(|i| fixup_windows_path(i.into_c_string()?)) .collect::, _>>()?; iter2cstrs(cstrs) } /// Converts an iterator of things into a git array of c-strings. /// /// Returns a tuple `(cstrings, pointers, git_strarray)`. The first two values /// should not be dropped before `git_strarray`. 
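///
/// A sketch for illustration only (this is an internal helper, so the example
/// is not compiled); keep the first two values alive while the array is used:
///
/// ```ignore
/// // `raw_array` borrows the memory owned by `_strings`/`_ptrs`.
/// let (_strings, _ptrs, raw_array) = iter2cstrs(["refs/heads/main"].iter())?;
/// unsafe {
///     // ... hand `&raw_array` to a libgit2 call expecting a git_strarray ...
/// }
/// ```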
pub fn iter2cstrs( iter: I, ) -> Result<(Vec, Vec<*const c_char>, raw::git_strarray), Error> where T: IntoCString, I: IntoIterator, { let cstrs = iter .into_iter() .map(|i| i.into_c_string()) .collect::, _>>()?; let ptrs = cstrs.iter().map(|i| i.as_ptr()).collect::>(); let raw = raw::git_strarray { strings: ptrs.as_ptr() as *mut _, count: ptrs.len() as size_t, }; Ok((cstrs, ptrs, raw)) } #[cfg(unix)] pub fn bytes2path(b: &[u8]) -> &Path { use std::os::unix::prelude::*; Path::new(OsStr::from_bytes(b)) } #[cfg(windows)] pub fn bytes2path(b: &[u8]) -> &Path { use std::str; Path::new(str::from_utf8(b).unwrap()) } /// A class of types that can be converted to C strings. /// /// These types are represented internally as byte slices and it is quite rare /// for them to contain an interior 0 byte. pub trait IntoCString { /// Consume this container, converting it into a CString fn into_c_string(self) -> Result; } impl<'a, T: IntoCString + Clone> IntoCString for &'a T { fn into_c_string(self) -> Result { self.clone().into_c_string() } } impl<'a> IntoCString for &'a str { fn into_c_string(self) -> Result { Ok(CString::new(self)?) } } impl IntoCString for String { fn into_c_string(self) -> Result { Ok(CString::new(self.into_bytes())?) } } impl IntoCString for CString { fn into_c_string(self) -> Result { Ok(self) } } impl<'a> IntoCString for &'a Path { fn into_c_string(self) -> Result { let s: &OsStr = self.as_ref(); s.into_c_string() } } impl IntoCString for PathBuf { fn into_c_string(self) -> Result { let s: OsString = self.into(); s.into_c_string() } } impl<'a> IntoCString for &'a OsStr { fn into_c_string(self) -> Result { self.to_os_string().into_c_string() } } impl IntoCString for OsString { #[cfg(unix)] fn into_c_string(self) -> Result { use std::os::unix::prelude::*; let s: &OsStr = self.as_ref(); Ok(CString::new(s.as_bytes())?) } #[cfg(windows)] fn into_c_string(self) -> Result { match self.to_str() { Some(s) => s.into_c_string(), None => Err(Error::from_str( "only valid unicode paths are accepted on windows", )), } } } impl<'a> IntoCString for &'a [u8] { fn into_c_string(self) -> Result { Ok(CString::new(self)?) } } impl IntoCString for Vec { fn into_c_string(self) -> Result { Ok(CString::new(self)?) } } pub fn into_opt_c_string(opt_s: Option) -> Result, Error> where S: IntoCString, { match opt_s { None => Ok(None), Some(s) => Ok(Some(s.into_c_string()?)), } } pub fn c_cmp_to_ordering(cmp: c_int) -> Ordering { match cmp { 0 => Ordering::Equal, n if n < 0 => Ordering::Less, _ => Ordering::Greater, } } /// Converts a path to a CString that is usable by the libgit2 API. /// /// Checks if it is a relative path. /// /// On Windows, this also requires the path to be valid unicode, and translates /// back slashes to forward slashes. pub fn path_to_repo_path(path: &Path) -> Result { macro_rules! 
err { ($msg:literal, $path:expr) => { return Err(Error::from_str(&format!($msg, $path.display()))) }; } match path.components().next() { None => return Err(Error::from_str("repo path should not be empty")), Some(Component::Prefix(_)) => err!( "repo path `{}` should be relative, not a windows prefix", path ), Some(Component::RootDir) => err!("repo path `{}` should be relative", path), Some(Component::CurDir) => err!("repo path `{}` should not start with `.`", path), Some(Component::ParentDir) => err!("repo path `{}` should not start with `..`", path), Some(Component::Normal(_)) => {} } #[cfg(windows)] { match path.to_str() { None => { return Err(Error::from_str( "only valid unicode paths are accepted on windows", )) } Some(s) => return fixup_windows_path(s), } } #[cfg(not(windows))] { path.into_c_string() } } pub fn cstring_to_repo_path(path: T) -> Result { fixup_windows_path(path.into_c_string()?) } #[cfg(windows)] fn fixup_windows_path>>(path: P) -> Result { let mut bytes: Vec = path.into(); for i in 0..bytes.len() { if bytes[i] == b'\\' { bytes[i] = b'/'; } } Ok(CString::new(bytes)?) } #[cfg(not(windows))] fn fixup_windows_path(path: CString) -> Result { Ok(path) } #[cfg(test)] mod tests { use super::*; macro_rules! assert_err { ($path:expr, $msg:expr) => { match path_to_repo_path(Path::new($path)) { Ok(_) => panic!("expected `{}` to err", $path), Err(e) => assert_eq!(e.message(), $msg), } }; } macro_rules! assert_repo_path_ok { ($path:expr) => { assert_repo_path_ok!($path, $path) }; ($path:expr, $expect:expr) => { assert_eq!( path_to_repo_path(Path::new($path)), Ok(CString::new($expect).unwrap()) ); }; } #[test] #[cfg(windows)] fn path_to_repo_path_translate() { assert_repo_path_ok!("foo"); assert_repo_path_ok!("foo/bar"); assert_repo_path_ok!(r"foo\bar", "foo/bar"); assert_repo_path_ok!(r"foo\bar\", "foo/bar/"); } #[test] fn path_to_repo_path_no_weird() { assert_err!("", "repo path should not be empty"); assert_err!("./foo", "repo path `./foo` should not start with `.`"); assert_err!("../foo", "repo path `../foo` should not start with `..`"); } #[test] #[cfg(not(windows))] fn path_to_repo_path_no_absolute() { assert_err!("/", "repo path `/` should be relative"); assert_repo_path_ok!("foo/bar"); } #[test] #[cfg(windows)] fn path_to_repo_path_no_absolute() { assert_err!( r"c:", r"repo path `c:` should be relative, not a windows prefix" ); assert_err!( r"c:\", r"repo path `c:\` should be relative, not a windows prefix" ); assert_err!( r"c:temp", r"repo path `c:temp` should be relative, not a windows prefix" ); assert_err!( r"\\?\UNC\a\b\c", r"repo path `\\?\UNC\a\b\c` should be relative, not a windows prefix" ); assert_err!( r"\\?\c:\foo", r"repo path `\\?\c:\foo` should be relative, not a windows prefix" ); assert_err!( r"\\.\COM42", r"repo path `\\.\COM42` should be relative, not a windows prefix" ); assert_err!( r"\\a\b", r"repo path `\\a\b` should be relative, not a windows prefix" ); assert_err!(r"\", r"repo path `\` should be relative"); assert_err!(r"/", r"repo path `/` should be relative"); assert_err!(r"\foo", r"repo path `\foo` should be relative"); assert_err!(r"/foo", r"repo path `/foo` should be relative"); } } vendor/git2/src/oid.rs0000664000175000017500000001605214160055207015430 0ustar mwhudsonmwhudsonuse libc; use std::cmp::Ordering; use std::fmt; use std::hash::{Hash, Hasher}; use std::path::Path; use std::str; use crate::{raw, Error, IntoCString, ObjectType}; use crate::util::{c_cmp_to_ordering, Binding}; /// Unique identity of any object (commit, tree, blob, tag). 
#[derive(Copy, Clone)] #[repr(C)] pub struct Oid { raw: raw::git_oid, } impl Oid { /// Parse a hex-formatted object id into an Oid structure. /// /// # Errors /// /// Returns an error if the string is empty, is longer than 40 hex /// characters, or contains any non-hex characters. pub fn from_str(s: &str) -> Result { crate::init(); let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_oid_fromstrn( &mut raw, s.as_bytes().as_ptr() as *const libc::c_char, s.len() as libc::size_t )); } Ok(Oid { raw }) } /// Parse a raw object id into an Oid structure. /// /// If the array given is not 20 bytes in length, an error is returned. pub fn from_bytes(bytes: &[u8]) -> Result { crate::init(); let mut raw = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; if bytes.len() != raw::GIT_OID_RAWSZ { Err(Error::from_str("raw byte array must be 20 bytes")) } else { unsafe { try_call!(raw::git_oid_fromraw(&mut raw, bytes.as_ptr())); } Ok(Oid { raw }) } } /// Creates an all zero Oid structure. pub fn zero() -> Oid { let out = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; Oid { raw: out } } /// Hashes the provided data as an object of the provided type, and returns /// an Oid corresponding to the result. This does not store the object /// inside any object database or repository. pub fn hash_object(kind: ObjectType, bytes: &[u8]) -> Result { crate::init(); let mut out = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_odb_hash( &mut out, bytes.as_ptr() as *const libc::c_void, bytes.len(), kind.raw() )); } Ok(Oid { raw: out }) } /// Hashes the content of the provided file as an object of the provided type, /// and returns an Oid corresponding to the result. This does not store the object /// inside any object database or repository. pub fn hash_file>(kind: ObjectType, path: P) -> Result { crate::init(); // Normal file path OK (does not need Windows conversion). let rpath = path.as_ref().into_c_string()?; let mut out = raw::git_oid { id: [0; raw::GIT_OID_RAWSZ], }; unsafe { try_call!(raw::git_odb_hashfile(&mut out, rpath, kind.raw())); } Ok(Oid { raw: out }) } /// View this OID as a byte-slice 20 bytes in length. pub fn as_bytes(&self) -> &[u8] { &self.raw.id } /// Test if this OID is all zeros. pub fn is_zero(&self) -> bool { unsafe { raw::git_oid_iszero(&self.raw) == 1 } } } impl Binding for Oid { type Raw = *const raw::git_oid; unsafe fn from_raw(oid: *const raw::git_oid) -> Oid { Oid { raw: *oid } } fn raw(&self) -> *const raw::git_oid { &self.raw as *const _ } } impl fmt::Debug for Oid { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(self, f) } } impl fmt::Display for Oid { /// Hex-encode this Oid into a formatter. fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { let mut dst = [0u8; raw::GIT_OID_HEXSZ + 1]; unsafe { raw::git_oid_tostr( dst.as_mut_ptr() as *mut libc::c_char, dst.len() as libc::size_t, &self.raw, ); } let s = &dst[..dst.iter().position(|&a| a == 0).unwrap()]; str::from_utf8(s).unwrap().fmt(f) } } impl str::FromStr for Oid { type Err = Error; /// Parse a hex-formatted object id into an Oid structure. /// /// # Errors /// /// Returns an error if the string is empty, is longer than 40 hex /// characters, or contains any non-hex characters. 
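///
/// # Example
///
/// A minimal sketch using `str::parse` (the hex string is a placeholder):
///
/// ```no_run
/// use git2::Oid;
///
/// let oid: Oid = "decbf2be529ab6557d5429922251e5ee36519817".parse().unwrap();
/// assert!(!oid.is_zero());
/// assert!("not hex".parse::<Oid>().is_err());
/// ```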
fn from_str(s: &str) -> Result { Oid::from_str(s) } } impl PartialEq for Oid { fn eq(&self, other: &Oid) -> bool { unsafe { raw::git_oid_equal(&self.raw, &other.raw) != 0 } } } impl Eq for Oid {} impl PartialOrd for Oid { fn partial_cmp(&self, other: &Oid) -> Option { Some(self.cmp(other)) } } impl Ord for Oid { fn cmp(&self, other: &Oid) -> Ordering { c_cmp_to_ordering(unsafe { raw::git_oid_cmp(&self.raw, &other.raw) }) } } impl Hash for Oid { fn hash(&self, into: &mut H) { self.raw.id.hash(into) } } impl AsRef<[u8]> for Oid { fn as_ref(&self) -> &[u8] { self.as_bytes() } } #[cfg(test)] mod tests { use std::fs::File; use std::io::prelude::*; use super::Error; use super::Oid; use crate::ObjectType; use tempfile::TempDir; #[test] fn conversions() { assert!(Oid::from_str("foo").is_err()); assert!(Oid::from_str("decbf2be529ab6557d5429922251e5ee36519817").is_ok()); assert!(Oid::from_bytes(b"foo").is_err()); assert!(Oid::from_bytes(b"00000000000000000000").is_ok()); } #[test] fn comparisons() -> Result<(), Error> { assert_eq!(Oid::from_str("decbf2b")?, Oid::from_str("decbf2b")?); assert!(Oid::from_str("decbf2b")? <= Oid::from_str("decbf2b")?); assert!(Oid::from_str("decbf2b")? >= Oid::from_str("decbf2b")?); { let o = Oid::from_str("decbf2b")?; assert_eq!(o, o); assert!(o <= o); assert!(o >= o); } assert_eq!( Oid::from_str("decbf2b")?, Oid::from_str("decbf2b000000000000000000000000000000000")? ); assert!( Oid::from_bytes(b"00000000000000000000")? < Oid::from_bytes(b"00000000000000000001")? ); assert!(Oid::from_bytes(b"00000000000000000000")? < Oid::from_str("decbf2b")?); assert_eq!( Oid::from_bytes(b"00000000000000000000")?, Oid::from_str("3030303030303030303030303030303030303030")? ); Ok(()) } #[test] fn zero_is_zero() { assert!(Oid::zero().is_zero()); } #[test] fn hash_object() { let bytes = "Hello".as_bytes(); assert!(Oid::hash_object(ObjectType::Blob, bytes).is_ok()); } #[test] fn hash_file() { let td = TempDir::new().unwrap(); let path = td.path().join("hello.txt"); let mut file = File::create(&path).unwrap(); file.write_all("Hello".as_bytes()).unwrap(); assert!(Oid::hash_file(ObjectType::Blob, &path).is_ok()); } } vendor/git2/src/refspec.rs0000664000175000017500000000727714160055207016315 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::marker; use std::str; use crate::util::Binding; use crate::{raw, Buf, Direction, Error}; /// A structure to represent a git [refspec][1]. /// /// Refspecs are currently mainly accessed/created through a `Remote`. /// /// [1]: http://git-scm.com/book/en/Git-Internals-The-Refspec pub struct Refspec<'remote> { raw: *const raw::git_refspec, _marker: marker::PhantomData<&'remote raw::git_remote>, } impl<'remote> Refspec<'remote> { /// Get the refspec's direction. pub fn direction(&self) -> Direction { match unsafe { raw::git_refspec_direction(self.raw) } { raw::GIT_DIRECTION_FETCH => Direction::Fetch, raw::GIT_DIRECTION_PUSH => Direction::Push, n => panic!("unknown refspec direction: {}", n), } } /// Get the destination specifier. /// /// If the destination is not utf-8, None is returned. pub fn dst(&self) -> Option<&str> { str::from_utf8(self.dst_bytes()).ok() } /// Get the destination specifier, in bytes. 
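///
/// # Example
///
/// A minimal sketch (the repository path and the remote name `origin` are
/// placeholders) that prints the destination of each of a remote's refspecs:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let remote = repo.find_remote("origin").expect("no `origin` remote");
/// for refspec in remote.refspecs() {
///     // The raw bytes are always available, even when they are not utf-8.
///     println!("{}", String::from_utf8_lossy(refspec.dst_bytes()));
/// }
/// ```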
pub fn dst_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_refspec_dst(self.raw)).unwrap() } } /// Check if a refspec's destination descriptor matches a reference pub fn dst_matches(&self, refname: &str) -> bool { let refname = CString::new(refname).unwrap(); unsafe { raw::git_refspec_dst_matches(self.raw, refname.as_ptr()) == 1 } } /// Get the source specifier. /// /// If the source is not utf-8, None is returned. pub fn src(&self) -> Option<&str> { str::from_utf8(self.src_bytes()).ok() } /// Get the source specifier, in bytes. pub fn src_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_refspec_src(self.raw)).unwrap() } } /// Check if a refspec's source descriptor matches a reference pub fn src_matches(&self, refname: &str) -> bool { let refname = CString::new(refname).unwrap(); unsafe { raw::git_refspec_src_matches(self.raw, refname.as_ptr()) == 1 } } /// Get the force update setting. pub fn is_force(&self) -> bool { unsafe { raw::git_refspec_force(self.raw) == 1 } } /// Get the refspec's string. /// /// Returns None if the string is not valid utf8. pub fn str(&self) -> Option<&str> { str::from_utf8(self.bytes()).ok() } /// Get the refspec's string as a byte array pub fn bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_refspec_string(self.raw)).unwrap() } } /// Transform a reference to its target following the refspec's rules pub fn transform(&self, name: &str) -> Result { let name = CString::new(name).unwrap(); unsafe { let buf = Buf::new(); try_call!(raw::git_refspec_transform( buf.raw(), self.raw, name.as_ptr() )); Ok(buf) } } /// Transform a target reference to its source reference following the refspec's rules pub fn rtransform(&self, name: &str) -> Result { let name = CString::new(name).unwrap(); unsafe { let buf = Buf::new(); try_call!(raw::git_refspec_rtransform( buf.raw(), self.raw, name.as_ptr() )); Ok(buf) } } } impl<'remote> Binding for Refspec<'remote> { type Raw = *const raw::git_refspec; unsafe fn from_raw(raw: *const raw::git_refspec) -> Refspec<'remote> { Refspec { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_refspec { self.raw } } vendor/git2/src/message.rs0000664000175000017500000000360514160055207016301 0ustar mwhudsonmwhudsonuse std::ffi::CString; use libc::{c_char, c_int}; use crate::util::Binding; use crate::{raw, Buf, Error, IntoCString}; /// Clean up a message, removing extraneous whitespace, and ensure that the /// message ends with a newline. If `comment_char` is `Some`, also remove comment /// lines starting with that character. pub fn message_prettify( message: T, comment_char: Option, ) -> Result { _message_prettify(message.into_c_string()?, comment_char) } fn _message_prettify(message: CString, comment_char: Option) -> Result { let ret = Buf::new(); unsafe { try_call!(raw::git_message_prettify( ret.raw(), message, comment_char.is_some() as c_int, comment_char.unwrap_or(0) as c_char )); } Ok(ret.as_str().unwrap().to_string()) } /// The default comment character for `message_prettify` ('#') pub const DEFAULT_COMMENT_CHAR: Option = Some(b'#'); #[cfg(test)] mod tests { use crate::{message_prettify, DEFAULT_COMMENT_CHAR}; #[test] fn prettify() { // This does not attempt to duplicate the extensive tests for // git_message_prettify in libgit2, just a few representative values to // make sure the interface works as expected. 
assert_eq!(message_prettify("1\n\n\n2", None).unwrap(), "1\n\n2\n"); assert_eq!( message_prettify("1\n\n\n2\n\n\n3", None).unwrap(), "1\n\n2\n\n3\n" ); assert_eq!( message_prettify("1\n# comment\n# more", None).unwrap(), "1\n# comment\n# more\n" ); assert_eq!( message_prettify("1\n# comment\n# more", DEFAULT_COMMENT_CHAR).unwrap(), "1\n" ); assert_eq!( message_prettify("1\n; comment\n; more", Some(';' as u8)).unwrap(), "1\n" ); } } vendor/git2/src/merge.rs0000664000175000017500000001467614160055207015766 0ustar mwhudsonmwhudsonuse libc::c_uint; use std::marker; use std::mem; use std::str; use crate::call::Convert; use crate::util::Binding; use crate::{raw, Commit, FileFavor, Oid}; /// A structure to represent an annotated commit, the input to merge and rebase. /// /// An annotated commit contains information about how it was looked up, which /// may be useful for functions like merge or rebase to provide context to the /// operation. pub struct AnnotatedCommit<'repo> { raw: *mut raw::git_annotated_commit, _marker: marker::PhantomData>, } /// Options to specify when merging. pub struct MergeOptions { raw: raw::git_merge_options, } impl<'repo> AnnotatedCommit<'repo> { /// Gets the commit ID that the given git_annotated_commit refers to pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_annotated_commit_id(self.raw)) } } /// Get the refname that the given git_annotated_commit refers to /// /// Returns None if it is not valid utf8 pub fn refname(&self) -> Option<&str> { str::from_utf8(self.refname_bytes()).ok() } /// Get the refname that the given git_annotated_commit refers to. pub fn refname_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_annotated_commit_ref(&*self.raw)).unwrap() } } } impl Default for MergeOptions { fn default() -> Self { Self::new() } } impl MergeOptions { /// Creates a default set of merge options. pub fn new() -> MergeOptions { let mut opts = MergeOptions { raw: unsafe { mem::zeroed() }, }; assert_eq!(unsafe { raw::git_merge_init_options(&mut opts.raw, 1) }, 0); opts } fn flag(&mut self, opt: u32, val: bool) -> &mut MergeOptions { if val { self.raw.flags |= opt; } else { self.raw.flags &= !opt; } self } /// Detect file renames pub fn find_renames(&mut self, find: bool) -> &mut MergeOptions { self.flag(raw::GIT_MERGE_FIND_RENAMES as u32, find) } /// If a conflict occurs, exit immediately instead of attempting to continue /// resolving conflicts pub fn fail_on_conflict(&mut self, fail: bool) -> &mut MergeOptions { self.flag(raw::GIT_MERGE_FAIL_ON_CONFLICT as u32, fail) } /// Do not write the REUC extension on the generated index pub fn skip_reuc(&mut self, skip: bool) -> &mut MergeOptions { self.flag(raw::GIT_MERGE_FAIL_ON_CONFLICT as u32, skip) } /// If the commits being merged have multiple merge bases, do not build a /// recursive merge base (by merging the multiple merge bases), instead /// simply use the first base. pub fn no_recursive(&mut self, disable: bool) -> &mut MergeOptions { self.flag(raw::GIT_MERGE_NO_RECURSIVE as u32, disable) } /// Similarity to consider a file renamed (default 50) pub fn rename_threshold(&mut self, thresh: u32) -> &mut MergeOptions { self.raw.rename_threshold = thresh; self } /// Maximum similarity sources to examine for renames (default 200). /// If the number of rename candidates (add / delete pairs) is greater /// than this value, inexact rename detection is aborted. This setting /// overrides the `merge.renameLimit` configuration value. 
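///
/// # Example
///
/// A minimal sketch of builder-style configuration; the chosen limit is
/// arbitrary and the options would typically be passed on to a merge call:
///
/// ```no_run
/// use git2::MergeOptions;
///
/// let mut opts = MergeOptions::new();
/// // Enable rename detection and examine more rename candidates than the default.
/// opts.find_renames(true).target_limit(1000);
/// // `&mut opts` can then be supplied to `Repository::merge`.
/// ```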
pub fn target_limit(&mut self, limit: u32) -> &mut MergeOptions { self.raw.target_limit = limit as c_uint; self } /// Maximum number of times to merge common ancestors to build a /// virtual merge base when faced with criss-cross merges. When /// this limit is reached, the next ancestor will simply be used /// instead of attempting to merge it. The default is unlimited. pub fn recursion_limit(&mut self, limit: u32) -> &mut MergeOptions { self.raw.recursion_limit = limit as c_uint; self } /// Specify a side to favor for resolving conflicts pub fn file_favor(&mut self, favor: FileFavor) -> &mut MergeOptions { self.raw.file_favor = favor.convert(); self } fn file_flag(&mut self, opt: u32, val: bool) -> &mut MergeOptions { if val { self.raw.file_flags |= opt; } else { self.raw.file_flags &= !opt; } self } /// Create standard conflicted merge files pub fn standard_style(&mut self, standard: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_STYLE_MERGE as u32, standard) } /// Create diff3-style file pub fn diff3_style(&mut self, diff3: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_STYLE_DIFF3 as u32, diff3) } /// Condense non-alphanumeric regions for simplified diff file pub fn simplify_alnum(&mut self, simplify: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_SIMPLIFY_ALNUM as u32, simplify) } /// Ignore all whitespace pub fn ignore_whitespace(&mut self, ignore: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_IGNORE_WHITESPACE as u32, ignore) } /// Ignore changes in amount of whitespace pub fn ignore_whitespace_change(&mut self, ignore: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_IGNORE_WHITESPACE_CHANGE as u32, ignore) } /// Ignore whitespace at end of line pub fn ignore_whitespace_eol(&mut self, ignore: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_IGNORE_WHITESPACE_EOL as u32, ignore) } /// Use the "patience diff" algorithm pub fn patience(&mut self, patience: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_DIFF_PATIENCE as u32, patience) } /// Take extra time to find minimal diff pub fn minimal(&mut self, minimal: bool) -> &mut MergeOptions { self.file_flag(raw::GIT_MERGE_FILE_DIFF_MINIMAL as u32, minimal) } /// Acquire a pointer to the underlying raw options. pub unsafe fn raw(&self) -> *const raw::git_merge_options { &self.raw as *const _ } } impl<'repo> Binding for AnnotatedCommit<'repo> { type Raw = *mut raw::git_annotated_commit; unsafe fn from_raw(raw: *mut raw::git_annotated_commit) -> AnnotatedCommit<'repo> { AnnotatedCommit { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_annotated_commit { self.raw } } impl<'repo> Drop for AnnotatedCommit<'repo> { fn drop(&mut self) { unsafe { raw::git_annotated_commit_free(self.raw) } } } vendor/git2/src/opts.rs0000664000175000017500000000762314160055207015646 0ustar mwhudsonmwhudson//! Bindings to libgit2's git_libgit2_opts function. use std::ffi::CString; use crate::util::Binding; use crate::{raw, Buf, ConfigLevel, Error, IntoCString}; /// Set the search path for a level of config data. The search path applied to /// shared attributes and ignore files, too. /// /// `level` must be one of [`ConfigLevel::System`], [`ConfigLevel::Global`], /// [`ConfigLevel::XDG`], [`ConfigLevel::ProgramData`]. /// /// `path` lists directories delimited by `GIT_PATH_LIST_SEPARATOR`. /// Use magic path `$PATH` to include the old value of the path /// (if you want to prepend or append, for instance). 
/// /// This function is unsafe as it mutates the global state but cannot guarantee /// thread-safety. It needs to be externally synchronized with calls to access /// the global state. pub unsafe fn set_search_path<P>
(level: ConfigLevel, path: P) -> Result<(), Error> where P: IntoCString, { crate::init(); try_call!(raw::git_libgit2_opts( raw::GIT_OPT_SET_SEARCH_PATH as libc::c_int, level as libc::c_int, path.into_c_string()?.as_ptr() )); Ok(()) } /// Reset the search path for a given level of config data to the default /// (generally based on environment variables). /// /// `level` must be one of [`ConfigLevel::System`], [`ConfigLevel::Global`], /// [`ConfigLevel::XDG`], [`ConfigLevel::ProgramData`]. /// /// This function is unsafe as it mutates the global state but cannot guarantee /// thread-safety. It needs to be externally synchronized with calls to access /// the global state. pub unsafe fn reset_search_path(level: ConfigLevel) -> Result<(), Error> { crate::init(); try_call!(raw::git_libgit2_opts( raw::GIT_OPT_SET_SEARCH_PATH as libc::c_int, level as libc::c_int, core::ptr::null::() )); Ok(()) } /// Get the search path for a given level of config data. /// /// `level` must be one of [`ConfigLevel::System`], [`ConfigLevel::Global`], /// [`ConfigLevel::XDG`], [`ConfigLevel::ProgramData`]. /// /// This function is unsafe as it mutates the global state but cannot guarantee /// thread-safety. It needs to be externally synchronized with calls to access /// the global state. pub unsafe fn get_search_path(level: ConfigLevel) -> Result { crate::init(); let buf = Buf::new(); try_call!(raw::git_libgit2_opts( raw::GIT_OPT_GET_SEARCH_PATH as libc::c_int, level as libc::c_int, buf.raw() as *const _ )); buf.into_c_string() } /// Controls whether or not libgit2 will verify when writing an object that all /// objects it references are valid. Enabled by default, but disabling this can /// significantly improve performance, at the cost of potentially allowing the /// creation of objects that reference invalid objects (due to programming /// error or repository corruption). pub fn strict_object_creation(enabled: bool) { let error = unsafe { raw::git_libgit2_opts( raw::GIT_OPT_ENABLE_STRICT_OBJECT_CREATION as libc::c_int, enabled as libc::c_int, ) }; // This function cannot actually fail, but the function has an error return // for other options that can. debug_assert!(error >= 0); } /// Controls whether or not libgit2 will verify that objects loaded have the /// expected hash. Enabled by default, but disabling this can significantly /// improve performance, at the cost of relying on repository integrity /// without checking it. pub fn strict_hash_verification(enabled: bool) { let error = unsafe { raw::git_libgit2_opts( raw::GIT_OPT_ENABLE_STRICT_HASH_VERIFICATION as libc::c_int, enabled as libc::c_int, ) }; // This function cannot actually fail, but the function has an error return // for other options that can. debug_assert!(error >= 0); } #[cfg(test)] mod test { use super::*; #[test] fn smoke() { strict_hash_verification(false); } } vendor/git2/src/oid_array.rs0000664000175000017500000000241114160055207016620 0ustar mwhudsonmwhudson//! Bindings to libgit2's raw `git_oidarray` type use std::ops::Deref; use crate::oid::Oid; use crate::raw; use crate::util::Binding; use std::mem; use std::slice; /// An oid array structure used by libgit2 /// /// Some apis return arrays of oids which originate from libgit2. This /// wrapper type behaves a little like `Vec<&Oid>` but does so without copying /// the underlying Oids until necessary. 
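///
/// # Example
///
/// A minimal sketch assuming the `Repository::merge_bases` accessor, which
/// returns an `OidArray`; the repository path and object ids are placeholders:
///
/// ```no_run
/// use git2::{Oid, Repository};
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let one = Oid::from_str("decbf2be529ab6557d5429922251e5ee36519817").unwrap();
/// let two = Oid::from_str("7f72c3b4e0fca0a1b3b1b7e3f2b6e1a5d4c3b2a1").unwrap();
/// // `OidArray` derefs to `[Oid]`, so it can be iterated like a slice.
/// let bases = repo.merge_bases(one, two).expect("failed to find merge bases");
/// for oid in bases.iter() {
///     println!("merge base: {}", oid);
/// }
/// ```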
pub struct OidArray { raw: raw::git_oidarray, } impl Deref for OidArray { type Target = [Oid]; fn deref(&self) -> &[Oid] { unsafe { debug_assert_eq!(mem::size_of::(), mem::size_of_val(&*self.raw.ids)); slice::from_raw_parts(self.raw.ids as *const Oid, self.raw.count as usize) } } } impl Binding for OidArray { type Raw = raw::git_oidarray; unsafe fn from_raw(raw: raw::git_oidarray) -> OidArray { OidArray { raw } } fn raw(&self) -> raw::git_oidarray { self.raw } } impl<'repo> std::fmt::Debug for OidArray { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_tuple("OidArray").field(&self.deref()).finish() } } impl Drop for OidArray { fn drop(&mut self) { unsafe { raw::git_oidarray_free(&mut self.raw) } } } vendor/git2/src/object.rs0000664000175000017500000001736214160055207016130 0ustar mwhudsonmwhudsonuse std::marker; use std::mem; use std::ptr; use crate::util::Binding; use crate::{raw, Blob, Buf, Commit, Error, ObjectType, Oid, Repository, Tag, Tree}; use crate::{Describe, DescribeOptions}; /// A structure to represent a git [object][1] /// /// [1]: http://git-scm.com/book/en/Git-Internals-Git-Objects pub struct Object<'repo> { raw: *mut raw::git_object, _marker: marker::PhantomData<&'repo Repository>, } impl<'repo> Object<'repo> { /// Get the id (SHA1) of a repository object pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_object_id(&*self.raw)) } } /// Get the object type of an object. /// /// If the type is unknown, then `None` is returned. pub fn kind(&self) -> Option { ObjectType::from_raw(unsafe { raw::git_object_type(&*self.raw) }) } /// Recursively peel an object until an object of the specified type is met. /// /// If you pass `Any` as the target type, then the object will be /// peeled until the type changes (e.g. a tag will be chased until the /// referenced object is no longer a tag). pub fn peel(&self, kind: ObjectType) -> Result, Error> { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_object_peel(&mut raw, &*self.raw(), kind)); Ok(Binding::from_raw(raw)) } } /// Recursively peel an object until a blob is found pub fn peel_to_blob(&self) -> Result, Error> { self.peel(ObjectType::Blob) .map(|o| o.cast_or_panic(ObjectType::Blob)) } /// Recursively peel an object until a commit is found pub fn peel_to_commit(&self) -> Result, Error> { self.peel(ObjectType::Commit) .map(|o| o.cast_or_panic(ObjectType::Commit)) } /// Recursively peel an object until a tag is found pub fn peel_to_tag(&self) -> Result, Error> { self.peel(ObjectType::Tag) .map(|o| o.cast_or_panic(ObjectType::Tag)) } /// Recursively peel an object until a tree is found pub fn peel_to_tree(&self) -> Result, Error> { self.peel(ObjectType::Tree) .map(|o| o.cast_or_panic(ObjectType::Tree)) } /// Get a short abbreviated OID string for the object /// /// This starts at the "core.abbrev" length (default 7 characters) and /// iteratively extends to a longer string if that length is ambiguous. The /// result will be unambiguous (at least until new objects are added to the /// repository). pub fn short_id(&self) -> Result { unsafe { let buf = Buf::new(); try_call!(raw::git_object_short_id(buf.raw(), &*self.raw())); Ok(buf) } } /// Attempt to view this object as a commit. /// /// Returns `None` if the object is not actually a commit. pub fn as_commit(&self) -> Option<&Commit<'repo>> { self.cast(ObjectType::Commit) } /// Attempt to consume this object and return a commit. /// /// Returns `Err(self)` if this object is not actually a commit. 
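///
/// # Example
///
/// A minimal sketch (the repository path is a placeholder) showing how the
/// original object is handed back when the conversion fails:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let obj = repo.revparse_single("HEAD").expect("failed to look up HEAD");
/// match obj.into_commit() {
///     Ok(commit) => println!("HEAD is commit {}", commit.id()),
///     Err(obj) => println!("HEAD is not a commit: {:?}", obj.kind()),
/// }
/// ```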
pub fn into_commit(self) -> Result, Object<'repo>> { self.cast_into(ObjectType::Commit) } /// Attempt to view this object as a tag. /// /// Returns `None` if the object is not actually a tag. pub fn as_tag(&self) -> Option<&Tag<'repo>> { self.cast(ObjectType::Tag) } /// Attempt to consume this object and return a tag. /// /// Returns `Err(self)` if this object is not actually a tag. pub fn into_tag(self) -> Result, Object<'repo>> { self.cast_into(ObjectType::Tag) } /// Attempt to view this object as a tree. /// /// Returns `None` if the object is not actually a tree. pub fn as_tree(&self) -> Option<&Tree<'repo>> { self.cast(ObjectType::Tree) } /// Attempt to consume this object and return a tree. /// /// Returns `Err(self)` if this object is not actually a tree. pub fn into_tree(self) -> Result, Object<'repo>> { self.cast_into(ObjectType::Tree) } /// Attempt to view this object as a blob. /// /// Returns `None` if the object is not actually a blob. pub fn as_blob(&self) -> Option<&Blob<'repo>> { self.cast(ObjectType::Blob) } /// Attempt to consume this object and return a blob. /// /// Returns `Err(self)` if this object is not actually a blob. pub fn into_blob(self) -> Result, Object<'repo>> { self.cast_into(ObjectType::Blob) } /// Describes a commit /// /// Performs a describe operation on this commitish object. pub fn describe(&self, opts: &DescribeOptions) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_describe_commit(&mut ret, self.raw, opts.raw())); Ok(Binding::from_raw(ret)) } } fn cast(&self, kind: ObjectType) -> Option<&T> { assert_eq!(mem::size_of::>(), mem::size_of::()); if self.kind() == Some(kind) { unsafe { Some(&*(self as *const _ as *const T)) } } else { None } } fn cast_into(self, kind: ObjectType) -> Result> { assert_eq!(mem::size_of_val(&self), mem::size_of::()); if self.kind() == Some(kind) { Ok(unsafe { let other = ptr::read(&self as *const _ as *const T); mem::forget(self); other }) } else { Err(self) } } } /// This trait is useful to export cast_or_panic into crate but not outside pub trait CastOrPanic { fn cast_or_panic(self, kind: ObjectType) -> T; } impl<'repo> CastOrPanic for Object<'repo> { fn cast_or_panic(self, kind: ObjectType) -> T { assert_eq!(mem::size_of_val(&self), mem::size_of::()); if self.kind() == Some(kind) { unsafe { let other = ptr::read(&self as *const _ as *const T); mem::forget(self); other } } else { let buf; let akind = match self.kind() { Some(akind) => akind.str(), None => { buf = format!("unknown ({})", unsafe { raw::git_object_type(&*self.raw) }); &buf } }; panic!( "Expected object {} to be {} but it is {}", self.id(), kind.str(), akind ) } } } impl<'repo> Clone for Object<'repo> { fn clone(&self) -> Object<'repo> { let mut raw = ptr::null_mut(); unsafe { let rc = raw::git_object_dup(&mut raw, self.raw); assert_eq!(rc, 0); Binding::from_raw(raw) } } } impl<'repo> std::fmt::Debug for Object<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("Object"); match self.kind() { Some(kind) => ds.field("kind", &kind), None => ds.field( "kind", &format!("Unknow ({})", unsafe { raw::git_object_type(&*self.raw) }), ), }; ds.field("id", &self.id()); ds.finish() } } impl<'repo> Binding for Object<'repo> { type Raw = *mut raw::git_object; unsafe fn from_raw(raw: *mut raw::git_object) -> Object<'repo> { Object { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_object { self.raw } } impl<'repo> Drop for Object<'repo> { fn drop(&mut self) { 
unsafe { raw::git_object_free(self.raw) } } } vendor/git2/src/transaction.rs0000664000175000017500000002036514160055207017204 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::marker; use crate::{raw, util::Binding, Error, Oid, Reflog, Repository, Signature}; /// A structure representing a transactional update of a repository's references. /// /// Transactions work by locking loose refs for as long as the [`Transaction`] /// is held, and committing all changes to disk when [`Transaction::commit`] is /// called. Note that comitting is not atomic: if an operation fails, the /// transaction aborts, but previous successful operations are not rolled back. pub struct Transaction<'repo> { raw: *mut raw::git_transaction, _marker: marker::PhantomData<&'repo Repository>, } impl Drop for Transaction<'_> { fn drop(&mut self) { unsafe { raw::git_transaction_free(self.raw) } } } impl<'repo> Binding for Transaction<'repo> { type Raw = *mut raw::git_transaction; unsafe fn from_raw(ptr: *mut raw::git_transaction) -> Transaction<'repo> { Transaction { raw: ptr, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_transaction { self.raw } } impl<'repo> Transaction<'repo> { /// Lock the specified reference by name. pub fn lock_ref(&mut self, refname: &str) -> Result<(), Error> { let refname = CString::new(refname).unwrap(); unsafe { try_call!(raw::git_transaction_lock_ref(self.raw, refname)); } Ok(()) } /// Set the target of the specified reference. /// /// The reference must have been locked via `lock_ref`. /// /// If `reflog_signature` is `None`, the [`Signature`] is read from the /// repository config. pub fn set_target( &mut self, refname: &str, target: Oid, reflog_signature: Option<&Signature<'_>>, reflog_message: &str, ) -> Result<(), Error> { let refname = CString::new(refname).unwrap(); let reflog_message = CString::new(reflog_message).unwrap(); unsafe { try_call!(raw::git_transaction_set_target( self.raw, refname, target.raw(), reflog_signature.map(|s| s.raw()), reflog_message )); } Ok(()) } /// Set the target of the specified symbolic reference. /// /// The reference must have been locked via `lock_ref`. /// /// If `reflog_signature` is `None`, the [`Signature`] is read from the /// repository config. pub fn set_symbolic_target( &mut self, refname: &str, target: &str, reflog_signature: Option<&Signature<'_>>, reflog_message: &str, ) -> Result<(), Error> { let refname = CString::new(refname).unwrap(); let target = CString::new(target).unwrap(); let reflog_message = CString::new(reflog_message).unwrap(); unsafe { try_call!(raw::git_transaction_set_symbolic_target( self.raw, refname, target, reflog_signature.map(|s| s.raw()), reflog_message )); } Ok(()) } /// Add a [`Reflog`] to the transaction. /// /// This commit the in-memory [`Reflog`] to disk when the transaction commits. /// Note that atomicty is **not* guaranteed: if the transaction fails to /// modify `refname`, the reflog may still have been comitted to disk. /// /// If this is combined with setting the target, that update won't be /// written to the log (ie. the `reflog_signature` and `reflog_message` /// parameters will be ignored). pub fn set_reflog(&mut self, refname: &str, reflog: Reflog) -> Result<(), Error> { let refname = CString::new(refname).unwrap(); unsafe { try_call!(raw::git_transaction_set_reflog( self.raw, refname, reflog.raw() )); } Ok(()) } /// Remove a reference. /// /// The reference must have been locked via `lock_ref`. 
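    ///
    /// A minimal, illustrative sketch (not from the upstream docs) of deleting
    /// a reference transactionally; the reference name is a placeholder and
    /// `repo` is assumed to be an open repository.
    ///
    /// ```no_run
    /// fn delete_ref(repo: &git2::Repository) -> Result<(), git2::Error> {
    ///     let mut tx = repo.transaction()?;
    ///     tx.lock_ref("refs/heads/old-branch")?;
    ///     tx.remove("refs/heads/old-branch")?;
    ///     tx.commit()
    /// }
    /// ```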
pub fn remove(&mut self, refname: &str) -> Result<(), Error> { let refname = CString::new(refname).unwrap(); unsafe { try_call!(raw::git_transaction_remove(self.raw, refname)); } Ok(()) } /// Commit the changes from the transaction. /// /// The updates will be made one by one, and the first failure will stop the /// processing. pub fn commit(self) -> Result<(), Error> { unsafe { try_call!(raw::git_transaction_commit(self.raw)); } Ok(()) } } #[cfg(test)] mod tests { use crate::{Error, ErrorClass, ErrorCode, Oid, Repository}; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let mut tx = t!(repo.transaction()); t!(tx.lock_ref("refs/heads/main")); t!(tx.lock_ref("refs/heads/next")); t!(tx.set_target("refs/heads/main", Oid::zero(), None, "set main to zero")); t!(tx.set_symbolic_target( "refs/heads/next", "refs/heads/main", None, "set next to main", )); t!(tx.commit()); assert_eq!(repo.refname_to_id("refs/heads/main").unwrap(), Oid::zero()); assert_eq!( repo.find_reference("refs/heads/next") .unwrap() .symbolic_target() .unwrap(), "refs/heads/main" ); } #[test] fn locks_same_repo_handle() { let (_td, repo) = crate::test::repo_init(); let mut tx1 = t!(repo.transaction()); t!(tx1.lock_ref("refs/heads/seen")); let mut tx2 = t!(repo.transaction()); assert!(matches!(tx2.lock_ref("refs/heads/seen"), Err(e) if e.code() == ErrorCode::Locked)) } #[test] fn locks_across_repo_handles() { let (td, repo1) = crate::test::repo_init(); let repo2 = t!(Repository::open(&td)); let mut tx1 = t!(repo1.transaction()); t!(tx1.lock_ref("refs/heads/seen")); let mut tx2 = t!(repo2.transaction()); assert!(matches!(tx2.lock_ref("refs/heads/seen"), Err(e) if e.code() == ErrorCode::Locked)) } #[test] fn drop_unlocks() { let (_td, repo) = crate::test::repo_init(); let mut tx = t!(repo.transaction()); t!(tx.lock_ref("refs/heads/seen")); drop(tx); let mut tx2 = t!(repo.transaction()); t!(tx2.lock_ref("refs/heads/seen")) } #[test] fn commit_unlocks() { let (_td, repo) = crate::test::repo_init(); let mut tx = t!(repo.transaction()); t!(tx.lock_ref("refs/heads/seen")); t!(tx.commit()); let mut tx2 = t!(repo.transaction()); t!(tx2.lock_ref("refs/heads/seen")); } #[test] fn prevents_non_transactional_updates() { let (_td, repo) = crate::test::repo_init(); let head = t!(repo.refname_to_id("HEAD")); let mut tx = t!(repo.transaction()); t!(tx.lock_ref("refs/heads/seen")); assert!(matches!( repo.reference("refs/heads/seen", head, true, "competing with lock"), Err(e) if e.code() == ErrorCode::Locked )); } #[test] fn remove() { let (_td, repo) = crate::test::repo_init(); let head = t!(repo.refname_to_id("HEAD")); let next = "refs/heads/next"; t!(repo.reference( next, head, true, "refs/heads/next@{0}: branch: Created from HEAD" )); { let mut tx = t!(repo.transaction()); t!(tx.lock_ref(next)); t!(tx.remove(next)); t!(tx.commit()); } assert!(matches!(repo.refname_to_id(next), Err(e) if e.code() == ErrorCode::NotFound)) } #[test] fn must_lock_ref() { let (_td, repo) = crate::test::repo_init(); // 🤷 fn is_not_locked_err(e: &Error) -> bool { e.code() == ErrorCode::NotFound && e.class() == ErrorClass::Reference && e.message() == "the specified reference is not locked" } let mut tx = t!(repo.transaction()); assert!(matches!( tx.set_target("refs/heads/main", Oid::zero(), None, "set main to zero"), Err(e) if is_not_locked_err(&e) )) } } vendor/git2/src/status.rs0000664000175000017500000003462414160055207016205 0ustar mwhudsonmwhudsonuse libc::{c_char, c_uint, size_t}; use std::ffi::CString; use std::marker; use std::mem; use 
std::ops::Range; use std::str; use crate::util::{self, Binding}; use crate::{raw, DiffDelta, IntoCString, Repository, Status}; /// Options that can be provided to `repo.statuses()` to control how the status /// information is gathered. pub struct StatusOptions { raw: raw::git_status_options, pathspec: Vec, ptrs: Vec<*const c_char>, } /// Enumeration of possible methods of what can be shown through a status /// operation. #[derive(Copy, Clone)] pub enum StatusShow { /// Only gives status based on HEAD to index comparison, not looking at /// working directory changes. Index, /// Only gives status based on index to working directory comparison, not /// comparing the index to the HEAD. Workdir, /// The default, this roughly matches `git status --porcelain` regarding /// which files are included and in what order. IndexAndWorkdir, } /// A container for a list of status information about a repository. /// /// Each instance appears as if it were a collection, having a length and /// allowing indexing, as well as providing an iterator. pub struct Statuses<'repo> { raw: *mut raw::git_status_list, // Hm, not currently present, but can't hurt? _marker: marker::PhantomData<&'repo Repository>, } /// An iterator over the statuses in a `Statuses` instance. pub struct StatusIter<'statuses> { statuses: &'statuses Statuses<'statuses>, range: Range, } /// A structure representing an entry in the `Statuses` structure. /// /// Instances are created through the `.iter()` method or the `.get()` method. pub struct StatusEntry<'statuses> { raw: *const raw::git_status_entry, _marker: marker::PhantomData<&'statuses DiffDelta<'statuses>>, } impl Default for StatusOptions { fn default() -> Self { Self::new() } } impl StatusOptions { /// Creates a new blank set of status options. pub fn new() -> StatusOptions { unsafe { let mut raw = mem::zeroed(); let r = raw::git_status_init_options(&mut raw, raw::GIT_STATUS_OPTIONS_VERSION); assert_eq!(r, 0); StatusOptions { raw, pathspec: Vec::new(), ptrs: Vec::new(), } } } /// Select the files on which to report status. /// /// The default, if unspecified, is to show the index and the working /// directory. pub fn show(&mut self, show: StatusShow) -> &mut StatusOptions { self.raw.show = match show { StatusShow::Index => raw::GIT_STATUS_SHOW_INDEX_ONLY, StatusShow::Workdir => raw::GIT_STATUS_SHOW_WORKDIR_ONLY, StatusShow::IndexAndWorkdir => raw::GIT_STATUS_SHOW_INDEX_AND_WORKDIR, }; self } /// Add a path pattern to match (using fnmatch-style matching). /// /// If the `disable_pathspec_match` option is given, then this is a literal /// path to match. If this is not called, then there will be no patterns to /// match and the entire directory will be used. pub fn pathspec(&mut self, pathspec: T) -> &mut StatusOptions { let s = util::cstring_to_repo_path(pathspec).unwrap(); self.ptrs.push(s.as_ptr()); self.pathspec.push(s); self } fn flag(&mut self, flag: raw::git_status_opt_t, val: bool) -> &mut StatusOptions { if val { self.raw.flags |= flag as c_uint; } else { self.raw.flags &= !(flag as c_uint); } self } /// Flag whether untracked files will be included. /// /// Untracked files will only be included if the workdir files are included /// in the status "show" option. pub fn include_untracked(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_INCLUDE_UNTRACKED, include) } /// Flag whether ignored files will be included. /// /// The files will only be included if the workdir files are included /// in the status "show" option. 
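    ///
    /// A minimal, illustrative sketch (not from the upstream docs) of
    /// combining status flags; `repo` is assumed to be an open repository.
    ///
    /// ```no_run
    /// fn list_changes(repo: &git2::Repository) -> Result<(), git2::Error> {
    ///     let mut opts = git2::StatusOptions::new();
    ///     opts.include_untracked(true).include_ignored(true);
    ///     let statuses = repo.statuses(Some(&mut opts))?;
    ///     for entry in statuses.iter() {
    ///         println!("{:?}: {:?}", entry.path(), entry.status());
    ///     }
    ///     Ok(())
    /// }
    /// ```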
pub fn include_ignored(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_INCLUDE_IGNORED, include) } /// Flag to include unmodified files. pub fn include_unmodified(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_INCLUDE_UNMODIFIED, include) } /// Flag that submodules should be skipped. /// /// This only applies if there are no pending typechanges to the submodule /// (either from or to another type). pub fn exclude_submodules(&mut self, exclude: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_EXCLUDE_SUBMODULES, exclude) } /// Flag that all files in untracked directories should be included. /// /// Normally if an entire directory is new then just the top-level directory /// is included (with a trailing slash on the entry name). pub fn recurse_untracked_dirs(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_RECURSE_UNTRACKED_DIRS, include) } /// Indicates that the given paths should be treated as literals paths, note /// patterns. pub fn disable_pathspec_match(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_DISABLE_PATHSPEC_MATCH, include) } /// Indicates that the contents of ignored directories should be included in /// the status. pub fn recurse_ignored_dirs(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_RECURSE_IGNORED_DIRS, include) } /// Indicates that rename detection should be processed between the head. pub fn renames_head_to_index(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_RENAMES_HEAD_TO_INDEX, include) } /// Indicates that rename detection should be run between the index and the /// working directory. pub fn renames_index_to_workdir(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_RENAMES_INDEX_TO_WORKDIR, include) } /// Override the native case sensitivity for the file system and force the /// output to be in case sensitive order. pub fn sort_case_sensitively(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_SORT_CASE_SENSITIVELY, include) } /// Override the native case sensitivity for the file system and force the /// output to be in case-insensitive order. pub fn sort_case_insensitively(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_SORT_CASE_INSENSITIVELY, include) } /// Indicates that rename detection should include rewritten files. pub fn renames_from_rewrites(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_RENAMES_FROM_REWRITES, include) } /// Bypasses the default status behavior of doing a "soft" index reload. pub fn no_refresh(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_NO_REFRESH, include) } /// Refresh the stat cache in the index for files are unchanged but have /// out of date stat information in the index. /// /// This will result in less work being done on subsequent calls to fetching /// the status. pub fn update_index(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_UPDATE_INDEX, include) } // erm... #[allow(missing_docs)] pub fn include_unreadable(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_INCLUDE_UNREADABLE, include) } // erm... 
#[allow(missing_docs)] pub fn include_unreadable_as_untracked(&mut self, include: bool) -> &mut StatusOptions { self.flag(raw::GIT_STATUS_OPT_INCLUDE_UNREADABLE_AS_UNTRACKED, include) } /// Get a pointer to the inner list of status options. /// /// This function is unsafe as the returned structure has interior pointers /// and may no longer be valid if these options continue to be mutated. pub unsafe fn raw(&mut self) -> *const raw::git_status_options { self.raw.pathspec.strings = self.ptrs.as_ptr() as *mut _; self.raw.pathspec.count = self.ptrs.len() as size_t; &self.raw } } impl<'repo> Statuses<'repo> { /// Gets a status entry from this list at the specified index. /// /// Returns `None` if the index is out of bounds. pub fn get(&self, index: usize) -> Option> { unsafe { let p = raw::git_status_byindex(self.raw, index as size_t); Binding::from_raw_opt(p) } } /// Gets the count of status entries in this list. /// /// If there are no changes in status (according to the options given /// when the status list was created), this should return 0. pub fn len(&self) -> usize { unsafe { raw::git_status_list_entrycount(self.raw) as usize } } /// Return `true` if there is no status entry in this list. pub fn is_empty(&self) -> bool { self.len() == 0 } /// Returns an iterator over the statuses in this list. pub fn iter(&self) -> StatusIter<'_> { StatusIter { statuses: self, range: 0..self.len(), } } } impl<'repo> Binding for Statuses<'repo> { type Raw = *mut raw::git_status_list; unsafe fn from_raw(raw: *mut raw::git_status_list) -> Statuses<'repo> { Statuses { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_status_list { self.raw } } impl<'repo> Drop for Statuses<'repo> { fn drop(&mut self) { unsafe { raw::git_status_list_free(self.raw); } } } impl<'a> Iterator for StatusIter<'a> { type Item = StatusEntry<'a>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.statuses.get(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'a> DoubleEndedIterator for StatusIter<'a> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.statuses.get(i)) } } impl<'a> ExactSizeIterator for StatusIter<'a> {} impl<'statuses> StatusEntry<'statuses> { /// Access the bytes for this entry's corresponding pathname pub fn path_bytes(&self) -> &[u8] { unsafe { if (*self.raw).head_to_index.is_null() { crate::opt_bytes(self, (*(*self.raw).index_to_workdir).old_file.path) } else { crate::opt_bytes(self, (*(*self.raw).head_to_index).old_file.path) } .unwrap() } } /// Access this entry's path name as a string. /// /// Returns `None` if the path is not valid utf-8. pub fn path(&self) -> Option<&str> { str::from_utf8(self.path_bytes()).ok() } /// Access the status flags for this file pub fn status(&self) -> Status { Status::from_bits_truncate(unsafe { (*self.raw).status as u32 }) } /// Access detailed information about the differences between the file in /// HEAD and the file in the index. pub fn head_to_index(&self) -> Option> { unsafe { Binding::from_raw_opt((*self.raw).head_to_index) } } /// Access detailed information about the differences between the file in /// the index and the file in the working directory. 
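    ///
    /// A minimal, illustrative sketch (not from the upstream docs): printing
    /// the old and new paths recorded for a single status entry.
    ///
    /// ```no_run
    /// fn show_workdir_change(entry: &git2::StatusEntry<'_>) {
    ///     if let Some(delta) = entry.index_to_workdir() {
    ///         println!("{:?} -> {:?}", delta.old_file().path(), delta.new_file().path());
    ///     }
    /// }
    /// ```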
pub fn index_to_workdir(&self) -> Option> { unsafe { Binding::from_raw_opt((*self.raw).index_to_workdir) } } } impl<'statuses> Binding for StatusEntry<'statuses> { type Raw = *const raw::git_status_entry; unsafe fn from_raw(raw: *const raw::git_status_entry) -> StatusEntry<'statuses> { StatusEntry { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_status_entry { self.raw } } #[cfg(test)] mod tests { use super::StatusOptions; use std::fs::File; use std::io::prelude::*; use std::path::Path; #[test] fn smoke() { let (td, repo) = crate::test::repo_init(); assert_eq!(repo.statuses(None).unwrap().len(), 0); File::create(&td.path().join("foo")).unwrap(); let statuses = repo.statuses(None).unwrap(); assert_eq!(statuses.iter().count(), 1); let status = statuses.iter().next().unwrap(); assert_eq!(status.path(), Some("foo")); assert!(status.status().contains(crate::Status::WT_NEW)); assert!(!status.status().contains(crate::Status::INDEX_NEW)); assert!(status.head_to_index().is_none()); let diff = status.index_to_workdir().unwrap(); assert_eq!(diff.old_file().path_bytes().unwrap(), b"foo"); assert_eq!(diff.new_file().path_bytes().unwrap(), b"foo"); } #[test] fn filter() { let (td, repo) = crate::test::repo_init(); t!(File::create(&td.path().join("foo"))); t!(File::create(&td.path().join("bar"))); let mut opts = StatusOptions::new(); opts.include_untracked(true).pathspec("foo"); let statuses = t!(repo.statuses(Some(&mut opts))); assert_eq!(statuses.iter().count(), 1); let status = statuses.iter().next().unwrap(); assert_eq!(status.path(), Some("foo")); } #[test] fn gitignore() { let (td, repo) = crate::test::repo_init(); t!(t!(File::create(td.path().join(".gitignore"))).write_all(b"foo\n")); assert!(!t!(repo.status_should_ignore(Path::new("bar")))); assert!(t!(repo.status_should_ignore(Path::new("foo")))); } #[test] fn status_file() { let (td, repo) = crate::test::repo_init(); assert!(repo.status_file(Path::new("foo")).is_err()); if cfg!(windows) { assert!(repo.status_file(Path::new("bar\\foo.txt")).is_err()); } t!(File::create(td.path().join("foo"))); if cfg!(windows) { t!(::std::fs::create_dir_all(td.path().join("bar"))); t!(File::create(td.path().join("bar").join("foo.txt"))); } let status = t!(repo.status_file(Path::new("foo"))); assert!(status.contains(crate::Status::WT_NEW)); if cfg!(windows) { let status = t!(repo.status_file(Path::new("bar\\foo.txt"))); assert!(status.contains(crate::Status::WT_NEW)); } } } vendor/git2/src/cred.rs0000664000175000017500000006057214160055207015600 0ustar mwhudsonmwhudsonuse log::{debug, trace}; use std::ffi::CString; use std::io::Write; use std::mem; use std::path::Path; use std::process::{Command, Stdio}; use std::ptr; use url; use crate::util::Binding; use crate::{raw, Config, Error, IntoCString}; /// A structure to represent git credentials in libgit2. pub struct Cred { raw: *mut raw::git_cred, } /// Management of the gitcredentials(7) interface. pub struct CredentialHelper { /// A public field representing the currently discovered username from /// configuration. pub username: Option, protocol: Option, host: Option, port: Option, path: Option, url: String, commands: Vec, } impl Cred { /// Create a "default" credential usable for Negotiate mechanisms like NTLM /// or Kerberos authentication. pub fn default() -> Result { crate::init(); let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_cred_default_new(&mut out)); Ok(Binding::from_raw(out)) } } /// Create a new ssh key credential object used for querying an ssh-agent. 
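    ///
    /// A minimal, illustrative sketch (not from the upstream docs) of using
    /// this constructor from a credentials callback; the `"git"` fallback user
    /// name is an assumption for ssh remotes, not part of this API.
    ///
    /// ```no_run
    /// let mut callbacks = git2::RemoteCallbacks::new();
    /// callbacks.credentials(|_url, username_from_url, _allowed| {
    ///     git2::Cred::ssh_key_from_agent(username_from_url.unwrap_or("git"))
    /// });
    /// ```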
/// /// The username specified is the username to authenticate. pub fn ssh_key_from_agent(username: &str) -> Result { crate::init(); let mut out = ptr::null_mut(); let username = CString::new(username)?; unsafe { try_call!(raw::git_cred_ssh_key_from_agent(&mut out, username)); Ok(Binding::from_raw(out)) } } /// Create a new passphrase-protected ssh key credential object. pub fn ssh_key( username: &str, publickey: Option<&Path>, privatekey: &Path, passphrase: Option<&str>, ) -> Result { crate::init(); let username = CString::new(username)?; let publickey = crate::opt_cstr(publickey)?; let privatekey = privatekey.into_c_string()?; let passphrase = crate::opt_cstr(passphrase)?; let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_cred_ssh_key_new( &mut out, username, publickey, privatekey, passphrase )); Ok(Binding::from_raw(out)) } } /// Create a new ssh key credential object reading the keys from memory. pub fn ssh_key_from_memory( username: &str, publickey: Option<&str>, privatekey: &str, passphrase: Option<&str>, ) -> Result { crate::init(); let username = CString::new(username)?; let publickey = crate::opt_cstr(publickey)?; let privatekey = CString::new(privatekey)?; let passphrase = crate::opt_cstr(passphrase)?; let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_cred_ssh_key_memory_new( &mut out, username, publickey, privatekey, passphrase )); Ok(Binding::from_raw(out)) } } /// Create a new plain-text username and password credential object. pub fn userpass_plaintext(username: &str, password: &str) -> Result { crate::init(); let username = CString::new(username)?; let password = CString::new(password)?; let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_cred_userpass_plaintext_new( &mut out, username, password )); Ok(Binding::from_raw(out)) } } /// Attempt to read `credential.helper` according to gitcredentials(7) [1] /// /// This function will attempt to parse the user's `credential.helper` /// configuration, invoke the necessary processes, and read off what the /// username/password should be for a particular url. /// /// The returned credential type will be a username/password credential if /// successful. /// /// [1]: https://www.kernel.org/pub/software/scm/git/docs/gitcredentials.html pub fn credential_helper( config: &Config, url: &str, username: Option<&str>, ) -> Result { match CredentialHelper::new(url) .config(config) .username(username) .execute() { Some((username, password)) => Cred::userpass_plaintext(&username, &password), None => Err(Error::from_str( "failed to acquire username/password \ from local configuration", )), } } /// Create a credential to specify a username. /// /// This is used with ssh authentication to query for the username if none is /// specified in the url. pub fn username(username: &str) -> Result { crate::init(); let username = CString::new(username)?; let mut out = ptr::null_mut(); unsafe { try_call!(raw::git_cred_username_new(&mut out, username)); Ok(Binding::from_raw(out)) } } /// Check whether a credential object contains username information. pub fn has_username(&self) -> bool { unsafe { raw::git_cred_has_username(self.raw) == 1 } } /// Return the type of credentials that this object represents. 
pub fn credtype(&self) -> raw::git_credtype_t { unsafe { (*self.raw).credtype } } /// Unwrap access to the underlying raw pointer, canceling the destructor pub unsafe fn unwrap(mut self) -> *mut raw::git_cred { mem::replace(&mut self.raw, ptr::null_mut()) } } impl Binding for Cred { type Raw = *mut raw::git_cred; unsafe fn from_raw(raw: *mut raw::git_cred) -> Cred { Cred { raw } } fn raw(&self) -> *mut raw::git_cred { self.raw } } impl Drop for Cred { fn drop(&mut self) { if !self.raw.is_null() { unsafe { if let Some(f) = (*self.raw).free { f(self.raw) } } } } } impl CredentialHelper { /// Create a new credential helper object which will be used to probe git's /// local credential configuration. /// /// The url specified is the namespace on which this will query credentials. /// Invalid urls are currently ignored. pub fn new(url: &str) -> CredentialHelper { let mut ret = CredentialHelper { protocol: None, host: None, port: None, path: None, username: None, url: url.to_string(), commands: Vec::new(), }; // Parse out the (protocol, host) if one is available if let Ok(url) = url::Url::parse(url) { if let Some(url::Host::Domain(s)) = url.host() { ret.host = Some(s.to_string()); } ret.port = url.port(); ret.protocol = Some(url.scheme().to_string()); } ret } /// Set the username that this credential helper will query with. /// /// By default the username is `None`. pub fn username(&mut self, username: Option<&str>) -> &mut CredentialHelper { self.username = username.map(|s| s.to_string()); self } /// Query the specified configuration object to discover commands to /// execute, usernames to query, etc. pub fn config(&mut self, config: &Config) -> &mut CredentialHelper { // Figure out the configured username/helper program. // // see http://git-scm.com/docs/gitcredentials.html#_configuration_options if self.username.is_none() { self.config_username(config); } self.config_helper(config); self.config_use_http_path(config); self } // Configure the queried username from `config` fn config_username(&mut self, config: &Config) { let key = self.exact_key("username"); self.username = config .get_string(&key) .ok() .or_else(|| { self.url_key("username") .and_then(|s| config.get_string(&s).ok()) }) .or_else(|| config.get_string("credential.username").ok()) } // Discover all `helper` directives from `config` fn config_helper(&mut self, config: &Config) { let exact = config.get_string(&self.exact_key("helper")); self.add_command(exact.as_ref().ok().map(|s| &s[..])); if let Some(key) = self.url_key("helper") { let url = config.get_string(&key); self.add_command(url.as_ref().ok().map(|s| &s[..])); } let global = config.get_string("credential.helper"); self.add_command(global.as_ref().ok().map(|s| &s[..])); } // Discover `useHttpPath` from `config` fn config_use_http_path(&mut self, config: &Config) { let mut use_http_path = false; if let Some(value) = config.get_bool(&self.exact_key("useHttpPath")).ok() { use_http_path = value; } else if let Some(value) = self .url_key("useHttpPath") .and_then(|key| config.get_bool(&key).ok()) { use_http_path = value; } else if let Some(value) = config.get_bool("credential.useHttpPath").ok() { use_http_path = value; } if use_http_path { if let Ok(url) = url::Url::parse(&self.url) { let path = url.path(); // Url::parse always includes a leading slash for rooted URLs, while git does not. self.path = Some(path.strip_prefix('/').unwrap_or(path).to_string()); } } } // Add a `helper` configured command to the list of commands to execute. 
// // see https://www.kernel.org/pub/software/scm/git/docs/technical // /api-credentials.html#_credential_helpers fn add_command(&mut self, cmd: Option<&str>) { let cmd = match cmd { Some("") | None => return, Some(s) => s, }; if cmd.starts_with('!') { self.commands.push(cmd[1..].to_string()); } else if cmd.contains("/") || cmd.contains("\\") { self.commands.push(cmd.to_string()); } else { self.commands.push(format!("git credential-{}", cmd)); } } fn exact_key(&self, name: &str) -> String { format!("credential.{}.{}", self.url, name) } fn url_key(&self, name: &str) -> Option { match (&self.host, &self.protocol) { (&Some(ref host), &Some(ref protocol)) => { Some(format!("credential.{}://{}.{}", protocol, host, name)) } _ => None, } } /// Execute this helper, attempting to discover a username/password pair. /// /// All I/O errors are ignored, (to match git behavior), and this function /// only succeeds if both a username and a password were found pub fn execute(&self) -> Option<(String, String)> { let mut username = self.username.clone(); let mut password = None; for cmd in &self.commands { let (u, p) = self.execute_cmd(cmd, &username); if u.is_some() && username.is_none() { username = u; } if p.is_some() && password.is_none() { password = p; } if username.is_some() && password.is_some() { break; } } match (username, password) { (Some(u), Some(p)) => Some((u, p)), _ => None, } } // Execute the given `cmd`, providing the appropriate variables on stdin and // then afterwards parsing the output into the username/password on stdout. fn execute_cmd( &self, cmd: &str, username: &Option, ) -> (Option, Option) { macro_rules! my_try( ($e:expr) => ( match $e { Ok(e) => e, Err(e) => { debug!("{} failed with {}", stringify!($e), e); return (None, None) } } ) ); // It looks like the `cmd` specification is typically bourne-shell-like // syntax, so try that first. If that fails, though, we may be on a // Windows machine for example where `sh` isn't actually available by // default. Most credential helper configurations though are pretty // simple (aka one or two space-separated strings) so also try to invoke // the process directly. // // If that fails then it's up to the user to put `sh` in path and make // sure it works. 
let mut c = Command::new("sh"); c.arg("-c") .arg(&format!("{} get", cmd)) .stdin(Stdio::piped()) .stdout(Stdio::piped()) .stderr(Stdio::piped()); debug!("executing credential helper {:?}", c); let mut p = match c.spawn() { Ok(p) => p, Err(e) => { debug!("`sh` failed to spawn: {}", e); let mut parts = cmd.split_whitespace(); let mut c = Command::new(parts.next().unwrap()); for arg in parts { c.arg(arg); } c.arg("get") .stdin(Stdio::piped()) .stdout(Stdio::piped()) .stderr(Stdio::piped()); debug!("executing credential helper {:?}", c); match c.spawn() { Ok(p) => p, Err(e) => { debug!("fallback of {:?} failed with {}", cmd, e); return (None, None); } } } }; // Ignore write errors as the command may not actually be listening for // stdin { let stdin = p.stdin.as_mut().unwrap(); if let Some(ref p) = self.protocol { let _ = writeln!(stdin, "protocol={}", p); } if let Some(ref p) = self.host { if let Some(ref p2) = self.port { let _ = writeln!(stdin, "host={}:{}", p, p2); } else { let _ = writeln!(stdin, "host={}", p); } } if let Some(ref p) = self.path { let _ = writeln!(stdin, "path={}", p); } if let Some(ref p) = *username { let _ = writeln!(stdin, "username={}", p); } } let output = my_try!(p.wait_with_output()); if !output.status.success() { debug!( "credential helper failed: {}\nstdout ---\n{}\nstdout ---\n{}", output.status, String::from_utf8_lossy(&output.stdout), String::from_utf8_lossy(&output.stderr) ); return (None, None); } trace!( "credential helper stderr ---\n{}", String::from_utf8_lossy(&output.stderr) ); self.parse_output(output.stdout) } // Parse the output of a command into the username/password found fn parse_output(&self, output: Vec) -> (Option, Option) { // Parse the output of the command, looking for username/password let mut username = None; let mut password = None; for line in output.split(|t| *t == b'\n') { let mut parts = line.splitn(2, |t| *t == b'='); let key = parts.next().unwrap(); let value = match parts.next() { Some(s) => s, None => { trace!("ignoring output line: {}", String::from_utf8_lossy(line)); continue; } }; let value = match String::from_utf8(value.to_vec()) { Ok(s) => s, Err(..) => continue, }; match key { b"username" => username = Some(value), b"password" => password = Some(value), _ => {} } } (username, password) } } #[cfg(test)] mod test { use std::env; use std::fs::File; use std::io::prelude::*; use std::path::Path; use tempfile::TempDir; use crate::{Config, ConfigLevel, Cred, CredentialHelper}; macro_rules! test_cfg( ($($k:expr => $v:expr),*) => ({ let td = TempDir::new().unwrap(); let mut cfg = Config::new().unwrap(); cfg.add_file(&td.path().join("cfg"), ConfigLevel::Highest, false).unwrap(); $(cfg.set_str($k, $v).unwrap();)* cfg }) ); #[test] fn smoke() { Cred::default().unwrap(); } #[test] fn credential_helper1() { let cfg = test_cfg! { "credential.helper" => "!f() { echo username=a; echo password=b; }; f" }; let (u, p) = CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "a"); assert_eq!(p, "b"); } #[test] fn credential_helper2() { let cfg = test_cfg! {}; assert!(CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .is_none()); } #[test] fn credential_helper3() { let cfg = test_cfg! 
{ "credential.https://example.com.helper" => "!f() { echo username=c; }; f", "credential.helper" => "!f() { echo username=a; echo password=b; }; f" }; let (u, p) = CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "c"); assert_eq!(p, "b"); } #[test] fn credential_helper4() { if cfg!(windows) { return; } // shell scripts don't work on Windows let td = TempDir::new().unwrap(); let path = td.path().join("script"); File::create(&path) .unwrap() .write( br"\ #!/bin/sh echo username=c ", ) .unwrap(); chmod(&path); let cfg = test_cfg! { "credential.https://example.com.helper" => &path.display().to_string()[..], "credential.helper" => "!f() { echo username=a; echo password=b; }; f" }; let (u, p) = CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "c"); assert_eq!(p, "b"); } #[test] fn credential_helper5() { if !Path::new("/usr/bin/git").exists() { return; } //this test does not work if git is not installed if cfg!(windows) { return; } // shell scripts don't work on Windows let td = TempDir::new().unwrap(); let path = td.path().join("git-credential-script"); File::create(&path) .unwrap() .write( br"\ #!/bin/sh echo username=c ", ) .unwrap(); chmod(&path); let paths = env::var("PATH").unwrap(); let paths = env::split_paths(&paths).chain(path.parent().map(|p| p.to_path_buf()).into_iter()); env::set_var("PATH", &env::join_paths(paths).unwrap()); let cfg = test_cfg! { "credential.https://example.com.helper" => "script", "credential.helper" => "!f() { echo username=a; echo password=b; }; f" }; let (u, p) = CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "c"); assert_eq!(p, "b"); } #[test] fn credential_helper6() { let cfg = test_cfg! { "credential.helper" => "" }; assert!(CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .is_none()); } #[test] fn credential_helper7() { if cfg!(windows) { return; } // shell scripts don't work on Windows let td = TempDir::new().unwrap(); let path = td.path().join("script"); File::create(&path) .unwrap() .write( br"\ #!/bin/sh echo username=$1 echo password=$2 ", ) .unwrap(); chmod(&path); let cfg = test_cfg! { "credential.helper" => &format!("{} a b", path.display()) }; let (u, p) = CredentialHelper::new("https://example.com/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "a"); assert_eq!(p, "b"); } #[test] fn credential_helper8() { let cfg = test_cfg! { "credential.useHttpPath" => "true" }; let mut helper = CredentialHelper::new("https://example.com/foo/bar"); helper.config(&cfg); assert_eq!(helper.path.as_deref(), Some("foo/bar")); } #[test] fn credential_helper9() { let cfg = test_cfg! 
{ "credential.helper" => "!f() { while read line; do eval $line; done; if [ \"$host\" = example.com:3000 ]; then echo username=a; echo password=b; fi; }; f" }; let (u, p) = CredentialHelper::new("https://example.com:3000/foo/bar") .config(&cfg) .execute() .unwrap(); assert_eq!(u, "a"); assert_eq!(p, "b"); } #[test] #[cfg(feature = "ssh")] fn ssh_key_from_memory() { let cred = Cred::ssh_key_from_memory( "test", Some("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDByAO8uj+kXicj6C2ODMspgmUoVyl5eaw8vR6a1yEnFuJFzevabNlN6Ut+CPT3TRnYk5BW73pyXBtnSL2X95BOnbjMDXc4YIkgs3YYHWnxbqsD4Pj/RoGqhf+gwhOBtL0poh8tT8WqXZYxdJQKLQC7oBqf3ykCEYulE4oeRUmNh4IzEE+skD/zDkaJ+S1HRD8D8YCiTO01qQnSmoDFdmIZTi8MS8Cw+O/Qhym1271ThMlhD6PubSYJXfE6rVbE7A9RzH73A6MmKBlzK8VTb4SlNSrr/DOk+L0uq+wPkv+pm+D9WtxoqQ9yl6FaK1cPawa3+7yRNle3m+72KCtyMkQv"), r#" -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,818C7722D3B01F2161C2ACF6A5BBAAE8 3Cht4QB3PcoQ0I55j1B3m2ZzIC/mrh+K5nQeA1Vy2GBTMyM7yqGHqTOv7qLhJscd H+cB0Pm6yCr3lYuNrcKWOCUto+91P7ikyARruHVwyIxKdNx15uNulOzQJHQWNbA4 RQHlhjON4atVo2FyJ6n+ujK6QiBg2PR5Vbbw/AtV6zBCFW3PhzDn+qqmHjpBFqj2 vZUUe+MkDQcaF5J45XMHahhSdo/uKCDhfbylExp/+ACWkvxdPpsvcARM6X434ucD aPY+4i0/JyLkdbm0GFN9/q3i53qf4kCBhojFl4AYJdGI0AzAgbdTXZ7EJHbAGZHS os5K0oTwDVXMI0sSE2I/qHxaZZsDP1dOKq6di6SFPUp8liYimm7rNintRX88Gl2L g1ko9abp/NlgD0YY/3mad+NNAISDL/YfXq2fklH3En3/7ZrOVZFKfZXwQwas5g+p VQPKi3+ae74iOjLyuPDSc1ePmhUNYeP+9rLSc0wiaiHqls+2blPPDxAGMEo63kbz YPVjdmuVX4VWnyEsfTxxJdFDYGSNh6rlrrO1RFrex7kJvpg5gTX4M/FT8TfCd7Hn M6adXsLMqwu5tz8FuDmAtVdq8zdSrgZeAbpJ9D3EDOmZ70xz4XBL19ImxDp+Qqs2 kQX7kobRzeeP2URfRoGr7XZikQWyQ2UASfPcQULY8R58QoZWWsQ4w51GZHg7TDnw 1DRo/0OgkK7Gqf215nFmMpB4uyi58cq3WFwWQa1IqslkObpVgBQZcNZb/hKUYPGk g4zehfIgAfCdnQHwZvQ6Fdzhcs3SZeO+zVyuiZN3Gsi9HU0/1vpAKiuuOzcG02vF b6Y6hwsAA9yphF3atI+ARD4ZwXdDfzuGb3yJglMT3Fr/xuLwAvdchRo1spANKA0E tT5okLrK0H4wnHvf2SniVVWRhmJis0lQo9LjGGwRIdsPpVnJSDvaISIVF+fHT90r HvxN8zXI93x9jcPtwp7puQ1C7ehKJK10sZ71OLIZeuUgwt+5DRunqg6evPco9Go7 UOGwcVhLY200KT+1k7zWzCS0yVQp2HRm6cxsZXAp4ClBSwIx15eIoLIrjZdJRjCq COp6pZx1fnvJ9ERIvl5hon+Ty+renMcFKz2HmchC7egpcqIxW9Dsv6zjhHle6pxb 37GaEKHF2KA3RN+dSV/K8n+C9Yent5tx5Y9a/pMcgRGtgu+G+nyFmkPKn5Zt39yX qDpyM0LtbRVZPs+MgiqoGIwYc/ujoCq7GL38gezsBQoHaTt79yYBqCp6UR0LMuZ5 f/7CtWqffgySfJ/0wjGidDAumDv8CK45AURpL/Z+tbFG3M9ar/LZz/Y6EyBcLtGY Wwb4zs8zXIA0qHrjNTnPqHDvezziArYfgPjxCIHMZzms9Yn8+N02p39uIytqg434 BAlCqZ7GYdDFfTpWIwX+segTK9ux0KdBqcQv+9Fwwjkq9KySnRKqNl7ZJcefFZJq c6PA1iinZWBjuaO1HKx3PFulrl0bcpR9Kud1ZIyfnh5rwYN8UQkkcR/wZPla04TY 8l5dq/LI/3G5sZXwUHKOcuQWTj7Saq7Q6gkKoMfqt0wC5bpZ1m17GHPoMz6GtX9O -----END RSA PRIVATE KEY----- "#, Some("test123")); assert!(cred.is_ok()); } #[cfg(unix)] fn chmod(path: &Path) { use std::fs; use std::os::unix::prelude::*; let mut perms = fs::metadata(path).unwrap().permissions(); perms.set_mode(0o755); fs::set_permissions(path, perms).unwrap(); } #[cfg(windows)] fn chmod(_path: &Path) {} } vendor/git2/src/reflog.rs0000664000175000017500000001240514160055207016131 0ustar mwhudsonmwhudsonuse libc::size_t; use std::marker; use std::ops::Range; use std::str; use crate::util::Binding; use crate::{raw, signature, Error, Oid, Signature}; /// A reference log of a git repository. pub struct Reflog { raw: *mut raw::git_reflog, } /// An entry inside the reflog of a repository pub struct ReflogEntry<'reflog> { raw: *const raw::git_reflog_entry, _marker: marker::PhantomData<&'reflog Reflog>, } /// An iterator over the entries inside of a reflog. 
pub struct ReflogIter<'reflog> {
    range: Range<usize>,
    reflog: &'reflog Reflog,
}

impl Reflog {
    /// Add a new entry to the in-memory reflog.
    pub fn append(
        &mut self,
        new_oid: Oid,
        committer: &Signature<'_>,
        msg: Option<&str>,
    ) -> Result<(), Error> {
        let msg = crate::opt_cstr(msg)?;
        unsafe {
            try_call!(raw::git_reflog_append(
                self.raw,
                new_oid.raw(),
                committer.raw(),
                msg
            ));
        }
        Ok(())
    }

    /// Remove an entry from the reflog by its index
    ///
    /// To ensure there's no gap in the log history, set rewrite_previous_entry
    /// param value to `true`. When deleting entry n, member old_oid of entry
    /// n-1 (if any) will be updated with the value of member new_oid of entry
    /// n+1.
    pub fn remove(&mut self, i: usize, rewrite_previous_entry: bool) -> Result<(), Error> {
        unsafe {
            try_call!(raw::git_reflog_drop(
                self.raw,
                i as size_t,
                rewrite_previous_entry
            ));
        }
        Ok(())
    }

    /// Lookup an entry by its index
    ///
    /// Requesting the reflog entry with an index of 0 (zero) will return the
    /// most recently created entry.
    pub fn get(&self, i: usize) -> Option<ReflogEntry<'_>> {
        unsafe {
            let ptr = raw::git_reflog_entry_byindex(self.raw, i as size_t);
            Binding::from_raw_opt(ptr)
        }
    }

    /// Get the number of log entries in a reflog
    pub fn len(&self) -> usize {
        unsafe { raw::git_reflog_entrycount(self.raw) as usize }
    }

    /// Return `true` if there is no log entry in a reflog
    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    /// Get an iterator to all entries inside of this reflog
    pub fn iter(&self) -> ReflogIter<'_> {
        ReflogIter {
            range: 0..self.len(),
            reflog: self,
        }
    }

    /// Write an existing in-memory reflog object back to disk using an atomic
    /// file lock.
    pub fn write(&mut self) -> Result<(), Error> {
        unsafe {
            try_call!(raw::git_reflog_write(self.raw));
        }
        Ok(())
    }
}

impl Binding for Reflog {
    type Raw = *mut raw::git_reflog;
    unsafe fn from_raw(raw: *mut raw::git_reflog) -> Reflog {
        Reflog { raw }
    }
    fn raw(&self) -> *mut raw::git_reflog {
        self.raw
    }
}

impl Drop for Reflog {
    fn drop(&mut self) {
        unsafe { raw::git_reflog_free(self.raw) }
    }
}

impl<'reflog> ReflogEntry<'reflog> {
    /// Get the committer of this entry
    pub fn committer(&self) -> Signature<'_> {
        unsafe {
            let ptr = raw::git_reflog_entry_committer(self.raw);
            signature::from_raw_const(self, ptr)
        }
    }

    /// Get the new oid
    pub fn id_new(&self) -> Oid {
        unsafe { Binding::from_raw(raw::git_reflog_entry_id_new(self.raw)) }
    }

    /// Get the old oid
    pub fn id_old(&self) -> Oid {
        unsafe { Binding::from_raw(raw::git_reflog_entry_id_old(self.raw)) }
    }

    /// Get the log message, returning `None` on invalid UTF-8.
    pub fn message(&self) -> Option<&str> {
        self.message_bytes().and_then(|s| str::from_utf8(s).ok())
    }

    /// Get the log message as a byte array.
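    ///
    /// A minimal, illustrative sketch (not from the upstream docs): entry 0 is
    /// the most recent one, and the raw bytes are useful when the message is
    /// not valid UTF-8.
    ///
    /// ```no_run
    /// fn latest_message(reflog: &git2::Reflog) -> Option<Vec<u8>> {
    ///     reflog.get(0).and_then(|e| e.message_bytes().map(|m| m.to_vec()))
    /// }
    /// ```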
pub fn message_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_reflog_entry_message(self.raw)) } } } impl<'reflog> Binding for ReflogEntry<'reflog> { type Raw = *const raw::git_reflog_entry; unsafe fn from_raw(raw: *const raw::git_reflog_entry) -> ReflogEntry<'reflog> { ReflogEntry { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_reflog_entry { self.raw } } impl<'reflog> Iterator for ReflogIter<'reflog> { type Item = ReflogEntry<'reflog>; fn next(&mut self) -> Option> { self.range.next().and_then(|i| self.reflog.get(i)) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } } impl<'reflog> DoubleEndedIterator for ReflogIter<'reflog> { fn next_back(&mut self) -> Option> { self.range.next_back().and_then(|i| self.reflog.get(i)) } } impl<'reflog> ExactSizeIterator for ReflogIter<'reflog> {} #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let mut reflog = repo.reflog("HEAD").unwrap(); assert_eq!(reflog.iter().len(), 1); reflog.write().unwrap(); let entry = reflog.iter().next().unwrap(); assert!(entry.message().is_some()); repo.reflog_rename("HEAD", "refs/heads/foo").unwrap(); repo.reflog_delete("refs/heads/foo").unwrap(); } } vendor/git2/src/worktree.rs0000664000175000017500000002376714160055207016532 0ustar mwhudsonmwhudsonuse crate::buf::Buf; use crate::reference::Reference; use crate::repo::Repository; use crate::util::{self, Binding}; use crate::{raw, Error}; use std::os::raw::c_int; use std::path::Path; use std::ptr; use std::str; use std::{marker, mem}; /// An owned git worktree /// /// This structure corresponds to a `git_worktree` in libgit2. // pub struct Worktree { raw: *mut raw::git_worktree, } /// Options which can be used to configure how a worktree is initialized pub struct WorktreeAddOptions<'a> { raw: raw::git_worktree_add_options, _marker: marker::PhantomData>, } /// Options to configure how worktree pruning is performed pub struct WorktreePruneOptions { raw: raw::git_worktree_prune_options, } /// Lock Status of a worktree #[derive(PartialEq, Debug)] pub enum WorktreeLockStatus { /// Worktree is Unlocked Unlocked, /// Worktree is locked with the optional message Locked(Option), } impl Worktree { /// Open a worktree of a the repository /// /// If a repository is not the main tree but a worktree, this /// function will look up the worktree inside the parent /// repository and create a new `git_worktree` structure. pub fn open_from_repository(repo: &Repository) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_worktree_open_from_repository(&mut raw, repo.raw())); Ok(Binding::from_raw(raw)) } } /// Retrieves the name of the worktree /// /// This is the name that can be passed to repo::Repository::find_worktree /// to reopen the worktree. This is also the name that would appear in the /// list returned by repo::Repository::worktrees pub fn name(&self) -> Option<&str> { unsafe { crate::opt_bytes(self, raw::git_worktree_name(self.raw)) .and_then(|s| str::from_utf8(s).ok()) } } /// Retrieves the path to the worktree /// /// This is the path to the top-level of the source and not the path to the /// .git file within the worktree. This path can be passed to /// repo::Repository::open. 
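    ///
    /// A minimal, illustrative sketch (not from the upstream docs): opening a
    /// repository through a worktree looked up by name; `"feature-x"` is a
    /// placeholder.
    ///
    /// ```no_run
    /// fn open_worktree_repo(repo: &git2::Repository) -> Result<git2::Repository, git2::Error> {
    ///     let wt = repo.find_worktree("feature-x")?;
    ///     git2::Repository::open(wt.path())
    /// }
    /// ```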
pub fn path(&self) -> &Path { unsafe { util::bytes2path(crate::opt_bytes(self, raw::git_worktree_path(self.raw)).unwrap()) } } /// Validates the worktree /// /// This checks that it still exists on the /// filesystem and that the metadata is correct pub fn validate(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_worktree_validate(self.raw)); } Ok(()) } /// Locks the worktree pub fn lock(&self, reason: Option<&str>) -> Result<(), Error> { let reason = crate::opt_cstr(reason)?; unsafe { try_call!(raw::git_worktree_lock(self.raw, reason)); } Ok(()) } /// Unlocks the worktree pub fn unlock(&self) -> Result<(), Error> { unsafe { try_call!(raw::git_worktree_unlock(self.raw)); } Ok(()) } /// Checks if worktree is locked pub fn is_locked(&self) -> Result { let buf = Buf::new(); unsafe { match try_call!(raw::git_worktree_is_locked(buf.raw(), self.raw)) { 0 => Ok(WorktreeLockStatus::Unlocked), _ => { let v = buf.to_vec(); Ok(WorktreeLockStatus::Locked(match v.len() { 0 => None, _ => Some(String::from_utf8(v).unwrap()), })) } } } } /// Prunes the worktree pub fn prune(&self, opts: Option<&mut WorktreePruneOptions>) -> Result<(), Error> { // When successful the worktree should be removed however the backing structure // of the git_worktree should still be valid. unsafe { try_call!(raw::git_worktree_prune(self.raw, opts.map(|o| o.raw()))); } Ok(()) } /// Checks if the worktree is prunable pub fn is_prunable(&self, opts: Option<&mut WorktreePruneOptions>) -> Result { unsafe { let rv = try_call!(raw::git_worktree_is_prunable( self.raw, opts.map(|o| o.raw()) )); Ok(rv != 0) } } } impl<'a> WorktreeAddOptions<'a> { /// Creates a default set of add options. /// /// By default this will not lock the worktree pub fn new() -> WorktreeAddOptions<'a> { unsafe { let mut raw = mem::zeroed(); assert_eq!( raw::git_worktree_add_options_init(&mut raw, raw::GIT_WORKTREE_ADD_OPTIONS_VERSION), 0 ); WorktreeAddOptions { raw, _marker: marker::PhantomData, } } } /// If enabled, this will cause the newly added worktree to be locked pub fn lock(&mut self, enabled: bool) -> &mut WorktreeAddOptions<'a> { self.raw.lock = enabled as c_int; self } /// reference to use for the new worktree HEAD pub fn reference( &mut self, reference: Option<&'a Reference<'_>>, ) -> &mut WorktreeAddOptions<'a> { self.raw.reference = if let Some(reference) = reference { reference.raw() } else { ptr::null_mut() }; self } /// Get a set of raw add options to be used with `git_worktree_add` pub fn raw(&self) -> *const raw::git_worktree_add_options { &self.raw } } impl WorktreePruneOptions { /// Creates a default set of pruning options /// /// By defaults this will prune only worktrees that are no longer valid /// unlocked and not checked out pub fn new() -> WorktreePruneOptions { unsafe { let mut raw = mem::zeroed(); assert_eq!( raw::git_worktree_prune_options_init( &mut raw, raw::GIT_WORKTREE_PRUNE_OPTIONS_VERSION ), 0 ); WorktreePruneOptions { raw } } } /// Controls whether valid (still existing on the filesystem) worktrees /// will be pruned /// /// Defaults to false pub fn valid(&mut self, valid: bool) -> &mut WorktreePruneOptions { self.flag(raw::GIT_WORKTREE_PRUNE_VALID, valid) } /// Controls whether locked worktrees will be pruned /// /// Defaults to false pub fn locked(&mut self, locked: bool) -> &mut WorktreePruneOptions { self.flag(raw::GIT_WORKTREE_PRUNE_LOCKED, locked) } /// Controls whether the actual working tree on the fs is recursively removed /// /// Defaults to false pub fn working_tree(&mut self, working_tree: bool) -> &mut 
WorktreePruneOptions { self.flag(raw::GIT_WORKTREE_PRUNE_WORKING_TREE, working_tree) } fn flag(&mut self, flag: raw::git_worktree_prune_t, on: bool) -> &mut WorktreePruneOptions { if on { self.raw.flags |= flag as u32; } else { self.raw.flags &= !(flag as u32); } self } /// Get a set of raw prune options to be used with `git_worktree_prune` pub fn raw(&mut self) -> *mut raw::git_worktree_prune_options { &mut self.raw } } impl Binding for Worktree { type Raw = *mut raw::git_worktree; unsafe fn from_raw(ptr: *mut raw::git_worktree) -> Worktree { Worktree { raw: ptr } } fn raw(&self) -> *mut raw::git_worktree { self.raw } } impl Drop for Worktree { fn drop(&mut self) { unsafe { raw::git_worktree_free(self.raw) } } } #[cfg(test)] mod tests { use crate::WorktreeAddOptions; use crate::WorktreeLockStatus; use tempfile::TempDir; #[test] fn smoke_add_no_ref() { let (_td, repo) = crate::test::repo_init(); let wtdir = TempDir::new().unwrap(); let wt_path = wtdir.path().join("tree-no-ref-dir"); let opts = WorktreeAddOptions::new(); let wt = repo.worktree("tree-no-ref", &wt_path, Some(&opts)).unwrap(); assert_eq!(wt.name(), Some("tree-no-ref")); assert_eq!( wt.path().canonicalize().unwrap(), wt_path.canonicalize().unwrap() ); let status = wt.is_locked().unwrap(); assert_eq!(status, WorktreeLockStatus::Unlocked); } #[test] fn smoke_add_locked() { let (_td, repo) = crate::test::repo_init(); let wtdir = TempDir::new().unwrap(); let wt_path = wtdir.path().join("locked-tree"); let mut opts = WorktreeAddOptions::new(); opts.lock(true); let wt = repo.worktree("locked-tree", &wt_path, Some(&opts)).unwrap(); // shouldn't be able to lock a worktree that was created locked assert!(wt.lock(Some("my reason")).is_err()); assert_eq!(wt.name(), Some("locked-tree")); assert_eq!( wt.path().canonicalize().unwrap(), wt_path.canonicalize().unwrap() ); assert_eq!(wt.is_locked().unwrap(), WorktreeLockStatus::Locked(None)); assert!(wt.unlock().is_ok()); assert!(wt.lock(Some("my reason")).is_ok()); assert_eq!( wt.is_locked().unwrap(), WorktreeLockStatus::Locked(Some("my reason".to_string())) ); } #[test] fn smoke_add_from_branch() { let (_td, repo) = crate::test::repo_init(); let (wt_top, branch) = crate::test::worktrees_env_init(&repo); let wt_path = wt_top.path().join("test"); let mut opts = WorktreeAddOptions::new(); let reference = branch.into_reference(); opts.reference(Some(&reference)); let wt = repo .worktree("test-worktree", &wt_path, Some(&opts)) .unwrap(); assert_eq!(wt.name(), Some("test-worktree")); assert_eq!( wt.path().canonicalize().unwrap(), wt_path.canonicalize().unwrap() ); let status = wt.is_locked().unwrap(); assert_eq!(status, WorktreeLockStatus::Unlocked); } } vendor/git2/src/time.rs0000664000175000017500000000614114160055207015611 0ustar mwhudsonmwhudsonuse std::cmp::Ordering; use libc::{c_char, c_int}; use crate::raw; use crate::util::Binding; /// Time in a signature #[derive(Copy, Clone, Eq, PartialEq)] pub struct Time { raw: raw::git_time, } /// Time structure used in a git index entry. #[derive(Copy, Clone, Eq, PartialEq)] pub struct IndexTime { raw: raw::git_index_time, } impl Time { /// Creates a new time structure from its components. 
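    ///
    /// A minimal, illustrative sketch (not from the upstream docs): the offset
    /// is given in minutes, so `-300` represents UTC-05:00.
    ///
    /// ```no_run
    /// let t = git2::Time::new(1622505600, -300);
    /// assert_eq!(t.seconds(), 1622505600);
    /// assert_eq!(t.offset_minutes(), -300);
    /// assert_eq!(t.sign(), '-');
    /// ```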
pub fn new(time: i64, offset: i32) -> Time { unsafe { Binding::from_raw(raw::git_time { time: time as raw::git_time_t, offset: offset as c_int, sign: if offset < 0 { '-' } else { '+' } as c_char, }) } } /// Return the time, in seconds, from epoch pub fn seconds(&self) -> i64 { self.raw.time as i64 } /// Return the timezone offset, in minutes pub fn offset_minutes(&self) -> i32 { self.raw.offset as i32 } /// Return whether the offset was positive or negative. Primarily useful /// in case the offset is specified as a negative zero. pub fn sign(&self) -> char { self.raw.sign as u8 as char } } impl PartialOrd for Time { fn partial_cmp(&self, other: &Time) -> Option { Some(self.cmp(other)) } } impl Ord for Time { fn cmp(&self, other: &Time) -> Ordering { (self.raw.time, self.raw.offset).cmp(&(other.raw.time, other.raw.offset)) } } impl Binding for Time { type Raw = raw::git_time; unsafe fn from_raw(raw: raw::git_time) -> Time { Time { raw } } fn raw(&self) -> raw::git_time { self.raw } } impl IndexTime { /// Creates a new time structure from its components. pub fn new(seconds: i32, nanoseconds: u32) -> IndexTime { unsafe { Binding::from_raw(raw::git_index_time { seconds, nanoseconds, }) } } /// Returns the number of seconds in the second component of this time. pub fn seconds(&self) -> i32 { self.raw.seconds } /// Returns the nanosecond component of this time. pub fn nanoseconds(&self) -> u32 { self.raw.nanoseconds } } impl Binding for IndexTime { type Raw = raw::git_index_time; unsafe fn from_raw(raw: raw::git_index_time) -> IndexTime { IndexTime { raw } } fn raw(&self) -> raw::git_index_time { self.raw } } impl PartialOrd for IndexTime { fn partial_cmp(&self, other: &IndexTime) -> Option { Some(self.cmp(other)) } } impl Ord for IndexTime { fn cmp(&self, other: &IndexTime) -> Ordering { let me = (self.raw.seconds, self.raw.nanoseconds); let other = (other.raw.seconds, other.raw.nanoseconds); me.cmp(&other) } } #[cfg(test)] mod tests { use crate::Time; #[test] fn smoke() { assert_eq!(Time::new(1608839587, -300).seconds(), 1608839587); assert_eq!(Time::new(1608839587, -300).offset_minutes(), -300); assert_eq!(Time::new(1608839587, -300).sign(), '-'); assert_eq!(Time::new(1608839587, 300).sign(), '+'); } } vendor/git2/src/transport.rs0000664000175000017500000003310114160055207016703 0ustar mwhudsonmwhudson//! Interfaces for adding custom transports to libgit2 use libc::{c_char, c_int, c_uint, c_void, size_t}; use std::ffi::{CStr, CString}; use std::io; use std::io::prelude::*; use std::mem; use std::ptr; use std::slice; use std::str; use crate::util::Binding; use crate::{panic, raw, Error, Remote}; /// A transport is a structure which knows how to transfer data to and from a /// remote. /// /// This transport is a representation of the raw transport underneath it, which /// is similar to a trait object in Rust. #[allow(missing_copy_implementations)] pub struct Transport { raw: *mut raw::git_transport, owned: bool, } /// Interface used by smart transports. /// /// The full-fledged definiton of transports has to deal with lots of /// nitty-gritty details of the git protocol, but "smart transports" largely /// only need to deal with read() and write() of data over a channel. /// /// A smart subtransport is contained within an instance of a smart transport /// and is delegated to in order to actually conduct network activity to push or /// pull data from a remote. 
pub trait SmartSubtransport: Send + 'static { /// Indicates that this subtransport will be performing the specified action /// on the specified URL. /// /// This function is responsible for making any network connections and /// returns a stream which can be read and written from in order to /// negotiate the git protocol. fn action(&self, url: &str, action: Service) -> Result, Error>; /// Terminates a connection with the remote. /// /// Each subtransport is guaranteed a call to close() between calls to /// action(), except for the following two natural progressions of actions /// against a constant URL. /// /// 1. UploadPackLs -> UploadPack /// 2. ReceivePackLs -> ReceivePack fn close(&self) -> Result<(), Error>; } /// Actions that a smart transport can ask a subtransport to perform #[derive(Copy, Clone, PartialEq)] #[allow(missing_docs)] pub enum Service { UploadPackLs, UploadPack, ReceivePackLs, ReceivePack, } /// An instance of a stream over which a smart transport will communicate with a /// remote. /// /// Currently this only requires the standard `Read` and `Write` traits. This /// trait also does not need to be implemented manually as long as the `Read` /// and `Write` traits are implemented. pub trait SmartSubtransportStream: Read + Write + Send + 'static {} impl SmartSubtransportStream for T {} type TransportFactory = dyn Fn(&Remote<'_>) -> Result + Send + Sync + 'static; /// Boxed data payload used for registering new transports. /// /// Currently only contains a field which knows how to create transports. struct TransportData { factory: Box, } /// Instance of a `git_smart_subtransport`, must use `#[repr(C)]` to ensure that /// the C fields come first. #[repr(C)] struct RawSmartSubtransport { raw: raw::git_smart_subtransport, stream: Option<*mut raw::git_smart_subtransport_stream>, rpc: bool, obj: Box, } /// Instance of a `git_smart_subtransport_stream`, must use `#[repr(C)]` to /// ensure that the C fields come first. #[repr(C)] struct RawSmartSubtransportStream { raw: raw::git_smart_subtransport_stream, obj: Box, } /// Add a custom transport definition, to be used in addition to the built-in /// set of transports that come with libgit2. /// /// This function is unsafe as it needs to be externally synchronized with calls /// to creation of other transports. pub unsafe fn register(prefix: &str, factory: F) -> Result<(), Error> where F: Fn(&Remote<'_>) -> Result + Send + Sync + 'static, { crate::init(); let mut data = Box::new(TransportData { factory: Box::new(factory), }); let prefix = CString::new(prefix)?; let datap = (&mut *data) as *mut TransportData as *mut c_void; let factory: raw::git_transport_cb = Some(transport_factory); try_call!(raw::git_transport_register(prefix, factory, datap)); mem::forget(data); Ok(()) } impl Transport { /// Creates a new transport which will use the "smart" transport protocol /// for transferring data. /// /// A smart transport requires a *subtransport* over which data is actually /// communicated, but this subtransport largely just needs to be able to /// read() and write(). The subtransport provided will be used to make /// connections which can then be read/written from. /// /// The `rpc` argument is `true` if the protocol is stateless, false /// otherwise. For example `http://` is stateless but `git://` is not. 
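    ///
    /// A short sketch of the typical wiring, assuming `MySubtransport` is some
    /// type implementing `SmartSubtransport` (its definition is hidden below;
    /// both the type and the `"myproto"` prefix are illustrative):
    ///
    /// ```no_run
    /// use git2::transport::{register, Transport};
    /// # use git2::transport::{Service, SmartSubtransport, SmartSubtransportStream};
    /// # use git2::Error;
    /// # struct MySubtransport;
    /// # impl SmartSubtransport for MySubtransport {
    /// #     fn action(&self, _url: &str, _action: Service)
    /// #         -> Result<Box<dyn SmartSubtransportStream>, Error> {
    /// #         Err(Error::from_str("unimplemented"))
    /// #     }
    /// #     fn close(&self) -> Result<(), Error> { Ok(()) }
    /// # }
    /// unsafe {
    ///     register("myproto", |remote| {
    ///         Transport::smart(remote, /* rpc */ true, MySubtransport)
    ///     })
    ///     .unwrap();
    /// }
    /// ```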
pub fn smart(remote: &Remote<'_>, rpc: bool, subtransport: S) -> Result where S: SmartSubtransport, { let mut ret = ptr::null_mut(); let mut raw = Box::new(RawSmartSubtransport { raw: raw::git_smart_subtransport { action: Some(subtransport_action), close: Some(subtransport_close), free: Some(subtransport_free), }, stream: None, rpc, obj: Box::new(subtransport), }); let mut defn = raw::git_smart_subtransport_definition { callback: Some(smart_factory), rpc: rpc as c_uint, param: &mut *raw as *mut _ as *mut _, }; // Currently there's no way to pass a payload via the // git_smart_subtransport_definition structure, but it's only used as a // configuration for the initial creation of the smart transport (verified // by reading the current code, hopefully it doesn't change!). // // We, however, need some state (gotta pass in our // `RawSmartSubtransport`). This also means that this block must be // entirely synchronized with a lock (boo!) unsafe { try_call!(raw::git_transport_smart( &mut ret, remote.raw(), &mut defn as *mut _ as *mut _ )); mem::forget(raw); // ownership transport to `ret` } return Ok(Transport { raw: ret, owned: true, }); extern "C" fn smart_factory( out: *mut *mut raw::git_smart_subtransport, _owner: *mut raw::git_transport, ptr: *mut c_void, ) -> c_int { unsafe { *out = ptr as *mut raw::git_smart_subtransport; 0 } } } } impl Drop for Transport { fn drop(&mut self) { if self.owned { unsafe { (*self.raw).free.unwrap()(self.raw) } } } } // callback used by register() to create new transports extern "C" fn transport_factory( out: *mut *mut raw::git_transport, owner: *mut raw::git_remote, param: *mut c_void, ) -> c_int { struct Bomb<'a> { remote: Option>, } impl<'a> Drop for Bomb<'a> { fn drop(&mut self) { // TODO: maybe a method instead? mem::forget(self.remote.take()); } } panic::wrap(|| unsafe { let remote = Bomb { remote: Some(Binding::from_raw(owner)), }; let data = &mut *(param as *mut TransportData); match (data.factory)(remote.remote.as_ref().unwrap()) { Ok(mut transport) => { *out = transport.raw; transport.owned = false; 0 } Err(e) => e.raw_code() as c_int, } }) .unwrap_or(-1) } // callback used by smart transports to delegate an action to a // `SmartSubtransport` trait object. extern "C" fn subtransport_action( stream: *mut *mut raw::git_smart_subtransport_stream, raw_transport: *mut raw::git_smart_subtransport, url: *const c_char, action: raw::git_smart_service_t, ) -> c_int { panic::wrap(|| unsafe { let url = CStr::from_ptr(url).to_bytes(); let url = match str::from_utf8(url).ok() { Some(s) => s, None => return -1, }; let action = match action { raw::GIT_SERVICE_UPLOADPACK_LS => Service::UploadPackLs, raw::GIT_SERVICE_UPLOADPACK => Service::UploadPack, raw::GIT_SERVICE_RECEIVEPACK_LS => Service::ReceivePackLs, raw::GIT_SERVICE_RECEIVEPACK => Service::ReceivePack, n => panic!("unknown action: {}", n), }; let mut transport = &mut *(raw_transport as *mut RawSmartSubtransport); // Note: we only need to generate if rpc is on. Else, for receive-pack and upload-pack // libgit2 reuses the stream generated for receive-pack-ls or upload-pack-ls. 
let generate_stream = transport.rpc || action == Service::UploadPackLs || action == Service::ReceivePackLs; if generate_stream { let obj = match transport.obj.action(url, action) { Ok(s) => s, Err(e) => { set_err(&e); return e.raw_code() as c_int; } }; *stream = mem::transmute(Box::new(RawSmartSubtransportStream { raw: raw::git_smart_subtransport_stream { subtransport: raw_transport, read: Some(stream_read), write: Some(stream_write), free: Some(stream_free), }, obj, })); transport.stream = Some(*stream); } else { if transport.stream.is_none() { return -1; } *stream = transport.stream.unwrap(); } 0 }) .unwrap_or(-1) } // callback used by smart transports to close a `SmartSubtransport` trait // object. extern "C" fn subtransport_close(transport: *mut raw::git_smart_subtransport) -> c_int { let ret = panic::wrap(|| unsafe { let transport = &mut *(transport as *mut RawSmartSubtransport); transport.obj.close() }); match ret { Some(Ok(())) => 0, Some(Err(e)) => e.raw_code() as c_int, None => -1, } } // callback used by smart transports to free a `SmartSubtransport` trait // object. extern "C" fn subtransport_free(transport: *mut raw::git_smart_subtransport) { let _ = panic::wrap(|| unsafe { mem::transmute::<_, Box>(transport); }); } // callback used by smart transports to read from a `SmartSubtransportStream` // object. extern "C" fn stream_read( stream: *mut raw::git_smart_subtransport_stream, buffer: *mut c_char, buf_size: size_t, bytes_read: *mut size_t, ) -> c_int { let ret = panic::wrap(|| unsafe { let transport = &mut *(stream as *mut RawSmartSubtransportStream); let buf = slice::from_raw_parts_mut(buffer as *mut u8, buf_size as usize); match transport.obj.read(buf) { Ok(n) => { *bytes_read = n as size_t; Ok(n) } e => e, } }); match ret { Some(Ok(_)) => 0, Some(Err(e)) => unsafe { set_err_io(&e); -2 }, None => -1, } } // callback used by smart transports to write to a `SmartSubtransportStream` // object. extern "C" fn stream_write( stream: *mut raw::git_smart_subtransport_stream, buffer: *const c_char, len: size_t, ) -> c_int { let ret = panic::wrap(|| unsafe { let transport = &mut *(stream as *mut RawSmartSubtransportStream); let buf = slice::from_raw_parts(buffer as *const u8, len as usize); transport.obj.write_all(buf) }); match ret { Some(Ok(())) => 0, Some(Err(e)) => unsafe { set_err_io(&e); -2 }, None => -1, } } unsafe fn set_err_io(e: &io::Error) { let s = CString::new(e.to_string()).unwrap(); raw::git_error_set_str(raw::GIT_ERROR_NET as c_int, s.as_ptr()); } unsafe fn set_err(e: &Error) { let s = CString::new(e.message()).unwrap(); raw::git_error_set_str(e.raw_class() as c_int, s.as_ptr()); } // callback used by smart transports to free a `SmartSubtransportStream` // object. 
extern "C" fn stream_free(stream: *mut raw::git_smart_subtransport_stream) { let _ = panic::wrap(|| unsafe { mem::transmute::<_, Box>(stream); }); } #[cfg(test)] mod tests { use super::*; use crate::{ErrorClass, ErrorCode}; use std::sync::Once; struct DummyTransport; // in lieu of lazy_static fn dummy_error() -> Error { Error::new(ErrorCode::Ambiguous, ErrorClass::Net, "bleh") } impl SmartSubtransport for DummyTransport { fn action( &self, _url: &str, _service: Service, ) -> Result, Error> { Err(dummy_error()) } fn close(&self) -> Result<(), Error> { Ok(()) } } #[test] fn transport_error_propagates() { static INIT: Once = Once::new(); unsafe { INIT.call_once(|| { register("dummy", move |remote| { Transport::smart(&remote, true, DummyTransport) }) .unwrap(); }) } let (_td, repo) = crate::test::repo_init(); t!(repo.remote("origin", "dummy://ball")); let mut origin = t!(repo.find_remote("origin")); match origin.fetch(&["main"], None, None) { Ok(()) => unreachable!(), Err(e) => assert_eq!(e, dummy_error()), } } } vendor/git2/src/tag.rs0000664000175000017500000001333214160055207015426 0ustar mwhudsonmwhudsonuse std::marker; use std::mem; use std::ptr; use std::str; use crate::util::Binding; use crate::{raw, signature, Error, Object, ObjectType, Oid, Signature}; /// A structure to represent a git [tag][1] /// /// [1]: http://git-scm.com/book/en/Git-Basics-Tagging pub struct Tag<'repo> { raw: *mut raw::git_tag, _marker: marker::PhantomData>, } impl<'repo> Tag<'repo> { /// Get the id (SHA1) of a repository tag pub fn id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_tag_id(&*self.raw)) } } /// Get the message of a tag /// /// Returns None if there is no message or if it is not valid utf8 pub fn message(&self) -> Option<&str> { self.message_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the message of a tag /// /// Returns None if there is no message pub fn message_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_tag_message(&*self.raw)) } } /// Get the name of a tag /// /// Returns None if it is not valid utf8 pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Get the name of a tag pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_tag_name(&*self.raw)).unwrap() } } /// Recursively peel a tag until a non tag git_object is found pub fn peel(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_tag_peel(&mut ret, &*self.raw)); Ok(Binding::from_raw(ret)) } } /// Get the tagger (author) of a tag /// /// If the author is unspecified, then `None` is returned. 
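    ///
    /// A small sketch, assuming `tag` was already looked up (for example via
    /// `Repository::find_tag`); the `example` wrapper is only scaffolding to
    /// give the snippet something to borrow from:
    ///
    /// ```no_run
    /// # fn example(tag: &git2::Tag<'_>) {
    /// if let Some(tagger) = tag.tagger() {
    ///     println!(
    ///         "tagged by {} <{}>",
    ///         tagger.name().unwrap_or("?"),
    ///         tagger.email().unwrap_or("?")
    ///     );
    /// }
    /// # }
    /// ```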
pub fn tagger(&self) -> Option> { unsafe { let ptr = raw::git_tag_tagger(&*self.raw); if ptr.is_null() { None } else { Some(signature::from_raw_const(self, ptr)) } } } /// Get the tagged object of a tag /// /// This method performs a repository lookup for the given object and /// returns it pub fn target(&self) -> Result, Error> { let mut ret = ptr::null_mut(); unsafe { try_call!(raw::git_tag_target(&mut ret, &*self.raw)); Ok(Binding::from_raw(ret)) } } /// Get the OID of the tagged object of a tag pub fn target_id(&self) -> Oid { unsafe { Binding::from_raw(raw::git_tag_target_id(&*self.raw)) } } /// Get the ObjectType of the tagged object of a tag pub fn target_type(&self) -> Option { unsafe { ObjectType::from_raw(raw::git_tag_target_type(&*self.raw)) } } /// Casts this Tag to be usable as an `Object` pub fn as_object(&self) -> &Object<'repo> { unsafe { &*(self as *const _ as *const Object<'repo>) } } /// Consumes Tag to be returned as an `Object` pub fn into_object(self) -> Object<'repo> { assert_eq!(mem::size_of_val(&self), mem::size_of::>()); unsafe { mem::transmute(self) } } } impl<'repo> std::fmt::Debug for Tag<'repo> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { let mut ds = f.debug_struct("Tag"); if let Some(name) = self.name() { ds.field("name", &name); } ds.field("id", &self.id()); ds.finish() } } impl<'repo> Binding for Tag<'repo> { type Raw = *mut raw::git_tag; unsafe fn from_raw(raw: *mut raw::git_tag) -> Tag<'repo> { Tag { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_tag { self.raw } } impl<'repo> Clone for Tag<'repo> { fn clone(&self) -> Self { self.as_object().clone().into_tag().ok().unwrap() } } impl<'repo> Drop for Tag<'repo> { fn drop(&mut self) { unsafe { raw::git_tag_free(self.raw) } } } #[cfg(test)] mod tests { #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head = repo.head().unwrap(); let id = head.target().unwrap(); assert!(repo.find_tag(id).is_err()); let obj = repo.find_object(id, None).unwrap(); let sig = repo.signature().unwrap(); let tag_id = repo.tag("foo", &obj, &sig, "msg", false).unwrap(); let tag = repo.find_tag(tag_id).unwrap(); assert_eq!(tag.id(), tag_id); let tags = repo.tag_names(None).unwrap(); assert_eq!(tags.len(), 1); assert_eq!(tags.get(0), Some("foo")); assert_eq!(tag.name(), Some("foo")); assert_eq!(tag.message(), Some("msg")); assert_eq!(tag.peel().unwrap().id(), obj.id()); assert_eq!(tag.target_id(), obj.id()); assert_eq!(tag.target_type(), Some(crate::ObjectType::Commit)); assert_eq!(tag.tagger().unwrap().name(), sig.name()); tag.target().unwrap(); tag.into_object(); repo.find_object(tag_id, None).unwrap().as_tag().unwrap(); repo.find_object(tag_id, None) .unwrap() .into_tag() .ok() .unwrap(); repo.tag_delete("foo").unwrap(); } #[test] fn lite() { let (_td, repo) = crate::test::repo_init(); let head = t!(repo.head()); let id = head.target().unwrap(); let obj = t!(repo.find_object(id, None)); let tag_id = t!(repo.tag_lightweight("foo", &obj, false)); assert!(repo.find_tag(tag_id).is_err()); assert_eq!(t!(repo.refname_to_id("refs/tags/foo")), id); let tags = t!(repo.tag_names(Some("f*"))); assert_eq!(tags.len(), 1); let tags = t!(repo.tag_names(Some("b*"))); assert_eq!(tags.len(), 0); } } vendor/git2/src/test.rs0000664000175000017500000000533214160055207015633 0ustar mwhudsonmwhudsonuse std::fs::File; use std::io; use std::path::{Path, PathBuf}; #[cfg(unix)] use std::ptr; use tempfile::TempDir; use url::Url; use crate::{Branch, Oid, Repository, 
RepositoryInitOptions}; macro_rules! t { ($e:expr) => { match $e { Ok(e) => e, Err(e) => panic!("{} failed with {}", stringify!($e), e), } }; } pub fn repo_init() -> (TempDir, Repository) { let td = TempDir::new().unwrap(); let mut opts = RepositoryInitOptions::new(); opts.initial_head("main"); let repo = Repository::init_opts(td.path(), &opts).unwrap(); { let mut config = repo.config().unwrap(); config.set_str("user.name", "name").unwrap(); config.set_str("user.email", "email").unwrap(); let mut index = repo.index().unwrap(); let id = index.write_tree().unwrap(); let tree = repo.find_tree(id).unwrap(); let sig = repo.signature().unwrap(); repo.commit(Some("HEAD"), &sig, &sig, "initial", &tree, &[]) .unwrap(); } (td, repo) } pub fn commit(repo: &Repository) -> (Oid, Oid) { let mut index = t!(repo.index()); let root = repo.path().parent().unwrap(); t!(File::create(&root.join("foo"))); t!(index.add_path(Path::new("foo"))); let tree_id = t!(index.write_tree()); let tree = t!(repo.find_tree(tree_id)); let sig = t!(repo.signature()); let head_id = t!(repo.refname_to_id("HEAD")); let parent = t!(repo.find_commit(head_id)); let commit = t!(repo.commit(Some("HEAD"), &sig, &sig, "commit", &tree, &[&parent])); (commit, tree_id) } pub fn path2url(path: &Path) -> String { Url::from_file_path(path).unwrap().to_string() } pub fn worktrees_env_init(repo: &Repository) -> (TempDir, Branch<'_>) { let oid = repo.head().unwrap().target().unwrap(); let commit = repo.find_commit(oid).unwrap(); let branch = repo.branch("wt-branch", &commit, true).unwrap(); let wtdir = TempDir::new().unwrap(); (wtdir, branch) } #[cfg(windows)] pub fn realpath(original: &Path) -> io::Result { Ok(original.to_path_buf()) } #[cfg(unix)] pub fn realpath(original: &Path) -> io::Result { use libc::c_char; use std::ffi::{CStr, CString, OsString}; use std::os::unix::prelude::*; extern "C" { fn realpath(name: *const c_char, resolved: *mut c_char) -> *mut c_char; } unsafe { let cstr = CString::new(original.as_os_str().as_bytes())?; let ptr = realpath(cstr.as_ptr(), ptr::null_mut()); if ptr.is_null() { return Err(io::Error::last_os_error()); } let bytes = CStr::from_ptr(ptr).to_bytes().to_vec(); libc::free(ptr as *mut _); Ok(PathBuf::from(OsString::from_vec(bytes))) } } vendor/git2/src/submodule.rs0000664000175000017500000003302214160055207016650 0ustar mwhudsonmwhudsonuse std::marker; use std::mem; use std::os::raw::c_int; use std::path::Path; use std::ptr; use std::str; use crate::util::{self, Binding}; use crate::{build::CheckoutBuilder, SubmoduleIgnore, SubmoduleUpdate}; use crate::{raw, Error, FetchOptions, Oid, Repository}; /// A structure to represent a git [submodule][1] /// /// [1]: http://git-scm.com/book/en/Git-Tools-Submodules pub struct Submodule<'repo> { raw: *mut raw::git_submodule, _marker: marker::PhantomData<&'repo Repository>, } impl<'repo> Submodule<'repo> { /// Get the submodule's branch. /// /// Returns `None` if the branch is not valid utf-8 or if the branch is not /// yet available. pub fn branch(&self) -> Option<&str> { self.branch_bytes().and_then(|s| str::from_utf8(s).ok()) } /// Get the branch for the submodule. /// /// Returns `None` if the branch is not yet available. pub fn branch_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_submodule_branch(self.raw)) } } /// Perform the clone step for a newly created submodule. /// /// This performs the necessary `git_clone` to setup a newly-created submodule. 
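    ///
    /// A minimal sketch, assuming `submodule` was just created with
    /// `Repository::submodule`; default update options are used here, much
    /// like this module's own `clone_submodule` test:
    ///
    /// ```no_run
    /// use git2::SubmoduleUpdateOptions;
    /// # fn example(submodule: &mut git2::Submodule<'_>) -> Result<(), git2::Error> {
    /// let subrepo = submodule.clone(Some(&mut SubmoduleUpdateOptions::default()))?;
    /// println!("cloned into {:?}", subrepo.workdir());
    /// # Ok(())
    /// # }
    /// ```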
pub fn clone( &mut self, opts: Option<&mut SubmoduleUpdateOptions<'_>>, ) -> Result { unsafe { let raw_opts = opts.map(|o| o.raw()); let mut raw_repo = ptr::null_mut(); try_call!(raw::git_submodule_clone( &mut raw_repo, self.raw, raw_opts.as_ref() )); Ok(Binding::from_raw(raw_repo)) } } /// Get the submodule's url. /// /// Returns `None` if the url is not valid utf-8 or if the URL isn't present pub fn url(&self) -> Option<&str> { self.opt_url_bytes().and_then(|b| str::from_utf8(b).ok()) } /// Get the url for the submodule. #[doc(hidden)] #[deprecated(note = "renamed to `opt_url_bytes`")] pub fn url_bytes(&self) -> &[u8] { self.opt_url_bytes().unwrap() } /// Get the url for the submodule. /// /// Returns `None` if the URL isn't present // TODO: delete this method and fix the signature of `url_bytes` on next // major version bump pub fn opt_url_bytes(&self) -> Option<&[u8]> { unsafe { crate::opt_bytes(self, raw::git_submodule_url(self.raw)) } } /// Get the submodule's name. /// /// Returns `None` if the name is not valid utf-8 pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Get the name for the submodule. pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, raw::git_submodule_name(self.raw)).unwrap() } } /// Get the path for the submodule. pub fn path(&self) -> &Path { util::bytes2path(unsafe { crate::opt_bytes(self, raw::git_submodule_path(self.raw)).unwrap() }) } /// Get the OID for the submodule in the current HEAD tree. pub fn head_id(&self) -> Option { unsafe { Binding::from_raw_opt(raw::git_submodule_head_id(self.raw)) } } /// Get the OID for the submodule in the index. pub fn index_id(&self) -> Option { unsafe { Binding::from_raw_opt(raw::git_submodule_index_id(self.raw)) } } /// Get the OID for the submodule in the current working directory. /// /// This returns the OID that corresponds to looking up 'HEAD' in the /// checked out submodule. If there are pending changes in the index or /// anything else, this won't notice that. pub fn workdir_id(&self) -> Option { unsafe { Binding::from_raw_opt(raw::git_submodule_wd_id(self.raw)) } } /// Get the ignore rule that will be used for the submodule. pub fn ignore_rule(&self) -> SubmoduleIgnore { SubmoduleIgnore::from_raw(unsafe { raw::git_submodule_ignore(self.raw) }) } /// Get the update rule that will be used for the submodule. pub fn update_strategy(&self) -> SubmoduleUpdate { SubmoduleUpdate::from_raw(unsafe { raw::git_submodule_update_strategy(self.raw) }) } /// Copy submodule info into ".git/config" file. /// /// Just like "git submodule init", this copies information about the /// submodule into ".git/config". You can use the accessor functions above /// to alter the in-memory git_submodule object and control what is written /// to the config, overriding what is in .gitmodules. /// /// By default, existing entries will not be overwritten, but passing `true` /// for `overwrite` forces them to be updated. pub fn init(&mut self, overwrite: bool) -> Result<(), Error> { unsafe { try_call!(raw::git_submodule_init(self.raw, overwrite)); } Ok(()) } /// Open the repository for a submodule. /// /// This will only work if the submodule is checked out into the working /// directory. pub fn open(&self) -> Result { let mut raw = ptr::null_mut(); unsafe { try_call!(raw::git_submodule_open(&mut raw, self.raw)); Ok(Binding::from_raw(raw)) } } /// Reread submodule info from config, index, and HEAD. 
/// /// Call this to reread cached submodule information for this submodule if /// you have reason to believe that it has changed. /// /// If `force` is `true`, then data will be reloaded even if it doesn't seem /// out of date pub fn reload(&mut self, force: bool) -> Result<(), Error> { unsafe { try_call!(raw::git_submodule_reload(self.raw, force)); } Ok(()) } /// Copy submodule remote info into submodule repo. /// /// This copies the information about the submodules URL into the checked /// out submodule config, acting like "git submodule sync". This is useful /// if you have altered the URL for the submodule (or it has been altered /// by a fetch of upstream changes) and you need to update your local repo. pub fn sync(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_submodule_sync(self.raw)); } Ok(()) } /// Add current submodule HEAD commit to index of superproject. /// /// If `write_index` is true, then the index file will be immediately /// written. Otherwise you must explicitly call `write()` on an `Index` /// later on. pub fn add_to_index(&mut self, write_index: bool) -> Result<(), Error> { unsafe { try_call!(raw::git_submodule_add_to_index(self.raw, write_index)); } Ok(()) } /// Resolve the setup of a new git submodule. /// /// This should be called on a submodule once you have called add setup and /// done the clone of the submodule. This adds the .gitmodules file and the /// newly cloned submodule to the index to be ready to be committed (but /// doesn't actually do the commit). pub fn add_finalize(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_submodule_add_finalize(self.raw)); } Ok(()) } /// Update submodule. /// /// This will clone a missing submodule and check out the subrepository to /// the commit specified in the index of the containing repository. If /// the submodule repository doesn't contain the target commit, then the /// submodule is fetched using the fetch options supplied in `opts`. /// /// `init` indicates if the submodule should be initialized first if it has /// not been initialized yet. pub fn update( &mut self, init: bool, opts: Option<&mut SubmoduleUpdateOptions<'_>>, ) -> Result<(), Error> { unsafe { let mut raw_opts = opts.map(|o| o.raw()); try_call!(raw::git_submodule_update( self.raw, init as c_int, raw_opts.as_mut().map_or(ptr::null_mut(), |o| o) )); } Ok(()) } } impl<'repo> Binding for Submodule<'repo> { type Raw = *mut raw::git_submodule; unsafe fn from_raw(raw: *mut raw::git_submodule) -> Submodule<'repo> { Submodule { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_submodule { self.raw } } impl<'repo> Drop for Submodule<'repo> { fn drop(&mut self) { unsafe { raw::git_submodule_free(self.raw) } } } /// Options to update a submodule. pub struct SubmoduleUpdateOptions<'cb> { checkout_builder: CheckoutBuilder<'cb>, fetch_opts: FetchOptions<'cb>, allow_fetch: bool, } impl<'cb> SubmoduleUpdateOptions<'cb> { /// Return default options. 
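    ///
    /// A short sketch of building options to pass to `Submodule::update` or
    /// `Submodule::clone` (disabling fetch here is only an illustrative choice):
    ///
    /// ```no_run
    /// use git2::SubmoduleUpdateOptions;
    ///
    /// let mut opts = SubmoduleUpdateOptions::new();
    /// opts.allow_fetch(false);
    /// ```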
pub fn new() -> Self { SubmoduleUpdateOptions { checkout_builder: CheckoutBuilder::new(), fetch_opts: FetchOptions::new(), allow_fetch: true, } } unsafe fn raw(&mut self) -> raw::git_submodule_update_options { let mut checkout_opts: raw::git_checkout_options = mem::zeroed(); let init_res = raw::git_checkout_init_options(&mut checkout_opts, raw::GIT_CHECKOUT_OPTIONS_VERSION); assert_eq!(0, init_res); self.checkout_builder.configure(&mut checkout_opts); let opts = raw::git_submodule_update_options { version: raw::GIT_SUBMODULE_UPDATE_OPTIONS_VERSION, checkout_opts, fetch_opts: self.fetch_opts.raw(), allow_fetch: self.allow_fetch as c_int, }; opts } /// Set checkout options. pub fn checkout(&mut self, opts: CheckoutBuilder<'cb>) -> &mut Self { self.checkout_builder = opts; self } /// Set fetch options and allow fetching. pub fn fetch(&mut self, opts: FetchOptions<'cb>) -> &mut Self { self.fetch_opts = opts; self.allow_fetch = true; self } /// Allow or disallow fetching. pub fn allow_fetch(&mut self, b: bool) -> &mut Self { self.allow_fetch = b; self } } impl<'cb> Default for SubmoduleUpdateOptions<'cb> { fn default() -> Self { Self::new() } } #[cfg(test)] mod tests { use std::fs; use std::path::Path; use tempfile::TempDir; use url::Url; use crate::Repository; use crate::SubmoduleUpdateOptions; #[test] fn smoke() { let td = TempDir::new().unwrap(); let repo = Repository::init(td.path()).unwrap(); let mut s1 = repo .submodule("/path/to/nowhere", Path::new("foo"), true) .unwrap(); s1.init(false).unwrap(); s1.sync().unwrap(); let s2 = repo .submodule("/path/to/nowhere", Path::new("bar"), true) .unwrap(); drop((s1, s2)); let mut submodules = repo.submodules().unwrap(); assert_eq!(submodules.len(), 2); let mut s = submodules.remove(0); assert_eq!(s.name(), Some("bar")); assert_eq!(s.url(), Some("/path/to/nowhere")); assert_eq!(s.branch(), None); assert!(s.head_id().is_none()); assert!(s.index_id().is_none()); assert!(s.workdir_id().is_none()); repo.find_submodule("bar").unwrap(); s.open().unwrap(); assert!(s.path() == Path::new("bar")); s.reload(true).unwrap(); } #[test] fn add_a_submodule() { let (_td, repo1) = crate::test::repo_init(); let (td, repo2) = crate::test::repo_init(); let url = Url::from_file_path(&repo1.workdir().unwrap()).unwrap(); let mut s = repo2 .submodule(&url.to_string(), Path::new("bar"), true) .unwrap(); t!(fs::remove_dir_all(td.path().join("bar"))); t!(Repository::clone(&url.to_string(), td.path().join("bar"))); t!(s.add_to_index(false)); t!(s.add_finalize()); } #[test] fn update_submodule() { // ----------------------------------- // Same as `add_a_submodule()` let (_td, repo1) = crate::test::repo_init(); let (td, repo2) = crate::test::repo_init(); let url = Url::from_file_path(&repo1.workdir().unwrap()).unwrap(); let mut s = repo2 .submodule(&url.to_string(), Path::new("bar"), true) .unwrap(); t!(fs::remove_dir_all(td.path().join("bar"))); t!(Repository::clone(&url.to_string(), td.path().join("bar"))); t!(s.add_to_index(false)); t!(s.add_finalize()); // ----------------------------------- // Attempt to update submodule let submodules = t!(repo1.submodules()); for mut submodule in submodules { let mut submodule_options = SubmoduleUpdateOptions::new(); let init = true; let opts = Some(&mut submodule_options); t!(submodule.update(init, opts)); } } #[test] fn clone_submodule() { // ----------------------------------- // Same as `add_a_submodule()` let (_td, repo1) = crate::test::repo_init(); let (_td, repo2) = crate::test::repo_init(); let (_td, parent) = 
crate::test::repo_init(); let url1 = Url::from_file_path(&repo1.workdir().unwrap()).unwrap(); let url3 = Url::from_file_path(&repo2.workdir().unwrap()).unwrap(); let mut s1 = parent .submodule(&url1.to_string(), Path::new("bar"), true) .unwrap(); let mut s2 = parent .submodule(&url3.to_string(), Path::new("bar2"), true) .unwrap(); // ----------------------------------- t!(s1.clone(Some(&mut SubmoduleUpdateOptions::default()))); t!(s2.clone(None)); } } vendor/git2/src/rebase.rs0000664000175000017500000003743014160055207016121 0ustar mwhudsonmwhudsonuse std::ffi::CString; use std::{marker, mem, ptr, str}; use crate::build::CheckoutBuilder; use crate::util::Binding; use crate::{raw, Error, Index, MergeOptions, Oid, Signature}; /// Rebase options /// /// Use to tell the rebase machinery how to operate. pub struct RebaseOptions<'cb> { raw: raw::git_rebase_options, rewrite_notes_ref: Option, merge_options: Option, checkout_options: Option>, } impl<'cb> Default for RebaseOptions<'cb> { fn default() -> Self { Self::new() } } impl<'cb> RebaseOptions<'cb> { /// Creates a new default set of rebase options. pub fn new() -> RebaseOptions<'cb> { let mut opts = RebaseOptions { raw: unsafe { mem::zeroed() }, rewrite_notes_ref: None, merge_options: None, checkout_options: None, }; assert_eq!(unsafe { raw::git_rebase_init_options(&mut opts.raw, 1) }, 0); opts } /// Used by `Repository::rebase`, this will instruct other clients working on this /// rebase that you want a quiet rebase experience, which they may choose to /// provide in an application-specific manner. This has no effect upon /// libgit2 directly, but is provided for interoperability between Git /// tools. pub fn quiet(&mut self, quiet: bool) -> &mut RebaseOptions<'cb> { self.raw.quiet = quiet as i32; self } /// Used by `Repository::rebase`, this will begin an in-memory rebase, /// which will allow callers to step through the rebase operations and /// commit the rebased changes, but will not rewind HEAD or update the /// repository to be in a rebasing state. This will not interfere with /// the working directory (if there is one). pub fn inmemory(&mut self, inmemory: bool) -> &mut RebaseOptions<'cb> { self.raw.inmemory = inmemory as i32; self } /// Used by `finish()`, this is the name of the notes reference /// used to rewrite notes for rebased commits when finishing the rebase; /// if NULL, the contents of the configuration option `notes.rewriteRef` /// is examined, unless the configuration option `notes.rewrite.rebase` /// is set to false. If `notes.rewriteRef` is also NULL, notes will /// not be rewritten. pub fn rewrite_notes_ref(&mut self, rewrite_notes_ref: &str) -> &mut RebaseOptions<'cb> { self.rewrite_notes_ref = Some(CString::new(rewrite_notes_ref).unwrap()); self } /// Options to control how trees are merged during `next()`. pub fn merge_options(&mut self, opts: MergeOptions) -> &mut RebaseOptions<'cb> { self.merge_options = Some(opts); self } /// Options to control how files are written during `Repository::rebase`, /// `next()` and `abort()`. Note that a minimum strategy of /// `GIT_CHECKOUT_SAFE` is defaulted in `init` and `next`, and a minimum /// strategy of `GIT_CHECKOUT_FORCE` is defaulted in `abort` to match git /// semantics. pub fn checkout_options(&mut self, opts: CheckoutBuilder<'cb>) -> &mut RebaseOptions<'cb> { self.checkout_options = Some(opts); self } /// Acquire a pointer to the underlying raw options. 
pub fn raw(&mut self) -> *const raw::git_rebase_options { unsafe { if let Some(opts) = self.merge_options.as_mut().take() { ptr::copy_nonoverlapping(opts.raw(), &mut self.raw.merge_options, 1); } if let Some(opts) = self.checkout_options.as_mut() { opts.configure(&mut self.raw.checkout_options); } self.raw.rewrite_notes_ref = self .rewrite_notes_ref .as_ref() .map(|s| s.as_ptr()) .unwrap_or(ptr::null()); } &self.raw } } /// Representation of a rebase pub struct Rebase<'repo> { raw: *mut raw::git_rebase, _marker: marker::PhantomData<&'repo raw::git_rebase>, } impl<'repo> Rebase<'repo> { /// Gets the count of rebase operations that are to be applied. pub fn len(&self) -> usize { unsafe { raw::git_rebase_operation_entrycount(self.raw) } } /// Gets the original `HEAD` ref name for merge rebases. pub fn orig_head_name(&self) -> Option<&str> { let name_bytes = unsafe { crate::opt_bytes(self, raw::git_rebase_orig_head_name(self.raw)) }; name_bytes.and_then(|s| str::from_utf8(s).ok()) } /// Gets the original HEAD id for merge rebases. pub fn orig_head_id(&self) -> Option { unsafe { Oid::from_raw_opt(raw::git_rebase_orig_head_id(self.raw)) } } /// Gets the rebase operation specified by the given index. pub fn nth(&mut self, n: usize) -> Option> { unsafe { let op = raw::git_rebase_operation_byindex(self.raw, n); if op.is_null() { None } else { Some(RebaseOperation::from_raw(op)) } } } /// Gets the index of the rebase operation that is currently being applied. /// If the first operation has not yet been applied (because you have called /// `init` but not yet `next`) then this returns None. pub fn operation_current(&mut self) -> Option { let cur = unsafe { raw::git_rebase_operation_current(self.raw) }; if cur == raw::GIT_REBASE_NO_OPERATION { None } else { Some(cur) } } /// Gets the index produced by the last operation, which is the result of /// `next()` and which will be committed by the next invocation of /// `commit()`. This is useful for resolving conflicts in an in-memory /// rebase before committing them. /// /// This is only applicable for in-memory rebases; for rebases within a /// working directory, the changes were applied to the repository's index. pub fn inmemory_index(&mut self) -> Result { let mut idx = ptr::null_mut(); unsafe { try_call!(raw::git_rebase_inmemory_index(&mut idx, self.raw)); Ok(Binding::from_raw(idx)) } } /// Commits the current patch. You must have resolved any conflicts that /// were introduced during the patch application from the `git_rebase_next` /// invocation. To keep the author and message from the original commit leave /// them as None pub fn commit( &mut self, author: Option<&Signature<'_>>, committer: &Signature<'_>, message: Option<&str>, ) -> Result { let mut id: raw::git_oid = unsafe { mem::zeroed() }; let message = crate::opt_cstr(message)?; unsafe { try_call!(raw::git_rebase_commit( &mut id, self.raw, author.map(|a| a.raw()), committer.raw(), ptr::null(), message )); Ok(Binding::from_raw(&id as *const _)) } } /// Aborts a rebase that is currently in progress, resetting the repository /// and working directory to their state before rebase began. pub fn abort(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_rebase_abort(self.raw)); } Ok(()) } /// Finishes a rebase that is currently in progress once all patches have /// been applied. 
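    ///
    /// A condensed sketch of driving a rebase to completion, assuming `rebase`
    /// came from `Repository::rebase` and `sig` is the `Signature` to commit
    /// with (the `example` wrapper is only scaffolding for the snippet):
    ///
    /// ```no_run
    /// # fn example(mut rebase: git2::Rebase<'_>, sig: &git2::Signature<'_>) -> Result<(), git2::Error> {
    /// while let Some(op) = rebase.next() {
    ///     let _operation = op?;
    ///     // After resolving any conflicts, commit the rebased patch.
    ///     rebase.commit(None, sig, None)?;
    /// }
    /// rebase.finish(None)?;
    /// # Ok(())
    /// # }
    /// ```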
pub fn finish(&mut self, signature: Option<&Signature<'_>>) -> Result<(), Error> { unsafe { try_call!(raw::git_rebase_finish(self.raw, signature.map(|s| s.raw()))); } Ok(()) } } impl<'rebase> Iterator for Rebase<'rebase> { type Item = Result, Error>; /// Performs the next rebase operation and returns the information about it. /// If the operation is one that applies a patch (which is any operation except /// GitRebaseOperation::Exec) then the patch will be applied and the index and /// working directory will be updated with the changes. If there are conflicts, /// you will need to address those before committing the changes. fn next(&mut self) -> Option, Error>> { let mut out = ptr::null_mut(); unsafe { try_call_iter!(raw::git_rebase_next(&mut out, self.raw)); Some(Ok(RebaseOperation::from_raw(out))) } } } impl<'repo> Binding for Rebase<'repo> { type Raw = *mut raw::git_rebase; unsafe fn from_raw(raw: *mut raw::git_rebase) -> Rebase<'repo> { Rebase { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_rebase { self.raw } } impl<'repo> Drop for Rebase<'repo> { fn drop(&mut self) { unsafe { raw::git_rebase_free(self.raw) } } } /// A rebase operation /// /// Describes a single instruction/operation to be performed during the /// rebase. #[derive(Debug, PartialEq)] pub enum RebaseOperationType { /// The given commit is to be cherry-picked. The client should commit the /// changes and continue if there are no conflicts. Pick, /// The given commit is to be cherry-picked, but the client should prompt /// the user to provide an updated commit message. Reword, /// The given commit is to be cherry-picked, but the client should stop to /// allow the user to edit the changes before committing them. Edit, /// The given commit is to be squashed into the previous commit. The commit /// message will be merged with the previous message. Squash, /// The given commit is to be squashed into the previous commit. The commit /// message from this commit will be discarded. Fixup, /// No commit will be cherry-picked. The client should run the given command /// and (if successful) continue. Exec, } impl RebaseOperationType { /// Convert from the int into an enum. Returns None if invalid. pub fn from_raw(raw: raw::git_rebase_operation_t) -> Option { match raw { raw::GIT_REBASE_OPERATION_PICK => Some(RebaseOperationType::Pick), raw::GIT_REBASE_OPERATION_REWORD => Some(RebaseOperationType::Reword), raw::GIT_REBASE_OPERATION_EDIT => Some(RebaseOperationType::Edit), raw::GIT_REBASE_OPERATION_SQUASH => Some(RebaseOperationType::Squash), raw::GIT_REBASE_OPERATION_FIXUP => Some(RebaseOperationType::Fixup), raw::GIT_REBASE_OPERATION_EXEC => Some(RebaseOperationType::Exec), _ => None, } } } /// A rebase operation /// /// Describes a single instruction/operation to be performed during the /// rebase. #[derive(Debug)] pub struct RebaseOperation<'rebase> { raw: *const raw::git_rebase_operation, _marker: marker::PhantomData>, } impl<'rebase> RebaseOperation<'rebase> { /// The type of rebase operation pub fn kind(&self) -> Option { unsafe { RebaseOperationType::from_raw((*self.raw).kind) } } /// The commit ID being cherry-picked. This will be populated for all /// operations except those of type `GIT_REBASE_OPERATION_EXEC`. pub fn id(&self) -> Oid { unsafe { Binding::from_raw(&(*self.raw).id as *const _) } } ///The executable the user has requested be run. 
This will only /// be populated for operations of type RebaseOperationType::Exec pub fn exec(&self) -> Option<&str> { unsafe { str::from_utf8(crate::opt_bytes(self, (*self.raw).exec).unwrap()).ok() } } } impl<'rebase> Binding for RebaseOperation<'rebase> { type Raw = *const raw::git_rebase_operation; unsafe fn from_raw(raw: *const raw::git_rebase_operation) -> RebaseOperation<'rebase> { RebaseOperation { raw, _marker: marker::PhantomData, } } fn raw(&self) -> *const raw::git_rebase_operation { self.raw } } #[cfg(test)] mod tests { use crate::{RebaseOperationType, RebaseOptions, Signature}; use std::{fs, path}; #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let head_target = repo.head().unwrap().target().unwrap(); let tip = repo.find_commit(head_target).unwrap(); let sig = tip.author(); let tree = tip.tree().unwrap(); // We just want to see the iteration work so we can create commits with // no changes let c1 = repo .commit(Some("refs/heads/main"), &sig, &sig, "foo", &tree, &[&tip]) .unwrap(); let c1 = repo.find_commit(c1).unwrap(); let c2 = repo .commit(Some("refs/heads/main"), &sig, &sig, "foo", &tree, &[&c1]) .unwrap(); let head = repo.find_reference("refs/heads/main").unwrap(); let branch = repo.reference_to_annotated_commit(&head).unwrap(); let upstream = repo.find_annotated_commit(tip.id()).unwrap(); let mut rebase = repo .rebase(Some(&branch), Some(&upstream), None, None) .unwrap(); assert_eq!(Some("refs/heads/main"), rebase.orig_head_name()); assert_eq!(Some(c2), rebase.orig_head_id()); assert_eq!(rebase.len(), 2); { let op = rebase.next().unwrap().unwrap(); assert_eq!(op.kind(), Some(RebaseOperationType::Pick)); assert_eq!(op.id(), c1.id()); } { let op = rebase.next().unwrap().unwrap(); assert_eq!(op.kind(), Some(RebaseOperationType::Pick)); assert_eq!(op.id(), c2); } { let op = rebase.next(); assert!(op.is_none()); } } #[test] fn keeping_original_author_msg() { let (td, repo) = crate::test::repo_init(); let head_target = repo.head().unwrap().target().unwrap(); let tip = repo.find_commit(head_target).unwrap(); let sig = Signature::now("testname", "testemail").unwrap(); let mut index = repo.index().unwrap(); fs::File::create(td.path().join("file_a")).unwrap(); index.add_path(path::Path::new("file_a")).unwrap(); index.write().unwrap(); let tree_id_a = index.write_tree().unwrap(); let tree_a = repo.find_tree(tree_id_a).unwrap(); let c1 = repo .commit(Some("refs/heads/main"), &sig, &sig, "A", &tree_a, &[&tip]) .unwrap(); let c1 = repo.find_commit(c1).unwrap(); fs::File::create(td.path().join("file_b")).unwrap(); index.add_path(path::Path::new("file_b")).unwrap(); index.write().unwrap(); let tree_id_b = index.write_tree().unwrap(); let tree_b = repo.find_tree(tree_id_b).unwrap(); let c2 = repo .commit(Some("refs/heads/main"), &sig, &sig, "B", &tree_b, &[&c1]) .unwrap(); let branch = repo.find_annotated_commit(c2).unwrap(); let upstream = repo.find_annotated_commit(tip.id()).unwrap(); let mut opts: RebaseOptions<'_> = Default::default(); let mut rebase = repo .rebase(Some(&branch), Some(&upstream), None, Some(&mut opts)) .unwrap(); assert_eq!(rebase.len(), 2); { rebase.next().unwrap().unwrap(); let id = rebase.commit(None, &sig, None).unwrap(); let commit = repo.find_commit(id).unwrap(); assert_eq!(commit.message(), Some("A")); assert_eq!(commit.author().name(), Some("testname")); assert_eq!(commit.author().email(), Some("testemail")); } { rebase.next().unwrap().unwrap(); let id = rebase.commit(None, &sig, None).unwrap(); let commit = repo.find_commit(id).unwrap(); 
assert_eq!(commit.message(), Some("B")); assert_eq!(commit.author().name(), Some("testname")); assert_eq!(commit.author().email(), Some("testemail")); } rebase.finish(None).unwrap(); } } vendor/git2/src/packbuilder.rs0000664000175000017500000003055714160055207017150 0ustar mwhudsonmwhudsonuse libc::{c_int, c_uint, c_void, size_t}; use std::marker; use std::ptr; use std::slice; use crate::util::Binding; use crate::{panic, raw, Buf, Error, Oid, Repository, Revwalk}; #[derive(PartialEq, Eq, Clone, Debug, Copy)] /// Stages that are reported by the `PackBuilder` progress callback. pub enum PackBuilderStage { /// Adding objects to the pack AddingObjects, /// Deltafication of the pack Deltafication, } pub type ProgressCb<'a> = dyn FnMut(PackBuilderStage, u32, u32) -> bool + 'a; pub type ForEachCb<'a> = dyn FnMut(&[u8]) -> bool + 'a; /// A builder for creating a packfile pub struct PackBuilder<'repo> { raw: *mut raw::git_packbuilder, _progress: Option>>>, _marker: marker::PhantomData<&'repo Repository>, } impl<'repo> PackBuilder<'repo> { /// Insert a single object. For an optimal pack it's mandatory to insert /// objects in recency order, commits followed by trees and blobs. pub fn insert_object(&mut self, id: Oid, name: Option<&str>) -> Result<(), Error> { let name = crate::opt_cstr(name)?; unsafe { try_call!(raw::git_packbuilder_insert(self.raw, id.raw(), name)); } Ok(()) } /// Insert a root tree object. This will add the tree as well as all /// referenced trees and blobs. pub fn insert_tree(&mut self, id: Oid) -> Result<(), Error> { unsafe { try_call!(raw::git_packbuilder_insert_tree(self.raw, id.raw())); } Ok(()) } /// Insert a commit object. This will add a commit as well as the completed /// referenced tree. pub fn insert_commit(&mut self, id: Oid) -> Result<(), Error> { unsafe { try_call!(raw::git_packbuilder_insert_commit(self.raw, id.raw())); } Ok(()) } /// Insert objects as given by the walk. Those commits and all objects they /// reference will be inserted into the packbuilder. pub fn insert_walk(&mut self, walk: &mut Revwalk<'_>) -> Result<(), Error> { unsafe { try_call!(raw::git_packbuilder_insert_walk(self.raw, walk.raw())); } Ok(()) } /// Recursively insert an object and its referenced objects. Insert the /// object as well as any object it references. pub fn insert_recursive(&mut self, id: Oid, name: Option<&str>) -> Result<(), Error> { let name = crate::opt_cstr(name)?; unsafe { try_call!(raw::git_packbuilder_insert_recur(self.raw, id.raw(), name)); } Ok(()) } /// Write the contents of the packfile to an in-memory buffer. The contents /// of the buffer will become a valid packfile, even though there will be /// no attached index. pub fn write_buf(&mut self, buf: &mut Buf) -> Result<(), Error> { unsafe { try_call!(raw::git_packbuilder_write_buf(buf.raw(), self.raw)); } Ok(()) } /// Create the new pack and pass each object to the callback. pub fn foreach(&mut self, mut cb: F) -> Result<(), Error> where F: FnMut(&[u8]) -> bool, { let mut cb = &mut cb as &mut ForEachCb<'_>; let ptr = &mut cb as *mut _; let foreach: raw::git_packbuilder_foreach_cb = Some(foreach_c); unsafe { try_call!(raw::git_packbuilder_foreach( self.raw, foreach, ptr as *mut _ )); } Ok(()) } /// `progress` will be called with progress information during pack /// building. Be aware that this is called inline with pack building /// operations, so performance may be affected. /// /// There can only be one progress callback attached, this will replace any /// existing one. 
See `unset_progress_callback` to remove the current /// progress callback without attaching a new one. pub fn set_progress_callback(&mut self, progress: F) -> Result<(), Error> where F: FnMut(PackBuilderStage, u32, u32) -> bool + 'repo, { let mut progress = Box::new(Box::new(progress) as Box>); let ptr = &mut *progress as *mut _; let progress_c: raw::git_packbuilder_progress = Some(progress_c); unsafe { try_call!(raw::git_packbuilder_set_callbacks( self.raw, progress_c, ptr as *mut _ )); } self._progress = Some(progress); Ok(()) } /// Remove the current progress callback. See `set_progress_callback` to /// set the progress callback. pub fn unset_progress_callback(&mut self) -> Result<(), Error> { unsafe { try_call!(raw::git_packbuilder_set_callbacks( self.raw, None, ptr::null_mut() )); self._progress = None; } Ok(()) } /// Set the number of threads to be used. /// /// Returns the number of threads to be used. pub fn set_threads(&mut self, threads: u32) -> u32 { unsafe { raw::git_packbuilder_set_threads(self.raw, threads) } } /// Get the total number of objects the packbuilder will write out. pub fn object_count(&self) -> usize { unsafe { raw::git_packbuilder_object_count(self.raw) } } /// Get the number of objects the packbuilder has already written out. pub fn written(&self) -> usize { unsafe { raw::git_packbuilder_written(self.raw) } } /// Get the packfile's hash. A packfile's name is derived from the sorted /// hashing of all object names. This is only correct after the packfile /// has been written. pub fn hash(&self) -> Option { if self.object_count() == 0 { unsafe { Some(Binding::from_raw(raw::git_packbuilder_hash(self.raw))) } } else { None } } } impl<'repo> Binding for PackBuilder<'repo> { type Raw = *mut raw::git_packbuilder; unsafe fn from_raw(ptr: *mut raw::git_packbuilder) -> PackBuilder<'repo> { PackBuilder { raw: ptr, _progress: None, _marker: marker::PhantomData, } } fn raw(&self) -> *mut raw::git_packbuilder { self.raw } } impl<'repo> Drop for PackBuilder<'repo> { fn drop(&mut self) { unsafe { raw::git_packbuilder_set_callbacks(self.raw, None, ptr::null_mut()); raw::git_packbuilder_free(self.raw); } } } impl Binding for PackBuilderStage { type Raw = raw::git_packbuilder_stage_t; unsafe fn from_raw(raw: raw::git_packbuilder_stage_t) -> PackBuilderStage { match raw { raw::GIT_PACKBUILDER_ADDING_OBJECTS => PackBuilderStage::AddingObjects, raw::GIT_PACKBUILDER_DELTAFICATION => PackBuilderStage::Deltafication, _ => panic!("Unknown git diff binary kind"), } } fn raw(&self) -> raw::git_packbuilder_stage_t { match *self { PackBuilderStage::AddingObjects => raw::GIT_PACKBUILDER_ADDING_OBJECTS, PackBuilderStage::Deltafication => raw::GIT_PACKBUILDER_DELTAFICATION, } } } extern "C" fn foreach_c(buf: *const c_void, size: size_t, data: *mut c_void) -> c_int { unsafe { let buf = slice::from_raw_parts(buf as *const u8, size as usize); let r = panic::wrap(|| { let data = data as *mut &mut ForEachCb<'_>; (*data)(buf) }); if r == Some(true) { 0 } else { -1 } } } extern "C" fn progress_c( stage: raw::git_packbuilder_stage_t, current: c_uint, total: c_uint, data: *mut c_void, ) -> c_int { unsafe { let stage = Binding::from_raw(stage); let r = panic::wrap(|| { let data = data as *mut Box>; (*data)(stage, current, total) }); if r == Some(true) { 0 } else { -1 } } } #[cfg(test)] mod tests { use crate::Buf; fn pack_header(len: u8) -> Vec { [].iter() .chain(b"PACK") // signature .chain(&[0, 0, 0, 2]) // version number .chain(&[0, 0, 0, len]) // number of objects .cloned() .collect::>() } fn 
empty_pack_header() -> Vec { pack_header(0) .iter() .chain(&[ 0x02, 0x9d, 0x08, 0x82, 0x3b, // ^ 0xd8, 0xa8, 0xea, 0xb5, 0x10, // | SHA-1 of the zero 0xad, 0x6a, 0xc7, 0x5c, 0x82, // | object pack header 0x3c, 0xfd, 0x3e, 0xd3, 0x1e, ]) // v .cloned() .collect::>() } #[test] fn smoke() { let (_td, repo) = crate::test::repo_init(); let _builder = t!(repo.packbuilder()); } #[test] fn smoke_write_buf() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let mut buf = Buf::new(); t!(builder.write_buf(&mut buf)); assert!(builder.hash().unwrap().is_zero()); assert_eq!(&*buf, &*empty_pack_header()); } #[test] fn smoke_foreach() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let mut buf = Vec::::new(); t!(builder.foreach(|bytes| { buf.extend(bytes); true })); assert_eq!(&*buf, &*empty_pack_header()); } #[test] fn insert_write_buf() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let mut buf = Buf::new(); let (commit, _tree) = crate::test::commit(&repo); t!(builder.insert_object(commit, None)); assert_eq!(builder.object_count(), 1); t!(builder.write_buf(&mut buf)); // Just check that the correct number of objects are written assert_eq!(&buf[0..12], &*pack_header(1)); } #[test] fn insert_tree_write_buf() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let mut buf = Buf::new(); let (_commit, tree) = crate::test::commit(&repo); // will insert the tree itself and the blob, 2 objects t!(builder.insert_tree(tree)); assert_eq!(builder.object_count(), 2); t!(builder.write_buf(&mut buf)); // Just check that the correct number of objects are written assert_eq!(&buf[0..12], &*pack_header(2)); } #[test] fn insert_commit_write_buf() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let mut buf = Buf::new(); let (commit, _tree) = crate::test::commit(&repo); // will insert the commit, its tree and the blob, 3 objects t!(builder.insert_commit(commit)); assert_eq!(builder.object_count(), 3); t!(builder.write_buf(&mut buf)); // Just check that the correct number of objects are written assert_eq!(&buf[0..12], &*pack_header(3)); } #[test] fn progress_callback() { let mut progress_called = false; { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let (commit, _tree) = crate::test::commit(&repo); t!(builder.set_progress_callback(|_, _, _| { progress_called = true; true })); t!(builder.insert_commit(commit)); t!(builder.write_buf(&mut Buf::new())); } assert_eq!(progress_called, true); } #[test] fn clear_progress_callback() { let mut progress_called = false; { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let (commit, _tree) = crate::test::commit(&repo); t!(builder.set_progress_callback(|_, _, _| { progress_called = true; true })); t!(builder.unset_progress_callback()); t!(builder.insert_commit(commit)); t!(builder.write_buf(&mut Buf::new())); } assert_eq!(progress_called, false); } #[test] fn set_threads() { let (_td, repo) = crate::test::repo_init(); let mut builder = t!(repo.packbuilder()); let used = builder.set_threads(4); // Will be 1 if not compiled with threading. assert!(used == 1 || used == 4); } } vendor/git2/src/version.rs0000664000175000017500000000546314160055207016346 0ustar mwhudsonmwhudsonuse crate::raw; use libc::c_int; use std::fmt; /// Version information about libgit2 and the capabilities it supports. 
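///
/// A small sketch of querying it at runtime:
///
/// ```
/// let version = git2::Version::get();
/// let (major, minor, rev) = version.libgit2_version();
/// println!("libgit2 {}.{}.{}, threads: {}", major, minor, rev, version.threads());
/// ```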
pub struct Version { major: c_int, minor: c_int, rev: c_int, features: c_int, } macro_rules! flag_test { ($features:expr, $flag:expr) => { ($features as u32 & $flag as u32) != 0 }; } impl Version { /// Returns a [`Version`] which provides information about libgit2. pub fn get() -> Version { let mut v = Version { major: 0, minor: 0, rev: 0, features: 0, }; unsafe { raw::git_libgit2_version(&mut v.major, &mut v.minor, &mut v.rev); v.features = raw::git_libgit2_features(); } v } /// Returns the version of libgit2. /// /// The return value is a tuple of `(major, minor, rev)` pub fn libgit2_version(&self) -> (u32, u32, u32) { (self.major as u32, self.minor as u32, self.rev as u32) } /// Returns the version of the libgit2-sys crate. pub fn crate_version(&self) -> &'static str { env!("CARGO_PKG_VERSION") } /// Returns true if this was built with the vendored version of libgit2. pub fn vendored(&self) -> bool { raw::vendored() } /// Returns true if libgit2 was built thread-aware and can be safely used /// from multiple threads. pub fn threads(&self) -> bool { flag_test!(self.features, raw::GIT_FEATURE_THREADS) } /// Returns true if libgit2 was built with and linked against a TLS implementation. /// /// Custom TLS streams may still be added by the user to support HTTPS /// regardless of this. pub fn https(&self) -> bool { flag_test!(self.features, raw::GIT_FEATURE_HTTPS) } /// Returns true if libgit2 was built with and linked against libssh2. /// /// A custom transport may still be added by the user to support libssh2 /// regardless of this. pub fn ssh(&self) -> bool { flag_test!(self.features, raw::GIT_FEATURE_SSH) } /// Returns true if libgit2 was built with support for sub-second /// resolution in file modification times. pub fn nsec(&self) -> bool { flag_test!(self.features, raw::GIT_FEATURE_NSEC) } } impl fmt::Debug for Version { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { let mut f = f.debug_struct("Version"); f.field("major", &self.major) .field("minor", &self.minor) .field("rev", &self.rev) .field("crate_version", &self.crate_version()) .field("vendored", &self.vendored()) .field("threads", &self.threads()) .field("https", &self.https()) .field("ssh", &self.ssh()) .field("nsec", &self.nsec()); f.finish() } } vendor/git2/src/signature.rs0000664000175000017500000001242414160055207016655 0ustar mwhudsonmwhudsonuse libc; use std::ffi::CString; use std::fmt; use std::marker; use std::mem; use std::ptr; use std::str; use crate::util::Binding; use crate::{raw, Error, Time}; /// A Signature is used to indicate authorship of various actions throughout the /// library. /// /// Signatures contain a name, email, and timestamp. All fields can be specified /// with `new` while the `now` constructor omits the timestamp. The /// [`Repository::signature`] method can be used to create a default signature /// with name and email values read from the configuration. /// /// [`Repository::signature`]: struct.Repository.html#method.signature pub struct Signature<'a> { raw: *mut raw::git_signature, _marker: marker::PhantomData<&'a str>, owned: bool, } impl<'a> Signature<'a> { /// Create a new action signature with a timestamp of 'now'. /// /// See `new` for more information pub fn now(name: &str, email: &str) -> Result, Error> { crate::init(); let mut ret = ptr::null_mut(); let name = CString::new(name)?; let email = CString::new(email)?; unsafe { try_call!(raw::git_signature_now(&mut ret, name, email)); Ok(Binding::from_raw(ret)) } } /// Create a new action signature. 
/// /// The `time` specified is in seconds since the epoch, and the `offset` is /// the time zone offset in minutes. /// /// Returns error if either `name` or `email` contain angle brackets. pub fn new(name: &str, email: &str, time: &Time) -> Result, Error> { crate::init(); let mut ret = ptr::null_mut(); let name = CString::new(name)?; let email = CString::new(email)?; unsafe { try_call!(raw::git_signature_new( &mut ret, name, email, time.seconds() as raw::git_time_t, time.offset_minutes() as libc::c_int )); Ok(Binding::from_raw(ret)) } } /// Gets the name on the signature. /// /// Returns `None` if the name is not valid utf-8 pub fn name(&self) -> Option<&str> { str::from_utf8(self.name_bytes()).ok() } /// Gets the name on the signature as a byte slice. pub fn name_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, (*self.raw).name).unwrap() } } /// Gets the email on the signature. /// /// Returns `None` if the email is not valid utf-8 pub fn email(&self) -> Option<&str> { str::from_utf8(self.email_bytes()).ok() } /// Gets the email on the signature as a byte slice. pub fn email_bytes(&self) -> &[u8] { unsafe { crate::opt_bytes(self, (*self.raw).email).unwrap() } } /// Get the `when` of this signature. pub fn when(&self) -> Time { unsafe { Binding::from_raw((*self.raw).when) } } /// Convert a signature of any lifetime into an owned signature with a /// static lifetime. pub fn to_owned(&self) -> Signature<'static> { unsafe { let me = mem::transmute::<&Signature<'a>, &Signature<'static>>(self); me.clone() } } } impl<'a> Binding for Signature<'a> { type Raw = *mut raw::git_signature; unsafe fn from_raw(raw: *mut raw::git_signature) -> Signature<'a> { Signature { raw, _marker: marker::PhantomData, owned: true, } } fn raw(&self) -> *mut raw::git_signature { self.raw } } /// Creates a new signature from the give raw pointer, tied to the lifetime /// of the given object. /// /// This function is unsafe as there is no guarantee that `raw` is valid for /// `'a` nor if it's a valid pointer. pub unsafe fn from_raw_const<'b, T>(_lt: &'b T, raw: *const raw::git_signature) -> Signature<'b> { Signature { raw: raw as *mut raw::git_signature, _marker: marker::PhantomData, owned: false, } } impl Clone for Signature<'static> { fn clone(&self) -> Signature<'static> { // TODO: can this be defined for 'a and just do a plain old copy if the // lifetime isn't static? let mut raw = ptr::null_mut(); let rc = unsafe { raw::git_signature_dup(&mut raw, &*self.raw) }; assert_eq!(rc, 0); unsafe { Binding::from_raw(raw) } } } impl<'a> Drop for Signature<'a> { fn drop(&mut self) { if self.owned { unsafe { raw::git_signature_free(self.raw) } } } } impl<'a> fmt::Display for Signature<'a> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!( f, "{} <{}>", String::from_utf8_lossy(self.name_bytes()), String::from_utf8_lossy(self.email_bytes()) ) } } #[cfg(test)] mod tests { use crate::{Signature, Time}; #[test] fn smoke() { Signature::new("foo", "bar", &Time::new(89, 0)).unwrap(); Signature::now("foo", "bar").unwrap(); assert!(Signature::new("", "bar", &Time::new(89, 0)).is_err()); assert!(Signature::now("", "bar").is_err()); let s = Signature::now("foo", "bar").unwrap(); assert_eq!(s.name(), Some("foo")); assert_eq!(s.email(), Some("bar")); drop(s.clone()); drop(s.to_owned()); } } vendor/git2/src/lib.rs0000664000175000017500000015561114160055207015430 0ustar mwhudsonmwhudson//! # libgit2 bindings for Rust //! //! This library contains bindings to the [libgit2][1] C library which is used //! 
to manage git repositories. The library itself is a work in progress and is //! likely lacking some bindings here and there, so be warned. //! //! [1]: https://libgit2.github.com/ //! //! The git2-rs library strives to be as close to libgit2 as possible, but also //! strives to make using libgit2 as safe as possible. All resource management //! is automatic as well as adding strong types to all interfaces (including //! `Result`) //! //! ## Creating a `Repository` //! //! The `Repository` is the source from which almost all other objects in git-rs //! are spawned. A repository can be created through opening, initializing, or //! cloning. //! //! ### Initializing a new repository //! //! The `init` method will create a new repository, assuming one does not //! already exist. //! //! ```no_run //! # #![allow(unstable)] //! use git2::Repository; //! //! let repo = match Repository::init("/path/to/a/repo") { //! Ok(repo) => repo, //! Err(e) => panic!("failed to init: {}", e), //! }; //! ``` //! //! ### Opening an existing repository //! //! ```no_run //! # #![allow(unstable)] //! use git2::Repository; //! //! let repo = match Repository::open("/path/to/a/repo") { //! Ok(repo) => repo, //! Err(e) => panic!("failed to open: {}", e), //! }; //! ``` //! //! ### Cloning an existing repository //! //! ```no_run //! # #![allow(unstable)] //! use git2::Repository; //! //! let url = "https://github.com/alexcrichton/git2-rs"; //! let repo = match Repository::clone(url, "/path/to/a/repo") { //! Ok(repo) => repo, //! Err(e) => panic!("failed to clone: {}", e), //! }; //! ``` //! //! To clone using SSH, refer to [RepoBuilder](./build/struct.RepoBuilder.html). //! //! ## Working with a `Repository` //! //! All derivative objects, references, etc are attached to the lifetime of the //! source `Repository`, to ensure that they do not outlive the repository //! itself. 
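//!
//! As a brief, illustrative sketch of that lifetime tie-in (the path is a
//! placeholder), resolving `HEAD` down to a commit borrows from the
//! `Repository` the whole way:
//!
//! ```no_run
//! # #![allow(unstable)]
//! use git2::Repository;
//!
//! let repo = Repository::open("/path/to/a/repo").expect("failed to open");
//! // `head` and `commit` borrow from `repo` and cannot outlive it.
//! let head = repo.head().expect("failed to resolve HEAD");
//! let commit = head.peel_to_commit().expect("HEAD does not point at a commit");
//! println!("HEAD is at {}", commit.id());
//! ```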
#![doc(html_root_url = "https://docs.rs/git2/0.13")] #![allow(trivial_numeric_casts, trivial_casts)] #![deny(missing_docs)] #![warn(rust_2018_idioms)] #![cfg_attr(test, deny(warnings))] use bitflags::bitflags; use libgit2_sys as raw; use std::ffi::{CStr, CString}; use std::fmt; use std::str; use std::sync::Once; pub use crate::apply::{ApplyLocation, ApplyOptions}; pub use crate::attr::AttrValue; pub use crate::blame::{Blame, BlameHunk, BlameIter, BlameOptions}; pub use crate::blob::{Blob, BlobWriter}; pub use crate::branch::{Branch, Branches}; pub use crate::buf::Buf; pub use crate::cherrypick::CherrypickOptions; pub use crate::commit::{Commit, Parents}; pub use crate::config::{Config, ConfigEntries, ConfigEntry}; pub use crate::cred::{Cred, CredentialHelper}; pub use crate::describe::{Describe, DescribeFormatOptions, DescribeOptions}; pub use crate::diff::{Deltas, Diff, DiffDelta, DiffFile, DiffOptions}; pub use crate::diff::{DiffBinary, DiffBinaryFile, DiffBinaryKind}; pub use crate::diff::{DiffFindOptions, DiffHunk, DiffLine, DiffLineType, DiffStats}; pub use crate::error::Error; pub use crate::index::{ Index, IndexConflict, IndexConflicts, IndexEntries, IndexEntry, IndexMatchedPath, }; pub use crate::indexer::{IndexerProgress, Progress}; pub use crate::mailmap::Mailmap; pub use crate::mempack::Mempack; pub use crate::merge::{AnnotatedCommit, MergeOptions}; pub use crate::message::{message_prettify, DEFAULT_COMMENT_CHAR}; pub use crate::note::{Note, Notes}; pub use crate::object::Object; pub use crate::odb::{Odb, OdbObject, OdbPackwriter, OdbReader, OdbWriter}; pub use crate::oid::Oid; pub use crate::packbuilder::{PackBuilder, PackBuilderStage}; pub use crate::patch::Patch; pub use crate::pathspec::{Pathspec, PathspecFailedEntries, PathspecMatchList}; pub use crate::pathspec::{PathspecDiffEntries, PathspecEntries}; pub use crate::proxy_options::ProxyOptions; pub use crate::rebase::{Rebase, RebaseOperation, RebaseOperationType, RebaseOptions}; pub use crate::reference::{Reference, ReferenceNames, References}; pub use crate::reflog::{Reflog, ReflogEntry, ReflogIter}; pub use crate::refspec::Refspec; pub use crate::remote::{ FetchOptions, PushOptions, Refspecs, Remote, RemoteConnection, RemoteHead, }; pub use crate::remote_callbacks::{Credentials, RemoteCallbacks}; pub use crate::remote_callbacks::{TransportMessage, UpdateTips}; pub use crate::repo::{Repository, RepositoryInitOptions}; pub use crate::revert::RevertOptions; pub use crate::revspec::Revspec; pub use crate::revwalk::Revwalk; pub use crate::signature::Signature; pub use crate::stash::{StashApplyOptions, StashApplyProgressCb, StashCb}; pub use crate::status::{StatusEntry, StatusIter, StatusOptions, StatusShow, Statuses}; pub use crate::submodule::{Submodule, SubmoduleUpdateOptions}; pub use crate::tag::Tag; pub use crate::time::{IndexTime, Time}; pub use crate::tracing::{trace_set, TraceLevel}; pub use crate::transaction::Transaction; pub use crate::tree::{Tree, TreeEntry, TreeIter, TreeWalkMode, TreeWalkResult}; pub use crate::treebuilder::TreeBuilder; pub use crate::util::IntoCString; pub use crate::version::Version; pub use crate::worktree::{Worktree, WorktreeAddOptions, WorktreeLockStatus, WorktreePruneOptions}; // Create a convinience method on bitflag struct which checks the given flag macro_rules! is_bit_set { ($name:ident, $flag:expr) => { #[allow(missing_docs)] pub fn $name(&self) -> bool { self.intersects($flag) } }; } /// An enumeration of possible errors that can happen when working with a git /// repository. 
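///
/// As an illustrative sketch (the path is a placeholder), these codes are
/// normally inspected through `Error::code`:
///
/// ```no_run
/// use git2::{ErrorCode, Repository};
///
/// match Repository::open("/path/to/a/repo") {
///     Ok(repo) => println!("opened {:?}", repo.path()),
///     // `Error::code` maps libgit2's numeric return value back onto this enum.
///     Err(e) if e.code() == ErrorCode::NotFound => println!("no repository here"),
///     Err(e) => panic!("unexpected failure: {}", e),
/// }
/// ```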
// Note: We omit a few native error codes, as they are unlikely to be propagated // to the library user. Currently: // // * GIT_EPASSTHROUGH // * GIT_ITEROVER // * GIT_RETRY #[derive(PartialEq, Eq, Clone, Debug, Copy)] pub enum ErrorCode { /// Generic error GenericError, /// Requested object could not be found NotFound, /// Object exists preventing operation Exists, /// More than one object matches Ambiguous, /// Output buffer too short to hold data BufSize, /// User-generated error User, /// Operation not allowed on bare repository BareRepo, /// HEAD refers to branch with no commits UnbornBranch, /// Merge in progress prevented operation Unmerged, /// Reference was not fast-forwardable NotFastForward, /// Name/ref spec was not in a valid format InvalidSpec, /// Checkout conflicts prevented operation Conflict, /// Lock file prevented operation Locked, /// Reference value does not match expected Modified, /// Authentication error Auth, /// Server certificate is invalid Certificate, /// Patch/merge has already been applied Applied, /// The requested peel operation is not possible Peel, /// Unexpected EOF Eof, /// Invalid operation or input Invalid, /// Uncommitted changes in index prevented operation Uncommitted, /// Operation was not valid for a directory Directory, /// A merge conflict exists and cannot continue MergeConflict, /// Hashsum mismatch in object HashsumMismatch, /// Unsaved changes in the index would be overwritten IndexDirty, /// Patch application failed ApplyFail, } /// An enumeration of possible categories of things that can have /// errors when working with a git repository. #[derive(PartialEq, Eq, Clone, Debug, Copy)] pub enum ErrorClass { /// Uncategorized None, /// Out of memory or insufficient allocated space NoMemory, /// Syscall or standard system library error Os, /// Invalid input Invalid, /// Error resolving or manipulating a reference Reference, /// ZLib failure Zlib, /// Bad repository state Repository, /// Bad configuration Config, /// Regex failure Regex, /// Bad object Odb, /// Invalid index data Index, /// Error creating or obtaining an object Object, /// Network error Net, /// Error manpulating a tag Tag, /// Invalid value in tree Tree, /// Hashing or packing error Indexer, /// Error from SSL Ssl, /// Error involing submodules Submodule, /// Threading error Thread, /// Error manipulating a stash Stash, /// Checkout failure Checkout, /// Invalid FETCH_HEAD FetchHead, /// Merge failure Merge, /// SSH failure Ssh, /// Error manipulating filters Filter, /// Error reverting commit Revert, /// Error from a user callback Callback, /// Error cherry-picking commit CherryPick, /// Can't describe object Describe, /// Error during rebase Rebase, /// Filesystem-related error Filesystem, /// Invalid patch data Patch, /// Error involving worktrees Worktree, /// Hash library error or SHA-1 collision Sha1, /// HTTP error Http, } /// A listing of the possible states that a repository can be in. #[derive(PartialEq, Eq, Clone, Debug, Copy)] #[allow(missing_docs)] pub enum RepositoryState { Clean, Merge, Revert, RevertSequence, CherryPick, CherryPickSequence, Bisect, Rebase, RebaseInteractive, RebaseMerge, ApplyMailbox, ApplyMailboxOrRebase, } /// An enumeration of the possible directions for a remote. #[derive(Copy, Clone)] pub enum Direction { /// Data will be fetched (read) from this remote. Fetch, /// Data will be pushed (written) to this remote. Push, } /// An enumeration of the operations that can be performed for the `reset` /// method on a `Repository`. 
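///
/// A minimal sketch (the revision string and path are placeholders) of how a
/// reset type is passed to `Repository::reset`:
///
/// ```no_run
/// use git2::{Repository, ResetType};
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let target = repo.revparse_single("HEAD~1").expect("failed to resolve revision");
/// // A hard reset moves HEAD and also resets the index and working tree.
/// repo.reset(&target, ResetType::Hard, None).expect("reset failed");
/// ```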
#[derive(Copy, Clone)] pub enum ResetType { /// Move the head to the given commit. Soft, /// Soft plus reset the index to the commit. Mixed, /// Mixed plus changes in the working tree are discarded. Hard, } /// An enumeration all possible kinds objects may have. #[derive(PartialEq, Eq, Copy, Clone, Debug)] pub enum ObjectType { /// Any kind of git object Any, /// An object which corresponds to a git commit Commit, /// An object which corresponds to a git tree Tree, /// An object which corresponds to a git blob Blob, /// An object which corresponds to a git tag Tag, } /// An enumeration of all possile kinds of references. #[derive(PartialEq, Eq, Copy, Clone, Debug)] pub enum ReferenceType { /// A reference which points at an object id. Direct, /// A reference which points at another reference. Symbolic, } /// An enumeration for the possible types of branches #[derive(PartialEq, Eq, Debug, Copy, Clone)] pub enum BranchType { /// A local branch not on a remote. Local, /// A branch for a remote. Remote, } /// An enumeration of the possible priority levels of a config file. /// /// The levels corresponding to the escalation logic (higher to lower) when /// searching for config entries. #[derive(PartialEq, Eq, Debug, Copy, Clone)] pub enum ConfigLevel { /// System-wide on Windows, for compatibility with portable git ProgramData = 1, /// System-wide configuration file, e.g. /etc/gitconfig System, /// XDG-compatible configuration file, e.g. ~/.config/git/config XDG, /// User-specific configuration, e.g. ~/.gitconfig Global, /// Repository specific config, e.g. $PWD/.git/config Local, /// Application specific configuration file App, /// Highest level available Highest = -1, } /// Merge file favor options for `MergeOptions` instruct the file-level /// merging functionality how to deal with conflicting regions of the files. #[derive(PartialEq, Eq, Debug, Copy, Clone)] pub enum FileFavor { /// When a region of a file is changed in both branches, a conflict will be /// recorded in the index so that git_checkout can produce a merge file with /// conflict markers in the working directory. This is the default. Normal, /// When a region of a file is changed in both branches, the file created /// in the index will contain the "ours" side of any conflicting region. /// The index will not record a conflict. Ours, /// When a region of a file is changed in both branches, the file created /// in the index will contain the "theirs" side of any conflicting region. /// The index will not record a conflict. Theirs, /// When a region of a file is changed in both branches, the file created /// in the index will contain each unique line from each side, which has /// the result of combining both files. The index will not record a conflict. Union, } bitflags! { /// Orderings that may be specified for Revwalk iteration. pub struct Sort: u32 { /// Sort the repository contents in no particular ordering. /// /// This sorting is arbitrary, implementation-specific, and subject to /// change at any time. This is the default sorting for new walkers. const NONE = raw::GIT_SORT_NONE as u32; /// Sort the repository contents in topological order (children before /// parents). /// /// This sorting mode can be combined with time sorting. const TOPOLOGICAL = raw::GIT_SORT_TOPOLOGICAL as u32; /// Sort the repository contents by commit time. /// /// This sorting mode can be combined with topological sorting. const TIME = raw::GIT_SORT_TIME as u32; /// Iterate through the repository contents in reverse order. 
/// /// This sorting mode can be combined with any others. const REVERSE = raw::GIT_SORT_REVERSE as u32; } } impl Sort { is_bit_set!(is_none, Sort::NONE); is_bit_set!(is_topological, Sort::TOPOLOGICAL); is_bit_set!(is_time, Sort::TIME); is_bit_set!(is_reverse, Sort::REVERSE); } bitflags! { /// Types of credentials that can be requested by a credential callback. pub struct CredentialType: u32 { #[allow(missing_docs)] const USER_PASS_PLAINTEXT = raw::GIT_CREDTYPE_USERPASS_PLAINTEXT as u32; #[allow(missing_docs)] const SSH_KEY = raw::GIT_CREDTYPE_SSH_KEY as u32; #[allow(missing_docs)] const SSH_MEMORY = raw::GIT_CREDTYPE_SSH_MEMORY as u32; #[allow(missing_docs)] const SSH_CUSTOM = raw::GIT_CREDTYPE_SSH_CUSTOM as u32; #[allow(missing_docs)] const DEFAULT = raw::GIT_CREDTYPE_DEFAULT as u32; #[allow(missing_docs)] const SSH_INTERACTIVE = raw::GIT_CREDTYPE_SSH_INTERACTIVE as u32; #[allow(missing_docs)] const USERNAME = raw::GIT_CREDTYPE_USERNAME as u32; } } impl CredentialType { is_bit_set!(is_user_pass_plaintext, CredentialType::USER_PASS_PLAINTEXT); is_bit_set!(is_ssh_key, CredentialType::SSH_KEY); is_bit_set!(is_ssh_memory, CredentialType::SSH_MEMORY); is_bit_set!(is_ssh_custom, CredentialType::SSH_CUSTOM); is_bit_set!(is_default, CredentialType::DEFAULT); is_bit_set!(is_ssh_interactive, CredentialType::SSH_INTERACTIVE); is_bit_set!(is_username, CredentialType::USERNAME); } impl Default for CredentialType { fn default() -> Self { CredentialType::DEFAULT } } bitflags! { /// Flags for the `flags` field of an IndexEntry. pub struct IndexEntryFlag: u16 { /// Set when the `extended_flags` field is valid. const EXTENDED = raw::GIT_INDEX_ENTRY_EXTENDED as u16; /// "Assume valid" flag const VALID = raw::GIT_INDEX_ENTRY_VALID as u16; } } impl IndexEntryFlag { is_bit_set!(is_extended, IndexEntryFlag::EXTENDED); is_bit_set!(is_valid, IndexEntryFlag::VALID); } bitflags! { /// Flags for the `extended_flags` field of an IndexEntry. pub struct IndexEntryExtendedFlag: u16 { /// An "intent to add" entry from "git add -N" const INTENT_TO_ADD = raw::GIT_INDEX_ENTRY_INTENT_TO_ADD as u16; /// Skip the associated worktree file, for sparse checkouts const SKIP_WORKTREE = raw::GIT_INDEX_ENTRY_SKIP_WORKTREE as u16; #[allow(missing_docs)] const UPTODATE = raw::GIT_INDEX_ENTRY_UPTODATE as u16; } } impl IndexEntryExtendedFlag { is_bit_set!(is_intent_to_add, IndexEntryExtendedFlag::INTENT_TO_ADD); is_bit_set!(is_skip_worktree, IndexEntryExtendedFlag::SKIP_WORKTREE); is_bit_set!(is_up_to_date, IndexEntryExtendedFlag::UPTODATE); } bitflags! { /// Flags for APIs that add files matching pathspec pub struct IndexAddOption: u32 { #[allow(missing_docs)] const DEFAULT = raw::GIT_INDEX_ADD_DEFAULT as u32; #[allow(missing_docs)] const FORCE = raw::GIT_INDEX_ADD_FORCE as u32; #[allow(missing_docs)] const DISABLE_PATHSPEC_MATCH = raw::GIT_INDEX_ADD_DISABLE_PATHSPEC_MATCH as u32; #[allow(missing_docs)] const CHECK_PATHSPEC = raw::GIT_INDEX_ADD_CHECK_PATHSPEC as u32; } } impl IndexAddOption { is_bit_set!(is_default, IndexAddOption::DEFAULT); is_bit_set!(is_force, IndexAddOption::FORCE); is_bit_set!( is_disable_pathspec_match, IndexAddOption::DISABLE_PATHSPEC_MATCH ); is_bit_set!(is_check_pathspec, IndexAddOption::CHECK_PATHSPEC); } impl Default for IndexAddOption { fn default() -> Self { IndexAddOption::DEFAULT } } bitflags! { /// Flags for `Repository::open_ext` pub struct RepositoryOpenFlags: u32 { /// Only open the specified path; don't walk upward searching. 
const NO_SEARCH = raw::GIT_REPOSITORY_OPEN_NO_SEARCH as u32; /// Search across filesystem boundaries. const CROSS_FS = raw::GIT_REPOSITORY_OPEN_CROSS_FS as u32; /// Force opening as bare repository, and defer loading its config. const BARE = raw::GIT_REPOSITORY_OPEN_BARE as u32; /// Don't try appending `/.git` to the specified repository path. const NO_DOTGIT = raw::GIT_REPOSITORY_OPEN_NO_DOTGIT as u32; /// Respect environment variables like `$GIT_DIR`. const FROM_ENV = raw::GIT_REPOSITORY_OPEN_FROM_ENV as u32; } } impl RepositoryOpenFlags { is_bit_set!(is_no_search, RepositoryOpenFlags::NO_SEARCH); is_bit_set!(is_cross_fs, RepositoryOpenFlags::CROSS_FS); is_bit_set!(is_bare, RepositoryOpenFlags::BARE); is_bit_set!(is_no_dotgit, RepositoryOpenFlags::NO_DOTGIT); is_bit_set!(is_from_env, RepositoryOpenFlags::FROM_ENV); } bitflags! { /// Flags for the return value of `Repository::revparse` pub struct RevparseMode: u32 { /// The spec targeted a single object const SINGLE = raw::GIT_REVPARSE_SINGLE as u32; /// The spec targeted a range of commits const RANGE = raw::GIT_REVPARSE_RANGE as u32; /// The spec used the `...` operator, which invokes special semantics. const MERGE_BASE = raw::GIT_REVPARSE_MERGE_BASE as u32; } } impl RevparseMode { is_bit_set!(is_no_single, RevparseMode::SINGLE); is_bit_set!(is_range, RevparseMode::RANGE); is_bit_set!(is_merge_base, RevparseMode::MERGE_BASE); } bitflags! { /// The results of `merge_analysis` indicating the merge opportunities. pub struct MergeAnalysis: u32 { /// No merge is possible. const ANALYSIS_NONE = raw::GIT_MERGE_ANALYSIS_NONE as u32; /// A "normal" merge; both HEAD and the given merge input have diverged /// from their common ancestor. The divergent commits must be merged. const ANALYSIS_NORMAL = raw::GIT_MERGE_ANALYSIS_NORMAL as u32; /// All given merge inputs are reachable from HEAD, meaning the /// repository is up-to-date and no merge needs to be performed. const ANALYSIS_UP_TO_DATE = raw::GIT_MERGE_ANALYSIS_UP_TO_DATE as u32; /// The given merge input is a fast-forward from HEAD and no merge /// needs to be performed. Instead, the client can check out the /// given merge input. const ANALYSIS_FASTFORWARD = raw::GIT_MERGE_ANALYSIS_FASTFORWARD as u32; /// The HEAD of the current repository is "unborn" and does not point to /// a valid commit. No merge can be performed, but the caller may wish /// to simply set HEAD to the target commit(s). const ANALYSIS_UNBORN = raw::GIT_MERGE_ANALYSIS_UNBORN as u32; } } impl MergeAnalysis { is_bit_set!(is_none, MergeAnalysis::ANALYSIS_NONE); is_bit_set!(is_normal, MergeAnalysis::ANALYSIS_NORMAL); is_bit_set!(is_up_to_date, MergeAnalysis::ANALYSIS_UP_TO_DATE); is_bit_set!(is_fast_forward, MergeAnalysis::ANALYSIS_FASTFORWARD); is_bit_set!(is_unborn, MergeAnalysis::ANALYSIS_UNBORN); } bitflags! { /// The user's stated preference for merges. pub struct MergePreference: u32 { /// No configuration was found that suggests a preferred behavior for /// merge. const NONE = raw::GIT_MERGE_PREFERENCE_NONE as u32; /// There is a `merge.ff=false` configuration setting, suggesting that /// the user does not want to allow a fast-forward merge. const NO_FAST_FORWARD = raw::GIT_MERGE_PREFERENCE_NO_FASTFORWARD as u32; /// There is a `merge.ff=only` configuration setting, suggesting that /// the user only wants fast-forward merges. 
const FASTFORWARD_ONLY = raw::GIT_MERGE_PREFERENCE_FASTFORWARD_ONLY as u32; } } impl MergePreference { is_bit_set!(is_none, MergePreference::NONE); is_bit_set!(is_no_fast_forward, MergePreference::NO_FAST_FORWARD); is_bit_set!(is_fastforward_only, MergePreference::FASTFORWARD_ONLY); } #[cfg(test)] #[macro_use] mod test; #[macro_use] mod panic; mod attr; mod call; mod util; pub mod build; pub mod cert; pub mod oid_array; pub mod opts; pub mod string_array; pub mod transport; mod apply; mod blame; mod blob; mod branch; mod buf; mod cherrypick; mod commit; mod config; mod cred; mod describe; mod diff; mod error; mod index; mod indexer; mod mailmap; mod mempack; mod merge; mod message; mod note; mod object; mod odb; mod oid; mod packbuilder; mod patch; mod pathspec; mod proxy_options; mod rebase; mod reference; mod reflog; mod refspec; mod remote; mod remote_callbacks; mod repo; mod revert; mod revspec; mod revwalk; mod signature; mod stash; mod status; mod submodule; mod tag; mod tagforeach; mod time; mod tracing; mod transaction; mod tree; mod treebuilder; mod version; mod worktree; fn init() { static INIT: Once = Once::new(); INIT.call_once(|| { openssl_env_init(); }); raw::init(); } #[cfg(all( unix, not(target_os = "macos"), not(target_os = "ios"), feature = "https" ))] fn openssl_env_init() { // Currently, libgit2 leverages OpenSSL for SSL support when cloning // repositories over HTTPS. This means that we're picking up an OpenSSL // dependency on non-Windows platforms (where it has its own HTTPS // subsystem). As a result, we need to link to OpenSSL. // // Now actually *linking* to OpenSSL isn't so hard. We just need to make // sure to use pkg-config to discover any relevant system dependencies for // differences between distributions like CentOS and Ubuntu. The actual // trickiness comes about when we start *distributing* the resulting // binaries. Currently Cargo is distributed in binary form as nightlies, // which means we're distributing a binary with OpenSSL linked in. // // For historical reasons, the Linux nightly builder is running a CentOS // distribution in order to have as much ABI compatibility with other // distributions as possible. Sadly, however, this compatibility does not // extend to OpenSSL. Currently OpenSSL has two major versions, 0.9 and 1.0, // which are incompatible (many ABI differences). The CentOS builder we // build on has version 1.0, as do most distributions today. Some still have // 0.9, however. This means that if we are to distribute the binaries built // by the CentOS machine, we would only be compatible with OpenSSL 1.0 and // we would fail to run (a dynamic linker error at runtime) on systems with // only 9.8 installed (hopefully). // // But wait, the plot thickens! Apparently CentOS has dubbed their OpenSSL // library as `libssl.so.10`, notably the `10` is included at the end. On // the other hand Ubuntu, for example, only distributes `libssl.so`. This // means that the binaries created at CentOS are hard-wired to probe for a // file called `libssl.so.10` at runtime (using the LD_LIBRARY_PATH), which // will not be found on ubuntu. The conclusion of this is that binaries // built on CentOS cannot be distributed to Ubuntu and run successfully. // // There are a number of sneaky things we could do, including, but not // limited to: // // 1. Create a shim program which runs "just before" cargo runs. 
The // responsibility of this shim program would be to locate `libssl.so`, // whatever it's called, on the current system, make sure there's a // symlink *somewhere* called `libssl.so.10`, and then set up // LD_LIBRARY_PATH and run the actual cargo. // // This approach definitely seems unconventional, and is borderline // overkill for this problem. It's also dubious if we can find a // libssl.so reliably on the target system. // // 2. Somehow re-work the CentOS installation so that the linked-against // library is called libssl.so instead of libssl.so.10 // // The problem with this approach is that systems with 0.9 installed will // start to silently fail, due to also having libraries called libssl.so // (probably symlinked under a more appropriate version). // // 3. Compile Cargo against both OpenSSL 1.0 *and* OpenSSL 0.9, and // distribute both. Also make sure that the linked-against name of the // library is `libssl.so`. At runtime we determine which version is // installed, and we then the appropriate binary. // // This approach clearly has drawbacks in terms of infrastructure and // feasibility. // // 4. Build a nightly of Cargo for each distribution we'd like to support. // You would then pick the appropriate Cargo nightly to install locally. // // So, with all this in mind, the decision was made to *statically* link // OpenSSL. This solves any problem of relying on a downstream OpenSSL // version being available. This does, however, open a can of worms related // to security issues. It's generally a good idea to dynamically link // OpenSSL as you'll get security updates over time without having to do // anything (the system administrator will update the local openssl // package). By statically linking, we're forfeiting this feature. // // The conclusion was made it is likely appropriate for the Cargo nightlies // to statically link OpenSSL, but highly encourage distributions and // packagers of Cargo to dynamically link OpenSSL. Packagers are targeting // one system and are distributing to only that system, so none of the // problems mentioned above would arise. // // In order to support this, a new package was made: openssl-static-sys. // This package currently performs a fairly simple task: // // 1. Run pkg-config to discover where openssl is installed. // 2. If openssl is installed in a nonstandard location, *and* static copies // of the libraries are available, copy them to $OUT_DIR. // // This library will bring in libssl.a and libcrypto.a into the local build, // allowing them to be picked up by this crate. This allows us to configure // our own buildbots to have pkg-config point to these local pre-built // copies of a static OpenSSL (with very few dependencies) while allowing // most other builds of Cargo to naturally dynamically link OpenSSL. // // So in summary, if you're with me so far, we've statically linked OpenSSL // to the Cargo binary (or any binary, for that matter) and we're ready to // distribute it to *all* linux distributions. Remember that our original // intent for openssl was for HTTPS support, which implies that we need some // for of CA certificate store to validate certificates. This is normally // installed in a standard system location. // // Unfortunately, as one might imagine, OpenSSL is configured for where this // standard location is at *build time*, but it often varies widely // per-system. 
Consequently, it was discovered that OpenSSL will respect the // SSL_CERT_FILE and SSL_CERT_DIR environment variables in order to assist // in discovering the location of this file (hurray!). // // So, finally getting to the point, this function solely exists to support // our static builds of OpenSSL by probing for the "standard system // location" of certificates and setting relevant environment variable to // point to them. // // Ah, and as a final note, this is only a problem on Linux, not on OS X. On // OS X the OpenSSL binaries are stable enough that we can just rely on // dynamic linkage (plus they have some weird modifications to OpenSSL which // means we wouldn't want to link statically). openssl_probe::init_ssl_cert_env_vars(); } #[cfg(any( windows, target_os = "macos", target_os = "ios", not(feature = "https") ))] fn openssl_env_init() {} unsafe fn opt_bytes<'a, T>(_anchor: &'a T, c: *const libc::c_char) -> Option<&'a [u8]> { if c.is_null() { None } else { Some(CStr::from_ptr(c).to_bytes()) } } fn opt_cstr(o: Option) -> Result, Error> { match o { Some(s) => s.into_c_string().map(Some), None => Ok(None), } } impl ObjectType { /// Convert an object type to its string representation. pub fn str(&self) -> &'static str { unsafe { let ptr = call!(raw::git_object_type2string(*self)) as *const _; let data = CStr::from_ptr(ptr).to_bytes(); str::from_utf8(data).unwrap() } } /// Determine if the given git_object_t is a valid loose object type. pub fn is_loose(&self) -> bool { unsafe { call!(raw::git_object_typeisloose(*self)) == 1 } } /// Convert a raw git_object_t to an ObjectType pub fn from_raw(raw: raw::git_object_t) -> Option { match raw { raw::GIT_OBJECT_ANY => Some(ObjectType::Any), raw::GIT_OBJECT_COMMIT => Some(ObjectType::Commit), raw::GIT_OBJECT_TREE => Some(ObjectType::Tree), raw::GIT_OBJECT_BLOB => Some(ObjectType::Blob), raw::GIT_OBJECT_TAG => Some(ObjectType::Tag), _ => None, } } /// Convert this kind into its raw representation pub fn raw(&self) -> raw::git_object_t { call::convert(self) } /// Convert a string object type representation to its object type. pub fn from_str(s: &str) -> Option { let raw = unsafe { call!(raw::git_object_string2type(CString::new(s).unwrap())) }; ObjectType::from_raw(raw) } } impl fmt::Display for ObjectType { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { self.str().fmt(f) } } impl ReferenceType { /// Convert an object type to its string representation. pub fn str(&self) -> &'static str { match self { ReferenceType::Direct => "direct", ReferenceType::Symbolic => "symbolic", } } /// Convert a raw git_reference_t to a ReferenceType. 
pub fn from_raw(raw: raw::git_reference_t) -> Option { match raw { raw::GIT_REFERENCE_DIRECT => Some(ReferenceType::Direct), raw::GIT_REFERENCE_SYMBOLIC => Some(ReferenceType::Symbolic), _ => None, } } } impl fmt::Display for ReferenceType { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { self.str().fmt(f) } } impl ConfigLevel { /// Converts a raw configuration level to a ConfigLevel pub fn from_raw(raw: raw::git_config_level_t) -> ConfigLevel { match raw { raw::GIT_CONFIG_LEVEL_PROGRAMDATA => ConfigLevel::ProgramData, raw::GIT_CONFIG_LEVEL_SYSTEM => ConfigLevel::System, raw::GIT_CONFIG_LEVEL_XDG => ConfigLevel::XDG, raw::GIT_CONFIG_LEVEL_GLOBAL => ConfigLevel::Global, raw::GIT_CONFIG_LEVEL_LOCAL => ConfigLevel::Local, raw::GIT_CONFIG_LEVEL_APP => ConfigLevel::App, raw::GIT_CONFIG_HIGHEST_LEVEL => ConfigLevel::Highest, n => panic!("unknown config level: {}", n), } } } impl SubmoduleIgnore { /// Converts a [`raw::git_submodule_ignore_t`] to a [`SubmoduleIgnore`] pub fn from_raw(raw: raw::git_submodule_ignore_t) -> Self { match raw { raw::GIT_SUBMODULE_IGNORE_UNSPECIFIED => SubmoduleIgnore::Unspecified, raw::GIT_SUBMODULE_IGNORE_NONE => SubmoduleIgnore::None, raw::GIT_SUBMODULE_IGNORE_UNTRACKED => SubmoduleIgnore::Untracked, raw::GIT_SUBMODULE_IGNORE_DIRTY => SubmoduleIgnore::Dirty, raw::GIT_SUBMODULE_IGNORE_ALL => SubmoduleIgnore::All, n => panic!("unknown submodule ignore rule: {}", n), } } } impl SubmoduleUpdate { /// Converts a [`raw::git_submodule_update_t`] to a [`SubmoduleUpdate`] pub fn from_raw(raw: raw::git_submodule_update_t) -> Self { match raw { raw::GIT_SUBMODULE_UPDATE_CHECKOUT => SubmoduleUpdate::Checkout, raw::GIT_SUBMODULE_UPDATE_REBASE => SubmoduleUpdate::Rebase, raw::GIT_SUBMODULE_UPDATE_MERGE => SubmoduleUpdate::Merge, raw::GIT_SUBMODULE_UPDATE_NONE => SubmoduleUpdate::None, raw::GIT_SUBMODULE_UPDATE_DEFAULT => SubmoduleUpdate::Default, n => panic!("unknown submodule update strategy: {}", n), } } } bitflags! { /// Status flags for a single file /// /// A combination of these values will be returned to indicate the status of /// a file. Status compares the working directory, the index, and the /// current HEAD of the repository. The `STATUS_INDEX_*` set of flags /// represents the status of file in the index relative to the HEAD, and the /// `STATUS_WT_*` set of flags represent the status of the file in the /// working directory relative to the index. 
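///
/// A short sketch (the path is a placeholder, options left as defaults) of
/// checking these flags while iterating over `Repository::statuses`:
///
/// ```no_run
/// use git2::Repository;
///
/// let repo = Repository::open("/path/to/a/repo").expect("failed to open");
/// let statuses = repo.statuses(None).expect("failed to gather statuses");
/// for entry in statuses.iter() {
///     let status = entry.status();
///     // The `is_*` helpers below are generated by the `is_bit_set!` macro.
///     if status.is_wt_modified() || status.is_index_modified() {
///         println!("modified: {:?}", entry.path());
///     }
/// }
/// ```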
pub struct Status: u32 { #[allow(missing_docs)] const CURRENT = raw::GIT_STATUS_CURRENT as u32; #[allow(missing_docs)] const INDEX_NEW = raw::GIT_STATUS_INDEX_NEW as u32; #[allow(missing_docs)] const INDEX_MODIFIED = raw::GIT_STATUS_INDEX_MODIFIED as u32; #[allow(missing_docs)] const INDEX_DELETED = raw::GIT_STATUS_INDEX_DELETED as u32; #[allow(missing_docs)] const INDEX_RENAMED = raw::GIT_STATUS_INDEX_RENAMED as u32; #[allow(missing_docs)] const INDEX_TYPECHANGE = raw::GIT_STATUS_INDEX_TYPECHANGE as u32; #[allow(missing_docs)] const WT_NEW = raw::GIT_STATUS_WT_NEW as u32; #[allow(missing_docs)] const WT_MODIFIED = raw::GIT_STATUS_WT_MODIFIED as u32; #[allow(missing_docs)] const WT_DELETED = raw::GIT_STATUS_WT_DELETED as u32; #[allow(missing_docs)] const WT_TYPECHANGE = raw::GIT_STATUS_WT_TYPECHANGE as u32; #[allow(missing_docs)] const WT_RENAMED = raw::GIT_STATUS_WT_RENAMED as u32; #[allow(missing_docs)] const IGNORED = raw::GIT_STATUS_IGNORED as u32; #[allow(missing_docs)] const CONFLICTED = raw::GIT_STATUS_CONFLICTED as u32; } } impl Status { is_bit_set!(is_index_new, Status::INDEX_NEW); is_bit_set!(is_index_modified, Status::INDEX_MODIFIED); is_bit_set!(is_index_deleted, Status::INDEX_DELETED); is_bit_set!(is_index_renamed, Status::INDEX_RENAMED); is_bit_set!(is_index_typechange, Status::INDEX_TYPECHANGE); is_bit_set!(is_wt_new, Status::WT_NEW); is_bit_set!(is_wt_modified, Status::WT_MODIFIED); is_bit_set!(is_wt_deleted, Status::WT_DELETED); is_bit_set!(is_wt_typechange, Status::WT_TYPECHANGE); is_bit_set!(is_wt_renamed, Status::WT_RENAMED); is_bit_set!(is_ignored, Status::IGNORED); is_bit_set!(is_conflicted, Status::CONFLICTED); } bitflags! { /// Mode options for RepositoryInitOptions pub struct RepositoryInitMode: u32 { /// Use permissions configured by umask - the default const SHARED_UMASK = raw::GIT_REPOSITORY_INIT_SHARED_UMASK as u32; /// Use `--shared=group` behavior, chmod'ing the new repo to be /// group writable and \"g+sx\" for sticky group assignment const SHARED_GROUP = raw::GIT_REPOSITORY_INIT_SHARED_GROUP as u32; /// Use `--shared=all` behavior, adding world readability. const SHARED_ALL = raw::GIT_REPOSITORY_INIT_SHARED_ALL as u32; } } impl RepositoryInitMode { is_bit_set!(is_shared_umask, RepositoryInitMode::SHARED_UMASK); is_bit_set!(is_shared_group, RepositoryInitMode::SHARED_GROUP); is_bit_set!(is_shared_all, RepositoryInitMode::SHARED_ALL); } /// What type of change is described by a `DiffDelta`? #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub enum Delta { /// No changes Unmodified, /// Entry does not exist in old version Added, /// Entry does not exist in new version Deleted, /// Entry content changed between old and new Modified, /// Entry was renamed between old and new Renamed, /// Entry was copied from another old entry Copied, /// Entry is ignored item in workdir Ignored, /// Entry is untracked item in workdir Untracked, /// Type of entry changed between old and new Typechange, /// Entry is unreadable Unreadable, /// Entry in the index is conflicted Conflicted, } /// Valid modes for index and tree entries. 
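///
/// These variants correspond to the octal mode bits git records in tree
/// entries, mirroring this crate's own `convert_filemode` test:
///
/// ```
/// use git2::FileMode;
///
/// // Conversions produce the familiar git mode values.
/// assert_eq!(u32::from(FileMode::Blob), 0o100644);
/// assert_eq!(u32::from(FileMode::BlobExecutable), 0o100755);
/// ```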
#[derive(Copy, Clone, Debug, PartialEq, Eq)] pub enum FileMode { /// Unreadable Unreadable, /// Tree Tree, /// Blob Blob, /// Blob executable BlobExecutable, /// Link Link, /// Commit Commit, } impl From for i32 { fn from(mode: FileMode) -> i32 { match mode { FileMode::Unreadable => raw::GIT_FILEMODE_UNREADABLE as i32, FileMode::Tree => raw::GIT_FILEMODE_TREE as i32, FileMode::Blob => raw::GIT_FILEMODE_BLOB as i32, FileMode::BlobExecutable => raw::GIT_FILEMODE_BLOB_EXECUTABLE as i32, FileMode::Link => raw::GIT_FILEMODE_LINK as i32, FileMode::Commit => raw::GIT_FILEMODE_COMMIT as i32, } } } impl From for u32 { fn from(mode: FileMode) -> u32 { match mode { FileMode::Unreadable => raw::GIT_FILEMODE_UNREADABLE as u32, FileMode::Tree => raw::GIT_FILEMODE_TREE as u32, FileMode::Blob => raw::GIT_FILEMODE_BLOB as u32, FileMode::BlobExecutable => raw::GIT_FILEMODE_BLOB_EXECUTABLE as u32, FileMode::Link => raw::GIT_FILEMODE_LINK as u32, FileMode::Commit => raw::GIT_FILEMODE_COMMIT as u32, } } } bitflags! { /// Return codes for submodule status. /// /// A combination of these flags will be returned to describe the status of a /// submodule. Depending on the "ignore" property of the submodule, some of /// the flags may never be returned because they indicate changes that are /// supposed to be ignored. /// /// Submodule info is contained in 4 places: the HEAD tree, the index, config /// files (both .git/config and .gitmodules), and the working directory. Any /// or all of those places might be missing information about the submodule /// depending on what state the repo is in. We consider all four places to /// build the combination of status flags. /// /// There are four values that are not really status, but give basic info /// about what sources of submodule data are available. These will be /// returned even if ignore is set to "ALL". /// /// * IN_HEAD - superproject head contains submodule /// * IN_INDEX - superproject index contains submodule /// * IN_CONFIG - superproject gitmodules has submodule /// * IN_WD - superproject workdir has submodule /// /// The following values will be returned so long as ignore is not "ALL". /// /// * INDEX_ADDED - in index, not in head /// * INDEX_DELETED - in head, not in index /// * INDEX_MODIFIED - index and head don't match /// * WD_UNINITIALIZED - workdir contains empty directory /// * WD_ADDED - in workdir, not index /// * WD_DELETED - in index, not workdir /// * WD_MODIFIED - index and workdir head don't match /// /// The following can only be returned if ignore is "NONE" or "UNTRACKED". /// /// * WD_INDEX_MODIFIED - submodule workdir index is dirty /// * WD_WD_MODIFIED - submodule workdir has modified files /// /// Lastly, the following will only be returned for ignore "NONE". 
/// /// * WD_UNTRACKED - wd contains untracked files pub struct SubmoduleStatus: u32 { #[allow(missing_docs)] const IN_HEAD = raw::GIT_SUBMODULE_STATUS_IN_HEAD as u32; #[allow(missing_docs)] const IN_INDEX = raw::GIT_SUBMODULE_STATUS_IN_INDEX as u32; #[allow(missing_docs)] const IN_CONFIG = raw::GIT_SUBMODULE_STATUS_IN_CONFIG as u32; #[allow(missing_docs)] const IN_WD = raw::GIT_SUBMODULE_STATUS_IN_WD as u32; #[allow(missing_docs)] const INDEX_ADDED = raw::GIT_SUBMODULE_STATUS_INDEX_ADDED as u32; #[allow(missing_docs)] const INDEX_DELETED = raw::GIT_SUBMODULE_STATUS_INDEX_DELETED as u32; #[allow(missing_docs)] const INDEX_MODIFIED = raw::GIT_SUBMODULE_STATUS_INDEX_MODIFIED as u32; #[allow(missing_docs)] const WD_UNINITIALIZED = raw::GIT_SUBMODULE_STATUS_WD_UNINITIALIZED as u32; #[allow(missing_docs)] const WD_ADDED = raw::GIT_SUBMODULE_STATUS_WD_ADDED as u32; #[allow(missing_docs)] const WD_DELETED = raw::GIT_SUBMODULE_STATUS_WD_DELETED as u32; #[allow(missing_docs)] const WD_MODIFIED = raw::GIT_SUBMODULE_STATUS_WD_MODIFIED as u32; #[allow(missing_docs)] const WD_INDEX_MODIFIED = raw::GIT_SUBMODULE_STATUS_WD_INDEX_MODIFIED as u32; #[allow(missing_docs)] const WD_WD_MODIFIED = raw::GIT_SUBMODULE_STATUS_WD_WD_MODIFIED as u32; #[allow(missing_docs)] const WD_UNTRACKED = raw::GIT_SUBMODULE_STATUS_WD_UNTRACKED as u32; } } impl SubmoduleStatus { is_bit_set!(is_in_head, SubmoduleStatus::IN_HEAD); is_bit_set!(is_in_index, SubmoduleStatus::IN_INDEX); is_bit_set!(is_in_config, SubmoduleStatus::IN_CONFIG); is_bit_set!(is_in_wd, SubmoduleStatus::IN_WD); is_bit_set!(is_index_added, SubmoduleStatus::INDEX_ADDED); is_bit_set!(is_index_deleted, SubmoduleStatus::INDEX_DELETED); is_bit_set!(is_index_modified, SubmoduleStatus::INDEX_MODIFIED); is_bit_set!(is_wd_uninitialized, SubmoduleStatus::WD_UNINITIALIZED); is_bit_set!(is_wd_added, SubmoduleStatus::WD_ADDED); is_bit_set!(is_wd_deleted, SubmoduleStatus::WD_DELETED); is_bit_set!(is_wd_modified, SubmoduleStatus::WD_MODIFIED); is_bit_set!(is_wd_wd_modified, SubmoduleStatus::WD_WD_MODIFIED); is_bit_set!(is_wd_untracked, SubmoduleStatus::WD_UNTRACKED); } /// Submodule ignore values /// /// These values represent settings for the `submodule.$name.ignore` /// configuration value which says how deeply to look at the working /// directory when getting the submodule status. #[derive(Debug)] pub enum SubmoduleIgnore { /// Use the submodule's configuration Unspecified, /// Any change or untracked file is considered dirty None, /// Only dirty if tracked files have changed Untracked, /// Only dirty if HEAD has moved Dirty, /// Never dirty All, } /// Submodule update values /// /// These values represent settings for the `submodule.$name.update` /// configuration value which says how to handle `git submodule update` /// for this submodule. The value is usually set in the ".gitmodules" /// file and copied to ".git/config" when the submodule is initialized. #[derive(Debug)] pub enum SubmoduleUpdate { /// The default; when a submodule is updated, checkout the new detached /// HEAD to the submodule directory. Checkout, /// Update by rebasing the current checked out branch onto the commit from /// the superproject. Rebase, /// Update by merging the commit in the superproject into the current /// checkout out branch of the submodule. Merge, /// Do not update this submodule even when the commit in the superproject /// is updated. None, /// Not used except as static initializer when we don't want any particular /// update rule to be specified. Default, } bitflags! { /// ... 
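///
/// A hypothetical sketch (the pattern and path are placeholders, and the exact
/// shape of the `Pathspec::matches_path` call is assumed) of passing these
/// flags:
///
/// ```no_run
/// use git2::{Pathspec, PathspecFlags};
/// use std::path::Path;
///
/// let ps = Pathspec::new(vec!["*.rs"]).expect("invalid pathspec");
/// // Ignore case differences between the pattern and the candidate path.
/// let matched = ps.matches_path(Path::new("src/lib.rs"), PathspecFlags::IGNORE_CASE);
/// println!("matched: {}", matched);
/// ```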
pub struct PathspecFlags: u32 { /// Use the default pathspec matching configuration. const DEFAULT = raw::GIT_PATHSPEC_DEFAULT as u32; /// Force matching to ignore case, otherwise matching will use native /// case sensitivity fo the platform filesystem. const IGNORE_CASE = raw::GIT_PATHSPEC_IGNORE_CASE as u32; /// Force case sensitive matches, otherwise match will use the native /// case sensitivity of the platform filesystem. const USE_CASE = raw::GIT_PATHSPEC_USE_CASE as u32; /// Disable glob patterns and just use simple string comparison for /// matching. const NO_GLOB = raw::GIT_PATHSPEC_NO_GLOB as u32; /// Means that match functions return the error code `NotFound` if no /// matches are found. By default no matches is a success. const NO_MATCH_ERROR = raw::GIT_PATHSPEC_NO_MATCH_ERROR as u32; /// Means that the list returned should track which patterns matched /// which files so that at the end of the match we can identify patterns /// that did not match any files. const FIND_FAILURES = raw::GIT_PATHSPEC_FIND_FAILURES as u32; /// Means that the list returned does not need to keep the actual /// matching filenames. Use this to just test if there were any matches /// at all or in combination with `PATHSPEC_FAILURES` to validate a /// pathspec. const FAILURES_ONLY = raw::GIT_PATHSPEC_FAILURES_ONLY as u32; } } impl PathspecFlags { is_bit_set!(is_default, PathspecFlags::DEFAULT); is_bit_set!(is_ignore_case, PathspecFlags::IGNORE_CASE); is_bit_set!(is_use_case, PathspecFlags::USE_CASE); is_bit_set!(is_no_glob, PathspecFlags::NO_GLOB); is_bit_set!(is_no_match_error, PathspecFlags::NO_MATCH_ERROR); is_bit_set!(is_find_failures, PathspecFlags::FIND_FAILURES); is_bit_set!(is_failures_only, PathspecFlags::FAILURES_ONLY); } impl Default for PathspecFlags { fn default() -> Self { PathspecFlags::DEFAULT } } bitflags! { /// Types of notifications emitted from checkouts. pub struct CheckoutNotificationType: u32 { /// Notification about a conflict. const CONFLICT = raw::GIT_CHECKOUT_NOTIFY_CONFLICT as u32; /// Notification about a dirty file. const DIRTY = raw::GIT_CHECKOUT_NOTIFY_DIRTY as u32; /// Notification about an updated file. const UPDATED = raw::GIT_CHECKOUT_NOTIFY_UPDATED as u32; /// Notification about an untracked file. const UNTRACKED = raw::GIT_CHECKOUT_NOTIFY_UNTRACKED as u32; /// Notification about an ignored file. const IGNORED = raw::GIT_CHECKOUT_NOTIFY_IGNORED as u32; } } impl CheckoutNotificationType { is_bit_set!(is_conflict, CheckoutNotificationType::CONFLICT); is_bit_set!(is_dirty, CheckoutNotificationType::DIRTY); is_bit_set!(is_updated, CheckoutNotificationType::UPDATED); is_bit_set!(is_untracked, CheckoutNotificationType::UNTRACKED); is_bit_set!(is_ignored, CheckoutNotificationType::IGNORED); } /// Possible output formats for diff data #[derive(Copy, Clone)] pub enum DiffFormat { /// full git diff Patch, /// just the headers of the patch PatchHeader, /// like git diff --raw Raw, /// like git diff --name-only NameOnly, /// like git diff --name-status NameStatus, /// git diff as used by git patch-id PatchId, } bitflags! 
{ /// Formatting options for diff stats pub struct DiffStatsFormat: raw::git_diff_stats_format_t { /// Don't generate any stats const NONE = raw::GIT_DIFF_STATS_NONE; /// Equivalent of `--stat` in git const FULL = raw::GIT_DIFF_STATS_FULL; /// Equivalent of `--shortstat` in git const SHORT = raw::GIT_DIFF_STATS_SHORT; /// Equivalent of `--numstat` in git const NUMBER = raw::GIT_DIFF_STATS_NUMBER; /// Extended header information such as creations, renames and mode /// changes, equivalent of `--summary` in git const INCLUDE_SUMMARY = raw::GIT_DIFF_STATS_INCLUDE_SUMMARY; } } impl DiffStatsFormat { is_bit_set!(is_none, DiffStatsFormat::NONE); is_bit_set!(is_full, DiffStatsFormat::FULL); is_bit_set!(is_short, DiffStatsFormat::SHORT); is_bit_set!(is_number, DiffStatsFormat::NUMBER); is_bit_set!(is_include_summary, DiffStatsFormat::INCLUDE_SUMMARY); } /// Automatic tag following options. pub enum AutotagOption { /// Use the setting from the remote's configuration Unspecified, /// Ask the server for tags pointing to objects we're already downloading Auto, /// Don't ask for any tags beyond the refspecs None, /// Ask for all the tags All, } /// Configuration for how pruning is done on a fetch pub enum FetchPrune { /// Use the setting from the configuration Unspecified, /// Force pruning on On, /// Force pruning off Off, } #[allow(missing_docs)] #[derive(Debug)] pub enum StashApplyProgress { /// None None, /// Loading the stashed data from the object database LoadingStash, /// The stored index is being analyzed AnalyzeIndex, /// The modified files are being analyzed AnalyzeModified, /// The untracked and ignored files are being analyzed AnalyzeUntracked, /// The untracked files are being written to disk CheckoutUntracked, /// The modified files are being written to disk CheckoutModified, /// The stash was applied successfully Done, } bitflags! { #[allow(missing_docs)] pub struct StashApplyFlags: u32 { #[allow(missing_docs)] const DEFAULT = raw::GIT_STASH_APPLY_DEFAULT as u32; /// Try to reinstate not only the working tree's changes, /// but also the index's changes. const REINSTATE_INDEX = raw::GIT_STASH_APPLY_REINSTATE_INDEX as u32; } } impl StashApplyFlags { is_bit_set!(is_default, StashApplyFlags::DEFAULT); is_bit_set!(is_reinstate_index, StashApplyFlags::REINSTATE_INDEX); } impl Default for StashApplyFlags { fn default() -> Self { StashApplyFlags::DEFAULT } } bitflags! { #[allow(missing_docs)] pub struct StashFlags: u32 { #[allow(missing_docs)] const DEFAULT = raw::GIT_STASH_DEFAULT as u32; /// All changes already added to the index are left intact in /// the working directory const KEEP_INDEX = raw::GIT_STASH_KEEP_INDEX as u32; /// All untracked files are also stashed and then cleaned up /// from the working directory const INCLUDE_UNTRACKED = raw::GIT_STASH_INCLUDE_UNTRACKED as u32; /// All ignored files are also stashed and then cleaned up from /// the working directory const INCLUDE_IGNORED = raw::GIT_STASH_INCLUDE_IGNORED as u32; } } impl StashFlags { is_bit_set!(is_default, StashFlags::DEFAULT); is_bit_set!(is_keep_index, StashFlags::KEEP_INDEX); is_bit_set!(is_include_untracked, StashFlags::INCLUDE_UNTRACKED); is_bit_set!(is_include_ignored, StashFlags::INCLUDE_IGNORED); } impl Default for StashFlags { fn default() -> Self { StashFlags::DEFAULT } } bitflags! { #[allow(missing_docs)] pub struct AttrCheckFlags: u32 { /// Check the working directory, then the index. const FILE_THEN_INDEX = raw::GIT_ATTR_CHECK_FILE_THEN_INDEX as u32; /// Check the index, then the working directory. 
const INDEX_THEN_FILE = raw::GIT_ATTR_CHECK_INDEX_THEN_FILE as u32; /// Check the index only. const INDEX_ONLY = raw::GIT_ATTR_CHECK_INDEX_ONLY as u32; /// Do not use the system gitattributes file. const NO_SYSTEM = raw::GIT_ATTR_CHECK_NO_SYSTEM as u32; } } impl Default for AttrCheckFlags { fn default() -> Self { AttrCheckFlags::FILE_THEN_INDEX } } bitflags! { #[allow(missing_docs)] pub struct DiffFlags: u32 { /// File(s) treated as binary data. const BINARY = raw::GIT_DIFF_FLAG_BINARY as u32; /// File(s) treated as text data. const NOT_BINARY = raw::GIT_DIFF_FLAG_NOT_BINARY as u32; /// `id` value is known correct. const VALID_ID = raw::GIT_DIFF_FLAG_VALID_ID as u32; /// File exists at this side of the delta. const EXISTS = raw::GIT_DIFF_FLAG_EXISTS as u32; } } impl DiffFlags { is_bit_set!(is_binary, DiffFlags::BINARY); is_bit_set!(is_not_binary, DiffFlags::NOT_BINARY); is_bit_set!(has_valid_id, DiffFlags::VALID_ID); is_bit_set!(exists, DiffFlags::EXISTS); } bitflags! { /// Options for [`Reference::normalize_name`]. pub struct ReferenceFormat: u32 { /// No particular normalization. const NORMAL = raw::GIT_REFERENCE_FORMAT_NORMAL as u32; /// Constrol whether one-level refname are accepted (i.e., refnames that /// do not contain multiple `/`-separated components). Those are /// expected to be written only using uppercase letters and underscore /// (e.g. `HEAD`, `FETCH_HEAD`). const ALLOW_ONELEVEL = raw::GIT_REFERENCE_FORMAT_ALLOW_ONELEVEL as u32; /// Interpret the provided name as a reference pattern for a refspec (as /// used with remote repositories). If this option is enabled, the name /// is allowed to contain a single `*` in place of a full pathname /// components (e.g., `foo/*/bar` but not `foo/bar*`). const REFSPEC_PATTERN = raw::GIT_REFERENCE_FORMAT_REFSPEC_PATTERN as u32; /// Interpret the name as part of a refspec in shorthand form so the /// `ALLOW_ONELEVEL` naming rules aren't enforced and `main` becomes a /// valid name. const REFSPEC_SHORTHAND = raw::GIT_REFERENCE_FORMAT_REFSPEC_SHORTHAND as u32; } } impl ReferenceFormat { is_bit_set!(is_allow_onelevel, ReferenceFormat::ALLOW_ONELEVEL); is_bit_set!(is_refspec_pattern, ReferenceFormat::REFSPEC_PATTERN); is_bit_set!(is_refspec_shorthand, ReferenceFormat::REFSPEC_SHORTHAND); } impl Default for ReferenceFormat { fn default() -> Self { ReferenceFormat::NORMAL } } #[cfg(test)] mod tests { use super::{FileMode, ObjectType}; #[test] fn convert() { assert_eq!(ObjectType::Blob.str(), "blob"); assert_eq!(ObjectType::from_str("blob"), Some(ObjectType::Blob)); assert!(ObjectType::Blob.is_loose()); } #[test] fn convert_filemode() { assert_eq!(i32::from(FileMode::Blob), 0o100644); assert_eq!(i32::from(FileMode::BlobExecutable), 0o100755); assert_eq!(u32::from(FileMode::Blob), 0o100644); assert_eq!(u32::from(FileMode::BlobExecutable), 0o100755); } } vendor/git2/tests/0000775000175000017500000000000014160055207014656 5ustar mwhudsonmwhudsonvendor/git2/tests/global_state.rs0000664000175000017500000000247214160055207017671 0ustar mwhudsonmwhudson//! Test for some global state set up by libgit2's `git_libgit2_init` function //! that need to be synchronized within a single process. use git2::opts; use git2::{ConfigLevel, IntoCString}; // Test for mutating configuration file search path which is set during // initialization in libgit2's `git_sysdir_global_init` function. 
#[test] fn search_path() -> Result<(), Box> { use std::env::join_paths; let path = "fake_path"; let original = unsafe { opts::get_search_path(ConfigLevel::Global) }; assert_ne!(original, Ok(path.into_c_string()?)); // Set unsafe { opts::set_search_path(ConfigLevel::Global, &path)?; } assert_eq!( unsafe { opts::get_search_path(ConfigLevel::Global) }, Ok(path.into_c_string()?) ); // Append let paths = join_paths(["$PATH", path].iter())?; let expected_paths = join_paths([path, path].iter())?.into_c_string()?; unsafe { opts::set_search_path(ConfigLevel::Global, paths)?; } assert_eq!( unsafe { opts::get_search_path(ConfigLevel::Global) }, Ok(expected_paths) ); // Reset unsafe { opts::reset_search_path(ConfigLevel::Global)?; } assert_eq!( unsafe { opts::get_search_path(ConfigLevel::Global) }, original ); Ok(()) } vendor/git2/examples/0000775000175000017500000000000014160055207015332 5ustar mwhudsonmwhudsonvendor/git2/examples/diff.rs0000664000175000017500000002677614160055207016632 0ustar mwhudsonmwhudson/* * libgit2 "diff" example - shows how to use the diff API * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{Blob, Diff, DiffOptions, Error, Object, ObjectType, Oid, Repository}; use git2::{DiffDelta, DiffFindOptions, DiffFormat, DiffHunk, DiffLine}; use std::str; use structopt::StructOpt; #[derive(StructOpt)] #[allow(non_snake_case)] struct Args { #[structopt(name = "from_oid")] arg_from_oid: Option, #[structopt(name = "to_oid")] arg_to_oid: Option, #[structopt(name = "blobs", long)] /// treat from_oid and to_oid as blob ids flag_blobs: bool, #[structopt(name = "patch", short, long)] /// show output in patch format flag_patch: bool, #[structopt(name = "cached", long)] /// use staged changes as diff flag_cached: bool, #[structopt(name = "nocached", long)] /// do not use staged changes flag_nocached: bool, #[structopt(name = "name-only", long)] /// show only names of changed files flag_name_only: bool, #[structopt(name = "name-status", long)] /// show only names and status changes flag_name_status: bool, #[structopt(name = "raw", long)] /// generate the raw format flag_raw: bool, #[structopt(name = "format", long)] /// specify format for stat summary flag_format: Option, #[structopt(name = "color", long)] /// use color output flag_color: bool, #[structopt(name = "no-color", long)] /// never use color output flag_no_color: bool, #[structopt(short = "R")] /// swap two inputs flag_R: bool, #[structopt(name = "text", short = "a", long)] /// treat all files as text flag_text: bool, #[structopt(name = "ignore-space-at-eol", long)] /// ignore changes in whitespace at EOL flag_ignore_space_at_eol: bool, #[structopt(name = "ignore-space-change", short = "b", long)] /// ignore changes in amount of whitespace flag_ignore_space_change: bool, #[structopt(name = "ignore-all-space", short = "w", long)] /// ignore whitespace when comparing lines flag_ignore_all_space: bool, #[structopt(name = "ignored", long)] /// show untracked files flag_ignored: bool, #[structopt(name = "untracked", long)] /// generate diff using the patience algorithm flag_untracked: bool, #[structopt(name = "patience", long)] /// show ignored files as well flag_patience: bool, 
#[structopt(name = "minimal", long)] /// spend extra time to find smallest diff flag_minimal: bool, #[structopt(name = "stat", long)] /// generate a diffstat flag_stat: bool, #[structopt(name = "numstat", long)] /// similar to --stat, but more machine friendly flag_numstat: bool, #[structopt(name = "shortstat", long)] /// only output last line of --stat flag_shortstat: bool, #[structopt(name = "summary", long)] /// output condensed summary of header info flag_summary: bool, #[structopt(name = "find-renames", short = "M", long)] /// set threshold for findind renames (default 50) flag_find_renames: Option, #[structopt(name = "find-copies", short = "C", long)] /// set threshold for finding copies (default 50) flag_find_copies: Option, #[structopt(name = "find-copies-harder", long)] /// inspect unmodified files for sources of copies flag_find_copies_harder: bool, #[structopt(name = "break_rewrites", short = "B", long)] /// break complete rewrite changes into pairs flag_break_rewrites: bool, #[structopt(name = "unified", short = "U", long)] /// lints of context to show flag_unified: Option, #[structopt(name = "inter-hunk-context", long)] /// maximum lines of change between hunks flag_inter_hunk_context: Option, #[structopt(name = "abbrev", long)] /// length to abbreviate commits to flag_abbrev: Option, #[structopt(name = "src-prefix", long)] /// show given source prefix instead of 'a/' flag_src_prefix: Option, #[structopt(name = "dst-prefix", long)] /// show given destinction prefix instead of 'b/' flag_dst_prefix: Option, #[structopt(name = "path", long = "git-dir")] /// path to git repository to use flag_git_dir: Option, } const RESET: &str = "\u{1b}[m"; const BOLD: &str = "\u{1b}[1m"; const RED: &str = "\u{1b}[31m"; const GREEN: &str = "\u{1b}[32m"; const CYAN: &str = "\u{1b}[36m"; #[derive(PartialEq, Eq, Copy, Clone)] enum Cache { Normal, Only, None, } fn line_color(line: &DiffLine) -> Option<&'static str> { match line.origin() { '+' => Some(GREEN), '-' => Some(RED), '>' => Some(GREEN), '<' => Some(RED), 'F' => Some(BOLD), 'H' => Some(CYAN), _ => None, } } fn print_diff_line( _delta: DiffDelta, _hunk: Option, line: DiffLine, args: &Args, ) -> bool { if args.color() { print!("{}", RESET); if let Some(color) = line_color(&line) { print!("{}", color); } } match line.origin() { '+' | '-' | ' ' => print!("{}", line.origin()), _ => {} } print!("{}", str::from_utf8(line.content()).unwrap()); true } fn run(args: &Args) -> Result<(), Error> { let path = args.flag_git_dir.as_ref().map(|s| &s[..]).unwrap_or("."); let repo = Repository::open(path)?; // Prepare our diff options based on the arguments given let mut opts = DiffOptions::new(); opts.reverse(args.flag_R) .force_text(args.flag_text) .ignore_whitespace_eol(args.flag_ignore_space_at_eol) .ignore_whitespace_change(args.flag_ignore_space_change) .ignore_whitespace(args.flag_ignore_all_space) .include_ignored(args.flag_ignored) .include_untracked(args.flag_untracked) .patience(args.flag_patience) .minimal(args.flag_minimal); if let Some(amt) = args.flag_unified { opts.context_lines(amt); } if let Some(amt) = args.flag_inter_hunk_context { opts.interhunk_lines(amt); } if let Some(amt) = args.flag_abbrev { opts.id_abbrev(amt); } if let Some(ref s) = args.flag_src_prefix { opts.old_prefix(&s); } if let Some(ref s) = args.flag_dst_prefix { opts.new_prefix(&s); } if let Some("diff-index") = args.flag_format.as_ref().map(|s| &s[..]) { opts.id_abbrev(40); } if args.flag_blobs { let b1 = resolve_blob(&repo, args.arg_from_oid.as_ref())?; let b2 = 
resolve_blob(&repo, args.arg_to_oid.as_ref())?; repo.diff_blobs( b1.as_ref(), None, b2.as_ref(), None, Some(&mut opts), None, None, None, Some(&mut |d, h, l| print_diff_line(d, h, l, args)), )?; if args.color() { print!("{}", RESET); } return Ok(()); } // Prepare the diff to inspect let t1 = tree_to_treeish(&repo, args.arg_from_oid.as_ref())?; let t2 = tree_to_treeish(&repo, args.arg_to_oid.as_ref())?; let head = tree_to_treeish(&repo, Some(&"HEAD".to_string()))?.unwrap(); let mut diff = match (t1, t2, args.cache()) { (Some(t1), Some(t2), _) => { repo.diff_tree_to_tree(t1.as_tree(), t2.as_tree(), Some(&mut opts))? } (t1, None, Cache::None) => { let t1 = t1.unwrap_or(head); repo.diff_tree_to_workdir(t1.as_tree(), Some(&mut opts))? } (t1, None, Cache::Only) => { let t1 = t1.unwrap_or(head); repo.diff_tree_to_index(t1.as_tree(), None, Some(&mut opts))? } (Some(t1), None, _) => { repo.diff_tree_to_workdir_with_index(t1.as_tree(), Some(&mut opts))? } (None, None, _) => repo.diff_index_to_workdir(None, Some(&mut opts))?, (None, Some(_), _) => unreachable!(), }; // Apply rename and copy detection if requested if args.flag_break_rewrites || args.flag_find_copies_harder || args.flag_find_renames.is_some() || args.flag_find_copies.is_some() { let mut opts = DiffFindOptions::new(); if let Some(t) = args.flag_find_renames { opts.rename_threshold(t); opts.renames(true); } if let Some(t) = args.flag_find_copies { opts.copy_threshold(t); opts.copies(true); } opts.copies_from_unmodified(args.flag_find_copies_harder) .rewrites(args.flag_break_rewrites); diff.find_similar(Some(&mut opts))?; } // Generate simple output let stats = args.flag_stat | args.flag_numstat | args.flag_shortstat | args.flag_summary; if stats { print_stats(&diff, args)?; } if args.flag_patch || !stats { diff.print(args.diff_format(), |d, h, l| print_diff_line(d, h, l, args))?; if args.color() { print!("{}", RESET); } } Ok(()) } fn print_stats(diff: &Diff, args: &Args) -> Result<(), Error> { let stats = diff.stats()?; let mut format = git2::DiffStatsFormat::NONE; if args.flag_stat { format |= git2::DiffStatsFormat::FULL; } if args.flag_shortstat { format |= git2::DiffStatsFormat::SHORT; } if args.flag_numstat { format |= git2::DiffStatsFormat::NUMBER; } if args.flag_summary { format |= git2::DiffStatsFormat::INCLUDE_SUMMARY; } let buf = stats.to_buf(format, 80)?; print!("{}", str::from_utf8(&*buf).unwrap()); Ok(()) } fn tree_to_treeish<'a>( repo: &'a Repository, arg: Option<&String>, ) -> Result>, Error> { let arg = match arg { Some(s) => s, None => return Ok(None), }; let obj = repo.revparse_single(arg)?; let tree = obj.peel(ObjectType::Tree)?; Ok(Some(tree)) } fn resolve_blob<'a>(repo: &'a Repository, arg: Option<&String>) -> Result>, Error> { let arg = match arg { Some(s) => Oid::from_str(s)?, None => return Ok(None), }; repo.find_blob(arg).map(|b| Some(b)) } impl Args { fn cache(&self) -> Cache { if self.flag_cached { Cache::Only } else if self.flag_nocached { Cache::None } else { Cache::Normal } } fn color(&self) -> bool { self.flag_color && !self.flag_no_color } fn diff_format(&self) -> DiffFormat { if self.flag_patch { DiffFormat::Patch } else if self.flag_name_only { DiffFormat::NameOnly } else if self.flag_name_status { DiffFormat::NameStatus } else if self.flag_raw { DiffFormat::Raw } else { match self.flag_format.as_ref().map(|s| &s[..]) { Some("name") => DiffFormat::NameOnly, Some("name-status") => DiffFormat::NameStatus, Some("raw") => DiffFormat::Raw, Some("diff-index") => DiffFormat::Raw, _ => DiffFormat::Patch, } } } } 
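// A minimal, self-contained sketch (not part of the original example) of the
// same Diff/DiffOptions surface used above: diff HEAD's tree against the
// working directory plus index and print summary statistics. The function
// name and the hard-coded "." path are illustrative only.
#[allow(dead_code)]
fn diff_stats_sketch() -> Result<(), Error> {
    let repo = Repository::open(".")?;
    // Resolve HEAD down to its tree so it can be compared with the workdir.
    let head_tree = repo.head()?.peel_to_tree()?;
    let mut opts = DiffOptions::new();
    opts.context_lines(3);
    let diff = repo.diff_tree_to_workdir_with_index(Some(&head_tree), Some(&mut opts))?;
    let stats = diff.stats()?;
    println!(
        "{} files changed, {} insertions(+), {} deletions(-)",
        stats.files_changed(),
        stats.insertions(),
        stats.deletions()
    );
    Ok(())
}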
fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/fetch.rs0000664000175000017500000001020114160055207016763 0ustar mwhudsonmwhudson/* * libgit2 "fetch" example - shows how to fetch remote data * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{AutotagOption, FetchOptions, RemoteCallbacks, Repository}; use std::io::{self, Write}; use std::str; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "remote")] arg_remote: Option, } fn run(args: &Args) -> Result<(), git2::Error> { let repo = Repository::open(".")?; let remote = args.arg_remote.as_ref().map(|s| &s[..]).unwrap_or("origin"); // Figure out whether it's a named remote or a URL println!("Fetching {} for repo", remote); let mut cb = RemoteCallbacks::new(); let mut remote = repo .find_remote(remote) .or_else(|_| repo.remote_anonymous(remote))?; cb.sideband_progress(|data| { print!("remote: {}", str::from_utf8(data).unwrap()); io::stdout().flush().unwrap(); true }); // This callback gets called for each remote-tracking branch that gets // updated. The message we output depends on whether it's a new one or an // update. cb.update_tips(|refname, a, b| { if a.is_zero() { println!("[new] {:20} {}", b, refname); } else { println!("[updated] {:10}..{:10} {}", a, b, refname); } true }); // Here we show processed and total objects in the pack and the amount of // received data. Most frontends will probably want to show a percentage and // the download rate. cb.transfer_progress(|stats| { if stats.received_objects() == stats.total_objects() { print!( "Resolving deltas {}/{}\r", stats.indexed_deltas(), stats.total_deltas() ); } else if stats.total_objects() > 0 { print!( "Received {}/{} objects ({}) in {} bytes\r", stats.received_objects(), stats.total_objects(), stats.indexed_objects(), stats.received_bytes() ); } io::stdout().flush().unwrap(); true }); // Download the packfile and index it. This function updates the amount of // received data and the indexer stats which lets you inform the user about // progress. let mut fo = FetchOptions::new(); fo.remote_callbacks(cb); remote.download(&[] as &[&str], Some(&mut fo))?; { // If there are local objects (we got a thin pack), then tell the user // how many objects we saved from having to cross the network. let stats = remote.stats(); if stats.local_objects() > 0 { println!( "\rReceived {}/{} objects in {} bytes (used {} local \ objects)", stats.indexed_objects(), stats.total_objects(), stats.received_bytes(), stats.local_objects() ); } else { println!( "\rReceived {}/{} objects in {} bytes", stats.indexed_objects(), stats.total_objects(), stats.received_bytes() ); } } // Disconnect the underlying connection to prevent from idling. remote.disconnect()?; // Update the references in the remote's namespace to point to the right // commits. This may be needed even if there was no packfile to download, // which can happen e.g. when the branches have been changed but all the // needed objects are available locally. 
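// The arguments passed here are: no extra callbacks, `true` to also update
// FETCH_HEAD, the tag-following policy (left unspecified so the remote's own
// configuration applies), and no custom reflog message.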
remote.update_tips(None, true, AutotagOption::Unspecified, None)?; Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/blame.rs0000664000175000017500000000577614160055207016777 0ustar mwhudsonmwhudson/* * libgit2 "blame" example - shows how to use the blame API * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{BlameOptions, Repository}; use std::io::{BufRead, BufReader}; use std::path::Path; use structopt::StructOpt; #[derive(StructOpt)] #[allow(non_snake_case)] struct Args { #[structopt(name = "path")] arg_path: String, #[structopt(name = "spec")] arg_spec: Option, #[structopt(short = "M")] /// find line moves within and across files flag_M: bool, #[structopt(short = "C")] /// find line copies within and across files flag_C: bool, #[structopt(short = "F")] /// follow only the first parent commits flag_F: bool, } fn run(args: &Args) -> Result<(), git2::Error> { let repo = Repository::open(".")?; let path = Path::new(&args.arg_path[..]); // Prepare our blame options let mut opts = BlameOptions::new(); opts.track_copies_same_commit_moves(args.flag_M) .track_copies_same_commit_copies(args.flag_C) .first_parent(args.flag_F); let mut commit_id = "HEAD".to_string(); // Parse spec if let Some(spec) = args.arg_spec.as_ref() { let revspec = repo.revparse(spec)?; let (oldest, newest) = if revspec.mode().contains(git2::RevparseMode::SINGLE) { (None, revspec.from()) } else if revspec.mode().contains(git2::RevparseMode::RANGE) { (revspec.from(), revspec.to()) } else { (None, None) }; if let Some(commit) = oldest { opts.oldest_commit(commit.id()); } if let Some(commit) = newest { opts.newest_commit(commit.id()); if !commit.id().is_zero() { commit_id = format!("{}", commit.id()) } } } let spec = format!("{}:{}", commit_id, path.display()); let blame = repo.blame_file(path, Some(&mut opts))?; let object = repo.revparse_single(&spec[..])?; let blob = repo.find_blob(object.id())?; let reader = BufReader::new(blob.content()); for (i, line) in reader.lines().enumerate() { if let (Ok(line), Some(hunk)) = (line, blame.get_line(i + 1)) { let sig = hunk.final_signature(); println!( "{} {} <{}> {}", hunk.final_commit_id(), String::from_utf8_lossy(sig.name_bytes()), String::from_utf8_lossy(sig.email_bytes()), line ); } } Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/cat-file.rs0000664000175000017500000000771714160055207017400 0ustar mwhudsonmwhudson/* * libgit2 "cat-file" example - shows how to print data from the ODB * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . 
*/ #![deny(warnings)] use std::io::{self, Write}; use git2::{Blob, Commit, ObjectType, Repository, Signature, Tag, Tree}; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "object")] arg_object: String, #[structopt(short = "t")] /// show the object type flag_t: bool, #[structopt(short = "s")] /// show the object size flag_s: bool, #[structopt(short = "e")] /// suppress all output flag_e: bool, #[structopt(short = "p")] /// pretty print the contents of the object flag_p: bool, #[structopt(name = "quiet", short, long)] /// suppress output flag_q: bool, #[structopt(name = "verbose", short, long)] flag_v: bool, #[structopt(name = "dir", long = "git-dir")] /// use the specified directory as the base directory flag_git_dir: Option, } fn run(args: &Args) -> Result<(), git2::Error> { let path = args.flag_git_dir.as_ref().map(|s| &s[..]).unwrap_or("."); let repo = Repository::open(path)?; let obj = repo.revparse_single(&args.arg_object)?; if args.flag_v && !args.flag_q { println!("{} {}\n--", obj.kind().unwrap().str(), obj.id()); } if args.flag_t { println!("{}", obj.kind().unwrap().str()); } else if args.flag_s || args.flag_e { /* ... */ } else if args.flag_p { match obj.kind() { Some(ObjectType::Blob) => { show_blob(obj.as_blob().unwrap()); } Some(ObjectType::Commit) => { show_commit(obj.as_commit().unwrap()); } Some(ObjectType::Tag) => { show_tag(obj.as_tag().unwrap()); } Some(ObjectType::Tree) => { show_tree(obj.as_tree().unwrap()); } Some(ObjectType::Any) | None => println!("unknown {}", obj.id()), } } Ok(()) } fn show_blob(blob: &Blob) { io::stdout().write_all(blob.content()).unwrap(); } fn show_commit(commit: &Commit) { println!("tree {}", commit.tree_id()); for parent in commit.parent_ids() { println!("parent {}", parent); } show_sig("author", Some(commit.author())); show_sig("committer", Some(commit.committer())); if let Some(msg) = commit.message() { println!("\n{}", msg); } } fn show_tag(tag: &Tag) { println!("object {}", tag.target_id()); println!("type {}", tag.target_type().unwrap().str()); println!("tag {}", tag.name().unwrap()); show_sig("tagger", tag.tagger()); if let Some(msg) = tag.message() { println!("\n{}", msg); } } fn show_tree(tree: &Tree) { for entry in tree.iter() { println!( "{:06o} {} {}\t{}", entry.filemode(), entry.kind().unwrap().str(), entry.id(), entry.name().unwrap() ); } } fn show_sig(header: &str, sig: Option) { let sig = match sig { Some(s) => s, None => return, }; let offset = sig.when().offset_minutes(); let (sign, offset) = if offset < 0 { ('-', -offset) } else { ('+', offset) }; let (hours, minutes) = (offset / 60, offset % 60); println!( "{} {} {} {}{:02}{:02}", header, sig, sig.when().seconds(), sign, hours, minutes ); } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/rev-list.rs0000664000175000017500000000544714160055207017457 0ustar mwhudsonmwhudson/* * libgit2 "rev-list" example - shows how to transform a rev-spec into a list * of commit ids * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . 
*/ #![deny(warnings)] use git2::{Error, Oid, Repository, Revwalk}; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "topo-order", long)] /// sort commits in topological order flag_topo_order: bool, #[structopt(name = "date-order", long)] /// sort commits in date order flag_date_order: bool, #[structopt(name = "reverse", long)] /// sort commits in reverse flag_reverse: bool, #[structopt(name = "not")] /// don't show flag_not: Vec, #[structopt(name = "spec", last = true)] arg_spec: Vec, } fn run(args: &Args) -> Result<(), git2::Error> { let repo = Repository::open(".")?; let mut revwalk = repo.revwalk()?; let base = if args.flag_reverse { git2::Sort::REVERSE } else { git2::Sort::NONE }; revwalk.set_sorting( base | if args.flag_topo_order { git2::Sort::TOPOLOGICAL } else if args.flag_date_order { git2::Sort::TIME } else { git2::Sort::NONE }, )?; let specs = args .flag_not .iter() .map(|s| (s, true)) .chain(args.arg_spec.iter().map(|s| (s, false))) .map(|(spec, hide)| { if spec.starts_with('^') { (&spec[1..], !hide) } else { (&spec[..], hide) } }); for (spec, hide) in specs { let id = if spec.contains("..") { let revspec = repo.revparse(spec)?; if revspec.mode().contains(git2::RevparseMode::MERGE_BASE) { return Err(Error::from_str("merge bases not implemented")); } push(&mut revwalk, revspec.from().unwrap().id(), !hide)?; revspec.to().unwrap().id() } else { repo.revparse_single(spec)?.id() }; push(&mut revwalk, id, hide)?; } for id in revwalk { let id = id?; println!("{}", id); } Ok(()) } fn push(revwalk: &mut Revwalk, id: Oid, hide: bool) -> Result<(), Error> { if hide { revwalk.hide(id) } else { revwalk.push(id) } } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/add.rs0000664000175000017500000000403314160055207016430 0ustar mwhudsonmwhudson/* * libgit2 "add" example - shows how to modify the index * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . 
*/ #![deny(warnings)] #![allow(trivial_casts)] use git2::Repository; use std::path::Path; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "spec")] arg_spec: Vec, #[structopt(name = "dry_run", short = "n", long)] /// dry run flag_dry_run: bool, #[structopt(name = "verbose", short, long)] /// be verbose flag_verbose: bool, #[structopt(name = "update", short, long)] /// update tracked files flag_update: bool, } fn run(args: &Args) -> Result<(), git2::Error> { let repo = Repository::open(&Path::new("."))?; let mut index = repo.index()?; let cb = &mut |path: &Path, _matched_spec: &[u8]| -> i32 { let status = repo.status_file(path).unwrap(); let ret = if status.contains(git2::Status::WT_MODIFIED) || status.contains(git2::Status::WT_NEW) { println!("add '{}'", path.display()); 0 } else { 1 }; if args.flag_dry_run { 1 } else { ret } }; let cb = if args.flag_verbose || args.flag_update { Some(cb as &mut git2::IndexMatchedPath) } else { None }; if args.flag_update { index.update_all(args.arg_spec.iter(), cb)?; } else { index.add_all(args.arg_spec.iter(), git2::IndexAddOption::DEFAULT, cb)?; } index.write()?; Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/clone.rs0000664000175000017500000000650014160055207017001 0ustar mwhudsonmwhudson/* * libgit2 "clone" example * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::build::{CheckoutBuilder, RepoBuilder}; use git2::{FetchOptions, Progress, RemoteCallbacks}; use std::cell::RefCell; use std::io::{self, Write}; use std::path::{Path, PathBuf}; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "url")] arg_url: String, #[structopt(name = "path")] arg_path: String, } struct State { progress: Option>, total: usize, current: usize, path: Option, newline: bool, } fn print(state: &mut State) { let stats = state.progress.as_ref().unwrap(); let network_pct = (100 * stats.received_objects()) / stats.total_objects(); let index_pct = (100 * stats.indexed_objects()) / stats.total_objects(); let co_pct = if state.total > 0 { (100 * state.current) / state.total } else { 0 }; let kbytes = stats.received_bytes() / 1024; if stats.received_objects() == stats.total_objects() { if !state.newline { println!(); state.newline = true; } print!( "Resolving deltas {}/{}\r", stats.indexed_deltas(), stats.total_deltas() ); } else { print!( "net {:3}% ({:4} kb, {:5}/{:5}) / idx {:3}% ({:5}/{:5}) \ / chk {:3}% ({:4}/{:4}) {}\r", network_pct, kbytes, stats.received_objects(), stats.total_objects(), index_pct, stats.indexed_objects(), stats.total_objects(), co_pct, state.current, state.total, state .path .as_ref() .map(|s| s.to_string_lossy().into_owned()) .unwrap_or_default() ) } io::stdout().flush().unwrap(); } fn run(args: &Args) -> Result<(), git2::Error> { let state = RefCell::new(State { progress: None, total: 0, current: 0, path: None, newline: false, }); let mut cb = RemoteCallbacks::new(); cb.transfer_progress(|stats| { let mut state = state.borrow_mut(); state.progress = Some(stats.to_owned()); print(&mut *state); true }); let mut co = CheckoutBuilder::new(); 
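// The checkout progress callback below receives the path currently being
// written (if any) plus the completed and total step counts; they are copied
// into `State` so `print` can render network, index and checkout progress on
// a single line.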
co.progress(|path, cur, total| { let mut state = state.borrow_mut(); state.path = path.map(|p| p.to_path_buf()); state.current = cur; state.total = total; print(&mut *state); }); let mut fo = FetchOptions::new(); fo.remote_callbacks(cb); RepoBuilder::new() .fetch_options(fo) .with_checkout(co) .clone(&args.arg_url, Path::new(&args.arg_path))?; println!(); Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/init.rs0000664000175000017500000001150214160055207016642 0ustar mwhudsonmwhudson/* * libgit2 "init" example - shows how to initialize a new repo * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{Error, Repository, RepositoryInitMode, RepositoryInitOptions}; use std::path::{Path, PathBuf}; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "directory")] arg_directory: String, #[structopt(name = "quiet", short, long)] /// don't print information to stdout flag_quiet: bool, #[structopt(name = "bare", long)] /// initialize a new bare repository flag_bare: bool, #[structopt(name = "dir", long = "template")] /// use
<dir>
as an initialization template flag_template: Option, #[structopt(name = "separate-git-dir", long)] /// use as the .git directory flag_separate_git_dir: Option, #[structopt(name = "initial-commit", long)] /// create an initial empty commit flag_initial_commit: bool, #[structopt(name = "perms", long = "shared")] /// permissions to create the repository with flag_shared: Option, } fn run(args: &Args) -> Result<(), Error> { let mut path = PathBuf::from(&args.arg_directory); let repo = if !args.flag_bare && args.flag_template.is_none() && args.flag_shared.is_none() && args.flag_separate_git_dir.is_none() { Repository::init(&path)? } else { let mut opts = RepositoryInitOptions::new(); opts.bare(args.flag_bare); if let Some(ref s) = args.flag_template { opts.template_path(Path::new(s)); } // If you specified a separate git directory, then initialize // the repository at that path and use the second path as the // working directory of the repository (with a git-link file) if let Some(ref s) = args.flag_separate_git_dir { opts.workdir_path(&path); path = PathBuf::from(s); } if let Some(ref s) = args.flag_shared { opts.mode(parse_shared(s)?); } Repository::init_opts(&path, &opts)? }; // Print a message to stdout like "git init" does if !args.flag_quiet { if args.flag_bare || args.flag_separate_git_dir.is_some() { path = repo.path().to_path_buf(); } else { path = repo.workdir().unwrap().to_path_buf(); } println!("Initialized empty Git repository in {}", path.display()); } if args.flag_initial_commit { create_initial_commit(&repo)?; println!("Created empty initial commit"); } Ok(()) } /// Unlike regular "git init", this example shows how to create an initial empty /// commit in the repository. This is the helper function that does that. fn create_initial_commit(repo: &Repository) -> Result<(), Error> { // First use the config to initialize a commit signature for the user. let sig = repo.signature()?; // Now let's create an empty tree for this commit let tree_id = { let mut index = repo.index()?; // Outside of this example, you could call index.add_path() // here to put actual files into the index. For our purposes, we'll // leave it empty for now. index.write_tree()? }; let tree = repo.find_tree(tree_id)?; // Ready to create the initial commit. // // Normally creating a commit would involve looking up the current HEAD // commit and making that be the parent of the initial commit, but here this // is the first commit so there will be no parent. 
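// Repository::commit takes the reference to update (HEAD here), the author
// and committer signatures, the commit message, the tree to commit, and the
// slice of parent commits (empty for this root commit); it returns the Oid
// of the newly created commit.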
repo.commit(Some("HEAD"), &sig, &sig, "Initial commit", &tree, &[])?; Ok(()) } fn parse_shared(shared: &str) -> Result { match shared { "false" | "umask" => Ok(git2::RepositoryInitMode::SHARED_UMASK), "true" | "group" => Ok(git2::RepositoryInitMode::SHARED_GROUP), "all" | "world" => Ok(git2::RepositoryInitMode::SHARED_ALL), _ => { if shared.starts_with('0') { match u32::from_str_radix(&shared[1..], 8).ok() { Some(n) => Ok(RepositoryInitMode::from_bits_truncate(n)), None => Err(Error::from_str("invalid octal value for --shared")), } } else { Err(Error::from_str("unknown value for --shared")) } } } } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/log.rs0000664000175000017500000002266014160055207016467 0ustar mwhudsonmwhudson/* * libgit2 "log" example - shows how to walk history and get commit info * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{Commit, DiffOptions, ObjectType, Repository, Signature, Time}; use git2::{DiffFormat, Error, Pathspec}; use std::str; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "topo-order", long)] /// sort commits in topological order flag_topo_order: bool, #[structopt(name = "date-order", long)] /// sort commits in date order flag_date_order: bool, #[structopt(name = "reverse", long)] /// sort commits in reverse flag_reverse: bool, #[structopt(name = "author", long)] /// author to sort by flag_author: Option, #[structopt(name = "committer", long)] /// committer to sort by flag_committer: Option, #[structopt(name = "pat", long = "grep")] /// pattern to filter commit messages by flag_grep: Option, #[structopt(name = "dir", long = "git-dir")] /// alternative git directory to use flag_git_dir: Option, #[structopt(name = "skip", long)] /// number of commits to skip flag_skip: Option, #[structopt(name = "max-count", short = "n", long)] /// maximum number of commits to show flag_max_count: Option, #[structopt(name = "merges", long)] /// only show merge commits flag_merges: bool, #[structopt(name = "no-merges", long)] /// don't show merge commits flag_no_merges: bool, #[structopt(name = "no-min-parents", long)] /// don't require a minimum number of parents flag_no_min_parents: bool, #[structopt(name = "no-max-parents", long)] /// don't require a maximum number of parents flag_no_max_parents: bool, #[structopt(name = "max-parents")] /// specify a maximum number of parents for a commit flag_max_parents: Option, #[structopt(name = "min-parents")] /// specify a minimum number of parents for a commit flag_min_parents: Option, #[structopt(name = "patch", long, short)] /// show commit diff flag_patch: bool, #[structopt(name = "commit")] arg_commit: Vec, #[structopt(name = "spec", last = true)] arg_spec: Vec, } fn run(args: &Args) -> Result<(), Error> { let path = args.flag_git_dir.as_ref().map(|s| &s[..]).unwrap_or("."); let repo = Repository::open(path)?; let mut revwalk = repo.revwalk()?; // Prepare the revwalk based on CLI parameters let base = if args.flag_reverse { git2::Sort::REVERSE } else { git2::Sort::NONE }; revwalk.set_sorting( base | if args.flag_topo_order { git2::Sort::TOPOLOGICAL } else 
if args.flag_date_order { git2::Sort::TIME } else { git2::Sort::NONE }, )?; for commit in &args.arg_commit { if commit.starts_with('^') { let obj = repo.revparse_single(&commit[1..])?; revwalk.hide(obj.id())?; continue; } let revspec = repo.revparse(commit)?; if revspec.mode().contains(git2::RevparseMode::SINGLE) { revwalk.push(revspec.from().unwrap().id())?; } else { let from = revspec.from().unwrap().id(); let to = revspec.to().unwrap().id(); revwalk.push(to)?; if revspec.mode().contains(git2::RevparseMode::MERGE_BASE) { let base = repo.merge_base(from, to)?; let o = repo.find_object(base, Some(ObjectType::Commit))?; revwalk.push(o.id())?; } revwalk.hide(from)?; } } if args.arg_commit.is_empty() { revwalk.push_head()?; } // Prepare our diff options and pathspec matcher let (mut diffopts, mut diffopts2) = (DiffOptions::new(), DiffOptions::new()); for spec in &args.arg_spec { diffopts.pathspec(spec); diffopts2.pathspec(spec); } let ps = Pathspec::new(args.arg_spec.iter())?; // Filter our revwalk based on the CLI parameters macro_rules! filter_try { ($e:expr) => { match $e { Ok(t) => t, Err(e) => return Some(Err(e)), } }; } let revwalk = revwalk .filter_map(|id| { let id = filter_try!(id); let commit = filter_try!(repo.find_commit(id)); let parents = commit.parents().len(); if parents < args.min_parents() { return None; } if let Some(n) = args.max_parents() { if parents >= n { return None; } } if !args.arg_spec.is_empty() { match commit.parents().len() { 0 => { let tree = filter_try!(commit.tree()); let flags = git2::PathspecFlags::NO_MATCH_ERROR; if ps.match_tree(&tree, flags).is_err() { return None; } } _ => { let m = commit.parents().all(|parent| { match_with_parent(&repo, &commit, &parent, &mut diffopts) .unwrap_or(false) }); if !m { return None; } } } } if !sig_matches(&commit.author(), &args.flag_author) { return None; } if !sig_matches(&commit.committer(), &args.flag_committer) { return None; } if !log_message_matches(commit.message(), &args.flag_grep) { return None; } Some(Ok(commit)) }) .skip(args.flag_skip.unwrap_or(0)) .take(args.flag_max_count.unwrap_or(!0)); // print! for commit in revwalk { let commit = commit?; print_commit(&commit); if !args.flag_patch || commit.parents().len() > 1 { continue; } let a = if commit.parents().len() == 1 { let parent = commit.parent(0)?; Some(parent.tree()?) 
} else { None }; let b = commit.tree()?; let diff = repo.diff_tree_to_tree(a.as_ref(), Some(&b), Some(&mut diffopts2))?; diff.print(DiffFormat::Patch, |_delta, _hunk, line| { match line.origin() { ' ' | '+' | '-' => print!("{}", line.origin()), _ => {} } print!("{}", str::from_utf8(line.content()).unwrap()); true })?; } Ok(()) } fn sig_matches(sig: &Signature, arg: &Option) -> bool { match *arg { Some(ref s) => { sig.name().map(|n| n.contains(s)).unwrap_or(false) || sig.email().map(|n| n.contains(s)).unwrap_or(false) } None => true, } } fn log_message_matches(msg: Option<&str>, grep: &Option) -> bool { match (grep, msg) { (&None, _) => true, (&Some(_), None) => false, (&Some(ref s), Some(msg)) => msg.contains(s), } } fn print_commit(commit: &Commit) { println!("commit {}", commit.id()); if commit.parents().len() > 1 { print!("Merge:"); for id in commit.parent_ids() { print!(" {:.8}", id); } println!(); } let author = commit.author(); println!("Author: {}", author); print_time(&author.when(), "Date: "); println!(); for line in String::from_utf8_lossy(commit.message_bytes()).lines() { println!(" {}", line); } println!(); } fn print_time(time: &Time, prefix: &str) { let (offset, sign) = match time.offset_minutes() { n if n < 0 => (-n, '-'), n => (n, '+'), }; let (hours, minutes) = (offset / 60, offset % 60); let ts = time::Timespec::new(time.seconds() + (time.offset_minutes() as i64) * 60, 0); let time = time::at(ts); println!( "{}{} {}{:02}{:02}", prefix, time.strftime("%a %b %e %T %Y").unwrap(), sign, hours, minutes ); } fn match_with_parent( repo: &Repository, commit: &Commit, parent: &Commit, opts: &mut DiffOptions, ) -> Result { let a = parent.tree()?; let b = commit.tree()?; let diff = repo.diff_tree_to_tree(Some(&a), Some(&b), Some(opts))?; Ok(diff.deltas().len() > 0) } impl Args { fn min_parents(&self) -> usize { if self.flag_no_min_parents { return 0; } self.flag_min_parents .unwrap_or(if self.flag_merges { 2 } else { 0 }) } fn max_parents(&self) -> Option { if self.flag_no_max_parents { return None; } self.flag_max_parents .or(if self.flag_no_merges { Some(1) } else { None }) } } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/pull.rs0000664000175000017500000001550314160055207016660 0ustar mwhudsonmwhudson/* * libgit2 "pull" example - shows how to pull remote data into a local branch. * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ use git2::Repository; use std::io::{self, Write}; use std::str; use structopt::StructOpt; #[derive(StructOpt)] struct Args { arg_remote: Option, arg_branch: Option, } fn do_fetch<'a>( repo: &'a git2::Repository, refs: &[&str], remote: &'a mut git2::Remote, ) -> Result, git2::Error> { let mut cb = git2::RemoteCallbacks::new(); // Print out our transfer progress. 
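// The `Progress` value handed to this callback exposes counters such as
// total_objects(), received_objects(), indexed_deltas(), total_deltas() and
// received_bytes(), which are combined below into a single status line.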
cb.transfer_progress(|stats| { if stats.received_objects() == stats.total_objects() { print!( "Resolving deltas {}/{}\r", stats.indexed_deltas(), stats.total_deltas() ); } else if stats.total_objects() > 0 { print!( "Received {}/{} objects ({}) in {} bytes\r", stats.received_objects(), stats.total_objects(), stats.indexed_objects(), stats.received_bytes() ); } io::stdout().flush().unwrap(); true }); let mut fo = git2::FetchOptions::new(); fo.remote_callbacks(cb); // Always fetch all tags. // Perform a download and also update tips fo.download_tags(git2::AutotagOption::All); println!("Fetching {} for repo", remote.name().unwrap()); remote.fetch(refs, Some(&mut fo), None)?; // If there are local objects (we got a thin pack), then tell the user // how many objects we saved from having to cross the network. let stats = remote.stats(); if stats.local_objects() > 0 { println!( "\rReceived {}/{} objects in {} bytes (used {} local \ objects)", stats.indexed_objects(), stats.total_objects(), stats.received_bytes(), stats.local_objects() ); } else { println!( "\rReceived {}/{} objects in {} bytes", stats.indexed_objects(), stats.total_objects(), stats.received_bytes() ); } let fetch_head = repo.find_reference("FETCH_HEAD")?; Ok(repo.reference_to_annotated_commit(&fetch_head)?) } fn fast_forward( repo: &Repository, lb: &mut git2::Reference, rc: &git2::AnnotatedCommit, ) -> Result<(), git2::Error> { let name = match lb.name() { Some(s) => s.to_string(), None => String::from_utf8_lossy(lb.name_bytes()).to_string(), }; let msg = format!("Fast-Forward: Setting {} to id: {}", name, rc.id()); println!("{}", msg); lb.set_target(rc.id(), &msg)?; repo.set_head(&name)?; repo.checkout_head(Some( git2::build::CheckoutBuilder::default() // For some reason the force is required to make the working directory actually get updated // I suspect we should be adding some logic to handle dirty working directory states // but this is just an example so maybe not. .force(), ))?; Ok(()) } fn normal_merge( repo: &Repository, local: &git2::AnnotatedCommit, remote: &git2::AnnotatedCommit, ) -> Result<(), git2::Error> { let local_tree = repo.find_commit(local.id())?.tree()?; let remote_tree = repo.find_commit(remote.id())?.tree()?; let ancestor = repo .find_commit(repo.merge_base(local.id(), remote.id())?)? .tree()?; let mut idx = repo.merge_trees(&ancestor, &local_tree, &remote_tree, None)?; if idx.has_conflicts() { println!("Merge conficts detected..."); repo.checkout_index(Some(&mut idx), None)?; return Ok(()); } let result_tree = repo.find_tree(idx.write_tree_to(repo)?)?; // now create the merge commit let msg = format!("Merge: {} into {}", remote.id(), local.id()); let sig = repo.signature()?; let local_commit = repo.find_commit(local.id())?; let remote_commit = repo.find_commit(remote.id())?; // Do our merge commit and set current branch head to that commit. let _merge_commit = repo.commit( Some("HEAD"), &sig, &sig, &msg, &result_tree, &[&local_commit, &remote_commit], )?; // Set working tree to match head. repo.checkout_head(None)?; Ok(()) } fn do_merge<'a>( repo: &'a Repository, remote_branch: &str, fetch_commit: git2::AnnotatedCommit<'a>, ) -> Result<(), git2::Error> { // 1. do a merge analysis let analysis = repo.merge_analysis(&[&fetch_commit])?; // 2. 
Do the appopriate merge if analysis.0.is_fast_forward() { println!("Doing a fast forward"); // do a fast forward let refname = format!("refs/heads/{}", remote_branch); match repo.find_reference(&refname) { Ok(mut r) => { fast_forward(repo, &mut r, &fetch_commit)?; } Err(_) => { // The branch doesn't exist so just set the reference to the // commit directly. Usually this is because you are pulling // into an empty repository. repo.reference( &refname, fetch_commit.id(), true, &format!("Setting {} to {}", remote_branch, fetch_commit.id()), )?; repo.set_head(&refname)?; repo.checkout_head(Some( git2::build::CheckoutBuilder::default() .allow_conflicts(true) .conflict_style_merge(true) .force(), ))?; } }; } else if analysis.0.is_normal() { // do a normal merge let head_commit = repo.reference_to_annotated_commit(&repo.head()?)?; normal_merge(&repo, &head_commit, &fetch_commit)?; } else { println!("Nothing to do..."); } Ok(()) } fn run(args: &Args) -> Result<(), git2::Error> { let remote_name = args.arg_remote.as_ref().map(|s| &s[..]).unwrap_or("origin"); let remote_branch = args.arg_branch.as_ref().map(|s| &s[..]).unwrap_or("master"); let repo = Repository::open(".")?; let mut remote = repo.find_remote(remote_name)?; let fetch_commit = do_fetch(&repo, &[remote_branch], &mut remote)?; do_merge(&repo, &remote_branch, fetch_commit) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/status.rs0000664000175000017500000003256714160055207017240 0ustar mwhudsonmwhudson/* * libgit2 "status" example - shows how to use the status APIs * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{Error, ErrorCode, Repository, StatusOptions, SubmoduleIgnore}; use std::str; use std::time::Duration; use structopt::StructOpt; #[derive(StructOpt)] struct Args { arg_spec: Vec, #[structopt(name = "long", long)] /// show longer statuses (default) _flag_long: bool, /// show short statuses #[structopt(name = "short", long)] flag_short: bool, #[structopt(name = "porcelain", long)] /// ?? flag_porcelain: bool, #[structopt(name = "branch", short, long)] /// show branch information flag_branch: bool, #[structopt(name = "z", short)] /// ?? 
flag_z: bool, #[structopt(name = "ignored", long)] /// show ignored files as well flag_ignored: bool, #[structopt(name = "opt-modules", long = "untracked-files")] /// setting for showing untracked files [no|normal|all] flag_untracked_files: Option, #[structopt(name = "opt-files", long = "ignore-submodules")] /// setting for ignoring submodules [all] flag_ignore_submodules: Option, #[structopt(name = "dir", long = "git-dir")] /// git directory to analyze flag_git_dir: Option, #[structopt(name = "repeat", long)] /// repeatedly show status, sleeping inbetween flag_repeat: bool, #[structopt(name = "list-submodules", long)] /// show submodules flag_list_submodules: bool, } #[derive(Eq, PartialEq)] enum Format { Long, Short, Porcelain, } fn run(args: &Args) -> Result<(), Error> { let path = args.flag_git_dir.clone().unwrap_or_else(|| ".".to_string()); let repo = Repository::open(&path)?; if repo.is_bare() { return Err(Error::from_str("cannot report status on bare repository")); } let mut opts = StatusOptions::new(); opts.include_ignored(args.flag_ignored); match args.flag_untracked_files.as_ref().map(|s| &s[..]) { Some("no") => { opts.include_untracked(false); } Some("normal") => { opts.include_untracked(true); } Some("all") => { opts.include_untracked(true).recurse_untracked_dirs(true); } Some(_) => return Err(Error::from_str("invalid untracked-files value")), None => {} } match args.flag_ignore_submodules.as_ref().map(|s| &s[..]) { Some("all") => { opts.exclude_submodules(true); } Some(_) => return Err(Error::from_str("invalid ignore-submodules value")), None => {} } opts.include_untracked(!args.flag_ignored); for spec in &args.arg_spec { opts.pathspec(spec); } loop { if args.flag_repeat { println!("\u{1b}[H\u{1b}[2J"); } let statuses = repo.statuses(Some(&mut opts))?; if args.flag_branch { show_branch(&repo, &args.format())?; } if args.flag_list_submodules { print_submodules(&repo)?; } if args.format() == Format::Long { print_long(&statuses); } else { print_short(&repo, &statuses); } if args.flag_repeat { std::thread::sleep(Duration::new(10, 0)); } else { return Ok(()); } } } fn show_branch(repo: &Repository, format: &Format) -> Result<(), Error> { let head = match repo.head() { Ok(head) => Some(head), Err(ref e) if e.code() == ErrorCode::UnbornBranch || e.code() == ErrorCode::NotFound => { None } Err(e) => return Err(e), }; let head = head.as_ref().and_then(|h| h.shorthand()); if format == &Format::Long { println!( "# On branch {}", head.unwrap_or("Not currently on any branch") ); } else { println!("## {}", head.unwrap_or("HEAD (no branch)")); } Ok(()) } fn print_submodules(repo: &Repository) -> Result<(), Error> { let modules = repo.submodules()?; println!("# Submodules"); for sm in &modules { println!( "# - submodule '{}' at {}", sm.name().unwrap(), sm.path().display() ); } Ok(()) } // This function print out an output similar to git's status command in long // form, including the command-line hints. 
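// Entries are grouped the same way `git status` groups them: staged changes,
// changes not staged for commit, untracked files and ignored files, each
// section with its own header and hint lines.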
fn print_long(statuses: &git2::Statuses) { let mut header = false; let mut rm_in_workdir = false; let mut changes_in_index = false; let mut changed_in_workdir = false; // Print index changes for entry in statuses .iter() .filter(|e| e.status() != git2::Status::CURRENT) { if entry.status().contains(git2::Status::WT_DELETED) { rm_in_workdir = true; } let istatus = match entry.status() { s if s.contains(git2::Status::INDEX_NEW) => "new file: ", s if s.contains(git2::Status::INDEX_MODIFIED) => "modified: ", s if s.contains(git2::Status::INDEX_DELETED) => "deleted: ", s if s.contains(git2::Status::INDEX_RENAMED) => "renamed: ", s if s.contains(git2::Status::INDEX_TYPECHANGE) => "typechange:", _ => continue, }; if !header { println!( "\ # Changes to be committed: # (use \"git reset HEAD ...\" to unstage) #" ); header = true; } let old_path = entry.head_to_index().unwrap().old_file().path(); let new_path = entry.head_to_index().unwrap().new_file().path(); match (old_path, new_path) { (Some(old), Some(new)) if old != new => { println!("#\t{} {} -> {}", istatus, old.display(), new.display()); } (old, new) => { println!("#\t{} {}", istatus, old.or(new).unwrap().display()); } } } if header { changes_in_index = true; println!("#"); } header = false; // Print workdir changes to tracked files for entry in statuses.iter() { // With `Status::OPT_INCLUDE_UNMODIFIED` (not used in this example) // `index_to_workdir` may not be `None` even if there are no differences, // in which case it will be a `Delta::Unmodified`. if entry.status() == git2::Status::CURRENT || entry.index_to_workdir().is_none() { continue; } let istatus = match entry.status() { s if s.contains(git2::Status::WT_MODIFIED) => "modified: ", s if s.contains(git2::Status::WT_DELETED) => "deleted: ", s if s.contains(git2::Status::WT_RENAMED) => "renamed: ", s if s.contains(git2::Status::WT_TYPECHANGE) => "typechange:", _ => continue, }; if !header { println!( "\ # Changes not staged for commit: # (use \"git add{} ...\" to update what will be committed) # (use \"git checkout -- ...\" to discard changes in working directory) #\ ", if rm_in_workdir { "/rm" } else { "" } ); header = true; } let old_path = entry.index_to_workdir().unwrap().old_file().path(); let new_path = entry.index_to_workdir().unwrap().new_file().path(); match (old_path, new_path) { (Some(old), Some(new)) if old != new => { println!("#\t{} {} -> {}", istatus, old.display(), new.display()); } (old, new) => { println!("#\t{} {}", istatus, old.or(new).unwrap().display()); } } } if header { changed_in_workdir = true; println!("#"); } header = false; // Print untracked files for entry in statuses .iter() .filter(|e| e.status() == git2::Status::WT_NEW) { if !header { println!( "\ # Untracked files # (use \"git add ...\" to include in what will be committed) #" ); header = true; } let file = entry.index_to_workdir().unwrap().old_file().path().unwrap(); println!("#\t{}", file.display()); } header = false; // Print ignored files for entry in statuses .iter() .filter(|e| e.status() == git2::Status::IGNORED) { if !header { println!( "\ # Ignored files # (use \"git add -f ...\" to include in what will be committed) #" ); header = true; } let file = entry.index_to_workdir().unwrap().old_file().path().unwrap(); println!("#\t{}", file.display()); } if !changes_in_index && changed_in_workdir { println!( "no changes added to commit (use \"git add\" and/or \ \"git commit -a\")" ); } } // This version of the output prefixes each path with two status columns and // shows submodule status information. 
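// The first column is the index (staged) status and the second the working
// tree status, as in `git status --short`; '?' marks untracked entries, '!'
// ignored ones, and submodule state is summarised in a trailing note.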
fn print_short(repo: &Repository, statuses: &git2::Statuses) { for entry in statuses .iter() .filter(|e| e.status() != git2::Status::CURRENT) { let mut istatus = match entry.status() { s if s.contains(git2::Status::INDEX_NEW) => 'A', s if s.contains(git2::Status::INDEX_MODIFIED) => 'M', s if s.contains(git2::Status::INDEX_DELETED) => 'D', s if s.contains(git2::Status::INDEX_RENAMED) => 'R', s if s.contains(git2::Status::INDEX_TYPECHANGE) => 'T', _ => ' ', }; let mut wstatus = match entry.status() { s if s.contains(git2::Status::WT_NEW) => { if istatus == ' ' { istatus = '?'; } '?' } s if s.contains(git2::Status::WT_MODIFIED) => 'M', s if s.contains(git2::Status::WT_DELETED) => 'D', s if s.contains(git2::Status::WT_RENAMED) => 'R', s if s.contains(git2::Status::WT_TYPECHANGE) => 'T', _ => ' ', }; if entry.status().contains(git2::Status::IGNORED) { istatus = '!'; wstatus = '!'; } if istatus == '?' && wstatus == '?' { continue; } let mut extra = ""; // A commit in a tree is how submodules are stored, so let's go take a // look at its status. // // TODO: check for GIT_FILEMODE_COMMIT let status = entry.index_to_workdir().and_then(|diff| { let ignore = SubmoduleIgnore::Unspecified; diff.new_file() .path_bytes() .and_then(|s| str::from_utf8(s).ok()) .and_then(|name| repo.submodule_status(name, ignore).ok()) }); if let Some(status) = status { if status.contains(git2::SubmoduleStatus::WD_MODIFIED) { extra = " (new commits)"; } else if status.contains(git2::SubmoduleStatus::WD_INDEX_MODIFIED) || status.contains(git2::SubmoduleStatus::WD_WD_MODIFIED) { extra = " (modified content)"; } else if status.contains(git2::SubmoduleStatus::WD_UNTRACKED) { extra = " (untracked content)"; } } let (mut a, mut b, mut c) = (None, None, None); if let Some(diff) = entry.head_to_index() { a = diff.old_file().path(); b = diff.new_file().path(); } if let Some(diff) = entry.index_to_workdir() { a = a.or_else(|| diff.old_file().path()); b = b.or_else(|| diff.old_file().path()); c = diff.new_file().path(); } match (istatus, wstatus) { ('R', 'R') => println!( "RR {} {} {}{}", a.unwrap().display(), b.unwrap().display(), c.unwrap().display(), extra ), ('R', w) => println!( "R{} {} {}{}", w, a.unwrap().display(), b.unwrap().display(), extra ), (i, 'R') => println!( "{}R {} {}{}", i, a.unwrap().display(), c.unwrap().display(), extra ), (i, w) => println!("{}{} {}{}", i, w, a.unwrap().display(), extra), } } for entry in statuses .iter() .filter(|e| e.status() == git2::Status::WT_NEW) { println!( "?? {}", entry .index_to_workdir() .unwrap() .old_file() .path() .unwrap() .display() ); } } impl Args { fn format(&self) -> Format { if self.flag_short { Format::Short } else if self.flag_porcelain || self.flag_z { Format::Porcelain } else { Format::Long } } } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/ls-remote.rs0000664000175000017500000000262614160055207017615 0ustar mwhudsonmwhudson/* * libgit2 "ls-remote" example * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . 
*/ #![deny(warnings)] use git2::{Direction, Repository}; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "remote")] arg_remote: String, } fn run(args: &Args) -> Result<(), git2::Error> { let repo = Repository::open(".")?; let remote = &args.arg_remote; let mut remote = repo .find_remote(remote) .or_else(|_| repo.remote_anonymous(remote))?; // Connect to the remote and call the printing function for each of the // remote references. let connection = remote.connect_auth(Direction::Fetch, None, None)?; // Get the list of references on the remote and print out their name next to // what they point to. for head in connection.list()?.iter() { println!("{}\t{}", head.oid(), head.name()); } Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/tag.rs0000664000175000017500000000724514160055207016463 0ustar mwhudsonmwhudson/* * libgit2 "tag" example - shows how to list, create and delete tags * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::{Commit, Error, Repository, Tag}; use std::str; use structopt::StructOpt; #[derive(StructOpt)] struct Args { arg_tagname: Option, arg_object: Option, arg_pattern: Option, #[structopt(name = "n", short)] /// specify number of lines from the annotation to print flag_n: Option, #[structopt(name = "force", short, long)] /// replace an existing tag with the given name flag_force: bool, #[structopt(name = "list", short, long)] /// list tags with names matching the pattern given flag_list: bool, #[structopt(name = "tag", short, long = "delete")] /// delete the tag specified flag_delete: Option, #[structopt(name = "msg", short, long = "message")] /// message for a new tag flag_message: Option, } fn run(args: &Args) -> Result<(), Error> { let repo = Repository::open(".")?; if let Some(ref name) = args.arg_tagname { let target = args.arg_object.as_ref().map(|s| &s[..]).unwrap_or("HEAD"); let obj = repo.revparse_single(target)?; if let Some(ref message) = args.flag_message { let sig = repo.signature()?; repo.tag(name, &obj, &sig, message, args.flag_force)?; } else { repo.tag_lightweight(name, &obj, args.flag_force)?; } } else if let Some(ref name) = args.flag_delete { let obj = repo.revparse_single(name)?; let id = obj.short_id()?; repo.tag_delete(name)?; println!( "Deleted tag '{}' (was {})", name, str::from_utf8(&*id).unwrap() ); } else if args.flag_list { let pattern = args.arg_pattern.as_ref().map(|s| &s[..]).unwrap_or("*"); for name in repo.tag_names(Some(pattern))?.iter() { let name = name.unwrap(); let obj = repo.revparse_single(name)?; if let Some(tag) = obj.as_tag() { print_tag(tag, args); } else if let Some(commit) = obj.as_commit() { print_commit(commit, name, args); } else { print_name(name); } } } Ok(()) } fn print_tag(tag: &Tag, args: &Args) { print!("{:<16}", tag.name().unwrap()); if args.flag_n.is_some() { print_list_lines(tag.message(), args); } else { println!(); } } fn print_commit(commit: &Commit, name: &str, args: &Args) { print!("{:<16}", name); if args.flag_n.is_some() { print_list_lines(commit.message(), args); } else { println!(); } } fn print_name(name: &str) { println!("{}", 
name); } fn print_list_lines(message: Option<&str>, args: &Args) { let message = match message { Some(s) => s, None => return, }; let mut lines = message.lines().filter(|l| !l.trim().is_empty()); if let Some(first) = lines.next() { print!("{}", first); } println!(); for line in lines.take(args.flag_n.unwrap_or(0) as usize) { print!(" {}", line); } } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/examples/rev-parse.rs0000664000175000017500000000336014160055207017606 0ustar mwhudsonmwhudson/* * libgit2 "rev-parse" example - shows how to parse revspecs * * Written by the libgit2 contributors * * To the extent possible under law, the author(s) have dedicated all copyright * and related and neighboring rights to this software to the public domain * worldwide. This software is distributed without any warranty. * * You should have received a copy of the CC0 Public Domain Dedication along * with this software. If not, see * . */ #![deny(warnings)] use git2::Repository; use structopt::StructOpt; #[derive(StructOpt)] struct Args { #[structopt(name = "spec")] arg_spec: String, #[structopt(name = "dir", long = "git-dir")] /// directory of the git repository to check flag_git_dir: Option, } fn run(args: &Args) -> Result<(), git2::Error> { let path = args.flag_git_dir.as_ref().map(|s| &s[..]).unwrap_or("."); let repo = Repository::open(path)?; let revspec = repo.revparse(&args.arg_spec)?; if revspec.mode().contains(git2::RevparseMode::SINGLE) { println!("{}", revspec.from().unwrap().id()); } else if revspec.mode().contains(git2::RevparseMode::RANGE) { let to = revspec.to().unwrap(); let from = revspec.from().unwrap(); println!("{}", to.id()); if revspec.mode().contains(git2::RevparseMode::MERGE_BASE) { let base = repo.merge_base(from.id(), to.id())?; println!("{}", base); } println!("^{}", from.id()); } else { return Err(git2::Error::from_str("invalid results from revparse")); } Ok(()) } fn main() { let args = Args::from_args(); match run(&args) { Ok(()) => {} Err(e) => println!("error: {}", e), } } vendor/git2/LICENSE-MIT0000664000175000017500000000204114160055207015145 0ustar mwhudsonmwhudsonCopyright (c) 2014 Alex Crichton Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
vendor/git2/README.md0000664000175000017500000000367014160055207015001 0ustar mwhudsonmwhudson# git2-rs [Documentation](https://docs.rs/git2) libgit2 bindings for Rust ```toml [dependencies] git2 = "0.13" ``` ## Rust version requirements git2-rs works with stable Rust, and typically works with the most recent prior stable release as well. ## Version of libgit2 Currently this library requires libgit2 1.1.0. The source for libgit2 is included in the libgit2-sys crate so there's no need to pre-install the libgit2 library, the libgit2-sys crate will figure that and/or build that for you. ## Building git2-rs ```sh $ git clone https://github.com/rust-lang/git2-rs $ cd git2-rs $ cargo build ``` ### Automating Testing Running tests and handling all of the associated edge cases on every commit proves tedious very quickly. To automate tests and handle proper stashing and unstashing of unstaged changes and thus avoid nasty surprises, use the pre-commit hook found [here][pre-commit-hook] and place it into the `.git/hooks/` with the name `pre-commit`. You may need to add execution permissions with `chmod +x`. To skip tests on a simple commit or doc-fixes, use `git commit --no-verify`. ## Building on OSX 10.10+ If the `ssh` feature is enabled (and it is by default) then this library depends on libssh2 which depends on OpenSSL. To get OpenSSL working follow the [`openssl` crate's instructions](https://github.com/sfackler/rust-openssl#macos). # License This project is licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in git2-rs by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. [pre-commit-hook]: https://gist.github.com/glfmn/0c5e9e2b41b48007ed3497d11e3dbbfa vendor/git2/Cargo.lock0000664000175000017500000003422014160055207015422 0ustar mwhudsonmwhudson# This file is automatically @generated by Cargo. # It is not intended for manual editing. 
version = 3 [[package]] name = "ansi_term" version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ee49baf6cb617b853aa8d93bf420db2383fab46d314482ca2803b40d5fde979b" dependencies = [ "winapi", ] [[package]] name = "atty" version = "0.2.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8" dependencies = [ "hermit-abi", "libc", "winapi", ] [[package]] name = "autocfg" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a" [[package]] name = "bitflags" version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" [[package]] name = "cc" version = "1.0.70" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d26a6ce4b6a484fa3edb70f7efa6fc430fd2b87285fe8b84304fd0936faa0dc0" dependencies = [ "jobserver", ] [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "clap" version = "2.33.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "37e58ac78573c40708d45522f0d80fa2f01cc4f9b4e2bf749807255454312002" dependencies = [ "ansi_term", "atty", "bitflags", "strsim", "textwrap", "unicode-width", "vec_map", ] [[package]] name = "cmake" version = "0.1.45" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eb6210b637171dfba4cda12e579ac6dc73f5165ad56133e5d72ef3131f320855" dependencies = [ "cc", ] [[package]] name = "form_urlencoded" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5fc25a87fa4fd2094bffb06925852034d90a17f0d1e05197d4956d3555752191" dependencies = [ "matches", "percent-encoding", ] [[package]] name = "getrandom" version = "0.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7fcd999463524c52659517fe2cea98493cfe485d10565e7b0fb07dbba7ad2753" dependencies = [ "cfg-if", "libc", "wasi", ] [[package]] name = "git2" version = "0.13.23" dependencies = [ "bitflags", "libc", "libgit2-sys", "log", "openssl-probe", "openssl-sys", "paste", "structopt", "tempfile", "thread-id", "time", "url", ] [[package]] name = "heck" version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6d621efb26863f0e9924c6ac577e8275e5e6b77455db64ffa6c65c904e9e132c" dependencies = [ "unicode-segmentation", ] [[package]] name = "hermit-abi" version = "0.1.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33" dependencies = [ "libc", ] [[package]] name = "idna" version = "0.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "418a0a6fab821475f634efe3ccc45c013f742efe03d853e8d3355d5cb850ecf8" dependencies = [ "matches", "unicode-bidi", "unicode-normalization", ] [[package]] name = "jobserver" version = "0.1.24" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af25a77299a7f711a01975c35a6a424eb6862092cc2d6c72c4ed6cbc56dfc1fa" dependencies = [ "libc", ] [[package]] name = "lazy_static" version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" [[package]] name = "libc" version = "0.2.103" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dd8f7255a17a627354f321ef0055d63b898c6fb27eff628af4d1b66b7331edf6" [[package]] name = "libgit2-sys" version = "0.12.24+1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ddbd6021eef06fb289a8f54b3c2acfdd85ff2a585dfbb24b8576325373d2152c" dependencies = [ "cc", "libc", "libssh2-sys", "libz-sys", "openssl-sys", "pkg-config", ] [[package]] name = "libssh2-sys" version = "0.2.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e0186af0d8f171ae6b9c4c90ec51898bad5d08a2d5e470903a50d9ad8959cbee" dependencies = [ "cc", "libc", "libz-sys", "openssl-sys", "pkg-config", "vcpkg", ] [[package]] name = "libz-sys" version = "1.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "de5435b8549c16d423ed0c03dbaafe57cf6c3344744f1242520d59c9d8ecec66" dependencies = [ "cc", "cmake", "libc", "pkg-config", "vcpkg", ] [[package]] name = "log" version = "0.4.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710" dependencies = [ "cfg-if", ] [[package]] name = "matches" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a3e378b66a060d48947b590737b30a1be76706c8dd7b8ba0f2fe3989c68a853f" [[package]] name = "openssl-probe" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "28988d872ab76095a6e6ac88d99b54fd267702734fd7ffe610ca27f533ddb95a" [[package]] name = "openssl-src" version = "111.16.0+1.1.1l" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7ab2173f69416cf3ec12debb5823d244127d23a9b127d5a5189aa97c5fa2859f" dependencies = [ "cc", ] [[package]] name = "openssl-sys" version = "0.9.67" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "69df2d8dfc6ce3aaf44b40dec6f487d5a886516cf6879c49e98e0710f310a058" dependencies = [ "autocfg", "cc", "libc", "openssl-src", "pkg-config", "vcpkg", ] [[package]] name = "paste" version = "1.0.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "acbf547ad0c65e31259204bd90935776d1c693cec2f4ff7abb7a1bbbd40dfe58" [[package]] name = "percent-encoding" version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d4fd5641d01c8f18a23da7b6fe29298ff4b55afcccdf78973b24cf3175fee32e" [[package]] name = "pkg-config" version = "0.3.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7c9b1041b4387893b91ee6746cddfc28516aff326a3519fb2adf820932c5e6cb" [[package]] name = "ppv-lite86" version = "0.2.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857" [[package]] name = "proc-macro-error" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c" dependencies = [ "proc-macro-error-attr", "proc-macro2", "quote", "syn", "version_check", ] [[package]] name = "proc-macro-error-attr" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869" dependencies = [ "proc-macro2", "quote", "version_check", ] [[package]] name = "proc-macro2" 
version = "1.0.29" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b9f5105d4fdaab20335ca9565e106a5d9b82b6219b5ba735731124ac6711d23d" dependencies = [ "unicode-xid", ] [[package]] name = "quote" version = "1.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7" dependencies = [ "proc-macro2", ] [[package]] name = "rand" version = "0.8.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2e7573632e6454cf6b99d7aac4ccca54be06da05aca2ef7423d22d27d4d4bcd8" dependencies = [ "libc", "rand_chacha", "rand_core", "rand_hc", ] [[package]] name = "rand_chacha" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" dependencies = [ "ppv-lite86", "rand_core", ] [[package]] name = "rand_core" version = "0.6.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d34f1408f55294453790c48b2f1ebbb1c5b4b7563eb1f418bcfcfdbb06ebb4e7" dependencies = [ "getrandom", ] [[package]] name = "rand_hc" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d51e9f596de227fda2ea6c84607f5558e196eeaf43c986b724ba4fb8fdf497e7" dependencies = [ "rand_core", ] [[package]] name = "redox_syscall" version = "0.1.57" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "41cc0f7e4d5d4544e8861606a285bb08d3e70712ccc7d2b84d7c0ccfaf4b05ce" [[package]] name = "redox_syscall" version = "0.2.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8383f39639269cde97d255a32bdb68c047337295414940c68bdd30c2e13203ff" dependencies = [ "bitflags", ] [[package]] name = "remove_dir_all" version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7" dependencies = [ "winapi", ] [[package]] name = "strsim" version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ea5119cdb4c55b55d432abb513a0429384878c15dde60cc77b1c99de1a95a6a" [[package]] name = "structopt" version = "0.3.23" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf9d950ef167e25e0bdb073cf1d68e9ad2795ac826f2f3f59647817cf23c0bfa" dependencies = [ "clap", "lazy_static", "structopt-derive", ] [[package]] name = "structopt-derive" version = "0.4.16" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "134d838a2c9943ac3125cf6df165eda53493451b719f3255b2a26b85f772d0ba" dependencies = [ "heck", "proc-macro-error", "proc-macro2", "quote", "syn", ] [[package]] name = "syn" version = "1.0.77" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5239bc68e0fef57495900cfea4e8dc75596d9a319d7e16b1e0a440d24e6fe0a0" dependencies = [ "proc-macro2", "quote", "unicode-xid", ] [[package]] name = "tempfile" version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dac1c663cfc93810f88aed9b8941d48cabf856a1b111c29a40439018d870eb22" dependencies = [ "cfg-if", "libc", "rand", "redox_syscall 0.2.10", "remove_dir_all", "winapi", ] [[package]] name = "textwrap" version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d326610f408c7a4eb6f51c37c330e496b08506c9457c9d34287ecc38809fb060" dependencies = [ "unicode-width", ] [[package]] name = "thread-id" version = "3.3.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "c7fbf4c9d56b320106cd64fd024dadfa0be7cb4706725fc44a7d7ce952d820c1" dependencies = [ "libc", "redox_syscall 0.1.57", "winapi", ] [[package]] name = "time" version = "0.1.43" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ca8a50ef2360fbd1eeb0ecd46795a87a19024eb4b53c5dc916ca1fd95fe62438" dependencies = [ "libc", "winapi", ] [[package]] name = "tinyvec" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f83b2a3d4d9091d0abd7eba4dc2710b1718583bd4d8992e2190720ea38f391f7" dependencies = [ "tinyvec_macros", ] [[package]] name = "tinyvec_macros" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cda74da7e1a664f795bb1f8a87ec406fb89a02522cf6e50620d016add6dbbf5c" [[package]] name = "unicode-bidi" version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "246f4c42e67e7a4e3c6106ff716a5d067d4132a642840b242e357e468a2a0085" [[package]] name = "unicode-normalization" version = "0.1.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d54590932941a9e9266f0832deed84ebe1bf2e4c9e4a3554d393d18f5e854bf9" dependencies = [ "tinyvec", ] [[package]] name = "unicode-segmentation" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8895849a949e7845e06bd6dc1aa51731a103c42707010a5b591c0038fb73385b" [[package]] name = "unicode-width" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3ed742d4ea2bd1176e236172c8429aaf54486e7ac098db29ffe6529e0ce50973" [[package]] name = "unicode-xid" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3" [[package]] name = "url" version = "2.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a507c383b2d33b5fc35d1861e77e6b383d158b2da5e14fe51b83dfedf6fd578c" dependencies = [ "form_urlencoded", "idna", "matches", "percent-encoding", ] [[package]] name = "vcpkg" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" [[package]] name = "vec_map" version = "0.8.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f1bddf1187be692e79c5ffeab891132dfb0f236ed36a43c7ed39f1165ee20191" [[package]] name = "version_check" version = "0.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5fecdca9a5291cc2b8dcf7dc02453fee791a280f3743cb0905f8822ae463b3fe" [[package]] name = "wasi" version = "0.10.2+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fd6fbd9a79829dd1ad0cc20627bf1ed606756a7f77edff7b66b7064f9cb327c6" [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" vendor/serde_derive/0000775000175000017500000000000014172417313015312 5ustar mwhudsonmwhudsonvendor/serde_derive/.cargo-checksum.json0000664000175000017500000000013114172417313021151 0ustar mwhudsonmwhudson{"files":{},"package":"ed201699328568d8d08208fdd080e3ff594e6c422e438b6705905da01005d537"}vendor/serde_derive/LICENSE-APACHE0000664000175000017500000002513714160055207017243 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/serde_derive/Cargo.toml0000664000175000017500000000252714172417313017250 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] rust-version = "1.31" name = "serde_derive" version = "1.0.133" authors = ["Erick Tryzelaar ", "David Tolnay "] include = ["build.rs", "src/**/*.rs", "crates-io.md", "README.md", "LICENSE-APACHE", "LICENSE-MIT"] description = "Macros 1.1 implementation of #[derive(Serialize, Deserialize)]" homepage = "https://serde.rs" documentation = "https://serde.rs/derive.html" readme = "crates-io.md" keywords = ["serde", "serialization", "no_std"] license = "MIT OR Apache-2.0" repository = "https://github.com/serde-rs/serde" [package.metadata.docs.rs] targets = ["x86_64-unknown-linux-gnu"] [lib] name = "serde_derive" proc-macro = true [dependencies.proc-macro2] version = "1.0" [dependencies.quote] version = "1.0" [dependencies.syn] version = "1.0.60" [dev-dependencies.serde] version = "1.0" [features] default = [] deserialize_in_place = [] vendor/serde_derive/build.rs0000664000175000017500000000224414160055207016756 0ustar mwhudsonmwhudsonuse std::env; use std::process::Command; use std::str; // The rustc-cfg strings below are *not* public API. Please let us know by // opening a GitHub issue if your build environment requires some way to enable // these cfgs other than by executing our build script. fn main() { let minor = match rustc_minor_version() { Some(minor) => minor, None => return, }; // Underscore const names stabilized in Rust 1.37: // https://blog.rust-lang.org/2019/08/15/Rust-1.37.0.html#using-unnamed-const-items-for-macros if minor >= 37 { println!("cargo:rustc-cfg=underscore_consts"); } // The ptr::addr_of! 
macro stabilized in Rust 1.51: // https://blog.rust-lang.org/2021/03/25/Rust-1.51.0.html#stabilized-apis if minor >= 51 { println!("cargo:rustc-cfg=ptr_addr_of"); } } fn rustc_minor_version() -> Option { let rustc = env::var_os("RUSTC")?; let output = Command::new(rustc).arg("--version").output().ok()?; let version = str::from_utf8(&output.stdout).ok()?; let mut pieces = version.split('.'); if pieces.next() != Some("rustc 1") { return None; } pieces.next()?.parse().ok() } vendor/serde_derive/src/0000775000175000017500000000000014172417313016101 5ustar mwhudsonmwhudsonvendor/serde_derive/src/dummy.rs0000664000175000017500000000233614160055207017603 0ustar mwhudsonmwhudsonuse proc_macro2::{Ident, TokenStream}; use quote::format_ident; use syn; use try; pub fn wrap_in_const( serde_path: Option<&syn::Path>, trait_: &str, ty: &Ident, code: TokenStream, ) -> TokenStream { let try_replacement = try::replacement(); let dummy_const = if cfg!(underscore_consts) { format_ident!("_") } else { format_ident!("_IMPL_{}_FOR_{}", trait_, unraw(ty)) }; let use_serde = match serde_path { Some(path) => quote! { use #path as _serde; }, None => quote! { #[allow(unused_extern_crates, clippy::useless_attribute)] extern crate serde as _serde; }, }; quote! { #[doc(hidden)] #[allow(non_upper_case_globals, unused_attributes, unused_qualifications)] const #dummy_const: () = { #use_serde #try_replacement #code }; } } #[allow(deprecated)] fn unraw(ident: &Ident) -> String { // str::trim_start_matches was added in 1.30, trim_left_matches deprecated // in 1.33. We currently support rustc back to 1.15 so we need to continue // to use the deprecated one. ident.to_string().trim_left_matches("r#").to_owned() } vendor/serde_derive/src/ser.rs0000664000175000017500000012635114160055207017245 0ustar mwhudsonmwhudsonuse proc_macro2::{Span, TokenStream}; use syn::spanned::Spanned; use syn::{self, Ident, Index, Member}; use bound; use dummy; use fragment::{Fragment, Match, Stmts}; use internals::ast::{Container, Data, Field, Style, Variant}; use internals::{attr, replace_receiver, Ctxt, Derive}; use pretend; pub fn expand_derive_serialize( input: &mut syn::DeriveInput, ) -> Result> { replace_receiver(input); let ctxt = Ctxt::new(); let cont = match Container::from_ast(&ctxt, input, Derive::Serialize) { Some(cont) => cont, None => return Err(ctxt.check().unwrap_err()), }; precondition(&ctxt, &cont); ctxt.check()?; let ident = &cont.ident; let params = Parameters::new(&cont); let (impl_generics, ty_generics, where_clause) = params.generics.split_for_impl(); let body = Stmts(serialize_body(&cont, ¶ms)); let serde = cont.attrs.serde_path(); let impl_block = if let Some(remote) = cont.attrs.remote() { let vis = &input.vis; let used = pretend::pretend_used(&cont, params.is_packed); quote! { impl #impl_generics #ident #ty_generics #where_clause { #vis fn serialize<__S>(__self: &#remote #ty_generics, __serializer: __S) -> #serde::__private::Result<__S::Ok, __S::Error> where __S: #serde::Serializer, { #used #body } } } } else { quote! 
{ #[automatically_derived] impl #impl_generics #serde::Serialize for #ident #ty_generics #where_clause { fn serialize<__S>(&self, __serializer: __S) -> #serde::__private::Result<__S::Ok, __S::Error> where __S: #serde::Serializer, { #body } } } }; Ok(dummy::wrap_in_const( cont.attrs.custom_serde_path(), "SERIALIZE", ident, impl_block, )) } fn precondition(cx: &Ctxt, cont: &Container) { match cont.attrs.identifier() { attr::Identifier::No => {} attr::Identifier::Field => { cx.error_spanned_by(cont.original, "field identifiers cannot be serialized"); } attr::Identifier::Variant => { cx.error_spanned_by(cont.original, "variant identifiers cannot be serialized"); } } } struct Parameters { /// Variable holding the value being serialized. Either `self` for local /// types or `__self` for remote types. self_var: Ident, /// Path to the type the impl is for. Either a single `Ident` for local /// types or `some::remote::Ident` for remote types. Does not include /// generic parameters. this: syn::Path, /// Generics including any explicit and inferred bounds for the impl. generics: syn::Generics, /// Type has a `serde(remote = "...")` attribute. is_remote: bool, /// Type has a repr(packed) attribute. is_packed: bool, } impl Parameters { fn new(cont: &Container) -> Self { let is_remote = cont.attrs.remote().is_some(); let self_var = if is_remote { Ident::new("__self", Span::call_site()) } else { Ident::new("self", Span::call_site()) }; let this = match cont.attrs.remote() { Some(remote) => remote.clone(), None => cont.ident.clone().into(), }; let is_packed = cont.attrs.is_packed(); let generics = build_generics(cont); Parameters { self_var, this, generics, is_remote, is_packed, } } /// Type name to use in error messages and `&'static str` arguments to /// various Serializer methods. fn type_name(&self) -> String { self.this.segments.last().unwrap().ident.to_string() } } // All the generics in the input, plus a bound `T: Serialize` for each generic // field type that will be serialized by us. fn build_generics(cont: &Container) -> syn::Generics { let generics = bound::without_defaults(cont.generics); let generics = bound::with_where_predicates_from_fields(cont, &generics, attr::Field::ser_bound); let generics = bound::with_where_predicates_from_variants(cont, &generics, attr::Variant::ser_bound); match cont.attrs.ser_bound() { Some(predicates) => bound::with_where_predicates(&generics, predicates), None => bound::with_bound( cont, &generics, needs_serialize_bound, &parse_quote!(_serde::Serialize), ), } } // Fields with a `skip_serializing` or `serialize_with` attribute, or which // belong to a variant with a 'skip_serializing` or `serialize_with` attribute, // are not serialized by us so we do not generate a bound. Fields with a `bound` // attribute specify their own bound so we do not generate one. All other fields // may need a `T: Serialize` bound where T is the type of the field. 
fn needs_serialize_bound(field: &attr::Field, variant: Option<&attr::Variant>) -> bool { !field.skip_serializing() && field.serialize_with().is_none() && field.ser_bound().is_none() && variant.map_or(true, |variant| { !variant.skip_serializing() && variant.serialize_with().is_none() && variant.ser_bound().is_none() }) } fn serialize_body(cont: &Container, params: &Parameters) -> Fragment { if cont.attrs.transparent() { serialize_transparent(cont, params) } else if let Some(type_into) = cont.attrs.type_into() { serialize_into(params, type_into) } else { match &cont.data { Data::Enum(variants) => serialize_enum(params, variants, &cont.attrs), Data::Struct(Style::Struct, fields) => serialize_struct(params, fields, &cont.attrs), Data::Struct(Style::Tuple, fields) => { serialize_tuple_struct(params, fields, &cont.attrs) } Data::Struct(Style::Newtype, fields) => { serialize_newtype_struct(params, &fields[0], &cont.attrs) } Data::Struct(Style::Unit, _) => serialize_unit_struct(&cont.attrs), } } } fn serialize_transparent(cont: &Container, params: &Parameters) -> Fragment { let fields = match &cont.data { Data::Struct(_, fields) => fields, Data::Enum(_) => unreachable!(), }; let self_var = ¶ms.self_var; let transparent_field = fields.iter().find(|f| f.attrs.transparent()).unwrap(); let member = &transparent_field.member; let path = match transparent_field.attrs.serialize_with() { Some(path) => quote!(#path), None => { let span = transparent_field.original.span(); quote_spanned!(span=> _serde::Serialize::serialize) } }; quote_block! { #path(&#self_var.#member, __serializer) } } fn serialize_into(params: &Parameters, type_into: &syn::Type) -> Fragment { let self_var = ¶ms.self_var; quote_block! { _serde::Serialize::serialize( &_serde::__private::Into::<#type_into>::into(_serde::__private::Clone::clone(#self_var)), __serializer) } } fn serialize_unit_struct(cattrs: &attr::Container) -> Fragment { let type_name = cattrs.name().serialize_name(); quote_expr! { _serde::Serializer::serialize_unit_struct(__serializer, #type_name) } } fn serialize_newtype_struct( params: &Parameters, field: &Field, cattrs: &attr::Container, ) -> Fragment { let type_name = cattrs.name().serialize_name(); let mut field_expr = get_member( params, field, &Member::Unnamed(Index { index: 0, span: Span::call_site(), }), ); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = quote_spanned!(span=> _serde::Serializer::serialize_newtype_struct); quote_expr! { #func(__serializer, #type_name, #field_expr) } } fn serialize_tuple_struct( params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> Fragment { let serialize_stmts = serialize_tuple_struct_visitor(fields, params, false, &TupleTrait::SerializeTupleStruct); let type_name = cattrs.name().serialize_name(); let mut serialized_fields = fields .iter() .enumerate() .filter(|(_, field)| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some()); let len = serialized_fields .map(|(i, field)| match field.attrs.skip_serializing_if() { None => quote!(1), Some(path) => { let index = syn::Index { index: i as u32, span: Span::call_site(), }; let field_expr = get_member(params, field, &Member::Unnamed(index)); quote!(if #path(#field_expr) { 0 } else { 1 }) } }) .fold(quote!(0), |sum, expr| quote!(#sum + #expr)); quote_block! 
{ let #let_mut __serde_state = try!(_serde::Serializer::serialize_tuple_struct(__serializer, #type_name, #len)); #(#serialize_stmts)* _serde::ser::SerializeTupleStruct::end(__serde_state) } } fn serialize_struct(params: &Parameters, fields: &[Field], cattrs: &attr::Container) -> Fragment { assert!(fields.len() as u64 <= u64::from(u32::max_value())); if cattrs.has_flatten() { serialize_struct_as_map(params, fields, cattrs) } else { serialize_struct_as_struct(params, fields, cattrs) } } fn serialize_struct_tag_field(cattrs: &attr::Container, struct_trait: &StructTrait) -> TokenStream { match cattrs.tag() { attr::TagType::Internal { tag } => { let type_name = cattrs.name().serialize_name(); let func = struct_trait.serialize_field(Span::call_site()); quote! { try!(#func(&mut __serde_state, #tag, #type_name)); } } _ => quote! {}, } } fn serialize_struct_as_struct( params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> Fragment { let serialize_fields = serialize_struct_visitor(fields, params, false, &StructTrait::SerializeStruct); let type_name = cattrs.name().serialize_name(); let tag_field = serialize_struct_tag_field(cattrs, &StructTrait::SerializeStruct); let tag_field_exists = !tag_field.is_empty(); let mut serialized_fields = fields .iter() .filter(|&field| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some() || tag_field_exists); let len = serialized_fields .map(|field| match field.attrs.skip_serializing_if() { None => quote!(1), Some(path) => { let field_expr = get_member(params, field, &field.member); quote!(if #path(#field_expr) { 0 } else { 1 }) } }) .fold( quote!(#tag_field_exists as usize), |sum, expr| quote!(#sum + #expr), ); quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_struct(__serializer, #type_name, #len)); #tag_field #(#serialize_fields)* _serde::ser::SerializeStruct::end(__serde_state) } } fn serialize_struct_as_map( params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> Fragment { let serialize_fields = serialize_struct_visitor(fields, params, false, &StructTrait::SerializeMap); let tag_field = serialize_struct_tag_field(cattrs, &StructTrait::SerializeMap); let tag_field_exists = !tag_field.is_empty(); let mut serialized_fields = fields .iter() .filter(|&field| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some() || tag_field_exists); let len = if cattrs.has_flatten() { quote!(_serde::__private::None) } else { let len = serialized_fields .map(|field| match field.attrs.skip_serializing_if() { None => quote!(1), Some(path) => { let field_expr = get_member(params, field, &field.member); quote!(if #path(#field_expr) { 0 } else { 1 }) } }) .fold( quote!(#tag_field_exists as usize), |sum, expr| quote!(#sum + #expr), ); quote!(_serde::__private::Some(#len)) }; quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_map(__serializer, #len)); #tag_field #(#serialize_fields)* _serde::ser::SerializeMap::end(__serde_state) } } fn serialize_enum(params: &Parameters, variants: &[Variant], cattrs: &attr::Container) -> Fragment { assert!(variants.len() as u64 <= u64::from(u32::max_value())); let self_var = ¶ms.self_var; let arms: Vec<_> = variants .iter() .enumerate() .map(|(variant_index, variant)| { serialize_variant(params, variant, variant_index as u32, cattrs) }) .collect(); quote_expr! 
{ match *#self_var { #(#arms)* } } } fn serialize_variant( params: &Parameters, variant: &Variant, variant_index: u32, cattrs: &attr::Container, ) -> TokenStream { let this = ¶ms.this; let variant_ident = &variant.ident; if variant.attrs.skip_serializing() { let skipped_msg = format!( "the enum variant {}::{} cannot be serialized", params.type_name(), variant_ident ); let skipped_err = quote! { _serde::__private::Err(_serde::ser::Error::custom(#skipped_msg)) }; let fields_pat = match variant.style { Style::Unit => quote!(), Style::Newtype | Style::Tuple => quote!((..)), Style::Struct => quote!({ .. }), }; quote! { #this::#variant_ident #fields_pat => #skipped_err, } } else { // variant wasn't skipped let case = match variant.style { Style::Unit => { quote! { #this::#variant_ident } } Style::Newtype => { quote! { #this::#variant_ident(ref __field0) } } Style::Tuple => { let field_names = (0..variant.fields.len()) .map(|i| Ident::new(&format!("__field{}", i), Span::call_site())); quote! { #this::#variant_ident(#(ref #field_names),*) } } Style::Struct => { let members = variant.fields.iter().map(|f| &f.member); quote! { #this::#variant_ident { #(ref #members),* } } } }; let body = Match(match cattrs.tag() { attr::TagType::External => { serialize_externally_tagged_variant(params, variant, variant_index, cattrs) } attr::TagType::Internal { tag } => { serialize_internally_tagged_variant(params, variant, cattrs, tag) } attr::TagType::Adjacent { tag, content } => { serialize_adjacently_tagged_variant(params, variant, cattrs, tag, content) } attr::TagType::None => serialize_untagged_variant(params, variant, cattrs), }); quote! { #case => #body } } } fn serialize_externally_tagged_variant( params: &Parameters, variant: &Variant, variant_index: u32, cattrs: &attr::Container, ) -> Fragment { let type_name = cattrs.name().serialize_name(); let variant_name = variant.attrs.name().serialize_name(); if let Some(path) = variant.attrs.serialize_with() { let ser = wrap_serialize_variant_with(params, path, variant); return quote_expr! { _serde::Serializer::serialize_newtype_variant( __serializer, #type_name, #variant_index, #variant_name, #ser, ) }; } match effective_style(variant) { Style::Unit => { quote_expr! { _serde::Serializer::serialize_unit_variant( __serializer, #type_name, #variant_index, #variant_name, ) } } Style::Newtype => { let field = &variant.fields[0]; let mut field_expr = quote!(__field0); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = quote_spanned!(span=> _serde::Serializer::serialize_newtype_variant); quote_expr! { #func( __serializer, #type_name, #variant_index, #variant_name, #field_expr, ) } } Style::Tuple => serialize_tuple_variant( TupleVariant::ExternallyTagged { type_name, variant_index, variant_name, }, params, &variant.fields, ), Style::Struct => serialize_struct_variant( StructVariant::ExternallyTagged { variant_index, variant_name, }, params, &variant.fields, &type_name, ), } } fn serialize_internally_tagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, tag: &str, ) -> Fragment { let type_name = cattrs.name().serialize_name(); let variant_name = variant.attrs.name().serialize_name(); let enum_ident_str = params.type_name(); let variant_ident_str = variant.ident.to_string(); if let Some(path) = variant.attrs.serialize_with() { let ser = wrap_serialize_variant_with(params, path, variant); return quote_expr! 
{ _serde::__private::ser::serialize_tagged_newtype( __serializer, #enum_ident_str, #variant_ident_str, #tag, #variant_name, #ser, ) }; } match effective_style(variant) { Style::Unit => { quote_block! { let mut __struct = try!(_serde::Serializer::serialize_struct( __serializer, #type_name, 1)); try!(_serde::ser::SerializeStruct::serialize_field( &mut __struct, #tag, #variant_name)); _serde::ser::SerializeStruct::end(__struct) } } Style::Newtype => { let field = &variant.fields[0]; let mut field_expr = quote!(__field0); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = quote_spanned!(span=> _serde::__private::ser::serialize_tagged_newtype); quote_expr! { #func( __serializer, #enum_ident_str, #variant_ident_str, #tag, #variant_name, #field_expr, ) } } Style::Struct => serialize_struct_variant( StructVariant::InternallyTagged { tag, variant_name }, params, &variant.fields, &type_name, ), Style::Tuple => unreachable!("checked in serde_derive_internals"), } } fn serialize_adjacently_tagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, tag: &str, content: &str, ) -> Fragment { let this = ¶ms.this; let type_name = cattrs.name().serialize_name(); let variant_name = variant.attrs.name().serialize_name(); let inner = Stmts(if let Some(path) = variant.attrs.serialize_with() { let ser = wrap_serialize_variant_with(params, path, variant); quote_expr! { _serde::Serialize::serialize(#ser, __serializer) } } else { match effective_style(variant) { Style::Unit => { return quote_block! { let mut __struct = try!(_serde::Serializer::serialize_struct( __serializer, #type_name, 1)); try!(_serde::ser::SerializeStruct::serialize_field( &mut __struct, #tag, #variant_name)); _serde::ser::SerializeStruct::end(__struct) }; } Style::Newtype => { let field = &variant.fields[0]; let mut field_expr = quote!(__field0); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = quote_spanned!(span=> _serde::ser::SerializeStruct::serialize_field); return quote_block! { let mut __struct = try!(_serde::Serializer::serialize_struct( __serializer, #type_name, 2)); try!(_serde::ser::SerializeStruct::serialize_field( &mut __struct, #tag, #variant_name)); try!(#func( &mut __struct, #content, #field_expr)); _serde::ser::SerializeStruct::end(__struct) }; } Style::Tuple => { serialize_tuple_variant(TupleVariant::Untagged, params, &variant.fields) } Style::Struct => serialize_struct_variant( StructVariant::Untagged, params, &variant.fields, &variant_name, ), } }); let fields_ty = variant.fields.iter().map(|f| &f.ty); let fields_ident: &Vec<_> = &match variant.style { Style::Unit => { if variant.attrs.serialize_with().is_some() { vec![] } else { unreachable!() } } Style::Newtype => vec![Member::Named(Ident::new("__field0", Span::call_site()))], Style::Tuple => (0..variant.fields.len()) .map(|i| Member::Named(Ident::new(&format!("__field{}", i), Span::call_site()))) .collect(), Style::Struct => variant.fields.iter().map(|f| f.member.clone()).collect(), }; let (_, ty_generics, where_clause) = params.generics.split_for_impl(); let wrapper_generics = if fields_ident.is_empty() { params.generics.clone() } else { bound::with_lifetime_bound(¶ms.generics, "'__a") }; let (wrapper_impl_generics, wrapper_ty_generics, _) = wrapper_generics.split_for_impl(); quote_block! 
{ struct __AdjacentlyTagged #wrapper_generics #where_clause { data: (#(&'__a #fields_ty,)*), phantom: _serde::__private::PhantomData<#this #ty_generics>, } impl #wrapper_impl_generics _serde::Serialize for __AdjacentlyTagged #wrapper_ty_generics #where_clause { fn serialize<__S>(&self, __serializer: __S) -> _serde::__private::Result<__S::Ok, __S::Error> where __S: _serde::Serializer, { // Elements that have skip_serializing will be unused. #[allow(unused_variables)] let (#(#fields_ident,)*) = self.data; #inner } } let mut __struct = try!(_serde::Serializer::serialize_struct( __serializer, #type_name, 2)); try!(_serde::ser::SerializeStruct::serialize_field( &mut __struct, #tag, #variant_name)); try!(_serde::ser::SerializeStruct::serialize_field( &mut __struct, #content, &__AdjacentlyTagged { data: (#(#fields_ident,)*), phantom: _serde::__private::PhantomData::<#this #ty_generics>, })); _serde::ser::SerializeStruct::end(__struct) } } fn serialize_untagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, ) -> Fragment { if let Some(path) = variant.attrs.serialize_with() { let ser = wrap_serialize_variant_with(params, path, variant); return quote_expr! { _serde::Serialize::serialize(#ser, __serializer) }; } match effective_style(variant) { Style::Unit => { quote_expr! { _serde::Serializer::serialize_unit(__serializer) } } Style::Newtype => { let field = &variant.fields[0]; let mut field_expr = quote!(__field0); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = quote_spanned!(span=> _serde::Serialize::serialize); quote_expr! { #func(#field_expr, __serializer) } } Style::Tuple => serialize_tuple_variant(TupleVariant::Untagged, params, &variant.fields), Style::Struct => { let type_name = cattrs.name().serialize_name(); serialize_struct_variant(StructVariant::Untagged, params, &variant.fields, &type_name) } } } enum TupleVariant { ExternallyTagged { type_name: String, variant_index: u32, variant_name: String, }, Untagged, } fn serialize_tuple_variant( context: TupleVariant, params: &Parameters, fields: &[Field], ) -> Fragment { let tuple_trait = match context { TupleVariant::ExternallyTagged { .. } => TupleTrait::SerializeTupleVariant, TupleVariant::Untagged => TupleTrait::SerializeTuple, }; let serialize_stmts = serialize_tuple_struct_visitor(fields, params, true, &tuple_trait); let mut serialized_fields = fields .iter() .enumerate() .filter(|(_, field)| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some()); let len = serialized_fields .map(|(i, field)| match field.attrs.skip_serializing_if() { None => quote!(1), Some(path) => { let field_expr = Ident::new(&format!("__field{}", i), Span::call_site()); quote!(if #path(#field_expr) { 0 } else { 1 }) } }) .fold(quote!(0), |sum, expr| quote!(#sum + #expr)); match context { TupleVariant::ExternallyTagged { type_name, variant_index, variant_name, } => { quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_tuple_variant( __serializer, #type_name, #variant_index, #variant_name, #len)); #(#serialize_stmts)* _serde::ser::SerializeTupleVariant::end(__serde_state) } } TupleVariant::Untagged => { quote_block! 
{ let #let_mut __serde_state = try!(_serde::Serializer::serialize_tuple( __serializer, #len)); #(#serialize_stmts)* _serde::ser::SerializeTuple::end(__serde_state) } } } } enum StructVariant<'a> { ExternallyTagged { variant_index: u32, variant_name: String, }, InternallyTagged { tag: &'a str, variant_name: String, }, Untagged, } fn serialize_struct_variant<'a>( context: StructVariant<'a>, params: &Parameters, fields: &[Field], name: &str, ) -> Fragment { if fields.iter().any(|field| field.attrs.flatten()) { return serialize_struct_variant_with_flatten(context, params, fields, name); } let struct_trait = match context { StructVariant::ExternallyTagged { .. } => StructTrait::SerializeStructVariant, StructVariant::InternallyTagged { .. } | StructVariant::Untagged => { StructTrait::SerializeStruct } }; let serialize_fields = serialize_struct_visitor(fields, params, true, &struct_trait); let mut serialized_fields = fields .iter() .filter(|&field| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some()); let len = serialized_fields .map(|field| { let member = &field.member; match field.attrs.skip_serializing_if() { Some(path) => quote!(if #path(#member) { 0 } else { 1 }), None => quote!(1), } }) .fold(quote!(0), |sum, expr| quote!(#sum + #expr)); match context { StructVariant::ExternallyTagged { variant_index, variant_name, } => { quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_struct_variant( __serializer, #name, #variant_index, #variant_name, #len, )); #(#serialize_fields)* _serde::ser::SerializeStructVariant::end(__serde_state) } } StructVariant::InternallyTagged { tag, variant_name } => { quote_block! { let mut __serde_state = try!(_serde::Serializer::serialize_struct( __serializer, #name, #len + 1, )); try!(_serde::ser::SerializeStruct::serialize_field( &mut __serde_state, #tag, #variant_name, )); #(#serialize_fields)* _serde::ser::SerializeStruct::end(__serde_state) } } StructVariant::Untagged => { quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_struct( __serializer, #name, #len, )); #(#serialize_fields)* _serde::ser::SerializeStruct::end(__serde_state) } } } } fn serialize_struct_variant_with_flatten<'a>( context: StructVariant<'a>, params: &Parameters, fields: &[Field], name: &str, ) -> Fragment { let struct_trait = StructTrait::SerializeMap; let serialize_fields = serialize_struct_visitor(fields, params, true, &struct_trait); let mut serialized_fields = fields .iter() .filter(|&field| !field.attrs.skip_serializing()) .peekable(); let let_mut = mut_if(serialized_fields.peek().is_some()); match context { StructVariant::ExternallyTagged { variant_index, variant_name, } => { let this = ¶ms.this; let fields_ty = fields.iter().map(|f| &f.ty); let members = &fields.iter().map(|f| &f.member).collect::>(); let (_, ty_generics, where_clause) = params.generics.split_for_impl(); let wrapper_generics = bound::with_lifetime_bound(¶ms.generics, "'__a"); let (wrapper_impl_generics, wrapper_ty_generics, _) = wrapper_generics.split_for_impl(); quote_block! 
{ struct __EnumFlatten #wrapper_generics #where_clause { data: (#(&'__a #fields_ty,)*), phantom: _serde::__private::PhantomData<#this #ty_generics>, } impl #wrapper_impl_generics _serde::Serialize for __EnumFlatten #wrapper_ty_generics #where_clause { fn serialize<__S>(&self, __serializer: __S) -> _serde::__private::Result<__S::Ok, __S::Error> where __S: _serde::Serializer, { let (#(#members,)*) = self.data; let #let_mut __serde_state = try!(_serde::Serializer::serialize_map( __serializer, _serde::__private::None)); #(#serialize_fields)* _serde::ser::SerializeMap::end(__serde_state) } } _serde::Serializer::serialize_newtype_variant( __serializer, #name, #variant_index, #variant_name, &__EnumFlatten { data: (#(#members,)*), phantom: _serde::__private::PhantomData::<#this #ty_generics>, }) } } StructVariant::InternallyTagged { tag, variant_name } => { quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_map( __serializer, _serde::__private::None)); try!(_serde::ser::SerializeMap::serialize_entry( &mut __serde_state, #tag, #variant_name, )); #(#serialize_fields)* _serde::ser::SerializeMap::end(__serde_state) } } StructVariant::Untagged => { quote_block! { let #let_mut __serde_state = try!(_serde::Serializer::serialize_map( __serializer, _serde::__private::None)); #(#serialize_fields)* _serde::ser::SerializeMap::end(__serde_state) } } } } fn serialize_tuple_struct_visitor( fields: &[Field], params: &Parameters, is_enum: bool, tuple_trait: &TupleTrait, ) -> Vec { fields .iter() .enumerate() .filter(|(_, field)| !field.attrs.skip_serializing()) .map(|(i, field)| { let mut field_expr = if is_enum { let id = Ident::new(&format!("__field{}", i), Span::call_site()); quote!(#id) } else { get_member( params, field, &Member::Unnamed(Index { index: i as u32, span: Span::call_site(), }), ) }; let skip = field .attrs .skip_serializing_if() .map(|path| quote!(#path(#field_expr))); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let func = tuple_trait.serialize_element(span); let ser = quote! { try!(#func(&mut __serde_state, #field_expr)); }; match skip { None => ser, Some(skip) => quote!(if !#skip { #ser }), } }) .collect() } fn serialize_struct_visitor( fields: &[Field], params: &Parameters, is_enum: bool, struct_trait: &StructTrait, ) -> Vec { fields .iter() .filter(|&field| !field.attrs.skip_serializing()) .map(|field| { let member = &field.member; let mut field_expr = if is_enum { quote!(#member) } else { get_member(params, field, member) }; let key_expr = field.attrs.name().serialize_name(); let skip = field .attrs .skip_serializing_if() .map(|path| quote!(#path(#field_expr))); if let Some(path) = field.attrs.serialize_with() { field_expr = wrap_serialize_field_with(params, field.ty, path, &field_expr); } let span = field.original.span(); let ser = if field.attrs.flatten() { let func = quote_spanned!(span=> _serde::Serialize::serialize); quote! { try!(#func(&#field_expr, _serde::__private::ser::FlatMapSerializer(&mut __serde_state))); } } else { let func = struct_trait.serialize_field(span); quote! { try!(#func(&mut __serde_state, #key_expr, #field_expr)); } }; match skip { None => ser, Some(skip) => { if let Some(skip_func) = struct_trait.skip_field(span) { quote! { if !#skip { #ser } else { try!(#skip_func(&mut __serde_state, #key_expr)); } } } else { quote! 
{ if !#skip { #ser } } } } } }) .collect() } fn wrap_serialize_field_with( params: &Parameters, field_ty: &syn::Type, serialize_with: &syn::ExprPath, field_expr: &TokenStream, ) -> TokenStream { wrap_serialize_with(params, serialize_with, &[field_ty], &[quote!(#field_expr)]) } fn wrap_serialize_variant_with( params: &Parameters, serialize_with: &syn::ExprPath, variant: &Variant, ) -> TokenStream { let field_tys: Vec<_> = variant.fields.iter().map(|field| field.ty).collect(); let field_exprs: Vec<_> = variant .fields .iter() .map(|field| { let id = match &field.member { Member::Named(ident) => ident.clone(), Member::Unnamed(member) => { Ident::new(&format!("__field{}", member.index), Span::call_site()) } }; quote!(#id) }) .collect(); wrap_serialize_with( params, serialize_with, field_tys.as_slice(), field_exprs.as_slice(), ) } fn wrap_serialize_with( params: &Parameters, serialize_with: &syn::ExprPath, field_tys: &[&syn::Type], field_exprs: &[TokenStream], ) -> TokenStream { let this = ¶ms.this; let (_, ty_generics, where_clause) = params.generics.split_for_impl(); let wrapper_generics = if field_exprs.is_empty() { params.generics.clone() } else { bound::with_lifetime_bound(¶ms.generics, "'__a") }; let (wrapper_impl_generics, wrapper_ty_generics, _) = wrapper_generics.split_for_impl(); let field_access = (0..field_exprs.len()).map(|n| { Member::Unnamed(Index { index: n as u32, span: Span::call_site(), }) }); quote!({ struct __SerializeWith #wrapper_impl_generics #where_clause { values: (#(&'__a #field_tys, )*), phantom: _serde::__private::PhantomData<#this #ty_generics>, } impl #wrapper_impl_generics _serde::Serialize for __SerializeWith #wrapper_ty_generics #where_clause { fn serialize<__S>(&self, __s: __S) -> _serde::__private::Result<__S::Ok, __S::Error> where __S: _serde::Serializer, { #serialize_with(#(self.values.#field_access, )* __s) } } &__SerializeWith { values: (#(#field_exprs, )*), phantom: _serde::__private::PhantomData::<#this #ty_generics>, } }) } // Serialization of an empty struct results in code like: // // let mut __serde_state = try!(serializer.serialize_struct("S", 0)); // _serde::ser::SerializeStruct::end(__serde_state) // // where we want to omit the `mut` to avoid a warning. 
fn mut_if(is_mut: bool) -> Option { if is_mut { Some(quote!(mut)) } else { None } } fn get_member(params: &Parameters, field: &Field, member: &Member) -> TokenStream { let self_var = ¶ms.self_var; match (params.is_remote, field.attrs.getter()) { (false, None) => { if params.is_packed { quote!(&{#self_var.#member}) } else { quote!(&#self_var.#member) } } (true, None) => { let inner = if params.is_packed { quote!(&{#self_var.#member}) } else { quote!(&#self_var.#member) }; let ty = field.ty; quote!(_serde::__private::ser::constrain::<#ty>(#inner)) } (true, Some(getter)) => { let ty = field.ty; quote!(_serde::__private::ser::constrain::<#ty>(&#getter(#self_var))) } (false, Some(_)) => { unreachable!("getter is only allowed for remote impls"); } } } fn effective_style(variant: &Variant) -> Style { match variant.style { Style::Newtype if variant.fields[0].attrs.skip_serializing() => Style::Unit, other => other, } } enum StructTrait { SerializeMap, SerializeStruct, SerializeStructVariant, } impl StructTrait { fn serialize_field(&self, span: Span) -> TokenStream { match *self { StructTrait::SerializeMap => { quote_spanned!(span=> _serde::ser::SerializeMap::serialize_entry) } StructTrait::SerializeStruct => { quote_spanned!(span=> _serde::ser::SerializeStruct::serialize_field) } StructTrait::SerializeStructVariant => { quote_spanned!(span=> _serde::ser::SerializeStructVariant::serialize_field) } } } fn skip_field(&self, span: Span) -> Option { match *self { StructTrait::SerializeMap => None, StructTrait::SerializeStruct => { Some(quote_spanned!(span=> _serde::ser::SerializeStruct::skip_field)) } StructTrait::SerializeStructVariant => { Some(quote_spanned!(span=> _serde::ser::SerializeStructVariant::skip_field)) } } } } enum TupleTrait { SerializeTuple, SerializeTupleStruct, SerializeTupleVariant, } impl TupleTrait { fn serialize_element(&self, span: Span) -> TokenStream { match *self { TupleTrait::SerializeTuple => { quote_spanned!(span=> _serde::ser::SerializeTuple::serialize_element) } TupleTrait::SerializeTupleStruct => { quote_spanned!(span=> _serde::ser::SerializeTupleStruct::serialize_field) } TupleTrait::SerializeTupleVariant => { quote_spanned!(span=> _serde::ser::SerializeTupleVariant::serialize_field) } } } } vendor/serde_derive/src/fragment.rs0000664000175000017500000000407314160055207020253 0ustar mwhudsonmwhudsonuse proc_macro2::TokenStream; use quote::ToTokens; use syn::token; pub enum Fragment { /// Tokens that can be used as an expression. Expr(TokenStream), /// Tokens that can be used inside a block. The surrounding curly braces are /// not part of these tokens. Block(TokenStream), } macro_rules! quote_expr { ($($tt:tt)*) => { $crate::fragment::Fragment::Expr(quote!($($tt)*)) } } macro_rules! quote_block { ($($tt:tt)*) => { $crate::fragment::Fragment::Block(quote!($($tt)*)) } } /// Interpolate a fragment in place of an expression. This involves surrounding /// Block fragments in curly braces. pub struct Expr(pub Fragment); impl ToTokens for Expr { fn to_tokens(&self, out: &mut TokenStream) { match &self.0 { Fragment::Expr(expr) => expr.to_tokens(out), Fragment::Block(block) => { token::Brace::default().surround(out, |out| block.to_tokens(out)); } } } } /// Interpolate a fragment as the statements of a block. 
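///
/// For instance (an illustrative sketch): a `quote_expr!(1 + 1)` fragment
/// interpolated through `Stmts` emits `1 + 1` unchanged, while a
/// `quote_block!(let __x = 1; __x + 1)` fragment emits its statements without
/// surrounding braces, so the caller is expected to provide the enclosing
/// block.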
pub struct Stmts(pub Fragment); impl ToTokens for Stmts { fn to_tokens(&self, out: &mut TokenStream) { match &self.0 { Fragment::Expr(expr) => expr.to_tokens(out), Fragment::Block(block) => block.to_tokens(out), } } } /// Interpolate a fragment as the value part of a `match` expression. This /// involves putting a comma after expressions and curly braces around blocks. pub struct Match(pub Fragment); impl ToTokens for Match { fn to_tokens(&self, out: &mut TokenStream) { match &self.0 { Fragment::Expr(expr) => { expr.to_tokens(out); ::default().to_tokens(out); } Fragment::Block(block) => { token::Brace::default().surround(out, |out| block.to_tokens(out)); } } } } impl AsRef for Fragment { fn as_ref(&self) -> &TokenStream { match self { Fragment::Expr(expr) => expr, Fragment::Block(block) => block, } } } vendor/serde_derive/src/internals/0000775000175000017500000000000014160055207020075 5ustar mwhudsonmwhudsonvendor/serde_derive/src/internals/case.rs0000664000175000017500000001567714160055207021376 0ustar mwhudsonmwhudson//! Code to convert the Rust-styled field/variant (e.g. `my_field`, `MyType`) to the //! case of the source (e.g. `my-field`, `MY_FIELD`). // See https://users.rust-lang.org/t/psa-dealing-with-warning-unused-import-std-ascii-asciiext-in-today-s-nightly/13726 #[allow(deprecated, unused_imports)] use std::ascii::AsciiExt; use std::fmt::{self, Debug, Display}; use self::RenameRule::*; /// The different possible ways to change case of fields in a struct, or variants in an enum. #[derive(Copy, Clone, PartialEq)] pub enum RenameRule { /// Don't apply a default rename rule. None, /// Rename direct children to "lowercase" style. LowerCase, /// Rename direct children to "UPPERCASE" style. UpperCase, /// Rename direct children to "PascalCase" style, as typically used for /// enum variants. PascalCase, /// Rename direct children to "camelCase" style. CamelCase, /// Rename direct children to "snake_case" style, as commonly used for /// fields. SnakeCase, /// Rename direct children to "SCREAMING_SNAKE_CASE" style, as commonly /// used for constants. ScreamingSnakeCase, /// Rename direct children to "kebab-case" style. KebabCase, /// Rename direct children to "SCREAMING-KEBAB-CASE" style. ScreamingKebabCase, } static RENAME_RULES: &[(&str, RenameRule)] = &[ ("lowercase", LowerCase), ("UPPERCASE", UpperCase), ("PascalCase", PascalCase), ("camelCase", CamelCase), ("snake_case", SnakeCase), ("SCREAMING_SNAKE_CASE", ScreamingSnakeCase), ("kebab-case", KebabCase), ("SCREAMING-KEBAB-CASE", ScreamingKebabCase), ]; impl RenameRule { pub fn from_str(rename_all_str: &str) -> Result { for (name, rule) in RENAME_RULES { if rename_all_str == *name { return Ok(*rule); } } Err(ParseError { unknown: rename_all_str, }) } /// Apply a renaming rule to an enum variant, returning the version expected in the source. 
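///
/// For example, `SnakeCase.apply_to_variant("VeryTasty")` produces
/// `"very_tasty"` and `ScreamingKebabCase.apply_to_variant("VeryTasty")`
/// produces `"VERY-TASTY"`, matching the table exercised by the
/// `rename_variants` test below.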
pub fn apply_to_variant(&self, variant: &str) -> String { match *self { None | PascalCase => variant.to_owned(), LowerCase => variant.to_ascii_lowercase(), UpperCase => variant.to_ascii_uppercase(), CamelCase => variant[..1].to_ascii_lowercase() + &variant[1..], SnakeCase => { let mut snake = String::new(); for (i, ch) in variant.char_indices() { if i > 0 && ch.is_uppercase() { snake.push('_'); } snake.push(ch.to_ascii_lowercase()); } snake } ScreamingSnakeCase => SnakeCase.apply_to_variant(variant).to_ascii_uppercase(), KebabCase => SnakeCase.apply_to_variant(variant).replace('_', "-"), ScreamingKebabCase => ScreamingSnakeCase .apply_to_variant(variant) .replace('_', "-"), } } /// Apply a renaming rule to a struct field, returning the version expected in the source. pub fn apply_to_field(&self, field: &str) -> String { match *self { None | LowerCase | SnakeCase => field.to_owned(), UpperCase => field.to_ascii_uppercase(), PascalCase => { let mut pascal = String::new(); let mut capitalize = true; for ch in field.chars() { if ch == '_' { capitalize = true; } else if capitalize { pascal.push(ch.to_ascii_uppercase()); capitalize = false; } else { pascal.push(ch); } } pascal } CamelCase => { let pascal = PascalCase.apply_to_field(field); pascal[..1].to_ascii_lowercase() + &pascal[1..] } ScreamingSnakeCase => field.to_ascii_uppercase(), KebabCase => field.replace('_', "-"), ScreamingKebabCase => ScreamingSnakeCase.apply_to_field(field).replace('_', "-"), } } } pub struct ParseError<'a> { unknown: &'a str, } impl<'a> Display for ParseError<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str("unknown rename rule `rename_all = ")?; Debug::fmt(self.unknown, f)?; f.write_str("`, expected one of ")?; for (i, (name, _rule)) in RENAME_RULES.iter().enumerate() { if i > 0 { f.write_str(", ")?; } Debug::fmt(name, f)?; } Ok(()) } } #[test] fn rename_variants() { for &(original, lower, upper, camel, snake, screaming, kebab, screaming_kebab) in &[ ( "Outcome", "outcome", "OUTCOME", "outcome", "outcome", "OUTCOME", "outcome", "OUTCOME", ), ( "VeryTasty", "verytasty", "VERYTASTY", "veryTasty", "very_tasty", "VERY_TASTY", "very-tasty", "VERY-TASTY", ), ("A", "a", "A", "a", "a", "A", "a", "A"), ("Z42", "z42", "Z42", "z42", "z42", "Z42", "z42", "Z42"), ] { assert_eq!(None.apply_to_variant(original), original); assert_eq!(LowerCase.apply_to_variant(original), lower); assert_eq!(UpperCase.apply_to_variant(original), upper); assert_eq!(PascalCase.apply_to_variant(original), original); assert_eq!(CamelCase.apply_to_variant(original), camel); assert_eq!(SnakeCase.apply_to_variant(original), snake); assert_eq!(ScreamingSnakeCase.apply_to_variant(original), screaming); assert_eq!(KebabCase.apply_to_variant(original), kebab); assert_eq!( ScreamingKebabCase.apply_to_variant(original), screaming_kebab ); } } #[test] fn rename_fields() { for &(original, upper, pascal, camel, screaming, kebab, screaming_kebab) in &[ ( "outcome", "OUTCOME", "Outcome", "outcome", "OUTCOME", "outcome", "OUTCOME", ), ( "very_tasty", "VERY_TASTY", "VeryTasty", "veryTasty", "VERY_TASTY", "very-tasty", "VERY-TASTY", ), ("a", "A", "A", "a", "A", "a", "A"), ("z42", "Z42", "Z42", "z42", "Z42", "z42", "Z42"), ] { assert_eq!(None.apply_to_field(original), original); assert_eq!(UpperCase.apply_to_field(original), upper); assert_eq!(PascalCase.apply_to_field(original), pascal); assert_eq!(CamelCase.apply_to_field(original), camel); assert_eq!(SnakeCase.apply_to_field(original), original); 
assert_eq!(ScreamingSnakeCase.apply_to_field(original), screaming); assert_eq!(KebabCase.apply_to_field(original), kebab); assert_eq!(ScreamingKebabCase.apply_to_field(original), screaming_kebab); } } vendor/serde_derive/src/internals/mod.rs0000664000175000017500000000057614160055207021232 0ustar mwhudsonmwhudsonpub mod ast; pub mod attr; mod ctxt; pub use self::ctxt::Ctxt; mod receiver; pub use self::receiver::replace_receiver; mod case; mod check; mod respan; mod symbol; use syn::Type; #[derive(Copy, Clone)] pub enum Derive { Serialize, Deserialize, } pub fn ungroup(mut ty: &Type) -> &Type { while let Type::Group(group) = ty { ty = &group.elem; } ty } vendor/serde_derive/src/internals/attr.rs0000664000175000017500000020764714160055207021435 0ustar mwhudsonmwhudsonuse internals::respan::respan; use internals::symbol::*; use internals::{ungroup, Ctxt}; use proc_macro2::{Spacing, Span, TokenStream, TokenTree}; use quote::ToTokens; use std::borrow::Cow; use std::collections::BTreeSet; use syn; use syn::parse::{self, Parse, ParseStream}; use syn::punctuated::Punctuated; use syn::Ident; use syn::Meta::{List, NameValue, Path}; use syn::NestedMeta::{Lit, Meta}; // This module handles parsing of `#[serde(...)]` attributes. The entrypoints // are `attr::Container::from_ast`, `attr::Variant::from_ast`, and // `attr::Field::from_ast`. Each returns an instance of the corresponding // struct. Note that none of them return a Result. Unrecognized, malformed, or // duplicated attributes result in a span_err but otherwise are ignored. The // user will see errors simultaneously for all bad attributes in the crate // rather than just the first. pub use internals::case::RenameRule; struct Attr<'c, T> { cx: &'c Ctxt, name: Symbol, tokens: TokenStream, value: Option, } impl<'c, T> Attr<'c, T> { fn none(cx: &'c Ctxt, name: Symbol) -> Self { Attr { cx, name, tokens: TokenStream::new(), value: None, } } fn set(&mut self, obj: A, value: T) { let tokens = obj.into_token_stream(); if self.value.is_some() { self.cx .error_spanned_by(tokens, format!("duplicate serde attribute `{}`", self.name)); } else { self.tokens = tokens; self.value = Some(value); } } fn set_opt(&mut self, obj: A, value: Option) { if let Some(value) = value { self.set(obj, value); } } fn set_if_none(&mut self, value: T) { if self.value.is_none() { self.value = Some(value); } } fn get(self) -> Option { self.value } fn get_with_tokens(self) -> Option<(TokenStream, T)> { match self.value { Some(v) => Some((self.tokens, v)), None => None, } } } struct BoolAttr<'c>(Attr<'c, ()>); impl<'c> BoolAttr<'c> { fn none(cx: &'c Ctxt, name: Symbol) -> Self { BoolAttr(Attr::none(cx, name)) } fn set_true(&mut self, obj: A) { self.0.set(obj, ()); } fn get(&self) -> bool { self.0.value.is_some() } } struct VecAttr<'c, T> { cx: &'c Ctxt, name: Symbol, first_dup_tokens: TokenStream, values: Vec, } impl<'c, T> VecAttr<'c, T> { fn none(cx: &'c Ctxt, name: Symbol) -> Self { VecAttr { cx, name, first_dup_tokens: TokenStream::new(), values: Vec::new(), } } fn insert(&mut self, obj: A, value: T) { if self.values.len() == 1 { self.first_dup_tokens = obj.into_token_stream(); } self.values.push(value); } fn at_most_one(mut self) -> Result, ()> { if self.values.len() > 1 { let dup_token = self.first_dup_tokens; self.cx.error_spanned_by( dup_token, format!("duplicate serde attribute `{}`", self.name), ); Err(()) } else { Ok(self.values.pop()) } } fn get(self) -> Vec { self.values } } pub struct Name { serialize: String, serialize_renamed: bool, deserialize: String, 
deserialize_renamed: bool, deserialize_aliases: Vec, } #[allow(deprecated)] fn unraw(ident: &Ident) -> String { // str::trim_start_matches was added in 1.30, trim_left_matches deprecated // in 1.33. We currently support rustc back to 1.15 so we need to continue // to use the deprecated one. ident.to_string().trim_left_matches("r#").to_owned() } impl Name { fn from_attrs( source_name: String, ser_name: Attr, de_name: Attr, de_aliases: Option>, ) -> Name { let deserialize_aliases = match de_aliases { Some(de_aliases) => { let mut alias_list = BTreeSet::new(); for alias_name in de_aliases.get() { alias_list.insert(alias_name); } alias_list.into_iter().collect() } None => Vec::new(), }; let ser_name = ser_name.get(); let ser_renamed = ser_name.is_some(); let de_name = de_name.get(); let de_renamed = de_name.is_some(); Name { serialize: ser_name.unwrap_or_else(|| source_name.clone()), serialize_renamed: ser_renamed, deserialize: de_name.unwrap_or(source_name), deserialize_renamed: de_renamed, deserialize_aliases, } } /// Return the container name for the container when serializing. pub fn serialize_name(&self) -> String { self.serialize.clone() } /// Return the container name for the container when deserializing. pub fn deserialize_name(&self) -> String { self.deserialize.clone() } fn deserialize_aliases(&self) -> Vec { let mut aliases = self.deserialize_aliases.clone(); let main_name = self.deserialize_name(); if !aliases.contains(&main_name) { aliases.push(main_name); } aliases } } pub struct RenameAllRules { serialize: RenameRule, deserialize: RenameRule, } /// Represents struct or enum attribute information. pub struct Container { name: Name, transparent: bool, deny_unknown_fields: bool, default: Default, rename_all_rules: RenameAllRules, ser_bound: Option>, de_bound: Option>, tag: TagType, type_from: Option, type_try_from: Option, type_into: Option, remote: Option, identifier: Identifier, has_flatten: bool, serde_path: Option, is_packed: bool, /// Error message generated when type can't be deserialized expecting: Option, } /// Styles of representing an enum. pub enum TagType { /// The default. /// /// ```json /// {"variant1": {"key1": "value1", "key2": "value2"}} /// ``` External, /// `#[serde(tag = "type")]` /// /// ```json /// {"type": "variant1", "key1": "value1", "key2": "value2"} /// ``` Internal { tag: String }, /// `#[serde(tag = "t", content = "c")]` /// /// ```json /// {"t": "variant1", "c": {"key1": "value1", "key2": "value2"}} /// ``` Adjacent { tag: String, content: String }, /// `#[serde(untagged)]` /// /// ```json /// {"key1": "value1", "key2": "value2"} /// ``` None, } /// Whether this enum represents the fields of a struct or the variants of an /// enum. #[derive(Copy, Clone)] pub enum Identifier { /// It does not. No, /// This enum represents the fields of a struct. All of the variants must be /// unit variants, except possibly one which is annotated with /// `#[serde(other)]` and is a newtype variant. Field, /// This enum represents the variants of an enum. All of the variants must /// be unit variants. Variant, } impl Identifier { #[cfg(feature = "deserialize_in_place")] pub fn is_some(self) -> bool { match self { Identifier::No => false, Identifier::Field | Identifier::Variant => true, } } } impl Container { /// Extract out the `#[serde(...)]` attributes from an item. 
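///
/// For example (illustrative), a container written as:
///
/// ```text
/// #[derive(Serialize, Deserialize)]
/// #[serde(rename_all = "camelCase", deny_unknown_fields)]
/// struct Config { /* fields */ }
/// ```
///
/// produces a `Container` whose `rename_all_rules` hold
/// `RenameRule::CamelCase` for both serialization and deserialization and
/// whose `deny_unknown_fields()` returns `true`.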
pub fn from_ast(cx: &Ctxt, item: &syn::DeriveInput) -> Self { let mut ser_name = Attr::none(cx, RENAME); let mut de_name = Attr::none(cx, RENAME); let mut transparent = BoolAttr::none(cx, TRANSPARENT); let mut deny_unknown_fields = BoolAttr::none(cx, DENY_UNKNOWN_FIELDS); let mut default = Attr::none(cx, DEFAULT); let mut rename_all_ser_rule = Attr::none(cx, RENAME_ALL); let mut rename_all_de_rule = Attr::none(cx, RENAME_ALL); let mut ser_bound = Attr::none(cx, BOUND); let mut de_bound = Attr::none(cx, BOUND); let mut untagged = BoolAttr::none(cx, UNTAGGED); let mut internal_tag = Attr::none(cx, TAG); let mut content = Attr::none(cx, CONTENT); let mut type_from = Attr::none(cx, FROM); let mut type_try_from = Attr::none(cx, TRY_FROM); let mut type_into = Attr::none(cx, INTO); let mut remote = Attr::none(cx, REMOTE); let mut field_identifier = BoolAttr::none(cx, FIELD_IDENTIFIER); let mut variant_identifier = BoolAttr::none(cx, VARIANT_IDENTIFIER); let mut serde_path = Attr::none(cx, CRATE); let mut expecting = Attr::none(cx, EXPECTING); for meta_item in item .attrs .iter() .flat_map(|attr| get_serde_meta_items(cx, attr)) .flatten() { match &meta_item { // Parse `#[serde(rename = "foo")]` Meta(NameValue(m)) if m.path == RENAME => { if let Ok(s) = get_lit_str(cx, RENAME, &m.lit) { ser_name.set(&m.path, s.value()); de_name.set(&m.path, s.value()); } } // Parse `#[serde(rename(serialize = "foo", deserialize = "bar"))]` Meta(List(m)) if m.path == RENAME => { if let Ok((ser, de)) = get_renames(cx, &m.nested) { ser_name.set_opt(&m.path, ser.map(syn::LitStr::value)); de_name.set_opt(&m.path, de.map(syn::LitStr::value)); } } // Parse `#[serde(rename_all = "foo")]` Meta(NameValue(m)) if m.path == RENAME_ALL => { if let Ok(s) = get_lit_str(cx, RENAME_ALL, &m.lit) { match RenameRule::from_str(&s.value()) { Ok(rename_rule) => { rename_all_ser_rule.set(&m.path, rename_rule); rename_all_de_rule.set(&m.path, rename_rule); } Err(err) => cx.error_spanned_by(s, err), } } } // Parse `#[serde(rename_all(serialize = "foo", deserialize = "bar"))]` Meta(List(m)) if m.path == RENAME_ALL => { if let Ok((ser, de)) = get_renames(cx, &m.nested) { if let Some(ser) = ser { match RenameRule::from_str(&ser.value()) { Ok(rename_rule) => rename_all_ser_rule.set(&m.path, rename_rule), Err(err) => cx.error_spanned_by(ser, err), } } if let Some(de) = de { match RenameRule::from_str(&de.value()) { Ok(rename_rule) => rename_all_de_rule.set(&m.path, rename_rule), Err(err) => cx.error_spanned_by(de, err), } } } } // Parse `#[serde(transparent)]` Meta(Path(word)) if word == TRANSPARENT => { transparent.set_true(word); } // Parse `#[serde(deny_unknown_fields)]` Meta(Path(word)) if word == DENY_UNKNOWN_FIELDS => { deny_unknown_fields.set_true(word); } // Parse `#[serde(default)]` Meta(Path(word)) if word == DEFAULT => match &item.data { syn::Data::Struct(syn::DataStruct { fields, .. }) => match fields { syn::Fields::Named(_) => { default.set(word, Default::Default); } syn::Fields::Unnamed(_) | syn::Fields::Unit => cx.error_spanned_by( fields, "#[serde(default)] can only be used on structs with named fields", ), }, syn::Data::Enum(syn::DataEnum { enum_token, .. }) => cx.error_spanned_by( enum_token, "#[serde(default)] can only be used on structs with named fields", ), syn::Data::Union(syn::DataUnion { union_token, .. 
}) => cx.error_spanned_by( union_token, "#[serde(default)] can only be used on structs with named fields", ), }, // Parse `#[serde(default = "...")]` Meta(NameValue(m)) if m.path == DEFAULT => { if let Ok(path) = parse_lit_into_expr_path(cx, DEFAULT, &m.lit) { match &item.data { syn::Data::Struct(syn::DataStruct { fields, .. }) => { match fields { syn::Fields::Named(_) => { default.set(&m.path, Default::Path(path)); } syn::Fields::Unnamed(_) | syn::Fields::Unit => cx .error_spanned_by( fields, "#[serde(default = \"...\")] can only be used on structs with named fields", ), } } syn::Data::Enum(syn::DataEnum { enum_token, .. }) => cx .error_spanned_by( enum_token, "#[serde(default = \"...\")] can only be used on structs with named fields", ), syn::Data::Union(syn::DataUnion { union_token, .. }) => cx.error_spanned_by( union_token, "#[serde(default = \"...\")] can only be used on structs with named fields", ), } } } // Parse `#[serde(bound = "T: SomeBound")]` Meta(NameValue(m)) if m.path == BOUND => { if let Ok(where_predicates) = parse_lit_into_where(cx, BOUND, BOUND, &m.lit) { ser_bound.set(&m.path, where_predicates.clone()); de_bound.set(&m.path, where_predicates); } } // Parse `#[serde(bound(serialize = "...", deserialize = "..."))]` Meta(List(m)) if m.path == BOUND => { if let Ok((ser, de)) = get_where_predicates(cx, &m.nested) { ser_bound.set_opt(&m.path, ser); de_bound.set_opt(&m.path, de); } } // Parse `#[serde(untagged)]` Meta(Path(word)) if word == UNTAGGED => match item.data { syn::Data::Enum(_) => { untagged.set_true(word); } syn::Data::Struct(syn::DataStruct { struct_token, .. }) => { cx.error_spanned_by( struct_token, "#[serde(untagged)] can only be used on enums", ); } syn::Data::Union(syn::DataUnion { union_token, .. }) => { cx.error_spanned_by( union_token, "#[serde(untagged)] can only be used on enums", ); } }, // Parse `#[serde(tag = "type")]` Meta(NameValue(m)) if m.path == TAG => { if let Ok(s) = get_lit_str(cx, TAG, &m.lit) { match &item.data { syn::Data::Enum(_) => { internal_tag.set(&m.path, s.value()); } syn::Data::Struct(syn::DataStruct { fields, .. }) => match fields { syn::Fields::Named(_) => { internal_tag.set(&m.path, s.value()); } syn::Fields::Unnamed(_) | syn::Fields::Unit => { cx.error_spanned_by( fields, "#[serde(tag = \"...\")] can only be used on enums and structs with named fields", ); } }, syn::Data::Union(syn::DataUnion { union_token, .. }) => { cx.error_spanned_by( union_token, "#[serde(tag = \"...\")] can only be used on enums and structs with named fields", ); } } } } // Parse `#[serde(content = "c")]` Meta(NameValue(m)) if m.path == CONTENT => { if let Ok(s) = get_lit_str(cx, CONTENT, &m.lit) { match &item.data { syn::Data::Enum(_) => { content.set(&m.path, s.value()); } syn::Data::Struct(syn::DataStruct { struct_token, .. }) => { cx.error_spanned_by( struct_token, "#[serde(content = \"...\")] can only be used on enums", ); } syn::Data::Union(syn::DataUnion { union_token, .. 
}) => { cx.error_spanned_by( union_token, "#[serde(content = \"...\")] can only be used on enums", ); } } } } // Parse `#[serde(from = "Type")] Meta(NameValue(m)) if m.path == FROM => { if let Ok(from_ty) = parse_lit_into_ty(cx, FROM, &m.lit) { type_from.set_opt(&m.path, Some(from_ty)); } } // Parse `#[serde(try_from = "Type")] Meta(NameValue(m)) if m.path == TRY_FROM => { if let Ok(try_from_ty) = parse_lit_into_ty(cx, TRY_FROM, &m.lit) { type_try_from.set_opt(&m.path, Some(try_from_ty)); } } // Parse `#[serde(into = "Type")] Meta(NameValue(m)) if m.path == INTO => { if let Ok(into_ty) = parse_lit_into_ty(cx, INTO, &m.lit) { type_into.set_opt(&m.path, Some(into_ty)); } } // Parse `#[serde(remote = "...")]` Meta(NameValue(m)) if m.path == REMOTE => { if let Ok(path) = parse_lit_into_path(cx, REMOTE, &m.lit) { if is_primitive_path(&path, "Self") { remote.set(&m.path, item.ident.clone().into()); } else { remote.set(&m.path, path); } } } // Parse `#[serde(field_identifier)]` Meta(Path(word)) if word == FIELD_IDENTIFIER => { field_identifier.set_true(word); } // Parse `#[serde(variant_identifier)]` Meta(Path(word)) if word == VARIANT_IDENTIFIER => { variant_identifier.set_true(word); } // Parse `#[serde(crate = "foo")]` Meta(NameValue(m)) if m.path == CRATE => { if let Ok(path) = parse_lit_into_path(cx, CRATE, &m.lit) { serde_path.set(&m.path, path); } } // Parse `#[serde(expecting = "a message")]` Meta(NameValue(m)) if m.path == EXPECTING => { if let Ok(s) = get_lit_str(cx, EXPECTING, &m.lit) { expecting.set(&m.path, s.value()); } } Meta(meta_item) => { let path = meta_item .path() .into_token_stream() .to_string() .replace(' ', ""); cx.error_spanned_by( meta_item.path(), format!("unknown serde container attribute `{}`", path), ); } Lit(lit) => { cx.error_spanned_by(lit, "unexpected literal in serde container attribute"); } } } let mut is_packed = false; for attr in &item.attrs { if attr.path.is_ident("repr") { let _ = attr.parse_args_with(|input: ParseStream| { while let Some(token) = input.parse()? 
{ if let TokenTree::Ident(ident) = token { is_packed |= ident == "packed"; } } Ok(()) }); } } Container { name: Name::from_attrs(unraw(&item.ident), ser_name, de_name, None), transparent: transparent.get(), deny_unknown_fields: deny_unknown_fields.get(), default: default.get().unwrap_or(Default::None), rename_all_rules: RenameAllRules { serialize: rename_all_ser_rule.get().unwrap_or(RenameRule::None), deserialize: rename_all_de_rule.get().unwrap_or(RenameRule::None), }, ser_bound: ser_bound.get(), de_bound: de_bound.get(), tag: decide_tag(cx, item, untagged, internal_tag, content), type_from: type_from.get(), type_try_from: type_try_from.get(), type_into: type_into.get(), remote: remote.get(), identifier: decide_identifier(cx, item, field_identifier, variant_identifier), has_flatten: false, serde_path: serde_path.get(), is_packed, expecting: expecting.get(), } } pub fn name(&self) -> &Name { &self.name } pub fn rename_all_rules(&self) -> &RenameAllRules { &self.rename_all_rules } pub fn transparent(&self) -> bool { self.transparent } pub fn deny_unknown_fields(&self) -> bool { self.deny_unknown_fields } pub fn default(&self) -> &Default { &self.default } pub fn ser_bound(&self) -> Option<&[syn::WherePredicate]> { self.ser_bound.as_ref().map(|vec| &vec[..]) } pub fn de_bound(&self) -> Option<&[syn::WherePredicate]> { self.de_bound.as_ref().map(|vec| &vec[..]) } pub fn tag(&self) -> &TagType { &self.tag } pub fn type_from(&self) -> Option<&syn::Type> { self.type_from.as_ref() } pub fn type_try_from(&self) -> Option<&syn::Type> { self.type_try_from.as_ref() } pub fn type_into(&self) -> Option<&syn::Type> { self.type_into.as_ref() } pub fn remote(&self) -> Option<&syn::Path> { self.remote.as_ref() } pub fn is_packed(&self) -> bool { self.is_packed } pub fn identifier(&self) -> Identifier { self.identifier } pub fn has_flatten(&self) -> bool { self.has_flatten } pub fn mark_has_flatten(&mut self) { self.has_flatten = true; } pub fn custom_serde_path(&self) -> Option<&syn::Path> { self.serde_path.as_ref() } pub fn serde_path(&self) -> Cow { self.custom_serde_path() .map_or_else(|| Cow::Owned(parse_quote!(_serde)), Cow::Borrowed) } /// Error message generated when type can't be deserialized. /// If `None`, default message will be used pub fn expecting(&self) -> Option<&str> { self.expecting.as_ref().map(String::as_ref) } } fn decide_tag( cx: &Ctxt, item: &syn::DeriveInput, untagged: BoolAttr, internal_tag: Attr, content: Attr, ) -> TagType { match ( untagged.0.get_with_tokens(), internal_tag.get_with_tokens(), content.get_with_tokens(), ) { (None, None, None) => TagType::External, (Some(_), None, None) => TagType::None, (None, Some((_, tag)), None) => { // Check that there are no tuple variants. 
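// Tuple variants with more than one field cannot be flattened next to the
// tag, so they are rejected. A hedged sketch of an enum this arm refuses:
//
//     #[serde(tag = "type")]
//     enum Shape {
//         Point(f64, f64),        // error: tuple variant with two fields
//         Circle { radius: f64 }, // fine: struct variant
//     }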
if let syn::Data::Enum(data) = &item.data { for variant in &data.variants { match &variant.fields { syn::Fields::Named(_) | syn::Fields::Unit => {} syn::Fields::Unnamed(fields) => { if fields.unnamed.len() != 1 { cx.error_spanned_by( variant, "#[serde(tag = \"...\")] cannot be used with tuple variants", ); break; } } } } } TagType::Internal { tag } } (Some((untagged_tokens, _)), Some((tag_tokens, _)), None) => { cx.error_spanned_by( untagged_tokens, "enum cannot be both untagged and internally tagged", ); cx.error_spanned_by( tag_tokens, "enum cannot be both untagged and internally tagged", ); TagType::External // doesn't matter, will error } (None, None, Some((content_tokens, _))) => { cx.error_spanned_by( content_tokens, "#[serde(tag = \"...\", content = \"...\")] must be used together", ); TagType::External } (Some((untagged_tokens, _)), None, Some((content_tokens, _))) => { cx.error_spanned_by( untagged_tokens, "untagged enum cannot have #[serde(content = \"...\")]", ); cx.error_spanned_by( content_tokens, "untagged enum cannot have #[serde(content = \"...\")]", ); TagType::External } (None, Some((_, tag)), Some((_, content))) => TagType::Adjacent { tag, content }, (Some((untagged_tokens, _)), Some((tag_tokens, _)), Some((content_tokens, _))) => { cx.error_spanned_by( untagged_tokens, "untagged enum cannot have #[serde(tag = \"...\", content = \"...\")]", ); cx.error_spanned_by( tag_tokens, "untagged enum cannot have #[serde(tag = \"...\", content = \"...\")]", ); cx.error_spanned_by( content_tokens, "untagged enum cannot have #[serde(tag = \"...\", content = \"...\")]", ); TagType::External } } } fn decide_identifier( cx: &Ctxt, item: &syn::DeriveInput, field_identifier: BoolAttr, variant_identifier: BoolAttr, ) -> Identifier { match ( &item.data, field_identifier.0.get_with_tokens(), variant_identifier.0.get_with_tokens(), ) { (_, None, None) => Identifier::No, (_, Some((field_identifier_tokens, _)), Some((variant_identifier_tokens, _))) => { cx.error_spanned_by( field_identifier_tokens, "#[serde(field_identifier)] and #[serde(variant_identifier)] cannot both be set", ); cx.error_spanned_by( variant_identifier_tokens, "#[serde(field_identifier)] and #[serde(variant_identifier)] cannot both be set", ); Identifier::No } (syn::Data::Enum(_), Some(_), None) => Identifier::Field, (syn::Data::Enum(_), None, Some(_)) => Identifier::Variant, (syn::Data::Struct(syn::DataStruct { struct_token, .. }), Some(_), None) => { cx.error_spanned_by( struct_token, "#[serde(field_identifier)] can only be used on an enum", ); Identifier::No } (syn::Data::Union(syn::DataUnion { union_token, .. }), Some(_), None) => { cx.error_spanned_by( union_token, "#[serde(field_identifier)] can only be used on an enum", ); Identifier::No } (syn::Data::Struct(syn::DataStruct { struct_token, .. }), None, Some(_)) => { cx.error_spanned_by( struct_token, "#[serde(variant_identifier)] can only be used on an enum", ); Identifier::No } (syn::Data::Union(syn::DataUnion { union_token, .. 
}), None, Some(_)) => { cx.error_spanned_by( union_token, "#[serde(variant_identifier)] can only be used on an enum", ); Identifier::No } } } /// Represents variant attribute information pub struct Variant { name: Name, rename_all_rules: RenameAllRules, ser_bound: Option>, de_bound: Option>, skip_deserializing: bool, skip_serializing: bool, other: bool, serialize_with: Option, deserialize_with: Option, borrow: Option, } impl Variant { pub fn from_ast(cx: &Ctxt, variant: &syn::Variant) -> Self { let mut ser_name = Attr::none(cx, RENAME); let mut de_name = Attr::none(cx, RENAME); let mut de_aliases = VecAttr::none(cx, RENAME); let mut skip_deserializing = BoolAttr::none(cx, SKIP_DESERIALIZING); let mut skip_serializing = BoolAttr::none(cx, SKIP_SERIALIZING); let mut rename_all_ser_rule = Attr::none(cx, RENAME_ALL); let mut rename_all_de_rule = Attr::none(cx, RENAME_ALL); let mut ser_bound = Attr::none(cx, BOUND); let mut de_bound = Attr::none(cx, BOUND); let mut other = BoolAttr::none(cx, OTHER); let mut serialize_with = Attr::none(cx, SERIALIZE_WITH); let mut deserialize_with = Attr::none(cx, DESERIALIZE_WITH); let mut borrow = Attr::none(cx, BORROW); for meta_item in variant .attrs .iter() .flat_map(|attr| get_serde_meta_items(cx, attr)) .flatten() { match &meta_item { // Parse `#[serde(rename = "foo")]` Meta(NameValue(m)) if m.path == RENAME => { if let Ok(s) = get_lit_str(cx, RENAME, &m.lit) { ser_name.set(&m.path, s.value()); de_name.set_if_none(s.value()); de_aliases.insert(&m.path, s.value()); } } // Parse `#[serde(rename(serialize = "foo", deserialize = "bar"))]` Meta(List(m)) if m.path == RENAME => { if let Ok((ser, de)) = get_multiple_renames(cx, &m.nested) { ser_name.set_opt(&m.path, ser.map(syn::LitStr::value)); for de_value in de { de_name.set_if_none(de_value.value()); de_aliases.insert(&m.path, de_value.value()); } } } // Parse `#[serde(alias = "foo")]` Meta(NameValue(m)) if m.path == ALIAS => { if let Ok(s) = get_lit_str(cx, ALIAS, &m.lit) { de_aliases.insert(&m.path, s.value()); } } // Parse `#[serde(rename_all = "foo")]` Meta(NameValue(m)) if m.path == RENAME_ALL => { if let Ok(s) = get_lit_str(cx, RENAME_ALL, &m.lit) { match RenameRule::from_str(&s.value()) { Ok(rename_rule) => { rename_all_ser_rule.set(&m.path, rename_rule); rename_all_de_rule.set(&m.path, rename_rule); } Err(err) => cx.error_spanned_by(s, err), } } } // Parse `#[serde(rename_all(serialize = "foo", deserialize = "bar"))]` Meta(List(m)) if m.path == RENAME_ALL => { if let Ok((ser, de)) = get_renames(cx, &m.nested) { if let Some(ser) = ser { match RenameRule::from_str(&ser.value()) { Ok(rename_rule) => rename_all_ser_rule.set(&m.path, rename_rule), Err(err) => cx.error_spanned_by(ser, err), } } if let Some(de) = de { match RenameRule::from_str(&de.value()) { Ok(rename_rule) => rename_all_de_rule.set(&m.path, rename_rule), Err(err) => cx.error_spanned_by(de, err), } } } } // Parse `#[serde(skip)]` Meta(Path(word)) if word == SKIP => { skip_serializing.set_true(word); skip_deserializing.set_true(word); } // Parse `#[serde(skip_deserializing)]` Meta(Path(word)) if word == SKIP_DESERIALIZING => { skip_deserializing.set_true(word); } // Parse `#[serde(skip_serializing)]` Meta(Path(word)) if word == SKIP_SERIALIZING => { skip_serializing.set_true(word); } // Parse `#[serde(other)]` Meta(Path(word)) if word == OTHER => { other.set_true(word); } // Parse `#[serde(bound = "T: SomeBound")]` Meta(NameValue(m)) if m.path == BOUND => { if let Ok(where_predicates) = parse_lit_into_where(cx, BOUND, BOUND, &m.lit) { 
ser_bound.set(&m.path, where_predicates.clone()); de_bound.set(&m.path, where_predicates); } } // Parse `#[serde(bound(serialize = "...", deserialize = "..."))]` Meta(List(m)) if m.path == BOUND => { if let Ok((ser, de)) = get_where_predicates(cx, &m.nested) { ser_bound.set_opt(&m.path, ser); de_bound.set_opt(&m.path, de); } } // Parse `#[serde(with = "...")]` Meta(NameValue(m)) if m.path == WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, WITH, &m.lit) { let mut ser_path = path.clone(); ser_path .path .segments .push(Ident::new("serialize", Span::call_site()).into()); serialize_with.set(&m.path, ser_path); let mut de_path = path; de_path .path .segments .push(Ident::new("deserialize", Span::call_site()).into()); deserialize_with.set(&m.path, de_path); } } // Parse `#[serde(serialize_with = "...")]` Meta(NameValue(m)) if m.path == SERIALIZE_WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, SERIALIZE_WITH, &m.lit) { serialize_with.set(&m.path, path); } } // Parse `#[serde(deserialize_with = "...")]` Meta(NameValue(m)) if m.path == DESERIALIZE_WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, DESERIALIZE_WITH, &m.lit) { deserialize_with.set(&m.path, path); } } // Defer `#[serde(borrow)]` and `#[serde(borrow = "'a + 'b")]` Meta(m) if m.path() == BORROW => match &variant.fields { syn::Fields::Unnamed(fields) if fields.unnamed.len() == 1 => { borrow.set(m.path(), m.clone()); } _ => { cx.error_spanned_by( variant, "#[serde(borrow)] may only be used on newtype variants", ); } }, Meta(meta_item) => { let path = meta_item .path() .into_token_stream() .to_string() .replace(' ', ""); cx.error_spanned_by( meta_item.path(), format!("unknown serde variant attribute `{}`", path), ); } Lit(lit) => { cx.error_spanned_by(lit, "unexpected literal in serde variant attribute"); } } } Variant { name: Name::from_attrs(unraw(&variant.ident), ser_name, de_name, Some(de_aliases)), rename_all_rules: RenameAllRules { serialize: rename_all_ser_rule.get().unwrap_or(RenameRule::None), deserialize: rename_all_de_rule.get().unwrap_or(RenameRule::None), }, ser_bound: ser_bound.get(), de_bound: de_bound.get(), skip_deserializing: skip_deserializing.get(), skip_serializing: skip_serializing.get(), other: other.get(), serialize_with: serialize_with.get(), deserialize_with: deserialize_with.get(), borrow: borrow.get(), } } pub fn name(&self) -> &Name { &self.name } pub fn aliases(&self) -> Vec { self.name.deserialize_aliases() } pub fn rename_by_rules(&mut self, rules: &RenameAllRules) { if !self.name.serialize_renamed { self.name.serialize = rules.serialize.apply_to_variant(&self.name.serialize); } if !self.name.deserialize_renamed { self.name.deserialize = rules.deserialize.apply_to_variant(&self.name.deserialize); } } pub fn rename_all_rules(&self) -> &RenameAllRules { &self.rename_all_rules } pub fn ser_bound(&self) -> Option<&[syn::WherePredicate]> { self.ser_bound.as_ref().map(|vec| &vec[..]) } pub fn de_bound(&self) -> Option<&[syn::WherePredicate]> { self.de_bound.as_ref().map(|vec| &vec[..]) } pub fn skip_deserializing(&self) -> bool { self.skip_deserializing } pub fn skip_serializing(&self) -> bool { self.skip_serializing } pub fn other(&self) -> bool { self.other } pub fn serialize_with(&self) -> Option<&syn::ExprPath> { self.serialize_with.as_ref() } pub fn deserialize_with(&self) -> Option<&syn::ExprPath> { self.deserialize_with.as_ref() } } /// Represents field attribute information pub struct Field { name: Name, skip_serializing: bool, skip_deserializing: bool, skip_serializing_if: Option, 
default: Default, serialize_with: Option, deserialize_with: Option, ser_bound: Option>, de_bound: Option>, borrowed_lifetimes: BTreeSet, getter: Option, flatten: bool, transparent: bool, } /// Represents the default to use for a field when deserializing. pub enum Default { /// Field must always be specified because it does not have a default. None, /// The default is given by `std::default::Default::default()`. Default, /// The default is given by this function. Path(syn::ExprPath), } impl Default { pub fn is_none(&self) -> bool { match self { Default::None => true, Default::Default | Default::Path(_) => false, } } } impl Field { /// Extract out the `#[serde(...)]` attributes from a struct field. pub fn from_ast( cx: &Ctxt, index: usize, field: &syn::Field, attrs: Option<&Variant>, container_default: &Default, ) -> Self { let mut ser_name = Attr::none(cx, RENAME); let mut de_name = Attr::none(cx, RENAME); let mut de_aliases = VecAttr::none(cx, RENAME); let mut skip_serializing = BoolAttr::none(cx, SKIP_SERIALIZING); let mut skip_deserializing = BoolAttr::none(cx, SKIP_DESERIALIZING); let mut skip_serializing_if = Attr::none(cx, SKIP_SERIALIZING_IF); let mut default = Attr::none(cx, DEFAULT); let mut serialize_with = Attr::none(cx, SERIALIZE_WITH); let mut deserialize_with = Attr::none(cx, DESERIALIZE_WITH); let mut ser_bound = Attr::none(cx, BOUND); let mut de_bound = Attr::none(cx, BOUND); let mut borrowed_lifetimes = Attr::none(cx, BORROW); let mut getter = Attr::none(cx, GETTER); let mut flatten = BoolAttr::none(cx, FLATTEN); let ident = match &field.ident { Some(ident) => unraw(ident), None => index.to_string(), }; let variant_borrow = attrs .and_then(|variant| variant.borrow.as_ref()) .map(|borrow| Meta(borrow.clone())); for meta_item in field .attrs .iter() .flat_map(|attr| get_serde_meta_items(cx, attr)) .flatten() .chain(variant_borrow) { match &meta_item { // Parse `#[serde(rename = "foo")]` Meta(NameValue(m)) if m.path == RENAME => { if let Ok(s) = get_lit_str(cx, RENAME, &m.lit) { ser_name.set(&m.path, s.value()); de_name.set_if_none(s.value()); de_aliases.insert(&m.path, s.value()); } } // Parse `#[serde(rename(serialize = "foo", deserialize = "bar"))]` Meta(List(m)) if m.path == RENAME => { if let Ok((ser, de)) = get_multiple_renames(cx, &m.nested) { ser_name.set_opt(&m.path, ser.map(syn::LitStr::value)); for de_value in de { de_name.set_if_none(de_value.value()); de_aliases.insert(&m.path, de_value.value()); } } } // Parse `#[serde(alias = "foo")]` Meta(NameValue(m)) if m.path == ALIAS => { if let Ok(s) = get_lit_str(cx, ALIAS, &m.lit) { de_aliases.insert(&m.path, s.value()); } } // Parse `#[serde(default)]` Meta(Path(word)) if word == DEFAULT => { default.set(word, Default::Default); } // Parse `#[serde(default = "...")]` Meta(NameValue(m)) if m.path == DEFAULT => { if let Ok(path) = parse_lit_into_expr_path(cx, DEFAULT, &m.lit) { default.set(&m.path, Default::Path(path)); } } // Parse `#[serde(skip_serializing)]` Meta(Path(word)) if word == SKIP_SERIALIZING => { skip_serializing.set_true(word); } // Parse `#[serde(skip_deserializing)]` Meta(Path(word)) if word == SKIP_DESERIALIZING => { skip_deserializing.set_true(word); } // Parse `#[serde(skip)]` Meta(Path(word)) if word == SKIP => { skip_serializing.set_true(word); skip_deserializing.set_true(word); } // Parse `#[serde(skip_serializing_if = "...")]` Meta(NameValue(m)) if m.path == SKIP_SERIALIZING_IF => { if let Ok(path) = parse_lit_into_expr_path(cx, SKIP_SERIALIZING_IF, &m.lit) { skip_serializing_if.set(&m.path, path); 
} } // Parse `#[serde(serialize_with = "...")]` Meta(NameValue(m)) if m.path == SERIALIZE_WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, SERIALIZE_WITH, &m.lit) { serialize_with.set(&m.path, path); } } // Parse `#[serde(deserialize_with = "...")]` Meta(NameValue(m)) if m.path == DESERIALIZE_WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, DESERIALIZE_WITH, &m.lit) { deserialize_with.set(&m.path, path); } } // Parse `#[serde(with = "...")]` Meta(NameValue(m)) if m.path == WITH => { if let Ok(path) = parse_lit_into_expr_path(cx, WITH, &m.lit) { let mut ser_path = path.clone(); ser_path .path .segments .push(Ident::new("serialize", Span::call_site()).into()); serialize_with.set(&m.path, ser_path); let mut de_path = path; de_path .path .segments .push(Ident::new("deserialize", Span::call_site()).into()); deserialize_with.set(&m.path, de_path); } } // Parse `#[serde(bound = "T: SomeBound")]` Meta(NameValue(m)) if m.path == BOUND => { if let Ok(where_predicates) = parse_lit_into_where(cx, BOUND, BOUND, &m.lit) { ser_bound.set(&m.path, where_predicates.clone()); de_bound.set(&m.path, where_predicates); } } // Parse `#[serde(bound(serialize = "...", deserialize = "..."))]` Meta(List(m)) if m.path == BOUND => { if let Ok((ser, de)) = get_where_predicates(cx, &m.nested) { ser_bound.set_opt(&m.path, ser); de_bound.set_opt(&m.path, de); } } // Parse `#[serde(borrow)]` Meta(Path(word)) if word == BORROW => { if let Ok(borrowable) = borrowable_lifetimes(cx, &ident, field) { borrowed_lifetimes.set(word, borrowable); } } // Parse `#[serde(borrow = "'a + 'b")]` Meta(NameValue(m)) if m.path == BORROW => { if let Ok(lifetimes) = parse_lit_into_lifetimes(cx, BORROW, &m.lit) { if let Ok(borrowable) = borrowable_lifetimes(cx, &ident, field) { for lifetime in &lifetimes { if !borrowable.contains(lifetime) { cx.error_spanned_by( field, format!( "field `{}` does not have lifetime {}", ident, lifetime ), ); } } borrowed_lifetimes.set(&m.path, lifetimes); } } } // Parse `#[serde(getter = "...")]` Meta(NameValue(m)) if m.path == GETTER => { if let Ok(path) = parse_lit_into_expr_path(cx, GETTER, &m.lit) { getter.set(&m.path, path); } } // Parse `#[serde(flatten)]` Meta(Path(word)) if word == FLATTEN => { flatten.set_true(word); } Meta(meta_item) => { let path = meta_item .path() .into_token_stream() .to_string() .replace(' ', ""); cx.error_spanned_by( meta_item.path(), format!("unknown serde field attribute `{}`", path), ); } Lit(lit) => { cx.error_spanned_by(lit, "unexpected literal in serde field attribute"); } } } // Is skip_deserializing, initialize the field to Default::default() unless a // different default is specified by `#[serde(default = "...")]` on // ourselves or our container (e.g. the struct we are in). 
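// As a hypothetical example: a field declared as
//
//     #[serde(skip_deserializing)]
//     cache: Cache,
//
// with no `default` attribute on the field or its container is filled with
// `Default::default()` when deserializing.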
if let Default::None = *container_default { if skip_deserializing.0.value.is_some() { default.set_if_none(Default::Default); } } let mut borrowed_lifetimes = borrowed_lifetimes.get().unwrap_or_default(); if !borrowed_lifetimes.is_empty() { // Cow and Cow<[u8]> never borrow by default: // // impl<'de, 'a, T: ?Sized> Deserialize<'de> for Cow<'a, T> // // A #[serde(borrow)] attribute enables borrowing that corresponds // roughly to these impls: // // impl<'de: 'a, 'a> Deserialize<'de> for Cow<'a, str> // impl<'de: 'a, 'a> Deserialize<'de> for Cow<'a, [u8]> if is_cow(&field.ty, is_str) { let mut path = syn::Path { leading_colon: None, segments: Punctuated::new(), }; let span = Span::call_site(); path.segments.push(Ident::new("_serde", span).into()); path.segments.push(Ident::new("__private", span).into()); path.segments.push(Ident::new("de", span).into()); path.segments .push(Ident::new("borrow_cow_str", span).into()); let expr = syn::ExprPath { attrs: Vec::new(), qself: None, path, }; deserialize_with.set_if_none(expr); } else if is_cow(&field.ty, is_slice_u8) { let mut path = syn::Path { leading_colon: None, segments: Punctuated::new(), }; let span = Span::call_site(); path.segments.push(Ident::new("_serde", span).into()); path.segments.push(Ident::new("__private", span).into()); path.segments.push(Ident::new("de", span).into()); path.segments .push(Ident::new("borrow_cow_bytes", span).into()); let expr = syn::ExprPath { attrs: Vec::new(), qself: None, path, }; deserialize_with.set_if_none(expr); } } else if is_implicitly_borrowed(&field.ty) { // Types &str and &[u8] are always implicitly borrowed. No need for // a #[serde(borrow)]. collect_lifetimes(&field.ty, &mut borrowed_lifetimes); } Field { name: Name::from_attrs(ident, ser_name, de_name, Some(de_aliases)), skip_serializing: skip_serializing.get(), skip_deserializing: skip_deserializing.get(), skip_serializing_if: skip_serializing_if.get(), default: default.get().unwrap_or(Default::None), serialize_with: serialize_with.get(), deserialize_with: deserialize_with.get(), ser_bound: ser_bound.get(), de_bound: de_bound.get(), borrowed_lifetimes, getter: getter.get(), flatten: flatten.get(), transparent: false, } } pub fn name(&self) -> &Name { &self.name } pub fn aliases(&self) -> Vec { self.name.deserialize_aliases() } pub fn rename_by_rules(&mut self, rules: &RenameAllRules) { if !self.name.serialize_renamed { self.name.serialize = rules.serialize.apply_to_field(&self.name.serialize); } if !self.name.deserialize_renamed { self.name.deserialize = rules.deserialize.apply_to_field(&self.name.deserialize); } } pub fn skip_serializing(&self) -> bool { self.skip_serializing } pub fn skip_deserializing(&self) -> bool { self.skip_deserializing } pub fn skip_serializing_if(&self) -> Option<&syn::ExprPath> { self.skip_serializing_if.as_ref() } pub fn default(&self) -> &Default { &self.default } pub fn serialize_with(&self) -> Option<&syn::ExprPath> { self.serialize_with.as_ref() } pub fn deserialize_with(&self) -> Option<&syn::ExprPath> { self.deserialize_with.as_ref() } pub fn ser_bound(&self) -> Option<&[syn::WherePredicate]> { self.ser_bound.as_ref().map(|vec| &vec[..]) } pub fn de_bound(&self) -> Option<&[syn::WherePredicate]> { self.de_bound.as_ref().map(|vec| &vec[..]) } pub fn borrowed_lifetimes(&self) -> &BTreeSet { &self.borrowed_lifetimes } pub fn getter(&self) -> Option<&syn::ExprPath> { self.getter.as_ref() } pub fn flatten(&self) -> bool { self.flatten } pub fn transparent(&self) -> bool { self.transparent } pub fn mark_transparent(&mut 
self) { self.transparent = true; } } type SerAndDe = (Option, Option); fn get_ser_and_de<'a, 'b, T, F>( cx: &'b Ctxt, attr_name: Symbol, metas: &'a Punctuated, f: F, ) -> Result<(VecAttr<'b, T>, VecAttr<'b, T>), ()> where T: 'a, F: Fn(&Ctxt, Symbol, Symbol, &'a syn::Lit) -> Result, { let mut ser_meta = VecAttr::none(cx, attr_name); let mut de_meta = VecAttr::none(cx, attr_name); for meta in metas { match meta { Meta(NameValue(meta)) if meta.path == SERIALIZE => { if let Ok(v) = f(cx, attr_name, SERIALIZE, &meta.lit) { ser_meta.insert(&meta.path, v); } } Meta(NameValue(meta)) if meta.path == DESERIALIZE => { if let Ok(v) = f(cx, attr_name, DESERIALIZE, &meta.lit) { de_meta.insert(&meta.path, v); } } _ => { cx.error_spanned_by( meta, format!( "malformed {0} attribute, expected `{0}(serialize = ..., deserialize = ...)`", attr_name ), ); return Err(()); } } } Ok((ser_meta, de_meta)) } fn get_renames<'a>( cx: &Ctxt, items: &'a Punctuated, ) -> Result, ()> { let (ser, de) = get_ser_and_de(cx, RENAME, items, get_lit_str2)?; Ok((ser.at_most_one()?, de.at_most_one()?)) } fn get_multiple_renames<'a>( cx: &Ctxt, items: &'a Punctuated, ) -> Result<(Option<&'a syn::LitStr>, Vec<&'a syn::LitStr>), ()> { let (ser, de) = get_ser_and_de(cx, RENAME, items, get_lit_str2)?; Ok((ser.at_most_one()?, de.get())) } fn get_where_predicates( cx: &Ctxt, items: &Punctuated, ) -> Result>, ()> { let (ser, de) = get_ser_and_de(cx, BOUND, items, parse_lit_into_where)?; Ok((ser.at_most_one()?, de.at_most_one()?)) } pub fn get_serde_meta_items(cx: &Ctxt, attr: &syn::Attribute) -> Result, ()> { if attr.path != SERDE { return Ok(Vec::new()); } match attr.parse_meta() { Ok(List(meta)) => Ok(meta.nested.into_iter().collect()), Ok(other) => { cx.error_spanned_by(other, "expected #[serde(...)]"); Err(()) } Err(err) => { cx.syn_error(err); Err(()) } } } fn get_lit_str<'a>(cx: &Ctxt, attr_name: Symbol, lit: &'a syn::Lit) -> Result<&'a syn::LitStr, ()> { get_lit_str2(cx, attr_name, attr_name, lit) } fn get_lit_str2<'a>( cx: &Ctxt, attr_name: Symbol, meta_item_name: Symbol, lit: &'a syn::Lit, ) -> Result<&'a syn::LitStr, ()> { if let syn::Lit::Str(lit) = lit { Ok(lit) } else { cx.error_spanned_by( lit, format!( "expected serde {} attribute to be a string: `{} = \"...\"`", attr_name, meta_item_name ), ); Err(()) } } fn parse_lit_into_path(cx: &Ctxt, attr_name: Symbol, lit: &syn::Lit) -> Result { let string = get_lit_str(cx, attr_name, lit)?; parse_lit_str(string).map_err(|_| { cx.error_spanned_by(lit, format!("failed to parse path: {:?}", string.value())); }) } fn parse_lit_into_expr_path( cx: &Ctxt, attr_name: Symbol, lit: &syn::Lit, ) -> Result { let string = get_lit_str(cx, attr_name, lit)?; parse_lit_str(string).map_err(|_| { cx.error_spanned_by(lit, format!("failed to parse path: {:?}", string.value())); }) } fn parse_lit_into_where( cx: &Ctxt, attr_name: Symbol, meta_item_name: Symbol, lit: &syn::Lit, ) -> Result, ()> { let string = get_lit_str2(cx, attr_name, meta_item_name, lit)?; if string.value().is_empty() { return Ok(Vec::new()); } let where_string = syn::LitStr::new(&format!("where {}", string.value()), string.span()); parse_lit_str::(&where_string) .map(|wh| wh.predicates.into_iter().collect()) .map_err(|err| cx.error_spanned_by(lit, err)) } fn parse_lit_into_ty(cx: &Ctxt, attr_name: Symbol, lit: &syn::Lit) -> Result { let string = get_lit_str(cx, attr_name, lit)?; parse_lit_str(string).map_err(|_| { cx.error_spanned_by( lit, format!("failed to parse type: {} = {:?}", attr_name, string.value()), ); }) } // Parses a 
string literal like "'a + 'b + 'c" containing a nonempty list of // lifetimes separated by `+`. fn parse_lit_into_lifetimes( cx: &Ctxt, attr_name: Symbol, lit: &syn::Lit, ) -> Result, ()> { let string = get_lit_str(cx, attr_name, lit)?; if string.value().is_empty() { cx.error_spanned_by(lit, "at least one lifetime must be borrowed"); return Err(()); } struct BorrowedLifetimes(Punctuated); impl Parse for BorrowedLifetimes { fn parse(input: ParseStream) -> parse::Result { Punctuated::parse_separated_nonempty(input).map(BorrowedLifetimes) } } if let Ok(BorrowedLifetimes(lifetimes)) = parse_lit_str(string) { let mut set = BTreeSet::new(); for lifetime in lifetimes { if !set.insert(lifetime.clone()) { cx.error_spanned_by(lit, format!("duplicate borrowed lifetime `{}`", lifetime)); } } return Ok(set); } cx.error_spanned_by( lit, format!("failed to parse borrowed lifetimes: {:?}", string.value()), ); Err(()) } fn is_implicitly_borrowed(ty: &syn::Type) -> bool { is_implicitly_borrowed_reference(ty) || is_option(ty, is_implicitly_borrowed_reference) } fn is_implicitly_borrowed_reference(ty: &syn::Type) -> bool { is_reference(ty, is_str) || is_reference(ty, is_slice_u8) } // Whether the type looks like it might be `std::borrow::Cow` where elem="T". // This can have false negatives and false positives. // // False negative: // // use std::borrow::Cow as Pig; // // #[derive(Deserialize)] // struct S<'a> { // #[serde(borrow)] // pig: Pig<'a, str>, // } // // False positive: // // type str = [i16]; // // #[derive(Deserialize)] // struct S<'a> { // #[serde(borrow)] // cow: Cow<'a, str>, // } fn is_cow(ty: &syn::Type, elem: fn(&syn::Type) -> bool) -> bool { let path = match ungroup(ty) { syn::Type::Path(ty) => &ty.path, _ => { return false; } }; let seg = match path.segments.last() { Some(seg) => seg, None => { return false; } }; let args = match &seg.arguments { syn::PathArguments::AngleBracketed(bracketed) => &bracketed.args, _ => { return false; } }; seg.ident == "Cow" && args.len() == 2 && match (&args[0], &args[1]) { (syn::GenericArgument::Lifetime(_), syn::GenericArgument::Type(arg)) => elem(arg), _ => false, } } fn is_option(ty: &syn::Type, elem: fn(&syn::Type) -> bool) -> bool { let path = match ungroup(ty) { syn::Type::Path(ty) => &ty.path, _ => { return false; } }; let seg = match path.segments.last() { Some(seg) => seg, None => { return false; } }; let args = match &seg.arguments { syn::PathArguments::AngleBracketed(bracketed) => &bracketed.args, _ => { return false; } }; seg.ident == "Option" && args.len() == 1 && match &args[0] { syn::GenericArgument::Type(arg) => elem(arg), _ => false, } } // Whether the type looks like it might be `&T` where elem="T". This can have // false negatives and false positives. 
// // False negative: // // type Yarn = str; // // #[derive(Deserialize)] // struct S<'a> { // r: &'a Yarn, // } // // False positive: // // type str = [i16]; // // #[derive(Deserialize)] // struct S<'a> { // r: &'a str, // } fn is_reference(ty: &syn::Type, elem: fn(&syn::Type) -> bool) -> bool { match ungroup(ty) { syn::Type::Reference(ty) => ty.mutability.is_none() && elem(&ty.elem), _ => false, } } fn is_str(ty: &syn::Type) -> bool { is_primitive_type(ty, "str") } fn is_slice_u8(ty: &syn::Type) -> bool { match ungroup(ty) { syn::Type::Slice(ty) => is_primitive_type(&ty.elem, "u8"), _ => false, } } fn is_primitive_type(ty: &syn::Type, primitive: &str) -> bool { match ungroup(ty) { syn::Type::Path(ty) => ty.qself.is_none() && is_primitive_path(&ty.path, primitive), _ => false, } } fn is_primitive_path(path: &syn::Path, primitive: &str) -> bool { path.leading_colon.is_none() && path.segments.len() == 1 && path.segments[0].ident == primitive && path.segments[0].arguments.is_empty() } // All lifetimes that this type could borrow from a Deserializer. // // For example a type `S<'a, 'b>` could borrow `'a` and `'b`. On the other hand // a type `for<'a> fn(&'a str)` could not borrow `'a` from the Deserializer. // // This is used when there is an explicit or implicit `#[serde(borrow)]` // attribute on the field so there must be at least one borrowable lifetime. fn borrowable_lifetimes( cx: &Ctxt, name: &str, field: &syn::Field, ) -> Result, ()> { let mut lifetimes = BTreeSet::new(); collect_lifetimes(&field.ty, &mut lifetimes); if lifetimes.is_empty() { cx.error_spanned_by( field, format!("field `{}` has no lifetimes to borrow", name), ); Err(()) } else { Ok(lifetimes) } } fn collect_lifetimes(ty: &syn::Type, out: &mut BTreeSet) { match ty { syn::Type::Slice(ty) => { collect_lifetimes(&ty.elem, out); } syn::Type::Array(ty) => { collect_lifetimes(&ty.elem, out); } syn::Type::Ptr(ty) => { collect_lifetimes(&ty.elem, out); } syn::Type::Reference(ty) => { out.extend(ty.lifetime.iter().cloned()); collect_lifetimes(&ty.elem, out); } syn::Type::Tuple(ty) => { for elem in &ty.elems { collect_lifetimes(elem, out); } } syn::Type::Path(ty) => { if let Some(qself) = &ty.qself { collect_lifetimes(&qself.ty, out); } for seg in &ty.path.segments { if let syn::PathArguments::AngleBracketed(bracketed) = &seg.arguments { for arg in &bracketed.args { match arg { syn::GenericArgument::Lifetime(lifetime) => { out.insert(lifetime.clone()); } syn::GenericArgument::Type(ty) => { collect_lifetimes(ty, out); } syn::GenericArgument::Binding(binding) => { collect_lifetimes(&binding.ty, out); } syn::GenericArgument::Constraint(_) | syn::GenericArgument::Const(_) => {} } } } } } syn::Type::Paren(ty) => { collect_lifetimes(&ty.elem, out); } syn::Type::Group(ty) => { collect_lifetimes(&ty.elem, out); } syn::Type::Macro(ty) => { collect_lifetimes_from_tokens(ty.mac.tokens.clone(), out); } syn::Type::BareFn(_) | syn::Type::Never(_) | syn::Type::TraitObject(_) | syn::Type::ImplTrait(_) | syn::Type::Infer(_) | syn::Type::Verbatim(_) => {} #[cfg(test)] syn::Type::__TestExhaustive(_) => unimplemented!(), #[cfg(not(test))] _ => {} } } fn collect_lifetimes_from_tokens(tokens: TokenStream, out: &mut BTreeSet) { let mut iter = tokens.into_iter(); while let Some(tt) = iter.next() { match &tt { TokenTree::Punct(op) if op.as_char() == '\'' && op.spacing() == Spacing::Joint => { if let Some(TokenTree::Ident(ident)) = iter.next() { out.insert(syn::Lifetime { apostrophe: op.span(), ident, }); } } TokenTree::Group(group) => { let tokens = 
group.stream(); collect_lifetimes_from_tokens(tokens, out); } _ => {} } } } fn parse_lit_str(s: &syn::LitStr) -> parse::Result where T: Parse, { let tokens = spanned_tokens(s)?; syn::parse2(tokens) } fn spanned_tokens(s: &syn::LitStr) -> parse::Result { let stream = syn::parse_str(&s.value())?; Ok(respan(stream, s.span())) } vendor/serde_derive/src/internals/receiver.rs0000664000175000017500000002357614160055207022264 0ustar mwhudsonmwhudsonuse internals::respan::respan; use proc_macro2::Span; use quote::ToTokens; use std::mem; use syn::punctuated::Punctuated; use syn::{ parse_quote, Data, DeriveInput, Expr, ExprPath, GenericArgument, GenericParam, Generics, Macro, Path, PathArguments, QSelf, ReturnType, Type, TypeParamBound, TypePath, WherePredicate, }; pub fn replace_receiver(input: &mut DeriveInput) { let self_ty = { let ident = &input.ident; let ty_generics = input.generics.split_for_impl().1; parse_quote!(#ident #ty_generics) }; let mut visitor = ReplaceReceiver(&self_ty); visitor.visit_generics_mut(&mut input.generics); visitor.visit_data_mut(&mut input.data); } struct ReplaceReceiver<'a>(&'a TypePath); impl ReplaceReceiver<'_> { fn self_ty(&self, span: Span) -> TypePath { let tokens = self.0.to_token_stream(); let respanned = respan(tokens, span); syn::parse2(respanned).unwrap() } fn self_to_qself(&self, qself: &mut Option, path: &mut Path) { if path.leading_colon.is_some() || path.segments[0].ident != "Self" { return; } if path.segments.len() == 1 { self.self_to_expr_path(path); return; } let span = path.segments[0].ident.span(); *qself = Some(QSelf { lt_token: Token![<](span), ty: Box::new(Type::Path(self.self_ty(span))), position: 0, as_token: None, gt_token: Token![>](span), }); path.leading_colon = Some(**path.segments.pairs().next().unwrap().punct().unwrap()); let segments = mem::replace(&mut path.segments, Punctuated::new()); path.segments = segments.into_pairs().skip(1).collect(); } fn self_to_expr_path(&self, path: &mut Path) { let self_ty = self.self_ty(path.segments[0].ident.span()); let variant = mem::replace(path, self_ty.path); for segment in &mut path.segments { if let PathArguments::AngleBracketed(bracketed) = &mut segment.arguments { if bracketed.colon2_token.is_none() && !bracketed.args.is_empty() { bracketed.colon2_token = Some(::default()); } } } if variant.segments.len() > 1 { path.segments.push_punct(::default()); path.segments.extend(variant.segments.into_pairs().skip(1)); } } } impl ReplaceReceiver<'_> { // `Self` -> `Receiver` fn visit_type_mut(&mut self, ty: &mut Type) { let span = if let Type::Path(node) = ty { if node.qself.is_none() && node.path.is_ident("Self") { node.path.segments[0].ident.span() } else { self.visit_type_path_mut(node); return; } } else { self.visit_type_mut_impl(ty); return; }; *ty = self.self_ty(span).into(); } // `Self::Assoc` -> `::Assoc` fn visit_type_path_mut(&mut self, ty: &mut TypePath) { if ty.qself.is_none() { self.self_to_qself(&mut ty.qself, &mut ty.path); } self.visit_type_path_mut_impl(ty); } // `Self::method` -> `::method` fn visit_expr_path_mut(&mut self, expr: &mut ExprPath) { if expr.qself.is_none() { self.self_to_qself(&mut expr.qself, &mut expr.path); } self.visit_expr_path_mut_impl(expr); } // Everything below is simply traversing the syntax tree. 
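// ---------------------------------------------------------------------------
// Illustrative aside (not part of the vendored crate): a minimal sketch of the
// `Self` -> receiver rewrite described above, written against syn's `VisitMut`
// rather than the hand-rolled visitor used here. It assumes syn is built with
// the `visit-mut`, `parsing` and `printing` features, and the names
// (`ReplaceSelf`, `Node<T>`) are made up for the example. It only handles bare
// `Self` in type position; the real visitor above also rewrites `Self::Assoc`
// type paths and `Self::method` expression paths.
#[cfg(test)]
mod replace_self_sketch {
    use quote::quote;
    use syn::visit_mut::{self, VisitMut};
    use syn::{parse_quote, Type};

    // Replaces every bare `Self` type with a concrete receiver type.
    struct ReplaceSelf(Type);

    impl VisitMut for ReplaceSelf {
        fn visit_type_mut(&mut self, ty: &mut Type) {
            if let Type::Path(p) = ty {
                if p.qself.is_none() && p.path.is_ident("Self") {
                    *ty = self.0.clone();
                    return;
                }
            }
            // Otherwise keep walking into nested types (Option<...>, Box<...>, tuples, ...).
            visit_mut::visit_type_mut(self, ty);
        }
    }

    #[test]
    fn rewrites_bare_self() {
        let mut ty: Type = parse_quote!(Option<Box<Self>>);
        ReplaceSelf(parse_quote!(Node<T>)).visit_type_mut(&mut ty);
        assert_eq!(
            quote!(#ty).to_string(),
            quote!(Option<Box<Node<T>>>).to_string(),
        );
    }
}
// ---------------------------------------------------------------------------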
fn visit_type_mut_impl(&mut self, ty: &mut Type) { match ty { Type::Array(ty) => { self.visit_type_mut(&mut ty.elem); self.visit_expr_mut(&mut ty.len); } Type::BareFn(ty) => { for arg in &mut ty.inputs { self.visit_type_mut(&mut arg.ty); } self.visit_return_type_mut(&mut ty.output); } Type::Group(ty) => self.visit_type_mut(&mut ty.elem), Type::ImplTrait(ty) => { for bound in &mut ty.bounds { self.visit_type_param_bound_mut(bound); } } Type::Macro(ty) => self.visit_macro_mut(&mut ty.mac), Type::Paren(ty) => self.visit_type_mut(&mut ty.elem), Type::Path(ty) => { if let Some(qself) = &mut ty.qself { self.visit_type_mut(&mut qself.ty); } self.visit_path_mut(&mut ty.path); } Type::Ptr(ty) => self.visit_type_mut(&mut ty.elem), Type::Reference(ty) => self.visit_type_mut(&mut ty.elem), Type::Slice(ty) => self.visit_type_mut(&mut ty.elem), Type::TraitObject(ty) => { for bound in &mut ty.bounds { self.visit_type_param_bound_mut(bound); } } Type::Tuple(ty) => { for elem in &mut ty.elems { self.visit_type_mut(elem); } } Type::Infer(_) | Type::Never(_) | Type::Verbatim(_) => {} #[cfg(test)] Type::__TestExhaustive(_) => unimplemented!(), #[cfg(not(test))] _ => {} } } fn visit_type_path_mut_impl(&mut self, ty: &mut TypePath) { if let Some(qself) = &mut ty.qself { self.visit_type_mut(&mut qself.ty); } self.visit_path_mut(&mut ty.path); } fn visit_expr_path_mut_impl(&mut self, expr: &mut ExprPath) { if let Some(qself) = &mut expr.qself { self.visit_type_mut(&mut qself.ty); } self.visit_path_mut(&mut expr.path); } fn visit_path_mut(&mut self, path: &mut Path) { for segment in &mut path.segments { self.visit_path_arguments_mut(&mut segment.arguments); } } fn visit_path_arguments_mut(&mut self, arguments: &mut PathArguments) { match arguments { PathArguments::None => {} PathArguments::AngleBracketed(arguments) => { for arg in &mut arguments.args { match arg { GenericArgument::Type(arg) => self.visit_type_mut(arg), GenericArgument::Binding(arg) => self.visit_type_mut(&mut arg.ty), GenericArgument::Lifetime(_) | GenericArgument::Constraint(_) | GenericArgument::Const(_) => {} } } } PathArguments::Parenthesized(arguments) => { for argument in &mut arguments.inputs { self.visit_type_mut(argument); } self.visit_return_type_mut(&mut arguments.output); } } } fn visit_return_type_mut(&mut self, return_type: &mut ReturnType) { match return_type { ReturnType::Default => {} ReturnType::Type(_, output) => self.visit_type_mut(output), } } fn visit_type_param_bound_mut(&mut self, bound: &mut TypeParamBound) { match bound { TypeParamBound::Trait(bound) => self.visit_path_mut(&mut bound.path), TypeParamBound::Lifetime(_) => {} } } fn visit_generics_mut(&mut self, generics: &mut Generics) { for param in &mut generics.params { match param { GenericParam::Type(param) => { for bound in &mut param.bounds { self.visit_type_param_bound_mut(bound); } } GenericParam::Lifetime(_) | GenericParam::Const(_) => {} } } if let Some(where_clause) = &mut generics.where_clause { for predicate in &mut where_clause.predicates { match predicate { WherePredicate::Type(predicate) => { self.visit_type_mut(&mut predicate.bounded_ty); for bound in &mut predicate.bounds { self.visit_type_param_bound_mut(bound); } } WherePredicate::Lifetime(_) | WherePredicate::Eq(_) => {} } } } } fn visit_data_mut(&mut self, data: &mut Data) { match data { Data::Struct(data) => { for field in &mut data.fields { self.visit_type_mut(&mut field.ty); } } Data::Enum(data) => { for variant in &mut data.variants { for field in &mut variant.fields { self.visit_type_mut(&mut 
field.ty); } } } Data::Union(_) => {} } } fn visit_expr_mut(&mut self, expr: &mut Expr) { match expr { Expr::Binary(expr) => { self.visit_expr_mut(&mut expr.left); self.visit_expr_mut(&mut expr.right); } Expr::Call(expr) => { self.visit_expr_mut(&mut expr.func); for arg in &mut expr.args { self.visit_expr_mut(arg); } } Expr::Cast(expr) => { self.visit_expr_mut(&mut expr.expr); self.visit_type_mut(&mut expr.ty); } Expr::Field(expr) => self.visit_expr_mut(&mut expr.base), Expr::Index(expr) => { self.visit_expr_mut(&mut expr.expr); self.visit_expr_mut(&mut expr.index); } Expr::Paren(expr) => self.visit_expr_mut(&mut expr.expr), Expr::Path(expr) => self.visit_expr_path_mut(expr), Expr::Unary(expr) => self.visit_expr_mut(&mut expr.expr), _ => {} } } fn visit_macro_mut(&mut self, _mac: &mut Macro) {} } vendor/serde_derive/src/internals/symbol.rs0000664000175000017500000000446214160055207021756 0ustar mwhudsonmwhudsonuse std::fmt::{self, Display}; use syn::{Ident, Path}; #[derive(Copy, Clone)] pub struct Symbol(&'static str); pub const ALIAS: Symbol = Symbol("alias"); pub const BORROW: Symbol = Symbol("borrow"); pub const BOUND: Symbol = Symbol("bound"); pub const CONTENT: Symbol = Symbol("content"); pub const CRATE: Symbol = Symbol("crate"); pub const DEFAULT: Symbol = Symbol("default"); pub const DENY_UNKNOWN_FIELDS: Symbol = Symbol("deny_unknown_fields"); pub const DESERIALIZE: Symbol = Symbol("deserialize"); pub const DESERIALIZE_WITH: Symbol = Symbol("deserialize_with"); pub const FIELD_IDENTIFIER: Symbol = Symbol("field_identifier"); pub const FLATTEN: Symbol = Symbol("flatten"); pub const FROM: Symbol = Symbol("from"); pub const GETTER: Symbol = Symbol("getter"); pub const INTO: Symbol = Symbol("into"); pub const OTHER: Symbol = Symbol("other"); pub const REMOTE: Symbol = Symbol("remote"); pub const RENAME: Symbol = Symbol("rename"); pub const RENAME_ALL: Symbol = Symbol("rename_all"); pub const SERDE: Symbol = Symbol("serde"); pub const SERIALIZE: Symbol = Symbol("serialize"); pub const SERIALIZE_WITH: Symbol = Symbol("serialize_with"); pub const SKIP: Symbol = Symbol("skip"); pub const SKIP_DESERIALIZING: Symbol = Symbol("skip_deserializing"); pub const SKIP_SERIALIZING: Symbol = Symbol("skip_serializing"); pub const SKIP_SERIALIZING_IF: Symbol = Symbol("skip_serializing_if"); pub const TAG: Symbol = Symbol("tag"); pub const TRANSPARENT: Symbol = Symbol("transparent"); pub const TRY_FROM: Symbol = Symbol("try_from"); pub const UNTAGGED: Symbol = Symbol("untagged"); pub const VARIANT_IDENTIFIER: Symbol = Symbol("variant_identifier"); pub const WITH: Symbol = Symbol("with"); pub const EXPECTING: Symbol = Symbol("expecting"); impl PartialEq for Ident { fn eq(&self, word: &Symbol) -> bool { self == word.0 } } impl<'a> PartialEq for &'a Ident { fn eq(&self, word: &Symbol) -> bool { *self == word.0 } } impl PartialEq for Path { fn eq(&self, word: &Symbol) -> bool { self.is_ident(word.0) } } impl<'a> PartialEq for &'a Path { fn eq(&self, word: &Symbol) -> bool { self.is_ident(word.0) } } impl Display for Symbol { fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { formatter.write_str(self.0) } } vendor/serde_derive/src/internals/respan.rs0000664000175000017500000000070314160055207021733 0ustar mwhudsonmwhudsonuse proc_macro2::{Group, Span, TokenStream, TokenTree}; pub(crate) fn respan(stream: TokenStream, span: Span) -> TokenStream { stream .into_iter() .map(|token| respan_token(token, span)) .collect() } fn respan_token(mut token: TokenTree, span: Span) -> TokenTree { if let 
TokenTree::Group(g) = &mut token { *g = Group::new(g.delimiter(), respan(g.stream(), span)); } token.set_span(span); token } vendor/serde_derive/src/internals/check.rs0000664000175000017500000003357214160055207021532 0ustar mwhudsonmwhudsonuse internals::ast::{Container, Data, Field, Style}; use internals::attr::{Identifier, TagType}; use internals::{ungroup, Ctxt, Derive}; use syn::{Member, Type}; /// Cross-cutting checks that require looking at more than a single attrs /// object. Simpler checks should happen when parsing and building the attrs. pub fn check(cx: &Ctxt, cont: &mut Container, derive: Derive) { check_getter(cx, cont); check_flatten(cx, cont); check_identifier(cx, cont); check_variant_skip_attrs(cx, cont); check_internal_tag_field_name_conflict(cx, cont); check_adjacent_tag_conflict(cx, cont); check_transparent(cx, cont, derive); check_from_and_try_from(cx, cont); } /// Getters are only allowed inside structs (not enums) with the `remote` /// attribute. fn check_getter(cx: &Ctxt, cont: &Container) { match cont.data { Data::Enum(_) => { if cont.data.has_getter() { cx.error_spanned_by( cont.original, "#[serde(getter = \"...\")] is not allowed in an enum", ); } } Data::Struct(_, _) => { if cont.data.has_getter() && cont.attrs.remote().is_none() { cx.error_spanned_by( cont.original, "#[serde(getter = \"...\")] can only be used in structs that have #[serde(remote = \"...\")]", ); } } } } /// Flattening has some restrictions we can test. fn check_flatten(cx: &Ctxt, cont: &Container) { match &cont.data { Data::Enum(variants) => { for variant in variants { for field in &variant.fields { check_flatten_field(cx, variant.style, field); } } } Data::Struct(style, fields) => { for field in fields { check_flatten_field(cx, *style, field); } } } } fn check_flatten_field(cx: &Ctxt, style: Style, field: &Field) { if !field.attrs.flatten() { return; } match style { Style::Tuple => { cx.error_spanned_by( field.original, "#[serde(flatten)] cannot be used on tuple structs", ); } Style::Newtype => { cx.error_spanned_by( field.original, "#[serde(flatten)] cannot be used on newtype structs", ); } _ => {} } } /// The `other` attribute must be used at most once and it must be the last /// variant of an enum. /// /// Inside a `variant_identifier` all variants must be unit variants. Inside a /// `field_identifier` all but possibly one variant must be unit variants. The /// last variant may be a newtype variant which is an implicit "other" case. fn check_identifier(cx: &Ctxt, cont: &Container) { let variants = match &cont.data { Data::Enum(variants) => variants, Data::Struct(_, _) => { return; } }; for (i, variant) in variants.iter().enumerate() { match ( variant.style, cont.attrs.identifier(), variant.attrs.other(), cont.attrs.tag(), ) { // The `other` attribute may not be used in a variant_identifier. (_, Identifier::Variant, true, _) => { cx.error_spanned_by( variant.original, "#[serde(other)] may not be used on a variant identifier", ); } // Variant with `other` attribute cannot appear in untagged enum (_, Identifier::No, true, &TagType::None) => { cx.error_spanned_by( variant.original, "#[serde(other)] cannot appear on untagged enum", ); } // Variant with `other` attribute must be the last one. (Style::Unit, Identifier::Field, true, _) | (Style::Unit, Identifier::No, true, _) => { if i < variants.len() - 1 { cx.error_spanned_by( variant.original, "#[serde(other)] must be on the last variant", ); } } // Variant with `other` attribute must be a unit variant. 
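// ---------------------------------------------------------------------------
// Illustrative aside (not part of the vendored crate): the kind of user input
// these identifier checks are validating. The names (`Field`, `Id`, `Name`,
// `Other`) are made up; this is a sketch of the accepted shape, not code from
// serde's own tests. In a `field_identifier` enum the `#[serde(other)]`
// variant must be a unit variant and must be declared last, which is exactly
// what the surrounding match arms enforce:
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize)]
//     #[serde(field_identifier, rename_all = "lowercase")]
//     enum Field {
//         Id,
//         Name,
//         #[serde(other)]
//         Other, // unit variant, last in the enum: accepted
//     }
// ---------------------------------------------------------------------------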
(_, Identifier::Field, true, _) | (_, Identifier::No, true, _) => { cx.error_spanned_by( variant.original, "#[serde(other)] must be on a unit variant", ); } // Any sort of variant is allowed if this is not an identifier. (_, Identifier::No, false, _) => {} // Unit variant without `other` attribute is always fine. (Style::Unit, _, false, _) => {} // The last field is allowed to be a newtype catch-all. (Style::Newtype, Identifier::Field, false, _) => { if i < variants.len() - 1 { cx.error_spanned_by( variant.original, format!("`{}` must be the last variant", variant.ident), ); } } (_, Identifier::Field, false, _) => { cx.error_spanned_by( variant.original, "#[serde(field_identifier)] may only contain unit variants", ); } (_, Identifier::Variant, false, _) => { cx.error_spanned_by( variant.original, "#[serde(variant_identifier)] may only contain unit variants", ); } } } } /// Skip-(de)serializing attributes are not allowed on variants marked /// (de)serialize_with. fn check_variant_skip_attrs(cx: &Ctxt, cont: &Container) { let variants = match &cont.data { Data::Enum(variants) => variants, Data::Struct(_, _) => { return; } }; for variant in variants.iter() { if variant.attrs.serialize_with().is_some() { if variant.attrs.skip_serializing() { cx.error_spanned_by( variant.original, format!( "variant `{}` cannot have both #[serde(serialize_with)] and #[serde(skip_serializing)]", variant.ident ), ); } for field in &variant.fields { let member = member_message(&field.member); if field.attrs.skip_serializing() { cx.error_spanned_by( variant.original, format!( "variant `{}` cannot have both #[serde(serialize_with)] and a field {} marked with #[serde(skip_serializing)]", variant.ident, member ), ); } if field.attrs.skip_serializing_if().is_some() { cx.error_spanned_by( variant.original, format!( "variant `{}` cannot have both #[serde(serialize_with)] and a field {} marked with #[serde(skip_serializing_if)]", variant.ident, member ), ); } } } if variant.attrs.deserialize_with().is_some() { if variant.attrs.skip_deserializing() { cx.error_spanned_by( variant.original, format!( "variant `{}` cannot have both #[serde(deserialize_with)] and #[serde(skip_deserializing)]", variant.ident ), ); } for field in &variant.fields { if field.attrs.skip_deserializing() { let member = member_message(&field.member); cx.error_spanned_by( variant.original, format!( "variant `{}` cannot have both #[serde(deserialize_with)] and a field {} marked with #[serde(skip_deserializing)]", variant.ident, member ), ); } } } } } /// The tag of an internally-tagged struct variant must not be /// the same as either one of its fields, as this would result in /// duplicate keys in the serialized output and/or ambiguity in /// the to-be-deserialized input. fn check_internal_tag_field_name_conflict(cx: &Ctxt, cont: &Container) { let variants = match &cont.data { Data::Enum(variants) => variants, Data::Struct(_, _) => return, }; let tag = match cont.attrs.tag() { TagType::Internal { tag } => tag.as_str(), TagType::External | TagType::Adjacent { .. 
} | TagType::None => return, }; let diagnose_conflict = || { cx.error_spanned_by( cont.original, format!("variant field name `{}` conflicts with internal tag", tag), ); }; for variant in variants { match variant.style { Style::Struct => { for field in &variant.fields { let check_ser = !field.attrs.skip_serializing(); let check_de = !field.attrs.skip_deserializing(); let name = field.attrs.name(); let ser_name = name.serialize_name(); if check_ser && ser_name == tag { diagnose_conflict(); return; } for de_name in field.attrs.aliases() { if check_de && de_name == tag { diagnose_conflict(); return; } } } } Style::Unit | Style::Newtype | Style::Tuple => {} } } } /// In the case of adjacently-tagged enums, the type and the /// contents tag must differ, for the same reason. fn check_adjacent_tag_conflict(cx: &Ctxt, cont: &Container) { let (type_tag, content_tag) = match cont.attrs.tag() { TagType::Adjacent { tag, content } => (tag, content), TagType::Internal { .. } | TagType::External | TagType::None => return, }; if type_tag == content_tag { cx.error_spanned_by( cont.original, format!( "enum tags `{}` for type and content conflict with each other", type_tag ), ); } } /// Enums and unit structs cannot be transparent. fn check_transparent(cx: &Ctxt, cont: &mut Container, derive: Derive) { if !cont.attrs.transparent() { return; } if cont.attrs.type_from().is_some() { cx.error_spanned_by( cont.original, "#[serde(transparent)] is not allowed with #[serde(from = \"...\")]", ); } if cont.attrs.type_try_from().is_some() { cx.error_spanned_by( cont.original, "#[serde(transparent)] is not allowed with #[serde(try_from = \"...\")]", ); } if cont.attrs.type_into().is_some() { cx.error_spanned_by( cont.original, "#[serde(transparent)] is not allowed with #[serde(into = \"...\")]", ); } let fields = match &mut cont.data { Data::Enum(_) => { cx.error_spanned_by( cont.original, "#[serde(transparent)] is not allowed on an enum", ); return; } Data::Struct(Style::Unit, _) => { cx.error_spanned_by( cont.original, "#[serde(transparent)] is not allowed on a unit struct", ); return; } Data::Struct(_, fields) => fields, }; let mut transparent_field = None; for field in fields { if allow_transparent(field, derive) { if transparent_field.is_some() { cx.error_spanned_by( cont.original, "#[serde(transparent)] requires struct to have at most one transparent field", ); return; } transparent_field = Some(field); } } match transparent_field { Some(transparent_field) => transparent_field.attrs.mark_transparent(), None => match derive { Derive::Serialize => { cx.error_spanned_by( cont.original, "#[serde(transparent)] requires at least one field that is not skipped", ); } Derive::Deserialize => { cx.error_spanned_by( cont.original, "#[serde(transparent)] requires at least one field that is neither skipped nor has a default", ); } }, } } fn member_message(member: &Member) -> String { match member { Member::Named(ident) => format!("`{}`", ident), Member::Unnamed(i) => format!("#{}", i.index), } } fn allow_transparent(field: &Field, derive: Derive) -> bool { if let Type::Path(ty) = ungroup(field.ty) { if let Some(seg) = ty.path.segments.last() { if seg.ident == "PhantomData" { return false; } } } match derive { Derive::Serialize => !field.attrs.skip_serializing(), Derive::Deserialize => !field.attrs.skip_deserializing() && field.attrs.default().is_none(), } } fn check_from_and_try_from(cx: &Ctxt, cont: &mut Container) { if cont.attrs.type_from().is_some() && cont.attrs.type_try_from().is_some() { cx.error_spanned_by( 
cont.original, "#[serde(from = \"...\")] and #[serde(try_from = \"...\")] conflict with each other", ); } } vendor/serde_derive/src/internals/ast.rs0000664000175000017500000001350014160055207021231 0ustar mwhudsonmwhudson//! A Serde ast, parsed from the Syn ast and ready to generate Rust code. use internals::attr; use internals::check; use internals::{Ctxt, Derive}; use syn; use syn::punctuated::Punctuated; /// A source data structure annotated with `#[derive(Serialize)]` and/or `#[derive(Deserialize)]`, /// parsed into an internal representation. pub struct Container<'a> { /// The struct or enum name (without generics). pub ident: syn::Ident, /// Attributes on the structure, parsed for Serde. pub attrs: attr::Container, /// The contents of the struct or enum. pub data: Data<'a>, /// Any generics on the struct or enum. pub generics: &'a syn::Generics, /// Original input. pub original: &'a syn::DeriveInput, } /// The fields of a struct or enum. /// /// Analogous to `syn::Data`. pub enum Data<'a> { Enum(Vec>), Struct(Style, Vec>), } /// A variant of an enum. pub struct Variant<'a> { pub ident: syn::Ident, pub attrs: attr::Variant, pub style: Style, pub fields: Vec>, pub original: &'a syn::Variant, } /// A field of a struct. pub struct Field<'a> { pub member: syn::Member, pub attrs: attr::Field, pub ty: &'a syn::Type, pub original: &'a syn::Field, } #[derive(Copy, Clone)] pub enum Style { /// Named fields. Struct, /// Many unnamed fields. Tuple, /// One unnamed field. Newtype, /// No fields. Unit, } impl<'a> Container<'a> { /// Convert the raw Syn ast into a parsed container object, collecting errors in `cx`. pub fn from_ast( cx: &Ctxt, item: &'a syn::DeriveInput, derive: Derive, ) -> Option> { let mut attrs = attr::Container::from_ast(cx, item); let mut data = match &item.data { syn::Data::Enum(data) => Data::Enum(enum_from_ast(cx, &data.variants, attrs.default())), syn::Data::Struct(data) => { let (style, fields) = struct_from_ast(cx, &data.fields, None, attrs.default()); Data::Struct(style, fields) } syn::Data::Union(_) => { cx.error_spanned_by(item, "Serde does not support derive for unions"); return None; } }; let mut has_flatten = false; match &mut data { Data::Enum(variants) => { for variant in variants { variant.attrs.rename_by_rules(attrs.rename_all_rules()); for field in &mut variant.fields { if field.attrs.flatten() { has_flatten = true; } field .attrs .rename_by_rules(variant.attrs.rename_all_rules()); } } } Data::Struct(_, fields) => { for field in fields { if field.attrs.flatten() { has_flatten = true; } field.attrs.rename_by_rules(attrs.rename_all_rules()); } } } if has_flatten { attrs.mark_has_flatten(); } let mut item = Container { ident: item.ident.clone(), attrs, data, generics: &item.generics, original: item, }; check::check(cx, &mut item, derive); Some(item) } } impl<'a> Data<'a> { pub fn all_fields(&'a self) -> Box> + 'a> { match self { Data::Enum(variants) => { Box::new(variants.iter().flat_map(|variant| variant.fields.iter())) } Data::Struct(_, fields) => Box::new(fields.iter()), } } pub fn has_getter(&self) -> bool { self.all_fields().any(|f| f.attrs.getter().is_some()) } } fn enum_from_ast<'a>( cx: &Ctxt, variants: &'a Punctuated, container_default: &attr::Default, ) -> Vec> { variants .iter() .map(|variant| { let attrs = attr::Variant::from_ast(cx, variant); let (style, fields) = struct_from_ast(cx, &variant.fields, Some(&attrs), container_default); Variant { ident: variant.ident.clone(), attrs, style, fields, original: variant, } }) .collect() } fn 
struct_from_ast<'a>( cx: &Ctxt, fields: &'a syn::Fields, attrs: Option<&attr::Variant>, container_default: &attr::Default, ) -> (Style, Vec>) { match fields { syn::Fields::Named(fields) => ( Style::Struct, fields_from_ast(cx, &fields.named, attrs, container_default), ), syn::Fields::Unnamed(fields) if fields.unnamed.len() == 1 => ( Style::Newtype, fields_from_ast(cx, &fields.unnamed, attrs, container_default), ), syn::Fields::Unnamed(fields) => ( Style::Tuple, fields_from_ast(cx, &fields.unnamed, attrs, container_default), ), syn::Fields::Unit => (Style::Unit, Vec::new()), } } fn fields_from_ast<'a>( cx: &Ctxt, fields: &'a Punctuated, attrs: Option<&attr::Variant>, container_default: &attr::Default, ) -> Vec> { fields .iter() .enumerate() .map(|(i, field)| Field { member: match &field.ident { Some(ident) => syn::Member::Named(ident.clone()), None => syn::Member::Unnamed(i.into()), }, attrs: attr::Field::from_ast(cx, i, field, attrs, container_default), ty: &field.ty, original: field, }) .collect() } vendor/serde_derive/src/internals/ctxt.rs0000664000175000017500000000354314160055207021432 0ustar mwhudsonmwhudsonuse quote::ToTokens; use std::cell::RefCell; use std::fmt::Display; use std::thread; use syn; /// A type to collect errors together and format them. /// /// Dropping this object will cause a panic. It must be consumed using `check`. /// /// References can be shared since this type uses run-time exclusive mut checking. #[derive(Default)] pub struct Ctxt { // The contents will be set to `None` during checking. This is so that checking can be // enforced. errors: RefCell>>, } impl Ctxt { /// Create a new context object. /// /// This object contains no errors, but will still trigger a panic if it is not `check`ed. pub fn new() -> Self { Ctxt { errors: RefCell::new(Some(Vec::new())), } } /// Add an error to the context object with a tokenenizable object. /// /// The object is used for spanning in error messages. pub fn error_spanned_by(&self, obj: A, msg: T) { self.errors .borrow_mut() .as_mut() .unwrap() // Curb monomorphization from generating too many identical methods. .push(syn::Error::new_spanned(obj.into_token_stream(), msg)); } /// Add one of Syn's parse errors. pub fn syn_error(&self, err: syn::Error) { self.errors.borrow_mut().as_mut().unwrap().push(err); } /// Consume this object, producing a formatted error string if there are errors. pub fn check(self) -> Result<(), Vec> { let errors = self.errors.borrow_mut().take().unwrap(); match errors.len() { 0 => Ok(()), _ => Err(errors), } } } impl Drop for Ctxt { fn drop(&mut self) { if !thread::panicking() && self.errors.borrow().is_some() { panic!("forgot to check for errors"); } } } vendor/serde_derive/src/try.rs0000664000175000017500000000162014160055207017261 0ustar mwhudsonmwhudsonuse proc_macro2::{Punct, Spacing, TokenStream}; // None of our generated code requires the `From::from` error conversion // performed by the standard library's `try!` macro. With this simplified macro // we see a significant improvement in type checking and borrow checking time of // the generated code and a slight improvement in binary size. pub fn replacement() -> TokenStream { // Cannot pass `$expr` to `quote!` prior to Rust 1.17.0 so interpolate it. let dollar = Punct::new('$', Spacing::Alone); quote! { #[allow(unused_macros)] macro_rules! 
try { (#dollar __expr:expr) => { match #dollar __expr { _serde::__private::Ok(__val) => __val, _serde::__private::Err(__err) => { return _serde::__private::Err(__err); } } } } } } vendor/serde_derive/src/de.rs0000664000175000017500000032262414172417313017050 0ustar mwhudsonmwhudsonuse proc_macro2::{Literal, Span, TokenStream}; use quote::ToTokens; use syn::punctuated::Punctuated; use syn::spanned::Spanned; use syn::{self, Ident, Index, Member}; use bound; use dummy; use fragment::{Expr, Fragment, Match, Stmts}; use internals::ast::{Container, Data, Field, Style, Variant}; use internals::{attr, replace_receiver, ungroup, Ctxt, Derive}; use pretend; use std::collections::BTreeSet; use std::ptr; pub fn expand_derive_deserialize( input: &mut syn::DeriveInput, ) -> Result> { replace_receiver(input); let ctxt = Ctxt::new(); let cont = match Container::from_ast(&ctxt, input, Derive::Deserialize) { Some(cont) => cont, None => return Err(ctxt.check().unwrap_err()), }; precondition(&ctxt, &cont); ctxt.check()?; let ident = &cont.ident; let params = Parameters::new(&cont); let (de_impl_generics, _, ty_generics, where_clause) = split_with_de_lifetime(¶ms); let body = Stmts(deserialize_body(&cont, ¶ms)); let delife = params.borrowed.de_lifetime(); let serde = cont.attrs.serde_path(); let impl_block = if let Some(remote) = cont.attrs.remote() { let vis = &input.vis; let used = pretend::pretend_used(&cont, params.is_packed); quote! { impl #de_impl_generics #ident #ty_generics #where_clause { #vis fn deserialize<__D>(__deserializer: __D) -> #serde::__private::Result<#remote #ty_generics, __D::Error> where __D: #serde::Deserializer<#delife>, { #used #body } } } } else { let fn_deserialize_in_place = deserialize_in_place_body(&cont, ¶ms); quote! { #[automatically_derived] impl #de_impl_generics #serde::Deserialize<#delife> for #ident #ty_generics #where_clause { fn deserialize<__D>(__deserializer: __D) -> #serde::__private::Result where __D: #serde::Deserializer<#delife>, { #body } #fn_deserialize_in_place } } }; Ok(dummy::wrap_in_const( cont.attrs.custom_serde_path(), "DESERIALIZE", ident, impl_block, )) } fn precondition(cx: &Ctxt, cont: &Container) { precondition_sized(cx, cont); precondition_no_de_lifetime(cx, cont); } fn precondition_sized(cx: &Ctxt, cont: &Container) { if let Data::Struct(_, fields) = &cont.data { if let Some(last) = fields.last() { if let syn::Type::Slice(_) = ungroup(last.ty) { cx.error_spanned_by( cont.original, "cannot deserialize a dynamically sized struct", ); } } } } fn precondition_no_de_lifetime(cx: &Ctxt, cont: &Container) { if let BorrowedLifetimes::Borrowed(_) = borrowed_lifetimes(cont) { for param in cont.generics.lifetimes() { if param.lifetime.to_string() == "'de" { cx.error_spanned_by( ¶m.lifetime, "cannot deserialize when there is a lifetime parameter called 'de", ); return; } } } } struct Parameters { /// Name of the type the `derive` is on. local: syn::Ident, /// Path to the type the impl is for. Either a single `Ident` for local /// types or `some::remote::Ident` for remote types. Does not include /// generic parameters. this: syn::Path, /// Generics including any explicit and inferred bounds for the impl. generics: syn::Generics, /// Lifetimes borrowed from the deserializer. These will become bounds on /// the `'de` lifetime of the deserializer. borrowed: BorrowedLifetimes, /// At least one field has a serde(getter) attribute, implying that the /// remote type has a private field. has_getter: bool, /// Type has a repr(packed) attribute. 
is_packed: bool, } impl Parameters { fn new(cont: &Container) -> Self { let local = cont.ident.clone(); let this = match cont.attrs.remote() { Some(remote) => remote.clone(), None => cont.ident.clone().into(), }; let borrowed = borrowed_lifetimes(cont); let generics = build_generics(cont, &borrowed); let has_getter = cont.data.has_getter(); let is_packed = cont.attrs.is_packed(); Parameters { local, this, generics, borrowed, has_getter, is_packed, } } /// Type name to use in error messages and `&'static str` arguments to /// various Deserializer methods. fn type_name(&self) -> String { self.this.segments.last().unwrap().ident.to_string() } } // All the generics in the input, plus a bound `T: Deserialize` for each generic // field type that will be deserialized by us, plus a bound `T: Default` for // each generic field type that will be set to a default value. fn build_generics(cont: &Container, borrowed: &BorrowedLifetimes) -> syn::Generics { let generics = bound::without_defaults(cont.generics); let generics = bound::with_where_predicates_from_fields(cont, &generics, attr::Field::de_bound); let generics = bound::with_where_predicates_from_variants(cont, &generics, attr::Variant::de_bound); match cont.attrs.de_bound() { Some(predicates) => bound::with_where_predicates(&generics, predicates), None => { let generics = match *cont.attrs.default() { attr::Default::Default => bound::with_self_bound( cont, &generics, &parse_quote!(_serde::__private::Default), ), attr::Default::None | attr::Default::Path(_) => generics, }; let delife = borrowed.de_lifetime(); let generics = bound::with_bound( cont, &generics, needs_deserialize_bound, &parse_quote!(_serde::Deserialize<#delife>), ); bound::with_bound( cont, &generics, requires_default, &parse_quote!(_serde::__private::Default), ) } } } // Fields with a `skip_deserializing` or `deserialize_with` attribute, or which // belong to a variant with a `skip_deserializing` or `deserialize_with` // attribute, are not deserialized by us so we do not generate a bound. Fields // with a `bound` attribute specify their own bound so we do not generate one. // All other fields may need a `T: Deserialize` bound where T is the type of the // field. fn needs_deserialize_bound(field: &attr::Field, variant: Option<&attr::Variant>) -> bool { !field.skip_deserializing() && field.deserialize_with().is_none() && field.de_bound().is_none() && variant.map_or(true, |variant| { !variant.skip_deserializing() && variant.deserialize_with().is_none() && variant.de_bound().is_none() }) } // Fields with a `default` attribute (not `default=...`), and fields with a // `skip_deserializing` attribute that do not also have `default=...`. fn requires_default(field: &attr::Field, _variant: Option<&attr::Variant>) -> bool { if let attr::Default::Default = *field.default() { true } else { false } } enum BorrowedLifetimes { Borrowed(BTreeSet), Static, } impl BorrowedLifetimes { fn de_lifetime(&self) -> syn::Lifetime { match *self { BorrowedLifetimes::Borrowed(_) => syn::Lifetime::new("'de", Span::call_site()), BorrowedLifetimes::Static => syn::Lifetime::new("'static", Span::call_site()), } } fn de_lifetime_def(&self) -> Option { match self { BorrowedLifetimes::Borrowed(bounds) => Some(syn::LifetimeDef { attrs: Vec::new(), lifetime: syn::Lifetime::new("'de", Span::call_site()), colon_token: None, bounds: bounds.iter().cloned().collect(), }), BorrowedLifetimes::Static => None, } } } // The union of lifetimes borrowed by each field of the container. 
// // These turn into bounds on the `'de` lifetime of the Deserialize impl. If // lifetimes `'a` and `'b` are borrowed but `'c` is not, the impl is: // // impl<'de: 'a + 'b, 'a, 'b, 'c> Deserialize<'de> for S<'a, 'b, 'c> // // If any borrowed lifetime is `'static`, then `'de: 'static` would be redundant // and we use plain `'static` instead of `'de`. fn borrowed_lifetimes(cont: &Container) -> BorrowedLifetimes { let mut lifetimes = BTreeSet::new(); for field in cont.data.all_fields() { if !field.attrs.skip_deserializing() { lifetimes.extend(field.attrs.borrowed_lifetimes().iter().cloned()); } } if lifetimes.iter().any(|b| b.to_string() == "'static") { BorrowedLifetimes::Static } else { BorrowedLifetimes::Borrowed(lifetimes) } } fn deserialize_body(cont: &Container, params: &Parameters) -> Fragment { if cont.attrs.transparent() { deserialize_transparent(cont, params) } else if let Some(type_from) = cont.attrs.type_from() { deserialize_from(type_from) } else if let Some(type_try_from) = cont.attrs.type_try_from() { deserialize_try_from(type_try_from) } else if let attr::Identifier::No = cont.attrs.identifier() { match &cont.data { Data::Enum(variants) => deserialize_enum(params, variants, &cont.attrs), Data::Struct(Style::Struct, fields) => { deserialize_struct(None, params, fields, &cont.attrs, None, &Untagged::No) } Data::Struct(Style::Tuple, fields) | Data::Struct(Style::Newtype, fields) => { deserialize_tuple(None, params, fields, &cont.attrs, None) } Data::Struct(Style::Unit, _) => deserialize_unit_struct(params, &cont.attrs), } } else { match &cont.data { Data::Enum(variants) => deserialize_custom_identifier(params, variants, &cont.attrs), Data::Struct(_, _) => unreachable!("checked in serde_derive_internals"), } } } #[cfg(feature = "deserialize_in_place")] fn deserialize_in_place_body(cont: &Container, params: &Parameters) -> Option { // Only remote derives have getters, and we do not generate // deserialize_in_place for remote derives. assert!(!params.has_getter); if cont.attrs.transparent() || cont.attrs.type_from().is_some() || cont.attrs.type_try_from().is_some() || cont.attrs.identifier().is_some() || cont .data .all_fields() .all(|f| f.attrs.deserialize_with().is_some()) { return None; } let code = match &cont.data { Data::Struct(Style::Struct, fields) => { deserialize_struct_in_place(None, params, fields, &cont.attrs, None)? } Data::Struct(Style::Tuple, fields) | Data::Struct(Style::Newtype, fields) => { deserialize_tuple_in_place(None, params, fields, &cont.attrs, None) } Data::Enum(_) | Data::Struct(Style::Unit, _) => { return None; } }; let delife = params.borrowed.de_lifetime(); let stmts = Stmts(code); let fn_deserialize_in_place = quote_block! 
{ fn deserialize_in_place<__D>(__deserializer: __D, __place: &mut Self) -> _serde::__private::Result<(), __D::Error> where __D: _serde::Deserializer<#delife>, { #stmts } }; Some(Stmts(fn_deserialize_in_place)) } #[cfg(not(feature = "deserialize_in_place"))] fn deserialize_in_place_body(_cont: &Container, _params: &Parameters) -> Option { None } fn deserialize_transparent(cont: &Container, params: &Parameters) -> Fragment { let fields = match &cont.data { Data::Struct(_, fields) => fields, Data::Enum(_) => unreachable!(), }; let this = ¶ms.this; let transparent_field = fields.iter().find(|f| f.attrs.transparent()).unwrap(); let path = match transparent_field.attrs.deserialize_with() { Some(path) => quote!(#path), None => { let span = transparent_field.original.span(); quote_spanned!(span=> _serde::Deserialize::deserialize) } }; let assign = fields.iter().map(|field| { let member = &field.member; if ptr::eq(field, transparent_field) { quote!(#member: __transparent) } else { let value = match field.attrs.default() { attr::Default::Default => quote!(_serde::__private::Default::default()), attr::Default::Path(path) => quote!(#path()), attr::Default::None => quote!(_serde::__private::PhantomData), }; quote!(#member: #value) } }); quote_block! { _serde::__private::Result::map( #path(__deserializer), |__transparent| #this { #(#assign),* }) } } fn deserialize_from(type_from: &syn::Type) -> Fragment { quote_block! { _serde::__private::Result::map( <#type_from as _serde::Deserialize>::deserialize(__deserializer), _serde::__private::From::from) } } fn deserialize_try_from(type_try_from: &syn::Type) -> Fragment { quote_block! { _serde::__private::Result::and_then( <#type_try_from as _serde::Deserialize>::deserialize(__deserializer), |v| _serde::__private::TryFrom::try_from(v).map_err(_serde::de::Error::custom)) } } fn deserialize_unit_struct(params: &Parameters, cattrs: &attr::Container) -> Fragment { let this = ¶ms.this; let type_name = cattrs.name().deserialize_name(); let expecting = format!("unit struct {}", params.type_name()); let expecting = cattrs.expecting().unwrap_or(&expecting); quote_block! { struct __Visitor; impl<'de> _serde::de::Visitor<'de> for __Visitor { type Value = #this; fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #[inline] fn visit_unit<__E>(self) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(#this) } } _serde::Deserializer::deserialize_unit_struct(__deserializer, #type_name, __Visitor) } } fn deserialize_tuple( variant_ident: Option<&syn::Ident>, params: &Parameters, fields: &[Field], cattrs: &attr::Container, deserializer: Option, ) -> Fragment { let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); assert!(!cattrs.has_flatten()); // If there are getters (implying private fields), construct the local type // and use an `Into` conversion to get the remote type. If there are no // getters then construct the target type directly. 
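// ---------------------------------------------------------------------------
// Illustrative aside (not part of the vendored crate): the user-facing shape
// of the "getter" case referred to above, loosely following the remote-derive
// pattern documented at https://serde.rs/remote-derive.html. All names here
// (`other_crate`, `Duration`, `DurationDef`, `Process`) are made up for the
// sketch, and it assumes the `serde` crate with its `derive` feature.
#[allow(dead_code)]
mod remote_getter_sketch {
    // Stand-in for a foreign crate whose fields are private.
    pub mod other_crate {
        pub struct Duration {
            secs: i64,
            nanos: i32,
        }
        impl Duration {
            pub fn new(secs: i64, nanos: i32) -> Self {
                Duration { secs, nanos }
            }
            pub fn seconds(&self) -> i64 {
                self.secs
            }
            pub fn subsec_nanos(&self) -> i32 {
                self.nanos
            }
        }
    }

    use self::other_crate::Duration;
    use serde::{Deserialize, Serialize};

    // Local definition of the remote type: serialization reads through the
    // getters, while deserialization constructs `DurationDef` and converts it
    // into the remote type via `From` -- the `Into` conversion the comment
    // above refers to.
    #[derive(Serialize, Deserialize)]
    #[serde(remote = "Duration")]
    struct DurationDef {
        #[serde(getter = "Duration::seconds")]
        secs: i64,
        #[serde(getter = "Duration::subsec_nanos")]
        nanos: i32,
    }

    impl From<DurationDef> for Duration {
        fn from(def: DurationDef) -> Duration {
            Duration::new(def.secs, def.nanos)
        }
    }

    // The remote definition is then used through `with` on an ordinary field.
    #[derive(Serialize, Deserialize)]
    struct Process {
        #[serde(with = "DurationDef")]
        wait: Duration,
    }
}
// ---------------------------------------------------------------------------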
let construct = if params.has_getter { let local = ¶ms.local; quote!(#local) } else { quote!(#this) }; let is_enum = variant_ident.is_some(); let type_path = match variant_ident { Some(variant_ident) => quote!(#construct::#variant_ident), None => construct, }; let expecting = match variant_ident { Some(variant_ident) => format!("tuple variant {}::{}", params.type_name(), variant_ident), None => format!("tuple struct {}", params.type_name()), }; let expecting = cattrs.expecting().unwrap_or(&expecting); let nfields = fields.len(); let visit_newtype_struct = if !is_enum && nfields == 1 { Some(deserialize_newtype_struct(&type_path, params, &fields[0])) } else { None }; let visit_seq = Stmts(deserialize_seq( &type_path, params, fields, false, cattrs, expecting, )); let visitor_expr = quote! { __Visitor { marker: _serde::__private::PhantomData::<#this #ty_generics>, lifetime: _serde::__private::PhantomData, } }; let dispatch = if let Some(deserializer) = deserializer { quote!(_serde::Deserializer::deserialize_tuple(#deserializer, #nfields, #visitor_expr)) } else if is_enum { quote!(_serde::de::VariantAccess::tuple_variant(__variant, #nfields, #visitor_expr)) } else if nfields == 1 { let type_name = cattrs.name().deserialize_name(); quote!(_serde::Deserializer::deserialize_newtype_struct(__deserializer, #type_name, #visitor_expr)) } else { let type_name = cattrs.name().deserialize_name(); quote!(_serde::Deserializer::deserialize_tuple_struct(__deserializer, #type_name, #nfields, #visitor_expr)) }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let visitor_var = if all_skipped { quote!(_) } else { quote!(mut __seq) }; quote_block! { struct __Visitor #de_impl_generics #where_clause { marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::Visitor<#delife> for __Visitor #de_ty_generics #where_clause { type Value = #this #ty_generics; fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #visit_newtype_struct #[inline] fn visit_seq<__A>(self, #visitor_var: __A) -> _serde::__private::Result where __A: _serde::de::SeqAccess<#delife>, { #visit_seq } } #dispatch } } #[cfg(feature = "deserialize_in_place")] fn deserialize_tuple_in_place( variant_ident: Option, params: &Parameters, fields: &[Field], cattrs: &attr::Container, deserializer: Option, ) -> Fragment { let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); assert!(!cattrs.has_flatten()); let is_enum = variant_ident.is_some(); let expecting = match variant_ident { Some(variant_ident) => format!("tuple variant {}::{}", params.type_name(), variant_ident), None => format!("tuple struct {}", params.type_name()), }; let expecting = cattrs.expecting().unwrap_or(&expecting); let nfields = fields.len(); let visit_newtype_struct = if !is_enum && nfields == 1 { Some(deserialize_newtype_struct_in_place(params, &fields[0])) } else { None }; let visit_seq = Stmts(deserialize_seq_in_place(params, fields, cattrs, expecting)); let visitor_expr = quote! 
{ __Visitor { place: __place, lifetime: _serde::__private::PhantomData, } }; let dispatch = if let Some(deserializer) = deserializer { quote!(_serde::Deserializer::deserialize_tuple(#deserializer, #nfields, #visitor_expr)) } else if is_enum { quote!(_serde::de::VariantAccess::tuple_variant(__variant, #nfields, #visitor_expr)) } else if nfields == 1 { let type_name = cattrs.name().deserialize_name(); quote!(_serde::Deserializer::deserialize_newtype_struct(__deserializer, #type_name, #visitor_expr)) } else { let type_name = cattrs.name().deserialize_name(); quote!(_serde::Deserializer::deserialize_tuple_struct(__deserializer, #type_name, #nfields, #visitor_expr)) }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let visitor_var = if all_skipped { quote!(_) } else { quote!(mut __seq) }; let in_place_impl_generics = de_impl_generics.in_place(); let in_place_ty_generics = de_ty_generics.in_place(); let place_life = place_lifetime(); quote_block! { struct __Visitor #in_place_impl_generics #where_clause { place: &#place_life mut #this #ty_generics, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #in_place_impl_generics _serde::de::Visitor<#delife> for __Visitor #in_place_ty_generics #where_clause { type Value = (); fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #visit_newtype_struct #[inline] fn visit_seq<__A>(self, #visitor_var: __A) -> _serde::__private::Result where __A: _serde::de::SeqAccess<#delife>, { #visit_seq } } #dispatch } } fn deserialize_seq( type_path: &TokenStream, params: &Parameters, fields: &[Field], is_struct: bool, cattrs: &attr::Container, expecting: &str, ) -> Fragment { let vars = (0..fields.len()).map(field_i as fn(_) -> _); let deserialized_count = fields .iter() .filter(|field| !field.attrs.skip_deserializing()) .count(); let expecting = if deserialized_count == 1 { format!("{} with 1 element", expecting) } else { format!("{} with {} elements", expecting, deserialized_count) }; let expecting = cattrs.expecting().unwrap_or(&expecting); let mut index_in_seq = 0_usize; let let_values = vars.clone().zip(fields).map(|(var, field)| { if field.attrs.skip_deserializing() { let default = Expr(expr_is_missing(field, cattrs)); quote! { let #var = #default; } } else { let visit = match field.attrs.deserialize_with() { None => { let field_ty = field.ty; let span = field.original.span(); let func = quote_spanned!(span=> _serde::de::SeqAccess::next_element::<#field_ty>); quote!(try!(#func(&mut __seq))) } Some(path) => { let (wrapper, wrapper_ty) = wrap_deserialize_field_with(params, field.ty, path); quote!({ #wrapper _serde::__private::Option::map( try!(_serde::de::SeqAccess::next_element::<#wrapper_ty>(&mut __seq)), |__wrap| __wrap.value) }) } }; let value_if_none = match field.attrs.default() { attr::Default::Default => quote!(_serde::__private::Default::default()), attr::Default::Path(path) => quote!(#path()), attr::Default::None => quote!( return _serde::__private::Err(_serde::de::Error::invalid_length(#index_in_seq, &#expecting)); ), }; let assign = quote! { let #var = match #visit { _serde::__private::Some(__value) => __value, _serde::__private::None => { #value_if_none } }; }; index_in_seq += 1; assign } }); let mut result = if is_struct { let names = fields.iter().map(|f| &f.member); quote! { #type_path { #( #names: #vars ),* } } } else { quote! 
{ #type_path ( #(#vars),* ) } }; if params.has_getter { let this = ¶ms.this; result = quote! { _serde::__private::Into::<#this>::into(#result) }; } let let_default = match cattrs.default() { attr::Default::Default => Some(quote!( let __default: Self::Value = _serde::__private::Default::default(); )), attr::Default::Path(path) => Some(quote!( let __default: Self::Value = #path(); )), attr::Default::None => { // We don't need the default value, to prevent an unused variable warning // we'll leave the line empty. None } }; quote_block! { #let_default #(#let_values)* _serde::__private::Ok(#result) } } #[cfg(feature = "deserialize_in_place")] fn deserialize_seq_in_place( params: &Parameters, fields: &[Field], cattrs: &attr::Container, expecting: &str, ) -> Fragment { let deserialized_count = fields .iter() .filter(|field| !field.attrs.skip_deserializing()) .count(); let expecting = if deserialized_count == 1 { format!("{} with 1 element", expecting) } else { format!("{} with {} elements", expecting, deserialized_count) }; let expecting = cattrs.expecting().unwrap_or(&expecting); let mut index_in_seq = 0usize; let write_values = fields.iter().map(|field| { let member = &field.member; if field.attrs.skip_deserializing() { let default = Expr(expr_is_missing(field, cattrs)); quote! { self.place.#member = #default; } } else { let value_if_none = match field.attrs.default() { attr::Default::Default => quote!( self.place.#member = _serde::__private::Default::default(); ), attr::Default::Path(path) => quote!( self.place.#member = #path(); ), attr::Default::None => quote!( return _serde::__private::Err(_serde::de::Error::invalid_length(#index_in_seq, &#expecting)); ), }; let write = match field.attrs.deserialize_with() { None => { quote! { if let _serde::__private::None = try!(_serde::de::SeqAccess::next_element_seed(&mut __seq, _serde::__private::de::InPlaceSeed(&mut self.place.#member))) { #value_if_none } } } Some(path) => { let (wrapper, wrapper_ty) = wrap_deserialize_field_with(params, field.ty, path); quote!({ #wrapper match try!(_serde::de::SeqAccess::next_element::<#wrapper_ty>(&mut __seq)) { _serde::__private::Some(__wrap) => { self.place.#member = __wrap.value; } _serde::__private::None => { #value_if_none } } }) } }; index_in_seq += 1; write } }); let this = ¶ms.this; let (_, ty_generics, _) = params.generics.split_for_impl(); let let_default = match cattrs.default() { attr::Default::Default => Some(quote!( let __default: #this #ty_generics = _serde::__private::Default::default(); )), attr::Default::Path(path) => Some(quote!( let __default: #this #ty_generics = #path(); )), attr::Default::None => { // We don't need the default value, to prevent an unused variable warning // we'll leave the line empty. None } }; quote_block! { #let_default #(#write_values)* _serde::__private::Ok(()) } } fn deserialize_newtype_struct( type_path: &TokenStream, params: &Parameters, field: &Field, ) -> TokenStream { let delife = params.borrowed.de_lifetime(); let field_ty = field.ty; let value = match field.attrs.deserialize_with() { None => { let span = field.original.span(); let func = quote_spanned!(span=> <#field_ty as _serde::Deserialize>::deserialize); quote! { try!(#func(__e)) } } Some(path) => { quote! { try!(#path(__e)) } } }; let mut result = quote!(#type_path(__field0)); if params.has_getter { let this = ¶ms.this; result = quote! { _serde::__private::Into::<#this>::into(#result) }; } quote! 
{ #[inline] fn visit_newtype_struct<__E>(self, __e: __E) -> _serde::__private::Result where __E: _serde::Deserializer<#delife>, { let __field0: #field_ty = #value; _serde::__private::Ok(#result) } } } #[cfg(feature = "deserialize_in_place")] fn deserialize_newtype_struct_in_place(params: &Parameters, field: &Field) -> TokenStream { // We do not generate deserialize_in_place if every field has a // deserialize_with. assert!(field.attrs.deserialize_with().is_none()); let delife = params.borrowed.de_lifetime(); quote! { #[inline] fn visit_newtype_struct<__E>(self, __e: __E) -> _serde::__private::Result where __E: _serde::Deserializer<#delife>, { _serde::Deserialize::deserialize_in_place(__e, &mut self.place.0) } } } enum Untagged { Yes, No, } fn deserialize_struct( variant_ident: Option<&syn::Ident>, params: &Parameters, fields: &[Field], cattrs: &attr::Container, deserializer: Option, untagged: &Untagged, ) -> Fragment { let is_enum = variant_ident.is_some(); let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); // If there are getters (implying private fields), construct the local type // and use an `Into` conversion to get the remote type. If there are no // getters then construct the target type directly. let construct = if params.has_getter { let local = ¶ms.local; quote!(#local) } else { quote!(#this) }; let type_path = match variant_ident { Some(variant_ident) => quote!(#construct::#variant_ident), None => construct, }; let expecting = match variant_ident { Some(variant_ident) => format!("struct variant {}::{}", params.type_name(), variant_ident), None => format!("struct {}", params.type_name()), }; let expecting = cattrs.expecting().unwrap_or(&expecting); let visit_seq = Stmts(deserialize_seq( &type_path, params, fields, true, cattrs, expecting, )); let (field_visitor, fields_stmt, visit_map) = if cattrs.has_flatten() { deserialize_struct_as_map_visitor(&type_path, params, fields, cattrs) } else { deserialize_struct_as_struct_visitor(&type_path, params, fields, cattrs) }; let field_visitor = Stmts(field_visitor); let fields_stmt = fields_stmt.map(Stmts); let visit_map = Stmts(visit_map); let visitor_expr = quote! { __Visitor { marker: _serde::__private::PhantomData::<#this #ty_generics>, lifetime: _serde::__private::PhantomData, } }; let dispatch = if let Some(deserializer) = deserializer { quote! { _serde::Deserializer::deserialize_any(#deserializer, #visitor_expr) } } else if is_enum && cattrs.has_flatten() { quote! { _serde::de::VariantAccess::newtype_variant_seed(__variant, #visitor_expr) } } else if is_enum { quote! { _serde::de::VariantAccess::struct_variant(__variant, FIELDS, #visitor_expr) } } else if cattrs.has_flatten() { quote! { _serde::Deserializer::deserialize_map(__deserializer, #visitor_expr) } } else { let type_name = cattrs.name().deserialize_name(); quote! { _serde::Deserializer::deserialize_struct(__deserializer, #type_name, FIELDS, #visitor_expr) } }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let visitor_var = if all_skipped { quote!(_) } else { quote!(mut __seq) }; // untagged struct variants do not get a visit_seq method. The same applies to // structs that only have a map representation. let visit_seq = match *untagged { Untagged::No if !cattrs.has_flatten() => Some(quote! 
{ #[inline] fn visit_seq<__A>(self, #visitor_var: __A) -> _serde::__private::Result where __A: _serde::de::SeqAccess<#delife>, { #visit_seq } }), _ => None, }; let visitor_seed = if is_enum && cattrs.has_flatten() { Some(quote! { impl #de_impl_generics _serde::de::DeserializeSeed<#delife> for __Visitor #de_ty_generics #where_clause { type Value = #this #ty_generics; fn deserialize<__D>(self, __deserializer: __D) -> _serde::__private::Result where __D: _serde::Deserializer<'de>, { _serde::Deserializer::deserialize_map(__deserializer, self) } } }) } else { None }; quote_block! { #field_visitor struct __Visitor #de_impl_generics #where_clause { marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::Visitor<#delife> for __Visitor #de_ty_generics #where_clause { type Value = #this #ty_generics; fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #visit_seq #[inline] fn visit_map<__A>(self, mut __map: __A) -> _serde::__private::Result where __A: _serde::de::MapAccess<#delife>, { #visit_map } } #visitor_seed #fields_stmt #dispatch } } #[cfg(feature = "deserialize_in_place")] fn deserialize_struct_in_place( variant_ident: Option, params: &Parameters, fields: &[Field], cattrs: &attr::Container, deserializer: Option, ) -> Option { let is_enum = variant_ident.is_some(); // for now we do not support in_place deserialization for structs that // are represented as map. if cattrs.has_flatten() { return None; } let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); let expecting = match variant_ident { Some(variant_ident) => format!("struct variant {}::{}", params.type_name(), variant_ident), None => format!("struct {}", params.type_name()), }; let expecting = cattrs.expecting().unwrap_or(&expecting); let visit_seq = Stmts(deserialize_seq_in_place(params, fields, cattrs, expecting)); let (field_visitor, fields_stmt, visit_map) = deserialize_struct_as_struct_in_place_visitor(params, fields, cattrs); let field_visitor = Stmts(field_visitor); let fields_stmt = Stmts(fields_stmt); let visit_map = Stmts(visit_map); let visitor_expr = quote! { __Visitor { place: __place, lifetime: _serde::__private::PhantomData, } }; let dispatch = if let Some(deserializer) = deserializer { quote! { _serde::Deserializer::deserialize_any(#deserializer, #visitor_expr) } } else if is_enum { quote! { _serde::de::VariantAccess::struct_variant(__variant, FIELDS, #visitor_expr) } } else { let type_name = cattrs.name().deserialize_name(); quote! { _serde::Deserializer::deserialize_struct(__deserializer, #type_name, FIELDS, #visitor_expr) } }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let visitor_var = if all_skipped { quote!(_) } else { quote!(mut __seq) }; let visit_seq = quote! { #[inline] fn visit_seq<__A>(self, #visitor_var: __A) -> _serde::__private::Result where __A: _serde::de::SeqAccess<#delife>, { #visit_seq } }; let in_place_impl_generics = de_impl_generics.in_place(); let in_place_ty_generics = de_ty_generics.in_place(); let place_life = place_lifetime(); Some(quote_block! 
{ #field_visitor struct __Visitor #in_place_impl_generics #where_clause { place: &#place_life mut #this #ty_generics, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #in_place_impl_generics _serde::de::Visitor<#delife> for __Visitor #in_place_ty_generics #where_clause { type Value = (); fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #visit_seq #[inline] fn visit_map<__A>(self, mut __map: __A) -> _serde::__private::Result where __A: _serde::de::MapAccess<#delife>, { #visit_map } } #fields_stmt #dispatch }) } fn deserialize_enum( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, ) -> Fragment { match cattrs.tag() { attr::TagType::External => deserialize_externally_tagged_enum(params, variants, cattrs), attr::TagType::Internal { tag } => { deserialize_internally_tagged_enum(params, variants, cattrs, tag) } attr::TagType::Adjacent { tag, content } => { deserialize_adjacently_tagged_enum(params, variants, cattrs, tag, content) } attr::TagType::None => deserialize_untagged_enum(params, variants, cattrs), } } fn prepare_enum_variant_enum( variants: &[Variant], cattrs: &attr::Container, ) -> (TokenStream, Stmts) { let mut deserialized_variants = variants .iter() .enumerate() .filter(|&(_, variant)| !variant.attrs.skip_deserializing()); let variant_names_idents: Vec<_> = deserialized_variants .clone() .map(|(i, variant)| { ( variant.attrs.name().deserialize_name(), field_i(i), variant.attrs.aliases(), ) }) .collect(); let other_idx = deserialized_variants.position(|(_, variant)| variant.attrs.other()); let variants_stmt = { let variant_names = variant_names_idents.iter().map(|(name, _, _)| name); quote! { const VARIANTS: &'static [&'static str] = &[ #(#variant_names),* ]; } }; let variant_visitor = Stmts(deserialize_generated_identifier( &variant_names_idents, cattrs, true, other_idx, )); (variants_stmt, variant_visitor) } fn deserialize_externally_tagged_enum( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, ) -> Fragment { let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); let type_name = cattrs.name().deserialize_name(); let expecting = format!("enum {}", params.type_name()); let expecting = cattrs.expecting().unwrap_or(&expecting); let (variants_stmt, variant_visitor) = prepare_enum_variant_enum(variants, cattrs); // Match arms to extract a variant from a string let variant_arms = variants .iter() .enumerate() .filter(|&(_, variant)| !variant.attrs.skip_deserializing()) .map(|(i, variant)| { let variant_name = field_i(i); let block = Match(deserialize_externally_tagged_variant( params, variant, cattrs, )); quote! { (__Field::#variant_name, __variant) => #block } }); let all_skipped = variants .iter() .all(|variant| variant.attrs.skip_deserializing()); let match_variant = if all_skipped { // This is an empty enum like `enum Impossible {}` or an enum in which // all variants have `#[serde(skip_deserializing)]`. quote! { // FIXME: Once we drop support for Rust 1.15: // let _serde::__private::Err(__err) = _serde::de::EnumAccess::variant::<__Field>(__data); // _serde::__private::Err(__err) _serde::__private::Result::map( _serde::de::EnumAccess::variant::<__Field>(__data), |(__impossible, _)| match __impossible {}) } } else { quote! 
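// The four TagType branches above map onto serde's documented enum
// representations. Illustrative sketch (assumes serde; JSON shapes shown as
// produced/accepted by serde_json):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize)]
//     enum Ext { A { x: u32 } }            // external (default): {"A":{"x":1}}
//
//     #[derive(Deserialize)]
//     #[serde(tag = "type")]
//     enum Int { A { x: u32 } }            // internal: {"type":"A","x":1}
//
//     #[derive(Deserialize)]
//     #[serde(tag = "t", content = "c")]
//     enum Adj { A { x: u32 } }            // adjacent: {"t":"A","c":{"x":1}}
//
//     #[derive(Deserialize)]
//     #[serde(untagged)]
//     enum Unt { A { x: u32 } }            // untagged: {"x":1}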
{ match try!(_serde::de::EnumAccess::variant(__data)) { #(#variant_arms)* } } }; quote_block! { #variant_visitor struct __Visitor #de_impl_generics #where_clause { marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::Visitor<#delife> for __Visitor #de_ty_generics #where_clause { type Value = #this #ty_generics; fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } fn visit_enum<__A>(self, __data: __A) -> _serde::__private::Result where __A: _serde::de::EnumAccess<#delife>, { #match_variant } } #variants_stmt _serde::Deserializer::deserialize_enum( __deserializer, #type_name, VARIANTS, __Visitor { marker: _serde::__private::PhantomData::<#this #ty_generics>, lifetime: _serde::__private::PhantomData, }, ) } } fn deserialize_internally_tagged_enum( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, tag: &str, ) -> Fragment { let (variants_stmt, variant_visitor) = prepare_enum_variant_enum(variants, cattrs); // Match arms to extract a variant from a string let variant_arms = variants .iter() .enumerate() .filter(|&(_, variant)| !variant.attrs.skip_deserializing()) .map(|(i, variant)| { let variant_name = field_i(i); let block = Match(deserialize_internally_tagged_variant( params, variant, cattrs, quote! { _serde::__private::de::ContentDeserializer::<__D::Error>::new(__tagged.content) }, )); quote! { __Field::#variant_name => #block } }); let expecting = format!("internally tagged enum {}", params.type_name()); let expecting = cattrs.expecting().unwrap_or(&expecting); quote_block! { #variant_visitor #variants_stmt let __tagged = try!(_serde::Deserializer::deserialize_any( __deserializer, _serde::__private::de::TaggedContentVisitor::<__Field>::new(#tag, #expecting))); match __tagged.tag { #(#variant_arms)* } } } fn deserialize_adjacently_tagged_enum( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, tag: &str, content: &str, ) -> Fragment { let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); let (variants_stmt, variant_visitor) = prepare_enum_variant_enum(variants, cattrs); let variant_arms: &Vec<_> = &variants .iter() .enumerate() .filter(|&(_, variant)| !variant.attrs.skip_deserializing()) .map(|(i, variant)| { let variant_index = field_i(i); let block = Match(deserialize_untagged_variant( params, variant, cattrs, quote!(__deserializer), )); quote! { __Field::#variant_index => #block } }) .collect(); let expecting = format!("adjacently tagged enum {}", params.type_name()); let expecting = cattrs.expecting().unwrap_or(&expecting); let type_name = cattrs.name().deserialize_name(); let deny_unknown_fields = cattrs.deny_unknown_fields(); // If unknown fields are allowed, we pick the visitor that can step over // those. Otherwise we pick the visitor that fails on unknown keys. let field_visitor_ty = if deny_unknown_fields { quote! { _serde::__private::de::TagOrContentFieldVisitor } } else { quote! { _serde::__private::de::TagContentOtherFieldVisitor } }; let tag_or_content = quote! { #field_visitor_ty { tag: #tag, content: #content, } }; let mut missing_content = quote! 
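// A variant marked #[serde(other)] becomes the `other_idx` fallthrough built
// by prepare_enum_variant_enum above. Illustrative sketch (assumes serde):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     #[serde(tag = "kind")]
//     enum Message {
//         Ping,
//         Pong,
//         #[serde(other)]
//         Unknown,       // any unrecognized "kind" value lands here
//     }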
{ _serde::__private::Err(<__A::Error as _serde::de::Error>::missing_field(#content)) }; let mut missing_content_fallthrough = quote!(); let missing_content_arms = variants .iter() .enumerate() .filter(|&(_, variant)| !variant.attrs.skip_deserializing()) .filter_map(|(i, variant)| { let variant_index = field_i(i); let variant_ident = &variant.ident; let arm = match variant.style { Style::Unit => quote! { _serde::__private::Ok(#this::#variant_ident) }, Style::Newtype if variant.attrs.deserialize_with().is_none() => { let span = variant.original.span(); let func = quote_spanned!(span=> _serde::__private::de::missing_field); quote! { #func(#content).map(#this::#variant_ident) } } _ => { missing_content_fallthrough = quote!(_ => #missing_content); return None; } }; Some(quote! { __Field::#variant_index => #arm, }) }) .collect::>(); if !missing_content_arms.is_empty() { missing_content = quote! { match __field { #(#missing_content_arms)* #missing_content_fallthrough } }; } // Advance the map by one key, returning early in case of error. let next_key = quote! { try!(_serde::de::MapAccess::next_key_seed(&mut __map, #tag_or_content)) }; // When allowing unknown fields, we want to transparently step through keys // we don't care about until we find `tag`, `content`, or run out of keys. let next_relevant_key = if deny_unknown_fields { next_key } else { quote!({ let mut __rk : _serde::__private::Option<_serde::__private::de::TagOrContentField> = _serde::__private::None; while let _serde::__private::Some(__k) = #next_key { match __k { _serde::__private::de::TagContentOtherField::Other => { let _ = try!(_serde::de::MapAccess::next_value::<_serde::de::IgnoredAny>(&mut __map)); continue; }, _serde::__private::de::TagContentOtherField::Tag => { __rk = _serde::__private::Some(_serde::__private::de::TagOrContentField::Tag); break; } _serde::__private::de::TagContentOtherField::Content => { __rk = _serde::__private::Some(_serde::__private::de::TagOrContentField::Content); break; } } } __rk }) }; // Step through remaining keys, looking for duplicates of previously-seen // keys. When unknown fields are denied, any key that isn't a duplicate will // at this point immediately produce an error. let visit_remaining_keys = quote! { match #next_relevant_key { _serde::__private::Some(_serde::__private::de::TagOrContentField::Tag) => { _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#tag)) } _serde::__private::Some(_serde::__private::de::TagOrContentField::Content) => { _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#content)) } _serde::__private::None => _serde::__private::Ok(__ret), } }; let finish_content_then_tag = if variant_arms.is_empty() { quote! { match try!(_serde::de::MapAccess::next_value::<__Field>(&mut __map)) {} } } else { quote! { let __ret = try!(match try!(_serde::de::MapAccess::next_value(&mut __map)) { // Deserialize the buffered content now that we know the variant. #(#variant_arms)* }); // Visit remaining keys, looking for duplicates. #visit_remaining_keys } }; quote_block! 
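// The missing_content arms above mean that, for an adjacently tagged enum, a
// unit variant needs no content key at all. Illustrative sketch (assumes
// serde and serde_json):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     #[serde(tag = "t", content = "c")]
//     enum Cmd { Ping, Move { x: i32 } }
//
//     // Ok(Cmd::Ping): the "c" key may be omitted for a unit variant.
//     let ping: Cmd = serde_json::from_str(r#"{"t":"Ping"}"#).unwrap();
//     // Non-unit variants still require the content key.
//     let mv: Cmd = serde_json::from_str(r#"{"t":"Move","c":{"x":3}}"#).unwrap();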
{ #variant_visitor #variants_stmt struct __Seed #de_impl_generics #where_clause { field: __Field, marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::DeserializeSeed<#delife> for __Seed #de_ty_generics #where_clause { type Value = #this #ty_generics; fn deserialize<__D>(self, __deserializer: __D) -> _serde::__private::Result where __D: _serde::Deserializer<#delife>, { match self.field { #(#variant_arms)* } } } struct __Visitor #de_impl_generics #where_clause { marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::Visitor<#delife> for __Visitor #de_ty_generics #where_clause { type Value = #this #ty_generics; fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } fn visit_map<__A>(self, mut __map: __A) -> _serde::__private::Result where __A: _serde::de::MapAccess<#delife>, { // Visit the first relevant key. match #next_relevant_key { // First key is the tag. _serde::__private::Some(_serde::__private::de::TagOrContentField::Tag) => { // Parse the tag. let __field = try!(_serde::de::MapAccess::next_value(&mut __map)); // Visit the second key. match #next_relevant_key { // Second key is a duplicate of the tag. _serde::__private::Some(_serde::__private::de::TagOrContentField::Tag) => { _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#tag)) } // Second key is the content. _serde::__private::Some(_serde::__private::de::TagOrContentField::Content) => { let __ret = try!(_serde::de::MapAccess::next_value_seed(&mut __map, __Seed { field: __field, marker: _serde::__private::PhantomData, lifetime: _serde::__private::PhantomData, })); // Visit remaining keys, looking for duplicates. #visit_remaining_keys } // There is no second key; might be okay if the we have a unit variant. _serde::__private::None => #missing_content } } // First key is the content. _serde::__private::Some(_serde::__private::de::TagOrContentField::Content) => { // Buffer up the content. let __content = try!(_serde::de::MapAccess::next_value::<_serde::__private::de::Content>(&mut __map)); // Visit the second key. match #next_relevant_key { // Second key is the tag. _serde::__private::Some(_serde::__private::de::TagOrContentField::Tag) => { let __deserializer = _serde::__private::de::ContentDeserializer::<__A::Error>::new(__content); #finish_content_then_tag } // Second key is a duplicate of the content. _serde::__private::Some(_serde::__private::de::TagOrContentField::Content) => { _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#content)) } // There is no second key. _serde::__private::None => { _serde::__private::Err(<__A::Error as _serde::de::Error>::missing_field(#tag)) } } } // There is no first key. _serde::__private::None => { _serde::__private::Err(<__A::Error as _serde::de::Error>::missing_field(#tag)) } } } fn visit_seq<__A>(self, mut __seq: __A) -> _serde::__private::Result where __A: _serde::de::SeqAccess<#delife>, { // Visit the first element - the tag. match try!(_serde::de::SeqAccess::next_element(&mut __seq)) { _serde::__private::Some(__field) => { // Visit the second element - the content. 
match try!(_serde::de::SeqAccess::next_element_seed( &mut __seq, __Seed { field: __field, marker: _serde::__private::PhantomData, lifetime: _serde::__private::PhantomData, }, )) { _serde::__private::Some(__ret) => _serde::__private::Ok(__ret), // There is no second element. _serde::__private::None => { _serde::__private::Err(_serde::de::Error::invalid_length(1, &self)) } } } // There is no first element. _serde::__private::None => { _serde::__private::Err(_serde::de::Error::invalid_length(0, &self)) } } } } const FIELDS: &'static [&'static str] = &[#tag, #content]; _serde::Deserializer::deserialize_struct( __deserializer, #type_name, FIELDS, __Visitor { marker: _serde::__private::PhantomData::<#this #ty_generics>, lifetime: _serde::__private::PhantomData, }, ) } } fn deserialize_untagged_enum( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, ) -> Fragment { let attempts = variants .iter() .filter(|variant| !variant.attrs.skip_deserializing()) .map(|variant| { Expr(deserialize_untagged_variant( params, variant, cattrs, quote!( _serde::__private::de::ContentRefDeserializer::<__D::Error>::new(&__content) ), )) }); // TODO this message could be better by saving the errors from the failed // attempts. The heuristic used by TOML was to count the number of fields // processed before an error, and use the error that happened after the // largest number of fields. I'm not sure I like that. Maybe it would be // better to save all the errors and combine them into one message that // explains why none of the variants matched. let fallthrough_msg = format!( "data did not match any variant of untagged enum {}", params.type_name() ); let fallthrough_msg = cattrs.expecting().unwrap_or(&fallthrough_msg); quote_block! { let __content = try!(<_serde::__private::de::Content as _serde::Deserialize>::deserialize(__deserializer)); #( if let _serde::__private::Ok(__ok) = #attempts { return _serde::__private::Ok(__ok); } )* _serde::__private::Err(_serde::de::Error::custom(#fallthrough_msg)) } } fn deserialize_externally_tagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, ) -> Fragment { if let Some(path) = variant.attrs.deserialize_with() { let (wrapper, wrapper_ty, unwrap_fn) = wrap_deserialize_variant_with(params, variant, path); return quote_block! { #wrapper _serde::__private::Result::map( _serde::de::VariantAccess::newtype_variant::<#wrapper_ty>(__variant), #unwrap_fn) }; } let variant_ident = &variant.ident; match variant.style { Style::Unit => { let this = ¶ms.this; quote_block! { try!(_serde::de::VariantAccess::unit_variant(__variant)); _serde::__private::Ok(#this::#variant_ident) } } Style::Newtype => deserialize_externally_tagged_newtype_variant( variant_ident, params, &variant.fields[0], cattrs, ), Style::Tuple => { deserialize_tuple(Some(variant_ident), params, &variant.fields, cattrs, None) } Style::Struct => deserialize_struct( Some(variant_ident), params, &variant.fields, cattrs, None, &Untagged::No, ), } } // Generates significant part of the visit_seq and visit_map bodies of visitors // for the variants of internally tagged enum. 
fn deserialize_internally_tagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, deserializer: TokenStream, ) -> Fragment { if variant.attrs.deserialize_with().is_some() { return deserialize_untagged_variant(params, variant, cattrs, deserializer); } let variant_ident = &variant.ident; match effective_style(variant) { Style::Unit => { let this = ¶ms.this; let type_name = params.type_name(); let variant_name = variant.ident.to_string(); let default = variant.fields.get(0).map(|field| { let default = Expr(expr_is_missing(field, cattrs)); quote!((#default)) }); quote_block! { try!(_serde::Deserializer::deserialize_any(#deserializer, _serde::__private::de::InternallyTaggedUnitVisitor::new(#type_name, #variant_name))); _serde::__private::Ok(#this::#variant_ident #default) } } Style::Newtype => deserialize_untagged_newtype_variant( variant_ident, params, &variant.fields[0], &deserializer, ), Style::Struct => deserialize_struct( Some(variant_ident), params, &variant.fields, cattrs, Some(deserializer), &Untagged::No, ), Style::Tuple => unreachable!("checked in serde_derive_internals"), } } fn deserialize_untagged_variant( params: &Parameters, variant: &Variant, cattrs: &attr::Container, deserializer: TokenStream, ) -> Fragment { if let Some(path) = variant.attrs.deserialize_with() { let unwrap_fn = unwrap_to_variant_closure(params, variant, false); return quote_block! { _serde::__private::Result::map(#path(#deserializer), #unwrap_fn) }; } let variant_ident = &variant.ident; match effective_style(variant) { Style::Unit => { let this = ¶ms.this; let type_name = params.type_name(); let variant_name = variant.ident.to_string(); let default = variant.fields.get(0).map(|field| { let default = Expr(expr_is_missing(field, cattrs)); quote!((#default)) }); quote_expr! { match _serde::Deserializer::deserialize_any( #deserializer, _serde::__private::de::UntaggedUnitVisitor::new(#type_name, #variant_name) ) { _serde::__private::Ok(()) => _serde::__private::Ok(#this::#variant_ident #default), _serde::__private::Err(__err) => _serde::__private::Err(__err), } } } Style::Newtype => deserialize_untagged_newtype_variant( variant_ident, params, &variant.fields[0], &deserializer, ), Style::Tuple => deserialize_tuple( Some(variant_ident), params, &variant.fields, cattrs, Some(deserializer), ), Style::Struct => deserialize_struct( Some(variant_ident), params, &variant.fields, cattrs, Some(deserializer), &Untagged::Yes, ), } } fn deserialize_externally_tagged_newtype_variant( variant_ident: &syn::Ident, params: &Parameters, field: &Field, cattrs: &attr::Container, ) -> Fragment { let this = ¶ms.this; if field.attrs.skip_deserializing() { let this = ¶ms.this; let default = Expr(expr_is_missing(field, cattrs)); return quote_block! { try!(_serde::de::VariantAccess::unit_variant(__variant)); _serde::__private::Ok(#this::#variant_ident(#default)) }; } match field.attrs.deserialize_with() { None => { let field_ty = field.ty; let span = field.original.span(); let func = quote_spanned!(span=> _serde::de::VariantAccess::newtype_variant::<#field_ty>); quote_expr! { _serde::__private::Result::map(#func(__variant), #this::#variant_ident) } } Some(path) => { let (wrapper, wrapper_ty) = wrap_deserialize_field_with(params, field.ty, path); quote_block! 
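// deserialize_untagged_enum above buffers the input as Content and then tries
// each variant in declaration order. Illustrative sketch (assumes serde and
// serde_json):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     #[serde(untagged)]
//     enum IntOrString { I(i64), S(String) }
//
//     // The first variant that deserializes successfully wins:
//     let a: IntOrString = serde_json::from_str("3").unwrap();        // I(3)
//     let b: IntOrString = serde_json::from_str("\"hi\"").unwrap();   // S("hi")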
{ #wrapper _serde::__private::Result::map( _serde::de::VariantAccess::newtype_variant::<#wrapper_ty>(__variant), |__wrapper| #this::#variant_ident(__wrapper.value)) } } } } fn deserialize_untagged_newtype_variant( variant_ident: &syn::Ident, params: &Parameters, field: &Field, deserializer: &TokenStream, ) -> Fragment { let this = ¶ms.this; let field_ty = field.ty; match field.attrs.deserialize_with() { None => { let span = field.original.span(); let func = quote_spanned!(span=> <#field_ty as _serde::Deserialize>::deserialize); quote_expr! { _serde::__private::Result::map(#func(#deserializer), #this::#variant_ident) } } Some(path) => { quote_block! { let __value: _serde::__private::Result<#field_ty, _> = #path(#deserializer); _serde::__private::Result::map(__value, #this::#variant_ident) } } } } fn deserialize_generated_identifier( fields: &[(String, Ident, Vec)], cattrs: &attr::Container, is_variant: bool, other_idx: Option, ) -> Fragment { let this = quote!(__Field); let field_idents: &Vec<_> = &fields.iter().map(|(_, ident, _)| ident).collect(); let (ignore_variant, fallthrough) = if !is_variant && cattrs.has_flatten() { let ignore_variant = quote!(__other(_serde::__private::de::Content<'de>),); let fallthrough = quote!(_serde::__private::Ok(__Field::__other(__value))); (Some(ignore_variant), Some(fallthrough)) } else if let Some(other_idx) = other_idx { let ignore_variant = fields[other_idx].1.clone(); let fallthrough = quote!(_serde::__private::Ok(__Field::#ignore_variant)); (None, Some(fallthrough)) } else if is_variant || cattrs.deny_unknown_fields() { (None, None) } else { let ignore_variant = quote!(__ignore,); let fallthrough = quote!(_serde::__private::Ok(__Field::__ignore)); (Some(ignore_variant), Some(fallthrough)) }; let visitor_impl = Stmts(deserialize_identifier( &this, fields, is_variant, fallthrough, None, !is_variant && cattrs.has_flatten(), None, )); let lifetime = if !is_variant && cattrs.has_flatten() { Some(quote!(<'de>)) } else { None }; quote_block! { #[allow(non_camel_case_types)] enum __Field #lifetime { #(#field_idents,)* #ignore_variant } struct __FieldVisitor; impl<'de> _serde::de::Visitor<'de> for __FieldVisitor { type Value = __Field #lifetime; #visitor_impl } impl<'de> _serde::Deserialize<'de> for __Field #lifetime { #[inline] fn deserialize<__D>(__deserializer: __D) -> _serde::__private::Result where __D: _serde::Deserializer<'de>, { _serde::Deserializer::deserialize_identifier(__deserializer, __FieldVisitor) } } } } // Generates `Deserialize::deserialize` body for an enum with // `serde(field_identifier)` or `serde(variant_identifier)` attribute. fn deserialize_custom_identifier( params: &Parameters, variants: &[Variant], cattrs: &attr::Container, ) -> Fragment { let is_variant = match cattrs.identifier() { attr::Identifier::Variant => true, attr::Identifier::Field => false, attr::Identifier::No => unreachable!(), }; let this = ¶ms.this; let this = quote!(#this); let (ordinary, fallthrough, fallthrough_borrowed) = if let Some(last) = variants.last() { let last_ident = &last.ident; if last.attrs.other() { // Process `serde(other)` attribute. It would always be found on the // last variant (checked in `check_identifier`), so all preceding // are ordinary variants. let ordinary = &variants[..variants.len() - 1]; let fallthrough = quote!(_serde::__private::Ok(#this::#last_ident)); (ordinary, Some(fallthrough), None) } else if let Style::Newtype = last.style { let ordinary = &variants[..variants.len() - 1]; let fallthrough = |value| { quote! 
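// deserialize_custom_identifier above serves enums annotated with
// #[serde(field_identifier)] or #[serde(variant_identifier)]. Illustrative
// sketch (assumes serde):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     #[serde(variant_identifier, rename_all = "snake_case")]
//     enum Unit { Meters, Seconds }
//
//     // Deserializes from the identifier alone, e.g. the string "meters";
//     // non-self-describing formats may instead supply the u64 index handled
//     // by the generated visit_u64.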
{ _serde::__private::Result::map( _serde::Deserialize::deserialize( _serde::__private::de::IdentifierDeserializer::from(#value) ), #this::#last_ident) } }; ( ordinary, Some(fallthrough(quote!(__value))), Some(fallthrough(quote!(_serde::__private::de::Borrowed( __value )))), ) } else { (variants, None, None) } } else { (variants, None, None) }; let names_idents: Vec<_> = ordinary .iter() .map(|variant| { ( variant.attrs.name().deserialize_name(), variant.ident.clone(), variant.attrs.aliases(), ) }) .collect(); let names = names_idents.iter().map(|(name, _, _)| name); let names_const = if fallthrough.is_some() { None } else if is_variant { let variants = quote! { const VARIANTS: &'static [&'static str] = &[ #(#names),* ]; }; Some(variants) } else { let fields = quote! { const FIELDS: &'static [&'static str] = &[ #(#names),* ]; }; Some(fields) }; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); let visitor_impl = Stmts(deserialize_identifier( &this, &names_idents, is_variant, fallthrough, fallthrough_borrowed, false, cattrs.expecting(), )); quote_block! { #names_const struct __FieldVisitor #de_impl_generics #where_clause { marker: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::de::Visitor<#delife> for __FieldVisitor #de_ty_generics #where_clause { type Value = #this #ty_generics; #visitor_impl } let __visitor = __FieldVisitor { marker: _serde::__private::PhantomData::<#this #ty_generics>, lifetime: _serde::__private::PhantomData, }; _serde::Deserializer::deserialize_identifier(__deserializer, __visitor) } } fn deserialize_identifier( this: &TokenStream, fields: &[(String, Ident, Vec)], is_variant: bool, fallthrough: Option, fallthrough_borrowed: Option, collect_other_fields: bool, expecting: Option<&str>, ) -> Fragment { let mut flat_fields = Vec::new(); for (_, ident, aliases) in fields { flat_fields.extend(aliases.iter().map(|alias| (alias, ident))); } let field_strs: &Vec<_> = &flat_fields.iter().map(|(name, _)| name).collect(); let field_bytes: &Vec<_> = &flat_fields .iter() .map(|(name, _)| Literal::byte_string(name.as_bytes())) .collect(); let constructors: &Vec<_> = &flat_fields .iter() .map(|(_, ident)| quote!(#this::#ident)) .collect(); let main_constructors: &Vec<_> = &fields .iter() .map(|(_, ident, _)| quote!(#this::#ident)) .collect(); let expecting = expecting.unwrap_or(if is_variant { "variant identifier" } else { "field identifier" }); let index_expecting = if is_variant { "variant" } else { "field" }; let bytes_to_str = if fallthrough.is_some() || collect_other_fields { None } else { Some(quote! { let __value = &_serde::__private::from_utf8_lossy(__value); }) }; let ( value_as_str_content, value_as_borrowed_str_content, value_as_bytes_content, value_as_borrowed_bytes_content, ) = if collect_other_fields { ( Some(quote! { let __value = _serde::__private::de::Content::String(_serde::__private::ToString::to_string(__value)); }), Some(quote! { let __value = _serde::__private::de::Content::Str(__value); }), Some(quote! { let __value = _serde::__private::de::Content::ByteBuf(__value.to_vec()); }), Some(quote! { let __value = _serde::__private::de::Content::Bytes(__value); }), ) } else { (None, None, None, None) }; let fallthrough_arm_tokens; let fallthrough_arm = if let Some(fallthrough) = &fallthrough { fallthrough } else if is_variant { fallthrough_arm_tokens = quote! 
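// Every #[serde(alias = "...")] value ends up in the `aliases` list and is
// flattened into the (name, ident) match arms built above. Illustrative
// sketch (assumes serde; either spelling is accepted on input):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     struct Config {
//         #[serde(alias = "colour")]
//         color: String,     // {"color":"red"} and {"colour":"red"} both work
//     }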
{ _serde::__private::Err(_serde::de::Error::unknown_variant(__value, VARIANTS)) }; &fallthrough_arm_tokens } else { fallthrough_arm_tokens = quote! { _serde::__private::Err(_serde::de::Error::unknown_field(__value, FIELDS)) }; &fallthrough_arm_tokens }; let u64_fallthrough_arm_tokens; let u64_fallthrough_arm = if let Some(fallthrough) = &fallthrough { fallthrough } else { let fallthrough_msg = format!("{} index 0 <= i < {}", index_expecting, fields.len()); u64_fallthrough_arm_tokens = quote! { _serde::__private::Err(_serde::de::Error::invalid_value( _serde::de::Unexpected::Unsigned(__value), &#fallthrough_msg, )) }; &u64_fallthrough_arm_tokens }; let variant_indices = 0_u64..; let visit_other = if collect_other_fields { quote! { fn visit_bool<__E>(self, __value: bool) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::Bool(__value))) } fn visit_i8<__E>(self, __value: i8) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::I8(__value))) } fn visit_i16<__E>(self, __value: i16) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::I16(__value))) } fn visit_i32<__E>(self, __value: i32) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::I32(__value))) } fn visit_i64<__E>(self, __value: i64) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::I64(__value))) } fn visit_u8<__E>(self, __value: u8) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::U8(__value))) } fn visit_u16<__E>(self, __value: u16) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::U16(__value))) } fn visit_u32<__E>(self, __value: u32) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::U32(__value))) } fn visit_u64<__E>(self, __value: u64) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::U64(__value))) } fn visit_f32<__E>(self, __value: f32) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::F32(__value))) } fn visit_f64<__E>(self, __value: f64) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::F64(__value))) } fn visit_char<__E>(self, __value: char) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::Char(__value))) } fn visit_unit<__E>(self) -> _serde::__private::Result where __E: _serde::de::Error, { _serde::__private::Ok(__Field::__other(_serde::__private::de::Content::Unit)) } } } else { quote! 
{ fn visit_u64<__E>(self, __value: u64) -> _serde::__private::Result where __E: _serde::de::Error, { match __value { #( #variant_indices => _serde::__private::Ok(#main_constructors), )* _ => #u64_fallthrough_arm, } } } }; let visit_borrowed = if fallthrough_borrowed.is_some() || collect_other_fields { let fallthrough_borrowed_arm = fallthrough_borrowed.as_ref().unwrap_or(fallthrough_arm); Some(quote! { fn visit_borrowed_str<__E>(self, __value: &'de str) -> _serde::__private::Result where __E: _serde::de::Error, { match __value { #( #field_strs => _serde::__private::Ok(#constructors), )* _ => { #value_as_borrowed_str_content #fallthrough_borrowed_arm } } } fn visit_borrowed_bytes<__E>(self, __value: &'de [u8]) -> _serde::__private::Result where __E: _serde::de::Error, { match __value { #( #field_bytes => _serde::__private::Ok(#constructors), )* _ => { #bytes_to_str #value_as_borrowed_bytes_content #fallthrough_borrowed_arm } } } }) } else { None }; quote_block! { fn expecting(&self, __formatter: &mut _serde::__private::Formatter) -> _serde::__private::fmt::Result { _serde::__private::Formatter::write_str(__formatter, #expecting) } #visit_other fn visit_str<__E>(self, __value: &str) -> _serde::__private::Result where __E: _serde::de::Error, { match __value { #( #field_strs => _serde::__private::Ok(#constructors), )* _ => { #value_as_str_content #fallthrough_arm } } } fn visit_bytes<__E>(self, __value: &[u8]) -> _serde::__private::Result where __E: _serde::de::Error, { match __value { #( #field_bytes => _serde::__private::Ok(#constructors), )* _ => { #bytes_to_str #value_as_bytes_content #fallthrough_arm } } } #visit_borrowed } } fn deserialize_struct_as_struct_visitor( struct_path: &TokenStream, params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> (Fragment, Option, Fragment) { assert!(!cattrs.has_flatten()); let field_names_idents: Vec<_> = fields .iter() .enumerate() .filter(|&(_, field)| !field.attrs.skip_deserializing()) .map(|(i, field)| { ( field.attrs.name().deserialize_name(), field_i(i), field.attrs.aliases(), ) }) .collect(); let fields_stmt = { let field_names = field_names_idents.iter().map(|(name, _, _)| name); quote_block! { const FIELDS: &'static [&'static str] = &[ #(#field_names),* ]; } }; let field_visitor = deserialize_generated_identifier(&field_names_idents, cattrs, false, None); let visit_map = deserialize_map(struct_path, params, fields, cattrs); (field_visitor, Some(fields_stmt), visit_map) } fn deserialize_struct_as_map_visitor( struct_path: &TokenStream, params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> (Fragment, Option, Fragment) { let field_names_idents: Vec<_> = fields .iter() .enumerate() .filter(|&(_, field)| !field.attrs.skip_deserializing() && !field.attrs.flatten()) .map(|(i, field)| { ( field.attrs.name().deserialize_name(), field_i(i), field.attrs.aliases(), ) }) .collect(); let field_visitor = deserialize_generated_identifier(&field_names_idents, cattrs, false, None); let visit_map = deserialize_map(struct_path, params, fields, cattrs); (field_visitor, None, visit_map) } fn deserialize_map( struct_path: &TokenStream, params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> Fragment { // Create the field names for the fields. let fields_names: Vec<_> = fields .iter() .enumerate() .map(|(i, field)| (field, field_i(i))) .collect(); // Declare each field that will be deserialized. 
let let_values = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing() && !field.attrs.flatten()) .map(|(field, name)| { let field_ty = field.ty; quote! { let mut #name: _serde::__private::Option<#field_ty> = _serde::__private::None; } }); // Collect contents for flatten fields into a buffer let let_collect = if cattrs.has_flatten() { Some(quote! { let mut __collect = _serde::__private::Vec::<_serde::__private::Option<( _serde::__private::de::Content, _serde::__private::de::Content )>>::new(); }) } else { None }; // Match arms to extract a value for a field. let value_arms = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing() && !field.attrs.flatten()) .map(|(field, name)| { let deser_name = field.attrs.name().deserialize_name(); let visit = match field.attrs.deserialize_with() { None => { let field_ty = field.ty; let span = field.original.span(); let func = quote_spanned!(span=> _serde::de::MapAccess::next_value::<#field_ty>); quote! { try!(#func(&mut __map)) } } Some(path) => { let (wrapper, wrapper_ty) = wrap_deserialize_field_with(params, field.ty, path); quote!({ #wrapper match _serde::de::MapAccess::next_value::<#wrapper_ty>(&mut __map) { _serde::__private::Ok(__wrapper) => __wrapper.value, _serde::__private::Err(__err) => { return _serde::__private::Err(__err); } } }) } }; quote! { __Field::#name => { if _serde::__private::Option::is_some(&#name) { return _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#deser_name)); } #name = _serde::__private::Some(#visit); } } }); // Visit ignored values to consume them let ignored_arm = if cattrs.has_flatten() { Some(quote! { __Field::__other(__name) => { __collect.push(_serde::__private::Some(( __name, try!(_serde::de::MapAccess::next_value(&mut __map))))); } }) } else if cattrs.deny_unknown_fields() { None } else { Some(quote! { _ => { let _ = try!(_serde::de::MapAccess::next_value::<_serde::de::IgnoredAny>(&mut __map)); } }) }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let match_keys = if cattrs.deny_unknown_fields() && all_skipped { quote! { // FIXME: Once we drop support for Rust 1.15: // let _serde::__private::None::<__Field> = try!(_serde::de::MapAccess::next_key(&mut __map)); _serde::__private::Option::map( try!(_serde::de::MapAccess::next_key::<__Field>(&mut __map)), |__impossible| match __impossible {}); } } else { quote! { while let _serde::__private::Some(__key) = try!(_serde::de::MapAccess::next_key::<__Field>(&mut __map)) { match __key { #(#value_arms)* #ignored_arm } } } }; let extract_values = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing() && !field.attrs.flatten()) .map(|(field, name)| { let missing_expr = Match(expr_is_missing(field, cattrs)); quote! { let #name = match #name { _serde::__private::Some(#name) => #name, _serde::__private::None => #missing_expr }; } }); let extract_collected = fields_names .iter() .filter(|&&(field, _)| field.attrs.flatten() && !field.attrs.skip_deserializing()) .map(|(field, name)| { let field_ty = field.ty; let func = match field.attrs.deserialize_with() { None => { let span = field.original.span(); quote_spanned!(span=> _serde::de::Deserialize::deserialize) } Some(path) => quote!(#path), }; quote! { let #name: #field_ty = try!(#func( _serde::__private::de::FlatMapDeserializer( &mut __collect, _serde::__private::PhantomData))); } }); let collected_deny_unknown_fields = if cattrs.has_flatten() && cattrs.deny_unknown_fields() { Some(quote! 
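// When a field is #[serde(flatten)]-ed, keys that do not match a declared
// field are not ignored; they are pushed into the __collect buffer above and
// replayed through FlatMapDeserializer for the flattened field. Illustrative
// sketch (assumes serde and serde_json):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     struct Pagination { limit: u32, offset: u32 }
//
//     #[derive(Deserialize, Debug)]
//     struct Users {
//         users: Vec<String>,
//         #[serde(flatten)]
//         page: Pagination,
//     }
//
//     let u: Users = serde_json::from_str(
//         r#"{"users":["a"],"limit":10,"offset":0}"#).unwrap();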
{ if let _serde::__private::Some(_serde::__private::Some((__key, _))) = __collect.into_iter().filter(_serde::__private::Option::is_some).next() { if let _serde::__private::Some(__key) = __key.as_str() { return _serde::__private::Err( _serde::de::Error::custom(format_args!("unknown field `{}`", &__key))); } else { return _serde::__private::Err( _serde::de::Error::custom(format_args!("unexpected map key"))); } } }) } else { None }; let result = fields_names.iter().map(|(field, name)| { let member = &field.member; if field.attrs.skip_deserializing() { let value = Expr(expr_is_missing(field, cattrs)); quote!(#member: #value) } else { quote!(#member: #name) } }); let let_default = match cattrs.default() { attr::Default::Default => Some(quote!( let __default: Self::Value = _serde::__private::Default::default(); )), attr::Default::Path(path) => Some(quote!( let __default: Self::Value = #path(); )), attr::Default::None => { // We don't need the default value, to prevent an unused variable warning // we'll leave the line empty. None } }; let mut result = quote!(#struct_path { #(#result),* }); if params.has_getter { let this = ¶ms.this; result = quote! { _serde::__private::Into::<#this>::into(#result) }; } quote_block! { #(#let_values)* #let_collect #match_keys #let_default #(#extract_values)* #(#extract_collected)* #collected_deny_unknown_fields _serde::__private::Ok(#result) } } #[cfg(feature = "deserialize_in_place")] fn deserialize_struct_as_struct_in_place_visitor( params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> (Fragment, Fragment, Fragment) { assert!(!cattrs.has_flatten()); let field_names_idents: Vec<_> = fields .iter() .enumerate() .filter(|&(_, field)| !field.attrs.skip_deserializing()) .map(|(i, field)| { ( field.attrs.name().deserialize_name(), field_i(i), field.attrs.aliases(), ) }) .collect(); let fields_stmt = { let field_names = field_names_idents.iter().map(|(name, _, _)| name); quote_block! { const FIELDS: &'static [&'static str] = &[ #(#field_names),* ]; } }; let field_visitor = deserialize_generated_identifier(&field_names_idents, cattrs, false, None); let visit_map = deserialize_map_in_place(params, fields, cattrs); (field_visitor, fields_stmt, visit_map) } #[cfg(feature = "deserialize_in_place")] fn deserialize_map_in_place( params: &Parameters, fields: &[Field], cattrs: &attr::Container, ) -> Fragment { assert!(!cattrs.has_flatten()); // Create the field names for the fields. let fields_names: Vec<_> = fields .iter() .enumerate() .map(|(i, field)| (field, field_i(i))) .collect(); // For deserialize_in_place, declare booleans for each field that will be // deserialized. let let_flags = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing()) .map(|(_, name)| { quote! { let mut #name: bool = false; } }); // Match arms to extract a value for a field. let value_arms_from = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing()) .map(|(field, name)| { let deser_name = field.attrs.name().deserialize_name(); let member = &field.member; let visit = match field.attrs.deserialize_with() { None => { quote! 
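// The let_default / expr_is_missing handling above is what backs the
// `default` attributes. Illustrative sketch (assumes serde; `default_timeout`
// is a hypothetical helper):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug, Default)]
//     #[serde(default)]                         // attr::Default::Default (container)
//     struct Opts { verbose: bool, retries: u32 }
//
//     #[derive(Deserialize, Debug)]
//     struct Req {
//         url: String,
//         #[serde(default = "default_timeout")] // attr::Default::Path (field)
//         timeout_ms: u64,
//     }
//
//     fn default_timeout() -> u64 { 30_000 }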
{ try!(_serde::de::MapAccess::next_value_seed(&mut __map, _serde::__private::de::InPlaceSeed(&mut self.place.#member))) } } Some(path) => { let (wrapper, wrapper_ty) = wrap_deserialize_field_with(params, field.ty, path); quote!({ #wrapper self.place.#member = match _serde::de::MapAccess::next_value::<#wrapper_ty>(&mut __map) { _serde::__private::Ok(__wrapper) => __wrapper.value, _serde::__private::Err(__err) => { return _serde::__private::Err(__err); } }; }) } }; quote! { __Field::#name => { if #name { return _serde::__private::Err(<__A::Error as _serde::de::Error>::duplicate_field(#deser_name)); } #visit; #name = true; } } }); // Visit ignored values to consume them let ignored_arm = if cattrs.deny_unknown_fields() { None } else { Some(quote! { _ => { let _ = try!(_serde::de::MapAccess::next_value::<_serde::de::IgnoredAny>(&mut __map)); } }) }; let all_skipped = fields.iter().all(|field| field.attrs.skip_deserializing()); let match_keys = if cattrs.deny_unknown_fields() && all_skipped { quote! { // FIXME: Once we drop support for Rust 1.15: // let _serde::__private::None::<__Field> = try!(_serde::de::MapAccess::next_key(&mut __map)); _serde::__private::Option::map( try!(_serde::de::MapAccess::next_key::<__Field>(&mut __map)), |__impossible| match __impossible {}); } } else { quote! { while let _serde::__private::Some(__key) = try!(_serde::de::MapAccess::next_key::<__Field>(&mut __map)) { match __key { #(#value_arms_from)* #ignored_arm } } } }; let check_flags = fields_names .iter() .filter(|&&(field, _)| !field.attrs.skip_deserializing()) .map(|(field, name)| { let missing_expr = expr_is_missing(field, cattrs); // If missing_expr unconditionally returns an error, don't try // to assign its value to self.place. if field.attrs.default().is_none() && cattrs.default().is_none() && field.attrs.deserialize_with().is_some() { let missing_expr = Stmts(missing_expr); quote! { if !#name { #missing_expr; } } } else { let member = &field.member; let missing_expr = Expr(missing_expr); quote! { if !#name { self.place.#member = #missing_expr; }; } } }); let this = ¶ms.this; let (_, _, ty_generics, _) = split_with_de_lifetime(params); let let_default = match cattrs.default() { attr::Default::Default => Some(quote!( let __default: #this #ty_generics = _serde::__private::Default::default(); )), attr::Default::Path(path) => Some(quote!( let __default: #this #ty_generics = #path(); )), attr::Default::None => { // We don't need the default value, to prevent an unused variable warning // we'll leave the line empty. None } }; quote_block! { #(#let_flags)* #match_keys #let_default #(#check_flags)* _serde::__private::Ok(()) } } fn field_i(i: usize) -> Ident { Ident::new(&format!("__field{}", i), Span::call_site()) } /// This function wraps the expression in `#[serde(deserialize_with = "...")]` /// in a trait to prevent it from accessing the internal `Deserialize` state. fn wrap_deserialize_with( params: &Parameters, value_ty: &TokenStream, deserialize_with: &syn::ExprPath, ) -> (TokenStream, TokenStream) { let this = ¶ms.this; let (de_impl_generics, de_ty_generics, ty_generics, where_clause) = split_with_de_lifetime(params); let delife = params.borrowed.de_lifetime(); let wrapper = quote! 
{ struct __DeserializeWith #de_impl_generics #where_clause { value: #value_ty, phantom: _serde::__private::PhantomData<#this #ty_generics>, lifetime: _serde::__private::PhantomData<&#delife ()>, } impl #de_impl_generics _serde::Deserialize<#delife> for __DeserializeWith #de_ty_generics #where_clause { fn deserialize<__D>(__deserializer: __D) -> _serde::__private::Result where __D: _serde::Deserializer<#delife>, { _serde::__private::Ok(__DeserializeWith { value: try!(#deserialize_with(__deserializer)), phantom: _serde::__private::PhantomData, lifetime: _serde::__private::PhantomData, }) } } }; let wrapper_ty = quote!(__DeserializeWith #de_ty_generics); (wrapper, wrapper_ty) } fn wrap_deserialize_field_with( params: &Parameters, field_ty: &syn::Type, deserialize_with: &syn::ExprPath, ) -> (TokenStream, TokenStream) { wrap_deserialize_with(params, "e!(#field_ty), deserialize_with) } fn wrap_deserialize_variant_with( params: &Parameters, variant: &Variant, deserialize_with: &syn::ExprPath, ) -> (TokenStream, TokenStream, TokenStream) { let field_tys = variant.fields.iter().map(|field| field.ty); let (wrapper, wrapper_ty) = wrap_deserialize_with(params, "e!((#(#field_tys),*)), deserialize_with); let unwrap_fn = unwrap_to_variant_closure(params, variant, true); (wrapper, wrapper_ty, unwrap_fn) } // Generates closure that converts single input parameter to the final value. fn unwrap_to_variant_closure( params: &Parameters, variant: &Variant, with_wrapper: bool, ) -> TokenStream { let this = ¶ms.this; let variant_ident = &variant.ident; let (arg, wrapper) = if with_wrapper { (quote! { __wrap }, quote! { __wrap.value }) } else { let field_tys = variant.fields.iter().map(|field| field.ty); (quote! { __wrap: (#(#field_tys),*) }, quote! { __wrap }) }; let field_access = (0..variant.fields.len()).map(|n| { Member::Unnamed(Index { index: n as u32, span: Span::call_site(), }) }); match variant.style { Style::Struct if variant.fields.len() == 1 => { let member = &variant.fields[0].member; quote! { |#arg| #this::#variant_ident { #member: #wrapper } } } Style::Struct => { let members = variant.fields.iter().map(|field| &field.member); quote! { |#arg| #this::#variant_ident { #(#members: #wrapper.#field_access),* } } } Style::Tuple => quote! { |#arg| #this::#variant_ident(#(#wrapper.#field_access),*) }, Style::Newtype => quote! { |#arg| #this::#variant_ident(#wrapper) }, Style::Unit => quote! { |#arg| #this::#variant_ident }, } } fn expr_is_missing(field: &Field, cattrs: &attr::Container) -> Fragment { match field.attrs.default() { attr::Default::Default => { let span = field.original.span(); let func = quote_spanned!(span=> _serde::__private::Default::default); return quote_expr!(#func()); } attr::Default::Path(path) => { return quote_expr!(#path()); } attr::Default::None => { /* below */ } } match *cattrs.default() { attr::Default::Default | attr::Default::Path(_) => { let member = &field.member; return quote_expr!(__default.#member); } attr::Default::None => { /* below */ } } let name = field.attrs.name().deserialize_name(); match field.attrs.deserialize_with() { None => { let span = field.original.span(); let func = quote_spanned!(span=> _serde::__private::de::missing_field); quote_expr! { try!(#func(#name)) } } Some(_) => { quote_expr! 
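// The __DeserializeWith wrapper above is how #[serde(deserialize_with = "...")]
// is invoked without exposing derive internals. Illustrative sketch (assumes
// serde; `from_str_u32` is a hypothetical helper):
//
//     use serde::{Deserialize, Deserializer};
//
//     fn from_str_u32<'de, D>(d: D) -> Result<u32, D::Error>
//     where
//         D: Deserializer<'de>,
//     {
//         let s = String::deserialize(d)?;
//         s.parse().map_err(serde::de::Error::custom)
//     }
//
//     #[derive(Deserialize, Debug)]
//     struct Item {
//         #[serde(deserialize_with = "from_str_u32")]
//         count: u32,        // accepts "42" (a JSON string) for this field
//     }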
{ return _serde::__private::Err(<__A::Error as _serde::de::Error>::missing_field(#name)) } } } } fn effective_style(variant: &Variant) -> Style { match variant.style { Style::Newtype if variant.fields[0].attrs.skip_deserializing() => Style::Unit, other => other, } } struct DeImplGenerics<'a>(&'a Parameters); #[cfg(feature = "deserialize_in_place")] struct InPlaceImplGenerics<'a>(&'a Parameters); impl<'a> ToTokens for DeImplGenerics<'a> { fn to_tokens(&self, tokens: &mut TokenStream) { let mut generics = self.0.generics.clone(); if let Some(de_lifetime) = self.0.borrowed.de_lifetime_def() { generics.params = Some(syn::GenericParam::Lifetime(de_lifetime)) .into_iter() .chain(generics.params) .collect(); } let (impl_generics, _, _) = generics.split_for_impl(); impl_generics.to_tokens(tokens); } } #[cfg(feature = "deserialize_in_place")] impl<'a> ToTokens for InPlaceImplGenerics<'a> { fn to_tokens(&self, tokens: &mut TokenStream) { let place_lifetime = place_lifetime(); let mut generics = self.0.generics.clone(); // Add lifetime for `&'place mut Self, and `'a: 'place` for param in &mut generics.params { match param { syn::GenericParam::Lifetime(param) => { param.bounds.push(place_lifetime.lifetime.clone()); } syn::GenericParam::Type(param) => { param.bounds.push(syn::TypeParamBound::Lifetime( place_lifetime.lifetime.clone(), )); } syn::GenericParam::Const(_) => {} } } generics.params = Some(syn::GenericParam::Lifetime(place_lifetime)) .into_iter() .chain(generics.params) .collect(); if let Some(de_lifetime) = self.0.borrowed.de_lifetime_def() { generics.params = Some(syn::GenericParam::Lifetime(de_lifetime)) .into_iter() .chain(generics.params) .collect(); } let (impl_generics, _, _) = generics.split_for_impl(); impl_generics.to_tokens(tokens); } } #[cfg(feature = "deserialize_in_place")] impl<'a> DeImplGenerics<'a> { fn in_place(self) -> InPlaceImplGenerics<'a> { InPlaceImplGenerics(self.0) } } struct DeTypeGenerics<'a>(&'a Parameters); #[cfg(feature = "deserialize_in_place")] struct InPlaceTypeGenerics<'a>(&'a Parameters); impl<'a> ToTokens for DeTypeGenerics<'a> { fn to_tokens(&self, tokens: &mut TokenStream) { let mut generics = self.0.generics.clone(); if self.0.borrowed.de_lifetime_def().is_some() { let def = syn::LifetimeDef { attrs: Vec::new(), lifetime: syn::Lifetime::new("'de", Span::call_site()), colon_token: None, bounds: Punctuated::new(), }; generics.params = Some(syn::GenericParam::Lifetime(def)) .into_iter() .chain(generics.params) .collect(); } let (_, ty_generics, _) = generics.split_for_impl(); ty_generics.to_tokens(tokens); } } #[cfg(feature = "deserialize_in_place")] impl<'a> ToTokens for InPlaceTypeGenerics<'a> { fn to_tokens(&self, tokens: &mut TokenStream) { let mut generics = self.0.generics.clone(); generics.params = Some(syn::GenericParam::Lifetime(place_lifetime())) .into_iter() .chain(generics.params) .collect(); if self.0.borrowed.de_lifetime_def().is_some() { let def = syn::LifetimeDef { attrs: Vec::new(), lifetime: syn::Lifetime::new("'de", Span::call_site()), colon_token: None, bounds: Punctuated::new(), }; generics.params = Some(syn::GenericParam::Lifetime(def)) .into_iter() .chain(generics.params) .collect(); } let (_, ty_generics, _) = generics.split_for_impl(); ty_generics.to_tokens(tokens); } } #[cfg(feature = "deserialize_in_place")] impl<'a> DeTypeGenerics<'a> { fn in_place(self) -> InPlaceTypeGenerics<'a> { InPlaceTypeGenerics(self.0) } } #[cfg(feature = "deserialize_in_place")] fn place_lifetime() -> syn::LifetimeDef { syn::LifetimeDef { attrs: 
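// The DeImplGenerics/DeTypeGenerics plumbing above threads the 'de lifetime
// through the generated impl, which is what allows zero-copy borrows.
// Illustrative sketch (assumes serde and serde_json; borrowing from JSON only
// works for escape-free strings):
//
//     use serde::Deserialize;
//
//     #[derive(Deserialize, Debug)]
//     struct Borrowed<'a> {
//         #[serde(borrow)]
//         name: &'a str,     // 'a is tied to the deserializer's 'de lifetime
//     }
//
//     let json = String::from(r#"{"name":"zero-copy"}"#);
//     let b: Borrowed = serde_json::from_str(&json).unwrap();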
Vec::new(), lifetime: syn::Lifetime::new("'place", Span::call_site()), colon_token: None, bounds: Punctuated::new(), } } fn split_with_de_lifetime( params: &Parameters, ) -> ( DeImplGenerics, DeTypeGenerics, syn::TypeGenerics, Option<&syn::WhereClause>, ) { let de_impl_generics = DeImplGenerics(params); let de_ty_generics = DeTypeGenerics(params); let (_, ty_generics, where_clause) = params.generics.split_for_impl(); (de_impl_generics, de_ty_generics, ty_generics, where_clause) } vendor/serde_derive/src/bound.rs0000664000175000017500000003372014172417313017563 0ustar mwhudsonmwhudsonuse std::collections::HashSet; use syn; use syn::punctuated::{Pair, Punctuated}; use internals::ast::{Container, Data}; use internals::{attr, ungroup}; use proc_macro2::Span; // Remove the default from every type parameter because in the generated impls // they look like associated types: "error: associated type bindings are not // allowed here". pub fn without_defaults(generics: &syn::Generics) -> syn::Generics { syn::Generics { params: generics .params .iter() .map(|param| match param { syn::GenericParam::Type(param) => syn::GenericParam::Type(syn::TypeParam { eq_token: None, default: None, ..param.clone() }), _ => param.clone(), }) .collect(), ..generics.clone() } } pub fn with_where_predicates( generics: &syn::Generics, predicates: &[syn::WherePredicate], ) -> syn::Generics { let mut generics = generics.clone(); generics .make_where_clause() .predicates .extend(predicates.iter().cloned()); generics } pub fn with_where_predicates_from_fields( cont: &Container, generics: &syn::Generics, from_field: fn(&attr::Field) -> Option<&[syn::WherePredicate]>, ) -> syn::Generics { let predicates = cont .data .all_fields() .filter_map(|field| from_field(&field.attrs)) .flat_map(<[syn::WherePredicate]>::to_vec); let mut generics = generics.clone(); generics.make_where_clause().predicates.extend(predicates); generics } pub fn with_where_predicates_from_variants( cont: &Container, generics: &syn::Generics, from_variant: fn(&attr::Variant) -> Option<&[syn::WherePredicate]>, ) -> syn::Generics { let variants = match &cont.data { Data::Enum(variants) => variants, Data::Struct(_, _) => { return generics.clone(); } }; let predicates = variants .iter() .filter_map(|variant| from_variant(&variant.attrs)) .flat_map(<[syn::WherePredicate]>::to_vec); let mut generics = generics.clone(); generics.make_where_clause().predicates.extend(predicates); generics } // Puts the given bound on any generic type parameters that are used in fields // for which filter returns true. // // For example, the following struct needs the bound `A: Serialize, B: // Serialize`. // // struct S<'b, A, B: 'b, C> { // a: A, // b: Option<&'b B> // #[serde(skip_serializing)] // c: C, // } pub fn with_bound( cont: &Container, generics: &syn::Generics, filter: fn(&attr::Field, Option<&attr::Variant>) -> bool, bound: &syn::Path, ) -> syn::Generics { struct FindTyParams<'ast> { // Set of all generic type parameters on the current struct (A, B, C in // the example). Initialized up front. all_type_params: HashSet, // Set of generic type parameters used in fields for which filter // returns true (A and B in the example). Filled in as the visitor sees // them. relevant_type_params: HashSet, // Fields whose type is an associated type of one of the generic type // parameters. 
associated_type_usage: Vec<&'ast syn::TypePath>, } impl<'ast> FindTyParams<'ast> { fn visit_field(&mut self, field: &'ast syn::Field) { if let syn::Type::Path(ty) = ungroup(&field.ty) { if let Some(Pair::Punctuated(t, _)) = ty.path.segments.pairs().next() { if self.all_type_params.contains(&t.ident) { self.associated_type_usage.push(ty); } } } self.visit_type(&field.ty); } fn visit_path(&mut self, path: &'ast syn::Path) { if let Some(seg) = path.segments.last() { if seg.ident == "PhantomData" { // Hardcoded exception, because PhantomData implements // Serialize and Deserialize whether or not T implements it. return; } } if path.leading_colon.is_none() && path.segments.len() == 1 { let id = &path.segments[0].ident; if self.all_type_params.contains(id) { self.relevant_type_params.insert(id.clone()); } } for segment in &path.segments { self.visit_path_segment(segment); } } // Everything below is simply traversing the syntax tree. fn visit_type(&mut self, ty: &'ast syn::Type) { match ty { syn::Type::Array(ty) => self.visit_type(&ty.elem), syn::Type::BareFn(ty) => { for arg in &ty.inputs { self.visit_type(&arg.ty); } self.visit_return_type(&ty.output); } syn::Type::Group(ty) => self.visit_type(&ty.elem), syn::Type::ImplTrait(ty) => { for bound in &ty.bounds { self.visit_type_param_bound(bound); } } syn::Type::Macro(ty) => self.visit_macro(&ty.mac), syn::Type::Paren(ty) => self.visit_type(&ty.elem), syn::Type::Path(ty) => { if let Some(qself) = &ty.qself { self.visit_type(&qself.ty); } self.visit_path(&ty.path); } syn::Type::Ptr(ty) => self.visit_type(&ty.elem), syn::Type::Reference(ty) => self.visit_type(&ty.elem), syn::Type::Slice(ty) => self.visit_type(&ty.elem), syn::Type::TraitObject(ty) => { for bound in &ty.bounds { self.visit_type_param_bound(bound); } } syn::Type::Tuple(ty) => { for elem in &ty.elems { self.visit_type(elem); } } syn::Type::Infer(_) | syn::Type::Never(_) | syn::Type::Verbatim(_) => {} #[cfg(test)] syn::Type::__TestExhaustive(_) => unimplemented!(), #[cfg(not(test))] _ => {} } } fn visit_path_segment(&mut self, segment: &'ast syn::PathSegment) { self.visit_path_arguments(&segment.arguments); } fn visit_path_arguments(&mut self, arguments: &'ast syn::PathArguments) { match arguments { syn::PathArguments::None => {} syn::PathArguments::AngleBracketed(arguments) => { for arg in &arguments.args { match arg { syn::GenericArgument::Type(arg) => self.visit_type(arg), syn::GenericArgument::Binding(arg) => self.visit_type(&arg.ty), syn::GenericArgument::Lifetime(_) | syn::GenericArgument::Constraint(_) | syn::GenericArgument::Const(_) => {} } } } syn::PathArguments::Parenthesized(arguments) => { for argument in &arguments.inputs { self.visit_type(argument); } self.visit_return_type(&arguments.output); } } } fn visit_return_type(&mut self, return_type: &'ast syn::ReturnType) { match return_type { syn::ReturnType::Default => {} syn::ReturnType::Type(_, output) => self.visit_type(output), } } fn visit_type_param_bound(&mut self, bound: &'ast syn::TypeParamBound) { match bound { syn::TypeParamBound::Trait(bound) => self.visit_path(&bound.path), syn::TypeParamBound::Lifetime(_) => {} } } // Type parameter should not be considered used by a macro path. 
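// The bound inference above (with_bound plus the attribute-driven
// with_where_predicates*) and the PhantomData exception in visit_path can be
// exercised from the caller side. Illustrative sketch (assumes serde;
// `parse_str` is a hypothetical helper):
//
//     use std::fmt::Display;
//     use std::marker::PhantomData;
//     use std::str::FromStr;
//     use serde::{Deserialize, Deserializer};
//
//     #[derive(Deserialize)]
//     #[serde(bound(deserialize = "S: FromStr, S::Err: Display"))]
//     struct Parsed<S: FromStr> {
//         #[serde(deserialize_with = "parse_str")]
//         value: S,
//         #[serde(skip)]
//         marker: PhantomData<S>,   // never contributes an `S: Deserialize` bound
//     }
//
//     fn parse_str<'de, D, S>(d: D) -> Result<S, D::Error>
//     where
//         D: Deserializer<'de>,
//         S: FromStr,
//         S::Err: Display,
//     {
//         String::deserialize(d)?.parse().map_err(serde::de::Error::custom)
//     }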
// // struct TypeMacro { // mac: T!(), // marker: PhantomData, // } fn visit_macro(&mut self, _mac: &'ast syn::Macro) {} } let all_type_params = generics .type_params() .map(|param| param.ident.clone()) .collect(); let mut visitor = FindTyParams { all_type_params, relevant_type_params: HashSet::new(), associated_type_usage: Vec::new(), }; match &cont.data { Data::Enum(variants) => { for variant in variants.iter() { let relevant_fields = variant .fields .iter() .filter(|field| filter(&field.attrs, Some(&variant.attrs))); for field in relevant_fields { visitor.visit_field(field.original); } } } Data::Struct(_, fields) => { for field in fields.iter().filter(|field| filter(&field.attrs, None)) { visitor.visit_field(field.original); } } } let relevant_type_params = visitor.relevant_type_params; let associated_type_usage = visitor.associated_type_usage; let new_predicates = generics .type_params() .map(|param| param.ident.clone()) .filter(|id| relevant_type_params.contains(id)) .map(|id| syn::TypePath { qself: None, path: id.into(), }) .chain(associated_type_usage.into_iter().cloned()) .map(|bounded_ty| { syn::WherePredicate::Type(syn::PredicateType { lifetimes: None, // the type parameter that is being bounded e.g. T bounded_ty: syn::Type::Path(bounded_ty), colon_token: ::default(), // the bound e.g. Serialize bounds: vec![syn::TypeParamBound::Trait(syn::TraitBound { paren_token: None, modifier: syn::TraitBoundModifier::None, lifetimes: None, path: bound.clone(), })] .into_iter() .collect(), }) }); let mut generics = generics.clone(); generics .make_where_clause() .predicates .extend(new_predicates); generics } pub fn with_self_bound( cont: &Container, generics: &syn::Generics, bound: &syn::Path, ) -> syn::Generics { let mut generics = generics.clone(); generics .make_where_clause() .predicates .push(syn::WherePredicate::Type(syn::PredicateType { lifetimes: None, // the type that is being bounded e.g. MyStruct<'a, T> bounded_ty: type_of_item(cont), colon_token: ::default(), // the bound e.g. 
Default bounds: vec![syn::TypeParamBound::Trait(syn::TraitBound { paren_token: None, modifier: syn::TraitBoundModifier::None, lifetimes: None, path: bound.clone(), })] .into_iter() .collect(), })); generics } pub fn with_lifetime_bound(generics: &syn::Generics, lifetime: &str) -> syn::Generics { let bound = syn::Lifetime::new(lifetime, Span::call_site()); let def = syn::LifetimeDef { attrs: Vec::new(), lifetime: bound.clone(), colon_token: None, bounds: Punctuated::new(), }; let params = Some(syn::GenericParam::Lifetime(def)) .into_iter() .chain(generics.params.iter().cloned().map(|mut param| { match &mut param { syn::GenericParam::Lifetime(param) => { param.bounds.push(bound.clone()); } syn::GenericParam::Type(param) => { param .bounds .push(syn::TypeParamBound::Lifetime(bound.clone())); } syn::GenericParam::Const(_) => {} } param })) .collect(); syn::Generics { params, ..generics.clone() } } fn type_of_item(cont: &Container) -> syn::Type { syn::Type::Path(syn::TypePath { qself: None, path: syn::Path { leading_colon: None, segments: vec![syn::PathSegment { ident: cont.ident.clone(), arguments: syn::PathArguments::AngleBracketed( syn::AngleBracketedGenericArguments { colon2_token: None, lt_token: ::default(), args: cont .generics .params .iter() .map(|param| match param { syn::GenericParam::Type(param) => { syn::GenericArgument::Type(syn::Type::Path(syn::TypePath { qself: None, path: param.ident.clone().into(), })) } syn::GenericParam::Lifetime(param) => { syn::GenericArgument::Lifetime(param.lifetime.clone()) } syn::GenericParam::Const(_) => { panic!("Serde does not support const generics yet"); } }) .collect(), gt_token: ]>::default(), }, ), }] .into_iter() .collect(), }, }) } vendor/serde_derive/src/pretend.rs0000664000175000017500000001375514172417313020123 0ustar mwhudsonmwhudsonuse proc_macro2::TokenStream; use quote::format_ident; use internals::ast::{Container, Data, Field, Style, Variant}; // Suppress dead_code warnings that would otherwise appear when using a remote // derive. Other than this pretend code, a struct annotated with remote derive // never has its fields referenced and an enum annotated with remote derive // never has its variants constructed. // // warning: field is never used: `i` // --> src/main.rs:4:20 // | // 4 | struct StructDef { i: i32 } // | ^^^^^^ // // warning: variant is never constructed: `V` // --> src/main.rs:8:16 // | // 8 | enum EnumDef { V } // | ^ // pub fn pretend_used(cont: &Container, is_packed: bool) -> TokenStream { let pretend_fields = pretend_fields_used(cont, is_packed); let pretend_variants = pretend_variants_used(cont); quote! 
{ #pretend_fields #pretend_variants } } // For structs with named fields, expands to: // // match None::<&T> { // Some(T { a: __v0, b: __v1 }) => {} // _ => {} // } // // For packed structs on sufficiently new rustc, expands to: // // match None::<&T> { // Some(__v @ T { a: _, b: _ }) => { // let _ = addr_of!(__v.a); // let _ = addr_of!(__v.b); // } // _ => {} // } // // For packed structs on older rustc, we assume Sized and !Drop, and expand to: // // match None:: { // Some(T { a: __v0, b: __v1 }) => {} // _ => {} // } // // For enums, expands to the following but only including struct variants: // // match None::<&T> { // Some(T::A { a: __v0 }) => {} // Some(T::B { b: __v0 }) => {} // _ => {} // } // fn pretend_fields_used(cont: &Container, is_packed: bool) -> TokenStream { match &cont.data { Data::Enum(variants) => pretend_fields_used_enum(cont, variants), Data::Struct(Style::Struct, fields) => { if is_packed { pretend_fields_used_struct_packed(cont, fields) } else { pretend_fields_used_struct(cont, fields) } } Data::Struct(_, _) => quote!(), } } fn pretend_fields_used_struct(cont: &Container, fields: &[Field]) -> TokenStream { let type_ident = &cont.ident; let (_, ty_generics, _) = cont.generics.split_for_impl(); let members = fields.iter().map(|field| &field.member); let placeholders = (0usize..).map(|i| format_ident!("__v{}", i)); quote! { match _serde::__private::None::<&#type_ident #ty_generics> { _serde::__private::Some(#type_ident { #(#members: #placeholders),* }) => {} _ => {} } } } fn pretend_fields_used_struct_packed(cont: &Container, fields: &[Field]) -> TokenStream { let type_ident = &cont.ident; let (_, ty_generics, _) = cont.generics.split_for_impl(); let members = fields.iter().map(|field| &field.member).collect::>(); #[cfg(ptr_addr_of)] { quote! { match _serde::__private::None::<&#type_ident #ty_generics> { _serde::__private::Some(__v @ #type_ident { #(#members: _),* }) => { #( let _ = _serde::__private::ptr::addr_of!(__v.#members); )* } _ => {} } } } #[cfg(not(ptr_addr_of))] { let placeholders = (0usize..).map(|i| format_ident!("__v{}", i)); quote! { match _serde::__private::None::<#type_ident #ty_generics> { _serde::__private::Some(#type_ident { #(#members: #placeholders),* }) => {} _ => {} } } } } fn pretend_fields_used_enum(cont: &Container, variants: &[Variant]) -> TokenStream { let type_ident = &cont.ident; let (_, ty_generics, _) = cont.generics.split_for_impl(); let patterns = variants .iter() .filter_map(|variant| match variant.style { Style::Struct => { let variant_ident = &variant.ident; let members = variant.fields.iter().map(|field| &field.member); let placeholders = (0usize..).map(|i| format_ident!("__v{}", i)); Some(quote!(#type_ident::#variant_ident { #(#members: #placeholders),* })) } _ => None, }) .collect::>(); quote! 
{ match _serde::__private::None::<&#type_ident #ty_generics> { #( _serde::__private::Some(#patterns) => {} )* _ => {} } } } // Expands to one of these per enum variant: // // match None { // Some((__v0, __v1,)) => { // let _ = E::V { a: __v0, b: __v1 }; // } // _ => {} // } // fn pretend_variants_used(cont: &Container) -> TokenStream { let variants = match &cont.data { Data::Enum(variants) => variants, Data::Struct(_, _) => { return quote!(); } }; let type_ident = &cont.ident; let (_, ty_generics, _) = cont.generics.split_for_impl(); let turbofish = ty_generics.as_turbofish(); let cases = variants.iter().map(|variant| { let variant_ident = &variant.ident; let placeholders = &(0..variant.fields.len()) .map(|i| format_ident!("__v{}", i)) .collect::>(); let pat = match variant.style { Style::Struct => { let members = variant.fields.iter().map(|field| &field.member); quote!({ #(#members: #placeholders),* }) } Style::Tuple | Style::Newtype => quote!(( #(#placeholders),* )), Style::Unit => quote!(), }; quote! { match _serde::__private::None { _serde::__private::Some((#(#placeholders,)*)) => { let _ = #type_ident::#variant_ident #turbofish #pat; } _ => {} } } }); quote!(#(#cases)*) } vendor/serde_derive/src/lib.rs0000664000175000017500000000600114172417313017212 0ustar mwhudsonmwhudson//! This crate provides Serde's two derive macros. //! //! ```edition2018 //! # use serde_derive::{Serialize, Deserialize}; //! # //! #[derive(Serialize, Deserialize)] //! # struct S; //! # //! # fn main() {} //! ``` //! //! Please refer to [https://serde.rs/derive.html] for how to set this up. //! //! [https://serde.rs/derive.html]: https://serde.rs/derive.html #![doc(html_root_url = "https://docs.rs/serde_derive/1.0.133")] #![allow(unknown_lints, bare_trait_objects)] // Ignored clippy lints #![allow( // clippy false positive: https://github.com/rust-lang/rust-clippy/issues/7054 clippy::branches_sharing_code, clippy::cognitive_complexity, // clippy bug: https://github.com/rust-lang/rust-clippy/issues/7575 clippy::collapsible_match, clippy::enum_variant_names, // clippy bug: https://github.com/rust-lang/rust-clippy/issues/6797 clippy::manual_map, clippy::match_like_matches_macro, clippy::needless_pass_by_value, clippy::too_many_arguments, clippy::trivially_copy_pass_by_ref, clippy::used_underscore_binding, clippy::wildcard_in_or_patterns, // clippy bug: https://github.com/rust-lang/rust-clippy/issues/5704 clippy::unnested_or_patterns, )] // Ignored clippy_pedantic lints #![allow( clippy::cast_possible_truncation, clippy::checked_conversions, clippy::doc_markdown, clippy::enum_glob_use, clippy::indexing_slicing, clippy::items_after_statements, clippy::let_underscore_drop, clippy::manual_assert, clippy::map_err_ignore, clippy::match_same_arms, // clippy bug: https://github.com/rust-lang/rust-clippy/issues/6984 clippy::match_wildcard_for_single_variants, clippy::module_name_repetitions, clippy::must_use_candidate, clippy::option_if_let_else, clippy::similar_names, clippy::single_match_else, clippy::struct_excessive_bools, clippy::too_many_lines, clippy::unseparated_literal_suffix, clippy::unused_self, clippy::use_self, clippy::wildcard_imports )] #[macro_use] extern crate quote; #[macro_use] extern crate syn; extern crate proc_macro; extern crate proc_macro2; mod internals; use proc_macro::TokenStream; use syn::DeriveInput; #[macro_use] mod bound; #[macro_use] mod fragment; mod de; mod dummy; mod pretend; mod ser; mod try; #[proc_macro_derive(Serialize, attributes(serde))] pub fn derive_serialize(input: TokenStream) -> 
TokenStream { let mut input = parse_macro_input!(input as DeriveInput); ser::expand_derive_serialize(&mut input) .unwrap_or_else(to_compile_errors) .into() } #[proc_macro_derive(Deserialize, attributes(serde))] pub fn derive_deserialize(input: TokenStream) -> TokenStream { let mut input = parse_macro_input!(input as DeriveInput); de::expand_derive_deserialize(&mut input) .unwrap_or_else(to_compile_errors) .into() } fn to_compile_errors(errors: Vec) -> proc_macro2::TokenStream { let compile_errors = errors.iter().map(syn::Error::to_compile_error); quote!(#(#compile_errors)*) } vendor/serde_derive/LICENSE-MIT0000664000175000017500000000177714160055207016757 0ustar mwhudsonmwhudsonPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/serde_derive/README.md0000664000175000017500000001005214160055207016564 0ustar mwhudsonmwhudson# Serde   [![Build Status]][actions] [![Latest Version]][crates.io] [![serde: rustc 1.13+]][Rust 1.13] [![serde_derive: rustc 1.31+]][Rust 1.31] [Build Status]: https://img.shields.io/github/workflow/status/serde-rs/serde/CI/master [actions]: https://github.com/serde-rs/serde/actions?query=branch%3Amaster [Latest Version]: https://img.shields.io/crates/v/serde.svg [crates.io]: https://crates.io/crates/serde [serde: rustc 1.13+]: https://img.shields.io/badge/serde-rustc_1.13+-lightgray.svg [serde_derive: rustc 1.31+]: https://img.shields.io/badge/serde_derive-rustc_1.31+-lightgray.svg [Rust 1.13]: https://blog.rust-lang.org/2016/11/10/Rust-1.13.html [Rust 1.31]: https://blog.rust-lang.org/2018/12/06/Rust-1.31-and-rust-2018.html **Serde is a framework for *ser*ializing and *de*serializing Rust data structures efficiently and generically.** --- You may be looking for: - [An overview of Serde](https://serde.rs/) - [Data formats supported by Serde](https://serde.rs/#data-formats) - [Setting up `#[derive(Serialize, Deserialize)]`](https://serde.rs/derive.html) - [Examples](https://serde.rs/examples.html) - [API documentation](https://docs.serde.rs/serde/) - [Release notes](https://github.com/serde-rs/serde/releases) ## Serde in action
Click to show Cargo.toml. Run this code in the playground.

```toml
[dependencies]

# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }

# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"
```

```rust use serde::{Serialize, Deserialize}; #[derive(Serialize, Deserialize, Debug)] struct Point { x: i32, y: i32, } fn main() { let point = Point { x: 1, y: 2 }; // Convert the Point to a JSON string. let serialized = serde_json::to_string(&point).unwrap(); // Prints serialized = {"x":1,"y":2} println!("serialized = {}", serialized); // Convert the JSON string back to a Point. let deserialized: Point = serde_json::from_str(&serialized).unwrap(); // Prints deserialized = Point { x: 1, y: 2 } println!("deserialized = {:?}", deserialized); } ``` ## Getting help Serde is one of the most widely used Rust libraries so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the [#general] or [#beginners] channels of the unofficial community Discord, the [#rust-usage] channel of the official Rust Project Discord, or the [#general][zulip] stream in Zulip. For asynchronous, consider the [\[rust\] tag on StackOverflow][stackoverflow], the [/r/rust] subreddit which has a pinned weekly easy questions post, or the Rust [Discourse forum][discourse]. It's acceptable to file a support issue in this repo but they tend not to get as many eyes as any of the above and may get closed without a response after some time. [#general]: https://discord.com/channels/273534239310479360/274215136414400513 [#beginners]: https://discord.com/channels/273534239310479360/273541522815713281 [#rust-usage]: https://discord.com/channels/442252698964721669/443150878111694848 [zulip]: https://rust-lang.zulipchat.com/#narrow/stream/122651-general [stackoverflow]: https://stackoverflow.com/questions/tagged/rust [/r/rust]: https://www.reddit.com/r/rust [discourse]: https://users.rust-lang.org
#### License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Serde by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. vendor/serde_derive/crates-io.md0000664000175000017500000000443414160055207017524 0ustar mwhudsonmwhudson **Serde is a framework for *ser*ializing and *de*serializing Rust data structures efficiently and generically.** --- You may be looking for: - [An overview of Serde](https://serde.rs/) - [Data formats supported by Serde](https://serde.rs/#data-formats) - [Setting up `#[derive(Serialize, Deserialize)]`](https://serde.rs/derive.html) - [Examples](https://serde.rs/examples.html) - [API documentation](https://docs.serde.rs/serde/) - [Release notes](https://github.com/serde-rs/serde/releases) ## Serde in action ```rust use serde::{Serialize, Deserialize}; #[derive(Serialize, Deserialize, Debug)] struct Point { x: i32, y: i32, } fn main() { let point = Point { x: 1, y: 2 }; // Convert the Point to a JSON string. let serialized = serde_json::to_string(&point).unwrap(); // Prints serialized = {"x":1,"y":2} println!("serialized = {}", serialized); // Convert the JSON string back to a Point. let deserialized: Point = serde_json::from_str(&serialized).unwrap(); // Prints deserialized = Point { x: 1, y: 2 } println!("deserialized = {:?}", deserialized); } ``` ## Getting help Serde is one of the most widely used Rust libraries so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the [#general] or [#beginners] channels of the unofficial community Discord, the [#rust-usage] channel of the official Rust Project Discord, or the [#general][zulip] stream in Zulip. For asynchronous, consider the [\[rust\] tag on StackOverflow][stackoverflow], the [/r/rust] subreddit which has a pinned weekly easy questions post, or the Rust [Discourse forum][discourse]. It's acceptable to file a support issue in this repo but they tend not to get as many eyes as any of the above and may get closed without a response after some time. [#general]: https://discord.com/channels/273534239310479360/274215136414400513 [#beginners]: https://discord.com/channels/273534239310479360/273541522815713281 [#rust-usage]: https://discord.com/channels/442252698964721669/443150878111694848 [zulip]: https://rust-lang.zulipchat.com/#narrow/stream/122651-general [stackoverflow]: https://stackoverflow.com/questions/tagged/rust [/r/rust]: https://www.reddit.com/r/rust [discourse]: https://users.rust-lang.org vendor/thread_local/0000775000175000017500000000000014160055207015270 5ustar mwhudsonmwhudsonvendor/thread_local/.cargo-checksum.json0000664000175000017500000000013114160055207021127 0ustar mwhudsonmwhudson{"files":{},"package":"8018d24e04c95ac8790716a5987d0fec4f8b27249ffa0f7d33f1369bdfb88cbd"}vendor/thread_local/LICENSE-APACHE0000664000175000017500000002513714160055207017224 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/thread_local/Cargo.toml0000664000175000017500000000222014160055207017214 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "thread_local" version = "1.1.3" authors = ["Amanieu d'Antras "] description = "Per-object thread-local storage" documentation = "https://docs.rs/thread_local/" readme = "README.md" keywords = ["thread_local", "concurrent", "thread"] license = "Apache-2.0/MIT" repository = "https://github.com/Amanieu/thread_local-rs" #[[bench]] #name = "thread_local" #harness = false #required-features = ["criterion"] #[dependencies.criterion] #version = "0.3.3" #optional = true [dependencies.once_cell] version = "1.5.2" [dev-dependencies] [badges.travis-ci] repository = "Amanieu/thread_local-rs" vendor/thread_local/benches/0000775000175000017500000000000014160055207016677 5ustar mwhudsonmwhudsonvendor/thread_local/benches/thread_local.rs0000664000175000017500000000117714160055207021674 0ustar mwhudsonmwhudson//extern crate criterion; extern crate thread_local; //use criterion::{black_box, BatchSize}; use thread_local::ThreadLocal; fn main() { /*let mut c = criterion::Criterion::default().configure_from_args(); c.bench_function("get", |b| { let local = ThreadLocal::new(); local.get_or(|| Box::new(0)); b.iter(|| { black_box(local.get()); }); }); c.bench_function("insert", |b| { b.iter_batched_ref( ThreadLocal::new, |local| { black_box(local.get_or(|| 0)); }, BatchSize::SmallInput, ) });*/ } vendor/thread_local/debian/0000775000175000017500000000000014160055207016512 5ustar mwhudsonmwhudsonvendor/thread_local/debian/patches/0000775000175000017500000000000014160055207020141 5ustar mwhudsonmwhudsonvendor/thread_local/debian/patches/series0000664000175000017500000000002614160055207021354 0ustar mwhudsonmwhudsondisable-criteron.diff vendor/thread_local/debian/patches/disable-criteron.diff0000664000175000017500000000262614160055207024227 0ustar mwhudsonmwhudsonIndex: thread-local/Cargo.toml =================================================================== --- thread-local.orig/Cargo.toml +++ thread-local/Cargo.toml @@ -22,13 +22,13 @@ keywords = ["thread_local", "concurrent" license = "Apache-2.0/MIT" repository = "https://github.com/Amanieu/thread_local-rs" -[[bench]] -name = "thread_local" -harness = false -required-features = ["criterion"] -[dependencies.criterion] -version = "0.3.3" -optional = true +#[[bench]] +#name = "thread_local" +#harness = false +#required-features = ["criterion"] +#[dependencies.criterion] +#version = "0.3.3" +#optional = true [dependencies.once_cell] version = "1.5.2" Index: thread-local/benches/thread_local.rs =================================================================== --- thread-local.orig/benches/thread_local.rs +++ thread-local/benches/thread_local.rs @@ -1,12 +1,12 @@ -extern crate criterion; +//extern crate criterion; extern crate thread_local; -use criterion::{black_box, BatchSize}; +//use criterion::{black_box, BatchSize}; use thread_local::ThreadLocal; fn main() { - let mut c = criterion::Criterion::default().configure_from_args(); + /*let mut c = criterion::Criterion::default().configure_from_args(); c.bench_function("get", |b| { let local = ThreadLocal::new(); @@ -24,5 +24,5 @@ fn main() { }, BatchSize::SmallInput, ) - }); + });*/ } vendor/thread_local/src/0000775000175000017500000000000014160055207016057 5ustar mwhudsonmwhudsonvendor/thread_local/src/thread_id.rs0000664000175000017500000000734614160055207020362 0ustar mwhudsonmwhudson// Copyright 2017 Amanieu d'Antras // // Licensed under the Apache 
License, Version 2.0, or the MIT license , at your option. This file may not be // copied, modified, or distributed except according to those terms. use crate::POINTER_WIDTH; use once_cell::sync::Lazy; use std::cmp::Reverse; use std::collections::BinaryHeap; use std::sync::Mutex; use std::usize; /// Thread ID manager which allocates thread IDs. It attempts to aggressively /// reuse thread IDs where possible to avoid cases where a ThreadLocal grows /// indefinitely when it is used by many short-lived threads. struct ThreadIdManager { free_from: usize, free_list: BinaryHeap>, } impl ThreadIdManager { fn new() -> ThreadIdManager { ThreadIdManager { free_from: 0, free_list: BinaryHeap::new(), } } fn alloc(&mut self) -> usize { if let Some(id) = self.free_list.pop() { id.0 } else { let id = self.free_from; self.free_from = self .free_from .checked_add(1) .expect("Ran out of thread IDs"); id } } fn free(&mut self, id: usize) { self.free_list.push(Reverse(id)); } } static THREAD_ID_MANAGER: Lazy> = Lazy::new(|| Mutex::new(ThreadIdManager::new())); /// Data which is unique to the current thread while it is running. /// A thread ID may be reused after a thread exits. #[derive(Clone, Copy)] pub(crate) struct Thread { /// The thread ID obtained from the thread ID manager. pub(crate) id: usize, /// The bucket this thread's local storage will be in. pub(crate) bucket: usize, /// The size of the bucket this thread's local storage will be in. pub(crate) bucket_size: usize, /// The index into the bucket this thread's local storage is in. pub(crate) index: usize, } impl Thread { fn new(id: usize) -> Thread { let bucket = usize::from(POINTER_WIDTH) - id.leading_zeros() as usize; let bucket_size = 1 << bucket.saturating_sub(1); let index = if id != 0 { id ^ bucket_size } else { 0 }; Thread { id, bucket, bucket_size, index, } } } /// Wrapper around `Thread` that allocates and deallocates the ID. struct ThreadHolder(Thread); impl ThreadHolder { fn new() -> ThreadHolder { ThreadHolder(Thread::new(THREAD_ID_MANAGER.lock().unwrap().alloc())) } } impl Drop for ThreadHolder { fn drop(&mut self) { THREAD_ID_MANAGER.lock().unwrap().free(self.0.id); } } thread_local!(static THREAD_HOLDER: ThreadHolder = ThreadHolder::new()); /// Get the current thread. pub(crate) fn get() -> Thread { THREAD_HOLDER.with(|holder| holder.0) } #[test] fn test_thread() { let thread = Thread::new(0); assert_eq!(thread.id, 0); assert_eq!(thread.bucket, 0); assert_eq!(thread.bucket_size, 1); assert_eq!(thread.index, 0); let thread = Thread::new(1); assert_eq!(thread.id, 1); assert_eq!(thread.bucket, 1); assert_eq!(thread.bucket_size, 1); assert_eq!(thread.index, 0); let thread = Thread::new(2); assert_eq!(thread.id, 2); assert_eq!(thread.bucket, 2); assert_eq!(thread.bucket_size, 2); assert_eq!(thread.index, 0); let thread = Thread::new(3); assert_eq!(thread.id, 3); assert_eq!(thread.bucket, 2); assert_eq!(thread.bucket_size, 2); assert_eq!(thread.index, 1); let thread = Thread::new(19); assert_eq!(thread.id, 19); assert_eq!(thread.bucket, 5); assert_eq!(thread.bucket_size, 16); assert_eq!(thread.index, 3); } vendor/thread_local/src/cached.rs0000664000175000017500000001054314160055207017637 0ustar mwhudsonmwhudson#![allow(deprecated)] use super::{IntoIter, IterMut, ThreadLocal}; use std::fmt; use std::panic::UnwindSafe; use std::usize; /// Wrapper around [`ThreadLocal`]. /// /// This used to add a fast path for a single thread, however that has been /// obsoleted by performance improvements to [`ThreadLocal`] itself. 
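///
/// A minimal usage sketch of this deprecated wrapper, added here for
/// illustration only (the concrete values are assumptions, not taken from
/// upstream docs):
///
/// ```rust
/// #![allow(deprecated)]
/// use thread_local::CachedThreadLocal;
///
/// let tls: CachedThreadLocal<u32> = CachedThreadLocal::new();
/// assert_eq!(tls.get(), None);
/// assert_eq!(tls.get_or(|| 5), &5);
/// assert_eq!(tls.get(), Some(&5));
/// ```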
#[deprecated(since = "1.1.0", note = "Use `ThreadLocal` instead")] pub struct CachedThreadLocal { inner: ThreadLocal, } impl Default for CachedThreadLocal { fn default() -> CachedThreadLocal { CachedThreadLocal::new() } } impl CachedThreadLocal { /// Creates a new empty `CachedThreadLocal`. #[inline] pub fn new() -> CachedThreadLocal { CachedThreadLocal { inner: ThreadLocal::new(), } } /// Returns the element for the current thread, if it exists. #[inline] pub fn get(&self) -> Option<&T> { self.inner.get() } /// Returns the element for the current thread, or creates it if it doesn't /// exist. #[inline] pub fn get_or(&self, create: F) -> &T where F: FnOnce() -> T, { self.inner.get_or(create) } /// Returns the element for the current thread, or creates it if it doesn't /// exist. If `create` fails, that error is returned and no element is /// added. #[inline] pub fn get_or_try(&self, create: F) -> Result<&T, E> where F: FnOnce() -> Result, { self.inner.get_or_try(create) } /// Returns a mutable iterator over the local values of all threads. /// /// Since this call borrows the `ThreadLocal` mutably, this operation can /// be done safely---the mutable borrow statically guarantees no other /// threads are currently accessing their associated values. #[inline] pub fn iter_mut(&mut self) -> CachedIterMut { CachedIterMut { inner: self.inner.iter_mut(), } } /// Removes all thread-specific values from the `ThreadLocal`, effectively /// reseting it to its original state. /// /// Since this call borrows the `ThreadLocal` mutably, this operation can /// be done safely---the mutable borrow statically guarantees no other /// threads are currently accessing their associated values. #[inline] pub fn clear(&mut self) { self.inner.clear(); } } impl IntoIterator for CachedThreadLocal { type Item = T; type IntoIter = CachedIntoIter; fn into_iter(self) -> CachedIntoIter { CachedIntoIter { inner: self.inner.into_iter(), } } } impl<'a, T: Send + 'a> IntoIterator for &'a mut CachedThreadLocal { type Item = &'a mut T; type IntoIter = CachedIterMut<'a, T>; fn into_iter(self) -> CachedIterMut<'a, T> { self.iter_mut() } } impl CachedThreadLocal { /// Returns the element for the current thread, or creates a default one if /// it doesn't exist. pub fn get_or_default(&self) -> &T { self.get_or(T::default) } } impl fmt::Debug for CachedThreadLocal { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "ThreadLocal {{ local_data: {:?} }}", self.get()) } } impl UnwindSafe for CachedThreadLocal {} /// Mutable iterator over the contents of a `CachedThreadLocal`. #[deprecated(since = "1.1.0", note = "Use `IterMut` instead")] pub struct CachedIterMut<'a, T: Send + 'a> { inner: IterMut<'a, T>, } impl<'a, T: Send + 'a> Iterator for CachedIterMut<'a, T> { type Item = &'a mut T; #[inline] fn next(&mut self) -> Option<&'a mut T> { self.inner.next() } #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } } impl<'a, T: Send + 'a> ExactSizeIterator for CachedIterMut<'a, T> {} /// An iterator that moves out of a `CachedThreadLocal`. 
#[deprecated(since = "1.1.0", note = "Use `IntoIter` instead")] pub struct CachedIntoIter { inner: IntoIter, } impl Iterator for CachedIntoIter { type Item = T; #[inline] fn next(&mut self) -> Option { self.inner.next() } #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } } impl ExactSizeIterator for CachedIntoIter {} vendor/thread_local/src/unreachable.rs0000664000175000017500000000332314160055207020677 0ustar mwhudsonmwhudson// Copyright 2017 Amanieu d'Antras // // Licensed under the Apache License, Version 2.0, or the MIT license , at your option. This file may not be // copied, modified, or distributed except according to those terms. use std::hint::unreachable_unchecked; /// An extension trait for `Option` providing unchecked unwrapping methods. pub trait UncheckedOptionExt { /// Get the value out of this Option without checking for None. unsafe fn unchecked_unwrap(self) -> T; /// Assert that this Option is a None to the optimizer. unsafe fn unchecked_unwrap_none(self); } /// An extension trait for `Result` providing unchecked unwrapping methods. pub trait UncheckedResultExt { /// Get the value out of this Result without checking for Err. unsafe fn unchecked_unwrap_ok(self) -> T; /// Get the error out of this Result without checking for Ok. unsafe fn unchecked_unwrap_err(self) -> E; } impl UncheckedOptionExt for Option { unsafe fn unchecked_unwrap(self) -> T { match self { Some(x) => x, None => unreachable_unchecked(), } } unsafe fn unchecked_unwrap_none(self) { if self.is_some() { unreachable_unchecked() } } } impl UncheckedResultExt for Result { unsafe fn unchecked_unwrap_ok(self) -> T { match self { Ok(x) => x, Err(_) => unreachable_unchecked(), } } unsafe fn unchecked_unwrap_err(self) -> E { match self { Ok(_) => unreachable_unchecked(), Err(e) => e, } } } vendor/thread_local/src/lib.rs0000664000175000017500000004631514160055207017204 0ustar mwhudsonmwhudson// Copyright 2017 Amanieu d'Antras // // Licensed under the Apache License, Version 2.0, or the MIT license , at your option. This file may not be // copied, modified, or distributed except according to those terms. //! Per-object thread-local storage //! //! This library provides the `ThreadLocal` type which allows a separate copy of //! an object to be used for each thread. This allows for per-object //! thread-local storage, unlike the standard library's `thread_local!` macro //! which only allows static thread-local storage. //! //! Per-thread objects are not destroyed when a thread exits. Instead, objects //! are only destroyed when the `ThreadLocal` containing them is destroyed. //! //! You can also iterate over the thread-local values of all thread in a //! `ThreadLocal` object using the `iter_mut` and `into_iter` methods. This can //! only be done if you have mutable access to the `ThreadLocal` object, which //! guarantees that you are the only thread currently accessing it. //! //! Note that since thread IDs are recycled when a thread exits, it is possible //! for one thread to retrieve the object of another thread. Since this can only //! occur after a thread has exited this does not lead to any race conditions. //! //! # Examples //! //! Basic usage of `ThreadLocal`: //! //! ```rust //! use thread_local::ThreadLocal; //! let tls: ThreadLocal = ThreadLocal::new(); //! assert_eq!(tls.get(), None); //! assert_eq!(tls.get_or(|| 5), &5); //! assert_eq!(tls.get(), Some(&5)); //! ``` //! //! Combining thread-local values into a single result: //! //! ```rust //! use thread_local::ThreadLocal; //! 
use std::sync::Arc; //! use std::cell::Cell; //! use std::thread; //! //! let tls = Arc::new(ThreadLocal::new()); //! //! // Create a bunch of threads to do stuff //! for _ in 0..5 { //! let tls2 = tls.clone(); //! thread::spawn(move || { //! // Increment a counter to count some event... //! let cell = tls2.get_or(|| Cell::new(0)); //! cell.set(cell.get() + 1); //! }).join().unwrap(); //! } //! //! // Once all threads are done, collect the counter values and return the //! // sum of all thread-local counter values. //! let tls = Arc::try_unwrap(tls).unwrap(); //! let total = tls.into_iter().fold(0, |x, y| x + y.get()); //! assert_eq!(total, 5); //! ``` #![warn(missing_docs)] #![allow(clippy::mutex_atomic)] mod cached; mod thread_id; mod unreachable; #[allow(deprecated)] pub use cached::{CachedIntoIter, CachedIterMut, CachedThreadLocal}; use std::cell::UnsafeCell; use std::fmt; use std::iter::FusedIterator; use std::mem; use std::mem::MaybeUninit; use std::panic::UnwindSafe; use std::ptr; use std::sync::atomic::{AtomicBool, AtomicPtr, AtomicUsize, Ordering}; use std::sync::Mutex; use thread_id::Thread; use unreachable::UncheckedResultExt; // Use usize::BITS once it has stabilized and the MSRV has been bumped. #[cfg(target_pointer_width = "16")] const POINTER_WIDTH: u8 = 16; #[cfg(target_pointer_width = "32")] const POINTER_WIDTH: u8 = 32; #[cfg(target_pointer_width = "64")] const POINTER_WIDTH: u8 = 64; /// The total number of buckets stored in each thread local. const BUCKETS: usize = (POINTER_WIDTH + 1) as usize; /// Thread-local variable wrapper /// /// See the [module-level documentation](index.html) for more. pub struct ThreadLocal { /// The buckets in the thread local. The nth bucket contains `2^(n-1)` /// elements. Each bucket is lazily allocated. buckets: [AtomicPtr>; BUCKETS], /// The number of values in the thread local. This can be less than the real number of values, /// but is never more. values: AtomicUsize, /// Lock used to guard against concurrent modifications. This is taken when /// there is a possibility of allocating a new bucket, which only occurs /// when inserting values. lock: Mutex<()>, } struct Entry { present: AtomicBool, value: UnsafeCell>, } impl Drop for Entry { fn drop(&mut self) { unsafe { if *self.present.get_mut() { ptr::drop_in_place((*self.value.get()).as_mut_ptr()); } } } } // ThreadLocal is always Sync, even if T isn't unsafe impl Sync for ThreadLocal {} impl Default for ThreadLocal { fn default() -> ThreadLocal { ThreadLocal::new() } } impl Drop for ThreadLocal { fn drop(&mut self) { let mut bucket_size = 1; // Free each non-null bucket for (i, bucket) in self.buckets.iter_mut().enumerate() { let bucket_ptr = *bucket.get_mut(); let this_bucket_size = bucket_size; if i != 0 { bucket_size <<= 1; } if bucket_ptr.is_null() { continue; } unsafe { Box::from_raw(std::slice::from_raw_parts_mut(bucket_ptr, this_bucket_size)) }; } } } impl ThreadLocal { /// Creates a new empty `ThreadLocal`. pub fn new() -> ThreadLocal { Self::with_capacity(2) } /// Creates a new `ThreadLocal` with an initial capacity. If less than the capacity threads /// access the thread local it will never reallocate. The capacity may be rounded up to the /// nearest power of two. 
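    ///
    /// A brief sketch of intended use (the capacity value here is an
    /// illustrative assumption, not a recommendation):
    ///
    /// ```rust
    /// use thread_local::ThreadLocal;
    ///
    /// // Pre-size for roughly 8 threads; the capacity is only a hint.
    /// let tls: ThreadLocal<u32> = ThreadLocal::with_capacity(8);
    /// assert_eq!(tls.get_or(|| 1), &1);
    /// ```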
pub fn with_capacity(capacity: usize) -> ThreadLocal { let allocated_buckets = capacity .checked_sub(1) .map(|c| usize::from(POINTER_WIDTH) - (c.leading_zeros() as usize) + 1) .unwrap_or(0); let mut buckets = [ptr::null_mut(); BUCKETS]; let mut bucket_size = 1; for (i, bucket) in buckets[..allocated_buckets].iter_mut().enumerate() { *bucket = allocate_bucket::(bucket_size); if i != 0 { bucket_size <<= 1; } } ThreadLocal { // Safety: AtomicPtr has the same representation as a pointer and arrays have the same // representation as a sequence of their inner type. buckets: unsafe { mem::transmute(buckets) }, values: AtomicUsize::new(0), lock: Mutex::new(()), } } /// Returns the element for the current thread, if it exists. pub fn get(&self) -> Option<&T> { let thread = thread_id::get(); self.get_inner(thread) } /// Returns the element for the current thread, or creates it if it doesn't /// exist. pub fn get_or(&self, create: F) -> &T where F: FnOnce() -> T, { unsafe { self.get_or_try(|| Ok::(create())) .unchecked_unwrap_ok() } } /// Returns the element for the current thread, or creates it if it doesn't /// exist. If `create` fails, that error is returned and no element is /// added. pub fn get_or_try(&self, create: F) -> Result<&T, E> where F: FnOnce() -> Result, { let thread = thread_id::get(); match self.get_inner(thread) { Some(x) => Ok(x), None => Ok(self.insert(thread, create()?)), } } fn get_inner(&self, thread: Thread) -> Option<&T> { let bucket_ptr = unsafe { self.buckets.get_unchecked(thread.bucket) }.load(Ordering::Acquire); if bucket_ptr.is_null() { return None; } unsafe { let entry = &*bucket_ptr.add(thread.index); // Read without atomic operations as only this thread can set the value. if (&entry.present as *const _ as *const bool).read() { Some(&*(&*entry.value.get()).as_ptr()) } else { None } } } #[cold] fn insert(&self, thread: Thread, data: T) -> &T { // Lock the Mutex to ensure only a single thread is allocating buckets at once let _guard = self.lock.lock().unwrap(); let bucket_atomic_ptr = unsafe { self.buckets.get_unchecked(thread.bucket) }; let bucket_ptr: *const _ = bucket_atomic_ptr.load(Ordering::Acquire); let bucket_ptr = if bucket_ptr.is_null() { // Allocate a new bucket let bucket_ptr = allocate_bucket(thread.bucket_size); bucket_atomic_ptr.store(bucket_ptr, Ordering::Release); bucket_ptr } else { bucket_ptr }; drop(_guard); // Insert the new element into the bucket let entry = unsafe { &*bucket_ptr.add(thread.index) }; let value_ptr = entry.value.get(); unsafe { value_ptr.write(MaybeUninit::new(data)) }; entry.present.store(true, Ordering::Release); self.values.fetch_add(1, Ordering::Release); unsafe { &*(&*value_ptr).as_ptr() } } /// Returns an iterator over the local values of all threads in unspecified /// order. /// /// This call can be done safely, as `T` is required to implement [`Sync`]. pub fn iter(&self) -> Iter<'_, T> where T: Sync, { Iter { thread_local: self, raw: RawIter::new(), } } /// Returns a mutable iterator over the local values of all threads in /// unspecified order. /// /// Since this call borrows the `ThreadLocal` mutably, this operation can /// be done safely---the mutable borrow statically guarantees no other /// threads are currently accessing their associated values. pub fn iter_mut(&mut self) -> IterMut { IterMut { thread_local: self, raw: RawIter::new(), } } /// Removes all thread-specific values from the `ThreadLocal`, effectively /// reseting it to its original state. 
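    ///
    /// A minimal sketch of the effect (values are illustrative):
    ///
    /// ```rust
    /// use thread_local::ThreadLocal;
    ///
    /// let mut tls: ThreadLocal<u32> = ThreadLocal::new();
    /// tls.get_or(|| 7);
    /// tls.clear();
    /// assert_eq!(tls.get(), None);
    /// ```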
/// /// Since this call borrows the `ThreadLocal` mutably, this operation can /// be done safely---the mutable borrow statically guarantees no other /// threads are currently accessing their associated values. pub fn clear(&mut self) { *self = ThreadLocal::new(); } } impl IntoIterator for ThreadLocal { type Item = T; type IntoIter = IntoIter; fn into_iter(self) -> IntoIter { IntoIter { thread_local: self, raw: RawIter::new(), } } } impl<'a, T: Send + Sync> IntoIterator for &'a ThreadLocal { type Item = &'a T; type IntoIter = Iter<'a, T>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, T: Send> IntoIterator for &'a mut ThreadLocal { type Item = &'a mut T; type IntoIter = IterMut<'a, T>; fn into_iter(self) -> IterMut<'a, T> { self.iter_mut() } } impl ThreadLocal { /// Returns the element for the current thread, or creates a default one if /// it doesn't exist. pub fn get_or_default(&self) -> &T { self.get_or(Default::default) } } impl fmt::Debug for ThreadLocal { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "ThreadLocal {{ local_data: {:?} }}", self.get()) } } impl UnwindSafe for ThreadLocal {} #[derive(Debug)] struct RawIter { yielded: usize, bucket: usize, bucket_size: usize, index: usize, } impl RawIter { #[inline] fn new() -> Self { Self { yielded: 0, bucket: 0, bucket_size: 1, index: 0, } } fn next<'a, T: Send + Sync>(&mut self, thread_local: &'a ThreadLocal) -> Option<&'a T> { while self.bucket < BUCKETS { let bucket = unsafe { thread_local.buckets.get_unchecked(self.bucket) }; let bucket = bucket.load(Ordering::Relaxed); if !bucket.is_null() { while self.index < self.bucket_size { let entry = unsafe { &*bucket.add(self.index) }; self.index += 1; if entry.present.load(Ordering::Acquire) { self.yielded += 1; return Some(unsafe { &*(&*entry.value.get()).as_ptr() }); } } } self.next_bucket(); } None } fn next_mut<'a, T: Send>( &mut self, thread_local: &'a mut ThreadLocal, ) -> Option<&'a mut Entry> { if *thread_local.values.get_mut() == self.yielded { return None; } loop { let bucket = unsafe { thread_local.buckets.get_unchecked_mut(self.bucket) }; let bucket = *bucket.get_mut(); if !bucket.is_null() { while self.index < self.bucket_size { let entry = unsafe { &mut *bucket.add(self.index) }; self.index += 1; if *entry.present.get_mut() { self.yielded += 1; return Some(entry); } } } self.next_bucket(); } } #[inline] fn next_bucket(&mut self) { if self.bucket != 0 { self.bucket_size <<= 1; } self.bucket += 1; self.index = 0; } fn size_hint(&self, thread_local: &ThreadLocal) -> (usize, Option) { let total = thread_local.values.load(Ordering::Acquire); (total - self.yielded, None) } fn size_hint_frozen(&self, thread_local: &ThreadLocal) -> (usize, Option) { let total = unsafe { *(&thread_local.values as *const AtomicUsize as *const usize) }; let remaining = total - self.yielded; (remaining, Some(remaining)) } } /// Iterator over the contents of a `ThreadLocal`. #[derive(Debug)] pub struct Iter<'a, T: Send + Sync> { thread_local: &'a ThreadLocal, raw: RawIter, } impl<'a, T: Send + Sync> Iterator for Iter<'a, T> { type Item = &'a T; fn next(&mut self) -> Option { self.raw.next(self.thread_local) } fn size_hint(&self) -> (usize, Option) { self.raw.size_hint(self.thread_local) } } impl FusedIterator for Iter<'_, T> {} /// Mutable iterator over the contents of a `ThreadLocal`. 
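/// 
/// A short sketch of mutating values in place, kept single-threaded for
/// brevity (values are illustrative):
///
/// ```rust
/// use thread_local::ThreadLocal;
///
/// let mut tls: ThreadLocal<i32> = ThreadLocal::new();
/// tls.get_or(|| 2);
/// for value in tls.iter_mut() {
///     *value += 1;
/// }
/// assert_eq!(tls.get(), Some(&3));
/// ```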
pub struct IterMut<'a, T: Send> { thread_local: &'a mut ThreadLocal, raw: RawIter, } impl<'a, T: Send> Iterator for IterMut<'a, T> { type Item = &'a mut T; fn next(&mut self) -> Option<&'a mut T> { self.raw .next_mut(self.thread_local) .map(|entry| unsafe { &mut *(&mut *entry.value.get()).as_mut_ptr() }) } fn size_hint(&self) -> (usize, Option) { self.raw.size_hint_frozen(self.thread_local) } } impl ExactSizeIterator for IterMut<'_, T> {} impl FusedIterator for IterMut<'_, T> {} // Manual impl so we don't call Debug on the ThreadLocal, as doing so would create a reference to // this thread's value that potentially aliases with a mutable reference we have given out. impl<'a, T: Send + fmt::Debug> fmt::Debug for IterMut<'a, T> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("IterMut").field("raw", &self.raw).finish() } } /// An iterator that moves out of a `ThreadLocal`. #[derive(Debug)] pub struct IntoIter { thread_local: ThreadLocal, raw: RawIter, } impl Iterator for IntoIter { type Item = T; fn next(&mut self) -> Option { self.raw.next_mut(&mut self.thread_local).map(|entry| { *entry.present.get_mut() = false; unsafe { std::mem::replace(&mut *entry.value.get(), MaybeUninit::uninit()).assume_init() } }) } fn size_hint(&self) -> (usize, Option) { self.raw.size_hint_frozen(&self.thread_local) } } impl ExactSizeIterator for IntoIter {} impl FusedIterator for IntoIter {} fn allocate_bucket(size: usize) -> *mut Entry { Box::into_raw( (0..size) .map(|_| Entry:: { present: AtomicBool::new(false), value: UnsafeCell::new(MaybeUninit::uninit()), }) .collect(), ) as *mut _ } #[cfg(test)] mod tests { use super::ThreadLocal; use std::cell::RefCell; use std::sync::atomic::AtomicUsize; use std::sync::atomic::Ordering::Relaxed; use std::sync::Arc; use std::thread; fn make_create() -> Arc usize + Send + Sync> { let count = AtomicUsize::new(0); Arc::new(move || count.fetch_add(1, Relaxed)) } #[test] fn same_thread() { let create = make_create(); let mut tls = ThreadLocal::new(); assert_eq!(None, tls.get()); assert_eq!("ThreadLocal { local_data: None }", format!("{:?}", &tls)); assert_eq!(0, *tls.get_or(|| create())); assert_eq!(Some(&0), tls.get()); assert_eq!(0, *tls.get_or(|| create())); assert_eq!(Some(&0), tls.get()); assert_eq!(0, *tls.get_or(|| create())); assert_eq!(Some(&0), tls.get()); assert_eq!("ThreadLocal { local_data: Some(0) }", format!("{:?}", &tls)); tls.clear(); assert_eq!(None, tls.get()); } #[test] fn different_thread() { let create = make_create(); let tls = Arc::new(ThreadLocal::new()); assert_eq!(None, tls.get()); assert_eq!(0, *tls.get_or(|| create())); assert_eq!(Some(&0), tls.get()); let tls2 = tls.clone(); let create2 = create.clone(); thread::spawn(move || { assert_eq!(None, tls2.get()); assert_eq!(1, *tls2.get_or(|| create2())); assert_eq!(Some(&1), tls2.get()); }) .join() .unwrap(); assert_eq!(Some(&0), tls.get()); assert_eq!(0, *tls.get_or(|| create())); } #[test] fn iter() { let tls = Arc::new(ThreadLocal::new()); tls.get_or(|| Box::new(1)); let tls2 = tls.clone(); thread::spawn(move || { tls2.get_or(|| Box::new(2)); let tls3 = tls2.clone(); thread::spawn(move || { tls3.get_or(|| Box::new(3)); }) .join() .unwrap(); drop(tls2); }) .join() .unwrap(); let mut tls = Arc::try_unwrap(tls).unwrap(); let mut v = tls.iter().map(|x| **x).collect::>(); v.sort_unstable(); assert_eq!(vec![1, 2, 3], v); let mut v = tls.iter_mut().map(|x| **x).collect::>(); v.sort_unstable(); assert_eq!(vec![1, 2, 3], v); let mut v = tls.into_iter().map(|x| *x).collect::>(); 
v.sort_unstable(); assert_eq!(vec![1, 2, 3], v); } #[test] fn test_drop() { let local = ThreadLocal::new(); struct Dropped(Arc); impl Drop for Dropped { fn drop(&mut self) { self.0.fetch_add(1, Relaxed); } } let dropped = Arc::new(AtomicUsize::new(0)); local.get_or(|| Dropped(dropped.clone())); assert_eq!(dropped.load(Relaxed), 0); drop(local); assert_eq!(dropped.load(Relaxed), 1); } #[test] fn is_sync() { fn foo() {} foo::>(); foo::>>(); } } vendor/thread_local/LICENSE-MIT0000664000175000017500000000205714160055207016730 0ustar mwhudsonmwhudsonCopyright (c) 2016 The Rust Project Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/thread_local/README.md0000664000175000017500000000234214160055207016550 0ustar mwhudsonmwhudsonthread_local ============ [![Build Status](https://travis-ci.org/Amanieu/thread_local-rs.svg?branch=master)](https://travis-ci.org/Amanieu/thread_local-rs) [![Crates.io](https://img.shields.io/crates/v/thread_local.svg)](https://crates.io/crates/thread_local) This library provides the `ThreadLocal` type which allow a separate copy of an object to be used for each thread. This allows for per-object thread-local storage, unlike the standard library's `thread_local!` macro which only allows static thread-local storage. [Documentation](https://docs.rs/thread_local/) ## Usage Add this to your `Cargo.toml`: ```toml [dependencies] thread_local = "1.1" ``` ## Minimum Rust version This crate's minimum supported Rust version (MSRV) is 1.36.0. ## License Licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. vendor/url/0000775000175000017500000000000014160055207013451 5ustar mwhudsonmwhudsonvendor/url/.cargo-checksum.json0000664000175000017500000000013114160055207017310 0ustar mwhudsonmwhudson{"files":{},"package":"a507c383b2d33b5fc35d1861e77e6b383d158b2da5e14fe51b83dfedf6fd578c"}vendor/url/LICENSE-APACHE0000664000175000017500000002513714160055207015405 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. 
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
vendor/url/Cargo.toml0000664000175000017500000000256614160055207015412 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "url" version = "2.2.2" authors = ["The rust-url developers"] include = ["src/**/*", "LICENSE-*", "README.md", "tests/**"] description = "URL library for Rust, based on the WHATWG URL Standard" documentation = "https://docs.rs/url" readme = "../README.md" keywords = ["url", "parser"] categories = ["parser-implementations", "web-programming", "encoding"] license = "MIT/Apache-2.0" repository = "https://github.com/servo/rust-url" [dependencies.form_urlencoded] version = "1.0.0" [dependencies.idna] version = "0.2.0" [dependencies.matches] version = "0.1" [dependencies.percent-encoding] version = "2.1.0" [dependencies.serde] version = "1.0" features = ["derive"] optional = true [dev-dependencies.serde_json] version = "1.0" [badges.appveyor] repository = "Manishearth/rust-url" [badges.travis-ci] repository = "servo/rust-url" vendor/url/debian/0000775000175000017500000000000014160055207014673 5ustar mwhudsonmwhudsonvendor/url/debian/patches/0000775000175000017500000000000014160055207016322 5ustar mwhudsonmwhudsonvendor/url/debian/patches/series0000664000175000017500000000002714160055207017536 0ustar mwhudsonmwhudsonremove-benchmarks.diff vendor/url/debian/patches/remove-benchmarks.diff0000664000175000017500000000066314160055207022571 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -24,10 +24,6 @@ license = "MIT/Apache-2.0" repository = "https://github.com/servo/rust-url" -[[bench]] -name = "parse_url" -path = "benches/parse_url.rs" -harness = false [dependencies.form_urlencoded] version = "1.0.0" @@ -44,8 +40,6 @@ version = "1.0" features = ["derive"] optional = true -[dev-dependencies.bencher] -version = "0.1" [dev-dependencies.serde_json] version = "1.0" vendor/url/src/0000775000175000017500000000000014160055207014240 5ustar mwhudsonmwhudsonvendor/url/src/quirks.rs0000664000175000017500000002144214160055207016127 0ustar mwhudsonmwhudson// Copyright 2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Getters and setters for URL components implemented per https://url.spec.whatwg.org/#api //! //! Unless you need to be interoperable with web browsers, //! you probably want to use `Url` method instead. 
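//!
//! The short example below is an added illustrative sketch (it is not part of the
//! upstream crate documentation): it contrasts these spec-shaped getters with the
//! corresponding `Url` methods, assuming the `quirks` module is exposed publicly
//! as in the published crate.
//!
//! ```rust
//! use url::Url;
//!
//! let url = Url::parse("https://user@example.com:8080/path?q=1#top").unwrap();
//! // The spec's `protocol` getter keeps the trailing ':' that `Url::scheme` drops.
//! assert_eq!(url::quirks::protocol(&url), "https:");
//! assert_eq!(url.scheme(), "https");
//! // The spec's `port` getter is a string slice; `Url::port` is an `Option<u16>`.
//! assert_eq!(url::quirks::port(&url), "8080");
//! assert_eq!(url.port(), Some(8080));
//! ```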
use crate::parser::{default_port, Context, Input, Parser, SchemeType}; use crate::{Host, ParseError, Position, Url}; /// https://url.spec.whatwg.org/#dom-url-domaintoascii pub fn domain_to_ascii(domain: &str) -> String { match Host::parse(domain) { Ok(Host::Domain(domain)) => domain, _ => String::new(), } } /// https://url.spec.whatwg.org/#dom-url-domaintounicode pub fn domain_to_unicode(domain: &str) -> String { match Host::parse(domain) { Ok(Host::Domain(ref domain)) => { let (unicode, _errors) = idna::domain_to_unicode(domain); unicode } _ => String::new(), } } /// Getter for https://url.spec.whatwg.org/#dom-url-href pub fn href(url: &Url) -> &str { url.as_str() } /// Setter for https://url.spec.whatwg.org/#dom-url-href pub fn set_href(url: &mut Url, value: &str) -> Result<(), ParseError> { *url = Url::parse(value)?; Ok(()) } /// Getter for https://url.spec.whatwg.org/#dom-url-origin pub fn origin(url: &Url) -> String { url.origin().ascii_serialization() } /// Getter for https://url.spec.whatwg.org/#dom-url-protocol #[inline] pub fn protocol(url: &Url) -> &str { &url.as_str()[..url.scheme().len() + ":".len()] } /// Setter for https://url.spec.whatwg.org/#dom-url-protocol #[allow(clippy::result_unit_err)] pub fn set_protocol(url: &mut Url, mut new_protocol: &str) -> Result<(), ()> { // The scheme state in the spec ignores everything after the first `:`, // but `set_scheme` errors if there is more. if let Some(position) = new_protocol.find(':') { new_protocol = &new_protocol[..position]; } url.set_scheme(new_protocol) } /// Getter for https://url.spec.whatwg.org/#dom-url-username #[inline] pub fn username(url: &Url) -> &str { url.username() } /// Setter for https://url.spec.whatwg.org/#dom-url-username #[allow(clippy::result_unit_err)] pub fn set_username(url: &mut Url, new_username: &str) -> Result<(), ()> { url.set_username(new_username) } /// Getter for https://url.spec.whatwg.org/#dom-url-password #[inline] pub fn password(url: &Url) -> &str { url.password().unwrap_or("") } /// Setter for https://url.spec.whatwg.org/#dom-url-password #[allow(clippy::result_unit_err)] pub fn set_password(url: &mut Url, new_password: &str) -> Result<(), ()> { url.set_password(if new_password.is_empty() { None } else { Some(new_password) }) } /// Getter for https://url.spec.whatwg.org/#dom-url-host #[inline] pub fn host(url: &Url) -> &str { &url[Position::BeforeHost..Position::AfterPort] } /// Setter for https://url.spec.whatwg.org/#dom-url-host #[allow(clippy::result_unit_err)] pub fn set_host(url: &mut Url, new_host: &str) -> Result<(), ()> { // If context object’s url’s cannot-be-a-base-URL flag is set, then return. 
if url.cannot_be_a_base() { return Err(()); } // Host parsing rules are strict, // We don't want to trim the input let input = Input::no_trim(new_host); let host; let opt_port; { let scheme = url.scheme(); let scheme_type = SchemeType::from(scheme); if scheme_type == SchemeType::File && new_host.is_empty() { url.set_host_internal(Host::Domain(String::new()), None); return Ok(()); } if let Ok((h, remaining)) = Parser::parse_host(input, scheme_type) { host = h; opt_port = if let Some(remaining) = remaining.split_prefix(':') { if remaining.is_empty() { None } else { Parser::parse_port(remaining, || default_port(scheme), Context::Setter) .ok() .map(|(port, _remaining)| port) } } else { None }; } else { return Err(()); } } // Make sure we won't set an empty host to a url with a username or a port if host == Host::Domain("".to_string()) { if !username(&url).is_empty() { return Err(()); } else if let Some(Some(_)) = opt_port { return Err(()); } else if url.port().is_some() { return Err(()); } } url.set_host_internal(host, opt_port); Ok(()) } /// Getter for https://url.spec.whatwg.org/#dom-url-hostname #[inline] pub fn hostname(url: &Url) -> &str { url.host_str().unwrap_or("") } /// Setter for https://url.spec.whatwg.org/#dom-url-hostname #[allow(clippy::result_unit_err)] pub fn set_hostname(url: &mut Url, new_hostname: &str) -> Result<(), ()> { if url.cannot_be_a_base() { return Err(()); } // Host parsing rules are strict we don't want to trim the input let input = Input::no_trim(new_hostname); let scheme_type = SchemeType::from(url.scheme()); if scheme_type == SchemeType::File && new_hostname.is_empty() { url.set_host_internal(Host::Domain(String::new()), None); return Ok(()); } if let Ok((host, _remaining)) = Parser::parse_host(input, scheme_type) { if let Host::Domain(h) = &host { if h.is_empty() { // Empty host on special not file url if SchemeType::from(url.scheme()) == SchemeType::SpecialNotFile // Port with an empty host ||!port(&url).is_empty() // Empty host that includes credentials || !url.username().is_empty() || !url.password().unwrap_or(&"").is_empty() { return Err(()); } } } url.set_host_internal(host, None); Ok(()) } else { Err(()) } } /// Getter for https://url.spec.whatwg.org/#dom-url-port #[inline] pub fn port(url: &Url) -> &str { &url[Position::BeforePort..Position::AfterPort] } /// Setter for https://url.spec.whatwg.org/#dom-url-port #[allow(clippy::result_unit_err)] pub fn set_port(url: &mut Url, new_port: &str) -> Result<(), ()> { let result; { // has_host implies !cannot_be_a_base let scheme = url.scheme(); if !url.has_host() || url.host() == Some(Host::Domain("")) || scheme == "file" { return Err(()); } result = Parser::parse_port( Input::new(new_port), || default_port(scheme), Context::Setter, ) } if let Ok((new_port, _remaining)) = result { url.set_port_internal(new_port); Ok(()) } else { Err(()) } } /// Getter for https://url.spec.whatwg.org/#dom-url-pathname #[inline] pub fn pathname(url: &Url) -> &str { url.path() } /// Setter for https://url.spec.whatwg.org/#dom-url-pathname pub fn set_pathname(url: &mut Url, new_pathname: &str) { if url.cannot_be_a_base() { return; } if new_pathname.starts_with('/') || (SchemeType::from(url.scheme()).is_special() // \ is a segment delimiter for 'special' URLs" && new_pathname.starts_with('\\')) { url.set_path(new_pathname) } else { let mut path_to_set = String::from("/"); path_to_set.push_str(new_pathname); url.set_path(&path_to_set) } } /// Getter for https://url.spec.whatwg.org/#dom-url-search pub fn search(url: &Url) -> &str { 
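// Added clarifying note: the slice below still includes the leading '?' when a
// query is present; the `trim` helper at the bottom of this file collapses a
// lone "?" (an empty query) to "", matching the spec's `search` getter.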
trim(&url[Position::AfterPath..Position::AfterQuery]) } /// Setter for https://url.spec.whatwg.org/#dom-url-search pub fn set_search(url: &mut Url, new_search: &str) { url.set_query(match new_search { "" => None, _ if new_search.starts_with('?') => Some(&new_search[1..]), _ => Some(new_search), }) } /// Getter for https://url.spec.whatwg.org/#dom-url-hash pub fn hash(url: &Url) -> &str { trim(&url[Position::AfterQuery..]) } /// Setter for https://url.spec.whatwg.org/#dom-url-hash pub fn set_hash(url: &mut Url, new_hash: &str) { url.set_fragment(match new_hash { // If the given value is the empty string, // then set context object’s url’s fragment to null and return. "" => None, // Let input be the given value with a single leading U+0023 (#) removed, if any. _ if new_hash.starts_with('#') => Some(&new_hash[1..]), _ => Some(new_hash), }) } fn trim(s: &str) -> &str { if s.len() == 1 { "" } else { s } } vendor/url/src/path_segments.rs0000664000175000017500000002074014160055207017452 0ustar mwhudsonmwhudson// Copyright 2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use crate::parser::{self, to_u32, SchemeType}; use crate::Url; use std::str; /// Exposes methods to manipulate the path of an URL that is not cannot-be-base. /// /// The path always starts with a `/` slash, and is made of slash-separated segments. /// There is always at least one segment (which may be the empty string). /// /// Examples: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("mailto:me@example.com")?; /// assert!(url.path_segments_mut().is_err()); /// /// let mut url = Url::parse("http://example.net/foo/index.html")?; /// url.path_segments_mut().map_err(|_| "cannot be base")? /// .pop().push("img").push("2/100%.png"); /// assert_eq!(url.as_str(), "http://example.net/foo/img/2%2F100%25.png"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[derive(Debug)] pub struct PathSegmentsMut<'a> { url: &'a mut Url, after_first_slash: usize, after_path: String, old_after_path_position: u32, } // Not re-exported outside the crate pub fn new(url: &mut Url) -> PathSegmentsMut<'_> { let after_path = url.take_after_path(); let old_after_path_position = to_u32(url.serialization.len()).unwrap(); // Special urls always have a non empty path if SchemeType::from(url.scheme()).is_special() { debug_assert!(url.byte_at(url.path_start) == b'/'); } else { debug_assert!( url.serialization.len() == url.path_start as usize || url.byte_at(url.path_start) == b'/' ); } PathSegmentsMut { after_first_slash: url.path_start as usize + "/".len(), url, old_after_path_position, after_path, } } impl<'a> Drop for PathSegmentsMut<'a> { fn drop(&mut self) { self.url .restore_after_path(self.old_after_path_position, &self.after_path) } } impl<'a> PathSegmentsMut<'a> { /// Remove all segments in the path, leaving the minimal `url.path() == "/"`. /// /// Returns `&mut Self` so that method calls can be chained. /// /// Example: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("https://github.com/servo/rust-url/")?; /// url.path_segments_mut().map_err(|_| "cannot be base")? 
/// .clear().push("logout"); /// assert_eq!(url.as_str(), "https://github.com/logout"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn clear(&mut self) -> &mut Self { self.url.serialization.truncate(self.after_first_slash); self } /// Remove the last segment of this URL’s path if it is empty, /// except if these was only one segment to begin with. /// /// In other words, remove one path trailing slash, if any, /// unless it is also the initial slash (so this does nothing if `url.path() == "/")`. /// /// Returns `&mut Self` so that method calls can be chained. /// /// Example: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("https://github.com/servo/rust-url/")?; /// url.path_segments_mut().map_err(|_| "cannot be base")? /// .push("pulls"); /// assert_eq!(url.as_str(), "https://github.com/servo/rust-url//pulls"); /// /// let mut url = Url::parse("https://github.com/servo/rust-url/")?; /// url.path_segments_mut().map_err(|_| "cannot be base")? /// .pop_if_empty().push("pulls"); /// assert_eq!(url.as_str(), "https://github.com/servo/rust-url/pulls"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn pop_if_empty(&mut self) -> &mut Self { if self.after_first_slash >= self.url.serialization.len() { return self; } if self.url.serialization[self.after_first_slash..].ends_with('/') { self.url.serialization.pop(); } self } /// Remove the last segment of this URL’s path. /// /// If the path only has one segment, make it empty such that `url.path() == "/"`. /// /// Returns `&mut Self` so that method calls can be chained. pub fn pop(&mut self) -> &mut Self { if self.after_first_slash >= self.url.serialization.len() { return self; } let last_slash = self.url.serialization[self.after_first_slash..] .rfind('/') .unwrap_or(0); self.url .serialization .truncate(self.after_first_slash + last_slash); self } /// Append the given segment at the end of this URL’s path. /// /// See the documentation for `.extend()`. /// /// Returns `&mut Self` so that method calls can be chained. pub fn push(&mut self, segment: &str) -> &mut Self { self.extend(Some(segment)) } /// Append each segment from the given iterator at the end of this URL’s path. /// /// Each segment is percent-encoded like in `Url::parse` or `Url::join`, /// except that `%` and `/` characters are also encoded (to `%25` and `%2F`). /// This is unlike `Url::parse` where `%` is left as-is in case some of the input /// is already percent-encoded, and `/` denotes a path segment separator.) /// /// Note that, in addition to slashes between new segments, /// this always adds a slash between the existing path and the new segments /// *except* if the existing path is `"/"`. /// If the previous last segment was empty (if the path had a trailing slash) /// the path after `.extend()` will contain two consecutive slashes. /// If that is undesired, call `.pop_if_empty()` first. /// /// To obtain a behavior similar to `Url::join`, call `.pop()` unconditionally first. /// /// Returns `&mut Self` so that method calls can be chained. /// /// Example: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("https://github.com/")?; /// let org = "servo"; /// let repo = "rust-url"; /// let issue_number = "188"; /// url.path_segments_mut().map_err(|_| "cannot be base")? 
/// .extend(&[org, repo, "issues", issue_number]); /// assert_eq!(url.as_str(), "https://github.com/servo/rust-url/issues/188"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// In order to make sure that parsing the serialization of an URL gives the same URL, /// a segment is ignored if it is `"."` or `".."`: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("https://github.com/servo")?; /// url.path_segments_mut().map_err(|_| "cannot be base")? /// .extend(&["..", "rust-url", ".", "pulls"]); /// assert_eq!(url.as_str(), "https://github.com/servo/rust-url/pulls"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn extend(&mut self, segments: I) -> &mut Self where I: IntoIterator, I::Item: AsRef, { let scheme_type = SchemeType::from(self.url.scheme()); let path_start = self.url.path_start as usize; self.url.mutate(|parser| { parser.context = parser::Context::PathSegmentSetter; for segment in segments { let segment = segment.as_ref(); if matches!(segment, "." | "..") { continue; } if parser.serialization.len() > path_start + 1 // Non special url's path might still be empty || parser.serialization.len() == path_start { parser.serialization.push('/'); } let mut has_host = true; // FIXME account for this? parser.parse_path( scheme_type, &mut has_host, path_start, parser::Input::new(segment), ); } }); self } } vendor/url/src/origin.rs0000664000175000017500000000763114160055207016104 0ustar mwhudsonmwhudson// Copyright 2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use crate::host::Host; use crate::parser::default_port; use crate::Url; use idna::domain_to_unicode; use std::sync::atomic::{AtomicUsize, Ordering}; pub fn url_origin(url: &Url) -> Origin { let scheme = url.scheme(); match scheme { "blob" => { let result = Url::parse(url.path()); match result { Ok(ref url) => url_origin(url), Err(_) => Origin::new_opaque(), } } "ftp" | "http" | "https" | "ws" | "wss" => Origin::Tuple( scheme.to_owned(), url.host().unwrap().to_owned(), url.port_or_known_default().unwrap(), ), // TODO: Figure out what to do if the scheme is a file "file" => Origin::new_opaque(), _ => Origin::new_opaque(), } } /// The origin of an URL /// /// Two URLs with the same origin are considered /// to originate from the same entity and can therefore trust /// each other. /// /// The origin is determined based on the scheme as follows: /// /// - If the scheme is "blob" the origin is the origin of the /// URL contained in the path component. If parsing fails, /// it is an opaque origin. /// - If the scheme is "ftp", "http", "https", "ws", or "wss", /// then the origin is a tuple of the scheme, host, and port. /// - If the scheme is anything else, the origin is opaque, meaning /// the URL does not have the same origin as any other URL. /// /// For more information see #[derive(PartialEq, Eq, Hash, Clone, Debug)] pub enum Origin { /// A globally unique identifier Opaque(OpaqueOrigin), /// Consists of the URL's scheme, host and port Tuple(String, Host, u16), } impl Origin { /// Creates a new opaque origin that is only equal to itself. 
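///
/// A minimal added sketch (not upstream documentation): every call mints a fresh
/// identifier, so two opaque origins never compare equal, even when they come
/// from identical URLs.
///
/// ```rust
/// use url::Url;
///
/// // `data:` is not a special scheme, so `origin()` yields an opaque origin.
/// let a = Url::parse("data:text/plain,hello").unwrap().origin();
/// let b = Url::parse("data:text/plain,hello").unwrap().origin();
/// assert!(!a.is_tuple());
/// assert_ne!(a, b);
/// ```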
pub fn new_opaque() -> Origin { static COUNTER: AtomicUsize = AtomicUsize::new(0); Origin::Opaque(OpaqueOrigin(COUNTER.fetch_add(1, Ordering::SeqCst))) } /// Return whether this origin is a (scheme, host, port) tuple /// (as opposed to an opaque origin). pub fn is_tuple(&self) -> bool { matches!(*self, Origin::Tuple(..)) } /// pub fn ascii_serialization(&self) -> String { match *self { Origin::Opaque(_) => "null".to_owned(), Origin::Tuple(ref scheme, ref host, port) => { if default_port(scheme) == Some(port) { format!("{}://{}", scheme, host) } else { format!("{}://{}:{}", scheme, host, port) } } } } /// pub fn unicode_serialization(&self) -> String { match *self { Origin::Opaque(_) => "null".to_owned(), Origin::Tuple(ref scheme, ref host, port) => { let host = match *host { Host::Domain(ref domain) => { let (domain, _errors) = domain_to_unicode(domain); Host::Domain(domain) } _ => host.clone(), }; if default_port(scheme) == Some(port) { format!("{}://{}", scheme, host) } else { format!("{}://{}:{}", scheme, host, port) } } } } } /// Opaque identifier for URLs that have file or other schemes #[derive(Eq, PartialEq, Hash, Clone, Debug)] pub struct OpaqueOrigin(usize); vendor/url/src/parser.rs0000664000175000017500000016600714160055207016114 0ustar mwhudsonmwhudson// Copyright 2013-2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::error::Error; use std::fmt::{self, Formatter, Write}; use std::str; use crate::host::{Host, HostInternal}; use crate::Url; use form_urlencoded::EncodingOverride; use percent_encoding::{percent_encode, utf8_percent_encode, AsciiSet, CONTROLS}; /// https://url.spec.whatwg.org/#fragment-percent-encode-set const FRAGMENT: &AsciiSet = &CONTROLS.add(b' ').add(b'"').add(b'<').add(b'>').add(b'`'); /// https://url.spec.whatwg.org/#path-percent-encode-set const PATH: &AsciiSet = &FRAGMENT.add(b'#').add(b'?').add(b'{').add(b'}'); /// https://url.spec.whatwg.org/#userinfo-percent-encode-set pub(crate) const USERINFO: &AsciiSet = &PATH .add(b'/') .add(b':') .add(b';') .add(b'=') .add(b'@') .add(b'[') .add(b'\\') .add(b']') .add(b'^') .add(b'|'); pub(crate) const PATH_SEGMENT: &AsciiSet = &PATH.add(b'/').add(b'%'); // The backslash (\) character is treated as a path separator in special URLs // so it needs to be additionally escaped in that case. pub(crate) const SPECIAL_PATH_SEGMENT: &AsciiSet = &PATH_SEGMENT.add(b'\\'); // https://url.spec.whatwg.org/#query-state const QUERY: &AsciiSet = &CONTROLS.add(b' ').add(b'"').add(b'#').add(b'<').add(b'>'); const SPECIAL_QUERY: &AsciiSet = &QUERY.add(b'\''); pub type ParseResult = Result; macro_rules! simple_enum_error { ($($name: ident => $description: expr,)+) => { /// Errors that can occur during parsing. /// /// This may be extended in the future so exhaustive matching is /// discouraged with an unused variant. #[allow(clippy::manual_non_exhaustive)] // introduced in 1.40, MSRV is 1.36 #[derive(PartialEq, Eq, Clone, Copy, Debug)] pub enum ParseError { $( $name, )+ /// Unused variant enable non-exhaustive matching #[doc(hidden)] __FutureProof, } impl fmt::Display for ParseError { fn fmt(&self, fmt: &mut Formatter<'_>) -> fmt::Result { match *self { $( ParseError::$name => fmt.write_str($description), )+ ParseError::__FutureProof => { unreachable!("Don't abuse the FutureProof!"); } } } } } } impl Error for ParseError {} simple_enum_error! 
{ EmptyHost => "empty host", IdnaError => "invalid international domain name", InvalidPort => "invalid port number", InvalidIpv4Address => "invalid IPv4 address", InvalidIpv6Address => "invalid IPv6 address", InvalidDomainCharacter => "invalid domain character", RelativeUrlWithoutBase => "relative URL without a base", RelativeUrlWithCannotBeABaseBase => "relative URL with a cannot-be-a-base base", SetHostOnCannotBeABaseUrl => "a cannot-be-a-base URL doesn’t have a host to set", Overflow => "URLs more than 4 GB are not supported", } impl From<::idna::Errors> for ParseError { fn from(_: ::idna::Errors) -> ParseError { ParseError::IdnaError } } macro_rules! syntax_violation_enum { ($($name: ident => $description: expr,)+) => { /// Non-fatal syntax violations that can occur during parsing. /// /// This may be extended in the future so exhaustive matching is /// discouraged with an unused variant. #[allow(clippy::manual_non_exhaustive)] // introduced in 1.40, MSRV is 1.36 #[derive(PartialEq, Eq, Clone, Copy, Debug)] pub enum SyntaxViolation { $( $name, )+ /// Unused variant enable non-exhaustive matching #[doc(hidden)] __FutureProof, } impl SyntaxViolation { pub fn description(&self) -> &'static str { match *self { $( SyntaxViolation::$name => $description, )+ SyntaxViolation::__FutureProof => { unreachable!("Don't abuse the FutureProof!"); } } } } } } syntax_violation_enum! { Backslash => "backslash", C0SpaceIgnored => "leading or trailing control or space character are ignored in URLs", EmbeddedCredentials => "embedding authentication information (username or password) \ in an URL is not recommended", ExpectedDoubleSlash => "expected //", ExpectedFileDoubleSlash => "expected // after file:", FileWithHostAndWindowsDrive => "file: with host and Windows drive letter", NonUrlCodePoint => "non-URL code point", NullInFragment => "NULL characters are ignored in URL fragment identifiers", PercentDecode => "expected 2 hex digits after %", TabOrNewlineIgnored => "tabs or newlines are ignored in URLs", UnencodedAtSign => "unencoded @ sign in username or password", } impl fmt::Display for SyntaxViolation { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { fmt::Display::fmt(self.description(), f) } } #[derive(Copy, Clone, PartialEq)] pub enum SchemeType { File, SpecialNotFile, NotSpecial, } impl SchemeType { pub fn is_special(&self) -> bool { !matches!(*self, SchemeType::NotSpecial) } pub fn is_file(&self) -> bool { matches!(*self, SchemeType::File) } pub fn from(s: &str) -> Self { match s { "http" | "https" | "ws" | "wss" | "ftp" => SchemeType::SpecialNotFile, "file" => SchemeType::File, _ => SchemeType::NotSpecial, } } } pub fn default_port(scheme: &str) -> Option { match scheme { "http" | "ws" => Some(80), "https" | "wss" => Some(443), "ftp" => Some(21), _ => None, } } #[derive(Clone)] pub struct Input<'i> { chars: str::Chars<'i>, } impl<'i> Input<'i> { pub fn new(input: &'i str) -> Self { Input::with_log(input, None) } pub fn no_trim(input: &'i str) -> Self { Input { chars: input.chars(), } } pub fn trim_tab_and_newlines( original_input: &'i str, vfn: Option<&dyn Fn(SyntaxViolation)>, ) -> Self { let input = original_input.trim_matches(ascii_tab_or_new_line); if let Some(vfn) = vfn { if input.len() < original_input.len() { vfn(SyntaxViolation::C0SpaceIgnored) } if input.chars().any(|c| matches!(c, '\t' | '\n' | '\r')) { vfn(SyntaxViolation::TabOrNewlineIgnored) } } Input { chars: input.chars(), } } pub fn with_log(original_input: &'i str, vfn: Option<&dyn Fn(SyntaxViolation)>) -> Self { let input 
= original_input.trim_matches(c0_control_or_space); if let Some(vfn) = vfn { if input.len() < original_input.len() { vfn(SyntaxViolation::C0SpaceIgnored) } if input.chars().any(|c| matches!(c, '\t' | '\n' | '\r')) { vfn(SyntaxViolation::TabOrNewlineIgnored) } } Input { chars: input.chars(), } } #[inline] pub fn is_empty(&self) -> bool { self.clone().next().is_none() } #[inline] fn starts_with(&self, p: P) -> bool { p.split_prefix(&mut self.clone()) } #[inline] pub fn split_prefix(&self, p: P) -> Option { let mut remaining = self.clone(); if p.split_prefix(&mut remaining) { Some(remaining) } else { None } } #[inline] fn split_first(&self) -> (Option, Self) { let mut remaining = self.clone(); (remaining.next(), remaining) } #[inline] fn count_matching bool>(&self, f: F) -> (u32, Self) { let mut count = 0; let mut remaining = self.clone(); loop { let mut input = remaining.clone(); if matches!(input.next(), Some(c) if f(c)) { remaining = input; count += 1; } else { return (count, remaining); } } } #[inline] fn next_utf8(&mut self) -> Option<(char, &'i str)> { loop { let utf8 = self.chars.as_str(); match self.chars.next() { Some(c) => { if !matches!(c, '\t' | '\n' | '\r') { return Some((c, &utf8[..c.len_utf8()])); } } None => return None, } } } } pub trait Pattern { fn split_prefix(self, input: &mut Input) -> bool; } impl Pattern for char { fn split_prefix(self, input: &mut Input) -> bool { input.next() == Some(self) } } impl<'a> Pattern for &'a str { fn split_prefix(self, input: &mut Input) -> bool { for c in self.chars() { if input.next() != Some(c) { return false; } } true } } impl bool> Pattern for F { fn split_prefix(self, input: &mut Input) -> bool { input.next().map_or(false, self) } } impl<'i> Iterator for Input<'i> { type Item = char; fn next(&mut self) -> Option { self.chars .by_ref() .find(|&c| !matches!(c, '\t' | '\n' | '\r')) } } pub struct Parser<'a> { pub serialization: String, pub base_url: Option<&'a Url>, pub query_encoding_override: EncodingOverride<'a>, pub violation_fn: Option<&'a dyn Fn(SyntaxViolation)>, pub context: Context, } #[derive(PartialEq, Eq, Copy, Clone)] pub enum Context { UrlParser, Setter, PathSegmentSetter, } impl<'a> Parser<'a> { fn log_violation(&self, v: SyntaxViolation) { if let Some(f) = self.violation_fn { f(v) } } fn log_violation_if(&self, v: SyntaxViolation, test: impl FnOnce() -> bool) { if let Some(f) = self.violation_fn { if test() { f(v) } } } pub fn for_setter(serialization: String) -> Parser<'a> { Parser { serialization, base_url: None, query_encoding_override: None, violation_fn: None, context: Context::Setter, } } /// https://url.spec.whatwg.org/#concept-basic-url-parser pub fn parse_url(mut self, input: &str) -> ParseResult { let input = Input::with_log(input, self.violation_fn); if let Ok(remaining) = self.parse_scheme(input.clone()) { return self.parse_with_scheme(remaining); } // No-scheme state if let Some(base_url) = self.base_url { if input.starts_with('#') { self.fragment_only(base_url, input) } else if base_url.cannot_be_a_base() { Err(ParseError::RelativeUrlWithCannotBeABaseBase) } else { let scheme_type = SchemeType::from(base_url.scheme()); if scheme_type.is_file() { self.parse_file(input, scheme_type, Some(base_url)) } else { self.parse_relative(input, scheme_type, base_url) } } } else { Err(ParseError::RelativeUrlWithoutBase) } } pub fn parse_scheme<'i>(&mut self, mut input: Input<'i>) -> Result, ()> { if input.is_empty() || !input.starts_with(ascii_alpha) { return Err(()); } debug_assert!(self.serialization.is_empty()); while 
let Some(c) = input.next() { match c { 'a'..='z' | 'A'..='Z' | '0'..='9' | '+' | '-' | '.' => { self.serialization.push(c.to_ascii_lowercase()) } ':' => return Ok(input), _ => { self.serialization.clear(); return Err(()); } } } // EOF before ':' if self.context == Context::Setter { Ok(input) } else { self.serialization.clear(); Err(()) } } fn parse_with_scheme(mut self, input: Input<'_>) -> ParseResult { use crate::SyntaxViolation::{ExpectedDoubleSlash, ExpectedFileDoubleSlash}; let scheme_end = to_u32(self.serialization.len())?; let scheme_type = SchemeType::from(&self.serialization); self.serialization.push(':'); match scheme_type { SchemeType::File => { self.log_violation_if(ExpectedFileDoubleSlash, || !input.starts_with("//")); let base_file_url = self.base_url.and_then(|base| { if base.scheme() == "file" { Some(base) } else { None } }); self.serialization.clear(); self.parse_file(input, scheme_type, base_file_url) } SchemeType::SpecialNotFile => { // special relative or authority state let (slashes_count, remaining) = input.count_matching(|c| matches!(c, '/' | '\\')); if let Some(base_url) = self.base_url { if slashes_count < 2 && base_url.scheme() == &self.serialization[..scheme_end as usize] { // "Cannot-be-a-base" URLs only happen with "not special" schemes. debug_assert!(!base_url.cannot_be_a_base()); self.serialization.clear(); return self.parse_relative(input, scheme_type, base_url); } } // special authority slashes state self.log_violation_if(ExpectedDoubleSlash, || { input .clone() .take_while(|&c| matches!(c, '/' | '\\')) .collect::() != "//" }); self.after_double_slash(remaining, scheme_type, scheme_end) } SchemeType::NotSpecial => self.parse_non_special(input, scheme_type, scheme_end), } } /// Scheme other than file, http, https, ws, ws, ftp. 
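///
/// Illustrative sketch added for clarity (not part of the upstream sources): at the
/// public API level, non-special schemes take this branch and behave as follows.
///
/// ```rust
/// use url::Url;
///
/// // No authority: the remainder becomes a cannot-be-a-base path.
/// let d = Url::parse("data:text/plain,hello").unwrap();
/// assert!(d.cannot_be_a_base());
/// assert_eq!(d.path(), "text/plain,hello");
///
/// // A non-special scheme followed by `//` still gets an authority component.
/// let g = Url::parse("git://example.com/repo.git").unwrap();
/// assert_eq!(g.host_str(), Some("example.com"));
/// assert_eq!(g.path(), "/repo.git");
/// ```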
fn parse_non_special( mut self, input: Input<'_>, scheme_type: SchemeType, scheme_end: u32, ) -> ParseResult { // path or authority state ( if let Some(input) = input.split_prefix("//") { return self.after_double_slash(input, scheme_type, scheme_end); } // Anarchist URL (no authority) let path_start = to_u32(self.serialization.len())?; let username_end = path_start; let host_start = path_start; let host_end = path_start; let host = HostInternal::None; let port = None; let remaining = if let Some(input) = input.split_prefix('/') { let path_start = self.serialization.len(); self.serialization.push('/'); self.parse_path(scheme_type, &mut false, path_start, input) } else { self.parse_cannot_be_a_base_path(input) }; self.with_query_and_fragment( scheme_type, scheme_end, username_end, host_start, host_end, host, port, path_start, remaining, ) } fn parse_file( mut self, input: Input<'_>, scheme_type: SchemeType, base_file_url: Option<&Url>, ) -> ParseResult { use crate::SyntaxViolation::Backslash; // file state debug_assert!(self.serialization.is_empty()); let (first_char, input_after_first_char) = input.split_first(); if matches!(first_char, Some('/') | Some('\\')) { self.log_violation_if(SyntaxViolation::Backslash, || first_char == Some('\\')); // file slash state let (next_char, input_after_next_char) = input_after_first_char.split_first(); if matches!(next_char, Some('/') | Some('\\')) { self.log_violation_if(Backslash, || next_char == Some('\\')); // file host state self.serialization.push_str("file://"); let scheme_end = "file".len() as u32; let host_start = "file://".len() as u32; let (path_start, mut host, remaining) = self.parse_file_host(input_after_next_char)?; let mut host_end = to_u32(self.serialization.len())?; let mut has_host = !matches!(host, HostInternal::None); let remaining = if path_start { self.parse_path_start(SchemeType::File, &mut has_host, remaining) } else { let path_start = self.serialization.len(); self.serialization.push('/'); self.parse_path(SchemeType::File, &mut has_host, path_start, remaining) }; // For file URLs that have a host and whose path starts // with the windows drive letter we just remove the host. if !has_host { self.serialization .drain(host_start as usize..host_end as usize); host_end = host_start; host = HostInternal::None; } let (query_start, fragment_start) = self.parse_query_and_fragment(scheme_type, scheme_end, remaining)?; return Ok(Url { serialization: self.serialization, scheme_end, username_end: host_start, host_start, host_end, host, port: None, path_start: host_end, query_start, fragment_start, }); } else { self.serialization.push_str("file://"); let scheme_end = "file".len() as u32; let host_start = "file://".len(); let mut host_end = host_start; let mut host = HostInternal::None; if !starts_with_windows_drive_letter_segment(&input_after_first_char) { if let Some(base_url) = base_file_url { let first_segment = base_url.path_segments().unwrap().next().unwrap(); if is_normalized_windows_drive_letter(first_segment) { self.serialization.push('/'); self.serialization.push_str(first_segment); } else if let Some(host_str) = base_url.host_str() { self.serialization.push_str(host_str); host_end = self.serialization.len(); host = base_url.host; } } } // If c is the EOF code point, U+002F (/), U+005C (\), U+003F (?), or U+0023 (#), then decrease pointer by one let parse_path_input = if let Some(c) = first_char { if c == '/' || c == '\\' || c == '?' 
|| c == '#' { input } else { input_after_first_char } } else { input_after_first_char }; let remaining = self.parse_path(SchemeType::File, &mut false, host_end, parse_path_input); let host_start = host_start as u32; let (query_start, fragment_start) = self.parse_query_and_fragment(scheme_type, scheme_end, remaining)?; let host_end = host_end as u32; return Ok(Url { serialization: self.serialization, scheme_end, username_end: host_start, host_start, host_end, host, port: None, path_start: host_end, query_start, fragment_start, }); } } if let Some(base_url) = base_file_url { match first_char { None => { // Copy everything except the fragment let before_fragment = match base_url.fragment_start { Some(i) => &base_url.serialization[..i as usize], None => &*base_url.serialization, }; self.serialization.push_str(before_fragment); Ok(Url { serialization: self.serialization, fragment_start: None, ..*base_url }) } Some('?') => { // Copy everything up to the query string let before_query = match (base_url.query_start, base_url.fragment_start) { (None, None) => &*base_url.serialization, (Some(i), _) | (None, Some(i)) => base_url.slice(..i), }; self.serialization.push_str(before_query); let (query_start, fragment_start) = self.parse_query_and_fragment(scheme_type, base_url.scheme_end, input)?; Ok(Url { serialization: self.serialization, query_start, fragment_start, ..*base_url }) } Some('#') => self.fragment_only(base_url, input), _ => { if !starts_with_windows_drive_letter_segment(&input) { let before_query = match (base_url.query_start, base_url.fragment_start) { (None, None) => &*base_url.serialization, (Some(i), _) | (None, Some(i)) => base_url.slice(..i), }; self.serialization.push_str(before_query); self.shorten_path(SchemeType::File, base_url.path_start as usize); let remaining = self.parse_path( SchemeType::File, &mut true, base_url.path_start as usize, input, ); self.with_query_and_fragment( SchemeType::File, base_url.scheme_end, base_url.username_end, base_url.host_start, base_url.host_end, base_url.host, base_url.port, base_url.path_start, remaining, ) } else { self.serialization.push_str("file:///"); let scheme_end = "file".len() as u32; let path_start = "file://".len(); let remaining = self.parse_path(SchemeType::File, &mut false, path_start, input); let (query_start, fragment_start) = self.parse_query_and_fragment(SchemeType::File, scheme_end, remaining)?; let path_start = path_start as u32; Ok(Url { serialization: self.serialization, scheme_end, username_end: path_start, host_start: path_start, host_end: path_start, host: HostInternal::None, port: None, path_start, query_start, fragment_start, }) } } } } else { self.serialization.push_str("file:///"); let scheme_end = "file".len() as u32; let path_start = "file://".len(); let remaining = self.parse_path(SchemeType::File, &mut false, path_start, input); let (query_start, fragment_start) = self.parse_query_and_fragment(SchemeType::File, scheme_end, remaining)?; let path_start = path_start as u32; Ok(Url { serialization: self.serialization, scheme_end, username_end: path_start, host_start: path_start, host_end: path_start, host: HostInternal::None, port: None, path_start, query_start, fragment_start, }) } } fn parse_relative( mut self, input: Input<'_>, scheme_type: SchemeType, base_url: &Url, ) -> ParseResult { // relative state debug_assert!(self.serialization.is_empty()); let (first_char, input_after_first_char) = input.split_first(); match first_char { None => { // Copy everything except the fragment let before_fragment = match 
base_url.fragment_start { Some(i) => &base_url.serialization[..i as usize], None => &*base_url.serialization, }; self.serialization.push_str(before_fragment); Ok(Url { serialization: self.serialization, fragment_start: None, ..*base_url }) } Some('?') => { // Copy everything up to the query string let before_query = match (base_url.query_start, base_url.fragment_start) { (None, None) => &*base_url.serialization, (Some(i), _) | (None, Some(i)) => base_url.slice(..i), }; self.serialization.push_str(before_query); let (query_start, fragment_start) = self.parse_query_and_fragment(scheme_type, base_url.scheme_end, input)?; Ok(Url { serialization: self.serialization, query_start, fragment_start, ..*base_url }) } Some('#') => self.fragment_only(base_url, input), Some('/') | Some('\\') => { let (slashes_count, remaining) = input.count_matching(|c| matches!(c, '/' | '\\')); if slashes_count >= 2 { self.log_violation_if(SyntaxViolation::ExpectedDoubleSlash, || { input .clone() .take_while(|&c| matches!(c, '/' | '\\')) .collect::() != "//" }); let scheme_end = base_url.scheme_end; debug_assert!(base_url.byte_at(scheme_end) == b':'); self.serialization .push_str(base_url.slice(..scheme_end + 1)); if let Some(after_prefix) = input.split_prefix("//") { return self.after_double_slash(after_prefix, scheme_type, scheme_end); } return self.after_double_slash(remaining, scheme_type, scheme_end); } let path_start = base_url.path_start; self.serialization.push_str(base_url.slice(..path_start)); self.serialization.push('/'); let remaining = self.parse_path( scheme_type, &mut true, path_start as usize, input_after_first_char, ); self.with_query_and_fragment( scheme_type, base_url.scheme_end, base_url.username_end, base_url.host_start, base_url.host_end, base_url.host, base_url.port, base_url.path_start, remaining, ) } _ => { let before_query = match (base_url.query_start, base_url.fragment_start) { (None, None) => &*base_url.serialization, (Some(i), _) | (None, Some(i)) => base_url.slice(..i), }; self.serialization.push_str(before_query); // FIXME spec says just "remove last entry", not the "pop" algorithm self.pop_path(scheme_type, base_url.path_start as usize); // A special url always has a path. 
// A path always starts with '/' if self.serialization.len() == base_url.path_start as usize && (SchemeType::from(base_url.scheme()).is_special() || !input.is_empty()) { self.serialization.push('/'); } let remaining = match input.split_first() { (Some('/'), remaining) => self.parse_path( scheme_type, &mut true, base_url.path_start as usize, remaining, ), _ => { self.parse_path(scheme_type, &mut true, base_url.path_start as usize, input) } }; self.with_query_and_fragment( scheme_type, base_url.scheme_end, base_url.username_end, base_url.host_start, base_url.host_end, base_url.host, base_url.port, base_url.path_start, remaining, ) } } } fn after_double_slash( mut self, input: Input<'_>, scheme_type: SchemeType, scheme_end: u32, ) -> ParseResult { self.serialization.push('/'); self.serialization.push('/'); // authority state let before_authority = self.serialization.len(); let (username_end, remaining) = self.parse_userinfo(input, scheme_type)?; let has_authority = before_authority != self.serialization.len(); // host state let host_start = to_u32(self.serialization.len())?; let (host_end, host, port, remaining) = self.parse_host_and_port(remaining, scheme_end, scheme_type)?; if host == HostInternal::None && has_authority { return Err(ParseError::EmptyHost); } // path state let path_start = to_u32(self.serialization.len())?; let remaining = self.parse_path_start(scheme_type, &mut true, remaining); self.with_query_and_fragment( scheme_type, scheme_end, username_end, host_start, host_end, host, port, path_start, remaining, ) } /// Return (username_end, remaining) fn parse_userinfo<'i>( &mut self, mut input: Input<'i>, scheme_type: SchemeType, ) -> ParseResult<(u32, Input<'i>)> { let mut last_at = None; let mut remaining = input.clone(); let mut char_count = 0; while let Some(c) = remaining.next() { match c { '@' => { if last_at.is_some() { self.log_violation(SyntaxViolation::UnencodedAtSign) } else { self.log_violation(SyntaxViolation::EmbeddedCredentials) } last_at = Some((char_count, remaining.clone())) } '/' | '?' | '#' => break, '\\' if scheme_type.is_special() => break, _ => (), } char_count += 1; } let (mut userinfo_char_count, remaining) = match last_at { None => return Ok((to_u32(self.serialization.len())?, input)), Some((0, remaining)) => { // Otherwise, if one of the following is true // c is the EOF code point, U+002F (/), U+003F (?), or U+0023 (#) // url is special and c is U+005C (\) // If @ flag is set and buffer is the empty string, validation error, return failure. if let (Some(c), _) = remaining.split_first() { if c == '/' || c == '?' 
|| c == '#' || (scheme_type.is_special() && c == '\\') { return Err(ParseError::EmptyHost); } } return Ok((to_u32(self.serialization.len())?, remaining)); } Some(x) => x, }; let mut username_end = None; let mut has_password = false; let mut has_username = false; while userinfo_char_count > 0 { let (c, utf8_c) = input.next_utf8().unwrap(); userinfo_char_count -= 1; if c == ':' && username_end.is_none() { // Start parsing password username_end = Some(to_u32(self.serialization.len())?); // We don't add a colon if the password is empty if userinfo_char_count > 0 { self.serialization.push(':'); has_password = true; } } else { if !has_password { has_username = true; } self.check_url_code_point(c, &input); self.serialization .extend(utf8_percent_encode(utf8_c, USERINFO)); } } let username_end = match username_end { Some(i) => i, None => to_u32(self.serialization.len())?, }; if has_username || has_password { self.serialization.push('@'); } Ok((username_end, remaining)) } fn parse_host_and_port<'i>( &mut self, input: Input<'i>, scheme_end: u32, scheme_type: SchemeType, ) -> ParseResult<(u32, HostInternal, Option, Input<'i>)> { let (host, remaining) = Parser::parse_host(input, scheme_type)?; write!(&mut self.serialization, "{}", host).unwrap(); let host_end = to_u32(self.serialization.len())?; if let Host::Domain(h) = &host { if h.is_empty() { // Port with an empty host if remaining.starts_with(":") { return Err(ParseError::EmptyHost); } if scheme_type.is_special() { return Err(ParseError::EmptyHost); } } }; let (port, remaining) = if let Some(remaining) = remaining.split_prefix(':') { let scheme = || default_port(&self.serialization[..scheme_end as usize]); Parser::parse_port(remaining, scheme, self.context)? } else { (None, remaining) }; if let Some(port) = port { write!(&mut self.serialization, ":{}", port).unwrap() } Ok((host_end, host.into(), port, remaining)) } pub fn parse_host( mut input: Input<'_>, scheme_type: SchemeType, ) -> ParseResult<(Host, Input<'_>)> { if scheme_type.is_file() { return Parser::get_file_host(input); } // Undo the Input abstraction here to avoid allocating in the common case // where the host part of the input does not contain any tab or newline let input_str = input.chars.as_str(); let mut inside_square_brackets = false; let mut has_ignored_chars = false; let mut non_ignored_chars = 0; let mut bytes = 0; for c in input_str.chars() { match c { ':' if !inside_square_brackets => break, '\\' if scheme_type.is_special() => break, '/' | '?' | '#' => break, '\t' | '\n' | '\r' => { has_ignored_chars = true; } '[' => { inside_square_brackets = true; non_ignored_chars += 1 } ']' => { inside_square_brackets = false; non_ignored_chars += 1 } _ => non_ignored_chars += 1, } bytes += c.len_utf8(); } let replaced: String; let host_str; { let host_input = input.by_ref().take(non_ignored_chars); if has_ignored_chars { replaced = host_input.collect(); host_str = &*replaced } else { for _ in host_input {} host_str = &input_str[..bytes] } } if scheme_type == SchemeType::SpecialNotFile && host_str.is_empty() { return Err(ParseError::EmptyHost); } if !scheme_type.is_special() { let host = Host::parse_opaque(host_str)?; return Ok((host, input)); } let host = Host::parse(host_str)?; Ok((host, input)) } fn get_file_host(input: Input<'_>) -> ParseResult<(Host, Input<'_>)> { let (_, host_str, remaining) = Parser::file_host(input)?; let host = match Host::parse(&host_str)? 
{ Host::Domain(ref d) if d == "localhost" => Host::Domain("".to_string()), host => host, }; Ok((host, remaining)) } fn parse_file_host<'i>( &mut self, input: Input<'i>, ) -> ParseResult<(bool, HostInternal, Input<'i>)> { let has_host; let (_, host_str, remaining) = Parser::file_host(input)?; let host = if host_str.is_empty() { has_host = false; HostInternal::None } else { match Host::parse(&host_str)? { Host::Domain(ref d) if d == "localhost" => { has_host = false; HostInternal::None } host => { write!(&mut self.serialization, "{}", host).unwrap(); has_host = true; host.into() } } }; Ok((has_host, host, remaining)) } pub fn file_host(input: Input) -> ParseResult<(bool, String, Input)> { // Undo the Input abstraction here to avoid allocating in the common case // where the host part of the input does not contain any tab or newline let input_str = input.chars.as_str(); let mut has_ignored_chars = false; let mut non_ignored_chars = 0; let mut bytes = 0; for c in input_str.chars() { match c { '/' | '\\' | '?' | '#' => break, '\t' | '\n' | '\r' => has_ignored_chars = true, _ => non_ignored_chars += 1, } bytes += c.len_utf8(); } let replaced: String; let host_str; let mut remaining = input.clone(); { let host_input = remaining.by_ref().take(non_ignored_chars); if has_ignored_chars { replaced = host_input.collect(); host_str = &*replaced } else { for _ in host_input {} host_str = &input_str[..bytes] } } if is_windows_drive_letter(host_str) { return Ok((false, "".to_string(), input)); } Ok((true, host_str.to_string(), remaining)) } pub fn parse_port
( mut input: Input<'_>, default_port: P, context: Context, ) -> ParseResult<(Option, Input<'_>)> where P: Fn() -> Option, { let mut port: u32 = 0; let mut has_any_digit = false; while let (Some(c), remaining) = input.split_first() { if let Some(digit) = c.to_digit(10) { port = port * 10 + digit; if port > ::std::u16::MAX as u32 { return Err(ParseError::InvalidPort); } has_any_digit = true; } else if context == Context::UrlParser && !matches!(c, '/' | '\\' | '?' | '#') { return Err(ParseError::InvalidPort); } else { break; } input = remaining; } let mut opt_port = Some(port as u16); if !has_any_digit || opt_port == default_port() { opt_port = None; } Ok((opt_port, input)) } pub fn parse_path_start<'i>( &mut self, scheme_type: SchemeType, has_host: &mut bool, input: Input<'i>, ) -> Input<'i> { let path_start = self.serialization.len(); let (maybe_c, remaining) = input.split_first(); // If url is special, then: if scheme_type.is_special() { if maybe_c == Some('\\') { // If c is U+005C (\), validation error. self.log_violation(SyntaxViolation::Backslash); } // A special URL always has a non-empty path. if !self.serialization.ends_with('/') { self.serialization.push('/'); // We have already made sure the forward slash is present. if maybe_c == Some('/') || maybe_c == Some('\\') { return self.parse_path(scheme_type, has_host, path_start, remaining); } } return self.parse_path(scheme_type, has_host, path_start, input); } else if maybe_c == Some('?') || maybe_c == Some('#') { // Otherwise, if state override is not given and c is U+003F (?), // set url’s query to the empty string and state to query state. // Otherwise, if state override is not given and c is U+0023 (#), // set url’s fragment to the empty string and state to fragment state. // The query and path states will be handled by the caller. return input; } if maybe_c != None && maybe_c != Some('/') { self.serialization.push('/'); } // Otherwise, if c is not the EOF code point: self.parse_path(scheme_type, has_host, path_start, input) } pub fn parse_path<'i>( &mut self, scheme_type: SchemeType, has_host: &mut bool, path_start: usize, mut input: Input<'i>, ) -> Input<'i> { // Relative path state loop { let segment_start = self.serialization.len(); let mut ends_with_slash = false; loop { let input_before_c = input.clone(); let (c, utf8_c) = if let Some(x) = input.next_utf8() { x } else { break; }; match c { '/' if self.context != Context::PathSegmentSetter => { self.serialization.push(c); ends_with_slash = true; break; } '\\' if self.context != Context::PathSegmentSetter && scheme_type.is_special() => { self.log_violation(SyntaxViolation::Backslash); self.serialization.push('/'); ends_with_slash = true; break; } '?' 
| '#' if self.context == Context::UrlParser => { input = input_before_c; break; } _ => { self.check_url_code_point(c, &input); if self.context == Context::PathSegmentSetter { if scheme_type.is_special() { self.serialization .extend(utf8_percent_encode(utf8_c, SPECIAL_PATH_SEGMENT)); } else { self.serialization .extend(utf8_percent_encode(utf8_c, PATH_SEGMENT)); } } else { self.serialization.extend(utf8_percent_encode(utf8_c, PATH)); } } } } // Going from &str to String to &str to please the 1.33.0 borrow checker let before_slash_string = if ends_with_slash { self.serialization[segment_start..self.serialization.len() - 1].to_owned() } else { self.serialization[segment_start..self.serialization.len()].to_owned() }; let segment_before_slash: &str = &before_slash_string; match segment_before_slash { // If buffer is a double-dot path segment, shorten url’s path, ".." | "%2e%2e" | "%2e%2E" | "%2E%2e" | "%2E%2E" | "%2e." | "%2E." | ".%2e" | ".%2E" => { debug_assert!(self.serialization.as_bytes()[segment_start - 1] == b'/'); self.serialization.truncate(segment_start); if self.serialization.ends_with('/') && Parser::last_slash_can_be_removed(&self.serialization, path_start) { self.serialization.pop(); } self.shorten_path(scheme_type, path_start); // and then if neither c is U+002F (/), nor url is special and c is U+005C (\), append the empty string to url’s path. if ends_with_slash && !self.serialization.ends_with('/') { self.serialization.push('/'); } } // Otherwise, if buffer is a single-dot path segment and if neither c is U+002F (/), // nor url is special and c is U+005C (\), append the empty string to url’s path. "." | "%2e" | "%2E" => { self.serialization.truncate(segment_start); if !self.serialization.ends_with('/') { self.serialization.push('/'); } } _ => { // If url’s scheme is "file", url’s path is empty, and buffer is a Windows drive letter, then if scheme_type.is_file() && is_windows_drive_letter(segment_before_slash) { // Replace the second code point in buffer with U+003A (:). if let Some(c) = segment_before_slash.chars().next() { self.serialization.truncate(segment_start); self.serialization.push(c); self.serialization.push(':'); if ends_with_slash { self.serialization.push('/'); } } // If url’s host is neither the empty string nor null, // validation error, set url’s host to the empty string. if *has_host { self.log_violation(SyntaxViolation::FileWithHostAndWindowsDrive); *has_host = false; // FIXME account for this in callers } } } } if !ends_with_slash { break; } } if scheme_type.is_file() { // while url’s path’s size is greater than 1 // and url’s path[0] is the empty string, // validation error, remove the first item from url’s path. //FIXME: log violation let path = self.serialization.split_off(path_start); self.serialization.push('/'); self.serialization.push_str(&path.trim_start_matches('/')); } input } fn last_slash_can_be_removed(serialization: &str, path_start: usize) -> bool { let url_before_segment = &serialization[..serialization.len() - 1]; if let Some(segment_before_start) = url_before_segment.rfind('/') { // Do not remove the root slash segment_before_start >= path_start // Or a windows drive letter slash && !path_starts_with_windows_drive_letter(&serialization[segment_before_start..]) } else { false } } /// https://url.spec.whatwg.org/#shorten-a-urls-path fn shorten_path(&mut self, scheme_type: SchemeType, path_start: usize) { // If path is empty, then return. 
if self.serialization.len() == path_start { return; } // If url’s scheme is "file", path’s size is 1, and path[0] is a normalized Windows drive letter, then return. if scheme_type.is_file() && is_normalized_windows_drive_letter(&self.serialization[path_start..]) { return; } // Remove path’s last item. self.pop_path(scheme_type, path_start); } /// https://url.spec.whatwg.org/#pop-a-urls-path fn pop_path(&mut self, scheme_type: SchemeType, path_start: usize) { if self.serialization.len() > path_start { let slash_position = self.serialization[path_start..].rfind('/').unwrap(); // + 1 since rfind returns the position before the slash. let segment_start = path_start + slash_position + 1; // Don’t pop a Windows drive letter if !(scheme_type.is_file() && is_normalized_windows_drive_letter(&self.serialization[segment_start..])) { self.serialization.truncate(segment_start); } } } pub fn parse_cannot_be_a_base_path<'i>(&mut self, mut input: Input<'i>) -> Input<'i> { loop { let input_before_c = input.clone(); match input.next_utf8() { Some(('?', _)) | Some(('#', _)) if self.context == Context::UrlParser => { return input_before_c } Some((c, utf8_c)) => { self.check_url_code_point(c, &input); self.serialization .extend(utf8_percent_encode(utf8_c, CONTROLS)); } None => return input, } } } #[allow(clippy::too_many_arguments)] fn with_query_and_fragment( mut self, scheme_type: SchemeType, scheme_end: u32, username_end: u32, host_start: u32, host_end: u32, host: HostInternal, port: Option, path_start: u32, remaining: Input<'_>, ) -> ParseResult { let (query_start, fragment_start) = self.parse_query_and_fragment(scheme_type, scheme_end, remaining)?; Ok(Url { serialization: self.serialization, scheme_end, username_end, host_start, host_end, host, port, path_start, query_start, fragment_start, }) } /// Return (query_start, fragment_start) fn parse_query_and_fragment( &mut self, scheme_type: SchemeType, scheme_end: u32, mut input: Input<'_>, ) -> ParseResult<(Option, Option)> { let mut query_start = None; match input.next() { Some('#') => {} Some('?') => { query_start = Some(to_u32(self.serialization.len())?); self.serialization.push('?'); let remaining = self.parse_query(scheme_type, scheme_end, input); if let Some(remaining) = remaining { input = remaining } else { return Ok((query_start, None)); } } None => return Ok((None, None)), _ => panic!("Programming error. parse_query_and_fragment() called without ? 
or #"), } let fragment_start = to_u32(self.serialization.len())?; self.serialization.push('#'); self.parse_fragment(input); Ok((query_start, Some(fragment_start))) } pub fn parse_query<'i>( &mut self, scheme_type: SchemeType, scheme_end: u32, mut input: Input<'i>, ) -> Option> { let mut query = String::new(); // FIXME: use a streaming decoder instead let mut remaining = None; while let Some(c) = input.next() { if c == '#' && self.context == Context::UrlParser { remaining = Some(input); break; } else { self.check_url_code_point(c, &input); query.push(c); } } let encoding = match &self.serialization[..scheme_end as usize] { "http" | "https" | "file" | "ftp" => self.query_encoding_override, _ => None, }; let query_bytes = if let Some(o) = encoding { o(&query) } else { query.as_bytes().into() }; let set = if scheme_type.is_special() { SPECIAL_QUERY } else { QUERY }; self.serialization.extend(percent_encode(&query_bytes, set)); remaining } fn fragment_only(mut self, base_url: &Url, mut input: Input<'_>) -> ParseResult { let before_fragment = match base_url.fragment_start { Some(i) => base_url.slice(..i), None => &*base_url.serialization, }; debug_assert!(self.serialization.is_empty()); self.serialization .reserve(before_fragment.len() + input.chars.as_str().len()); self.serialization.push_str(before_fragment); self.serialization.push('#'); let next = input.next(); debug_assert!(next == Some('#')); self.parse_fragment(input); Ok(Url { serialization: self.serialization, fragment_start: Some(to_u32(before_fragment.len())?), ..*base_url }) } pub fn parse_fragment(&mut self, mut input: Input<'_>) { while let Some((c, utf8_c)) = input.next_utf8() { if c == '\0' { self.log_violation(SyntaxViolation::NullInFragment) } else { self.check_url_code_point(c, &input); } self.serialization .extend(utf8_percent_encode(utf8_c, FRAGMENT)); } } fn check_url_code_point(&self, c: char, input: &Input<'_>) { if let Some(vfn) = self.violation_fn { if c == '%' { let mut input = input.clone(); if !matches!((input.next(), input.next()), (Some(a), Some(b)) if is_ascii_hex_digit(a) && is_ascii_hex_digit(b)) { vfn(SyntaxViolation::PercentDecode) } } else if !is_url_code_point(c) { vfn(SyntaxViolation::NonUrlCodePoint) } } } } #[inline] fn is_ascii_hex_digit(c: char) -> bool { matches!(c, 'a'..='f' | 'A'..='F' | '0'..='9') } // Non URL code points: // U+0000 to U+0020 (space) // " # % < > [ \ ] ^ ` { | } // U+007F to U+009F // surrogates // U+FDD0 to U+FDEF // Last two of each plane: U+__FFFE to U+__FFFF for __ in 00 to 10 hex #[inline] fn is_url_code_point(c: char) -> bool { matches!(c, 'a'..='z' | 'A'..='Z' | '0'..='9' | '!' | '$' | '&' | '\'' | '(' | ')' | '*' | '+' | ',' | '-' | '.' | '/' | ':' | ';' | '=' | '?' 
| '@' | '_' | '~' | '\u{A0}'..='\u{D7FF}' | '\u{E000}'..='\u{FDCF}' | '\u{FDF0}'..='\u{FFFD}' | '\u{10000}'..='\u{1FFFD}' | '\u{20000}'..='\u{2FFFD}' | '\u{30000}'..='\u{3FFFD}' | '\u{40000}'..='\u{4FFFD}' | '\u{50000}'..='\u{5FFFD}' | '\u{60000}'..='\u{6FFFD}' | '\u{70000}'..='\u{7FFFD}' | '\u{80000}'..='\u{8FFFD}' | '\u{90000}'..='\u{9FFFD}' | '\u{A0000}'..='\u{AFFFD}' | '\u{B0000}'..='\u{BFFFD}' | '\u{C0000}'..='\u{CFFFD}' | '\u{D0000}'..='\u{DFFFD}' | '\u{E1000}'..='\u{EFFFD}' | '\u{F0000}'..='\u{FFFFD}' | '\u{100000}'..='\u{10FFFD}') } /// https://url.spec.whatwg.org/#c0-controls-and-space #[inline] fn c0_control_or_space(ch: char) -> bool { ch <= ' ' // U+0000 to U+0020 } /// https://infra.spec.whatwg.org/#ascii-tab-or-newline #[inline] fn ascii_tab_or_new_line(ch: char) -> bool { matches!(ch, '\t' | '\r' | '\n') } /// https://url.spec.whatwg.org/#ascii-alpha #[inline] pub fn ascii_alpha(ch: char) -> bool { matches!(ch, 'a'..='z' | 'A'..='Z') } #[inline] pub fn to_u32(i: usize) -> ParseResult<u32> { if i <= ::std::u32::MAX as usize { Ok(i as u32) } else { Err(ParseError::Overflow) } } fn is_normalized_windows_drive_letter(segment: &str) -> bool { is_windows_drive_letter(segment) && segment.as_bytes()[1] == b':' } /// Whether the scheme is file:, the path has a single segment, and that segment /// is a Windows drive letter #[inline] pub fn is_windows_drive_letter(segment: &str) -> bool { segment.len() == 2 && starts_with_windows_drive_letter(segment) } /// Whether the path starts with a root slash /// and a Windows drive letter, e.g. "/c:" or "/a:/" fn path_starts_with_windows_drive_letter(s: &str) -> bool { if let Some(c) = s.as_bytes().get(0) { matches!(c, b'/' | b'\\' | b'?' | b'#') && starts_with_windows_drive_letter(&s[1..]) } else { false } } fn starts_with_windows_drive_letter(s: &str) -> bool { s.len() >= 2 && ascii_alpha(s.as_bytes()[0] as char) && matches!(s.as_bytes()[1], b':' | b'|') && (s.len() == 2 || matches!(s.as_bytes()[2], b'/' | b'\\' | b'?' | b'#')) } /// https://url.spec.whatwg.org/#start-with-a-windows-drive-letter fn starts_with_windows_drive_letter_segment(input: &Input<'_>) -> bool { let mut input = input.clone(); match (input.next(), input.next(), input.next()) { // its first two code points are a Windows drive letter // its third code point is U+002F (/), U+005C (\), U+003F (?), or U+0023 (#). (Some(a), Some(b), Some(c)) if ascii_alpha(a) && matches!(b, ':' | '|') && matches!(c, '/' | '\\' | '?' | '#') => { true } // its first two code points are a Windows drive letter // its length is 2 (Some(a), Some(b), None) if ascii_alpha(a) && matches!(b, ':' | '|') => true, _ => false, } } vendor/url/src/slicing.rs0000664000175000017500000001441514160055207016243 0ustar mwhudsonmwhudson// Copyright 2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use crate::Url; use std::ops::{Index, Range, RangeFrom, RangeFull, RangeTo}; impl Index<RangeFull> for Url { type Output = str; fn index(&self, _: RangeFull) -> &str { &self.serialization } } impl Index<RangeFrom<Position>> for Url { type Output = str; fn index(&self, range: RangeFrom<Position>) -> &str { &self.serialization[self.index(range.start)..]
} } impl Index<RangeTo<Position>> for Url { type Output = str; fn index(&self, range: RangeTo<Position>) -> &str { &self.serialization[..self.index(range.end)] } } impl Index<Range<Position>> for Url { type Output = str; fn index(&self, range: Range<Position>) -> &str { &self.serialization[self.index(range.start)..self.index(range.end)] } } /// Indicates a position within a URL based on its components. /// /// A range of positions can be used for slicing `Url`: /// /// ```rust /// # use url::{Url, Position}; /// # fn something(some_url: Url) { /// let serialization: &str = &some_url[..]; /// let serialization_without_fragment: &str = &some_url[..Position::AfterQuery]; /// let authority: &str = &some_url[Position::BeforeUsername..Position::AfterPort]; /// let data_url_payload: &str = &some_url[Position::BeforePath..Position::AfterQuery]; /// let scheme_relative: &str = &some_url[Position::BeforeUsername..]; /// # } /// ``` /// /// In a pseudo-grammar (where `[`…`]?` makes a sub-sequence optional), /// URL components and delimiters that separate them are: /// /// ```notrust /// url = /// scheme ":" /// [ "//" [ username [ ":" password ]? "@" ]? host [ ":" port ]? ]? /// path [ "?" query ]? [ "#" fragment ]? /// ``` /// /// When a given component is not present, /// its "before" and "after" position are the same /// (so that `&some_url[BeforeFoo..AfterFoo]` is the empty string) /// and component ordering is preserved /// (so that a missing query "is between" a path and a fragment). /// /// The end of a component and the start of the next are either the same or separated /// by a delimiter. /// (Note that the initial `/` of a path is considered part of the path here, not a delimiter.) /// For example, `&url[..BeforeFragment]` would include a `#` delimiter (if present in `url`), /// so `&url[..AfterQuery]` might be desired instead. /// /// `BeforeScheme` and `AfterFragment` are always the start and end of the entire URL, /// so `&url[BeforeScheme..X]` is the same as `&url[..X]` /// and `&url[X..AfterFragment]` is the same as `&url[X..]`.
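///
/// A minimal sketch of the `#` caveat above, assuming only the slicing behaviour
/// documented here:
///
/// ```rust
/// # use url::{Url, Position};
/// # fn run() -> Result<(), url::ParseError> {
/// let url = Url::parse("https://example.com/foo?q=1#frag")?;
/// // `BeforeFragment` sits just past the `#`, so the delimiter is included:
/// assert_eq!(&url[..Position::BeforeFragment], "https://example.com/foo?q=1#");
/// // `AfterQuery` stops before the `#`:
/// assert_eq!(&url[..Position::AfterQuery], "https://example.com/foo?q=1");
/// // The path alone, without the `?` delimiter:
/// assert_eq!(&url[Position::BeforePath..Position::AfterPath], "/foo");
/// # Ok(())
/// # }
/// # run().unwrap();
/// ```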
#[derive(Copy, Clone, Debug)] pub enum Position { BeforeScheme, AfterScheme, BeforeUsername, AfterUsername, BeforePassword, AfterPassword, BeforeHost, AfterHost, BeforePort, AfterPort, BeforePath, AfterPath, BeforeQuery, AfterQuery, BeforeFragment, AfterFragment, } impl Url { #[inline] fn index(&self, position: Position) -> usize { match position { Position::BeforeScheme => 0, Position::AfterScheme => self.scheme_end as usize, Position::BeforeUsername => { if self.has_authority() { self.scheme_end as usize + "://".len() } else { debug_assert!(self.byte_at(self.scheme_end) == b':'); debug_assert!(self.scheme_end + ":".len() as u32 == self.username_end); self.scheme_end as usize + ":".len() } } Position::AfterUsername => self.username_end as usize, Position::BeforePassword => { if self.has_authority() && self.byte_at(self.username_end) == b':' { self.username_end as usize + ":".len() } else { debug_assert!(self.username_end == self.host_start); self.username_end as usize } } Position::AfterPassword => { if self.has_authority() && self.byte_at(self.username_end) == b':' { debug_assert!(self.byte_at(self.host_start - "@".len() as u32) == b'@'); self.host_start as usize - "@".len() } else { debug_assert!(self.username_end == self.host_start); self.host_start as usize } } Position::BeforeHost => self.host_start as usize, Position::AfterHost => self.host_end as usize, Position::BeforePort => { if self.port.is_some() { debug_assert!(self.byte_at(self.host_end) == b':'); self.host_end as usize + ":".len() } else { self.host_end as usize } } Position::AfterPort => self.path_start as usize, Position::BeforePath => self.path_start as usize, Position::AfterPath => match (self.query_start, self.fragment_start) { (Some(q), _) => q as usize, (None, Some(f)) => f as usize, (None, None) => self.serialization.len(), }, Position::BeforeQuery => match (self.query_start, self.fragment_start) { (Some(q), _) => { debug_assert!(self.byte_at(q) == b'?'); q as usize + "?".len() } (None, Some(f)) => f as usize, (None, None) => self.serialization.len(), }, Position::AfterQuery => match self.fragment_start { None => self.serialization.len(), Some(f) => f as usize, }, Position::BeforeFragment => match self.fragment_start { Some(f) => { debug_assert!(self.byte_at(f) == b'#'); f as usize + "#".len() } None => self.serialization.len(), }, Position::AfterFragment => self.serialization.len(), } } } vendor/url/src/host.rs0000664000175000017500000003436214160055207015573 0ustar mwhudsonmwhudson// Copyright 2013-2016 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::cmp; use std::fmt::{self, Formatter}; use std::net::{Ipv4Addr, Ipv6Addr}; use percent_encoding::{percent_decode, utf8_percent_encode, CONTROLS}; #[cfg(feature = "serde")] use serde::{Deserialize, Serialize}; use crate::parser::{ParseError, ParseResult}; #[cfg_attr(feature = "serde", derive(Deserialize, Serialize))] #[derive(Copy, Clone, Debug, Eq, PartialEq)] pub(crate) enum HostInternal { None, Domain, Ipv4(Ipv4Addr), Ipv6(Ipv6Addr), } impl From> for HostInternal { fn from(host: Host) -> HostInternal { match host { Host::Domain(ref s) if s.is_empty() => HostInternal::None, Host::Domain(_) => HostInternal::Domain, Host::Ipv4(address) => HostInternal::Ipv4(address), Host::Ipv6(address) => HostInternal::Ipv6(address), } } } /// The host name of an URL. 
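///
/// A minimal sketch of how the variants surface through `Host::parse`, assuming the
/// parsing and serialization rules described on the variants below:
///
/// ```rust
/// use url::Host;
/// # fn run() -> Result<(), url::ParseError> {
/// // Registrable names parse to `Host::Domain`.
/// assert_eq!(Host::parse("example.com")?, Host::Domain("example.com".to_string()));
/// // Dotted-decimal input becomes `Host::Ipv4` (hex/octal forms such as "0x7f.0.0.1"
/// // normalize to the same address, per the IPv4 number parser in this file).
/// assert!(matches!(Host::parse("127.0.0.1")?, Host::Ipv4(_)));
/// // IPv6 literals are re-serialized with maximal `::` compression.
/// assert_eq!(
///     Host::parse("[2001:0db8:0000:0000:0000:0000:0000:0001]")?.to_string(),
///     "[2001:db8::1]"
/// );
/// # Ok(())
/// # }
/// # run().unwrap();
/// ```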
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))] #[derive(Clone, Debug, Eq, Ord, PartialOrd, Hash)] pub enum Host { /// A DNS domain name, as '.' dot-separated labels. /// Non-ASCII labels are encoded in punycode per IDNA if this is the host of /// a special URL, or percent encoded for non-special URLs. Hosts for /// non-special URLs are also called opaque hosts. Domain(S), /// An IPv4 address. /// `Url::host_str` returns the serialization of this address, /// as four decimal integers separated by `.` dots. Ipv4(Ipv4Addr), /// An IPv6 address. /// `Url::host_str` returns the serialization of that address between `[` and `]` brackets, /// in the format per [RFC 5952 *A Recommendation /// for IPv6 Address Text Representation*](https://tools.ietf.org/html/rfc5952): /// lowercase hexadecimal with maximal `::` compression. Ipv6(Ipv6Addr), } impl<'a> Host<&'a str> { /// Return a copy of `self` that owns an allocated `String` but does not borrow an `&Url`. pub fn to_owned(&self) -> Host { match *self { Host::Domain(domain) => Host::Domain(domain.to_owned()), Host::Ipv4(address) => Host::Ipv4(address), Host::Ipv6(address) => Host::Ipv6(address), } } } impl Host { /// Parse a host: either an IPv6 address in [] square brackets, or a domain. /// /// pub fn parse(input: &str) -> Result { if input.starts_with('[') { if !input.ends_with(']') { return Err(ParseError::InvalidIpv6Address); } return parse_ipv6addr(&input[1..input.len() - 1]).map(Host::Ipv6); } let domain = percent_decode(input.as_bytes()).decode_utf8_lossy(); let domain = idna::domain_to_ascii(&domain)?; if domain.is_empty() { return Err(ParseError::EmptyHost); } let is_invalid_domain_char = |c| { matches!( c, '\0' | '\t' | '\n' | '\r' | ' ' | '#' | '%' | '/' | ':' | '<' | '>' | '?' | '@' | '[' | '\\' | ']' | '^' ) }; if domain.find(is_invalid_domain_char).is_some() { Err(ParseError::InvalidDomainCharacter) } else if let Some(address) = parse_ipv4addr(&domain)? { Ok(Host::Ipv4(address)) } else { Ok(Host::Domain(domain)) } } // pub fn parse_opaque(input: &str) -> Result { if input.starts_with('[') { if !input.ends_with(']') { return Err(ParseError::InvalidIpv6Address); } return parse_ipv6addr(&input[1..input.len() - 1]).map(Host::Ipv6); } let is_invalid_host_char = |c| { matches!( c, '\0' | '\t' | '\n' | '\r' | ' ' | '#' | '/' | ':' | '<' | '>' | '?' 
| '@' | '[' | '\\' | ']' | '^' ) }; if input.find(is_invalid_host_char).is_some() { Err(ParseError::InvalidDomainCharacter) } else { Ok(Host::Domain( utf8_percent_encode(input, CONTROLS).to_string(), )) } } } impl> fmt::Display for Host { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { match *self { Host::Domain(ref domain) => domain.as_ref().fmt(f), Host::Ipv4(ref addr) => addr.fmt(f), Host::Ipv6(ref addr) => { f.write_str("[")?; write_ipv6(addr, f)?; f.write_str("]") } } } } impl PartialEq> for Host where S: PartialEq, { fn eq(&self, other: &Host) -> bool { match (self, other) { (Host::Domain(a), Host::Domain(b)) => a == b, (Host::Ipv4(a), Host::Ipv4(b)) => a == b, (Host::Ipv6(a), Host::Ipv6(b)) => a == b, (_, _) => false, } } } fn write_ipv6(addr: &Ipv6Addr, f: &mut Formatter<'_>) -> fmt::Result { let segments = addr.segments(); let (compress_start, compress_end) = longest_zero_sequence(&segments); let mut i = 0; while i < 8 { if i == compress_start { f.write_str(":")?; if i == 0 { f.write_str(":")?; } if compress_end < 8 { i = compress_end; } else { break; } } write!(f, "{:x}", segments[i as usize])?; if i < 7 { f.write_str(":")?; } i += 1; } Ok(()) } // https://url.spec.whatwg.org/#concept-ipv6-serializer step 2 and 3 fn longest_zero_sequence(pieces: &[u16; 8]) -> (isize, isize) { let mut longest = -1; let mut longest_length = -1; let mut start = -1; macro_rules! finish_sequence( ($end: expr) => { if start >= 0 { let length = $end - start; if length > longest_length { longest = start; longest_length = length; } } }; ); for i in 0..8 { if pieces[i as usize] == 0 { if start < 0 { start = i; } } else { finish_sequence!(i); start = -1; } } finish_sequence!(8); // https://url.spec.whatwg.org/#concept-ipv6-serializer // step 3: ignore lone zeroes if longest_length < 2 { (-1, -2) } else { (longest, longest + longest_length) } } /// fn parse_ipv4number(mut input: &str) -> Result, ()> { let mut r = 10; if input.starts_with("0x") || input.starts_with("0X") { input = &input[2..]; r = 16; } else if input.len() >= 2 && input.starts_with('0') { input = &input[1..]; r = 8; } // At the moment we can't know the reason why from_str_radix fails // https://github.com/rust-lang/rust/issues/22639 // So instead we check if the input looks like a real number and only return // an error when it's an overflow. 
let valid_number = match r { 8 => input.chars().all(|c| ('0'..='7').contains(&c)), 10 => input.chars().all(|c| ('0'..='9').contains(&c)), 16 => input.chars().all(|c| { ('0'..='9').contains(&c) || ('a'..='f').contains(&c) || ('A'..='F').contains(&c) }), _ => false, }; if !valid_number { return Ok(None); } if input.is_empty() { return Ok(Some(0)); } if input.starts_with('+') { return Ok(None); } match u32::from_str_radix(input, r) { Ok(number) => Ok(Some(number)), Err(_) => Err(()), } } /// fn parse_ipv4addr(input: &str) -> ParseResult> { if input.is_empty() { return Ok(None); } let mut parts: Vec<&str> = input.split('.').collect(); if parts.last() == Some(&"") { parts.pop(); } if parts.len() > 4 { return Ok(None); } let mut numbers: Vec = Vec::new(); let mut overflow = false; for part in parts { if part.is_empty() { return Ok(None); } match parse_ipv4number(part) { Ok(Some(n)) => numbers.push(n), Ok(None) => return Ok(None), Err(()) => overflow = true, }; } if overflow { return Err(ParseError::InvalidIpv4Address); } let mut ipv4 = numbers.pop().expect("a non-empty list of numbers"); // Equivalent to: ipv4 >= 256 ** (4 − numbers.len()) if ipv4 > u32::max_value() >> (8 * numbers.len() as u32) { return Err(ParseError::InvalidIpv4Address); } if numbers.iter().any(|x| *x > 255) { return Err(ParseError::InvalidIpv4Address); } for (counter, n) in numbers.iter().enumerate() { ipv4 += n << (8 * (3 - counter as u32)) } Ok(Some(Ipv4Addr::from(ipv4))) } /// fn parse_ipv6addr(input: &str) -> ParseResult { let input = input.as_bytes(); let len = input.len(); let mut is_ip_v4 = false; let mut pieces = [0, 0, 0, 0, 0, 0, 0, 0]; let mut piece_pointer = 0; let mut compress_pointer = None; let mut i = 0; if len < 2 { return Err(ParseError::InvalidIpv6Address); } if input[0] == b':' { if input[1] != b':' { return Err(ParseError::InvalidIpv6Address); } i = 2; piece_pointer = 1; compress_pointer = Some(1); } while i < len { if piece_pointer == 8 { return Err(ParseError::InvalidIpv6Address); } if input[i] == b':' { if compress_pointer.is_some() { return Err(ParseError::InvalidIpv6Address); } i += 1; piece_pointer += 1; compress_pointer = Some(piece_pointer); continue; } let start = i; let end = cmp::min(len, start + 4); let mut value = 0u16; while i < end { match (input[i] as char).to_digit(16) { Some(digit) => { value = value * 0x10 + digit as u16; i += 1; } None => break, } } if i < len { match input[i] { b'.' 
=> { if i == start { return Err(ParseError::InvalidIpv6Address); } i = start; if piece_pointer > 6 { return Err(ParseError::InvalidIpv6Address); } is_ip_v4 = true; } b':' => { i += 1; if i == len { return Err(ParseError::InvalidIpv6Address); } } _ => return Err(ParseError::InvalidIpv6Address), } } if is_ip_v4 { break; } pieces[piece_pointer] = value; piece_pointer += 1; } if is_ip_v4 { if piece_pointer > 6 { return Err(ParseError::InvalidIpv6Address); } let mut numbers_seen = 0; while i < len { if numbers_seen > 0 { if numbers_seen < 4 && (i < len && input[i] == b'.') { i += 1 } else { return Err(ParseError::InvalidIpv6Address); } } let mut ipv4_piece = None; while i < len { let digit = match input[i] { c @ b'0'..=b'9' => c - b'0', _ => break, }; match ipv4_piece { None => ipv4_piece = Some(digit as u16), Some(0) => return Err(ParseError::InvalidIpv6Address), // No leading zero Some(ref mut v) => { *v = *v * 10 + digit as u16; if *v > 255 { return Err(ParseError::InvalidIpv6Address); } } } i += 1; } pieces[piece_pointer] = if let Some(v) = ipv4_piece { pieces[piece_pointer] * 0x100 + v } else { return Err(ParseError::InvalidIpv6Address); }; numbers_seen += 1; if numbers_seen == 2 || numbers_seen == 4 { piece_pointer += 1; } } if numbers_seen != 4 { return Err(ParseError::InvalidIpv6Address); } } if i < len { return Err(ParseError::InvalidIpv6Address); } match compress_pointer { Some(compress_pointer) => { let mut swaps = piece_pointer - compress_pointer; piece_pointer = 7; while swaps > 0 { pieces.swap(piece_pointer, compress_pointer + swaps - 1); swaps -= 1; piece_pointer -= 1; } } _ => { if piece_pointer != 8 { return Err(ParseError::InvalidIpv6Address); } } } Ok(Ipv6Addr::new( pieces[0], pieces[1], pieces[2], pieces[3], pieces[4], pieces[5], pieces[6], pieces[7], )) } vendor/url/src/lib.rs0000664000175000017500000027445114160055207015371 0ustar mwhudsonmwhudson// Copyright 2013-2015 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. /*! rust-url is an implementation of the [URL Standard](http://url.spec.whatwg.org/) for the [Rust](http://rust-lang.org/) programming language. # URL parsing and data structures First, URL parsing may fail for various reasons and therefore returns a `Result`. ``` use url::{Url, ParseError}; assert!(Url::parse("http://[:::1]") == Err(ParseError::InvalidIpv6Address)) ``` Let’s parse a valid URL and look at its components. ``` use url::{Url, Host, Position}; # use url::ParseError; # fn run() -> Result<(), ParseError> { let issue_list_url = Url::parse( "https://github.com/rust-lang/rust/issues?labels=E-easy&state=open" )?; assert!(issue_list_url.scheme() == "https"); assert!(issue_list_url.username() == ""); assert!(issue_list_url.password() == None); assert!(issue_list_url.host_str() == Some("github.com")); assert!(issue_list_url.host() == Some(Host::Domain("github.com"))); assert!(issue_list_url.port() == None); assert!(issue_list_url.path() == "/rust-lang/rust/issues"); assert!(issue_list_url.path_segments().map(|c| c.collect::>()) == Some(vec!["rust-lang", "rust", "issues"])); assert!(issue_list_url.query() == Some("labels=E-easy&state=open")); assert!(&issue_list_url[Position::BeforePath..] 
== "/rust-lang/rust/issues?labels=E-easy&state=open"); assert!(issue_list_url.fragment() == None); assert!(!issue_list_url.cannot_be_a_base()); # Ok(()) # } # run().unwrap(); ``` Some URLs are said to be *cannot-be-a-base*: they don’t have a username, password, host, or port, and their "path" is an arbitrary string rather than slash-separated segments: ``` use url::Url; # use url::ParseError; # fn run() -> Result<(), ParseError> { let data_url = Url::parse("data:text/plain,Hello?World#")?; assert!(data_url.cannot_be_a_base()); assert!(data_url.scheme() == "data"); assert!(data_url.path() == "text/plain,Hello"); assert!(data_url.path_segments().is_none()); assert!(data_url.query() == Some("World")); assert!(data_url.fragment() == Some("")); # Ok(()) # } # run().unwrap(); ``` ## Serde Enable the `serde` feature to include `Deserialize` and `Serialize` implementations for `url::Url`. # Base URL Many contexts allow URL *references* that can be relative to a *base URL*: ```html ``` Since parsed URLs are absolute, giving a base is required for parsing relative URLs: ``` use url::{Url, ParseError}; assert!(Url::parse("../main.css") == Err(ParseError::RelativeUrlWithoutBase)) ``` Use the `join` method on an `Url` to use it as a base URL: ``` use url::Url; # use url::ParseError; # fn run() -> Result<(), ParseError> { let this_document = Url::parse("http://servo.github.io/rust-url/url/index.html")?; let css_url = this_document.join("../main.css")?; assert_eq!(css_url.as_str(), "http://servo.github.io/rust-url/main.css"); # Ok(()) # } # run().unwrap(); ``` # Feature: `serde` If you enable the `serde` feature, [`Url`](struct.Url.html) will implement [`serde::Serialize`](https://docs.rs/serde/1/serde/trait.Serialize.html) and [`serde::Deserialize`](https://docs.rs/serde/1/serde/trait.Deserialize.html). See [serde documentation](https://serde.rs) for more information. ```toml url = { version = "2", features = ["serde"] } ``` */ #![doc(html_root_url = "https://docs.rs/url/2.2.2")] #[macro_use] extern crate matches; pub use form_urlencoded; #[cfg(feature = "serde")] extern crate serde; use crate::host::HostInternal; use crate::parser::{to_u32, Context, Parser, SchemeType, PATH_SEGMENT, USERINFO}; use percent_encoding::{percent_decode, percent_encode, utf8_percent_encode}; use std::borrow::Borrow; use std::cmp; use std::fmt::{self, Write}; use std::hash; use std::io; use std::mem; use std::net::{IpAddr, SocketAddr, ToSocketAddrs}; use std::ops::{Range, RangeFrom, RangeTo}; use std::path::{Path, PathBuf}; use std::str; use std::convert::TryFrom; pub use crate::host::Host; pub use crate::origin::{OpaqueOrigin, Origin}; pub use crate::parser::{ParseError, SyntaxViolation}; pub use crate::path_segments::PathSegmentsMut; pub use crate::slicing::Position; pub use form_urlencoded::EncodingOverride; mod host; mod origin; mod parser; mod path_segments; mod slicing; #[doc(hidden)] pub mod quirks; /// A parsed URL record. #[derive(Clone)] pub struct Url { /// Syntax in pseudo-BNF: /// /// url = scheme ":" [ hierarchical | non-hierarchical ] [ "?" query ]? [ "#" fragment ]? /// non-hierarchical = non-hierarchical-path /// non-hierarchical-path = /* Does not start with "/" */ /// hierarchical = authority? hierarchical-path /// authority = "//" userinfo? host [ ":" port ]? /// userinfo = username [ ":" password ]? 
"@" /// hierarchical-path = [ "/" path-segment ]+ serialization: String, // Components scheme_end: u32, // Before ':' username_end: u32, // Before ':' (if a password is given) or '@' (if not) host_start: u32, host_end: u32, host: HostInternal, port: Option, path_start: u32, // Before initial '/', if any query_start: Option, // Before '?', unlike Position::QueryStart fragment_start: Option, // Before '#', unlike Position::FragmentStart } /// Full configuration for the URL parser. #[derive(Copy, Clone)] pub struct ParseOptions<'a> { base_url: Option<&'a Url>, encoding_override: EncodingOverride<'a>, violation_fn: Option<&'a dyn Fn(SyntaxViolation)>, } impl<'a> ParseOptions<'a> { /// Change the base URL pub fn base_url(mut self, new: Option<&'a Url>) -> Self { self.base_url = new; self } /// Override the character encoding of query strings. /// This is a legacy concept only relevant for HTML. pub fn encoding_override(mut self, new: EncodingOverride<'a>) -> Self { self.encoding_override = new; self } /// Call the provided function or closure for a non-fatal `SyntaxViolation` /// when it occurs during parsing. Note that since the provided function is /// `Fn`, the caller might need to utilize _interior mutability_, such as with /// a `RefCell`, to collect the violations. /// /// ## Example /// ``` /// use std::cell::RefCell; /// use url::{Url, SyntaxViolation}; /// # use url::ParseError; /// # fn run() -> Result<(), url::ParseError> { /// let violations = RefCell::new(Vec::new()); /// let url = Url::options() /// .syntax_violation_callback(Some(&|v| violations.borrow_mut().push(v))) /// .parse("https:////example.com")?; /// assert_eq!(url.as_str(), "https://example.com/"); /// assert_eq!(violations.into_inner(), /// vec!(SyntaxViolation::ExpectedDoubleSlash)); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn syntax_violation_callback(mut self, new: Option<&'a dyn Fn(SyntaxViolation)>) -> Self { self.violation_fn = new; self } /// Parse an URL string with the configuration so far. pub fn parse(self, input: &str) -> Result { Parser { serialization: String::with_capacity(input.len()), base_url: self.base_url, query_encoding_override: self.encoding_override, violation_fn: self.violation_fn, context: Context::UrlParser, } .parse_url(input) } } impl Url { /// Parse an absolute URL from a string. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.net")?; /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// # Errors /// /// If the function can not parse an absolute URL from the given string, /// a [`ParseError`] variant will be returned. /// /// [`ParseError`]: enum.ParseError.html #[inline] pub fn parse(input: &str) -> Result { Url::options().parse(input) } /// Parse an absolute URL from a string and add params to its query string. /// /// Existing params are not removed. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse_with_params("https://example.net?dont=clobberme", /// &[("lang", "rust"), ("browser", "servo")])?; /// assert_eq!("https://example.net/?dont=clobberme&lang=rust&browser=servo", url.as_str()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// # Errors /// /// If the function can not parse an absolute URL from the given string, /// a [`ParseError`] variant will be returned. 
/// /// [`ParseError`]: enum.ParseError.html #[inline] pub fn parse_with_params(input: &str, iter: I) -> Result where I: IntoIterator, I::Item: Borrow<(K, V)>, K: AsRef, V: AsRef, { let mut url = Url::options().parse(input); if let Ok(ref mut url) = url { url.query_pairs_mut().extend_pairs(iter); } url } /// Parse a string as an URL, with this URL as the base URL. /// /// The inverse of this is [`make_relative`]. /// /// Note: a trailing slash is significant. /// Without it, the last path component is considered to be a “file†name /// to be removed to get at the “directory†that is used as the base: /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let base = Url::parse("https://example.net/a/b.html")?; /// let url = base.join("c.png")?; /// assert_eq!(url.as_str(), "https://example.net/a/c.png"); // Not /a/b.html/c.png /// /// let base = Url::parse("https://example.net/a/b/")?; /// let url = base.join("c.png")?; /// assert_eq!(url.as_str(), "https://example.net/a/b/c.png"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// # Errors /// /// If the function can not parse an URL from the given string /// with this URL as the base URL, a [`ParseError`] variant will be returned. /// /// [`ParseError`]: enum.ParseError.html /// [`make_relative`]: #method.make_relative #[inline] pub fn join(&self, input: &str) -> Result { Url::options().base_url(Some(self)).parse(input) } /// Creates a relative URL if possible, with this URL as the base URL. /// /// This is the inverse of [`join`]. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let base = Url::parse("https://example.net/a/b.html")?; /// let url = Url::parse("https://example.net/a/c.png")?; /// let relative = base.make_relative(&url); /// assert_eq!(relative.as_ref().map(|s| s.as_str()), Some("c.png")); /// /// let base = Url::parse("https://example.net/a/b/")?; /// let url = Url::parse("https://example.net/a/b/c.png")?; /// let relative = base.make_relative(&url); /// assert_eq!(relative.as_ref().map(|s| s.as_str()), Some("c.png")); /// /// let base = Url::parse("https://example.net/a/b/")?; /// let url = Url::parse("https://example.net/a/d/c.png")?; /// let relative = base.make_relative(&url); /// assert_eq!(relative.as_ref().map(|s| s.as_str()), Some("../d/c.png")); /// /// let base = Url::parse("https://example.net/a/b.html?c=d")?; /// let url = Url::parse("https://example.net/a/b.html?e=f")?; /// let relative = base.make_relative(&url); /// assert_eq!(relative.as_ref().map(|s| s.as_str()), Some("?e=f")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// # Errors /// /// If this URL can't be a base for the given URL, `None` is returned. /// This is for example the case if the scheme, host or port are not the same. 
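///
/// A minimal sketch of that `None` case, assuming only the behaviour described above:
///
/// ```rust
/// use url::Url;
/// # fn run() -> Result<(), url::ParseError> {
/// let base = Url::parse("https://example.net/a/b/")?;
/// let url = Url::parse("https://example.com/a/b/c.png")?;
/// // Different hosts, so no relative form exists.
/// assert_eq!(base.make_relative(&url), None);
/// # Ok(())
/// # }
/// # run().unwrap();
/// ```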
/// /// [`join`]: #method.join pub fn make_relative(&self, url: &Url) -> Option { if self.cannot_be_a_base() { return None; } // Scheme, host and port need to be the same if self.scheme() != url.scheme() || self.host() != url.host() || self.port() != url.port() { return None; } // We ignore username/password at this point // The path has to be transformed let mut relative = String::new(); // Extract the filename of both URIs, these need to be handled separately fn extract_path_filename(s: &str) -> (&str, &str) { let last_slash_idx = s.rfind('/').unwrap_or(0); let (path, filename) = s.split_at(last_slash_idx); if filename.is_empty() { (path, "") } else { (path, &filename[1..]) } } let (base_path, base_filename) = extract_path_filename(self.path()); let (url_path, url_filename) = extract_path_filename(url.path()); let mut base_path = base_path.split('/').peekable(); let mut url_path = url_path.split('/').peekable(); // Skip over the common prefix while base_path.peek().is_some() && base_path.peek() == url_path.peek() { base_path.next(); url_path.next(); } // Add `..` segments for the remainder of the base path for base_path_segment in base_path { // Skip empty last segments if base_path_segment.is_empty() { break; } if !relative.is_empty() { relative.push('/'); } relative.push_str(".."); } // Append the remainder of the other URI for url_path_segment in url_path { if !relative.is_empty() { relative.push('/'); } relative.push_str(url_path_segment); } // Add the filename if they are not the same if base_filename != url_filename { // If the URIs filename is empty this means that it was a directory // so we'll have to append a '/'. // // Otherwise append it directly as the new filename. if url_filename.is_empty() { relative.push('/'); } else { if !relative.is_empty() { relative.push('/'); } relative.push_str(url_filename); } } // Query and fragment are only taken from the other URI if let Some(query) = url.query() { relative.push('?'); relative.push_str(query); } if let Some(fragment) = url.fragment() { relative.push('#'); relative.push_str(fragment); } Some(relative) } /// Return a default `ParseOptions` that can fully configure the URL parser. /// /// # Examples /// /// Get default `ParseOptions`, then change base url /// /// ```rust /// use url::Url; /// # use url::ParseError; /// # fn run() -> Result<(), ParseError> { /// let options = Url::options(); /// let api = Url::parse("https://api.example.com")?; /// let base_url = options.base_url(Some(&api)); /// let version_url = base_url.parse("version.json")?; /// assert_eq!(version_url.as_str(), "https://api.example.com/version.json"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn options<'a>() -> ParseOptions<'a> { ParseOptions { base_url: None, encoding_override: None, violation_fn: None, } } /// Return the serialization of this URL. /// /// This is fast since that serialization is already stored in the `Url` struct. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url_str = "https://example.net/"; /// let url = Url::parse(url_str)?; /// assert_eq!(url.as_str(), url_str); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn as_str(&self) -> &str { &self.serialization } /// Return the serialization of this URL. /// /// This consumes the `Url` and takes ownership of the `String` stored in it. 
/// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url_str = "https://example.net/"; /// let url = Url::parse(url_str)?; /// assert_eq!(String::from(url), url_str); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] #[deprecated(since = "2.3.0", note = "use Into")] pub fn into_string(self) -> String { self.into() } /// For internal testing, not part of the public API. /// /// Methods of the `Url` struct assume a number of invariants. /// This checks each of these invariants and panic if one is not met. /// This is for testing rust-url itself. #[doc(hidden)] pub fn check_invariants(&self) -> Result<(), String> { macro_rules! assert { ($x: expr) => { if !$x { return Err(format!( "!( {} ) for URL {:?}", stringify!($x), self.serialization )); } }; } macro_rules! assert_eq { ($a: expr, $b: expr) => { { let a = $a; let b = $b; if a != b { return Err(format!("{:?} != {:?} ({} != {}) for URL {:?}", a, b, stringify!($a), stringify!($b), self.serialization)) } } } } assert!(self.scheme_end >= 1); assert!(matches!(self.byte_at(0), b'a'..=b'z' | b'A'..=b'Z')); assert!(self .slice(1..self.scheme_end) .chars() .all(|c| matches!(c, 'a'..='z' | 'A'..='Z' | '0'..='9' | '+' | '-' | '.'))); assert_eq!(self.byte_at(self.scheme_end), b':'); if self.slice(self.scheme_end + 1..).starts_with("//") { // URL with authority if self.username_end != self.serialization.len() as u32 { match self.byte_at(self.username_end) { b':' => { assert!(self.host_start >= self.username_end + 2); assert_eq!(self.byte_at(self.host_start - 1), b'@'); } b'@' => assert!(self.host_start == self.username_end + 1), _ => assert_eq!(self.username_end, self.scheme_end + 3), } } assert!(self.host_start >= self.username_end); assert!(self.host_end >= self.host_start); let host_str = self.slice(self.host_start..self.host_end); match self.host { HostInternal::None => assert_eq!(host_str, ""), HostInternal::Ipv4(address) => assert_eq!(host_str, address.to_string()), HostInternal::Ipv6(address) => { let h: Host = Host::Ipv6(address); assert_eq!(host_str, h.to_string()) } HostInternal::Domain => { if SchemeType::from(self.scheme()).is_special() { assert!(!host_str.is_empty()) } } } if self.path_start == self.host_end { assert_eq!(self.port, None); } else { assert_eq!(self.byte_at(self.host_end), b':'); let port_str = self.slice(self.host_end + 1..self.path_start); assert_eq!( self.port, Some(port_str.parse::().expect("Couldn't parse port?")) ); } assert!( self.path_start as usize == self.serialization.len() || matches!(self.byte_at(self.path_start), b'/' | b'#' | b'?') ); } else { // Anarchist URL (no authority) assert_eq!(self.username_end, self.scheme_end + 1); assert_eq!(self.host_start, self.scheme_end + 1); assert_eq!(self.host_end, self.scheme_end + 1); assert_eq!(self.host, HostInternal::None); assert_eq!(self.port, None); assert_eq!(self.path_start, self.scheme_end + 1); } if let Some(start) = self.query_start { assert!(start >= self.path_start); assert_eq!(self.byte_at(start), b'?'); } if let Some(start) = self.fragment_start { assert!(start >= self.path_start); assert_eq!(self.byte_at(start), b'#'); } if let (Some(query_start), Some(fragment_start)) = (self.query_start, self.fragment_start) { assert!(fragment_start > query_start); } let other = Url::parse(self.as_str()).expect("Failed to parse myself?"); assert_eq!(&self.serialization, &other.serialization); assert_eq!(self.scheme_end, other.scheme_end); assert_eq!(self.username_end, 
other.username_end); assert_eq!(self.host_start, other.host_start); assert_eq!(self.host_end, other.host_end); assert!( self.host == other.host || // XXX No host round-trips to empty host. // See https://github.com/whatwg/url/issues/79 (self.host_str(), other.host_str()) == (None, Some("")) ); assert_eq!(self.port, other.port); assert_eq!(self.path_start, other.path_start); assert_eq!(self.query_start, other.query_start); assert_eq!(self.fragment_start, other.fragment_start); Ok(()) } /// Return the origin of this URL () /// /// Note: this returns an opaque origin for `file:` URLs, which causes /// `url.origin() != url.origin()`. /// /// # Examples /// /// URL with `ftp` scheme: /// /// ```rust /// use url::{Host, Origin, Url}; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://example.com/foo")?; /// assert_eq!(url.origin(), /// Origin::Tuple("ftp".into(), /// Host::Domain("example.com".into()), /// 21)); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// URL with `blob` scheme: /// /// ```rust /// use url::{Host, Origin, Url}; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("blob:https://example.com/foo")?; /// assert_eq!(url.origin(), /// Origin::Tuple("https".into(), /// Host::Domain("example.com".into()), /// 443)); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// URL with `file` scheme: /// /// ```rust /// use url::{Host, Origin, Url}; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("file:///tmp/foo")?; /// assert!(!url.origin().is_tuple()); /// /// let other_url = Url::parse("file:///tmp/foo")?; /// assert!(url.origin() != other_url.origin()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// URL with other scheme: /// /// ```rust /// use url::{Host, Origin, Url}; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("foo:bar")?; /// assert!(!url.origin().is_tuple()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn origin(&self) -> Origin { origin::url_origin(self) } /// Return the scheme of this URL, lower-cased, as an ASCII string without the ':' delimiter. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("file:///tmp/foo")?; /// assert_eq!(url.scheme(), "file"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn scheme(&self) -> &str { self.slice(..self.scheme_end) } /// Return whether the URL has an 'authority', /// which can contain a username, password, host, and port number. /// /// URLs that do *not* are either path-only like `unix:/run/foo.socket` /// or cannot-be-a-base like `data:text/plain,Stuff`. 
/// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://rms@example.com")?; /// assert!(url.has_authority()); /// /// let url = Url::parse("unix:/run/foo.socket")?; /// assert!(!url.has_authority()); /// /// let url = Url::parse("data:text/plain,Stuff")?; /// assert!(!url.has_authority()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn has_authority(&self) -> bool { debug_assert!(self.byte_at(self.scheme_end) == b':'); self.slice(self.scheme_end..).starts_with("://") } /// Return whether this URL is a cannot-be-a-base URL, /// meaning that parsing a relative URL string with this URL as the base will return an error. /// /// This is the case if the scheme and `:` delimiter are not followed by a `/` slash, /// as is typically the case of `data:` and `mailto:` URLs. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://rms@example.com")?; /// assert!(!url.cannot_be_a_base()); /// /// let url = Url::parse("unix:/run/foo.socket")?; /// assert!(!url.cannot_be_a_base()); /// /// let url = Url::parse("data:text/plain,Stuff")?; /// assert!(url.cannot_be_a_base()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn cannot_be_a_base(&self) -> bool { !self.slice(self.scheme_end + 1..).starts_with('/') } /// Return the username for this URL (typically the empty string) /// as a percent-encoded ASCII string. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://rms@example.com")?; /// assert_eq!(url.username(), "rms"); /// /// let url = Url::parse("ftp://:secret123@example.com")?; /// assert_eq!(url.username(), ""); /// /// let url = Url::parse("https://example.com")?; /// assert_eq!(url.username(), ""); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn username(&self) -> &str { let scheme_separator_len = "://".len() as u32; if self.has_authority() && self.username_end > self.scheme_end + scheme_separator_len { self.slice(self.scheme_end + scheme_separator_len..self.username_end) } else { "" } } /// Return the password for this URL, if any, as a percent-encoded ASCII string. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://rms:secret123@example.com")?; /// assert_eq!(url.password(), Some("secret123")); /// /// let url = Url::parse("ftp://:secret123@example.com")?; /// assert_eq!(url.password(), Some("secret123")); /// /// let url = Url::parse("ftp://rms@example.com")?; /// assert_eq!(url.password(), None); /// /// let url = Url::parse("https://example.com")?; /// assert_eq!(url.password(), None); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn password(&self) -> Option<&str> { // This ':' is not the one marking a port number since a host can not be empty. // (Except for file: URLs, which do not have port numbers.) if self.has_authority() && self.username_end != self.serialization.len() as u32 && self.byte_at(self.username_end) == b':' { debug_assert!(self.byte_at(self.host_start - 1) == b'@'); Some(self.slice(self.username_end + 1..self.host_start - 1)) } else { None } } /// Equivalent to `url.host().is_some()`. 
/// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("ftp://rms@example.com")?; /// assert!(url.has_host()); /// /// let url = Url::parse("unix:/run/foo.socket")?; /// assert!(!url.has_host()); /// /// let url = Url::parse("data:text/plain,Stuff")?; /// assert!(!url.has_host()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn has_host(&self) -> bool { !matches!(self.host, HostInternal::None) } /// Return the string representation of the host (domain or IP address) for this URL, if any. /// /// Non-ASCII domains are punycode-encoded per IDNA if this is the host /// of a special URL, or percent encoded for non-special URLs. /// IPv6 addresses are given between `[` and `]` brackets. /// /// Cannot-be-a-base URLs (typical of `data:` and `mailto:`) and some `file:` URLs /// don’t have a host. /// /// See also the `host` method. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://127.0.0.1/index.html")?; /// assert_eq!(url.host_str(), Some("127.0.0.1")); /// /// let url = Url::parse("ftp://rms@example.com")?; /// assert_eq!(url.host_str(), Some("example.com")); /// /// let url = Url::parse("unix:/run/foo.socket")?; /// assert_eq!(url.host_str(), None); /// /// let url = Url::parse("data:text/plain,Stuff")?; /// assert_eq!(url.host_str(), None); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn host_str(&self) -> Option<&str> { if self.has_host() { Some(self.slice(self.host_start..self.host_end)) } else { None } } /// Return the parsed representation of the host for this URL. /// Non-ASCII domain labels are punycode-encoded per IDNA if this is the host /// of a special URL, or percent encoded for non-special URLs. /// /// Cannot-be-a-base URLs (typical of `data:` and `mailto:`) and some `file:` URLs /// don’t have a host. /// /// See also the `host_str` method. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://127.0.0.1/index.html")?; /// assert!(url.host().is_some()); /// /// let url = Url::parse("ftp://rms@example.com")?; /// assert!(url.host().is_some()); /// /// let url = Url::parse("unix:/run/foo.socket")?; /// assert!(url.host().is_none()); /// /// let url = Url::parse("data:text/plain,Stuff")?; /// assert!(url.host().is_none()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn host(&self) -> Option> { match self.host { HostInternal::None => None, HostInternal::Domain => Some(Host::Domain(self.slice(self.host_start..self.host_end))), HostInternal::Ipv4(address) => Some(Host::Ipv4(address)), HostInternal::Ipv6(address) => Some(Host::Ipv6(address)), } } /// If this URL has a host and it is a domain name (not an IP address), return it. /// Non-ASCII domains are punycode-encoded per IDNA if this is the host /// of a special URL, or percent encoded for non-special URLs. 
/// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://127.0.0.1/")?; /// assert_eq!(url.domain(), None); /// /// let url = Url::parse("mailto:rms@example.net")?; /// assert_eq!(url.domain(), None); /// /// let url = Url::parse("https://example.com/")?; /// assert_eq!(url.domain(), Some("example.com")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn domain(&self) -> Option<&str> { match self.host { HostInternal::Domain => Some(self.slice(self.host_start..self.host_end)), _ => None, } } /// Return the port number for this URL, if any. /// /// Note that default port numbers are never reflected by the serialization, /// use the `port_or_known_default()` method if you want a default port number returned. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.com")?; /// assert_eq!(url.port(), None); /// /// let url = Url::parse("https://example.com:443/")?; /// assert_eq!(url.port(), None); /// /// let url = Url::parse("ssh://example.com:22")?; /// assert_eq!(url.port(), Some(22)); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn port(&self) -> Option { self.port } /// Return the port number for this URL, or the default port number if it is known. /// /// This method only knows the default port number /// of the `http`, `https`, `ws`, `wss` and `ftp` schemes. /// /// For URLs in these schemes, this method always returns `Some(_)`. /// For other schemes, it is the same as `Url::port()`. /// /// # Examples /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("foo://example.com")?; /// assert_eq!(url.port_or_known_default(), None); /// /// let url = Url::parse("foo://example.com:1456")?; /// assert_eq!(url.port_or_known_default(), Some(1456)); /// /// let url = Url::parse("https://example.com")?; /// assert_eq!(url.port_or_known_default(), Some(443)); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn port_or_known_default(&self) -> Option { self.port.or_else(|| parser::default_port(self.scheme())) } /// Resolve a URL’s host and port number to `SocketAddr`. /// /// If the URL has the default port number of a scheme that is unknown to this library, /// `default_port_number` provides an opportunity to provide the actual port number. /// In non-example code this should be implemented either simply as `|| None`, /// or by matching on the URL’s `.scheme()`. /// /// If the host is a domain, it is resolved using the standard library’s DNS support. 
/// /// # Examples /// /// ```no_run /// let url = url::Url::parse("https://example.net/").unwrap(); /// let addrs = url.socket_addrs(|| None).unwrap(); /// std::net::TcpStream::connect(&*addrs) /// # ; /// ``` /// /// ``` /// /// With application-specific known default port numbers /// fn socket_addrs(url: url::Url) -> std::io::Result<Vec<std::net::SocketAddr>> { /// url.socket_addrs(|| match url.scheme() { /// "socks5" | "socks5h" => Some(1080), /// _ => None, /// }) /// } /// ``` pub fn socket_addrs( &self, default_port_number: impl Fn() -> Option<u16>, ) -> io::Result<Vec<SocketAddr>> { // Note: trying to avoid the Vec allocation by returning `impl AsRef<[SocketAddr]>` // causes borrowck issues because the return value borrows `default_port_number`: // // https://github.com/rust-lang/rfcs/blob/master/text/1951-expand-impl-trait.md#scoping-for-type-and-lifetime-parameters // // > This RFC proposes that *all* type parameters are considered in scope // > for `impl Trait` in return position fn io_result<T>(opt: Option<T>, message: &str) -> io::Result<T> { opt.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, message)) } let host = io_result(self.host(), "No host name in the URL")?; let port = io_result( self.port_or_known_default().or_else(default_port_number), "No port number in the URL", )?; Ok(match host { Host::Domain(domain) => (domain, port).to_socket_addrs()?.collect(), Host::Ipv4(ip) => vec![(ip, port).into()], Host::Ipv6(ip) => vec![(ip, port).into()], }) } /// Return the path for this URL, as a percent-encoded ASCII string. /// For cannot-be-a-base URLs, this is an arbitrary string that doesn’t start with '/'. /// For other URLs, this starts with a '/' slash /// and continues with slash-separated path segments. /// /// # Examples /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.com/api/versions?page=2")?; /// assert_eq!(url.path(), "/api/versions"); /// /// let url = Url::parse("https://example.com")?; /// assert_eq!(url.path(), "/"); /// /// let url = Url::parse("https://example.com/countries/việt nam")?; /// assert_eq!(url.path(), "/countries/vi%E1%BB%87t%20nam"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn path(&self) -> &str { match (self.query_start, self.fragment_start) { (None, None) => self.slice(self.path_start..), (Some(next_component_start), _) | (None, Some(next_component_start)) => { self.slice(self.path_start..next_component_start) } } } /// Unless this URL is cannot-be-a-base, /// return an iterator of '/' slash-separated path segments, /// each as a percent-encoded ASCII string. /// /// Return `None` for cannot-be-a-base URLs. /// /// When `Some` is returned, the iterator always contains at least one string /// (which may be empty). 
/// /// # Examples /// /// ``` /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box<dyn Error>> { /// let url = Url::parse("https://example.com/foo/bar")?; /// let mut path_segments = url.path_segments().ok_or_else(|| "cannot be base")?; /// assert_eq!(path_segments.next(), Some("foo")); /// assert_eq!(path_segments.next(), Some("bar")); /// assert_eq!(path_segments.next(), None); /// /// let url = Url::parse("https://example.com")?; /// let mut path_segments = url.path_segments().ok_or_else(|| "cannot be base")?; /// assert_eq!(path_segments.next(), Some("")); /// assert_eq!(path_segments.next(), None); /// /// let url = Url::parse("data:text/plain,HelloWorld")?; /// assert!(url.path_segments().is_none()); /// /// let url = Url::parse("https://example.com/countries/việt nam")?; /// let mut path_segments = url.path_segments().ok_or_else(|| "cannot be base")?; /// assert_eq!(path_segments.next(), Some("countries")); /// assert_eq!(path_segments.next(), Some("vi%E1%BB%87t%20nam")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[allow(clippy::manual_strip)] // introduced in 1.45, MSRV is 1.36 pub fn path_segments(&self) -> Option<str::Split<'_, char>> { let path = self.path(); if path.starts_with('/') { Some(path[1..].split('/')) } else { None } } /// Return this URL’s query string, if any, as a percent-encoded ASCII string. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.com/products?page=2")?; /// let query = url.query(); /// assert_eq!(query, Some("page=2")); /// /// let url = Url::parse("https://example.com/products")?; /// let query = url.query(); /// assert!(query.is_none()); /// /// let url = Url::parse("https://example.com/?country=español")?; /// let query = url.query(); /// assert_eq!(query, Some("country=espa%C3%B1ol")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn query(&self) -> Option<&str> { match (self.query_start, self.fragment_start) { (None, _) => None, (Some(query_start), None) => { debug_assert!(self.byte_at(query_start) == b'?'); Some(self.slice(query_start + 1..)) } (Some(query_start), Some(fragment_start)) => { debug_assert!(self.byte_at(query_start) == b'?'); Some(self.slice(query_start + 1..fragment_start)) } } } /// Parse the URL’s query string, if any, as `application/x-www-form-urlencoded` /// and return an iterator of (key, value) pairs. /// /// # Examples /// /// ```rust /// use std::borrow::Cow; /// /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.com/products?page=2&sort=desc")?; /// let mut pairs = url.query_pairs(); /// /// assert_eq!(pairs.count(), 2); /// /// assert_eq!(pairs.next(), Some((Cow::Borrowed("page"), Cow::Borrowed("2")))); /// assert_eq!(pairs.next(), Some((Cow::Borrowed("sort"), Cow::Borrowed("desc")))); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[inline] pub fn query_pairs(&self) -> form_urlencoded::Parse<'_> { form_urlencoded::parse(self.query().unwrap_or("").as_bytes()) } /// Return this URL’s fragment identifier, if any. /// /// A fragment is the part of the URL after the `#` symbol. /// The fragment is optional and, if present, contains a fragment identifier /// that identifies a secondary resource, such as a section heading /// of a document. /// /// In HTML, the fragment identifier is usually the id attribute of an element /// that is scrolled to on load. 
Browsers typically will not send the fragment portion /// of a URL to the server. /// /// **Note:** the parser did *not* percent-encode this component, /// but the input may have been percent-encoded already. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let url = Url::parse("https://example.com/data.csv#row=4")?; /// /// assert_eq!(url.fragment(), Some("row=4")); /// /// let url = Url::parse("https://example.com/data.csv#cell=4,1-6,2")?; /// /// assert_eq!(url.fragment(), Some("cell=4,1-6,2")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn fragment(&self) -> Option<&str> { self.fragment_start.map(|start| { debug_assert!(self.byte_at(start) == b'#'); self.slice(start + 1..) }) } fn mutate<F: FnOnce(&mut Parser<'_>) -> R, R>(&mut self, f: F) -> R { let mut parser = Parser::for_setter(mem::replace(&mut self.serialization, String::new())); let result = f(&mut parser); self.serialization = parser.serialization; result } /// Change this URL’s fragment identifier. /// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.com/data.csv")?; /// assert_eq!(url.as_str(), "https://example.com/data.csv"); /// url.set_fragment(Some("cell=4,1-6,2")); /// assert_eq!(url.as_str(), "https://example.com/data.csv#cell=4,1-6,2"); /// assert_eq!(url.fragment(), Some("cell=4,1-6,2")); /// /// url.set_fragment(None); /// assert_eq!(url.as_str(), "https://example.com/data.csv"); /// assert!(url.fragment().is_none()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn set_fragment(&mut self, fragment: Option<&str>) { // Remove any previous fragment if let Some(start) = self.fragment_start { debug_assert!(self.byte_at(start) == b'#'); self.serialization.truncate(start as usize); } // Write the new one if let Some(input) = fragment { self.fragment_start = Some(to_u32(self.serialization.len()).unwrap()); self.serialization.push('#'); self.mutate(|parser| parser.parse_fragment(parser::Input::no_trim(input))) } else { self.fragment_start = None } } fn take_fragment(&mut self) -> Option<String> { self.fragment_start.take().map(|start| { debug_assert!(self.byte_at(start) == b'#'); let fragment = self.slice(start + 1..).to_owned(); self.serialization.truncate(start as usize); fragment }) } fn restore_already_parsed_fragment(&mut self, fragment: Option<String>) { if let Some(ref fragment) = fragment { assert!(self.fragment_start.is_none()); self.fragment_start = Some(to_u32(self.serialization.len()).unwrap()); self.serialization.push('#'); self.serialization.push_str(fragment); } } /// Change this URL’s query string. 
/// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.com/products")?; /// assert_eq!(url.as_str(), "https://example.com/products"); /// /// url.set_query(Some("page=2")); /// assert_eq!(url.as_str(), "https://example.com/products?page=2"); /// assert_eq!(url.query(), Some("page=2")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn set_query(&mut self, query: Option<&str>) { let fragment = self.take_fragment(); // Remove any previous query if let Some(start) = self.query_start.take() { debug_assert!(self.byte_at(start) == b'?'); self.serialization.truncate(start as usize); } // Write the new query, if any if let Some(input) = query { self.query_start = Some(to_u32(self.serialization.len()).unwrap()); self.serialization.push('?'); let scheme_type = SchemeType::from(self.scheme()); let scheme_end = self.scheme_end; self.mutate(|parser| { let vfn = parser.violation_fn; parser.parse_query( scheme_type, scheme_end, parser::Input::trim_tab_and_newlines(input, vfn), ) }); } self.restore_already_parsed_fragment(fragment); } /// Manipulate this URL’s query string, viewed as a sequence of name/value pairs /// in `application/x-www-form-urlencoded` syntax. /// /// The return value has a method-chaining API: /// /// ```rust /// # use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.net?lang=fr#nav")?; /// assert_eq!(url.query(), Some("lang=fr")); /// /// url.query_pairs_mut().append_pair("foo", "bar"); /// assert_eq!(url.query(), Some("lang=fr&foo=bar")); /// assert_eq!(url.as_str(), "https://example.net/?lang=fr&foo=bar#nav"); /// /// url.query_pairs_mut() /// .clear() /// .append_pair("foo", "bar & baz") /// .append_pair("saisons", "\u{00C9}t\u{00E9}+hiver"); /// assert_eq!(url.query(), Some("foo=bar+%26+baz&saisons=%C3%89t%C3%A9%2Bhiver")); /// assert_eq!(url.as_str(), /// "https://example.net/?foo=bar+%26+baz&saisons=%C3%89t%C3%A9%2Bhiver#nav"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Note: `url.query_pairs_mut().clear();` is equivalent to `url.set_query(Some(""))`, /// not `url.set_query(None)`. /// /// The state of `Url` is unspecified if this return value is leaked without being dropped. pub fn query_pairs_mut(&mut self) -> form_urlencoded::Serializer<'_, UrlQuery<'_>> { let fragment = self.take_fragment(); let query_start; if let Some(start) = self.query_start { debug_assert!(self.byte_at(start) == b'?'); query_start = start as usize; } else { query_start = self.serialization.len(); self.query_start = Some(to_u32(query_start).unwrap()); self.serialization.push('?'); } let query = UrlQuery { url: Some(self), fragment, }; form_urlencoded::Serializer::for_suffix(query, query_start + "?".len()) } fn take_after_path(&mut self) -> String { match (self.query_start, self.fragment_start) { (Some(i), _) | (None, Some(i)) => { let after_path = self.slice(i..).to_owned(); self.serialization.truncate(i as usize); after_path } (None, None) => String::new(), } } /// Change this URL’s path. 
/// /// # Examples /// /// ```rust /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.com")?; /// url.set_path("api/comments"); /// assert_eq!(url.as_str(), "https://example.com/api/comments"); /// assert_eq!(url.path(), "/api/comments"); /// /// let mut url = Url::parse("https://example.com/api")?; /// url.set_path("data/report.csv"); /// assert_eq!(url.as_str(), "https://example.com/data/report.csv"); /// assert_eq!(url.path(), "/data/report.csv"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` pub fn set_path(&mut self, mut path: &str) { let after_path = self.take_after_path(); let old_after_path_pos = to_u32(self.serialization.len()).unwrap(); let cannot_be_a_base = self.cannot_be_a_base(); let scheme_type = SchemeType::from(self.scheme()); self.serialization.truncate(self.path_start as usize); self.mutate(|parser| { if cannot_be_a_base { if path.starts_with('/') { parser.serialization.push_str("%2F"); path = &path[1..]; } parser.parse_cannot_be_a_base_path(parser::Input::new(path)); } else { let mut has_host = true; // FIXME parser.parse_path_start(scheme_type, &mut has_host, parser::Input::new(path)); } }); self.restore_after_path(old_after_path_pos, &after_path); } /// Return an object with methods to manipulate this URL’s path segments. /// /// Return `Err(())` if this URL is cannot-be-a-base. #[allow(clippy::result_unit_err)] pub fn path_segments_mut(&mut self) -> Result, ()> { if self.cannot_be_a_base() { Err(()) } else { Ok(path_segments::new(self)) } } fn restore_after_path(&mut self, old_after_path_position: u32, after_path: &str) { let new_after_path_position = to_u32(self.serialization.len()).unwrap(); let adjust = |index: &mut u32| { *index -= old_after_path_position; *index += new_after_path_position; }; if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } self.serialization.push_str(after_path) } /// Change this URL’s port number. /// /// Note that default port numbers are not reflected in the serialization. /// /// If this URL is cannot-be-a-base, does not have a host, or has the `file` scheme; /// do nothing and return `Err`. 
/// /// # Examples /// /// ``` /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("ssh://example.net:2048/")?; /// /// url.set_port(Some(4096)).map_err(|_| "cannot be base")?; /// assert_eq!(url.as_str(), "ssh://example.net:4096/"); /// /// url.set_port(None).map_err(|_| "cannot be base")?; /// assert_eq!(url.as_str(), "ssh://example.net/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Known default port numbers are not reflected: /// /// ```rust /// use url::Url; /// # use std::error::Error; /// /// # fn run() -> Result<(), Box> { /// let mut url = Url::parse("https://example.org/")?; /// /// url.set_port(Some(443)).map_err(|_| "cannot be base")?; /// assert!(url.port().is_none()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot set port for cannot-be-a-base URLs: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rms@example.net")?; /// /// let result = url.set_port(Some(80)); /// assert!(result.is_err()); /// /// let result = url.set_port(None); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[allow(clippy::result_unit_err)] pub fn set_port(&mut self, mut port: Option) -> Result<(), ()> { // has_host implies !cannot_be_a_base if !self.has_host() || self.host() == Some(Host::Domain("")) || self.scheme() == "file" { return Err(()); } if port.is_some() && port == parser::default_port(self.scheme()) { port = None } self.set_port_internal(port); Ok(()) } fn set_port_internal(&mut self, port: Option) { match (self.port, port) { (None, None) => {} (Some(_), None) => { self.serialization .drain(self.host_end as usize..self.path_start as usize); let offset = self.path_start - self.host_end; self.path_start = self.host_end; if let Some(ref mut index) = self.query_start { *index -= offset } if let Some(ref mut index) = self.fragment_start { *index -= offset } } (Some(old), Some(new)) if old == new => {} (_, Some(new)) => { let path_and_after = self.slice(self.path_start..).to_owned(); self.serialization.truncate(self.host_end as usize); write!(&mut self.serialization, ":{}", new).unwrap(); let old_path_start = self.path_start; let new_path_start = to_u32(self.serialization.len()).unwrap(); self.path_start = new_path_start; let adjust = |index: &mut u32| { *index -= old_path_start; *index += new_path_start; }; if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } self.serialization.push_str(&path_and_after); } } self.port = port; } /// Change this URL’s host. /// /// Removing the host (calling this with `None`) /// will also remove any username, password, and port number. 
/// /// # Examples /// /// Change host: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.net")?; /// let result = url.set_host(Some("rust-lang.org")); /// assert!(result.is_ok()); /// assert_eq!(url.as_str(), "https://rust-lang.org/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Remove host: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("foo://example.net")?; /// let result = url.set_host(None); /// assert!(result.is_ok()); /// assert_eq!(url.as_str(), "foo:/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot remove host for 'special' schemes (e.g. `http`): /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.net")?; /// let result = url.set_host(None); /// assert!(result.is_err()); /// assert_eq!(url.as_str(), "https://example.net/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot change or remove host for cannot-be-a-base URLs: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rms@example.net")?; /// /// let result = url.set_host(Some("rust-lang.org")); /// assert!(result.is_err()); /// assert_eq!(url.as_str(), "mailto:rms@example.net"); /// /// let result = url.set_host(None); /// assert!(result.is_err()); /// assert_eq!(url.as_str(), "mailto:rms@example.net"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// # Errors /// /// If this URL is cannot-be-a-base or there is an error parsing the given `host`, /// a [`ParseError`] variant will be returned. /// /// [`ParseError`]: enum.ParseError.html pub fn set_host(&mut self, host: Option<&str>) -> Result<(), ParseError> { if self.cannot_be_a_base() { return Err(ParseError::SetHostOnCannotBeABaseUrl); } if let Some(host) = host { if host.is_empty() && SchemeType::from(self.scheme()).is_special() { return Err(ParseError::EmptyHost); } let mut host_substr = host; // Otherwise, if c is U+003A (:) and the [] flag is unset, then if !host.starts_with('[') || !host.ends_with(']') { match host.find(':') { Some(0) => { // If buffer is the empty string, validation error, return failure. 
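// (A colon at index 0 means the would-be host name before it is empty, i.e. the "buffer" in the spec step above.)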
return Err(ParseError::InvalidDomainCharacter); } // Let host be the result of host parsing buffer Some(colon_index) => { host_substr = &host[..colon_index]; } None => {} } } if SchemeType::from(self.scheme()).is_special() { self.set_host_internal(Host::parse(host_substr)?, None); } else { self.set_host_internal(Host::parse_opaque(host_substr)?, None); } } else if self.has_host() { let scheme_type = SchemeType::from(self.scheme()); if scheme_type.is_special() { return Err(ParseError::EmptyHost); } else if self.serialization.len() == self.path_start as usize { self.serialization.push('/'); } debug_assert!(self.byte_at(self.scheme_end) == b':'); debug_assert!(self.byte_at(self.path_start) == b'/'); let new_path_start = self.scheme_end + 1; self.serialization .drain(new_path_start as usize..self.path_start as usize); let offset = self.path_start - new_path_start; self.path_start = new_path_start; self.username_end = new_path_start; self.host_start = new_path_start; self.host_end = new_path_start; self.port = None; if let Some(ref mut index) = self.query_start { *index -= offset } if let Some(ref mut index) = self.fragment_start { *index -= offset } } Ok(()) } /// opt_new_port: None means leave unchanged, Some(None) means remove any port number. fn set_host_internal(&mut self, host: Host, opt_new_port: Option>) { let old_suffix_pos = if opt_new_port.is_some() { self.path_start } else { self.host_end }; let suffix = self.slice(old_suffix_pos..).to_owned(); self.serialization.truncate(self.host_start as usize); if !self.has_authority() { debug_assert!(self.slice(self.scheme_end..self.host_start) == ":"); debug_assert!(self.username_end == self.host_start); self.serialization.push('/'); self.serialization.push('/'); self.username_end += 2; self.host_start += 2; } write!(&mut self.serialization, "{}", host).unwrap(); self.host_end = to_u32(self.serialization.len()).unwrap(); self.host = host.into(); if let Some(new_port) = opt_new_port { self.port = new_port; if let Some(port) = new_port { write!(&mut self.serialization, ":{}", port).unwrap(); } } let new_suffix_pos = to_u32(self.serialization.len()).unwrap(); self.serialization.push_str(&suffix); let adjust = |index: &mut u32| { *index -= old_suffix_pos; *index += new_suffix_pos; }; adjust(&mut self.path_start); if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } } /// Change this URL’s host to the given IP address. /// /// If this URL is cannot-be-a-base, do nothing and return `Err`. /// /// Compared to `Url::set_host`, this skips the host parser. 
/// /// # Examples /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("http://example.com")?; /// url.set_ip_host("127.0.0.1".parse().unwrap()); /// assert_eq!(url.host_str(), Some("127.0.0.1")); /// assert_eq!(url.as_str(), "http://127.0.0.1/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot change URL's from mailto(cannot-be-base) to ip: /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rms@example.com")?; /// let result = url.set_ip_host("127.0.0.1".parse().unwrap()); /// /// assert_eq!(url.as_str(), "mailto:rms@example.com"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// #[allow(clippy::result_unit_err)] pub fn set_ip_host(&mut self, address: IpAddr) -> Result<(), ()> { if self.cannot_be_a_base() { return Err(()); } let address = match address { IpAddr::V4(address) => Host::Ipv4(address), IpAddr::V6(address) => Host::Ipv6(address), }; self.set_host_internal(address, None); Ok(()) } /// Change this URL’s password. /// /// If this URL is cannot-be-a-base or does not have a host, do nothing and return `Err`. /// /// # Examples /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rmz@example.com")?; /// let result = url.set_password(Some("secret_password")); /// assert!(result.is_err()); /// /// let mut url = Url::parse("ftp://user1:secret1@example.com")?; /// let result = url.set_password(Some("secret_password")); /// assert_eq!(url.password(), Some("secret_password")); /// /// let mut url = Url::parse("ftp://user2:@example.com")?; /// let result = url.set_password(Some("secret2")); /// assert!(result.is_ok()); /// assert_eq!(url.password(), Some("secret2")); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[allow(clippy::result_unit_err)] pub fn set_password(&mut self, password: Option<&str>) -> Result<(), ()> { // has_host implies !cannot_be_a_base if !self.has_host() || self.host() == Some(Host::Domain("")) || self.scheme() == "file" { return Err(()); } if let Some(password) = password { let host_and_after = self.slice(self.host_start..).to_owned(); self.serialization.truncate(self.username_end as usize); self.serialization.push(':'); self.serialization .extend(utf8_percent_encode(password, USERINFO)); self.serialization.push('@'); let old_host_start = self.host_start; let new_host_start = to_u32(self.serialization.len()).unwrap(); let adjust = |index: &mut u32| { *index -= old_host_start; *index += new_host_start; }; self.host_start = new_host_start; adjust(&mut self.host_end); adjust(&mut self.path_start); if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } self.serialization.push_str(&host_and_after); } else if self.byte_at(self.username_end) == b':' { // If there is a password to remove let has_username_or_password = self.byte_at(self.host_start - 1) == b'@'; debug_assert!(has_username_or_password); let username_start = self.scheme_end + 3; let empty_username = username_start == self.username_end; let start = self.username_end; // Remove the ':' let end = if empty_username { self.host_start // Remove the '@' as well } else { self.host_start - 1 // Keep the '@' to separate the username from the host }; self.serialization.drain(start as usize..end as usize); let offset = end - start; self.host_start -= offset; 
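// The remaining stored indices shift left by the same `offset` as well.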
self.host_end -= offset; self.path_start -= offset; if let Some(ref mut index) = self.query_start { *index -= offset } if let Some(ref mut index) = self.fragment_start { *index -= offset } } Ok(()) } /// Change this URL’s username. /// /// If this URL is cannot-be-a-base or does not have a host, do nothing and return `Err`. /// # Examples /// /// Cannot setup username from mailto(cannot-be-base) /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rmz@example.com")?; /// let result = url.set_username("user1"); /// assert_eq!(url.as_str(), "mailto:rmz@example.com"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Setup username to user1 /// /// ```rust /// use url::{Url, ParseError}; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("ftp://:secre1@example.com/")?; /// let result = url.set_username("user1"); /// assert!(result.is_ok()); /// assert_eq!(url.username(), "user1"); /// assert_eq!(url.as_str(), "ftp://user1:secre1@example.com/"); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[allow(clippy::result_unit_err)] pub fn set_username(&mut self, username: &str) -> Result<(), ()> { // has_host implies !cannot_be_a_base if !self.has_host() || self.host() == Some(Host::Domain("")) || self.scheme() == "file" { return Err(()); } let username_start = self.scheme_end + 3; debug_assert!(self.slice(self.scheme_end..username_start) == "://"); if self.slice(username_start..self.username_end) == username { return Ok(()); } let after_username = self.slice(self.username_end..).to_owned(); self.serialization.truncate(username_start as usize); self.serialization .extend(utf8_percent_encode(username, USERINFO)); let mut removed_bytes = self.username_end; self.username_end = to_u32(self.serialization.len()).unwrap(); let mut added_bytes = self.username_end; let new_username_is_empty = self.username_end == username_start; match (new_username_is_empty, after_username.chars().next()) { (true, Some('@')) => { removed_bytes += 1; self.serialization.push_str(&after_username[1..]); } (false, Some('@')) | (_, Some(':')) | (true, _) => { self.serialization.push_str(&after_username); } (false, _) => { added_bytes += 1; self.serialization.push('@'); self.serialization.push_str(&after_username); } } let adjust = |index: &mut u32| { *index -= removed_bytes; *index += added_bytes; }; adjust(&mut self.host_start); adjust(&mut self.host_end); adjust(&mut self.path_start); if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } Ok(()) } /// Change this URL’s scheme. /// /// Do nothing and return `Err` under the following circumstances: /// /// * If the new scheme is not in `[a-zA-Z][a-zA-Z0-9+.-]+` /// * If this URL is cannot-be-a-base and the new scheme is one of /// `http`, `https`, `ws`, `wss` or `ftp` /// * If either the old or new scheme is `http`, `https`, `ws`, /// `wss` or `ftp` and the other is not one of these /// * If the new scheme is `file` and this URL includes credentials /// or has a non-null port /// * If this URL's scheme is `file` and its host is empty or null /// /// See also [the URL specification's section on legal scheme state /// overrides](https://url.spec.whatwg.org/#scheme-state). 
/// /// # Examples /// /// Change the URL’s scheme from `https` to `foo`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.net")?; /// let result = url.set_scheme("http"); /// assert_eq!(url.as_str(), "http://example.net/"); /// assert!(result.is_ok()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// Change the URL’s scheme from `foo` to `bar`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("foo://example.net")?; /// let result = url.set_scheme("bar"); /// assert_eq!(url.as_str(), "bar://example.net"); /// assert!(result.is_ok()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot change URL’s scheme from `https` to `foõ`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("https://example.net")?; /// let result = url.set_scheme("foõ"); /// assert_eq!(url.as_str(), "https://example.net/"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// /// Cannot change URL’s scheme from `mailto` (cannot-be-a-base) to `https`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("mailto:rms@example.net")?; /// let result = url.set_scheme("https"); /// assert_eq!(url.as_str(), "mailto:rms@example.net"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// Cannot change the URL’s scheme from `foo` to `https`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("foo://example.net")?; /// let result = url.set_scheme("https"); /// assert_eq!(url.as_str(), "foo://example.net"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` /// Cannot change the URL’s scheme from `http` to `foo`: /// /// ``` /// use url::Url; /// # use url::ParseError; /// /// # fn run() -> Result<(), ParseError> { /// let mut url = Url::parse("http://example.net")?; /// let result = url.set_scheme("foo"); /// assert_eq!(url.as_str(), "http://example.net/"); /// assert!(result.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// ``` #[allow(clippy::result_unit_err, clippy::suspicious_operation_groupings)] pub fn set_scheme(&mut self, scheme: &str) -> Result<(), ()> { let mut parser = Parser::for_setter(String::new()); let remaining = parser.parse_scheme(parser::Input::new(scheme))?; let new_scheme_type = SchemeType::from(&parser.serialization); let old_scheme_type = SchemeType::from(self.scheme()); // If url’s scheme is a special scheme and buffer is not a special scheme, then return. if (new_scheme_type.is_special() && !old_scheme_type.is_special()) || // If url’s scheme is not a special scheme and buffer is a special scheme, then return. (!new_scheme_type.is_special() && old_scheme_type.is_special()) || // If url includes credentials or has a non-null port, and buffer is "file", then return. // If url’s scheme is "file" and its host is an empty host or null, then return. 
(new_scheme_type.is_file() && self.has_authority()) { return Err(()); } if !remaining.is_empty() || (!self.has_host() && new_scheme_type.is_special()) { return Err(()); } let old_scheme_end = self.scheme_end; let new_scheme_end = to_u32(parser.serialization.len()).unwrap(); let adjust = |index: &mut u32| { *index -= old_scheme_end; *index += new_scheme_end; }; self.scheme_end = new_scheme_end; adjust(&mut self.username_end); adjust(&mut self.host_start); adjust(&mut self.host_end); adjust(&mut self.path_start); if let Some(ref mut index) = self.query_start { adjust(index) } if let Some(ref mut index) = self.fragment_start { adjust(index) } parser.serialization.push_str(self.slice(old_scheme_end..)); self.serialization = parser.serialization; // Update the port so it can be removed // If it is the scheme's default // we don't mind it silently failing // if there was no port in the first place let previous_port = self.port(); let _ = self.set_port(previous_port); Ok(()) } /// Convert a file name as `std::path::Path` into an URL in the `file` scheme. /// /// This returns `Err` if the given path is not absolute or, /// on Windows, if the prefix is not a disk prefix (e.g. `C:`) or a UNC prefix (`\\`). /// /// # Examples /// /// On Unix-like platforms: /// /// ``` /// # if cfg!(unix) { /// use url::Url; /// /// # fn run() -> Result<(), ()> { /// let url = Url::from_file_path("/tmp/foo.txt")?; /// assert_eq!(url.as_str(), "file:///tmp/foo.txt"); /// /// let url = Url::from_file_path("../foo.txt"); /// assert!(url.is_err()); /// /// let url = Url::from_file_path("https://google.com/"); /// assert!(url.is_err()); /// # Ok(()) /// # } /// # run().unwrap(); /// # } /// ``` #[cfg(any(unix, windows, target_os = "redox"))] #[allow(clippy::result_unit_err)] pub fn from_file_path<P: AsRef<Path>>(path: P) -> Result<Url, ()> { let mut serialization = "file://".to_owned(); let host_start = serialization.len() as u32; let (host_end, host) = path_to_file_url_segments(path.as_ref(), &mut serialization)?; Ok(Url { serialization, scheme_end: "file".len() as u32, username_end: host_start, host_start, host_end, host, port: None, path_start: host_end, query_start: None, fragment_start: None, }) } /// Convert a directory name as `std::path::Path` into an URL in the `file` scheme. /// /// This returns `Err` if the given path is not absolute or, /// on Windows, if the prefix is not a disk prefix (e.g. `C:`) or a UNC prefix (`\\`). /// /// Compared to `from_file_path`, this ensures that the URL’s path has a trailing slash /// so that the entire path is considered when using this URL as a base URL. /// /// For example: /// /// * `"index.html"` parsed with `Url::from_directory_path(Path::new("/var/www"))` /// as the base URL is `file:///var/www/index.html` /// * `"index.html"` parsed with `Url::from_file_path(Path::new("/var/www"))` /// as the base URL is `file:///var/index.html`, which might not be what was intended. /// /// Note that `std::path` does not consider trailing slashes significant /// and usually does not include them (e.g. in `Path::parent()`). #[cfg(any(unix, windows, target_os = "redox"))] #[allow(clippy::result_unit_err)] pub fn from_directory_path<P: AsRef<Path>>(path: P) -> Result<Url, ()> { let mut url = Url::from_file_path(path)?; if !url.serialization.ends_with('/') { url.serialization.push('/') } Ok(url) } /// Serialize with Serde using the internal representation of the `Url` struct. /// /// The corresponding `deserialize_internal` method sacrifices some invariant-checking /// for speed, compared to the `Deserialize` trait impl. 
/// /// This method is only available if the `serde` Cargo feature is enabled. #[cfg(feature = "serde")] #[deny(unused)] pub fn serialize_internal<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: serde::Serializer, { use serde::Serialize; // Destructuring first lets us ensure that adding or removing fields forces this method // to be updated let Url { ref serialization, ref scheme_end, ref username_end, ref host_start, ref host_end, ref host, ref port, ref path_start, ref query_start, ref fragment_start, } = *self; ( serialization, scheme_end, username_end, host_start, host_end, host, port, path_start, query_start, fragment_start, ) .serialize(serializer) } /// Deserialize with Serde using the internal representation of the `Url` struct. /// /// This is the counterpart of `serialize_internal`; it sacrifices some invariant-checking /// for speed, compared to the `Deserialize` trait impl. /// /// This method is only available if the `serde` Cargo feature is enabled. #[cfg(feature = "serde")] #[deny(unused)] pub fn deserialize_internal<'de, D>(deserializer: D) -> Result<Url, D::Error> where D: serde::Deserializer<'de>, { use serde::de::{Deserialize, Error, Unexpected}; let ( serialization, scheme_end, username_end, host_start, host_end, host, port, path_start, query_start, fragment_start, ) = Deserialize::deserialize(deserializer)?; let url = Url { serialization, scheme_end, username_end, host_start, host_end, host, port, path_start, query_start, fragment_start, }; if cfg!(debug_assertions) { url.check_invariants().map_err(|reason| { let reason: &str = &reason; Error::invalid_value(Unexpected::Other("value"), &reason) })? } Ok(url) } /// Assuming the URL is in the `file` scheme or similar, /// convert its path to an absolute `std::path::Path`. /// /// **Note:** This does not actually check the URL’s `scheme`, /// and may give nonsensical results for other schemes. /// It is the user’s responsibility to check the URL’s scheme before calling this. /// /// ``` /// # use url::Url; /// # let url = Url::parse("file:///etc/passwd").unwrap(); /// let path = url.to_file_path(); /// ``` /// /// Returns `Err` if the host is neither empty nor `"localhost"` (except on Windows, where /// `file:` URLs may have a non-local host), /// or if `Path::new_opt()` returns `None`. /// (That is, if the percent-decoded path contains a NUL byte or, /// for a Windows path, is not UTF-8.) #[inline] #[cfg(any(unix, windows, target_os = "redox"))] #[allow(clippy::result_unit_err)] pub fn to_file_path(&self) -> Result<PathBuf, ()> { if let Some(segments) = self.path_segments() { let host = match self.host() { None | Some(Host::Domain("localhost")) => None, Some(_) if cfg!(windows) && self.scheme() == "file" => { Some(&self.serialization[self.host_start as usize..self.host_end as usize]) } _ => return Err(()), }; return file_url_segments_to_pathbuf(host, segments); } Err(()) } // Private helper methods: #[inline] fn slice<R>(&self, range: R) -> &str where R: RangeArg, { range.slice_of(&self.serialization) } #[inline] fn byte_at(&self, i: u32) -> u8 { self.serialization.as_bytes()[i as usize] } } /// Parse a string as an URL, without a base URL or encoding override. impl str::FromStr for Url { type Err = ParseError; #[inline] fn from_str(input: &str) -> Result<Url, ParseError> { Url::parse(input) } } impl<'a> TryFrom<&'a str> for Url { type Error = ParseError; fn try_from(s: &'a str) -> Result<Self, Self::Error> { Url::parse(s) } } /// Display the serialization of this URL. 
impl fmt::Display for Url { #[inline] fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(&self.serialization, formatter) } } /// String conversion. impl From<Url> for String { fn from(value: Url) -> String { value.serialization } } /// Debug the serialization of this URL. impl fmt::Debug for Url { #[inline] fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { formatter .debug_struct("Url") .field("scheme", &self.scheme()) .field("cannot_be_a_base", &self.cannot_be_a_base()) .field("username", &self.username()) .field("password", &self.password()) .field("host", &self.host()) .field("port", &self.port()) .field("path", &self.path()) .field("query", &self.query()) .field("fragment", &self.fragment()) .finish() } } /// URLs compare like their serialization. impl Eq for Url {} /// URLs compare like their serialization. impl PartialEq for Url { #[inline] fn eq(&self, other: &Self) -> bool { self.serialization == other.serialization } } /// URLs compare like their serialization. impl Ord for Url { #[inline] fn cmp(&self, other: &Self) -> cmp::Ordering { self.serialization.cmp(&other.serialization) } } /// URLs compare like their serialization. impl PartialOrd for Url { #[inline] fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> { self.serialization.partial_cmp(&other.serialization) } } /// URLs hash like their serialization. impl hash::Hash for Url { #[inline] fn hash<H>(&self, state: &mut H) where H: hash::Hasher, { hash::Hash::hash(&self.serialization, state) } } /// Return the serialization of this URL. impl AsRef<str> for Url { #[inline] fn as_ref(&self) -> &str { &self.serialization } } trait RangeArg { fn slice_of<'a>(&self, s: &'a str) -> &'a str; } impl RangeArg for Range<u32> { #[inline] fn slice_of<'a>(&self, s: &'a str) -> &'a str { &s[self.start as usize..self.end as usize] } } impl RangeArg for RangeFrom<u32> { #[inline] fn slice_of<'a>(&self, s: &'a str) -> &'a str { &s[self.start as usize..] } } impl RangeArg for RangeTo<u32> { #[inline] fn slice_of<'a>(&self, s: &'a str) -> &'a str { &s[..self.end as usize] } } /// Serializes this URL into a `serde` stream. /// /// This implementation is only available if the `serde` Cargo feature is enabled. #[cfg(feature = "serde")] impl serde::Serialize for Url { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: serde::Serializer, { serializer.serialize_str(self.as_str()) } } /// Deserializes this URL from a `serde` stream. /// /// This implementation is only available if the `serde` Cargo feature is enabled. 
#[cfg(feature = "serde")] impl<'de> serde::Deserialize<'de> for Url { fn deserialize(deserializer: D) -> Result where D: serde::Deserializer<'de>, { use serde::de::{Error, Unexpected, Visitor}; struct UrlVisitor; impl<'de> Visitor<'de> for UrlVisitor { type Value = Url; fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result { formatter.write_str("a string representing an URL") } fn visit_str(self, s: &str) -> Result where E: Error, { Url::parse(s).map_err(|err| { let err_s = format!("{}", err); Error::invalid_value(Unexpected::Str(s), &err_s.as_str()) }) } } deserializer.deserialize_str(UrlVisitor) } } #[cfg(any(unix, target_os = "redox"))] fn path_to_file_url_segments( path: &Path, serialization: &mut String, ) -> Result<(u32, HostInternal), ()> { use std::os::unix::prelude::OsStrExt; if !path.is_absolute() { return Err(()); } let host_end = to_u32(serialization.len()).unwrap(); let mut empty = true; // skip the root component for component in path.components().skip(1) { empty = false; serialization.push('/'); serialization.extend(percent_encode( component.as_os_str().as_bytes(), PATH_SEGMENT, )); } if empty { // An URL’s path must not be empty. serialization.push('/'); } Ok((host_end, HostInternal::None)) } #[cfg(windows)] fn path_to_file_url_segments( path: &Path, serialization: &mut String, ) -> Result<(u32, HostInternal), ()> { path_to_file_url_segments_windows(path, serialization) } // Build this unconditionally to alleviate https://github.com/servo/rust-url/issues/102 #[cfg_attr(not(windows), allow(dead_code))] fn path_to_file_url_segments_windows( path: &Path, serialization: &mut String, ) -> Result<(u32, HostInternal), ()> { use std::path::{Component, Prefix}; if !path.is_absolute() { return Err(()); } let mut components = path.components(); let host_start = serialization.len() + 1; let host_end; let host_internal; match components.next() { Some(Component::Prefix(ref p)) => match p.kind() { Prefix::Disk(letter) | Prefix::VerbatimDisk(letter) => { host_end = to_u32(serialization.len()).unwrap(); host_internal = HostInternal::None; serialization.push('/'); serialization.push(letter as char); serialization.push(':'); } Prefix::UNC(server, share) | Prefix::VerbatimUNC(server, share) => { let host = Host::parse(server.to_str().ok_or(())?).map_err(|_| ())?; write!(serialization, "{}", host).unwrap(); host_end = to_u32(serialization.len()).unwrap(); host_internal = host.into(); serialization.push('/'); let share = share.to_str().ok_or(())?; serialization.extend(percent_encode(share.as_bytes(), PATH_SEGMENT)); } _ => return Err(()), }, _ => return Err(()), } let mut path_only_has_prefix = true; for component in components { if component == Component::RootDir { continue; } path_only_has_prefix = false; // FIXME: somehow work with non-unicode? let component = component.as_os_str().to_str().ok_or(())?; serialization.push('/'); serialization.extend(percent_encode(component.as_bytes(), PATH_SEGMENT)); } // A windows drive letter must end with a slash. 
if serialization.len() > host_start && parser::is_windows_drive_letter(&serialization[host_start..]) && path_only_has_prefix { serialization.push('/'); } Ok((host_end, host_internal)) } #[cfg(any(unix, target_os = "redox"))] fn file_url_segments_to_pathbuf( host: Option<&str>, segments: str::Split<'_, char>, ) -> Result<PathBuf, ()> { use std::ffi::OsStr; use std::os::unix::prelude::OsStrExt; if host.is_some() { return Err(()); } let mut bytes = if cfg!(target_os = "redox") { b"file:".to_vec() } else { Vec::new() }; for segment in segments { bytes.push(b'/'); bytes.extend(percent_decode(segment.as_bytes())); } // A windows drive letter must end with a slash. if bytes.len() > 2 && matches!(bytes[bytes.len() - 2], b'a'..=b'z' | b'A'..=b'Z') && matches!(bytes[bytes.len() - 1], b':' | b'|') { bytes.push(b'/'); } let os_str = OsStr::from_bytes(&bytes); let path = PathBuf::from(os_str); debug_assert!( path.is_absolute(), "to_file_path() failed to produce an absolute Path" ); Ok(path) } #[cfg(windows)] fn file_url_segments_to_pathbuf( host: Option<&str>, segments: str::Split<'_, char>, ) -> Result<PathBuf, ()> { file_url_segments_to_pathbuf_windows(host, segments) } // Build this unconditionally to alleviate https://github.com/servo/rust-url/issues/102 #[cfg_attr(not(windows), allow(dead_code))] fn file_url_segments_to_pathbuf_windows( host: Option<&str>, mut segments: str::Split<'_, char>, ) -> Result<PathBuf, ()> { let mut string = if let Some(host) = host { r"\\".to_owned() + host } else { let first = segments.next().ok_or(())?; match first.len() { 2 => { if !first.starts_with(parser::ascii_alpha) || first.as_bytes()[1] != b':' { return Err(()); } first.to_owned() } 4 => { if !first.starts_with(parser::ascii_alpha) { return Err(()); } let bytes = first.as_bytes(); if bytes[1] != b'%' || bytes[2] != b'3' || (bytes[3] != b'a' && bytes[3] != b'A') { return Err(()); } first[0..1].to_owned() + ":" } _ => return Err(()), } }; for segment in segments { string.push('\\'); // Currently non-unicode windows paths cannot be represented match String::from_utf8(percent_decode(segment.as_bytes()).collect()) { Ok(s) => string.push_str(&s), Err(..) => return Err(()), } } let path = PathBuf::from(string); debug_assert!( path.is_absolute(), "to_file_path() failed to produce an absolute Path" ); Ok(path) } /// Implementation detail of `Url::query_pairs_mut`. Typically not used directly. #[derive(Debug)] pub struct UrlQuery<'a> { url: Option<&'a mut Url>, fragment: Option<String>, } // `as_mut_string` string here exposes the internal serialization of an `Url`, // which should not be exposed to users. // We achieve that by not giving users direct access to `UrlQuery`: // * Its fields are private // (and so can not be constructed with struct literal syntax outside of this crate), // * It has no constructor // * It is only visible (on the type level) to users in the return type of // `Url::query_pairs_mut` which is `Serializer` // * `Serializer` keeps its target in a private field // * Unlike in other `Target` impls, `UrlQuery::finished` does not return `Self`. 
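// Illustrative sketch, not part of the upstream crate: a small test spelling out the behaviour
// the comments above rely on. Editing the query through `query_pairs_mut` temporarily takes the
// fragment out of the serialization, and `UrlQuery`'s `finish`/`Drop` restore it afterwards.
// The module and test names below are hypothetical.
#[cfg(test)]
mod url_query_fragment_sketch {
    use super::Url;

    #[test]
    fn fragment_survives_query_pairs_mut() {
        let mut url = Url::parse("https://example.net/?lang=fr#nav").unwrap();
        // The serializer appends to the query in place...
        url.query_pairs_mut().append_pair("foo", "bar");
        // ...and the saved "#nav" fragment is re-appended when the serializer is dropped.
        assert_eq!(url.as_str(), "https://example.net/?lang=fr&foo=bar#nav");
    }
}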
impl<'a> form_urlencoded::Target for UrlQuery<'a> { fn as_mut_string(&mut self) -> &mut String { &mut self.url.as_mut().unwrap().serialization } fn finish(mut self) -> &'a mut Url { let url = self.url.take().unwrap(); url.restore_already_parsed_fragment(self.fragment.take()); url } type Finished = &'a mut Url; } impl<'a> Drop for UrlQuery<'a> { fn drop(&mut self) { if let Some(url) = self.url.take() { url.restore_already_parsed_fragment(self.fragment.take()) } } } vendor/url/tests/0000775000175000017500000000000014160055207014613 5ustar mwhudsonmwhudsonvendor/url/tests/urltestdata.json0000664000175000017500000045656214160055207020064 0ustar mwhudsonmwhudson[ "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/script-tests/segments.js", "# AS OF https://github.com/jsdom/whatwg-url/commit/35f04dfd3048cf6362f4398745bb13375c5020c2", { "input": "http://example\t.\norg", "base": "http://example.org/foo/bar", "href": "http://example.org/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://user:pass@foo:21/bar;par?b#c", "base": "http://example.org/foo/bar", "href": "http://user:pass@foo:21/bar;par?b#c", "origin": "http://foo:21", "protocol": "http:", "username": "user", "password": "pass", "host": "foo:21", "hostname": "foo", "port": "21", "pathname": "/bar;par", "search": "?b", "hash": "#c" }, { "input": "https://test:@test", "base": "about:blank", "href": "https://test@test/", "origin": "https://test", "protocol": "https:", "username": "test", "password": "", "host": "test", "hostname": "test", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https://:@test", "base": "about:blank", "href": "https://test/", "origin": "https://test", "protocol": "https:", "username": "", "password": "", "host": "test", "hostname": "test", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "non-special://test:@test/x", "base": "about:blank", "href": "non-special://test@test/x", "origin": "null", "protocol": "non-special:", "username": "test", "password": "", "host": "test", "hostname": "test", "port": "", "pathname": "/x", "search": "", "hash": "" }, { "input": "non-special://:@test/x", "base": "about:blank", "href": "non-special://test/x", "origin": "null", "protocol": "non-special:", "username": "", "password": "", "host": "test", "hostname": "test", "port": "", "pathname": "/x", "search": "", "hash": "" }, { "input": "http:foo.com", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/foo.com", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/foo.com", "search": "", "hash": "" }, { "input": "\t :foo.com \n", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:foo.com", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:foo.com", "search": "", "hash": "" }, { "input": " foo.com ", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/foo.com", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/foo.com", "search": "", "hash": "" }, { "input": "a:\t foo.com", "base": "http://example.org/foo/bar", "href": "a: 
foo.com", "origin": "null", "protocol": "a:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": " foo.com", "search": "", "hash": "" }, { "input": "http://f:21/ b ? d # e ", "base": "http://example.org/foo/bar", "href": "http://f:21/%20b%20?%20d%20#%20e", "origin": "http://f:21", "protocol": "http:", "username": "", "password": "", "host": "f:21", "hostname": "f", "port": "21", "pathname": "/%20b%20", "search": "?%20d%20", "hash": "#%20e" }, { "input": "lolscheme:x x#x x", "base": "about:blank", "href": "lolscheme:x x#x%20x", "protocol": "lolscheme:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "x x", "search": "", "hash": "#x%20x" }, { "input": "http://f:/c", "base": "http://example.org/foo/bar", "href": "http://f/c", "origin": "http://f", "protocol": "http:", "username": "", "password": "", "host": "f", "hostname": "f", "port": "", "pathname": "/c", "search": "", "hash": "" }, { "input": "http://f:0/c", "base": "http://example.org/foo/bar", "href": "http://f:0/c", "origin": "http://f:0", "protocol": "http:", "username": "", "password": "", "host": "f:0", "hostname": "f", "port": "0", "pathname": "/c", "search": "", "hash": "" }, { "input": "http://f:00000000000000/c", "base": "http://example.org/foo/bar", "href": "http://f:0/c", "origin": "http://f:0", "protocol": "http:", "username": "", "password": "", "host": "f:0", "hostname": "f", "port": "0", "pathname": "/c", "search": "", "hash": "" }, { "input": "http://f:00000000000000000000080/c", "base": "http://example.org/foo/bar", "href": "http://f/c", "origin": "http://f", "protocol": "http:", "username": "", "password": "", "host": "f", "hostname": "f", "port": "", "pathname": "/c", "search": "", "hash": "" }, { "input": "http://f:b/c", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://f: /c", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://f:\n/c", "base": "http://example.org/foo/bar", "href": "http://f/c", "origin": "http://f", "protocol": "http:", "username": "", "password": "", "host": "f", "hostname": "f", "port": "", "pathname": "/c", "search": "", "hash": "" }, { "input": "http://f:fifty-two/c", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://f:999999/c", "base": "http://example.org/foo/bar", "failure": true }, { "input": "non-special://f:999999/c", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://f: 21 / b ? 
d # e ", "base": "http://example.org/foo/bar", "failure": true }, { "input": "", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "" }, { "input": " \t", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "" }, { "input": ":foo.com/", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:foo.com/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:foo.com/", "search": "", "hash": "" }, { "input": ":foo.com\\", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:foo.com/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:foo.com/", "search": "", "hash": "" }, { "input": ":", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:", "search": "", "hash": "" }, { "input": ":a", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:a", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:a", "search": "", "hash": "" }, { "input": ":/", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:/", "search": "", "hash": "" }, { "input": ":\\", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:/", "search": "", "hash": "" }, { "input": ":#", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:#", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:", "search": "", "hash": "" }, { "input": "#", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar#", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "" }, { "input": "#/", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar#/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "#/" }, { "input": "#\\", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar#\\", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", 
"pathname": "/foo/bar", "search": "", "hash": "#\\" }, { "input": "#;?", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar#;?", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "#;?" }, { "input": "?", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar?", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "" }, { "input": "/", "base": "http://example.org/foo/bar", "href": "http://example.org/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": ":23", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:23", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:23", "search": "", "hash": "" }, { "input": "/:23", "base": "http://example.org/foo/bar", "href": "http://example.org/:23", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/:23", "search": "", "hash": "" }, { "input": "::", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/::", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/::", "search": "", "hash": "" }, { "input": "::23", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/::23", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/::23", "search": "", "hash": "" }, { "input": "foo://", "base": "http://example.org/foo/bar", "href": "foo://", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "", "search": "", "hash": "" }, { "input": "http://a:b@c:29/d", "base": "http://example.org/foo/bar", "href": "http://a:b@c:29/d", "origin": "http://c:29", "protocol": "http:", "username": "a", "password": "b", "host": "c:29", "hostname": "c", "port": "29", "pathname": "/d", "search": "", "hash": "" }, { "input": "http::@c:29", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/:@c:29", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/:@c:29", "search": "", "hash": "" }, { "input": "http://&a:foo(b]c@d:2/", "base": "http://example.org/foo/bar", "href": "http://&a:foo(b%5Dc@d:2/", "origin": "http://d:2", "protocol": "http:", "username": "&a", "password": "foo(b%5Dc", "host": "d:2", "hostname": "d", "port": "2", "pathname": "/", "search": "", "hash": "" }, { "input": "http://::@c@d:2", "base": "http://example.org/foo/bar", "href": "http://:%3A%40c@d:2/", "origin": "http://d:2", "protocol": "http:", "username": "", "password": "%3A%40c", "host": "d:2", "hostname": "d", "port": "2", "pathname": "/", "search": "", "hash": "" }, { "input": "http://foo.com:b@d/", "base": "http://example.org/foo/bar", "href": 
"http://foo.com:b@d/", "origin": "http://d", "protocol": "http:", "username": "foo.com", "password": "b", "host": "d", "hostname": "d", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://foo.com/\\@", "base": "http://example.org/foo/bar", "href": "http://foo.com//@", "origin": "http://foo.com", "protocol": "http:", "username": "", "password": "", "host": "foo.com", "hostname": "foo.com", "port": "", "pathname": "//@", "search": "", "hash": "" }, { "input": "http:\\\\foo.com\\", "base": "http://example.org/foo/bar", "href": "http://foo.com/", "origin": "http://foo.com", "protocol": "http:", "username": "", "password": "", "host": "foo.com", "hostname": "foo.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:\\\\a\\b:c\\d@foo.com\\", "base": "http://example.org/foo/bar", "href": "http://a/b:c/d@foo.com/", "origin": "http://a", "protocol": "http:", "username": "", "password": "", "host": "a", "hostname": "a", "port": "", "pathname": "/b:c/d@foo.com/", "search": "", "hash": "" }, { "input": "foo:/", "base": "http://example.org/foo/bar", "href": "foo:/", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "foo:/bar.com/", "base": "http://example.org/foo/bar", "href": "foo:/bar.com/", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/bar.com/", "search": "", "hash": "" }, { "input": "foo://///////", "base": "http://example.org/foo/bar", "href": "foo://///////", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "///////", "search": "", "hash": "" }, { "input": "foo://///////bar.com/", "base": "http://example.org/foo/bar", "href": "foo://///////bar.com/", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "///////bar.com/", "search": "", "hash": "" }, { "input": "foo:////://///", "base": "http://example.org/foo/bar", "href": "foo:////://///", "origin": "null", "protocol": "foo:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "//://///", "search": "", "hash": "" }, { "input": "c:/foo", "base": "http://example.org/foo/bar", "href": "c:/foo", "origin": "null", "protocol": "c:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/foo", "search": "", "hash": "" }, { "input": "//foo/bar", "base": "http://example.org/foo/bar", "href": "http://foo/bar", "origin": "http://foo", "protocol": "http:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/bar", "search": "", "hash": "" }, { "input": "http://foo/path;a??e#f#g", "base": "http://example.org/foo/bar", "href": "http://foo/path;a??e#f#g", "origin": "http://foo", "protocol": "http:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/path;a", "search": "??e", "hash": "#f#g" }, { "input": "http://foo/abcd?efgh?ijkl", "base": "http://example.org/foo/bar", "href": "http://foo/abcd?efgh?ijkl", "origin": "http://foo", "protocol": "http:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/abcd", "search": "?efgh?ijkl", "hash": "" }, { "input": "http://foo/abcd#foo?bar", "base": "http://example.org/foo/bar", "href": "http://foo/abcd#foo?bar", "origin": "http://foo", "protocol": "http:", 
"username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/abcd", "search": "", "hash": "#foo?bar" }, { "input": "[61:24:74]:98", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/[61:24:74]:98", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/[61:24:74]:98", "search": "", "hash": "" }, { "input": "http:[61:27]/:foo", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/[61:27]/:foo", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/[61:27]/:foo", "search": "", "hash": "" }, { "input": "http://[1::2]:3:4", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://2001::1", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://2001::1]", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://2001::1]:80", "base": "http://example.org/foo/bar", "failure": true }, { "input": "http://[2001::1]", "base": "http://example.org/foo/bar", "href": "http://[2001::1]/", "origin": "http://[2001::1]", "protocol": "http:", "username": "", "password": "", "host": "[2001::1]", "hostname": "[2001::1]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://[::127.0.0.1]", "base": "http://example.org/foo/bar", "href": "http://[::7f00:1]/", "origin": "http://[::7f00:1]", "protocol": "http:", "username": "", "password": "", "host": "[::7f00:1]", "hostname": "[::7f00:1]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://[0:0:0:0:0:0:13.1.68.3]", "base": "http://example.org/foo/bar", "href": "http://[::d01:4403]/", "origin": "http://[::d01:4403]", "protocol": "http:", "username": "", "password": "", "host": "[::d01:4403]", "hostname": "[::d01:4403]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://[2001::1]:80", "base": "http://example.org/foo/bar", "href": "http://[2001::1]/", "origin": "http://[2001::1]", "protocol": "http:", "username": "", "password": "", "host": "[2001::1]", "hostname": "[2001::1]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/example.com/", "base": "http://example.org/foo/bar", "href": "http://example.org/example.com/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "ftp:/example.com/", "base": "http://example.org/foo/bar", "href": "ftp://example.com/", "origin": "ftp://example.com", "protocol": "ftp:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https:/example.com/", "base": "http://example.org/foo/bar", "href": "https://example.com/", "origin": "https://example.com", "protocol": "https:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "madeupscheme:/example.com/", "base": "http://example.org/foo/bar", "href": "madeupscheme:/example.com/", "origin": "null", "protocol": "madeupscheme:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "file:/example.com/", "base": 
"http://example.org/foo/bar", "href": "file:///example.com/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "file://example:1/", "base": "about:blank", "failure": true }, { "input": "file://example:test/", "base": "about:blank", "failure": true }, { "input": "file://example%/", "base": "about:blank", "failure": true }, { "input": "file://[example]/", "base": "about:blank", "failure": true }, { "input": "ftps:/example.com/", "base": "http://example.org/foo/bar", "href": "ftps:/example.com/", "origin": "null", "protocol": "ftps:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "gopher:/example.com/", "base": "http://example.org/foo/bar", "href": "gopher:/example.com/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "ws:/example.com/", "base": "http://example.org/foo/bar", "href": "ws://example.com/", "origin": "ws://example.com", "protocol": "ws:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "wss:/example.com/", "base": "http://example.org/foo/bar", "href": "wss://example.com/", "origin": "wss://example.com", "protocol": "wss:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "data:/example.com/", "base": "http://example.org/foo/bar", "href": "data:/example.com/", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "javascript:/example.com/", "base": "http://example.org/foo/bar", "href": "javascript:/example.com/", "origin": "null", "protocol": "javascript:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "mailto:/example.com/", "base": "http://example.org/foo/bar", "href": "mailto:/example.com/", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "http:example.com/", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/example.com/", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/example.com/", "search": "", "hash": "" }, { "input": "ftp:example.com/", "base": "http://example.org/foo/bar", "href": "ftp://example.com/", "origin": "ftp://example.com", "protocol": "ftp:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https:example.com/", "base": "http://example.org/foo/bar", "href": "https://example.com/", "origin": "https://example.com", "protocol": "https:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "madeupscheme:example.com/", "base": "http://example.org/foo/bar", "href": "madeupscheme:example.com/", "origin": "null", "protocol": "madeupscheme:", "username": "", 
"password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "ftps:example.com/", "base": "http://example.org/foo/bar", "href": "ftps:example.com/", "origin": "null", "protocol": "ftps:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "gopher:example.com/", "base": "http://example.org/foo/bar", "href": "gopher:example.com/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "ws:example.com/", "base": "http://example.org/foo/bar", "href": "ws://example.com/", "origin": "ws://example.com", "protocol": "ws:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "wss:example.com/", "base": "http://example.org/foo/bar", "href": "wss://example.com/", "origin": "wss://example.com", "protocol": "wss:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "data:example.com/", "base": "http://example.org/foo/bar", "href": "data:example.com/", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "javascript:example.com/", "base": "http://example.org/foo/bar", "href": "javascript:example.com/", "origin": "null", "protocol": "javascript:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "mailto:example.com/", "base": "http://example.org/foo/bar", "href": "mailto:example.com/", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "/a/b/c", "base": "http://example.org/foo/bar", "href": "http://example.org/a/b/c", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/a/b/c", "search": "", "hash": "" }, { "input": "/a/ /c", "base": "http://example.org/foo/bar", "href": "http://example.org/a/%20/c", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/a/%20/c", "search": "", "hash": "" }, { "input": "/a%2fc", "base": "http://example.org/foo/bar", "href": "http://example.org/a%2fc", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/a%2fc", "search": "", "hash": "" }, { "input": "/a/%2f/c", "base": "http://example.org/foo/bar", "href": "http://example.org/a/%2f/c", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/a/%2f/c", "search": "", "hash": "" }, { "input": "#β", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar#%CE%B2", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "hash": "#%CE%B2" }, { "input": 
"data:text/html,test#test", "base": "http://example.org/foo/bar", "href": "data:text/html,test#test", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "text/html,test", "search": "", "hash": "#test" }, { "input": "tel:1234567890", "base": "http://example.org/foo/bar", "href": "tel:1234567890", "origin": "null", "protocol": "tel:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "1234567890", "search": "", "hash": "" }, "# Based on https://felixfbecker.github.io/whatwg-url-custom-host-repro/", { "input": "ssh://example.com/foo/bar.git", "base": "http://example.org/", "href": "ssh://example.com/foo/bar.git", "origin": "null", "protocol": "ssh:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/bar.git", "search": "", "hash": "" }, "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/file.html", { "input": "file:c:\\foo\\bar.html", "base": "file:///tmp/mock/path", "href": "file:///c:/foo/bar.html", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/c:/foo/bar.html", "search": "", "hash": "" }, { "input": " File:c|////foo\\bar.html", "base": "file:///tmp/mock/path", "href": "file:///c:////foo/bar.html", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/c:////foo/bar.html", "search": "", "hash": "" }, { "input": "C|/foo/bar", "base": "file:///tmp/mock/path", "href": "file:///C:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/foo/bar", "search": "", "hash": "" }, { "input": "/C|\\foo\\bar", "base": "file:///tmp/mock/path", "href": "file:///C:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/foo/bar", "search": "", "hash": "" }, { "input": "//C|/foo/bar", "base": "file:///tmp/mock/path", "href": "file:///C:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/foo/bar", "search": "", "hash": "" }, { "input": "//server/file", "base": "file:///tmp/mock/path", "href": "file://server/file", "protocol": "file:", "username": "", "password": "", "host": "server", "hostname": "server", "port": "", "pathname": "/file", "search": "", "hash": "" }, { "input": "\\\\server\\file", "base": "file:///tmp/mock/path", "href": "file://server/file", "protocol": "file:", "username": "", "password": "", "host": "server", "hostname": "server", "port": "", "pathname": "/file", "search": "", "hash": "" }, { "input": "/\\server/file", "base": "file:///tmp/mock/path", "href": "file://server/file", "protocol": "file:", "username": "", "password": "", "host": "server", "hostname": "server", "port": "", "pathname": "/file", "search": "", "hash": "" }, { "input": "file:///foo/bar.txt", "base": "file:///tmp/mock/path", "href": "file:///foo/bar.txt", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/foo/bar.txt", "search": "", "hash": "" }, { "input": "file:///home/me", "base": "file:///tmp/mock/path", "href": "file:///home/me", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/home/me", "search": "", "hash": "" }, { "input": "//", "base": "file:///tmp/mock/path", "href": "file:///", "protocol": "file:", 
"username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "///", "base": "file:///tmp/mock/path", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "///test", "base": "file:///tmp/mock/path", "href": "file:///test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "", "hash": "" }, { "input": "file://test", "base": "file:///tmp/mock/path", "href": "file://test/", "protocol": "file:", "username": "", "password": "", "host": "test", "hostname": "test", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file://localhost", "base": "file:///tmp/mock/path", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file://localhost/", "base": "file:///tmp/mock/path", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file://localhost/test", "base": "file:///tmp/mock/path", "href": "file:///test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "", "hash": "" }, { "input": "test", "base": "file:///tmp/mock/path", "href": "file:///tmp/mock/test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/tmp/mock/test", "search": "", "hash": "" }, { "input": "file:test", "base": "file:///tmp/mock/path", "href": "file:///tmp/mock/test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/tmp/mock/test", "search": "", "hash": "" }, "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/script-tests/path.js", { "input": "http://example.com/././foo", "base": "about:blank", "href": "http://example.com/foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo", "search": "", "hash": "" }, { "input": "http://example.com/./.foo", "base": "about:blank", "href": "http://example.com/.foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/.foo", "search": "", "hash": "" }, { "input": "http://example.com/foo/.", "base": "about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/./", "base": "about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar/..", "base": "about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar/../", "base": 
"about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/..bar", "base": "about:blank", "href": "http://example.com/foo/..bar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/..bar", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar/../ton", "base": "about:blank", "href": "http://example.com/foo/ton", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/ton", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar/../ton/../../a", "base": "about:blank", "href": "http://example.com/a", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/a", "search": "", "hash": "" }, { "input": "http://example.com/foo/../../..", "base": "about:blank", "href": "http://example.com/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://example.com/foo/../../../ton", "base": "about:blank", "href": "http://example.com/ton", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/ton", "search": "", "hash": "" }, { "input": "http://example.com/foo/%2e", "base": "about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/%2e%2", "base": "about:blank", "href": "http://example.com/foo/%2e%2", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/%2e%2", "search": "", "hash": "" }, { "input": "http://example.com/foo/%2e./%2e%2e/.%2e/%2e.bar", "base": "about:blank", "href": "http://example.com/%2e.bar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%2e.bar", "search": "", "hash": "" }, { "input": "http://example.com////../..", "base": "about:blank", "href": "http://example.com//", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "//", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar//../..", "base": "about:blank", "href": "http://example.com/foo/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/", "search": "", "hash": "" }, { "input": "http://example.com/foo/bar//..", "base": "about:blank", "href": "http://example.com/foo/bar/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo/bar/", 
"search": "", "hash": "" }, { "input": "http://example.com/foo", "base": "about:blank", "href": "http://example.com/foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo", "search": "", "hash": "" }, { "input": "http://example.com/%20foo", "base": "about:blank", "href": "http://example.com/%20foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%20foo", "search": "", "hash": "" }, { "input": "http://example.com/foo%", "base": "about:blank", "href": "http://example.com/foo%", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%", "search": "", "hash": "" }, { "input": "http://example.com/foo%2", "base": "about:blank", "href": "http://example.com/foo%2", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%2", "search": "", "hash": "" }, { "input": "http://example.com/foo%2zbar", "base": "about:blank", "href": "http://example.com/foo%2zbar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%2zbar", "search": "", "hash": "" }, { "input": "http://example.com/foo%2©zbar", "base": "about:blank", "href": "http://example.com/foo%2%C3%82%C2%A9zbar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%2%C3%82%C2%A9zbar", "search": "", "hash": "" }, { "input": "http://example.com/foo%41%7a", "base": "about:blank", "href": "http://example.com/foo%41%7a", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%41%7a", "search": "", "hash": "" }, { "input": "http://example.com/foo\t\u0091%91", "base": "about:blank", "href": "http://example.com/foo%C2%91%91", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%C2%91%91", "search": "", "hash": "" }, { "input": "http://example.com/foo%00%51", "base": "about:blank", "href": "http://example.com/foo%00%51", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foo%00%51", "search": "", "hash": "" }, { "input": "http://example.com/(%28:%3A%29)", "base": "about:blank", "href": "http://example.com/(%28:%3A%29)", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/(%28:%3A%29)", "search": "", "hash": "" }, { "input": "http://example.com/%3A%3a%3C%3c", "base": "about:blank", "href": "http://example.com/%3A%3a%3C%3c", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%3A%3a%3C%3c", "search": "", "hash": "" }, { "input": "http://example.com/foo\tbar", "base": "about:blank", "href": "http://example.com/foobar", "origin": "http://example.com", "protocol": 
"http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/foobar", "search": "", "hash": "" }, { "input": "http://example.com\\\\foo\\\\bar", "base": "about:blank", "href": "http://example.com//foo//bar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "//foo//bar", "search": "", "hash": "" }, { "input": "http://example.com/%7Ffp3%3Eju%3Dduvgw%3Dd", "base": "about:blank", "href": "http://example.com/%7Ffp3%3Eju%3Dduvgw%3Dd", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%7Ffp3%3Eju%3Dduvgw%3Dd", "search": "", "hash": "" }, { "input": "http://example.com/@asdf%40", "base": "about:blank", "href": "http://example.com/@asdf%40", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/@asdf%40", "search": "", "hash": "" }, { "input": "http://example.com/你好你好", "base": "about:blank", "href": "http://example.com/%E4%BD%A0%E5%A5%BD%E4%BD%A0%E5%A5%BD", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%E4%BD%A0%E5%A5%BD%E4%BD%A0%E5%A5%BD", "search": "", "hash": "" }, { "input": "http://example.com/‥/foo", "base": "about:blank", "href": "http://example.com/%E2%80%A5/foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%E2%80%A5/foo", "search": "", "hash": "" }, { "input": "http://example.com//foo", "base": "about:blank", "href": "http://example.com/%EF%BB%BF/foo", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%EF%BB%BF/foo", "search": "", "hash": "" }, { "input": "http://example.com/‮/foo/‭/bar", "base": "about:blank", "href": "http://example.com/%E2%80%AE/foo/%E2%80%AD/bar", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/%E2%80%AE/foo/%E2%80%AD/bar", "search": "", "hash": "" }, "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/script-tests/relative.js", { "input": "http://www.google.com/foo?bar=baz#", "base": "about:blank", "href": "http://www.google.com/foo?bar=baz#", "origin": "http://www.google.com", "protocol": "http:", "username": "", "password": "", "host": "www.google.com", "hostname": "www.google.com", "port": "", "pathname": "/foo", "search": "?bar=baz", "hash": "" }, { "input": "http://www.google.com/foo?bar=baz# »", "base": "about:blank", "href": "http://www.google.com/foo?bar=baz#%20%C2%BB", "origin": "http://www.google.com", "protocol": "http:", "username": "", "password": "", "host": "www.google.com", "hostname": "www.google.com", "port": "", "pathname": "/foo", "search": "?bar=baz", "hash": "#%20%C2%BB" }, { "input": "data:test# »", "base": "about:blank", "href": "data:test#%20%C2%BB", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "test", "search": "", "hash": "#%20%C2%BB" }, { "input": "http://www.google.com", "base": "about:blank", "href": 
"http://www.google.com/", "origin": "http://www.google.com", "protocol": "http:", "username": "", "password": "", "host": "www.google.com", "hostname": "www.google.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://192.0x00A80001", "base": "about:blank", "href": "http://192.168.0.1/", "origin": "http://192.168.0.1", "protocol": "http:", "username": "", "password": "", "host": "192.168.0.1", "hostname": "192.168.0.1", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://www/foo%2Ehtml", "base": "about:blank", "href": "http://www/foo%2Ehtml", "origin": "http://www", "protocol": "http:", "username": "", "password": "", "host": "www", "hostname": "www", "port": "", "pathname": "/foo%2Ehtml", "search": "", "hash": "" }, { "input": "http://www/foo/%2E/html", "base": "about:blank", "href": "http://www/foo/html", "origin": "http://www", "protocol": "http:", "username": "", "password": "", "host": "www", "hostname": "www", "port": "", "pathname": "/foo/html", "search": "", "hash": "" }, { "input": "http://user:pass@/", "base": "about:blank", "failure": true }, { "input": "http://%25DOMAIN:foobar@foodomain.com/", "base": "about:blank", "href": "http://%25DOMAIN:foobar@foodomain.com/", "origin": "http://foodomain.com", "protocol": "http:", "username": "%25DOMAIN", "password": "foobar", "host": "foodomain.com", "hostname": "foodomain.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:\\\\www.google.com\\foo", "base": "about:blank", "href": "http://www.google.com/foo", "origin": "http://www.google.com", "protocol": "http:", "username": "", "password": "", "host": "www.google.com", "hostname": "www.google.com", "port": "", "pathname": "/foo", "search": "", "hash": "" }, { "input": "http://foo:80/", "base": "about:blank", "href": "http://foo/", "origin": "http://foo", "protocol": "http:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://foo:81/", "base": "about:blank", "href": "http://foo:81/", "origin": "http://foo:81", "protocol": "http:", "username": "", "password": "", "host": "foo:81", "hostname": "foo", "port": "81", "pathname": "/", "search": "", "hash": "" }, { "input": "httpa://foo:80/", "base": "about:blank", "href": "httpa://foo:80/", "origin": "null", "protocol": "httpa:", "username": "", "password": "", "host": "foo:80", "hostname": "foo", "port": "80", "pathname": "/", "search": "", "hash": "" }, { "input": "http://foo:-80/", "base": "about:blank", "failure": true }, { "input": "https://foo:443/", "base": "about:blank", "href": "https://foo/", "origin": "https://foo", "protocol": "https:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https://foo:80/", "base": "about:blank", "href": "https://foo:80/", "origin": "https://foo:80", "protocol": "https:", "username": "", "password": "", "host": "foo:80", "hostname": "foo", "port": "80", "pathname": "/", "search": "", "hash": "" }, { "input": "ftp://foo:21/", "base": "about:blank", "href": "ftp://foo/", "origin": "ftp://foo", "protocol": "ftp:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "ftp://foo:80/", "base": "about:blank", "href": "ftp://foo:80/", "origin": "ftp://foo:80", "protocol": "ftp:", "username": "", "password": "", "host": "foo:80", "hostname": "foo", "port": "80", "pathname": "/", 
"search": "", "hash": "" }, { "input": "gopher://foo:70/", "base": "about:blank", "href": "gopher://foo:70/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "foo:70", "hostname": "foo", "port": "70", "pathname": "/", "search": "", "hash": "" }, { "input": "gopher://foo:443/", "base": "about:blank", "href": "gopher://foo:443/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "foo:443", "hostname": "foo", "port": "443", "pathname": "/", "search": "", "hash": "" }, { "input": "ws://foo:80/", "base": "about:blank", "href": "ws://foo/", "origin": "ws://foo", "protocol": "ws:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "ws://foo:81/", "base": "about:blank", "href": "ws://foo:81/", "origin": "ws://foo:81", "protocol": "ws:", "username": "", "password": "", "host": "foo:81", "hostname": "foo", "port": "81", "pathname": "/", "search": "", "hash": "" }, { "input": "ws://foo:443/", "base": "about:blank", "href": "ws://foo:443/", "origin": "ws://foo:443", "protocol": "ws:", "username": "", "password": "", "host": "foo:443", "hostname": "foo", "port": "443", "pathname": "/", "search": "", "hash": "" }, { "input": "ws://foo:815/", "base": "about:blank", "href": "ws://foo:815/", "origin": "ws://foo:815", "protocol": "ws:", "username": "", "password": "", "host": "foo:815", "hostname": "foo", "port": "815", "pathname": "/", "search": "", "hash": "" }, { "input": "wss://foo:80/", "base": "about:blank", "href": "wss://foo:80/", "origin": "wss://foo:80", "protocol": "wss:", "username": "", "password": "", "host": "foo:80", "hostname": "foo", "port": "80", "pathname": "/", "search": "", "hash": "" }, { "input": "wss://foo:81/", "base": "about:blank", "href": "wss://foo:81/", "origin": "wss://foo:81", "protocol": "wss:", "username": "", "password": "", "host": "foo:81", "hostname": "foo", "port": "81", "pathname": "/", "search": "", "hash": "" }, { "input": "wss://foo:443/", "base": "about:blank", "href": "wss://foo/", "origin": "wss://foo", "protocol": "wss:", "username": "", "password": "", "host": "foo", "hostname": "foo", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "wss://foo:815/", "base": "about:blank", "href": "wss://foo:815/", "origin": "wss://foo:815", "protocol": "wss:", "username": "", "password": "", "host": "foo:815", "hostname": "foo", "port": "815", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/example.com/", "base": "about:blank", "href": "http://example.com/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "ftp:/example.com/", "base": "about:blank", "href": "ftp://example.com/", "origin": "ftp://example.com", "protocol": "ftp:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https:/example.com/", "base": "about:blank", "href": "https://example.com/", "origin": "https://example.com", "protocol": "https:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "madeupscheme:/example.com/", "base": "about:blank", "href": "madeupscheme:/example.com/", "origin": "null", "protocol": "madeupscheme:", "username": "", "password": "", "host": "", "hostname": 
"", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "file:/example.com/", "base": "about:blank", "href": "file:///example.com/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "ftps:/example.com/", "base": "about:blank", "href": "ftps:/example.com/", "origin": "null", "protocol": "ftps:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "gopher:/example.com/", "base": "about:blank", "href": "gopher:/example.com/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "ws:/example.com/", "base": "about:blank", "href": "ws://example.com/", "origin": "ws://example.com", "protocol": "ws:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "wss:/example.com/", "base": "about:blank", "href": "wss://example.com/", "origin": "wss://example.com", "protocol": "wss:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "data:/example.com/", "base": "about:blank", "href": "data:/example.com/", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "javascript:/example.com/", "base": "about:blank", "href": "javascript:/example.com/", "origin": "null", "protocol": "javascript:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "mailto:/example.com/", "base": "about:blank", "href": "mailto:/example.com/", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/example.com/", "search": "", "hash": "" }, { "input": "http:example.com/", "base": "about:blank", "href": "http://example.com/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "ftp:example.com/", "base": "about:blank", "href": "ftp://example.com/", "origin": "ftp://example.com", "protocol": "ftp:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https:example.com/", "base": "about:blank", "href": "https://example.com/", "origin": "https://example.com", "protocol": "https:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "madeupscheme:example.com/", "base": "about:blank", "href": "madeupscheme:example.com/", "origin": "null", "protocol": "madeupscheme:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "ftps:example.com/", "base": "about:blank", "href": "ftps:example.com/", "origin": "null", "protocol": "ftps:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "gopher:example.com/", 
"base": "about:blank", "href": "gopher:example.com/", "origin": "null", "protocol": "gopher:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "ws:example.com/", "base": "about:blank", "href": "ws://example.com/", "origin": "ws://example.com", "protocol": "ws:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "wss:example.com/", "base": "about:blank", "href": "wss://example.com/", "origin": "wss://example.com", "protocol": "wss:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "data:example.com/", "base": "about:blank", "href": "data:example.com/", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "javascript:example.com/", "base": "about:blank", "href": "javascript:example.com/", "origin": "null", "protocol": "javascript:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, { "input": "mailto:example.com/", "base": "about:blank", "href": "mailto:example.com/", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "example.com/", "search": "", "hash": "" }, "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/segments-userinfo-vs-host.html", { "input": "http:@www.example.com", "base": "about:blank", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/@www.example.com", "base": "about:blank", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://@www.example.com", "base": "about:blank", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:a:b@www.example.com", "base": "about:blank", "href": "http://a:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/a:b@www.example.com", "base": "about:blank", "href": "http://a:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://a:b@www.example.com", "base": "about:blank", "href": "http://a:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://@pple.com", "base": "about:blank", "href": "http://pple.com/", 
"origin": "http://pple.com", "protocol": "http:", "username": "", "password": "", "host": "pple.com", "hostname": "pple.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http::b@www.example.com", "base": "about:blank", "href": "http://:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/:b@www.example.com", "base": "about:blank", "href": "http://:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://:b@www.example.com", "base": "about:blank", "href": "http://:b@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "b", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/:@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http://user@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http:@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http:/@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http://@/www.example.com", "base": "about:blank", "failure": true }, { "input": "https:@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http:a:b@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http:/a:b@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http://a:b@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http::@/www.example.com", "base": "about:blank", "failure": true }, { "input": "http:a:@www.example.com", "base": "about:blank", "href": "http://a@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:/a:@www.example.com", "base": "about:blank", "href": "http://a@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://a:@www.example.com", "base": "about:blank", "href": "http://a@www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "a", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://www.@pple.com", "base": "about:blank", "href": "http://www.@pple.com/", "origin": "http://pple.com", "protocol": "http:", "username": "www.", "password": "", "host": "pple.com", "hostname": "pple.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http:@:www.example.com", "base": "about:blank", "failure": true }, { "input": "http:/@:www.example.com", "base": "about:blank", "failure": true }, { "input": "http://@:www.example.com", "base": "about:blank", "failure": true }, { "input": "http://:@www.example.com", "base": "about:blank", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": 
"www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, "# Others", { "input": "/", "base": "http://www.example.com/test", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "/test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/test.txt", "search": "", "hash": "" }, { "input": ".", "base": "http://www.example.com/test", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "..", "base": "http://www.example.com/test", "href": "http://www.example.com/", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/test.txt", "search": "", "hash": "" }, { "input": "./test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/test.txt", "search": "", "hash": "" }, { "input": "../test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/test.txt", "search": "", "hash": "" }, { "input": "../aaa/test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/aaa/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/aaa/test.txt", "search": "", "hash": "" }, { "input": "../../test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/test.txt", "search": "", "hash": "" }, { "input": "中/test.txt", "base": "http://www.example.com/test", "href": "http://www.example.com/%E4%B8%AD/test.txt", "origin": "http://www.example.com", "protocol": "http:", "username": "", "password": "", "host": "www.example.com", "hostname": "www.example.com", "port": "", "pathname": "/%E4%B8%AD/test.txt", "search": "", "hash": "" }, { "input": "http://www.example2.com", "base": "http://www.example.com/test", "href": "http://www.example2.com/", "origin": "http://www.example2.com", "protocol": "http:", "username": "", "password": "", "host": "www.example2.com", "hostname": "www.example2.com", "port": "", 
"pathname": "/", "search": "", "hash": "" }, { "input": "//www.example2.com", "base": "http://www.example.com/test", "href": "http://www.example2.com/", "origin": "http://www.example2.com", "protocol": "http:", "username": "", "password": "", "host": "www.example2.com", "hostname": "www.example2.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:...", "base": "http://www.example.com/test", "href": "file:///...", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/...", "search": "", "hash": "" }, { "input": "file:..", "base": "http://www.example.com/test", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:a", "base": "http://www.example.com/test", "href": "file:///a", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/a", "search": "", "hash": "" }, "# Based on http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/host.html", "Basic canonicalization, uppercase should be converted to lowercase", { "input": "http://ExAmPlE.CoM", "base": "http://other.com/", "href": "http://example.com/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://example example.com", "base": "http://other.com/", "failure": true }, { "input": "http://Goo%20 goo%7C|.com", "base": "http://other.com/", "failure": true }, { "input": "http://[]", "base": "http://other.com/", "failure": true }, { "input": "http://[:]", "base": "http://other.com/", "failure": true }, "U+3000 is mapped to U+0020 (space) which is disallowed", { "input": "http://GOO\u00a0\u3000goo.com", "base": "http://other.com/", "failure": true }, "Other types of space (no-break, zero-width, zero-width-no-break) are name-prepped away to nothing. U+200B, U+2060, and U+FEFF, are ignored", { "input": "http://GOO\u200b\u2060\ufeffgoo.com", "base": "http://other.com/", "href": "http://googoo.com/", "origin": "http://googoo.com", "protocol": "http:", "username": "", "password": "", "host": "googoo.com", "hostname": "googoo.com", "port": "", "pathname": "/", "search": "", "hash": "" }, "Leading and trailing C0 control or space", { "input": "\u0000\u001b\u0004\u0012 http://example.com/\u001f \u000d ", "base": "about:blank", "href": "http://example.com/", "origin": "http://example.com", "protocol": "http:", "username": "", "password": "", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/", "search": "", "hash": "" }, "Ideographic full stop (full-width period for Chinese, etc.) should be treated as a dot. U+3002 is mapped to U+002E (dot)", { "input": "http://www.foo。bar.com", "base": "http://other.com/", "href": "http://www.foo.bar.com/", "origin": "http://www.foo.bar.com", "protocol": "http:", "username": "", "password": "", "host": "www.foo.bar.com", "hostname": "www.foo.bar.com", "port": "", "pathname": "/", "search": "", "hash": "" }, "Invalid unicode characters should fail... 
U+FDD0 is disallowed; %ef%b7%90 is U+FDD0", { "input": "http://\ufdd0zyx.com", "base": "http://other.com/", "failure": true }, "This is the same as previous but escaped", { "input": "http://%ef%b7%90zyx.com", "base": "http://other.com/", "failure": true }, "U+FFFD", { "input": "https://\ufffd", "base": "about:blank", "failure": true }, { "input": "https://%EF%BF%BD", "base": "about:blank", "failure": true }, { "input": "https://x/\ufffd?\ufffd#\ufffd", "base": "about:blank", "href": "https://x/%EF%BF%BD?%EF%BF%BD#%EF%BF%BD", "origin": "https://x", "protocol": "https:", "username": "", "password": "", "host": "x", "hostname": "x", "port": "", "pathname": "/%EF%BF%BD", "search": "?%EF%BF%BD", "hash": "#%EF%BF%BD" }, "Test name prepping, fullwidth input should be converted to ASCII and NOT IDN-ized. This is 'Go' in fullwidth UTF-8/UTF-16.", { "input": "http://Ｇｏ.com", "base": "http://other.com/", "href": "http://go.com/", "origin": "http://go.com", "protocol": "http:", "username": "", "password": "", "host": "go.com", "hostname": "go.com", "port": "", "pathname": "/", "search": "", "hash": "" }, "URL spec forbids the following. https://www.w3.org/Bugs/Public/show_bug.cgi?id=24257", { "input": "http://%41.com", "base": "http://other.com/", "failure": true }, { "input": "http://%ef%bc%85%ef%bc%94%ef%bc%91.com", "base": "http://other.com/", "failure": true }, "...%00 in fullwidth should fail (also as escaped UTF-8 input)", { "input": "http://％００.com", "base": "http://other.com/", "failure": true }, { "input": "http://%ef%bc%85%ef%bc%90%ef%bc%90.com", "base": "http://other.com/", "failure": true }, "Basic IDN support, UTF-8 and UTF-16 input should be converted to IDN", { "input": "http://你好你好", "base": "http://other.com/", "href": "http://xn--6qqa088eba/", "origin": "http://xn--6qqa088eba", "protocol": "http:", "username": "", "password": "", "host": "xn--6qqa088eba", "hostname": "xn--6qqa088eba", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https://faß.ExAmPlE/", "base": "about:blank", "href": "https://xn--fa-hia.example/", "origin": "https://xn--fa-hia.example", "protocol": "https:", "username": "", "password": "", "host": "xn--fa-hia.example", "hostname": "xn--fa-hia.example", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "sc://faß.ExAmPlE/", "base": "about:blank", "href": "sc://fa%C3%9F.ExAmPlE/", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "fa%C3%9F.ExAmPlE", "hostname": "fa%C3%9F.ExAmPlE", "port": "", "pathname": "/", "search": "", "hash": "" }, "Invalid escaped characters should fail and the percents should be escaped. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24191", { "input": "http://%zz%66%a.com", "base": "http://other.com/", "failure": true }, "If we get an invalid character that has been escaped.", { "input": "http://%25", "base": "http://other.com/", "failure": true }, { "input": "http://hello%00", "base": "http://other.com/", "failure": true }, "Escaped numbers should be treated like IP addresses if they are.", { "input": "http://%30%78%63%30%2e%30%32%35%30.01", "base": "http://other.com/", "href": "http://192.168.0.1/", "origin": "http://192.168.0.1", "protocol": "http:", "username": "", "password": "", "host": "192.168.0.1", "hostname": "192.168.0.1", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://%30%78%63%30%2e%30%32%35%30.01%2e", "base": "http://other.com/", "href": "http://192.168.0.1/", "origin": "http://192.168.0.1", "protocol": "http:", "username": "", "password": "", "host": "192.168.0.1", "hostname": "192.168.0.1", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://192.168.0.257", "base": "http://other.com/", "failure": true }, "Invalid escaping in hosts causes failure", { "input": "http://%3g%78%63%30%2e%30%32%35%30%2E.01", "base": "http://other.com/", "failure": true }, "A space in a host causes failure", { "input": "http://192.168.0.1 hello", "base": "http://other.com/", "failure": true }, { "input": "https://x x:12", "base": "about:blank", "failure": true }, "Fullwidth and escaped UTF-8 fullwidth should still be treated as IP", { "input": "http://０Ｘｃ０．０２５０．０１", "base": "http://other.com/", "href": "http://192.168.0.1/", "origin": "http://192.168.0.1", "protocol": "http:", "username": "", "password": "", "host": "192.168.0.1", "hostname": "192.168.0.1", "port": "", "pathname": "/", "search": "", "hash": "" }, "Domains with empty labels", { "input": "http://./", "base": "about:blank", "href": "http://./", "origin": "http://.", "protocol": "http:", "username": "", "password": "", "host": ".", "hostname": ".", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://../", "base": "about:blank", "href": "http://../", "origin": "http://..", "protocol": "http:", "username": "", "password": "", "host": "..", "hostname": "..", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://0..0x300/", "base": "about:blank", "href": "http://0..0x300/", "origin": "http://0..0x300", "protocol": "http:", "username": "", "password": "", "host": "0..0x300", "hostname": "0..0x300", "port": "", "pathname": "/", "search": "", "hash": "" }, "Broken IPv6", { "input": "http://[www.google.com]/", "base": "about:blank", "failure": true }, { "input": "http://[google.com]", "base": "http://other.com/", "failure": true }, { "input": "http://[::1.2.3.4x]", "base": "http://other.com/", "failure": true }, { "input": "http://[::1.2.3.]", "base": "http://other.com/", "failure": true }, { "input": "http://[::1.2.]", "base": "http://other.com/", "failure": true }, { "input": "http://[::1.]", "base": "http://other.com/", "failure": true }, "Misc Unicode", { "input": "http://foo:💩@example.com/bar", "base": "http://other.com/", "href": "http://foo:%F0%9F%92%A9@example.com/bar", "origin": "http://example.com", "protocol": "http:", "username": "foo", "password": "%F0%9F%92%A9", "host": "example.com", "hostname": "example.com", "port": "", "pathname": "/bar", "search": "", "hash": "" }, "# resolving a fragment against any scheme succeeds", { "input": "#", "base": "test:test", "href": "test:test#", "origin": "null", 
"protocol": "test:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "test", "search": "", "hash": "" }, { "input": "#x", "base": "mailto:x@x.com", "href": "mailto:x@x.com#x", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "x@x.com", "search": "", "hash": "#x" }, { "input": "#x", "base": "data:,", "href": "data:,#x", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": ",", "search": "", "hash": "#x" }, { "input": "#x", "base": "about:blank", "href": "about:blank#x", "origin": "null", "protocol": "about:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "blank", "search": "", "hash": "#x" }, { "input": "#", "base": "test:test?test", "href": "test:test?test#", "origin": "null", "protocol": "test:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "test", "search": "?test", "hash": "" }, "# multiple @ in authority state", { "input": "https://@test@test@example:800/", "base": "http://doesnotmatter/", "href": "https://%40test%40test@example:800/", "origin": "https://example:800", "protocol": "https:", "username": "%40test%40test", "password": "", "host": "example:800", "hostname": "example", "port": "800", "pathname": "/", "search": "", "hash": "" }, { "input": "https://@@@example", "base": "http://doesnotmatter/", "href": "https://%40%40@example/", "origin": "https://example", "protocol": "https:", "username": "%40%40", "password": "", "host": "example", "hostname": "example", "port": "", "pathname": "/", "search": "", "hash": "" }, "non-az-09 characters", { "input": "http://`{}:`{}@h/`{}?`{}", "base": "http://doesnotmatter/", "href": "http://%60%7B%7D:%60%7B%7D@h/%60%7B%7D?`{}", "origin": "http://h", "protocol": "http:", "username": "%60%7B%7D", "password": "%60%7B%7D", "host": "h", "hostname": "h", "port": "", "pathname": "/%60%7B%7D", "search": "?`{}", "hash": "" }, "byte is ' and url is special", { "input": "http://host/?'", "base": "about:blank", "href": "http://host/?%27", "origin": "http://host", "protocol": "http:", "username": "", "password": "", "host": "host", "hostname": "host", "port": "", "pathname": "/", "search": "?%27", "hash": "" }, { "input": "notspecial://host/?'", "base": "about:blank", "href": "notspecial://host/?'", "origin": "null", "protocol": "notspecial:", "username": "", "password": "", "host": "host", "hostname": "host", "port": "", "pathname": "/", "search": "?'", "hash": "" }, "# Credentials in base", { "input": "/some/path", "base": "http://user@example.org/smth", "href": "http://user@example.org/some/path", "origin": "http://example.org", "protocol": "http:", "username": "user", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/some/path", "search": "", "hash": "" }, { "input": "", "base": "http://user:pass@example.org:21/smth", "href": "http://user:pass@example.org:21/smth", "origin": "http://example.org:21", "protocol": "http:", "username": "user", "password": "pass", "host": "example.org:21", "hostname": "example.org", "port": "21", "pathname": "/smth", "search": "", "hash": "" }, { "input": "/some/path", "base": "http://user:pass@example.org:21/smth", "href": "http://user:pass@example.org:21/some/path", "origin": "http://example.org:21", "protocol": "http:", "username": "user", "password": "pass", "host": "example.org:21", "hostname": "example.org", "port": "21", 
"pathname": "/some/path", "search": "", "hash": "" }, "# a set of tests designed by zcorpan for relative URLs with unknown schemes", { "input": "i", "base": "sc:sd", "failure": true }, { "input": "i", "base": "sc:sd/sd", "failure": true }, { "input": "i", "base": "sc:/pa/pa", "href": "sc:/pa/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/i", "search": "", "hash": "" }, { "input": "i", "base": "sc://ho/pa", "href": "sc://ho/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "ho", "hostname": "ho", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "i", "base": "sc:///pa/pa", "href": "sc:///pa/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/i", "search": "", "hash": "" }, { "input": "../i", "base": "sc:sd", "failure": true }, { "input": "../i", "base": "sc:sd/sd", "failure": true }, { "input": "../i", "base": "sc:/pa/pa", "href": "sc:/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "../i", "base": "sc://ho/pa", "href": "sc://ho/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "ho", "hostname": "ho", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "../i", "base": "sc:///pa/pa", "href": "sc:///i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "/i", "base": "sc:sd", "failure": true }, { "input": "/i", "base": "sc:sd/sd", "failure": true }, { "input": "/i", "base": "sc:/pa/pa", "href": "sc:/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "/i", "base": "sc://ho/pa", "href": "sc://ho/i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "ho", "hostname": "ho", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "/i", "base": "sc:///pa/pa", "href": "sc:///i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/i", "search": "", "hash": "" }, { "input": "?i", "base": "sc:sd", "failure": true }, { "input": "?i", "base": "sc:sd/sd", "failure": true }, { "input": "?i", "base": "sc:/pa/pa", "href": "sc:/pa/pa?i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/pa", "search": "?i", "hash": "" }, { "input": "?i", "base": "sc://ho/pa", "href": "sc://ho/pa?i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "ho", "hostname": "ho", "port": "", "pathname": "/pa", "search": "?i", "hash": "" }, { "input": "?i", "base": "sc:///pa/pa", "href": "sc:///pa/pa?i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/pa", "search": "?i", "hash": "" }, { "input": "#i", "base": "sc:sd", "href": "sc:sd#i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "sd", "search": "", "hash": "#i" }, { "input": "#i", "base": "sc:sd/sd", "href": "sc:sd/sd#i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", 
"port": "", "pathname": "sd/sd", "search": "", "hash": "#i" }, { "input": "#i", "base": "sc:/pa/pa", "href": "sc:/pa/pa#i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/pa", "search": "", "hash": "#i" }, { "input": "#i", "base": "sc://ho/pa", "href": "sc://ho/pa#i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "ho", "hostname": "ho", "port": "", "pathname": "/pa", "search": "", "hash": "#i" }, { "input": "#i", "base": "sc:///pa/pa", "href": "sc:///pa/pa#i", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pa/pa", "search": "", "hash": "#i" }, "# make sure that relative URL logic works on known typically non-relative schemes too", { "input": "about:/../", "base": "about:blank", "href": "about:/", "origin": "null", "protocol": "about:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "data:/../", "base": "about:blank", "href": "data:/", "origin": "null", "protocol": "data:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "javascript:/../", "base": "about:blank", "href": "javascript:/", "origin": "null", "protocol": "javascript:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "mailto:/../", "base": "about:blank", "href": "mailto:/", "origin": "null", "protocol": "mailto:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, "# unknown schemes and their hosts", { "input": "sc://ñ.test/", "base": "about:blank", "href": "sc://%C3%B1.test/", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1.test", "hostname": "%C3%B1.test", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "sc://\u0000/", "base": "about:blank", "failure": true }, { "input": "sc:// /", "base": "about:blank", "failure": true }, { "input": "sc://%/", "base": "about:blank", "href": "sc://%/", "protocol": "sc:", "username": "", "password": "", "host": "%", "hostname": "%", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "sc://@/", "base": "about:blank", "failure": true }, { "input": "sc://te@s:t@/", "base": "about:blank", "failure": true }, { "input": "sc://:/", "base": "about:blank", "failure": true }, { "input": "sc://:12/", "base": "about:blank", "failure": true }, { "input": "sc://[/", "base": "about:blank", "failure": true }, { "input": "sc://\\/", "base": "about:blank", "failure": true }, { "input": "sc://]/", "base": "about:blank", "failure": true }, { "input": "x", "base": "sc://ñ", "href": "sc://%C3%B1/x", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "/x", "search": "", "hash": "" }, "# unknown schemes and backslashes", { "input": "sc:\\../", "base": "about:blank", "href": "sc:\\../", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "\\../", "search": "", "hash": "" }, "# unknown scheme with path looking like a password", { "input": "sc::a@example.net", "base": "about:blank", "href": "sc::a@example.net", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", 
"pathname": ":a@example.net", "search": "", "hash": "" }, "# unknown scheme with bogus percent-encoding", { "input": "wow:%NBD", "base": "about:blank", "href": "wow:%NBD", "origin": "null", "protocol": "wow:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "%NBD", "search": "", "hash": "" }, { "input": "wow:%1G", "base": "about:blank", "href": "wow:%1G", "origin": "null", "protocol": "wow:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "%1G", "search": "", "hash": "" }, "# unknown scheme with non-URL characters in the path", { "input": "wow:\uFFFF", "base": "about:blank", "href": "wow:%EF%BF%BF", "origin": "null", "protocol": "wow:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "%EF%BF%BF", "search": "", "hash": "" }, "Forbidden host code points", { "input": "http://ab", "base": "about:blank", "failure": true }, { "input": "http://a^b", "base": "about:blank", "failure": true }, { "input": "non-special://ab", "base": "about:blank", "failure": true }, { "input": "non-special://a^b", "base": "about:blank", "failure": true }, "Allowed host code points", { "input": "http://\u001F!\"$&'()*+,-.;=_`{|}~/", "base": "about:blank", "href": "http://\u001F!\"$&'()*+,-.;=_`{|}~/", "origin": "http://\u001F!\"$&'()*+,-.;=_`{|}~", "protocol": "http:", "username": "", "password": "", "host": "\u001F!\"$&'()*+,-.;=_`{|}~", "hostname": "\u001F!\"$&'()*+,-.;=_`{|}~", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "sc://\u001F!\"$&'()*+,-.;=_`{|}~/", "base": "about:blank", "href": "sc://%1F!\"$&'()*+,-.;=_`{|}~/", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%1F!\"$&'()*+,-.;=_`{|}~", "hostname": "%1F!\"$&'()*+,-.;=_`{|}~", "port": "", "pathname": "/", "search": "", "hash": "" }, "# Hosts and percent-encoding", { "input": "ftp://example.com%80/", "base": "about:blank", "failure": true }, { "input": "ftp://example.com%A0/", "base": "about:blank", "failure": true }, { "input": "https://example.com%80/", "base": "about:blank", "failure": true }, { "input": "https://example.com%A0/", "base": "about:blank", "failure": true }, { "input": "ftp://%e2%98%83", "base": "about:blank", "href": "ftp://xn--n3h/", "origin": "ftp://xn--n3h", "protocol": "ftp:", "username": "", "password": "", "host": "xn--n3h", "hostname": "xn--n3h", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https://%e2%98%83", "base": "about:blank", "href": "https://xn--n3h/", "origin": "https://xn--n3h", "protocol": "https:", "username": "", "password": "", "host": "xn--n3h", "hostname": "xn--n3h", "port": "", "pathname": "/", "search": "", "hash": "" }, "# tests from jsdom/whatwg-url designed for code coverage", { "input": "http://127.0.0.1:10100/relative_import.html", "base": "about:blank", "href": "http://127.0.0.1:10100/relative_import.html", "origin": "http://127.0.0.1:10100", "protocol": "http:", "username": "", "password": "", "host": "127.0.0.1:10100", "hostname": "127.0.0.1", "port": "10100", "pathname": "/relative_import.html", "search": "", "hash": "" }, { "input": "http://facebook.com/?foo=%7B%22abc%22", "base": "about:blank", "href": "http://facebook.com/?foo=%7B%22abc%22", "origin": "http://facebook.com", "protocol": "http:", "username": "", "password": "", "host": "facebook.com", "hostname": "facebook.com", "port": "", "pathname": "/", "search": "?foo=%7B%22abc%22", "hash": "" }, { "input": "https://localhost:3000/jqueryui@1.2.3", "base": 
"about:blank", "href": "https://localhost:3000/jqueryui@1.2.3", "origin": "https://localhost:3000", "protocol": "https:", "username": "", "password": "", "host": "localhost:3000", "hostname": "localhost", "port": "3000", "pathname": "/jqueryui@1.2.3", "search": "", "hash": "" }, "# tab/LF/CR", { "input": "h\tt\nt\rp://h\to\ns\rt:9\t0\n0\r0/p\ta\nt\rh?q\tu\ne\rry#f\tr\na\rg", "base": "about:blank", "href": "http://host:9000/path?query#frag", "origin": "http://host:9000", "protocol": "http:", "username": "", "password": "", "host": "host:9000", "hostname": "host", "port": "9000", "pathname": "/path", "search": "?query", "hash": "#frag" }, "# Stringification of URL.searchParams", { "input": "?a=b&c=d", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar?a=b&c=d", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "?a=b&c=d", "searchParams": "a=b&c=d", "hash": "" }, { "input": "??a=b&c=d", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar??a=b&c=d", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "??a=b&c=d", "searchParams": "%3Fa=b&c=d", "hash": "" }, "# Scheme only", { "input": "http:", "base": "http://example.org/foo/bar", "href": "http://example.org/foo/bar", "origin": "http://example.org", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/foo/bar", "search": "", "searchParams": "", "hash": "" }, { "input": "http:", "base": "https://example.org/foo/bar", "failure": true }, { "input": "sc:", "base": "https://example.org/foo/bar", "href": "sc:", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "", "search": "", "searchParams": "", "hash": "" }, "# Percent encoding of fragments", { "input": "http://foo.bar/baz?qux#foo\bbar", "base": "about:blank", "href": "http://foo.bar/baz?qux#foo%08bar", "origin": "http://foo.bar", "protocol": "http:", "username": "", "password": "", "host": "foo.bar", "hostname": "foo.bar", "port": "", "pathname": "/baz", "search": "?qux", "searchParams": "qux=", "hash": "#foo%08bar" }, { "input": "http://foo.bar/baz?qux#foo\"bar", "base": "about:blank", "href": "http://foo.bar/baz?qux#foo%22bar", "origin": "http://foo.bar", "protocol": "http:", "username": "", "password": "", "host": "foo.bar", "hostname": "foo.bar", "port": "", "pathname": "/baz", "search": "?qux", "searchParams": "qux=", "hash": "#foo%22bar" }, { "input": "http://foo.bar/baz?qux#foobar", "base": "about:blank", "href": "http://foo.bar/baz?qux#foo%3Ebar", "origin": "http://foo.bar", "protocol": "http:", "username": "", "password": "", "host": "foo.bar", "hostname": "foo.bar", "port": "", "pathname": "/baz", "search": "?qux", "searchParams": "qux=", "hash": "#foo%3Ebar" }, { "input": "http://foo.bar/baz?qux#foo`bar", "base": "about:blank", "href": "http://foo.bar/baz?qux#foo%60bar", "origin": "http://foo.bar", "protocol": "http:", "username": "", "password": "", "host": "foo.bar", "hostname": "foo.bar", "port": "", "pathname": "/baz", "search": "?qux", "searchParams": "qux=", "hash": "#foo%60bar" }, "# IPv4 parsing (via https://github.com/nodejs/node/pull/10317)", { "input": "http://192.168.257", "base": "http://other.com/", "href": 
"http://192.168.1.1/", "origin": "http://192.168.1.1", "protocol": "http:", "username": "", "password": "", "host": "192.168.1.1", "hostname": "192.168.1.1", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://192.168.257.com", "base": "http://other.com/", "href": "http://192.168.257.com/", "origin": "http://192.168.257.com", "protocol": "http:", "username": "", "password": "", "host": "192.168.257.com", "hostname": "192.168.257.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://256", "base": "http://other.com/", "href": "http://0.0.1.0/", "origin": "http://0.0.1.0", "protocol": "http:", "username": "", "password": "", "host": "0.0.1.0", "hostname": "0.0.1.0", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://256.com", "base": "http://other.com/", "href": "http://256.com/", "origin": "http://256.com", "protocol": "http:", "username": "", "password": "", "host": "256.com", "hostname": "256.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://999999999", "base": "http://other.com/", "href": "http://59.154.201.255/", "origin": "http://59.154.201.255", "protocol": "http:", "username": "", "password": "", "host": "59.154.201.255", "hostname": "59.154.201.255", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://999999999.com", "base": "http://other.com/", "href": "http://999999999.com/", "origin": "http://999999999.com", "protocol": "http:", "username": "", "password": "", "host": "999999999.com", "hostname": "999999999.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://10000000000", "base": "http://other.com/", "failure": true }, { "input": "http://10000000000.com", "base": "http://other.com/", "href": "http://10000000000.com/", "origin": "http://10000000000.com", "protocol": "http:", "username": "", "password": "", "host": "10000000000.com", "hostname": "10000000000.com", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://4294967295", "base": "http://other.com/", "href": "http://255.255.255.255/", "origin": "http://255.255.255.255", "protocol": "http:", "username": "", "password": "", "host": "255.255.255.255", "hostname": "255.255.255.255", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://4294967296", "base": "http://other.com/", "failure": true }, { "input": "http://0xffffffff", "base": "http://other.com/", "href": "http://255.255.255.255/", "origin": "http://255.255.255.255", "protocol": "http:", "username": "", "password": "", "host": "255.255.255.255", "hostname": "255.255.255.255", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://0xffffffff1", "base": "http://other.com/", "failure": true }, { "input": "http://256.256.256.256", "base": "http://other.com/", "failure": true }, { "input": "http://256.256.256.256.256", "base": "http://other.com/", "href": "http://256.256.256.256.256/", "origin": "http://256.256.256.256.256", "protocol": "http:", "username": "", "password": "", "host": "256.256.256.256.256", "hostname": "256.256.256.256.256", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "https://0x.0x.0", "base": "about:blank", "href": "https://0.0.0.0/", "origin": "https://0.0.0.0", "protocol": "https:", "username": "", "password": "", "host": "0.0.0.0", "hostname": "0.0.0.0", "port": "", "pathname": "/", "search": "", "hash": "" }, "More IPv4 parsing (via https://github.com/jsdom/whatwg-url/issues/92)", { "input": 
"https://0x100000000/test", "base": "about:blank", "failure": true }, { "input": "https://256.0.0.1/test", "base": "about:blank", "failure": true }, "# file URLs containing percent-encoded Windows drive letters (shouldn't work)", { "input": "file:///C%3A/", "base": "about:blank", "href": "file:///C%3A/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C%3A/", "search": "", "hash": "" }, { "input": "file:///C%7C/", "base": "about:blank", "href": "file:///C%7C/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C%7C/", "search": "", "hash": "" }, "# file URLs relative to other file URLs (via https://github.com/jsdom/whatwg-url/pull/60)", { "input": "pix/submit.gif", "base": "file:///C:/Users/Domenic/Dropbox/GitHub/tmpvar/jsdom/test/level2/html/files/anchor.html", "href": "file:///C:/Users/Domenic/Dropbox/GitHub/tmpvar/jsdom/test/level2/html/files/pix/submit.gif", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/Users/Domenic/Dropbox/GitHub/tmpvar/jsdom/test/level2/html/files/pix/submit.gif", "search": "", "hash": "" }, { "input": "..", "base": "file:///C:/", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "..", "base": "file:///", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, "# More file URL tests by zcorpan and annevk", { "input": "/", "base": "file:///C:/a/b", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "//d:", "base": "file:///C:/a/b", "href": "file:///d:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/d:", "search": "", "hash": "" }, { "input": "//d:/..", "base": "file:///C:/a/b", "href": "file:///d:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/d:/", "search": "", "hash": "" }, { "input": "..", "base": "file:///ab:/", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "..", "base": "file:///1:/", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "", "base": "file:///test?test#test", "href": "file:///test?test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?test", "hash": "" }, { "input": "file:", "base": "file:///test?test#test", "href": "file:///test?test", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?test", "hash": "" }, { "input": "?x", "base": "file:///test?test#test", "href": "file:///test?x", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?x", "hash": "" }, { "input": "file:?x", "base": "file:///test?test#test", "href": "file:///test?x", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", 
"search": "?x", "hash": "" }, { "input": "#x", "base": "file:///test?test#test", "href": "file:///test?test#x", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?test", "hash": "#x" }, { "input": "file:#x", "base": "file:///test?test#test", "href": "file:///test?test#x", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?test", "hash": "#x" }, "# File URLs and many (back)slashes", { "input": "file:\\\\//", "base": "about:blank", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:\\\\\\\\", "base": "about:blank", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:\\\\\\\\?fox", "base": "about:blank", "href": "file:///?fox", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "?fox", "hash": "" }, { "input": "file:\\\\\\\\#guppy", "base": "about:blank", "href": "file:///#guppy", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "#guppy" }, { "input": "file://spider///", "base": "about:blank", "href": "file://spider/", "protocol": "file:", "username": "", "password": "", "host": "spider", "hostname": "spider", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:\\\\localhost//", "base": "about:blank", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:///localhost//cat", "base": "about:blank", "href": "file:///localhost//cat", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/localhost//cat", "search": "", "hash": "" }, { "input": "file://\\/localhost//cat", "base": "about:blank", "href": "file:///localhost//cat", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/localhost//cat", "search": "", "hash": "" }, { "input": "file://localhost//a//../..//", "base": "about:blank", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "/////mouse", "base": "file:///elephant", "href": "file:///mouse", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/mouse", "search": "", "hash": "" }, { "input": "\\//pig", "base": "file://lion/", "href": "file:///pig", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pig", "search": "", "hash": "" }, { "input": "\\/localhost//pig", "base": "file://lion/", "href": "file:///pig", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pig", "search": "", "hash": "" }, { "input": "//localhost//pig", "base": "file://lion/", "href": "file:///pig", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/pig", "search": "", "hash": "" }, { "input": "/..//localhost//pig", "base": "file://lion/", "href": "file://lion/localhost//pig", "protocol": "file:", 
"username": "", "password": "", "host": "lion", "hostname": "lion", "port": "", "pathname": "/localhost//pig", "search": "", "hash": "" }, { "input": "file://", "base": "file://ape/", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, "# File URLs with non-empty hosts", { "input": "/rooibos", "base": "file://tea/", "href": "file://tea/rooibos", "protocol": "file:", "username": "", "password": "", "host": "tea", "hostname": "tea", "port": "", "pathname": "/rooibos", "search": "", "hash": "" }, { "input": "/?chai", "base": "file://tea/", "href": "file://tea/?chai", "protocol": "file:", "username": "", "password": "", "host": "tea", "hostname": "tea", "port": "", "pathname": "/", "search": "?chai", "hash": "" }, "# Windows drive letter handling with the 'file:' base URL", { "input": "C|", "base": "file://host/dir/file", "href": "file:///C:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:", "search": "", "hash": "" }, { "input": "C|#", "base": "file://host/dir/file", "href": "file:///C:#", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:", "search": "", "hash": "" }, { "input": "C|?", "base": "file://host/dir/file", "href": "file:///C:?", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:", "search": "", "hash": "" }, { "input": "C|/", "base": "file://host/dir/file", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "C|\n/", "base": "file://host/dir/file", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "C|\\", "base": "file://host/dir/file", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "C", "base": "file://host/dir/file", "href": "file://host/dir/C", "protocol": "file:", "username": "", "password": "", "host": "host", "hostname": "host", "port": "", "pathname": "/dir/C", "search": "", "hash": "" }, { "input": "C|a", "base": "file://host/dir/file", "href": "file://host/dir/C|a", "protocol": "file:", "username": "", "password": "", "host": "host", "hostname": "host", "port": "", "pathname": "/dir/C|a", "search": "", "hash": "" }, "# Windows drive letter quirk in the file slash state", { "input": "/c:/foo/bar", "base": "file:///c:/baz/qux", "href": "file:///c:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/c:/foo/bar", "search": "", "hash": "" }, { "input": "/c|/foo/bar", "base": "file:///c:/baz/qux", "href": "file:///c:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/c:/foo/bar", "search": "", "hash": "" }, { "input": "file:\\c:\\foo\\bar", "base": "file:///c:/baz/qux", "href": "file:///c:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/c:/foo/bar", "search": "", "hash": "" }, { "input": "/c:/foo/bar", "base": "file://host/path", "href": "file:///c:/foo/bar", "protocol": "file:", "username": "", "password": "", "host": "", 
"hostname": "", "port": "", "pathname": "/c:/foo/bar", "search": "", "hash": "" }, "# Windows drive letter quirk with not empty host", { "input": "file://example.net/C:/", "base": "about:blank", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "file://1.2.3.4/C:/", "base": "about:blank", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "file://[1::8]/C:/", "base": "about:blank", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, "# Windows drive letter quirk (no host)", { "input": "file:/C|/", "base": "about:blank", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, { "input": "file://C|/", "base": "about:blank", "href": "file:///C:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/C:/", "search": "", "hash": "" }, "# file URLs without base URL by Rimas MiseviÄius", { "input": "file:", "base": "about:blank", "href": "file:///", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "file:?q=v", "base": "about:blank", "href": "file:///?q=v", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "?q=v", "hash": "" }, { "input": "file:#frag", "base": "about:blank", "href": "file:///#frag", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": "", "hash": "#frag" }, "# file: drive letter cases from https://crbug.com/1078698", { "input": "file:///Y:", "base": "about:blank", "href": "file:///Y:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/Y:", "search": "", "hash": "" }, { "input": "file:///Y:/", "base": "about:blank", "href": "file:///Y:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/Y:/", "search": "", "hash": "" }, { "input": "file:///./Y", "base": "about:blank", "href": "file:///Y", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/Y", "search": "", "hash": "" }, { "input": "file:///./Y:", "base": "about:blank", "href": "file:///Y:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/Y:", "search": "", "hash": "" }, { "input": "\\\\\\.\\Y:", "base": "about:blank", "failure": true }, "# file: drive letter cases from https://crbug.com/1078698 but lowercased", { "input": "file:///y:", "base": "about:blank", "href": "file:///y:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/y:", "search": "", "hash": "" }, { "input": "file:///y:/", "base": "about:blank", "href": "file:///y:/", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/y:/", "search": "", "hash": "" }, { "input": "file:///./y", "base": "about:blank", "href": "file:///y", "protocol": "file:", "username": "", "password": "", "host": "", 
"hostname": "", "port": "", "pathname": "/y", "search": "", "hash": "" }, { "input": "file:///./y:", "base": "about:blank", "href": "file:///y:", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/y:", "search": "", "hash": "" }, { "input": "\\\\\\.\\y:", "base": "about:blank", "failure": true }, "# IPv6 tests", { "input": "http://[1:0::]", "base": "http://example.net/", "href": "http://[1::]/", "origin": "http://[1::]", "protocol": "http:", "username": "", "password": "", "host": "[1::]", "hostname": "[1::]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://[0:1:2:3:4:5:6:7:8]", "base": "http://example.net/", "failure": true }, { "input": "https://[0::0::0]", "base": "about:blank", "failure": true }, { "input": "https://[0:.0]", "base": "about:blank", "failure": true }, { "input": "https://[0:0:]", "base": "about:blank", "failure": true }, { "input": "https://[0:1:2:3:4:5:6:7.0.0.0.1]", "base": "about:blank", "failure": true }, { "input": "https://[0:1.00.0.0.0]", "base": "about:blank", "failure": true }, { "input": "https://[0:1.290.0.0.0]", "base": "about:blank", "failure": true }, { "input": "https://[0:1.23.23]", "base": "about:blank", "failure": true }, "# Empty host", { "input": "http://?", "base": "about:blank", "failure": true }, { "input": "http://#", "base": "about:blank", "failure": true }, "Port overflow (2^32 + 81)", { "input": "http://f:4294967377/c", "base": "http://example.org/", "failure": true }, "Port overflow (2^64 + 81)", { "input": "http://f:18446744073709551697/c", "base": "http://example.org/", "failure": true }, "Port overflow (2^128 + 81)", { "input": "http://f:340282366920938463463374607431768211537/c", "base": "http://example.org/", "failure": true }, "# Non-special-URL path tests", { "input": "sc://ñ", "base": "about:blank", "href": "sc://%C3%B1", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "", "search": "", "hash": "" }, { "input": "sc://ñ?x", "base": "about:blank", "href": "sc://%C3%B1?x", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "", "search": "?x", "hash": "" }, { "input": "sc://ñ#x", "base": "about:blank", "href": "sc://%C3%B1#x", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "", "search": "", "hash": "#x" }, { "input": "#x", "base": "sc://ñ", "href": "sc://%C3%B1#x", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "", "search": "", "hash": "#x" }, { "input": "?x", "base": "sc://ñ", "href": "sc://%C3%B1?x", "origin": "null", "protocol": "sc:", "username": "", "password": "", "host": "%C3%B1", "hostname": "%C3%B1", "port": "", "pathname": "", "search": "?x", "hash": "" }, { "input": "sc://?", "base": "about:blank", "href": "sc://?", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "", "search": "", "hash": "" }, { "input": "sc://#", "base": "about:blank", "href": "sc://#", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "", "search": "", "hash": "" }, { "input": "///", "base": "sc://x/", "href": "sc:///", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/", "search": 
"", "hash": "" }, { "input": "////", "base": "sc://x/", "href": "sc:////", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "//", "search": "", "hash": "" }, { "input": "////x/", "base": "sc://x/", "href": "sc:////x/", "protocol": "sc:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "//x/", "search": "", "hash": "" }, { "input": "tftp://foobar.com/someconfig;mode=netascii", "base": "about:blank", "href": "tftp://foobar.com/someconfig;mode=netascii", "origin": "null", "protocol": "tftp:", "username": "", "password": "", "host": "foobar.com", "hostname": "foobar.com", "port": "", "pathname": "/someconfig;mode=netascii", "search": "", "hash": "" }, { "input": "telnet://user:pass@foobar.com:23/", "base": "about:blank", "href": "telnet://user:pass@foobar.com:23/", "origin": "null", "protocol": "telnet:", "username": "user", "password": "pass", "host": "foobar.com:23", "hostname": "foobar.com", "port": "23", "pathname": "/", "search": "", "hash": "" }, { "input": "ut2004://10.10.10.10:7777/Index.ut2", "base": "about:blank", "href": "ut2004://10.10.10.10:7777/Index.ut2", "origin": "null", "protocol": "ut2004:", "username": "", "password": "", "host": "10.10.10.10:7777", "hostname": "10.10.10.10", "port": "7777", "pathname": "/Index.ut2", "search": "", "hash": "" }, { "input": "redis://foo:bar@somehost:6379/0?baz=bam&qux=baz", "base": "about:blank", "href": "redis://foo:bar@somehost:6379/0?baz=bam&qux=baz", "origin": "null", "protocol": "redis:", "username": "foo", "password": "bar", "host": "somehost:6379", "hostname": "somehost", "port": "6379", "pathname": "/0", "search": "?baz=bam&qux=baz", "hash": "" }, { "input": "rsync://foo@host:911/sup", "base": "about:blank", "href": "rsync://foo@host:911/sup", "origin": "null", "protocol": "rsync:", "username": "foo", "password": "", "host": "host:911", "hostname": "host", "port": "911", "pathname": "/sup", "search": "", "hash": "" }, { "input": "git://github.com/foo/bar.git", "base": "about:blank", "href": "git://github.com/foo/bar.git", "origin": "null", "protocol": "git:", "username": "", "password": "", "host": "github.com", "hostname": "github.com", "port": "", "pathname": "/foo/bar.git", "search": "", "hash": "" }, { "input": "irc://myserver.com:6999/channel?passwd", "base": "about:blank", "href": "irc://myserver.com:6999/channel?passwd", "origin": "null", "protocol": "irc:", "username": "", "password": "", "host": "myserver.com:6999", "hostname": "myserver.com", "port": "6999", "pathname": "/channel", "search": "?passwd", "hash": "" }, { "input": "dns://fw.example.org:9999/foo.bar.org?type=TXT", "base": "about:blank", "href": "dns://fw.example.org:9999/foo.bar.org?type=TXT", "origin": "null", "protocol": "dns:", "username": "", "password": "", "host": "fw.example.org:9999", "hostname": "fw.example.org", "port": "9999", "pathname": "/foo.bar.org", "search": "?type=TXT", "hash": "" }, { "input": "ldap://localhost:389/ou=People,o=JNDITutorial", "base": "about:blank", "href": "ldap://localhost:389/ou=People,o=JNDITutorial", "origin": "null", "protocol": "ldap:", "username": "", "password": "", "host": "localhost:389", "hostname": "localhost", "port": "389", "pathname": "/ou=People,o=JNDITutorial", "search": "", "hash": "" }, { "input": "git+https://github.com/foo/bar", "base": "about:blank", "href": "git+https://github.com/foo/bar", "origin": "null", "protocol": "git+https:", "username": "", "password": "", "host": "github.com", "hostname": "github.com", 
"port": "", "pathname": "/foo/bar", "search": "", "hash": "" }, { "input": "urn:ietf:rfc:2648", "base": "about:blank", "href": "urn:ietf:rfc:2648", "origin": "null", "protocol": "urn:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "ietf:rfc:2648", "search": "", "hash": "" }, { "input": "tag:joe@example.org,2001:foo/bar", "base": "about:blank", "href": "tag:joe@example.org,2001:foo/bar", "origin": "null", "protocol": "tag:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "joe@example.org,2001:foo/bar", "search": "", "hash": "" }, "# percent encoded hosts in non-special-URLs", { "input": "non-special://%E2%80%A0/", "base": "about:blank", "href": "non-special://%E2%80%A0/", "protocol": "non-special:", "username": "", "password": "", "host": "%E2%80%A0", "hostname": "%E2%80%A0", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "non-special://H%4fSt/path", "base": "about:blank", "href": "non-special://H%4fSt/path", "protocol": "non-special:", "username": "", "password": "", "host": "H%4fSt", "hostname": "H%4fSt", "port": "", "pathname": "/path", "search": "", "hash": "" }, "# IPv6 in non-special-URLs", { "input": "non-special://[1:2:0:0:5:0:0:0]/", "base": "about:blank", "href": "non-special://[1:2:0:0:5::]/", "protocol": "non-special:", "username": "", "password": "", "host": "[1:2:0:0:5::]", "hostname": "[1:2:0:0:5::]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "non-special://[1:2:0:0:0:0:0:3]/", "base": "about:blank", "href": "non-special://[1:2::3]/", "protocol": "non-special:", "username": "", "password": "", "host": "[1:2::3]", "hostname": "[1:2::3]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "non-special://[1:2::3]:80/", "base": "about:blank", "href": "non-special://[1:2::3]:80/", "protocol": "non-special:", "username": "", "password": "", "host": "[1:2::3]:80", "hostname": "[1:2::3]", "port": "80", "pathname": "/", "search": "", "hash": "" }, { "input": "non-special://[:80/", "base": "about:blank", "failure": true }, { "input": "blob:https://example.com:443/", "base": "about:blank", "href": "blob:https://example.com:443/", "protocol": "blob:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "https://example.com:443/", "search": "", "hash": "" }, { "input": "blob:d3958f5c-0777-0845-9dcf-2cb28783acaf", "base": "about:blank", "href": "blob:d3958f5c-0777-0845-9dcf-2cb28783acaf", "protocol": "blob:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "d3958f5c-0777-0845-9dcf-2cb28783acaf", "search": "", "hash": "" }, "Invalid IPv4 radix digits", { "input": "http://0177.0.0.0189", "base": "about:blank", "href": "http://0177.0.0.0189/", "protocol": "http:", "username": "", "password": "", "host": "0177.0.0.0189", "hostname": "0177.0.0.0189", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://0x7f.0.0.0x7g", "base": "about:blank", "href": "http://0x7f.0.0.0x7g/", "protocol": "http:", "username": "", "password": "", "host": "0x7f.0.0.0x7g", "hostname": "0x7f.0.0.0x7g", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://0X7F.0.0.0X7G", "base": "about:blank", "href": "http://0x7f.0.0.0x7g/", "protocol": "http:", "username": "", "password": "", "host": "0x7f.0.0.0x7g", "hostname": "0x7f.0.0.0x7g", "port": "", "pathname": "/", "search": "", "hash": "" }, "Invalid IPv4 portion of IPv6 address", { "input": "http://[::127.0.0.0.1]", "base": 
"about:blank", "failure": true }, "Uncompressed IPv6 addresses with 0", { "input": "http://[0:1:0:1:0:1:0:1]", "base": "about:blank", "href": "http://[0:1:0:1:0:1:0:1]/", "protocol": "http:", "username": "", "password": "", "host": "[0:1:0:1:0:1:0:1]", "hostname": "[0:1:0:1:0:1:0:1]", "port": "", "pathname": "/", "search": "", "hash": "" }, { "input": "http://[1:0:1:0:1:0:1:0]", "base": "about:blank", "href": "http://[1:0:1:0:1:0:1:0]/", "protocol": "http:", "username": "", "password": "", "host": "[1:0:1:0:1:0:1:0]", "hostname": "[1:0:1:0:1:0:1:0]", "port": "", "pathname": "/", "search": "", "hash": "" }, "Percent-encoded query and fragment", { "input": "http://example.org/test?\u0022", "base": "about:blank", "href": "http://example.org/test?%22", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%22", "hash": "" }, { "input": "http://example.org/test?\u0023", "base": "about:blank", "href": "http://example.org/test?#", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "", "hash": "" }, { "input": "http://example.org/test?\u003C", "base": "about:blank", "href": "http://example.org/test?%3C", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%3C", "hash": "" }, { "input": "http://example.org/test?\u003E", "base": "about:blank", "href": "http://example.org/test?%3E", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%3E", "hash": "" }, { "input": "http://example.org/test?\u2323", "base": "about:blank", "href": "http://example.org/test?%E2%8C%A3", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%E2%8C%A3", "hash": "" }, { "input": "http://example.org/test?%23%23", "base": "about:blank", "href": "http://example.org/test?%23%23", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%23%23", "hash": "" }, { "input": "http://example.org/test?%GH", "base": "about:blank", "href": "http://example.org/test?%GH", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?%GH", "hash": "" }, { "input": "http://example.org/test?a#%EF", "base": "about:blank", "href": "http://example.org/test?a#%EF", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?a", "hash": "#%EF" }, { "input": "http://example.org/test?a#%GH", "base": "about:blank", "href": "http://example.org/test?a#%GH", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?a", "hash": "#%GH" }, "URLs that require a non-about:blank base. 
(Also serve as invalid base tests.)", { "input": "a", "base": "about:blank", "failure": true }, { "input": "a/", "base": "about:blank", "failure": true }, { "input": "a//", "base": "about:blank", "failure": true }, "Bases that don't fail to parse but fail to be bases", { "input": "test-a-colon.html", "base": "a:", "failure": true }, { "input": "test-a-colon-b.html", "base": "a:b", "failure": true }, "Other base URL tests, that must succeed", { "input": "test-a-colon-slash.html", "base": "a:/", "href": "a:/test-a-colon-slash.html", "protocol": "a:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test-a-colon-slash.html", "search": "", "hash": "" }, { "input": "test-a-colon-slash-slash.html", "base": "a://", "href": "a:///test-a-colon-slash-slash.html", "protocol": "a:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test-a-colon-slash-slash.html", "search": "", "hash": "" }, { "input": "test-a-colon-slash-b.html", "base": "a:/b", "href": "a:/test-a-colon-slash-b.html", "protocol": "a:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test-a-colon-slash-b.html", "search": "", "hash": "" }, { "input": "test-a-colon-slash-slash-b.html", "base": "a://b", "href": "a://b/test-a-colon-slash-slash-b.html", "protocol": "a:", "username": "", "password": "", "host": "b", "hostname": "b", "port": "", "pathname": "/test-a-colon-slash-slash-b.html", "search": "", "hash": "" }, "Null code point in fragment", { "input": "http://example.org/test?a#b\u0000c", "base": "about:blank", "href": "http://example.org/test?a#b%00c", "protocol": "http:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?a", "hash": "#b%00c" }, { "input": "non-spec://example.org/test?a#b\u0000c", "base": "about:blank", "href": "non-spec://example.org/test?a#b%00c", "protocol": "non-spec:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/test", "search": "?a", "hash": "#b%00c" }, { "input": "non-spec:/test?a#b\u0000c", "base": "about:blank", "href": "non-spec:/test?a#b%00c", "protocol": "non-spec:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/test", "search": "?a", "hash": "#b%00c" }, "First scheme char - not allowed: https://github.com/whatwg/url/issues/464", { "input": "10.0.0.7:8080/foo.html", "base": "file:///some/dir/bar.html", "href": "file:///some/dir/10.0.0.7:8080/foo.html", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/some/dir/10.0.0.7:8080/foo.html", "search": "", "hash": "" }, "Subsequent scheme chars - not allowed", { "input": "a!@$*=/foo.html", "base": "file:///some/dir/bar.html", "href": "file:///some/dir/a!@$*=/foo.html", "protocol": "file:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "/some/dir/a!@$*=/foo.html", "search": "", "hash": "" }, "First and subsequent scheme chars - allowed", { "input": "a1234567890-+.:foo/bar", "base": "http://example.com/dir/file", "href": "a1234567890-+.:foo/bar", "protocol": "a1234567890-+.:", "username": "", "password": "", "host": "", "hostname": "", "port": "", "pathname": "foo/bar", "search": "", "hash": "" }, "IDNA ignored code points in file URLs hosts", { "input": "file://a\u00ADb/p", "base": "about:blank", "href": "file://ab/p", "protocol": "file:", "username": "", "password": "", "host": 
"ab", "hostname": "ab", "port": "", "pathname": "/p", "search": "", "hash": "" }, { "input": "file://a%C2%ADb/p", "base": "about:blank", "href": "file://ab/p", "protocol": "file:", "username": "", "password": "", "host": "ab", "hostname": "ab", "port": "", "pathname": "/p", "search": "", "hash": "" }, "Empty host after the domain to ASCII", { "input": "file://\u00ad/p", "base": "about:blank", "failure": true }, { "input": "file://%C2%AD/p", "base": "about:blank", "failure": true }, { "input": "file://xn--/p", "base": "about:blank", "failure": true }, "https://bugzilla.mozilla.org/show_bug.cgi?id=1647058", { "input": "#link", "base": "https://example.org/##link", "href": "https://example.org/#link", "protocol": "https:", "username": "", "password": "", "host": "example.org", "hostname": "example.org", "port": "", "pathname": "/", "search": "", "hash": "#link" } ] vendor/url/tests/unit.rs0000664000175000017500000010627514160055207016153 0ustar mwhudsonmwhudson// Copyright 2013-2014 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Unit tests use std::borrow::Cow; use std::cell::{Cell, RefCell}; use std::net::{Ipv4Addr, Ipv6Addr}; use std::path::{Path, PathBuf}; use url::{form_urlencoded, Host, Origin, Url}; #[test] fn size() { use std::mem::size_of; assert_eq!(size_of::(), size_of::>()); } #[test] fn test_relative() { let base: Url = "sc://%C3%B1".parse().unwrap(); let url = base.join("/resources/testharness.js").unwrap(); assert_eq!(url.as_str(), "sc://%C3%B1/resources/testharness.js"); } #[test] fn test_relative_empty() { let base: Url = "sc://%C3%B1".parse().unwrap(); let url = base.join("").unwrap(); assert_eq!(url.as_str(), "sc://%C3%B1"); } #[test] fn test_set_empty_host() { let mut base: Url = "moz://foo:bar@servo/baz".parse().unwrap(); base.set_username("").unwrap(); assert_eq!(base.as_str(), "moz://:bar@servo/baz"); base.set_host(None).unwrap(); assert_eq!(base.as_str(), "moz:/baz"); base.set_host(Some("servo")).unwrap(); assert_eq!(base.as_str(), "moz://servo/baz"); } #[test] fn test_set_empty_hostname() { use url::quirks; let mut base: Url = "moz://foo@servo/baz".parse().unwrap(); assert!( quirks::set_hostname(&mut base, "").is_err(), "setting an empty hostname to a url with a username should fail" ); base = "moz://:pass@servo/baz".parse().unwrap(); assert!( quirks::set_hostname(&mut base, "").is_err(), "setting an empty hostname to a url with a password should fail" ); base = "moz://servo/baz".parse().unwrap(); quirks::set_hostname(&mut base, "").unwrap(); assert_eq!(base.as_str(), "moz:///baz"); } macro_rules! 
assert_from_file_path { ($path: expr) => { assert_from_file_path!($path, $path) }; ($path: expr, $url_path: expr) => {{ let url = Url::from_file_path(Path::new($path)).unwrap(); assert_eq!(url.host(), None); assert_eq!(url.path(), $url_path); assert_eq!(url.to_file_path(), Ok(PathBuf::from($path))); }}; } #[test] fn new_file_paths() { if cfg!(unix) { assert_eq!(Url::from_file_path(Path::new("relative")), Err(())); assert_eq!(Url::from_file_path(Path::new("../relative")), Err(())); } if cfg!(windows) { assert_eq!(Url::from_file_path(Path::new("relative")), Err(())); assert_eq!(Url::from_file_path(Path::new(r"..\relative")), Err(())); assert_eq!(Url::from_file_path(Path::new(r"\drive-relative")), Err(())); assert_eq!(Url::from_file_path(Path::new(r"\\ucn\")), Err(())); } if cfg!(unix) { assert_from_file_path!("/foo/bar"); assert_from_file_path!("/foo/ba\0r", "/foo/ba%00r"); assert_from_file_path!("/foo/ba%00r", "/foo/ba%2500r"); } } #[test] #[cfg(unix)] fn new_path_bad_utf8() { use std::ffi::OsStr; use std::os::unix::prelude::*; let url = Url::from_file_path(Path::new(OsStr::from_bytes(b"/foo/ba\x80r"))).unwrap(); let os_str = OsStr::from_bytes(b"/foo/ba\x80r"); assert_eq!(url.to_file_path(), Ok(PathBuf::from(os_str))); } #[test] fn new_path_windows_fun() { if cfg!(windows) { assert_from_file_path!(r"C:\foo\bar", "/C:/foo/bar"); assert_from_file_path!("C:\\foo\\ba\0r", "/C:/foo/ba%00r"); // Invalid UTF-8 assert!(Url::parse("file:///C:/foo/ba%80r") .unwrap() .to_file_path() .is_err()); // test windows canonicalized path let path = PathBuf::from(r"\\?\C:\foo\bar"); assert!(Url::from_file_path(path).is_ok()); // Percent-encoded drive letter let url = Url::parse("file:///C%3A/foo/bar").unwrap(); assert_eq!(url.to_file_path(), Ok(PathBuf::from(r"C:\foo\bar"))); } } #[test] fn new_directory_paths() { if cfg!(unix) { assert_eq!(Url::from_directory_path(Path::new("relative")), Err(())); assert_eq!(Url::from_directory_path(Path::new("../relative")), Err(())); let url = Url::from_directory_path(Path::new("/foo/bar")).unwrap(); assert_eq!(url.host(), None); assert_eq!(url.path(), "/foo/bar/"); } if cfg!(windows) { assert_eq!(Url::from_directory_path(Path::new("relative")), Err(())); assert_eq!(Url::from_directory_path(Path::new(r"..\relative")), Err(())); assert_eq!( Url::from_directory_path(Path::new(r"\drive-relative")), Err(()) ); assert_eq!(Url::from_directory_path(Path::new(r"\\ucn\")), Err(())); let url = Url::from_directory_path(Path::new(r"C:\foo\bar")).unwrap(); assert_eq!(url.host(), None); assert_eq!(url.path(), "/C:/foo/bar/"); } } #[test] fn path_backslash_fun() { let mut special_url = "http://foobar.com".parse::<Url>().unwrap(); special_url.path_segments_mut().unwrap().push("foo\\bar"); assert_eq!(special_url.as_str(), "http://foobar.com/foo%5Cbar"); let mut nonspecial_url = "thing://foobar.com".parse::<Url>().unwrap(); nonspecial_url.path_segments_mut().unwrap().push("foo\\bar"); assert_eq!(nonspecial_url.as_str(), "thing://foobar.com/foo\\bar"); } #[test] fn from_str() { assert!("http://testing.com/this".parse::<Url>().is_ok()); } #[test] fn parse_with_params() { let url = Url::parse_with_params( "http://testing.com/this?dont=clobberme", &[("lang", "rust")], ) .unwrap(); assert_eq!( url.as_str(), "http://testing.com/this?dont=clobberme&lang=rust" ); } #[test] fn issue_124() { let url: Url = "file:a".parse().unwrap(); assert_eq!(url.path(), "/a"); let url: Url = "file:...".parse().unwrap(); assert_eq!(url.path(), "/..."); let url: Url = "file:..".parse().unwrap(); assert_eq!(url.path(), "/"); } #[test]
fn test_equality() { use std::collections::hash_map::DefaultHasher; use std::hash::{Hash, Hasher}; fn check_eq(a: &Url, b: &Url) { assert_eq!(a, b); let mut h1 = DefaultHasher::new(); a.hash(&mut h1); let mut h2 = DefaultHasher::new(); b.hash(&mut h2); assert_eq!(h1.finish(), h2.finish()); } fn url(s: &str) -> Url { let rv = s.parse().unwrap(); check_eq(&rv, &rv); rv } // Doesn't care if default port is given. let a: Url = url("https://example.com/"); let b: Url = url("https://example.com:443/"); check_eq(&a, &b); // Different ports let a: Url = url("http://example.com/"); let b: Url = url("http://example.com:8080/"); assert!(a != b, "{:?} != {:?}", a, b); // Different scheme let a: Url = url("http://example.com/"); let b: Url = url("https://example.com/"); assert_ne!(a, b); // Different host let a: Url = url("http://foo.com/"); let b: Url = url("http://bar.com/"); assert_ne!(a, b); // Missing path, automatically substituted. Semantically the same. let a: Url = url("http://foo.com"); let b: Url = url("http://foo.com/"); check_eq(&a, &b); } #[test] fn host() { fn assert_host(input: &str, host: Host<&str>) { assert_eq!(Url::parse(input).unwrap().host(), Some(host)); } assert_host("http://www.mozilla.org", Host::Domain("www.mozilla.org")); assert_host( "http://1.35.33.49", Host::Ipv4(Ipv4Addr::new(1, 35, 33, 49)), ); assert_host( "http://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]", Host::Ipv6(Ipv6Addr::new( 0x2001, 0x0db8, 0x85a3, 0x08d3, 0x1319, 0x8a2e, 0x0370, 0x7344, )), ); assert_host("http://1.35.+33.49", Host::Domain("1.35.+33.49")); assert_host( "http://[::]", Host::Ipv6(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0)), ); assert_host( "http://[::1]", Host::Ipv6(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1)), ); assert_host( "http://0x1.0X23.0x21.061", Host::Ipv4(Ipv4Addr::new(1, 35, 33, 49)), ); assert_host("http://0x1232131", Host::Ipv4(Ipv4Addr::new(1, 35, 33, 49))); assert_host("http://111", Host::Ipv4(Ipv4Addr::new(0, 0, 0, 111))); assert_host("http://2..2.3", Host::Domain("2..2.3")); assert!(Url::parse("http://42.0x1232131").is_err()); assert!(Url::parse("http://192.168.0.257").is_err()); assert_eq!(Host::Domain("foo"), Host::Domain("foo").to_owned()); assert_ne!(Host::Domain("foo"), Host::Domain("bar").to_owned()); } #[test] fn host_serialization() { // libstd’s `Display for Ipv6Addr` serializes 0:0:0:0:0:0:_:_ and 0:0:0:0:0:ffff:_:_ // using IPv4-like syntax, as suggested in https://tools.ietf.org/html/rfc5952#section-4 // but https://url.spec.whatwg.org/#concept-ipv6-serializer specifies not to. 
// Not [::0.0.0.2] / [::ffff:0.0.0.2] assert_eq!( Url::parse("http://[0::2]").unwrap().host_str(), Some("[::2]") ); assert_eq!( Url::parse("http://[0::ffff:0:2]").unwrap().host_str(), Some("[::ffff:0:2]") ); } #[test] fn test_idna() { assert!("http://goșu.ro".parse::<Url>().is_ok()); assert_eq!( Url::parse("http://☃.net/").unwrap().host(), Some(Host::Domain("xn--n3h.net")) ); assert!("https://r2---sn-huoa-cvhl.googlevideo.com/crossdomain.xml" .parse::<Url>() .is_ok()); } #[test] fn test_serialization() { let data = [ ("http://example.com/", "http://example.com/"), ("http://addslash.com", "http://addslash.com/"), ("http://@emptyuser.com/", "http://emptyuser.com/"), ("http://:@emptypass.com/", "http://emptypass.com/"), ("http://user@user.com/", "http://user@user.com/"), ( "http://user:pass@userpass.com/", "http://user:pass@userpass.com/", ), ( "http://slashquery.com/path/?q=something", "http://slashquery.com/path/?q=something", ), ( "http://noslashquery.com/path?q=something", "http://noslashquery.com/path?q=something", ), ]; for &(input, result) in &data { let url = Url::parse(input).unwrap(); assert_eq!(url.as_str(), result); } } #[test] fn test_form_urlencoded() { let pairs: &[(Cow<'_, str>, Cow<'_, str>)] = &[ ("foo".into(), "é&".into()), ("bar".into(), "".into()), ("foo".into(), "#".into()), ]; let encoded = form_urlencoded::Serializer::new(String::new()) .extend_pairs(pairs) .finish(); assert_eq!(encoded, "foo=%C3%A9%26&bar=&foo=%23"); assert_eq!( form_urlencoded::parse(encoded.as_bytes()).collect::<Vec<_>>(), pairs.to_vec() ); } #[test] fn test_form_serialize() { let encoded = form_urlencoded::Serializer::new(String::new()) .append_pair("foo", "é&") .append_pair("bar", "") .append_pair("foo", "#") .append_key_only("json") .finish(); assert_eq!(encoded, "foo=%C3%A9%26&bar=&foo=%23&json"); } #[test] fn form_urlencoded_encoding_override() { let encoded = form_urlencoded::Serializer::new(String::new()) .encoding_override(Some(&|s| s.as_bytes().to_ascii_uppercase().into())) .append_pair("foo", "bar") .append_key_only("xml") .finish(); assert_eq!(encoded, "FOO=BAR&XML"); } #[test] /// https://github.com/servo/rust-url/issues/61 fn issue_61() { let mut url = Url::parse("http://mozilla.org").unwrap(); url.set_scheme("https").unwrap(); assert_eq!(url.port(), None); assert_eq!(url.port_or_known_default(), Some(443)); url.check_invariants().unwrap(); } #[test] #[cfg(not(windows))] /// https://github.com/servo/rust-url/issues/197 fn issue_197() { let mut url = Url::from_file_path("/").expect("Failed to parse path"); url.check_invariants().unwrap(); assert_eq!( url, Url::parse("file:///").expect("Failed to parse path + protocol") ); url.path_segments_mut() .expect("path_segments_mut") .pop_if_empty(); } #[test] fn issue_241() { Url::parse("mailto:").unwrap().cannot_be_a_base(); } #[test] /// https://github.com/servo/rust-url/issues/222 fn append_trailing_slash() { let mut url: Url = "http://localhost:6767/foo/bar?a=b".parse().unwrap(); url.check_invariants().unwrap(); url.path_segments_mut().unwrap().push(""); url.check_invariants().unwrap(); assert_eq!(url.to_string(), "http://localhost:6767/foo/bar/?a=b"); } #[test] /// https://github.com/servo/rust-url/issues/227 fn extend_query_pairs_then_mutate() { let mut url: Url = "http://localhost:6767/foo/bar".parse().unwrap(); url.query_pairs_mut() .extend_pairs(vec![("auth", "my-token")].into_iter()); url.check_invariants().unwrap(); assert_eq!( url.to_string(), "http://localhost:6767/foo/bar?auth=my-token" ); url.path_segments_mut().unwrap().push("some_other_path");
url.check_invariants().unwrap(); assert_eq!( url.to_string(), "http://localhost:6767/foo/bar/some_other_path?auth=my-token" ); } #[test] /// https://github.com/servo/rust-url/issues/222 fn append_empty_segment_then_mutate() { let mut url: Url = "http://localhost:6767/foo/bar?a=b".parse().unwrap(); url.check_invariants().unwrap(); url.path_segments_mut().unwrap().push("").pop(); url.check_invariants().unwrap(); assert_eq!(url.to_string(), "http://localhost:6767/foo/bar?a=b"); } #[test] /// https://github.com/servo/rust-url/issues/243 fn test_set_host() { let mut url = Url::parse("https://example.net/hello").unwrap(); url.set_host(Some("foo.com")).unwrap(); assert_eq!(url.as_str(), "https://foo.com/hello"); assert!(url.set_host(None).is_err()); assert_eq!(url.as_str(), "https://foo.com/hello"); assert!(url.set_host(Some("")).is_err()); assert_eq!(url.as_str(), "https://foo.com/hello"); let mut url = Url::parse("foobar://example.net/hello").unwrap(); url.set_host(None).unwrap(); assert_eq!(url.as_str(), "foobar:/hello"); let mut url = Url::parse("foo://ș").unwrap(); assert_eq!(url.as_str(), "foo://%C8%99"); url.set_host(Some("goșu.ro")).unwrap(); assert_eq!(url.as_str(), "foo://go%C8%99u.ro"); } #[test] // https://github.com/servo/rust-url/issues/166 fn test_leading_dots() { assert_eq!( Host::parse(".org").unwrap(), Host::Domain(".org".to_owned()) ); assert_eq!(Url::parse("file://./foo").unwrap().domain(), Some(".")); } #[test] /// https://github.com/servo/rust-url/issues/302 fn test_origin_hash() { use std::collections::hash_map::DefaultHasher; use std::hash::{Hash, Hasher}; fn hash<T: Hash>(value: &T) -> u64 { let mut hasher = DefaultHasher::new(); value.hash(&mut hasher); hasher.finish() } let origin = &Url::parse("http://example.net/").unwrap().origin(); let origins_to_compare = [ Url::parse("http://example.net:80/").unwrap().origin(), Url::parse("http://example.net:81/").unwrap().origin(), Url::parse("http://example.net").unwrap().origin(), Url::parse("http://example.net/hello").unwrap().origin(), Url::parse("https://example.net").unwrap().origin(), Url::parse("ftp://example.net").unwrap().origin(), Url::parse("file://example.net").unwrap().origin(), Url::parse("http://user@example.net/").unwrap().origin(), Url::parse("http://user:pass@example.net/") .unwrap() .origin(), ]; for origin_to_compare in &origins_to_compare { if origin == origin_to_compare { assert_eq!(hash(origin), hash(origin_to_compare)); } else { assert_ne!(hash(origin), hash(origin_to_compare)); } } let opaque_origin = Url::parse("file://example.net").unwrap().origin(); let same_opaque_origin = Url::parse("file://example.net").unwrap().origin(); let other_opaque_origin = Url::parse("file://other").unwrap().origin(); assert_ne!(hash(&opaque_origin), hash(&same_opaque_origin)); assert_ne!(hash(&opaque_origin), hash(&other_opaque_origin)); } #[test] fn test_origin_blob_equality() { let origin = &Url::parse("http://example.net/").unwrap().origin(); let blob_origin = &Url::parse("blob:http://example.net/").unwrap().origin(); assert_eq!(origin, blob_origin); } #[test] fn test_origin_opaque() { assert!(!Origin::new_opaque().is_tuple()); assert!(!&Url::parse("blob:malformed//").unwrap().origin().is_tuple()) } #[test] fn test_origin_unicode_serialization() { let data = [ ("http://😅.com", "http://😅.com"), ("ftp://😅:🙂@🙂.com", "ftp://🙂.com"), ("https://user@😅.com", "https://😅.com"), ("http://😅.🙂:40", "http://😅.🙂:40"), ]; for &(unicode_url, expected_serialization) in &data { let origin = Url::parse(unicode_url).unwrap().origin();
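// unicode_serialization() should match the expected string in each tuple above, keeping the Unicode host labels instead of a Punycode (xn--) form.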
assert_eq!(origin.unicode_serialization(), *expected_serialization); } let ascii_origins = [ Url::parse("http://example.net/").unwrap().origin(), Url::parse("http://example.net:80/").unwrap().origin(), Url::parse("http://example.net:81/").unwrap().origin(), Url::parse("http://example.net").unwrap().origin(), Url::parse("http://example.net/hello").unwrap().origin(), Url::parse("https://example.net").unwrap().origin(), Url::parse("ftp://example.net").unwrap().origin(), Url::parse("file://example.net").unwrap().origin(), Url::parse("http://user@example.net/").unwrap().origin(), Url::parse("http://user:pass@example.net/") .unwrap() .origin(), Url::parse("http://127.0.0.1").unwrap().origin(), ]; for ascii_origin in &ascii_origins { assert_eq!( ascii_origin.ascii_serialization(), ascii_origin.unicode_serialization() ); } } #[test] fn test_socket_addrs() { use std::net::ToSocketAddrs; let data = [ ("https://127.0.0.1/", "127.0.0.1", 443), ("https://127.0.0.1:9742/", "127.0.0.1", 9742), ("custom-protocol://127.0.0.1:9742/", "127.0.0.1", 9742), ("custom-protocol://127.0.0.1/", "127.0.0.1", 9743), ("https://[::1]/", "::1", 443), ("https://[::1]:9742/", "::1", 9742), ("custom-protocol://[::1]:9742/", "::1", 9742), ("custom-protocol://[::1]/", "::1", 9743), ("https://localhost/", "localhost", 443), ("https://localhost:9742/", "localhost", 9742), ("custom-protocol://localhost:9742/", "localhost", 9742), ("custom-protocol://localhost/", "localhost", 9743), ]; for (url_string, host, port) in &data { let url = url::Url::parse(url_string).unwrap(); let addrs = url .socket_addrs(|| match url.scheme() { "custom-protocol" => Some(9743), _ => None, }) .unwrap(); assert_eq!( Some(addrs[0]), (*host, *port).to_socket_addrs().unwrap().next() ); } } #[test] fn test_no_base_url() { let mut no_base_url = Url::parse("mailto:test@example.net").unwrap(); assert!(no_base_url.cannot_be_a_base()); assert!(no_base_url.path_segments().is_none()); assert!(no_base_url.path_segments_mut().is_err()); assert!(no_base_url.set_host(Some("foo")).is_err()); assert!(no_base_url .set_ip_host("127.0.0.1".parse().unwrap()) .is_err()); no_base_url.set_path("/foo"); assert_eq!(no_base_url.path(), "%2Ffoo"); } #[test] fn test_domain() { let url = Url::parse("https://127.0.0.1/").unwrap(); assert_eq!(url.domain(), None); let url = Url::parse("mailto:test@example.net").unwrap(); assert_eq!(url.domain(), None); let url = Url::parse("https://example.com/").unwrap(); assert_eq!(url.domain(), Some("example.com")); } #[test] fn test_query() { let url = Url::parse("https://example.com/products?page=2#fragment").unwrap(); assert_eq!(url.query(), Some("page=2")); assert_eq!( url.query_pairs().next(), Some((Cow::Borrowed("page"), Cow::Borrowed("2"))) ); let url = Url::parse("https://example.com/products").unwrap(); assert!(url.query().is_none()); assert_eq!(url.query_pairs().count(), 0); let url = Url::parse("https://example.com/?country=español").unwrap(); assert_eq!(url.query(), Some("country=espa%C3%B1ol")); assert_eq!( url.query_pairs().next(), Some((Cow::Borrowed("country"), Cow::Borrowed("español"))) ); let url = Url::parse("https://example.com/products?page=2&sort=desc").unwrap(); assert_eq!(url.query(), Some("page=2&sort=desc")); let mut pairs = url.query_pairs(); assert_eq!(pairs.count(), 2); assert_eq!( pairs.next(), Some((Cow::Borrowed("page"), Cow::Borrowed("2"))) ); assert_eq!( pairs.next(), Some((Cow::Borrowed("sort"), Cow::Borrowed("desc"))) ); } #[test] fn test_fragment() { let url = 
Url::parse("https://example.com/#fragment").unwrap(); assert_eq!(url.fragment(), Some("fragment")); let url = Url::parse("https://example.com/").unwrap(); assert_eq!(url.fragment(), None); } #[test] fn test_set_ip_host() { let mut url = Url::parse("http://example.com").unwrap(); url.set_ip_host("127.0.0.1".parse().unwrap()).unwrap(); assert_eq!(url.host_str(), Some("127.0.0.1")); url.set_ip_host("::1".parse().unwrap()).unwrap(); assert_eq!(url.host_str(), Some("[::1]")); } #[test] fn test_set_href() { use url::quirks::set_href; let mut url = Url::parse("https://existing.url").unwrap(); assert!(set_href(&mut url, "mal//formed").is_err()); assert!(set_href( &mut url, "https://user:pass@domain.com:9742/path/file.ext?key=val&key2=val2#fragment" ) .is_ok()); assert_eq!( url, Url::parse("https://user:pass@domain.com:9742/path/file.ext?key=val&key2=val2#fragment") .unwrap() ); } #[test] fn test_domain_encoding_quirks() { use url::quirks::{domain_to_ascii, domain_to_unicode}; let data = [ ("http://example.com", "", ""), ("😅.🙂", "xn--j28h.xn--938h", "😅.🙂"), ("example.com", "example.com", "example.com"), ("mailto:test@example.net", "", ""), ]; for url in &data { assert_eq!(domain_to_ascii(url.0), url.1); assert_eq!(domain_to_unicode(url.0), url.2); } } #[test] fn test_windows_unc_path() { if !cfg!(windows) { return; } let url = Url::from_file_path(Path::new(r"\\host\share\path\file.txt")).unwrap(); assert_eq!(url.as_str(), "file://host/share/path/file.txt"); let url = Url::from_file_path(Path::new(r"\\höst\share\path\file.txt")).unwrap(); assert_eq!(url.as_str(), "file://xn--hst-sna/share/path/file.txt"); let url = Url::from_file_path(Path::new(r"\\192.168.0.1\share\path\file.txt")).unwrap(); assert_eq!(url.host(), Some(Host::Ipv4(Ipv4Addr::new(192, 168, 0, 1)))); let path = url.to_file_path().unwrap(); assert_eq!(path.to_str(), Some(r"\\192.168.0.1\share\path\file.txt")); // Another way to write these: let url = Url::from_file_path(Path::new(r"\\?\UNC\host\share\path\file.txt")).unwrap(); assert_eq!(url.as_str(), "file://host/share/path/file.txt"); // Paths starting with "\\.\" (Local Device Paths) are intentionally not supported. 
let url = Url::from_file_path(Path::new(r"\\.\some\path\file.txt")); assert!(url.is_err()); } #[test] fn test_syntax_violation_callback() { use url::SyntaxViolation::*; let violation = Cell::new(None); let url = Url::options() .syntax_violation_callback(Some(&|v| violation.set(Some(v)))) .parse("http:////mozilla.org:42") .unwrap(); assert_eq!(url.port(), Some(42)); let v = violation.take().unwrap(); assert_eq!(v, ExpectedDoubleSlash); assert_eq!(v.description(), "expected //"); assert_eq!(v.to_string(), "expected //"); } #[test] fn test_syntax_violation_callback_lifetimes() { use url::SyntaxViolation::*; let violation = Cell::new(None); let vfn = |s| violation.set(Some(s)); let url = Url::options() .syntax_violation_callback(Some(&vfn)) .parse("http:////mozilla.org:42") .unwrap(); assert_eq!(url.port(), Some(42)); assert_eq!(violation.take(), Some(ExpectedDoubleSlash)); let url = Url::options() .syntax_violation_callback(Some(&vfn)) .parse("http://mozilla.org\\path") .unwrap(); assert_eq!(url.path(), "/path"); assert_eq!(violation.take(), Some(Backslash)); } #[test] fn test_syntax_violation_callback_types() { use url::SyntaxViolation::*; let data = [ ("http://mozilla.org/\\foo", Backslash, "backslash"), (" http://mozilla.org", C0SpaceIgnored, "leading or trailing control or space character are ignored in URLs"), ("http://user:pass@mozilla.org", EmbeddedCredentials, "embedding authentication information (username or password) in an URL is not recommended"), ("http:///mozilla.org", ExpectedDoubleSlash, "expected //"), ("file:/foo.txt", ExpectedFileDoubleSlash, "expected // after file:"), ("file://mozilla.org/c:/file.txt", FileWithHostAndWindowsDrive, "file: with host and Windows drive letter"), ("http://mozilla.org/^", NonUrlCodePoint, "non-URL code point"), ("http://mozilla.org/#\00", NullInFragment, "NULL characters are ignored in URL fragment identifiers"), ("http://mozilla.org/%1", PercentDecode, "expected 2 hex digits after %"), ("http://mozilla.org\t/foo", TabOrNewlineIgnored, "tabs or newlines are ignored in URLs"), ("http://user@:pass@mozilla.org", UnencodedAtSign, "unencoded @ sign in username or password") ]; for test_case in &data { let violation = Cell::new(None); Url::options() .syntax_violation_callback(Some(&|v| violation.set(Some(v)))) .parse(test_case.0) .unwrap(); let v = violation.take(); assert_eq!(v, Some(test_case.1)); assert_eq!(v.unwrap().description(), test_case.2); assert_eq!(v.unwrap().to_string(), test_case.2); } } #[test] fn test_options_reuse() { use url::SyntaxViolation::*; let violations = RefCell::new(Vec::new()); let vfn = |v| violations.borrow_mut().push(v); let options = Url::options().syntax_violation_callback(Some(&vfn)); let url = options.parse("http:////mozilla.org").unwrap(); let options = options.base_url(Some(&url)); let url = options.parse("/sub\\path").unwrap(); assert_eq!(url.as_str(), "http://mozilla.org/sub/path"); assert_eq!(*violations.borrow(), vec!(ExpectedDoubleSlash, Backslash)); } /// https://github.com/servo/rust-url/issues/505 #[cfg(windows)] #[test] fn test_url_from_file_path() { use std::path::PathBuf; use url::Url; let p = PathBuf::from("c:///"); let u = Url::from_file_path(p).unwrap(); let path = u.to_file_path().unwrap(); assert_eq!("C:\\", path.to_str().unwrap()); } /// https://github.com/servo/rust-url/issues/505 #[cfg(not(windows))] #[test] fn test_url_from_file_path() { use std::path::PathBuf; use url::Url; let p = PathBuf::from("/c:/"); let u = Url::from_file_path(p).unwrap(); let path = u.to_file_path().unwrap(); 
assert_eq!("/c:/", path.to_str().unwrap()); } #[test] fn test_non_special_path() { let mut db_url = url::Url::parse("postgres://postgres@localhost/").unwrap(); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/"); db_url.set_path("diesel_foo"); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/diesel_foo"); assert_eq!(db_url.path(), "/diesel_foo"); } #[test] fn test_non_special_path2() { let mut db_url = url::Url::parse("postgres://postgres@localhost/").unwrap(); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/"); db_url.set_path(""); assert_eq!(db_url.path(), ""); assert_eq!(db_url.as_str(), "postgres://postgres@localhost"); db_url.set_path("foo"); assert_eq!(db_url.path(), "/foo"); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/foo"); db_url.set_path("/bar"); assert_eq!(db_url.path(), "/bar"); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/bar"); } #[test] fn test_non_special_path3() { let mut db_url = url::Url::parse("postgres://postgres@localhost/").unwrap(); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/"); db_url.set_path("/"); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/"); assert_eq!(db_url.path(), "/"); db_url.set_path("/foo"); assert_eq!(db_url.as_str(), "postgres://postgres@localhost/foo"); assert_eq!(db_url.path(), "/foo"); } #[test] fn test_set_scheme_to_file_with_host() { let mut url: Url = "http://localhost:6767/foo/bar".parse().unwrap(); let result = url.set_scheme("file"); assert_eq!(url.to_string(), "http://localhost:6767/foo/bar"); assert_eq!(result, Err(())); } #[test] fn no_panic() { let mut url = Url::parse("arhttpsps:/.//eom/dae.com/\\\\t\\:").unwrap(); url::quirks::set_hostname(&mut url, "//eom/datcom/\\\\t\\://eom/data.cs").unwrap(); } #[test] fn pop_if_empty_in_bounds() { let mut url = Url::parse("m://").unwrap(); let mut segments = url.path_segments_mut().unwrap(); segments.pop_if_empty(); segments.pop(); } #[test] fn test_slicing() { use url::Position::*; #[derive(Default)] struct ExpectedSlices<'a> { full: &'a str, scheme: &'a str, username: &'a str, password: &'a str, host: &'a str, port: &'a str, path: &'a str, query: &'a str, fragment: &'a str, } let data = [ ExpectedSlices { full: "https://user:pass@domain.com:9742/path/file.ext?key=val&key2=val2#fragment", scheme: "https", username: "user", password: "pass", host: "domain.com", port: "9742", path: "/path/file.ext", query: "key=val&key2=val2", fragment: "fragment", }, ExpectedSlices { full: "https://domain.com:9742/path/file.ext#fragment", scheme: "https", host: "domain.com", port: "9742", path: "/path/file.ext", fragment: "fragment", ..Default::default() }, ExpectedSlices { full: "https://domain.com:9742/path/file.ext", scheme: "https", host: "domain.com", port: "9742", path: "/path/file.ext", ..Default::default() }, ExpectedSlices { full: "blob:blob-info", scheme: "blob", path: "blob-info", ..Default::default() }, ]; for expected_slices in &data { let url = Url::parse(expected_slices.full).unwrap(); assert_eq!(&url[..], expected_slices.full); assert_eq!(&url[BeforeScheme..AfterScheme], expected_slices.scheme); assert_eq!( &url[BeforeUsername..AfterUsername], expected_slices.username ); assert_eq!( &url[BeforePassword..AfterPassword], expected_slices.password ); assert_eq!(&url[BeforeHost..AfterHost], expected_slices.host); assert_eq!(&url[BeforePort..AfterPort], expected_slices.port); assert_eq!(&url[BeforePath..AfterPath], expected_slices.path); assert_eq!(&url[BeforeQuery..AfterQuery], expected_slices.query); assert_eq!( 
&url[BeforeFragment..AfterFragment], expected_slices.fragment ); assert_eq!(&url[..AfterFragment], expected_slices.full); } } #[test] fn test_make_relative() { let tests = [ ( "http://127.0.0.1:8080/test", "http://127.0.0.1:8080/test", "", ), ( "http://127.0.0.1:8080/test", "http://127.0.0.1:8080/test/", "test/", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test", "../test", ), ( "http://127.0.0.1:8080/", "http://127.0.0.1:8080/?foo=bar#123", "?foo=bar#123", ), ( "http://127.0.0.1:8080/", "http://127.0.0.1:8080/test/video", "test/video", ), ( "http://127.0.0.1:8080/test", "http://127.0.0.1:8080/test/video", "test/video", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test/video", "video", ), ( "http://127.0.0.1:8080/test", "http://127.0.0.1:8080/test2/video", "test2/video", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test2/video", "../test2/video", ), ( "http://127.0.0.1:8080/test/bla", "http://127.0.0.1:8080/test2/video", "../test2/video", ), ( "http://127.0.0.1:8080/test/bla/", "http://127.0.0.1:8080/test2/video", "../../test2/video", ), ( "http://127.0.0.1:8080/test/?foo=bar#123", "http://127.0.0.1:8080/test/video", "video", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test/video?baz=meh#456", "video?baz=meh#456", ), ( "http://127.0.0.1:8080/test", "http://127.0.0.1:8080/test?baz=meh#456", "?baz=meh#456", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test?baz=meh#456", "../test?baz=meh#456", ), ( "http://127.0.0.1:8080/test/", "http://127.0.0.1:8080/test/?baz=meh#456", "?baz=meh#456", ), ( "http://127.0.0.1:8080/test/?foo=bar#123", "http://127.0.0.1:8080/test/video?baz=meh#456", "video?baz=meh#456", ), ]; for (base, uri, relative) in &tests { let base_uri = url::Url::parse(base).unwrap(); let relative_uri = url::Url::parse(uri).unwrap(); let make_relative = base_uri.make_relative(&relative_uri).unwrap(); assert_eq!( make_relative, *relative, "base: {}, uri: {}, relative: {}", base, uri, relative ); assert_eq!( base_uri.join(&relative).unwrap().as_str(), *uri, "base: {}, uri: {}, relative: {}", base, uri, relative ); } let error_tests = [ ("http://127.0.0.1:8080/", "https://127.0.0.1:8080/test/"), ("http://127.0.0.1:8080/", "http://127.0.0.1:8081/test/"), ("http://127.0.0.1:8080/", "http://127.0.0.2:8080/test/"), ("mailto:a@example.com", "mailto:b@example.com"), ]; for (base, uri) in &error_tests { let base_uri = url::Url::parse(base).unwrap(); let relative_uri = url::Url::parse(uri).unwrap(); let make_relative = base_uri.make_relative(&relative_uri); assert_eq!(make_relative, None, "base: {}, uri: {}", base, uri); } } vendor/url/tests/data.rs0000664000175000017500000001604014160055207016073 0ustar mwhudsonmwhudson// Copyright 2013-2014 The rust-url developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! 
Data-driven tests use std::ops::Deref; use std::str::FromStr; use serde_json::Value; use url::{quirks, Url}; #[test] fn urltestdata() { // Copied form https://github.com/w3c/web-platform-tests/blob/master/url/ let mut json = Value::from_str(include_str!("urltestdata.json")) .expect("JSON parse error in urltestdata.json"); let mut passed = true; for entry in json.as_array_mut().unwrap() { if entry.is_string() { continue; // ignore comments } let base = entry.take_string("base"); let input = entry.take_string("input"); let failure = entry.take_key("failure").is_some(); let base = match Url::parse(&base) { Ok(base) => base, Err(_) if failure => continue, Err(message) => { eprint_failure( format!(" failed: error parsing base {:?}: {}", base, message), &format!("parse base for {:?}", input), None, ); passed = false; continue; } }; let url = match (base.join(&input), failure) { (Ok(url), false) => url, (Err(_), true) => continue, (Err(message), false) => { eprint_failure( format!(" failed: {}", message), &format!("parse URL for {:?}", input), None, ); passed = false; continue; } (Ok(_), true) => { eprint_failure( format!(" failed: expected parse error for URL {:?}", input), &format!("parse URL for {:?}", input), None, ); passed = false; continue; } }; passed &= check_invariants(&url, &format!("invariants for {:?}", input), None); for &attr in ATTRIBS { passed &= test_eq_eprint( entry.take_string(attr), get(&url, attr), &format!("{:?} - {}", input, attr), None, ); } if let Some(expected_origin) = entry.take_key("origin").map(|s| s.string()) { passed &= test_eq_eprint( expected_origin, &quirks::origin(&url), &format!("origin for {:?}", input), None, ); } } assert!(passed) } #[allow(clippy::option_as_ref_deref)] // introduced in 1.40, MSRV is 1.36 #[test] fn setters_tests() { let mut json = Value::from_str(include_str!("setters_tests.json")) .expect("JSON parse error in setters_tests.json"); let mut passed = true; for &attr in ATTRIBS { if attr == "href" { continue; } let mut tests = json.take_key(attr).unwrap(); for mut test in tests.as_array_mut().unwrap().drain(..) 
{ let comment = test.take_key("comment").map(|s| s.string()); let href = test.take_string("href"); let new_value = test.take_string("new_value"); let name = format!("{:?}.{} = {:?}", href, attr, new_value); let mut expected = test.take_key("expected").unwrap(); let mut url = Url::parse(&href).unwrap(); let comment_ref = comment.as_ref().map(|s| s.deref()); passed &= check_invariants(&url, &name, comment_ref); let _ = set(&mut url, attr, &new_value); for attr in ATTRIBS { if let Some(value) = expected.take_key(attr) { passed &= test_eq_eprint(value.string(), get(&url, attr), &name, comment_ref); }; } passed &= check_invariants(&url, &name, comment_ref); } } assert!(passed); } fn check_invariants(url: &Url, name: &str, comment: Option<&str>) -> bool { let mut passed = true; if let Err(e) = url.check_invariants() { passed = false; eprint_failure( format!(" failed: invariants checked -> {:?}", e), name, comment, ); } #[cfg(feature = "serde")] { let bytes = serde_json::to_vec(url).unwrap(); let new_url: Url = serde_json::from_slice(&bytes).unwrap(); passed &= test_eq_eprint(url.to_string(), &new_url.to_string(), name, comment); } passed } trait JsonExt { fn take_key(&mut self, key: &str) -> Option<Value>; fn string(self) -> String; fn take_string(&mut self, key: &str) -> String; } impl JsonExt for Value { fn take_key(&mut self, key: &str) -> Option<Value> { self.as_object_mut().unwrap().remove(key) } fn string(self) -> String { if let Value::String(s) = self { s } else { panic!("Not a Value::String") } } fn take_string(&mut self, key: &str) -> String { self.take_key(key).unwrap().string() } } fn get<'a>(url: &'a Url, attr: &str) -> &'a str { match attr { "href" => quirks::href(url), "protocol" => quirks::protocol(url), "username" => quirks::username(url), "password" => quirks::password(url), "hostname" => quirks::hostname(url), "host" => quirks::host(url), "port" => quirks::port(url), "pathname" => quirks::pathname(url), "search" => quirks::search(url), "hash" => quirks::hash(url), _ => unreachable!(), } } #[allow(clippy::unit_arg)] fn set<'a>(url: &'a mut Url, attr: &str, new: &str) { let _ = match attr { "protocol" => quirks::set_protocol(url, new), "username" => quirks::set_username(url, new), "password" => quirks::set_password(url, new), "hostname" => quirks::set_hostname(url, new), "host" => quirks::set_host(url, new), "port" => quirks::set_port(url, new), "pathname" => Ok(quirks::set_pathname(url, new)), "search" => Ok(quirks::set_search(url, new)), "hash" => Ok(quirks::set_hash(url, new)), _ => unreachable!(), }; } fn test_eq_eprint(expected: String, actual: &str, name: &str, comment: Option<&str>) -> bool { if expected == actual { return true; } eprint_failure( format!("expected: {}\n actual: {}", expected, actual), name, comment, ); false } fn eprint_failure(err: String, name: &str, comment: Option<&str>) { eprintln!(" test: {}\n{}", name, err); if let Some(comment) = comment { eprintln!("{}\n", comment); } else { eprintln!(); } } const ATTRIBS: &[&str] = &[ "href", "protocol", "username", "password", "host", "hostname", "port", "pathname", "search", "hash", ]; vendor/url/tests/setters_tests.json0000664000175000017500000016440014160055207020426 0ustar mwhudsonmwhudson{ "comment": [ "AS OF https://github.com/jsdom/whatwg-url/commit/35f04dfd3048cf6362f4398745bb13375c5020c2", "## Tests for setters of https://url.spec.whatwg.org/#urlutils-members", "", "This file contains a JSON object.", "Other than 'comment', each key is an attribute of the `URL` interface", "defined in WHATWG’s URL Standard.", "The
values are arrays of test case objects for that attribute.", "", "To run a test case for the attribute `attr`:", "", "* Create a new `URL` object with the value for the 'href' key", " the constructor single parameter. (Without a base URL.)", " This must not throw.", "* Set the attribute `attr` to (invoke its setter with)", " with the value of for 'new_value' key.", "* The value for the 'expected' key is another object.", " For each `key` / `value` pair of that object,", " get the attribute `key` (invoke its getter).", " The returned string must be equal to `value`.", "", "Note: the 'href' setter is already covered by urltestdata.json." ], "protocol": [ { "comment": "The empty string is not a valid scheme. Setter leaves the URL unchanged.", "href": "a://example.net", "new_value": "", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "href": "a://example.net", "new_value": "b", "expected": { "href": "b://example.net", "protocol": "b:" } }, { "href": "javascript:alert(1)", "new_value": "defuse", "expected": { "href": "defuse:alert(1)", "protocol": "defuse:" } }, { "comment": "Upper-case ASCII is lower-cased", "href": "a://example.net", "new_value": "B", "expected": { "href": "b://example.net", "protocol": "b:" } }, { "comment": "Non-ASCII is rejected", "href": "a://example.net", "new_value": "é", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "comment": "No leading digit", "href": "a://example.net", "new_value": "0b", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "comment": "No leading punctuation", "href": "a://example.net", "new_value": "+b", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "href": "a://example.net", "new_value": "bC0+-.", "expected": { "href": "bc0+-.://example.net", "protocol": "bc0+-.:" } }, { "comment": "Only some punctuation is acceptable", "href": "a://example.net", "new_value": "b,c", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "comment": "Non-ASCII is rejected", "href": "a://example.net", "new_value": "bé", "expected": { "href": "a://example.net", "protocol": "a:" } }, { "comment": "Can’t switch from URL containing username/password/port to file", "href": "http://test@example.net", "new_value": "file", "expected": { "href": "http://test@example.net/", "protocol": "http:" } }, { "href": "gopher://example.net:1234", "new_value": "file", "expected": { "href": "gopher://example.net:1234", "protocol": "gopher:" } }, { "href": "wss://x:x@example.net:1234", "new_value": "file", "expected": { "href": "wss://x:x@example.net:1234/", "protocol": "wss:" } }, { "comment": "Can’t switch from file URL with no host", "href": "file://localhost/", "new_value": "http", "expected": { "href": "file:///", "protocol": "file:" } }, { "href": "file:///test", "new_value": "gopher", "expected": { "href": "file:///test", "protocol": "file:" } }, { "href": "file:", "new_value": "wss", "expected": { "href": "file:///", "protocol": "file:" } }, { "comment": "Can’t switch from special scheme to non-special", "href": "http://example.net", "new_value": "b", "expected": { "href": "http://example.net/", "protocol": "http:" } }, { "href": "file://hi/path", "new_value": "s", "expected": { "href": "file://hi/path", "protocol": "file:" } }, { "href": "https://example.net", "new_value": "s", "expected": { "href": "https://example.net/", "protocol": "https:" } }, { "href": "ftp://example.net", "new_value": "test", "expected": { "href": "ftp://example.net/", "protocol": "ftp:" } }, { "comment": "Cannot-be-a-base URL 
doesn’t have a host, but URL in a special scheme must.", "href": "mailto:me@example.net", "new_value": "http", "expected": { "href": "mailto:me@example.net", "protocol": "mailto:" } }, { "comment": "Can’t switch from non-special scheme to special", "href": "ssh://me@example.net", "new_value": "http", "expected": { "href": "ssh://me@example.net", "protocol": "ssh:" } }, { "href": "ssh://me@example.net", "new_value": "https", "expected": { "href": "ssh://me@example.net", "protocol": "ssh:" } }, { "href": "ssh://me@example.net", "new_value": "file", "expected": { "href": "ssh://me@example.net", "protocol": "ssh:" } }, { "href": "ssh://example.net", "new_value": "file", "expected": { "href": "ssh://example.net", "protocol": "ssh:" } }, { "href": "nonsense:///test", "new_value": "https", "expected": { "href": "nonsense:///test", "protocol": "nonsense:" } }, { "comment": "Stuff after the first ':' is ignored", "href": "http://example.net", "new_value": "https:foo : bar", "expected": { "href": "https://example.net/", "protocol": "https:" } }, { "comment": "Stuff after the first ':' is ignored", "href": "data:text/html,
Test", "new_value": "view-source+data:foo : bar", "expected": { "href": "view-source+data:text/html,

Test", "protocol": "view-source+data:" } }, { "comment": "Port is set to null if it is the default for new scheme.", "href": "http://foo.com:443/", "new_value": "https", "expected": { "href": "https://foo.com/", "protocol": "https:", "port": "" } } ], "username": [ { "comment": "No host means no username", "href": "file:///home/you/index.html", "new_value": "me", "expected": { "href": "file:///home/you/index.html", "username": "" } }, { "comment": "No host means no username", "href": "unix:/run/foo.socket", "new_value": "me", "expected": { "href": "unix:/run/foo.socket", "username": "" } }, { "comment": "Cannot-be-a-base means no username", "href": "mailto:you@example.net", "new_value": "me", "expected": { "href": "mailto:you@example.net", "username": "" } }, { "href": "javascript:alert(1)", "new_value": "wario", "expected": { "href": "javascript:alert(1)", "username": "" } }, { "href": "http://example.net", "new_value": "me", "expected": { "href": "http://me@example.net/", "username": "me" } }, { "href": "http://:secret@example.net", "new_value": "me", "expected": { "href": "http://me:secret@example.net/", "username": "me" } }, { "href": "http://me@example.net", "new_value": "", "expected": { "href": "http://example.net/", "username": "" } }, { "href": "http://me:secret@example.net", "new_value": "", "expected": { "href": "http://:secret@example.net/", "username": "" } }, { "comment": "UTF-8 percent encoding with the userinfo encode set.", "href": "http://example.net", "new_value": "\u0000\u0001\t\n\r\u001f !\"#$%&'()*+,-./09:;<=>?@AZ[\\]^_`az{|}~\u007f\u0080\u0081Éé", "expected": { "href": "http://%00%01%09%0A%0D%1F%20!%22%23$%&'()*+,-.%2F09%3A%3B%3C%3D%3E%3F%40AZ%5B%5C%5D%5E_%60az%7B%7C%7D~%7F%C2%80%C2%81%C3%89%C3%A9@example.net/", "username": "%00%01%09%0A%0D%1F%20!%22%23$%&'()*+,-.%2F09%3A%3B%3C%3D%3E%3F%40AZ%5B%5C%5D%5E_%60az%7B%7C%7D~%7F%C2%80%C2%81%C3%89%C3%A9" } }, { "comment": "Bytes already percent-encoded are left as-is.", "href": "http://example.net", "new_value": "%c3%89té", "expected": { "href": "http://%c3%89t%C3%A9@example.net/", "username": "%c3%89t%C3%A9" } }, { "href": "sc:///", "new_value": "x", "expected": { "href": "sc:///", "username": "" } }, { "href": "javascript://x/", "new_value": "wario", "expected": { "href": "javascript://wario@x/", "username": "wario" } }, { "href": "file://test/", "new_value": "test", "expected": { "href": "file://test/", "username": "" } } ], "password": [ { "comment": "No host means no password", "href": "file:///home/me/index.html", "new_value": "secret", "expected": { "href": "file:///home/me/index.html", "password": "" } }, { "comment": "No host means no password", "href": "unix:/run/foo.socket", "new_value": "secret", "expected": { "href": "unix:/run/foo.socket", "password": "" } }, { "comment": "Cannot-be-a-base means no password", "href": "mailto:me@example.net", "new_value": "secret", "expected": { "href": "mailto:me@example.net", "password": "" } }, { "href": "http://example.net", "new_value": "secret", "expected": { "href": "http://:secret@example.net/", "password": "secret" } }, { "href": "http://me@example.net", "new_value": "secret", "expected": { "href": "http://me:secret@example.net/", "password": "secret" } }, { "href": "http://:secret@example.net", "new_value": "", "expected": { "href": "http://example.net/", "password": "" } }, { "href": "http://me:secret@example.net", "new_value": "", "expected": { "href": "http://me@example.net/", "password": "" } }, { "comment": "UTF-8 percent encoding with the userinfo encode set.", 
"href": "http://example.net", "new_value": "\u0000\u0001\t\n\r\u001f !\"#$%&'()*+,-./09:;<=>?@AZ[\\]^_`az{|}~\u007f\u0080\u0081Éé", "expected": { "href": "http://:%00%01%09%0A%0D%1F%20!%22%23$%&'()*+,-.%2F09%3A%3B%3C%3D%3E%3F%40AZ%5B%5C%5D%5E_%60az%7B%7C%7D~%7F%C2%80%C2%81%C3%89%C3%A9@example.net/", "password": "%00%01%09%0A%0D%1F%20!%22%23$%&'()*+,-.%2F09%3A%3B%3C%3D%3E%3F%40AZ%5B%5C%5D%5E_%60az%7B%7C%7D~%7F%C2%80%C2%81%C3%89%C3%A9" } }, { "comment": "Bytes already percent-encoded are left as-is.", "href": "http://example.net", "new_value": "%c3%89té", "expected": { "href": "http://:%c3%89t%C3%A9@example.net/", "password": "%c3%89t%C3%A9" } }, { "href": "sc:///", "new_value": "x", "expected": { "href": "sc:///", "password": "" } }, { "href": "javascript://x/", "new_value": "bowser", "expected": { "href": "javascript://:bowser@x/", "password": "bowser" } }, { "href": "file://test/", "new_value": "test", "expected": { "href": "file://test/", "password": "" } } ], "host": [ { "comment": "Non-special scheme", "href": "sc://x/", "new_value": "\u0000", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "href": "sc://x/", "new_value": "\u0009", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "\u000A", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "\u000D", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": " ", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "href": "sc://x/", "new_value": "#", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "/", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "?", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "@", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "href": "sc://x/", "new_value": "ß", "expected": { "href": "sc://%C3%9F/", "host": "%C3%9F", "hostname": "%C3%9F" } }, { "comment": "IDNA Nontransitional_Processing", "href": "https://x/", "new_value": "ß", "expected": { "href": "https://xn--zca/", "host": "xn--zca", "hostname": "xn--zca" } }, { "comment": "Cannot-be-a-base means no host", "href": "mailto:me@example.net", "new_value": "example.com", "expected": { "href": "mailto:me@example.net", "host": "" } }, { "comment": "Cannot-be-a-base means no host", "href": "data:text/plain,Stuff", "new_value": "example.net", "expected": { "href": "data:text/plain,Stuff", "host": "" } }, { "href": "http://example.net", "new_value": "example.com:8080", "expected": { "href": "http://example.com:8080/", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Port number is unchanged if not specified in the new value", "href": "http://example.net:8080", "new_value": "example.com", "expected": { "href": "http://example.com:8080/", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Port number is unchanged if not specified", "href": "http://example.net:8080", "new_value": "example.com:", "expected": { "href": "http://example.com:8080/", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "The empty host is not valid for special schemes", "href": "http://example.net", "new_value": "", "expected": { "href": "http://example.net/", "host": "example.net" } }, { "comment": "The empty host is OK for non-special 
schemes", "href": "view-source+http://example.net/foo", "new_value": "", "expected": { "href": "view-source+http:///foo", "host": "" } }, { "comment": "Path-only URLs can gain a host", "href": "a:/foo", "new_value": "example.net", "expected": { "href": "a://example.net/foo", "host": "example.net" } }, { "comment": "IPv4 address syntax is normalized", "href": "http://example.net", "new_value": "0x7F000001:8080", "expected": { "href": "http://127.0.0.1:8080/", "host": "127.0.0.1:8080", "hostname": "127.0.0.1", "port": "8080" } }, { "comment": "IPv6 address syntax is normalized", "href": "http://example.net", "new_value": "[::0:01]:2", "expected": { "href": "http://[::1]:2/", "host": "[::1]:2", "hostname": "[::1]", "port": "2" } }, { "comment": "IPv6 literal address with port, crbug.com/1012416", "href": "http://example.net", "new_value": "[2001:db8::2]:4002", "expected": { "href": "http://[2001:db8::2]:4002/", "host": "[2001:db8::2]:4002", "hostname": "[2001:db8::2]", "port": "4002" } }, { "comment": "Default port number is removed", "href": "http://example.net", "new_value": "example.com:80", "expected": { "href": "http://example.com/", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Default port number is removed", "href": "https://example.net", "new_value": "example.com:443", "expected": { "href": "https://example.com/", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Default port number is only removed for the relevant scheme", "href": "https://example.net", "new_value": "example.com:80", "expected": { "href": "https://example.com:80/", "host": "example.com:80", "hostname": "example.com", "port": "80" } }, { "comment": "Port number is removed if new port is scheme default and existing URL has a non-default port", "href": "http://example.net:8080", "new_value": "example.com:80", "expected": { "href": "http://example.com/", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a / delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com/stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a / delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com:8080/stuff", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Stuff after a ? delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com?stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a ? 
delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com:8080?stuff", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Stuff after a # delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com#stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a # delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com:8080#stuff", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Stuff after a \\ delimiter is ignored for special schemes", "href": "http://example.net/path", "new_value": "example.com\\stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a \\ delimiter is ignored for special schemes", "href": "http://example.net/path", "new_value": "example.com:8080\\stuff", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "\\ is not a delimiter for non-special schemes, but still forbidden in hosts", "href": "view-source+http://example.net/path", "new_value": "example.com\\stuff", "expected": { "href": "view-source+http://example.net/path", "host": "example.net", "hostname": "example.net", "port": "" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "view-source+http://example.net/path", "new_value": "example.com:8080stuff2", "expected": { "href": "view-source+http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "http://example.net/path", "new_value": "example.com:8080stuff2", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "http://example.net/path", "new_value": "example.com:8080+2", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Port numbers are 16 bit integers", "href": "http://example.net/path", "new_value": "example.com:65535", "expected": { "href": "http://example.com:65535/path", "host": "example.com:65535", "hostname": "example.com", "port": "65535" } }, { "comment": "Port numbers are 16 bit integers, overflowing is an error. 
Hostname is still set, though.", "href": "http://example.net/path", "new_value": "example.com:65536", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Broken IPv6", "href": "http://example.net/", "new_value": "[google.com]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.3.4x]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.3.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "file://y/", "new_value": "x:123", "expected": { "href": "file://y/", "host": "y", "hostname": "y", "port": "" } }, { "href": "file://y/", "new_value": "loc%41lhost", "expected": { "href": "file:///", "host": "", "hostname": "", "port": "" } }, { "href": "sc://test@test/", "new_value": "", "expected": { "href": "sc://test@test/", "host": "test", "hostname": "test", "username": "test" } }, { "href": "sc://test:12/", "new_value": "", "expected": { "href": "sc://test:12/", "host": "test:12", "hostname": "test", "port": "12" } } ], "hostname": [ { "comment": "Non-special scheme", "href": "sc://x/", "new_value": "\u0000", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "href": "sc://x/", "new_value": "\u0009", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "\u000A", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "\u000D", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": " ", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "href": "sc://x/", "new_value": "#", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "/", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "?", "expected": { "href": "sc:///", "host": "", "hostname": "" } }, { "href": "sc://x/", "new_value": "@", "expected": { "href": "sc://x/", "host": "x", "hostname": "x" } }, { "comment": "Cannot-be-a-base means no host", "href": "mailto:me@example.net", "new_value": "example.com", "expected": { "href": "mailto:me@example.net", "host": "" } }, { "comment": "Cannot-be-a-base means no host", "href": "data:text/plain,Stuff", "new_value": "example.net", "expected": { "href": "data:text/plain,Stuff", "host": "" } }, { "href": "http://example.net:8080", "new_value": "example.com", "expected": { "href": "http://example.com:8080/", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "The empty host is not valid for special schemes", "href": "http://example.net", "new_value": "", "expected": { "href": "http://example.net/", "host": "example.net" } }, { "comment": "The empty host is OK for non-special schemes", "href": "view-source+http://example.net/foo", "new_value": "", "expected": { "href": "view-source+http:///foo", "host": "" } }, { "comment": "Path-only URLs can gain a host", "href": "a:/foo", 
"new_value": "example.net", "expected": { "href": "a://example.net/foo", "host": "example.net" } }, { "comment": "IPv4 address syntax is normalized", "href": "http://example.net:8080", "new_value": "0x7F000001", "expected": { "href": "http://127.0.0.1:8080/", "host": "127.0.0.1:8080", "hostname": "127.0.0.1", "port": "8080" } }, { "comment": "IPv6 address syntax is normalized", "href": "http://example.net", "new_value": "[::0:01]", "expected": { "href": "http://[::1]/", "host": "[::1]", "hostname": "[::1]", "port": "" } }, { "comment": "Stuff after a : delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com:8080", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a : delimiter is ignored", "href": "http://example.net:8080/path", "new_value": "example.com:", "expected": { "href": "http://example.com:8080/path", "host": "example.com:8080", "hostname": "example.com", "port": "8080" } }, { "comment": "Stuff after a / delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com/stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a ? delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com?stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a # delimiter is ignored", "href": "http://example.net/path", "new_value": "example.com#stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "Stuff after a \\ delimiter is ignored for special schemes", "href": "http://example.net/path", "new_value": "example.com\\stuff", "expected": { "href": "http://example.com/path", "host": "example.com", "hostname": "example.com", "port": "" } }, { "comment": "\\ is not a delimiter for non-special schemes, but still forbidden in hosts", "href": "view-source+http://example.net/path", "new_value": "example.com\\stuff", "expected": { "href": "view-source+http://example.net/path", "host": "example.net", "hostname": "example.net", "port": "" } }, { "comment": "Broken IPv6", "href": "http://example.net/", "new_value": "[google.com]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.3.4x]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.3.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.2.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "http://example.net/", "new_value": "[::1.]", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net" } }, { "href": "file://y/", "new_value": "x:123", "expected": { "href": "file://y/", "host": "y", "hostname": "y", "port": "" } }, { "href": "file://y/", "new_value": "loc%41lhost", "expected": { "href": "file:///", "host": "", "hostname": "", "port": "" } }, { "href": "sc://test@test/", "new_value": "", "expected": { "href": "sc://test@test/", "host": "test", "hostname": "test", "username": "test" } }, { "href": "sc://test:12/", "new_value": "", 
"expected": { "href": "sc://test:12/", "host": "test:12", "hostname": "test", "port": "12" } } ], "port": [ { "href": "http://example.net", "new_value": "8080", "expected": { "href": "http://example.net:8080/", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Port number is removed if empty is the new value", "href": "http://example.net:8080", "new_value": "", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net", "port": "" } }, { "comment": "Default port number is removed", "href": "http://example.net:8080", "new_value": "80", "expected": { "href": "http://example.net/", "host": "example.net", "hostname": "example.net", "port": "" } }, { "comment": "Default port number is removed", "href": "https://example.net:4433", "new_value": "443", "expected": { "href": "https://example.net/", "host": "example.net", "hostname": "example.net", "port": "" } }, { "comment": "Default port number is only removed for the relevant scheme", "href": "https://example.net", "new_value": "80", "expected": { "href": "https://example.net:80/", "host": "example.net:80", "hostname": "example.net", "port": "80" } }, { "comment": "Stuff after a / delimiter is ignored", "href": "http://example.net/path", "new_value": "8080/stuff", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Stuff after a ? delimiter is ignored", "href": "http://example.net/path", "new_value": "8080?stuff", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Stuff after a # delimiter is ignored", "href": "http://example.net/path", "new_value": "8080#stuff", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Stuff after a \\ delimiter is ignored for special schemes", "href": "http://example.net/path", "new_value": "8080\\stuff", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "view-source+http://example.net/path", "new_value": "8080stuff2", "expected": { "href": "view-source+http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "http://example.net/path", "new_value": "8080stuff2", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Anything other than ASCII digit stops the port parser in a setter but is not an error", "href": "http://example.net/path", "new_value": "8080+2", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "comment": "Port numbers are 16 bit integers", "href": "http://example.net/path", "new_value": "65535", "expected": { "href": "http://example.net:65535/path", "host": "example.net:65535", "hostname": "example.net", "port": "65535" } }, { "comment": "Port numbers are 16 bit integers, overflowing is an error", "href": "http://example.net:8080/path", "new_value": "65536", "expected": { "href": "http://example.net:8080/path", "host": "example.net:8080", "hostname": 
"example.net", "port": "8080" } }, { "comment": "Port numbers are 16 bit integers, overflowing is an error", "href": "non-special://example.net:8080/path", "new_value": "65536", "expected": { "href": "non-special://example.net:8080/path", "host": "example.net:8080", "hostname": "example.net", "port": "8080" } }, { "href": "file://test/", "new_value": "12", "expected": { "href": "file://test/", "port": "" } }, { "href": "file://localhost/", "new_value": "12", "expected": { "href": "file:///", "port": "" } }, { "href": "non-base:value", "new_value": "12", "expected": { "href": "non-base:value", "port": "" } }, { "href": "sc:///", "new_value": "12", "expected": { "href": "sc:///", "port": "" } }, { "href": "sc://x/", "new_value": "12", "expected": { "href": "sc://x:12/", "port": "12" } }, { "href": "javascript://x/", "new_value": "12", "expected": { "href": "javascript://x:12/", "port": "12" } } ], "pathname": [ { "comment": "Cannot-be-a-base don’t have a path", "href": "mailto:me@example.net", "new_value": "/foo", "expected": { "href": "mailto:me@example.net", "pathname": "me@example.net" } }, { "href": "unix:/run/foo.socket?timeout=10", "new_value": "/var/log/../run/bar.socket", "expected": { "href": "unix:/var/run/bar.socket?timeout=10", "pathname": "/var/run/bar.socket" } }, { "href": "https://example.net#nav", "new_value": "home", "expected": { "href": "https://example.net/home#nav", "pathname": "/home" } }, { "href": "https://example.net#nav", "new_value": "../home", "expected": { "href": "https://example.net/home#nav", "pathname": "/home" } }, { "comment": "\\ is a segment delimiter for 'special' URLs", "href": "http://example.net/home?lang=fr#nav", "new_value": "\\a\\%2E\\b\\%2e.\\c", "expected": { "href": "http://example.net/a/c?lang=fr#nav", "pathname": "/a/c" } }, { "comment": "\\ is *not* a segment delimiter for non-'special' URLs", "href": "view-source+http://example.net/home?lang=fr#nav", "new_value": "\\a\\%2E\\b\\%2e.\\c", "expected": { "href": "view-source+http://example.net/\\a\\%2E\\b\\%2e.\\c?lang=fr#nav", "pathname": "/\\a\\%2E\\b\\%2e.\\c" } }, { "comment": "UTF-8 percent encoding with the default encode set. Tabs and newlines are removed.", "href": "a:/", "new_value": "\u0000\u0001\t\n\r\u001f !\"#$%&'()*+,-./09:;<=>?@AZ[\\]^_`az{|}~\u007f\u0080\u0081Éé", "expected": { "href": "a:/%00%01%1F%20!%22%23$%&'()*+,-./09:;%3C=%3E%3F@AZ[\\]^_%60az%7B|%7D~%7F%C2%80%C2%81%C3%89%C3%A9", "pathname": "/%00%01%1F%20!%22%23$%&'()*+,-./09:;%3C=%3E%3F@AZ[\\]^_%60az%7B|%7D~%7F%C2%80%C2%81%C3%89%C3%A9" } }, { "comment": "Bytes already percent-encoded are left as-is, including %2E outside dotted segments.", "href": "http://example.net", "new_value": "%2e%2E%c3%89té", "expected": { "href": "http://example.net/%2e%2E%c3%89t%C3%A9", "pathname": "/%2e%2E%c3%89t%C3%A9" } }, { "comment": "? needs to be encoded", "href": "http://example.net", "new_value": "?", "expected": { "href": "http://example.net/%3F", "pathname": "/%3F" } }, { "comment": "# needs to be encoded", "href": "http://example.net", "new_value": "#", "expected": { "href": "http://example.net/%23", "pathname": "/%23" } }, { "comment": "? 
needs to be encoded, non-special scheme", "href": "sc://example.net", "new_value": "?", "expected": { "href": "sc://example.net/%3F", "pathname": "/%3F" } }, { "comment": "# needs to be encoded, non-special scheme", "href": "sc://example.net", "new_value": "#", "expected": { "href": "sc://example.net/%23", "pathname": "/%23" } }, { "comment": "File URLs and (back)slashes", "href": "file://monkey/", "new_value": "\\\\", "expected": { "href": "file://monkey/", "pathname": "/" } }, { "comment": "File URLs and (back)slashes", "href": "file:///unicorn", "new_value": "//\\/", "expected": { "href": "file:///", "pathname": "/" } }, { "comment": "File URLs and (back)slashes", "href": "file:///unicorn", "new_value": "//monkey/..//", "expected": { "href": "file:///", "pathname": "/" } } ], "search": [ { "href": "https://example.net#nav", "new_value": "lang=fr", "expected": { "href": "https://example.net/?lang=fr#nav", "search": "?lang=fr" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "lang=fr", "expected": { "href": "https://example.net/?lang=fr#nav", "search": "?lang=fr" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "?lang=fr", "expected": { "href": "https://example.net/?lang=fr#nav", "search": "?lang=fr" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "??lang=fr", "expected": { "href": "https://example.net/??lang=fr#nav", "search": "??lang=fr" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "?", "expected": { "href": "https://example.net/?#nav", "search": "" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "", "expected": { "href": "https://example.net/#nav", "search": "" } }, { "href": "https://example.net?lang=en-US", "new_value": "", "expected": { "href": "https://example.net/", "search": "" } }, { "href": "https://example.net", "new_value": "", "expected": { "href": "https://example.net/", "search": "" } }, { "comment": "UTF-8 percent encoding with the query encode set. 
Tabs and newlines are removed.", "href": "a:/", "new_value": "\u0000\u0001\t\n\r\u001f !\"#$%&'()*+,-./09:;<=>?@AZ[\\]^_`az{|}~\u007f\u0080\u0081Éé", "expected": { "href": "a:/?%00%01%1F%20!%22%23$%&'()*+,-./09:;%3C=%3E?@AZ[\\]^_`az{|}~%7F%C2%80%C2%81%C3%89%C3%A9", "search": "?%00%01%1F%20!%22%23$%&'()*+,-./09:;%3C=%3E?@AZ[\\]^_`az{|}~%7F%C2%80%C2%81%C3%89%C3%A9" } }, { "comment": "Bytes already percent-encoded are left as-is", "href": "http://example.net", "new_value": "%c3%89té", "expected": { "href": "http://example.net/?%c3%89t%C3%A9", "search": "?%c3%89t%C3%A9" } } ], "hash": [ { "href": "https://example.net", "new_value": "main", "expected": { "href": "https://example.net/#main", "hash": "#main" } }, { "href": "https://example.net#nav", "new_value": "main", "expected": { "href": "https://example.net/#main", "hash": "#main" } }, { "href": "https://example.net?lang=en-US", "new_value": "##nav", "expected": { "href": "https://example.net/?lang=en-US##nav", "hash": "##nav" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "#main", "expected": { "href": "https://example.net/?lang=en-US#main", "hash": "#main" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "#", "expected": { "href": "https://example.net/?lang=en-US#", "hash": "" } }, { "href": "https://example.net?lang=en-US#nav", "new_value": "", "expected": { "href": "https://example.net/?lang=en-US", "hash": "" } }, { "href": "http://example.net", "new_value": "#foo bar", "expected": { "href": "http://example.net/#foo%20bar", "hash": "#foo%20bar" } }, { "href": "http://example.net", "new_value": "#foo\"bar", "expected": { "href": "http://example.net/#foo%22bar", "hash": "#foo%22bar" } }, { "href": "http://example.net", "new_value": "#foobar", "expected": { "href": "http://example.net/#foo%3Ebar", "hash": "#foo%3Ebar" } }, { "href": "http://example.net", "new_value": "#foo`bar", "expected": { "href": "http://example.net/#foo%60bar", "hash": "#foo%60bar" } }, { "comment": "Simple percent-encoding; tabs and newlines are removed", "href": "a:/", "new_value": "\u0000\u0001\t\n\r\u001f !\"#$%&'()*+,-./09:;<=>?@AZ[\\]^_`az{|}~\u007f\u0080\u0081Éé", "expected": { "href": "a:/#%00%01%1F%20!%22#$%&'()*+,-./09:;%3C=%3E?@AZ[\\]^_%60az{|}~%7F%C2%80%C2%81%C3%89%C3%A9", "hash": "#%00%01%1F%20!%22#$%&'()*+,-./09:;%3C=%3E?@AZ[\\]^_%60az{|}~%7F%C2%80%C2%81%C3%89%C3%A9" } }, { "comment": "Percent-encode NULLs in fragment", "href": "http://example.net", "new_value": "a\u0000b", "expected": { "href": "http://example.net/#a%00b", "hash": "#a%00b" } }, { "comment": "Percent-encode NULLs in fragment", "href": "non-spec:/", "new_value": "a\u0000b", "expected": { "href": "non-spec:/#a%00b", "hash": "#a%00b" } }, { "comment": "Bytes already percent-encoded are left as-is", "href": "http://example.net", "new_value": "%c3%89té", "expected": { "href": "http://example.net/#%c3%89t%C3%A9", "hash": "#%c3%89t%C3%A9" } }, { "href": "javascript:alert(1)", "new_value": "castle", "expected": { "href": "javascript:alert(1)#castle", "hash": "#castle" } } ] } vendor/url/LICENSE-MIT0000664000175000017500000000206014160055207015103 0ustar mwhudsonmwhudsonCopyright (c) 2013-2016 The rust-url developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit 
persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/hex/0000775000175000017500000000000014160055207013433 5ustar mwhudsonmwhudsonvendor/hex/.cargo-checksum.json0000664000175000017500000000013114160055207017272 0ustar mwhudsonmwhudson{"files":{},"package":"7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"}vendor/hex/LICENSE-APACHE0000664000175000017500000002613614160055207015367 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/hex/Cargo.toml0000664000175000017500000000305214160055207015363 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "hex" version = "0.4.3" authors = ["KokaKiwi "] description = "Encoding and decoding data into/from hexadecimal representation." documentation = "https://docs.rs/hex/" readme = "README.md" keywords = ["no_std", "hex"] categories = ["encoding", "no-std"] license = "MIT OR Apache-2.0" repository = "https://github.com/KokaKiwi/rust-hex" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [[bench]] name = "hex" harness = false [dependencies.serde] version = "1.0" optional = true default-features = false [dev-dependencies.criterion] version = "0.3" [dev-dependencies.faster-hex] version = "0.5" [dev-dependencies.pretty_assertions] version = "0.6" [dev-dependencies.rustc-hex] version = "2.1" [dev-dependencies.serde] version = "1.0" features = ["derive"] [dev-dependencies.serde_json] version = "1.0" [dev-dependencies.version-sync] version = "0.9" [features] alloc = [] default = ["std"] std = ["alloc"] [badges.maintenance] status = "actively-developed" vendor/hex/benches/0000775000175000017500000000000014160055207015042 5ustar mwhudsonmwhudsonvendor/hex/benches/hex.rs0000664000175000017500000000353614160055207016203 0ustar mwhudsonmwhudsonuse criterion::{criterion_group, criterion_main, Criterion}; use rustc_hex::{FromHex, ToHex}; const DATA: &[u8] = include_bytes!("../src/lib.rs"); fn bench_encode(c: &mut Criterion) { c.bench_function("hex_encode", |b| b.iter(|| hex::encode(DATA))); c.bench_function("rustc_hex_encode", |b| b.iter(|| DATA.to_hex::())); c.bench_function("faster_hex_encode", |b| { b.iter(|| faster_hex::hex_string(DATA).unwrap()) }); c.bench_function("faster_hex_encode_fallback", |b| { b.iter(|| { let mut dst = vec![0; DATA.len() * 2]; faster_hex::hex_encode_fallback(DATA, &mut dst); dst }) }); } fn bench_decode(c: &mut Criterion) { c.bench_function("hex_decode", |b| { let hex = hex::encode(DATA); b.iter(|| hex::decode(&hex).unwrap()) }); c.bench_function("rustc_hex_decode", |b| { let hex = DATA.to_hex::(); b.iter(|| hex.from_hex::>().unwrap()) }); c.bench_function("faster_hex_decode", move |b| { 
let hex = faster_hex::hex_string(DATA).unwrap(); let len = DATA.len(); let mut dst = vec![0; len]; b.iter(|| faster_hex::hex_decode(hex.as_bytes(), &mut dst).unwrap()) }); c.bench_function("faster_hex_decode_unchecked", |b| { let hex = faster_hex::hex_string(DATA).unwrap(); let len = DATA.len(); let mut dst = vec![0; len]; b.iter(|| faster_hex::hex_decode_unchecked(hex.as_bytes(), &mut dst)) }); c.bench_function("faster_hex_decode_fallback", |b| { let hex = faster_hex::hex_string(DATA).unwrap(); let len = DATA.len(); let mut dst = vec![0; len]; b.iter(|| faster_hex::hex_decode_fallback(hex.as_bytes(), &mut dst)) }); } criterion_group!(benches, bench_encode, bench_decode); criterion_main!(benches); vendor/hex/src/0000775000175000017500000000000014160055207014222 5ustar mwhudsonmwhudsonvendor/hex/src/error.rs0000664000175000017500000000347114160055207015726 0ustar mwhudsonmwhudsonuse core::fmt; /// The error type for decoding a hex string into `Vec` or `[u8; N]`. #[derive(Debug, Clone, Copy, PartialEq)] pub enum FromHexError { /// An invalid character was found. Valid ones are: `0...9`, `a...f` /// or `A...F`. InvalidHexCharacter { c: char, index: usize }, /// A hex string's length needs to be even, as two digits correspond to /// one byte. OddLength, /// If the hex string is decoded into a fixed sized container, such as an /// array, the hex string's length * 2 has to match the container's /// length. InvalidStringLength, } #[cfg(feature = "std")] impl std::error::Error for FromHexError {} impl fmt::Display for FromHexError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { FromHexError::InvalidHexCharacter { c, index } => { write!(f, "Invalid character {:?} at position {}", c, index) } FromHexError::OddLength => write!(f, "Odd number of digits"), FromHexError::InvalidStringLength => write!(f, "Invalid string length"), } } } #[cfg(test)] // this feature flag is here to suppress unused // warnings of `super::*` and `pretty_assertions::assert_eq` #[cfg(feature = "alloc")] mod tests { use super::*; #[cfg(feature = "alloc")] use alloc::string::ToString; use pretty_assertions::assert_eq; #[test] #[cfg(feature = "alloc")] fn test_display() { assert_eq!( FromHexError::InvalidHexCharacter { c: '\n', index: 5 }.to_string(), "Invalid character '\\n' at position 5" ); assert_eq!(FromHexError::OddLength.to_string(), "Odd number of digits"); assert_eq!( FromHexError::InvalidStringLength.to_string(), "Invalid string length" ); } } vendor/hex/src/serde.rs0000664000175000017500000000505014160055207015672 0ustar mwhudsonmwhudson//! Hex encoding with `serde`. #[cfg_attr( all(feature = "alloc", feature = "serde"), doc = r##" # Example ``` use serde::{Serialize, Deserialize}; #[derive(Serialize, Deserialize)] struct Foo { #[serde(with = "hex")] bar: Vec, } ``` "## )] use serde::de::{Error, Visitor}; use serde::Deserializer; #[cfg(feature = "alloc")] use serde::Serializer; #[cfg(feature = "alloc")] use alloc::string::String; use core::fmt; use core::marker::PhantomData; use crate::FromHex; #[cfg(feature = "alloc")] use crate::ToHex; /// Serializes `data` as hex string using uppercase characters. /// /// Apart from the characters' casing, this works exactly like `serialize()`. #[cfg(feature = "alloc")] pub fn serialize_upper(data: T, serializer: S) -> Result where S: Serializer, T: ToHex, { let s = data.encode_hex_upper::(); serializer.serialize_str(&s) } /// Serializes `data` as hex string using lowercase characters. /// /// Lowercase characters are used (e.g. `f9b4ca`). 
The resulting string's length /// is always even, each byte in data is always encoded using two hex digits. /// Thus, the resulting string contains exactly twice as many bytes as the input /// data. #[cfg(feature = "alloc")] pub fn serialize(data: T, serializer: S) -> Result where S: Serializer, T: ToHex, { let s = data.encode_hex::(); serializer.serialize_str(&s) } /// Deserializes a hex string into raw bytes. /// /// Both, upper and lower case characters are valid in the input string and can /// even be mixed (e.g. `f9b4ca`, `F9B4CA` and `f9B4Ca` are all valid strings). pub fn deserialize<'de, D, T>(deserializer: D) -> Result where D: Deserializer<'de>, T: FromHex, ::Error: fmt::Display, { struct HexStrVisitor(PhantomData); impl<'de, T> Visitor<'de> for HexStrVisitor where T: FromHex, ::Error: fmt::Display, { type Value = T; fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "a hex encoded string") } fn visit_str(self, data: &str) -> Result where E: Error, { FromHex::from_hex(data).map_err(Error::custom) } fn visit_borrowed_str(self, data: &'de str) -> Result where E: Error, { FromHex::from_hex(data).map_err(Error::custom) } } deserializer.deserialize_str(HexStrVisitor(PhantomData)) } vendor/hex/src/lib.rs0000664000175000017500000003515514160055207015347 0ustar mwhudsonmwhudson// Copyright (c) 2013-2014 The Rust Project Developers. // Copyright (c) 2015-2020 The rust-hex Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Encoding and decoding hex strings. //! //! For most cases, you can simply use the [`decode`], [`encode`] and //! [`encode_upper`] functions. If you need a bit more control, use the traits //! [`ToHex`] and [`FromHex`] instead. //! //! # Example //! //! ``` //! # #[cfg(not(feature = "alloc"))] //! # let mut output = [0; 0x18]; //! # //! # #[cfg(not(feature = "alloc"))] //! # hex::encode_to_slice(b"Hello world!", &mut output).unwrap(); //! # //! # #[cfg(not(feature = "alloc"))] //! # let hex_string = ::core::str::from_utf8(&output).unwrap(); //! # //! # #[cfg(feature = "alloc")] //! let hex_string = hex::encode("Hello world!"); //! //! println!("{}", hex_string); // Prints "48656c6c6f20776f726c6421" //! //! # assert_eq!(hex_string, "48656c6c6f20776f726c6421"); //! ``` #![doc(html_root_url = "https://docs.rs/hex/0.4.3")] #![cfg_attr(not(feature = "std"), no_std)] #![cfg_attr(docsrs, feature(doc_cfg))] #![allow(clippy::unreadable_literal)] #[cfg(feature = "alloc")] extern crate alloc; #[cfg(feature = "alloc")] use alloc::{string::String, vec::Vec}; use core::iter; mod error; pub use crate::error::FromHexError; #[cfg(feature = "serde")] #[cfg_attr(docsrs, doc(cfg(feature = "serde")))] pub mod serde; #[cfg(feature = "serde")] pub use crate::serde::deserialize; #[cfg(all(feature = "alloc", feature = "serde"))] pub use crate::serde::{serialize, serialize_upper}; /// Encoding values as hex string. /// /// This trait is implemented for all `T` which implement `AsRef<[u8]>`. This /// includes `String`, `str`, `Vec` and `[u8]`. /// /// # Example /// /// ``` /// use hex::ToHex; /// /// println!("{}", "Hello world!".encode_hex::()); /// # assert_eq!("Hello world!".encode_hex::(), "48656c6c6f20776f726c6421".to_string()); /// ``` /// /// *Note*: instead of using this trait, you might want to use [`encode()`]. pub trait ToHex { /// Encode the hex strict representing `self` into the result. 
Lower case /// letters are used (e.g. `f9b4ca`) fn encode_hex>(&self) -> T; /// Encode the hex strict representing `self` into the result. Upper case /// letters are used (e.g. `F9B4CA`) fn encode_hex_upper>(&self) -> T; } const HEX_CHARS_LOWER: &[u8; 16] = b"0123456789abcdef"; const HEX_CHARS_UPPER: &[u8; 16] = b"0123456789ABCDEF"; struct BytesToHexChars<'a> { inner: ::core::slice::Iter<'a, u8>, table: &'static [u8; 16], next: Option, } impl<'a> BytesToHexChars<'a> { fn new(inner: &'a [u8], table: &'static [u8; 16]) -> BytesToHexChars<'a> { BytesToHexChars { inner: inner.iter(), table, next: None, } } } impl<'a> Iterator for BytesToHexChars<'a> { type Item = char; fn next(&mut self) -> Option { match self.next.take() { Some(current) => Some(current), None => self.inner.next().map(|byte| { let current = self.table[(byte >> 4) as usize] as char; self.next = Some(self.table[(byte & 0x0F) as usize] as char); current }), } } fn size_hint(&self) -> (usize, Option) { let length = self.len(); (length, Some(length)) } } impl<'a> iter::ExactSizeIterator for BytesToHexChars<'a> { fn len(&self) -> usize { let mut length = self.inner.len() * 2; if self.next.is_some() { length += 1; } length } } #[inline] fn encode_to_iter>(table: &'static [u8; 16], source: &[u8]) -> T { BytesToHexChars::new(source, table).collect() } impl> ToHex for T { fn encode_hex>(&self) -> U { encode_to_iter(HEX_CHARS_LOWER, self.as_ref()) } fn encode_hex_upper>(&self) -> U { encode_to_iter(HEX_CHARS_UPPER, self.as_ref()) } } /// Types that can be decoded from a hex string. /// /// This trait is implemented for `Vec` and small `u8`-arrays. /// /// # Example /// /// ``` /// use core::str; /// use hex::FromHex; /// /// let buffer = <[u8; 12]>::from_hex("48656c6c6f20776f726c6421")?; /// let string = str::from_utf8(&buffer).expect("invalid buffer length"); /// /// println!("{}", string); // prints "Hello world!" /// # assert_eq!("Hello world!", string); /// # Ok::<(), hex::FromHexError>(()) /// ``` pub trait FromHex: Sized { type Error; /// Creates an instance of type `Self` from the given hex string, or fails /// with a custom error type. /// /// Both, upper and lower case characters are valid and can even be /// mixed (e.g. `f9b4ca`, `F9B4CA` and `f9B4Ca` are all valid strings). fn from_hex>(hex: T) -> Result; } fn val(c: u8, idx: usize) -> Result { match c { b'A'..=b'F' => Ok(c - b'A' + 10), b'a'..=b'f' => Ok(c - b'a' + 10), b'0'..=b'9' => Ok(c - b'0'), _ => Err(FromHexError::InvalidHexCharacter { c: c as char, index: idx, }), } } #[cfg(feature = "alloc")] impl FromHex for Vec { type Error = FromHexError; fn from_hex>(hex: T) -> Result { let hex = hex.as_ref(); if hex.len() % 2 != 0 { return Err(FromHexError::OddLength); } hex.chunks(2) .enumerate() .map(|(i, pair)| Ok(val(pair[0], 2 * i)? << 4 | val(pair[1], 2 * i + 1)?)) .collect() } } // Helper macro to implement the trait for a few fixed sized arrays. Once Rust // has type level integers, this should be removed. macro_rules! from_hex_array_impl { ($($len:expr)+) => {$( impl FromHex for [u8; $len] { type Error = FromHexError; fn from_hex>(hex: T) -> Result { let mut out = [0_u8; $len]; decode_to_slice(hex, &mut out as &mut [u8])?; Ok(out) } } )+} } from_hex_array_impl! 
{ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 160 192 200 224 256 384 512 768 1024 2048 4096 8192 16384 32768 } #[cfg(any(target_pointer_width = "32", target_pointer_width = "64"))] from_hex_array_impl! { 65536 131072 262144 524288 1048576 2097152 4194304 8388608 16777216 33554432 67108864 134217728 268435456 536870912 1073741824 2147483648 } #[cfg(target_pointer_width = "64")] from_hex_array_impl! { 4294967296 } /// Encodes `data` as hex string using lowercase characters. /// /// Lowercase characters are used (e.g. `f9b4ca`). The resulting string's /// length is always even, each byte in `data` is always encoded using two hex /// digits. Thus, the resulting string contains exactly twice as many bytes as /// the input data. /// /// # Example /// /// ``` /// assert_eq!(hex::encode("Hello world!"), "48656c6c6f20776f726c6421"); /// assert_eq!(hex::encode(vec![1, 2, 3, 15, 16]), "0102030f10"); /// ``` #[must_use] #[cfg(feature = "alloc")] pub fn encode>(data: T) -> String { data.encode_hex() } /// Encodes `data` as hex string using uppercase characters. /// /// Apart from the characters' casing, this works exactly like `encode()`. /// /// # Example /// /// ``` /// assert_eq!(hex::encode_upper("Hello world!"), "48656C6C6F20776F726C6421"); /// assert_eq!(hex::encode_upper(vec![1, 2, 3, 15, 16]), "0102030F10"); /// ``` #[must_use] #[cfg(feature = "alloc")] pub fn encode_upper>(data: T) -> String { data.encode_hex_upper() } /// Decodes a hex string into raw bytes. /// /// Both, upper and lower case characters are valid in the input string and can /// even be mixed (e.g. `f9b4ca`, `F9B4CA` and `f9B4Ca` are all valid strings). /// /// # Example /// /// ``` /// assert_eq!( /// hex::decode("48656c6c6f20776f726c6421"), /// Ok("Hello world!".to_owned().into_bytes()) /// ); /// /// assert_eq!(hex::decode("123"), Err(hex::FromHexError::OddLength)); /// assert!(hex::decode("foo").is_err()); /// ``` #[cfg(feature = "alloc")] pub fn decode>(data: T) -> Result, FromHexError> { FromHex::from_hex(data) } /// Decode a hex string into a mutable bytes slice. /// /// Both, upper and lower case characters are valid in the input string and can /// even be mixed (e.g. `f9b4ca`, `F9B4CA` and `f9B4Ca` are all valid strings). /// /// # Example /// /// ``` /// let mut bytes = [0u8; 4]; /// assert_eq!(hex::decode_to_slice("6b697769", &mut bytes as &mut [u8]), Ok(())); /// assert_eq!(&bytes, b"kiwi"); /// ``` pub fn decode_to_slice>(data: T, out: &mut [u8]) -> Result<(), FromHexError> { let data = data.as_ref(); if data.len() % 2 != 0 { return Err(FromHexError::OddLength); } if data.len() / 2 != out.len() { return Err(FromHexError::InvalidStringLength); } for (i, byte) in out.iter_mut().enumerate() { *byte = val(data[2 * i], 2 * i)? << 4 | val(data[2 * i + 1], 2 * i + 1)?; } Ok(()) } // generates an iterator like this // (0, 1) // (2, 3) // (4, 5) // (6, 7) // ... #[inline] fn generate_iter(len: usize) -> impl Iterator { (0..len).step_by(2).zip((0..len).skip(1).step_by(2)) } // the inverse of `val`. 
#[inline] #[must_use] fn byte2hex(byte: u8, table: &[u8; 16]) -> (u8, u8) { let high = table[((byte & 0xf0) >> 4) as usize]; let low = table[(byte & 0x0f) as usize]; (high, low) } /// Encodes some bytes into a mutable slice of bytes. /// /// The output buffer, has to be able to hold at least `input.len() * 2` bytes, /// otherwise this function will return an error. /// /// # Example /// /// ``` /// # use hex::FromHexError; /// # fn main() -> Result<(), FromHexError> { /// let mut bytes = [0u8; 4 * 2]; /// /// hex::encode_to_slice(b"kiwi", &mut bytes)?; /// assert_eq!(&bytes, b"6b697769"); /// # Ok(()) /// # } /// ``` pub fn encode_to_slice>(input: T, output: &mut [u8]) -> Result<(), FromHexError> { if input.as_ref().len() * 2 != output.len() { return Err(FromHexError::InvalidStringLength); } for (byte, (i, j)) in input .as_ref() .iter() .zip(generate_iter(input.as_ref().len() * 2)) { let (high, low) = byte2hex(*byte, HEX_CHARS_LOWER); output[i] = high; output[j] = low; } Ok(()) } #[cfg(test)] mod test { use super::*; #[cfg(feature = "alloc")] use alloc::string::ToString; use pretty_assertions::assert_eq; #[test] #[cfg(feature = "alloc")] fn test_gen_iter() { let result = vec![(0, 1), (2, 3)]; assert_eq!(generate_iter(5).collect::>(), result); } #[test] fn test_encode_to_slice() { let mut output_1 = [0; 4 * 2]; encode_to_slice(b"kiwi", &mut output_1).unwrap(); assert_eq!(&output_1, b"6b697769"); let mut output_2 = [0; 5 * 2]; encode_to_slice(b"kiwis", &mut output_2).unwrap(); assert_eq!(&output_2, b"6b69776973"); let mut output_3 = [0; 100]; assert_eq!( encode_to_slice(b"kiwis", &mut output_3), Err(FromHexError::InvalidStringLength) ); } #[test] fn test_decode_to_slice() { let mut output_1 = [0; 4]; decode_to_slice(b"6b697769", &mut output_1).unwrap(); assert_eq!(&output_1, b"kiwi"); let mut output_2 = [0; 5]; decode_to_slice(b"6b69776973", &mut output_2).unwrap(); assert_eq!(&output_2, b"kiwis"); let mut output_3 = [0; 4]; assert_eq!( decode_to_slice(b"6", &mut output_3), Err(FromHexError::OddLength) ); } #[test] #[cfg(feature = "alloc")] fn test_encode() { assert_eq!(encode("foobar"), "666f6f626172"); } #[test] #[cfg(feature = "alloc")] fn test_decode() { assert_eq!( decode("666f6f626172"), Ok(String::from("foobar").into_bytes()) ); } #[test] #[cfg(feature = "alloc")] pub fn test_from_hex_okay_str() { assert_eq!(Vec::from_hex("666f6f626172").unwrap(), b"foobar"); assert_eq!(Vec::from_hex("666F6F626172").unwrap(), b"foobar"); } #[test] #[cfg(feature = "alloc")] pub fn test_from_hex_okay_bytes() { assert_eq!(Vec::from_hex(b"666f6f626172").unwrap(), b"foobar"); assert_eq!(Vec::from_hex(b"666F6F626172").unwrap(), b"foobar"); } #[test] #[cfg(feature = "alloc")] pub fn test_invalid_length() { assert_eq!(Vec::from_hex("1").unwrap_err(), FromHexError::OddLength); assert_eq!( Vec::from_hex("666f6f6261721").unwrap_err(), FromHexError::OddLength ); } #[test] #[cfg(feature = "alloc")] pub fn test_invalid_char() { assert_eq!( Vec::from_hex("66ag").unwrap_err(), FromHexError::InvalidHexCharacter { c: 'g', index: 3 } ); } #[test] #[cfg(feature = "alloc")] pub fn test_empty() { assert_eq!(Vec::from_hex("").unwrap(), b""); } #[test] #[cfg(feature = "alloc")] pub fn test_from_hex_whitespace() { assert_eq!( Vec::from_hex("666f 6f62617").unwrap_err(), FromHexError::InvalidHexCharacter { c: ' ', index: 4 } ); } #[test] pub fn test_from_hex_array() { assert_eq!( <[u8; 6] as FromHex>::from_hex("666f6f626172"), Ok([0x66, 0x6f, 0x6f, 0x62, 0x61, 0x72]) ); assert_eq!( <[u8; 5] as 
FromHex>::from_hex("666f6f626172"), Err(FromHexError::InvalidStringLength) ); } #[test] #[cfg(feature = "alloc")] fn test_to_hex() { assert_eq!( [0x66, 0x6f, 0x6f, 0x62, 0x61, 0x72].encode_hex::(), "666f6f626172".to_string(), ); assert_eq!( [0x66, 0x6f, 0x6f, 0x62, 0x61, 0x72].encode_hex_upper::(), "666F6F626172".to_string(), ); } } vendor/hex/tests/0000775000175000017500000000000014160055207014575 5ustar mwhudsonmwhudsonvendor/hex/tests/version-number.rs0000664000175000017500000000033614160055207020120 0ustar mwhudsonmwhudson#![allow(non_fmt_panic)] #[test] fn test_readme_deps() { version_sync::assert_markdown_deps_updated!("README.md"); } #[test] fn test_html_root_url() { version_sync::assert_html_root_url_updated!("src/lib.rs"); } vendor/hex/tests/serde.rs0000664000175000017500000000241114160055207016243 0ustar mwhudsonmwhudson#![cfg(all(feature = "serde", feature = "alloc"))] #![allow(clippy::blacklisted_name)] use serde::{Deserialize, Serialize}; #[derive(Debug, PartialEq, Eq, Serialize, Deserialize)] struct Foo { #[serde(with = "hex")] bar: Vec, } #[test] fn serialize() { let foo = Foo { bar: vec![1, 10, 100], }; let ser = serde_json::to_string(&foo).expect("serialization failed"); assert_eq!(ser, r#"{"bar":"010a64"}"#); } #[test] fn deserialize() { let foo = Foo { bar: vec![1, 10, 100], }; let de: Foo = serde_json::from_str(r#"{"bar":"010a64"}"#).expect("deserialization failed"); assert_eq!(de, foo); } #[derive(Debug, PartialEq, Eq, Serialize, Deserialize)] struct Bar { #[serde( serialize_with = "hex::serialize_upper", deserialize_with = "hex::deserialize" )] foo: Vec, } #[test] fn serialize_upper() { let bar = Bar { foo: vec![1, 10, 100], }; let ser = serde_json::to_string(&bar).expect("serialization failed"); assert_eq!(ser, r#"{"foo":"010A64"}"#); } #[test] fn deserialize_upper() { let bar = Bar { foo: vec![1, 10, 100], }; let de: Bar = serde_json::from_str(r#"{"foo":"010A64"}"#).expect("deserialization failed"); assert_eq!(de, bar); } vendor/hex/LICENSE-MIT0000664000175000017500000000214514160055207015071 0ustar mwhudsonmwhudsonCopyright (c) 2013-2014 The Rust Project Developers. Copyright (c) 2015-2020 The rust-hex Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
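The vendored `hex` sources above document four public entry points: the allocating `encode`/`decode` pair and the buffer-based `encode_to_slice`/`decode_to_slice` pair. As a quick illustration of how they fit together, here is a small self-contained sketch; the function names, signatures, and expected outputs are taken from the doc comments and tests above, while the `main` scaffolding is only illustrative:

```rust
// Minimal sketch exercising the hex API documented above.
// `hex::encode`/`hex::decode` allocate; the *_to_slice variants write into
// caller-provided buffers (useful with `default-features = false`).
fn main() -> Result<(), hex::FromHexError> {
    // Allocating round-trip.
    let encoded = hex::encode("Hello world!");
    assert_eq!(encoded, "48656c6c6f20776f726c6421");
    let decoded = hex::decode(&encoded)?;
    assert_eq!(decoded, b"Hello world!");

    // Buffer-based round-trip: the output slice must be exactly twice
    // (for encode) or half (for decode) the input length.
    let mut hex_buf = [0u8; 4 * 2];
    hex::encode_to_slice(b"kiwi", &mut hex_buf)?;
    assert_eq!(&hex_buf, b"6b697769");

    let mut raw_buf = [0u8; 4];
    hex::decode_to_slice(&hex_buf, &mut raw_buf)?;
    assert_eq!(&raw_buf, b"kiwi");

    Ok(())
}
```

Both slice variants return `FromHexError::InvalidStringLength` on a size mismatch and `FromHexError::OddLength` for an odd number of hex digits, matching the error cases exercised in the crate's own tests above.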
vendor/hex/README.md0000664000175000017500000000354314160055207014717 0ustar mwhudsonmwhudson# hex [![Crates.io: hex](https://img.shields.io/crates/v/hex.svg)](https://crates.io/crates/hex) [![Documentation](https://docs.rs/hex/badge.svg)](https://docs.rs/hex) [![Build Status (Github Actions)](https://github.com/KokaKiwi/rust-hex/workflows/Test%20hex/badge.svg?master)](https://github.com/KokaKiwi/rust-hex/actions) Encoding and decoding data into/from hexadecimal representation. ## Examples Encoding a `String` ```rust let hex_string = hex::encode("Hello world!"); println!("{}", hex_string); // Prints "48656c6c6f20776f726c6421" ``` Decoding a `String` ```rust let decoded_string = hex::decode("48656c6c6f20776f726c6421"); println!("{}", decoded_string); // Prints "Hello world!" ``` You can find the [documentation](https://docs.rs/hex) here. ## Installation In order to use this crate, you have to add it under `[dependencies]` to your `Cargo.toml` ```toml [dependencies] hex = "0.4" ``` By default this will import `std`, if you are working in a [`no_std`](https://rust-embedded.github.io/book/intro/no-std.html) environment you can turn this off by adding the following ```toml [dependencies] hex = { version = "0.4", default-features = false } ``` ## Features - `std`: Enabled by default. Add support for Rust's libstd types. - `serde`: Disabled by default. Add support for `serde` de/serializing library. See the `serde` module documentation for usage. ## License Licensed under either of - Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) - MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. vendor/openssl-probe/0000775000175000017500000000000014172417313015442 5ustar mwhudsonmwhudsonvendor/openssl-probe/.cargo-checksum.json0000664000175000017500000000013114172417313021301 0ustar mwhudsonmwhudson{"files":{},"package":"ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf"}vendor/openssl-probe/LICENSE-APACHE0000664000175000017500000002513714160055207017373 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/openssl-probe/Cargo.toml0000664000175000017500000000150114172417313017367 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] name = "openssl-probe" version = "0.1.5" authors = ["Alex Crichton "] description = "Tool for helping to find SSL certificate locations on the system for OpenSSL\n" homepage = "https://github.com/alexcrichton/openssl-probe" readme = "README.md" license = "MIT/Apache-2.0" repository = "https://github.com/alexcrichton/openssl-probe" vendor/openssl-probe/src/0000775000175000017500000000000014172417313016231 5ustar mwhudsonmwhudsonvendor/openssl-probe/src/lib.rs0000664000175000017500000001103014172417313017340 0ustar mwhudsonmwhudsonuse std::env; use std::path::{Path, PathBuf}; /// The OpenSSL environment variable to configure what certificate file to use. pub const ENV_CERT_FILE: &'static str = "SSL_CERT_FILE"; /// The OpenSSL environment variable to configure what certificates directory to use. pub const ENV_CERT_DIR: &'static str = "SSL_CERT_DIR"; pub struct ProbeResult { pub cert_file: Option, pub cert_dir: Option, } /// Probe the system for the directory in which CA certificates should likely be /// found. /// /// This will only search known system locations. pub fn find_certs_dirs() -> Vec { cert_dirs_iter().map(Path::to_path_buf).collect() } // TODO: when we bump to 0.2, make this the `find_certs_dirs` function fn cert_dirs_iter() -> impl Iterator { // see http://gagravarr.org/writing/openssl-certs/others.shtml [ "/var/ssl", "/usr/share/ssl", "/usr/local/ssl", "/usr/local/openssl", "/usr/local/etc/openssl", "/usr/local/share", "/usr/lib/ssl", "/usr/ssl", "/etc/openssl", "/etc/pki/ca-trust/extracted/pem", "/etc/pki/tls", "/etc/ssl", "/etc/certs", "/opt/etc/ssl", // Entware "/data/data/com.termux/files/usr/etc/tls", "/boot/system/data/ssl", ] .iter().map(Path::new).filter(|p| p.exists()) } /// Probe for SSL certificates on the system, then configure the SSL certificate `SSL_CERT_FILE` /// and `SSL_CERT_DIR` environment variables in this process for OpenSSL to use. /// /// Preconfigured values in the environment variables will not be overwritten if the paths they /// point to exist and are accessible. pub fn init_ssl_cert_env_vars() { try_init_ssl_cert_env_vars(); } /// Probe for SSL certificates on the system, then configure the SSL certificate `SSL_CERT_FILE` /// and `SSL_CERT_DIR` environment variables in this process for OpenSSL to use. /// /// Preconfigured values in the environment variables will not be overwritten if the paths they /// point to exist and are accessible. /// /// Returns `true` if any certificate file or directory was found while probing. /// Combine this with `has_ssl_cert_env_vars()` to check whether previously configured environment /// variables are valid. pub fn try_init_ssl_cert_env_vars() -> bool { let ProbeResult { cert_file, cert_dir } = probe(); // we won't be overwriting existing env variables because if they're valid probe() will have // returned them unchanged if let Some(path) = &cert_file { env::set_var(ENV_CERT_FILE, path); } if let Some(path) = &cert_dir { env::set_var(ENV_CERT_DIR, path); } cert_file.is_some() || cert_dir.is_some() } /// Check whether the OpenSSL `SSL_CERT_FILE` and/or `SSL_CERT_DIR` environment variable is /// configured in this process with an existing file or directory. /// /// That being the case would indicate that certificates will be found successfully by OpenSSL. /// /// Returns `true` if either variable is set to an existing file or directory. 
pub fn has_ssl_cert_env_vars() -> bool { let probe = probe_from_env(); probe.cert_file.is_some() || probe.cert_dir.is_some() } fn probe_from_env() -> ProbeResult { let var = |name| { env::var_os(name) .map(PathBuf::from) .filter(|p| p.exists()) }; ProbeResult { cert_file: var(ENV_CERT_FILE), cert_dir: var(ENV_CERT_DIR), } } pub fn probe() -> ProbeResult { let mut result = probe_from_env(); for certs_dir in cert_dirs_iter() { // cert.pem looks to be an openssl 1.0.1 thing, while // certs/ca-certificates.crt appears to be a 0.9.8 thing let cert_filenames = [ "cert.pem", "certs.pem", "ca-bundle.pem", "cacert.pem", "ca-certificates.crt", "certs/ca-certificates.crt", "certs/ca-root-nss.crt", "certs/ca-bundle.crt", "CARootCertificates.pem", "tls-ca-bundle.pem", ]; if result.cert_file.is_none() { result.cert_file = cert_filenames .iter() .map(|fname| certs_dir.join(fname)) .find(|p| p.exists()); } if result.cert_dir.is_none() { let cert_dir = certs_dir.join("certs"); if cert_dir.exists() { result.cert_dir = Some(cert_dir); } } if result.cert_file.is_some() && result.cert_dir.is_some() { break; } } result } vendor/openssl-probe/examples/0000775000175000017500000000000014172417313017260 5ustar mwhudsonmwhudsonvendor/openssl-probe/examples/probe.rs0000664000175000017500000000021514172417313020733 0ustar mwhudsonmwhudsonfn main() { let r = openssl_probe::probe(); println!("cert_dir: {:?}", r.cert_dir); println!("cert_file: {:?}", r.cert_file); } vendor/openssl-probe/LICENSE-MIT0000664000175000017500000000204114160055207017070 0ustar mwhudsonmwhudsonCopyright (c) 2014 Alex Crichton Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/openssl-probe/README.md0000664000175000017500000000146714160055207016726 0ustar mwhudsonmwhudson# openssl-probe Tool for helping to find SSL certificate locations on the system for OpenSSL [![Crates.io](https://img.shields.io/crates/v/openssl-probe.svg?maxAge=2592000)](https://crates.io/crates/openssl-probe) [![docs.rs](https://docs.rs/openssl-probe/badge.svg)](https://docs.rs/openssl-probe/) ## Usage First, add this to your `Cargo.toml`: ```toml [dependencies] openssl-probe = "0.1.2" ``` Then add this to your crate: ```rust extern crate openssl_probe; fn main() { openssl_probe::init_ssl_cert_env_vars(); //... your code } ``` ## License `openssl-probe` is primarily distributed under the terms of both the MIT license and the Apache License (Version 2.0), with portions covered by various BSD-like licenses. See [LICENSE-APACHE](./LICENSE-APACHE), and [LICENSE-MIT](LICENSE-MIT) for details. 
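## Example: checking the probe result

The usage shown above only calls `init_ssl_cert_env_vars()`. The `src/lib.rs` shipped in this vendored copy also exposes `try_init_ssl_cert_env_vars()`, `has_ssl_cert_env_vars()`, and `probe()`; the sketch below combines them to report what was found. This is a non-normative illustration — the warning message and control flow are assumptions, only the three crate functions come from the source above:

```rust
fn main() {
    // Probe known system locations and export SSL_CERT_FILE / SSL_CERT_DIR for
    // this process; returns true if a certificate file or directory was found.
    let found = openssl_probe::try_init_ssl_cert_env_vars();

    // If nothing was found and no valid variables were preconfigured, OpenSSL
    // will likely fail certificate verification later on.
    if !found && !openssl_probe::has_ssl_cert_env_vars() {
        eprintln!("warning: no CA certificate locations detected");
    }

    // `probe()` reports the discovered paths without mutating the environment.
    let result = openssl_probe::probe();
    println!("cert_file: {:?}", result.cert_file);
    println!("cert_dir: {:?}", result.cert_dir);
}
```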
vendor/openssl-probe/Cargo.lock0000664000175000017500000000023514172417313017347 0ustar mwhudsonmwhudson# This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "openssl-probe" version = "0.1.5" vendor/core-foundation/0000775000175000017500000000000014160055207015743 5ustar mwhudsonmwhudsonvendor/core-foundation/.cargo-checksum.json0000664000175000017500000000013114160055207021602 0ustar mwhudsonmwhudson{"files":{},"package":"6888e10551bb93e424d8df1d07f1a8b4fceb0001a3a4b048bfc47554946f47b3"}vendor/core-foundation/LICENSE-APACHE0000664000175000017500000002513714160055207017677 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/core-foundation/Cargo.toml0000664000175000017500000000252014160055207017672 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "core-foundation" version = "0.9.2" authors = ["The Servo Project Developers"] description = "Bindings to Core Foundation for macOS" homepage = "https://github.com/servo/core-foundation-rs" keywords = ["macos", "framework", "objc"] categories = ["os::macos-apis"] license = "MIT / Apache-2.0" repository = "https://github.com/servo/core-foundation-rs" [package.metadata.docs.rs] default-target = "x86_64-apple-darwin" [dependencies.chrono] version = "0.4" optional = true [dependencies.core-foundation-sys] version = "0.8.0" [dependencies.libc] version = "0.2" [dependencies.uuid] version = ">= 0.7, < 0.9" optional = true [features] mac_os_10_7_support = ["core-foundation-sys/mac_os_10_7_support"] mac_os_10_8_features = ["core-foundation-sys/mac_os_10_8_features"] with-chrono = ["chrono"] with-uuid = ["uuid"] vendor/core-foundation/debian/0000775000175000017500000000000014160055207017165 5ustar mwhudsonmwhudsonvendor/core-foundation/debian/patches/0000775000175000017500000000000014160055207020614 5ustar mwhudsonmwhudsonvendor/core-foundation/debian/patches/update-dep-uuid-version.patch0000664000175000017500000000057114160055207026317 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -29,7 +29,7 @@ version = "0.2" [dependencies.uuid] -version = "0.5" +version = ">= 0.7, < 0.9" optional = true [features] --- a/src/uuid.rs +++ b/src/uuid.rs @@ -62,7 +62,7 @@ b.byte14, b.byte15, ]; - Uuid::from_bytes(&bytes).unwrap() + Uuid::from_slice(&bytes).unwrap() } } vendor/core-foundation/debian/patches/series0000664000175000017500000000003614160055207022030 0ustar mwhudsonmwhudsonupdate-dep-uuid-version.patch vendor/core-foundation/src/0000775000175000017500000000000014160055207016532 5ustar mwhudsonmwhudsonvendor/core-foundation/src/mach_port.rs0000664000175000017500000000142414160055207021055 0ustar mwhudsonmwhudsonuse base::TCFType; use core_foundation_sys::base::kCFAllocatorDefault; use runloop::CFRunLoopSource; pub use core_foundation_sys::mach_port::*; declare_TCFType! { /// An immutable numeric value. 
CFMachPort, CFMachPortRef } impl_TCFType!(CFMachPort, CFMachPortRef, CFMachPortGetTypeID); impl_CFTypeDescription!(CFMachPort); impl CFMachPort { pub fn create_runloop_source( &self, order: CFIndex, ) -> Result { unsafe { let runloop_source_ref = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, self.0, order); if runloop_source_ref.is_null() { Err(()) } else { Ok(CFRunLoopSource::wrap_under_create_rule(runloop_source_ref)) } } } } vendor/core-foundation/src/base.rs0000664000175000017500000003255014160055207020017 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std; use std::fmt; use std::marker::PhantomData; use std::mem; use std::mem::ManuallyDrop; use std::ops::{Deref, DerefMut}; use std::os::raw::c_void; pub use core_foundation_sys::base::*; use string::CFString; use ConcreteCFType; pub trait CFIndexConvertible { /// Always use this method to construct a `CFIndex` value. It performs bounds checking to /// ensure the value is in range. fn to_CFIndex(self) -> CFIndex; } impl CFIndexConvertible for usize { #[inline] fn to_CFIndex(self) -> CFIndex { let max_CFIndex = CFIndex::max_value(); if self > (max_CFIndex as usize) { panic!("value out of range") } self as CFIndex } } declare_TCFType!{ /// Superclass of all Core Foundation objects. CFType, CFTypeRef } impl CFType { /// Try to downcast the `CFType` to a subclass. Checking if the instance is the /// correct subclass happens at runtime and `None` is returned if it is not the correct type. /// Works similar to [`Box::downcast`] and [`CFPropertyList::downcast`]. /// /// # Examples /// /// ``` /// # use core_foundation::string::CFString; /// # use core_foundation::boolean::CFBoolean; /// # use core_foundation::base::{CFType, TCFType}; /// # /// // Create a string. /// let string: CFString = CFString::from_static_string("FooBar"); /// // Cast it up to a CFType. /// let cf_type: CFType = string.as_CFType(); /// // Cast it down again. /// assert_eq!(cf_type.downcast::().unwrap().to_string(), "FooBar"); /// // Casting it to some other type will yield `None` /// assert!(cf_type.downcast::().is_none()); /// ``` /// /// ```compile_fail /// # use core_foundation::array::CFArray; /// # use core_foundation::base::TCFType; /// # use core_foundation::boolean::CFBoolean; /// # use core_foundation::string::CFString; /// # /// let boolean_array = CFArray::from_CFTypes(&[CFBoolean::true_value()]).into_CFType(); /// /// // This downcast is not allowed and causes compiler error, since it would cause undefined /// // behavior to access the elements of the array as a CFString: /// let invalid_string_array = boolean_array /// .downcast_into::>() /// .unwrap(); /// ``` /// /// [`Box::downcast`]: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.downcast /// [`CFPropertyList::downcast`]: ../propertylist/struct.CFPropertyList.html#method.downcast #[inline] pub fn downcast(&self) -> Option { if self.instance_of::() { unsafe { let reference = T::Ref::from_void_ptr(self.0); Some(T::wrap_under_get_rule(reference)) } } else { None } } /// Similar to [`downcast`], but consumes self and can thus avoid touching the retain count. 
/// /// [`downcast`]: #method.downcast #[inline] pub fn downcast_into(self) -> Option { if self.instance_of::() { unsafe { let reference = T::Ref::from_void_ptr(self.0); mem::forget(self); Some(T::wrap_under_create_rule(reference)) } } else { None } } } impl fmt::Debug for CFType { /// Formats the value using [`CFCopyDescription`]. /// /// [`CFCopyDescription`]: https://developer.apple.com/documentation/corefoundation/1521252-cfcopydescription?language=objc fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let desc = unsafe { CFString::wrap_under_create_rule(CFCopyDescription(self.0)) }; desc.fmt(f) } } impl Clone for CFType { #[inline] fn clone(&self) -> CFType { unsafe { TCFType::wrap_under_get_rule(self.0) } } } impl PartialEq for CFType { #[inline] fn eq(&self, other: &CFType) -> bool { unsafe { CFEqual(self.as_CFTypeRef(), other.as_CFTypeRef()) != 0 } } } declare_TCFType!(CFAllocator, CFAllocatorRef); impl_TCFType!(CFAllocator, CFAllocatorRef, CFAllocatorGetTypeID); impl CFAllocator { #[inline] pub fn new(mut context: CFAllocatorContext) -> CFAllocator { unsafe { let allocator_ref = CFAllocatorCreate(kCFAllocatorDefault, &mut context); TCFType::wrap_under_create_rule(allocator_ref) } } } /// All Core Foundation types implement this trait. The associated type `Ref` specifies the /// associated Core Foundation type: e.g. for `CFType` this is `CFTypeRef`; for `CFArray` this is /// `CFArrayRef`. /// /// Most structs that implement this trait will do so via the [`impl_TCFType`] macro. /// /// [`impl_TCFType`]: ../macro.impl_TCFType.html pub trait TCFType { /// The reference type wrapped inside this type. type Ref: TCFTypeRef; /// Returns the object as its concrete TypeRef. fn as_concrete_TypeRef(&self) -> Self::Ref; /// Returns an instance of the object, wrapping the underlying `CFTypeRef` subclass. Use this /// when following Core Foundation's "Create Rule". The reference count is *not* bumped. unsafe fn wrap_under_create_rule(obj: Self::Ref) -> Self; /// Returns the type ID for this class. fn type_id() -> CFTypeID; /// Returns the object as a wrapped `CFType`. The reference count is incremented by one. #[inline] fn as_CFType(&self) -> CFType { unsafe { TCFType::wrap_under_get_rule(self.as_CFTypeRef()) } } /// Returns the object as a wrapped `CFType`. Consumes self and avoids changing the reference /// count. #[inline] fn into_CFType(self) -> CFType where Self: Sized, { let reference = self.as_CFTypeRef(); mem::forget(self); unsafe { TCFType::wrap_under_create_rule(reference) } } /// Returns the object as a raw `CFTypeRef`. The reference count is not adjusted. fn as_CFTypeRef(&self) -> CFTypeRef; /// Returns an instance of the object, wrapping the underlying `CFTypeRef` subclass. Use this /// when following Core Foundation's "Get Rule". The reference count *is* bumped. unsafe fn wrap_under_get_rule(reference: Self::Ref) -> Self; /// Returns the reference count of the object. It is unwise to do anything other than test /// whether the return value of this method is greater than zero. #[inline] fn retain_count(&self) -> CFIndex { unsafe { CFGetRetainCount(self.as_CFTypeRef()) } } /// Returns the type ID of this object. #[inline] fn type_of(&self) -> CFTypeID { unsafe { CFGetTypeID(self.as_CFTypeRef()) } } /// Writes a debugging version of this object on standard error. fn show(&self) { unsafe { CFShow(self.as_CFTypeRef()) } } /// Returns true if this value is an instance of another type. 
#[inline] fn instance_of(&self) -> bool { self.type_of() == OtherCFType::type_id() } } impl TCFType for CFType { type Ref = CFTypeRef; #[inline] fn as_concrete_TypeRef(&self) -> CFTypeRef { self.0 } #[inline] unsafe fn wrap_under_get_rule(reference: CFTypeRef) -> CFType { assert!(!reference.is_null(), "Attempted to create a NULL object."); let reference: CFTypeRef = CFRetain(reference); TCFType::wrap_under_create_rule(reference) } #[inline] fn as_CFTypeRef(&self) -> CFTypeRef { self.as_concrete_TypeRef() } #[inline] unsafe fn wrap_under_create_rule(obj: CFTypeRef) -> CFType { assert!(!obj.is_null(), "Attempted to create a NULL object."); CFType(obj) } #[inline] fn type_id() -> CFTypeID { // FIXME(pcwalton): Is this right? 0 } } /// A reference to an element inside a container pub struct ItemRef<'a, T: 'a>(ManuallyDrop, PhantomData<&'a T>); impl<'a, T> Deref for ItemRef<'a, T> { type Target = T; fn deref(&self) -> &T { &self.0 } } impl<'a, T: fmt::Debug> fmt::Debug for ItemRef<'a, T> { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> { self.0.fmt(f) } } impl<'a, T: PartialEq> PartialEq for ItemRef<'a, T> { fn eq(&self, other: &Self) -> bool { self.0.eq(&other.0) } } /// A reference to a mutable element inside a container pub struct ItemMutRef<'a, T: 'a>(ManuallyDrop, PhantomData<&'a T>); impl<'a, T> Deref for ItemMutRef<'a, T> { type Target = T; fn deref(&self) -> &T { &self.0 } } impl<'a, T> DerefMut for ItemMutRef<'a, T> { fn deref_mut(&mut self) -> &mut T { &mut self.0 } } impl<'a, T: fmt::Debug> fmt::Debug for ItemMutRef<'a, T> { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> { self.0.fmt(f) } } impl<'a, T: PartialEq> PartialEq for ItemMutRef<'a, T> { fn eq(&self, other: &Self) -> bool { self.0.eq(&other.0) } } /// A trait describing how to convert from the stored *mut c_void to the desired T pub unsafe trait FromMutVoid { unsafe fn from_mut_void<'a>(x: *mut c_void) -> ItemMutRef<'a, Self> where Self: std::marker::Sized; } unsafe impl FromMutVoid for u32 { unsafe fn from_mut_void<'a>(x: *mut c_void) -> ItemMutRef<'a, Self> { ItemMutRef(ManuallyDrop::new(x as u32), PhantomData) } } unsafe impl FromMutVoid for *const c_void { unsafe fn from_mut_void<'a>(x: *mut c_void) -> ItemMutRef<'a, Self> { ItemMutRef(ManuallyDrop::new(x), PhantomData) } } unsafe impl FromMutVoid for T { unsafe fn from_mut_void<'a>(x: *mut c_void) -> ItemMutRef<'a, Self> { ItemMutRef(ManuallyDrop::new(TCFType::wrap_under_create_rule(T::Ref::from_void_ptr(x))), PhantomData) } } /// A trait describing how to convert from the stored *const c_void to the desired T pub unsafe trait FromVoid { unsafe fn from_void<'a>(x: *const c_void) -> ItemRef<'a, Self> where Self: std::marker::Sized; } unsafe impl FromVoid for u32 { unsafe fn from_void<'a>(x: *const c_void) -> ItemRef<'a, Self> { // Functions like CGFontCopyTableTags treat the void*'s as u32's // so we convert by casting directly ItemRef(ManuallyDrop::new(x as u32), PhantomData) } } unsafe impl FromVoid for *const c_void { unsafe fn from_void<'a>(x: *const c_void) -> ItemRef<'a, Self> { ItemRef(ManuallyDrop::new(x), PhantomData) } } unsafe impl FromVoid for T { unsafe fn from_void<'a>(x: *const c_void) -> ItemRef<'a, Self> { ItemRef(ManuallyDrop::new(TCFType::wrap_under_create_rule(T::Ref::from_void_ptr(x))), PhantomData) } } /// A trait describing how to convert from the stored *const c_void to the desired T pub unsafe trait ToVoid { fn to_void(&self) -> *const c_void; } unsafe impl ToVoid<*const c_void> for *const c_void { fn 
to_void(&self) -> *const c_void { *self } } unsafe impl<'a> ToVoid for &'a CFType { fn to_void(&self) -> *const ::std::os::raw::c_void { self.as_concrete_TypeRef().as_void_ptr() } } unsafe impl ToVoid for CFType { fn to_void(&self) -> *const ::std::os::raw::c_void { self.as_concrete_TypeRef().as_void_ptr() } } unsafe impl ToVoid for CFTypeRef { fn to_void(&self) -> *const ::std::os::raw::c_void { self.as_void_ptr() } } #[cfg(test)] mod tests { use super::*; use std::mem; use boolean::CFBoolean; #[test] fn cftype_instance_of() { let string = CFString::from_static_string("foo"); let cftype = string.as_CFType(); assert!(cftype.instance_of::()); assert!(!cftype.instance_of::()); } #[test] fn as_cftype_retain_count() { let string = CFString::from_static_string("bar"); assert_eq!(string.retain_count(), 1); let cftype = string.as_CFType(); assert_eq!(cftype.retain_count(), 2); mem::drop(string); assert_eq!(cftype.retain_count(), 1); } #[test] fn into_cftype_retain_count() { let string = CFString::from_static_string("bar"); assert_eq!(string.retain_count(), 1); let cftype = string.into_CFType(); assert_eq!(cftype.retain_count(), 1); } #[test] fn as_cftype_and_downcast() { let string = CFString::from_static_string("bar"); let cftype = string.as_CFType(); let string2 = cftype.downcast::().unwrap(); assert_eq!(string2.to_string(), "bar"); assert_eq!(string.retain_count(), 3); assert_eq!(cftype.retain_count(), 3); assert_eq!(string2.retain_count(), 3); } #[test] fn into_cftype_and_downcast_into() { let string = CFString::from_static_string("bar"); let cftype = string.into_CFType(); let string2 = cftype.downcast_into::().unwrap(); assert_eq!(string2.to_string(), "bar"); assert_eq!(string2.retain_count(), 1); } } vendor/core-foundation/src/error.rs0000664000175000017500000000357114160055207020237 0ustar mwhudsonmwhudson// Copyright 2016 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation errors. pub use core_foundation_sys::error::*; use std::error::Error; use std::fmt; use base::{CFIndex, TCFType}; use string::CFString; declare_TCFType!{ /// An error value. CFError, CFErrorRef } impl_TCFType!(CFError, CFErrorRef, CFErrorGetTypeID); impl fmt::Debug for CFError { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("CFError") .field("domain", &self.domain()) .field("code", &self.code()) .field("description", &self.description()) .finish() } } impl fmt::Display for CFError { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { write!(fmt, "{}", self.description()) } } impl Error for CFError { fn description(&self) -> &str { "a Core Foundation error" } } impl CFError { /// Returns a string identifying the domain with which this error is /// associated. pub fn domain(&self) -> CFString { unsafe { let s = CFErrorGetDomain(self.0); CFString::wrap_under_get_rule(s) } } /// Returns the code identifying this type of error. pub fn code(&self) -> CFIndex { unsafe { CFErrorGetCode(self.0) } } /// Returns a human-presentable description of the error. pub fn description(&self) -> CFString { unsafe { let s = CFErrorCopyDescription(self.0); CFString::wrap_under_create_rule(s) } } } vendor/core-foundation/src/runloop.rs0000664000175000017500000001364614160055207020610 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. 
See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![allow(non_upper_case_globals)] pub use core_foundation_sys::runloop::*; use core_foundation_sys::base::CFIndex; use core_foundation_sys::base::{kCFAllocatorDefault, CFOptionFlags}; use core_foundation_sys::string::CFStringRef; use base::{TCFType}; use date::{CFAbsoluteTime, CFTimeInterval}; use filedescriptor::CFFileDescriptor; use string::{CFString}; pub type CFRunLoopMode = CFStringRef; declare_TCFType!(CFRunLoop, CFRunLoopRef); impl_TCFType!(CFRunLoop, CFRunLoopRef, CFRunLoopGetTypeID); impl_CFTypeDescription!(CFRunLoop); impl CFRunLoop { pub fn get_current() -> CFRunLoop { unsafe { let run_loop_ref = CFRunLoopGetCurrent(); TCFType::wrap_under_get_rule(run_loop_ref) } } pub fn get_main() -> CFRunLoop { unsafe { let run_loop_ref = CFRunLoopGetMain(); TCFType::wrap_under_get_rule(run_loop_ref) } } pub fn run_current() { unsafe { CFRunLoopRun(); } } pub fn stop(&self) { unsafe { CFRunLoopStop(self.0); } } pub fn current_mode(&self) -> Option { unsafe { let string_ref = CFRunLoopCopyCurrentMode(self.0); if string_ref.is_null() { return None; } let cf_string: CFString = TCFType::wrap_under_create_rule(string_ref); Some(cf_string.to_string()) } } pub fn contains_timer(&self, timer: &CFRunLoopTimer, mode: CFRunLoopMode) -> bool { unsafe { CFRunLoopContainsTimer(self.0, timer.0, mode) != 0 } } pub fn add_timer(&self, timer: &CFRunLoopTimer, mode: CFRunLoopMode) { unsafe { CFRunLoopAddTimer(self.0, timer.0, mode); } } pub fn remove_timer(&self, timer: &CFRunLoopTimer, mode: CFRunLoopMode) { unsafe { CFRunLoopRemoveTimer(self.0, timer.0, mode); } } pub fn contains_source(&self, source: &CFRunLoopSource, mode: CFRunLoopMode) -> bool { unsafe { CFRunLoopContainsSource(self.0, source.0, mode) != 0 } } pub fn add_source(&self, source: &CFRunLoopSource, mode: CFRunLoopMode) { unsafe { CFRunLoopAddSource(self.0, source.0, mode); } } pub fn remove_source(&self, source: &CFRunLoopSource, mode: CFRunLoopMode) { unsafe { CFRunLoopRemoveSource(self.0, source.0, mode); } } pub fn contains_observer(&self, observer: &CFRunLoopObserver, mode: CFRunLoopMode) -> bool { unsafe { CFRunLoopContainsObserver(self.0, observer.0, mode) != 0 } } pub fn add_observer(&self, observer: &CFRunLoopObserver, mode: CFRunLoopMode) { unsafe { CFRunLoopAddObserver(self.0, observer.0, mode); } } pub fn remove_observer(&self, observer: &CFRunLoopObserver, mode: CFRunLoopMode) { unsafe { CFRunLoopRemoveObserver(self.0, observer.0, mode); } } } declare_TCFType!(CFRunLoopTimer, CFRunLoopTimerRef); impl_TCFType!(CFRunLoopTimer, CFRunLoopTimerRef, CFRunLoopTimerGetTypeID); impl CFRunLoopTimer { pub fn new(fireDate: CFAbsoluteTime, interval: CFTimeInterval, flags: CFOptionFlags, order: CFIndex, callout: CFRunLoopTimerCallBack, context: *mut CFRunLoopTimerContext) -> CFRunLoopTimer { unsafe { let timer_ref = CFRunLoopTimerCreate(kCFAllocatorDefault, fireDate, interval, flags, order, callout, context); TCFType::wrap_under_create_rule(timer_ref) } } } declare_TCFType!(CFRunLoopSource, CFRunLoopSourceRef); impl_TCFType!(CFRunLoopSource, CFRunLoopSourceRef, CFRunLoopSourceGetTypeID); impl CFRunLoopSource { pub fn from_file_descriptor(fd: &CFFileDescriptor, order: CFIndex) -> Option { fd.to_run_loop_source(order) } } declare_TCFType!(CFRunLoopObserver, CFRunLoopObserverRef); 
impl_TCFType!(CFRunLoopObserver, CFRunLoopObserverRef, CFRunLoopObserverGetTypeID); #[cfg(test)] mod test { use super::*; use date::{CFDate, CFAbsoluteTime}; use std::mem; use std::os::raw::c_void; use std::sync::mpsc; #[test] fn wait_200_milliseconds() { let run_loop = CFRunLoop::get_current(); let now = CFDate::now().abs_time(); let (elapsed_tx, elapsed_rx) = mpsc::channel(); let mut info = Info { start_time: now, elapsed_tx, }; let mut context = CFRunLoopTimerContext { version: 0, info: &mut info as *mut _ as *mut c_void, retain: None, release: None, copyDescription: None, }; let run_loop_timer = CFRunLoopTimer::new(now + 0.20f64, 0f64, 0, 0, timer_popped, &mut context); unsafe { run_loop.add_timer(&run_loop_timer, kCFRunLoopDefaultMode); } CFRunLoop::run_current(); let elapsed = elapsed_rx.try_recv().unwrap(); println!("wait_200_milliseconds, elapsed: {}", elapsed); assert!(elapsed > 0.19 && elapsed < 0.35); } struct Info { start_time: CFAbsoluteTime, elapsed_tx: mpsc::Sender, } extern "C" fn timer_popped(_timer: CFRunLoopTimerRef, raw_info: *mut c_void) { let info: *mut Info = unsafe { mem::transmute(raw_info) }; let now = CFDate::now().abs_time(); let elapsed = now - unsafe { (*info).start_time }; let _ = unsafe { (*info).elapsed_tx.send(elapsed) }; CFRunLoop::get_current().stop(); } } vendor/core-foundation/src/set.rs0000664000175000017500000000326114160055207017675 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! An immutable bag of elements. pub use core_foundation_sys::set::*; use core_foundation_sys::base::{CFTypeRef, CFRelease, kCFAllocatorDefault}; use base::{CFIndexConvertible, TCFType}; use std::os::raw::c_void; use std::marker::PhantomData; /// An immutable bag of elements. pub struct CFSet(CFSetRef, PhantomData); impl Drop for CFSet { fn drop(&mut self) { unsafe { CFRelease(self.as_CFTypeRef()) } } } impl_TCFType!(CFSet, CFSetRef, CFSetGetTypeID); impl_CFTypeDescription!(CFSet); impl CFSet { /// Creates a new set from a list of `CFType` instances. pub fn from_slice(elems: &[T]) -> CFSet where T: TCFType { unsafe { let elems: Vec = elems.iter().map(|elem| elem.as_CFTypeRef()).collect(); let set_ref = CFSetCreate(kCFAllocatorDefault, elems.as_ptr(), elems.len().to_CFIndex(), &kCFTypeSetCallBacks); TCFType::wrap_under_create_rule(set_ref) } } } impl CFSet { /// Get the number of elements in the CFSet pub fn len(&self) -> usize { unsafe { CFSetGetCount(self.0) as usize } } } vendor/core-foundation/src/boolean.rs0000664000175000017500000000350314160055207020520 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! A Boolean type. pub use core_foundation_sys::number::{CFBooleanRef, CFBooleanGetTypeID, kCFBooleanTrue, kCFBooleanFalse}; use base::TCFType; declare_TCFType!{ /// A Boolean type. /// /// FIXME(pcwalton): Should be a newtype struct, but that fails due to a Rust compiler bug. 
CFBoolean, CFBooleanRef } impl_TCFType!(CFBoolean, CFBooleanRef, CFBooleanGetTypeID); impl_CFTypeDescription!(CFBoolean); impl CFBoolean { pub fn true_value() -> CFBoolean { unsafe { TCFType::wrap_under_get_rule(kCFBooleanTrue) } } pub fn false_value() -> CFBoolean { unsafe { TCFType::wrap_under_get_rule(kCFBooleanFalse) } } } impl From for CFBoolean { fn from(value: bool) -> CFBoolean { if value { CFBoolean::true_value() } else { CFBoolean::false_value() } } } impl From for bool { fn from(value: CFBoolean) -> bool { value.0 == unsafe { kCFBooleanTrue } } } #[cfg(test)] mod tests { use super::*; #[test] fn to_and_from_bool() { let b_false = CFBoolean::from(false); let b_true = CFBoolean::from(true); assert_ne!(b_false, b_true); assert_eq!(b_false, CFBoolean::false_value()); assert_eq!(b_true, CFBoolean::true_value()); assert!(!bool::from(b_false)); assert!(bool::from(b_true)); } } vendor/core-foundation/src/date.rs0000664000175000017500000000733714160055207020027 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation date objects. pub use core_foundation_sys::date::*; use core_foundation_sys::base::kCFAllocatorDefault; use base::TCFType; #[cfg(feature = "with-chrono")] use chrono::NaiveDateTime; declare_TCFType!{ /// A date. CFDate, CFDateRef } impl_TCFType!(CFDate, CFDateRef, CFDateGetTypeID); impl_CFTypeDescription!(CFDate); impl_CFComparison!(CFDate, CFDateCompare); impl CFDate { #[inline] pub fn new(time: CFAbsoluteTime) -> CFDate { unsafe { let date_ref = CFDateCreate(kCFAllocatorDefault, time); TCFType::wrap_under_create_rule(date_ref) } } #[inline] pub fn now() -> CFDate { CFDate::new(unsafe { CFAbsoluteTimeGetCurrent() }) } #[inline] pub fn abs_time(&self) -> CFAbsoluteTime { unsafe { CFDateGetAbsoluteTime(self.0) } } #[cfg(feature = "with-chrono")] pub fn naive_utc(&self) -> NaiveDateTime { let ts = unsafe { self.abs_time() + kCFAbsoluteTimeIntervalSince1970 }; let (secs, nanos) = if ts.is_sign_positive() { (ts.trunc() as i64, ts.fract()) } else { // nanoseconds can't be negative in NaiveDateTime (ts.trunc() as i64 - 1, 1.0 - ts.fract().abs()) }; NaiveDateTime::from_timestamp(secs, (nanos * 1e9).floor() as u32) } #[cfg(feature = "with-chrono")] pub fn from_naive_utc(time: NaiveDateTime) -> CFDate { let secs = time.timestamp(); let nanos = time.timestamp_subsec_nanos(); let ts = unsafe { secs as f64 + (nanos as f64 / 1e9) - kCFAbsoluteTimeIntervalSince1970 }; CFDate::new(ts) } } #[cfg(test)] mod test { use super::CFDate; use std::cmp::Ordering; #[cfg(feature = "with-chrono")] use chrono::NaiveDateTime; #[cfg(feature = "with-chrono")] fn approx_eq(a: f64, b: f64) -> bool { use std::f64; let same_sign = a.is_sign_positive() == b.is_sign_positive(); let equal = ((a - b).abs() / f64::min(a.abs() + b.abs(), f64::MAX)) < f64::EPSILON; (same_sign && equal) } #[test] fn date_comparison() { let now = CFDate::now(); let past = CFDate::new(now.abs_time() - 1.0); assert_eq!(now.cmp(&past), Ordering::Greater); assert_eq!(now.cmp(&now), Ordering::Equal); assert_eq!(past.cmp(&now), Ordering::Less); } #[test] fn date_equality() { let now = CFDate::now(); let same_time = CFDate::new(now.abs_time()); assert_eq!(now, same_time); } #[test] #[cfg(feature = "with-chrono")] fn date_chrono_conversion_positive() { 
let date = CFDate::now(); let datetime = date.naive_utc(); let converted = CFDate::from_naive_utc(datetime); assert!(approx_eq(date.abs_time(), converted.abs_time())); } #[test] #[cfg(feature = "with-chrono")] fn date_chrono_conversion_negative() { use super::kCFAbsoluteTimeIntervalSince1970; let ts = unsafe { kCFAbsoluteTimeIntervalSince1970 - 420.0 }; let date = CFDate::new(ts); let datetime: NaiveDateTime = date.naive_utc(); let converted = CFDate::from_naive_utc(datetime); assert!(approx_eq(date.abs_time(), converted.abs_time())); } } vendor/core-foundation/src/attributed_string.rs0000664000175000017500000000531314160055207022637 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub use core_foundation_sys::attributed_string::*; use base::TCFType; use core_foundation_sys::base::{CFIndex, CFRange, kCFAllocatorDefault}; use std::ptr::null; use string::{CFString, CFStringRef}; declare_TCFType!{ CFAttributedString, CFAttributedStringRef } impl_TCFType!(CFAttributedString, CFAttributedStringRef, CFAttributedStringGetTypeID); impl CFAttributedString { #[inline] pub fn new(string: &CFString) -> Self { unsafe { let astr_ref = CFAttributedStringCreate( kCFAllocatorDefault, string.as_concrete_TypeRef(), null()); CFAttributedString::wrap_under_create_rule(astr_ref) } } #[inline] pub fn char_len(&self) -> CFIndex { unsafe { CFAttributedStringGetLength(self.0) } } } declare_TCFType!{ CFMutableAttributedString, CFMutableAttributedStringRef } impl_TCFType!(CFMutableAttributedString, CFMutableAttributedStringRef, CFAttributedStringGetTypeID); impl CFMutableAttributedString { #[inline] pub fn new() -> Self { unsafe { let astr_ref = CFAttributedStringCreateMutable( kCFAllocatorDefault, 0); CFMutableAttributedString::wrap_under_create_rule(astr_ref) } } #[inline] pub fn char_len(&self) -> CFIndex { unsafe { CFAttributedStringGetLength(self.0) } } #[inline] pub fn replace_str(&mut self, string: &CFString, range: CFRange) { unsafe { CFAttributedStringReplaceString( self.0, range, string.as_concrete_TypeRef()); } } #[inline] pub fn set_attribute(&mut self, range: CFRange, name: CFStringRef, value: &T) { unsafe { CFAttributedStringSetAttribute( self.0, range, name, value.as_CFTypeRef()); } } } impl Default for CFMutableAttributedString { fn default() -> Self { Self::new() } } #[cfg(test)] mod tests { use super::*; #[test] fn attributed_string_type_id_comparison() { // CFMutableAttributedString TypeID must be equal to CFAttributedString TypeID. // Compilation must not fail. assert_eq!(::type_id(), ::type_id()); } }vendor/core-foundation/src/bundle.rs0000664000175000017500000001367114160055207020361 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! 
Core Foundation Bundle Type use core_foundation_sys::base::kCFAllocatorDefault; pub use core_foundation_sys::bundle::*; use core_foundation_sys::url::kCFURLPOSIXPathStyle; use std::path::PathBuf; use base::{CFType, TCFType}; use url::CFURL; use dictionary::CFDictionary; use std::os::raw::c_void; use string::CFString; declare_TCFType!{ /// A Bundle type. CFBundle, CFBundleRef } impl_TCFType!(CFBundle, CFBundleRef, CFBundleGetTypeID); impl CFBundle { pub fn new(bundleURL: CFURL) -> Option { unsafe { let bundle_ref = CFBundleCreate(kCFAllocatorDefault, bundleURL.as_concrete_TypeRef()); if bundle_ref.is_null() { None } else { Some(TCFType::wrap_under_create_rule(bundle_ref)) } } } pub fn bundle_with_identifier(identifier: CFString) -> Option { unsafe { let bundle_ref = CFBundleGetBundleWithIdentifier(identifier.as_concrete_TypeRef()); if bundle_ref.is_null() { None } else { Some(TCFType::wrap_under_get_rule(bundle_ref)) } } } pub fn function_pointer_for_name(&self, function_name: CFString) -> *const c_void { unsafe { CFBundleGetFunctionPointerForName(self.as_concrete_TypeRef(), function_name.as_concrete_TypeRef()) } } pub fn main_bundle() -> CFBundle { unsafe { let bundle_ref = CFBundleGetMainBundle(); TCFType::wrap_under_get_rule(bundle_ref) } } pub fn info_dictionary(&self) -> CFDictionary { unsafe { let info_dictionary = CFBundleGetInfoDictionary(self.0); TCFType::wrap_under_get_rule(info_dictionary) } } pub fn executable_url(&self) -> Option { unsafe { let exe_url = CFBundleCopyExecutableURL(self.0); if exe_url.is_null() { None } else { Some(TCFType::wrap_under_create_rule(exe_url)) } } } /// Bundle's own location pub fn bundle_url(&self) -> Option { unsafe { let bundle_url = CFBundleCopyBundleURL(self.0); if bundle_url.is_null() { None } else { Some(TCFType::wrap_under_create_rule(bundle_url)) } } } /// Bundle's own location pub fn path(&self) -> Option { let url = self.bundle_url()?; Some(PathBuf::from(url.get_file_system_path(kCFURLPOSIXPathStyle).to_string())) } /// Bundle's resources location pub fn bundle_resources_url(&self) -> Option { unsafe { let bundle_url = CFBundleCopyResourcesDirectoryURL(self.0); if bundle_url.is_null() { None } else { Some(TCFType::wrap_under_create_rule(bundle_url)) } } } /// Bundle's resources location pub fn resources_path(&self) -> Option { let url = self.bundle_resources_url()?; Some(PathBuf::from(url.get_file_system_path(kCFURLPOSIXPathStyle).to_string())) } pub fn private_frameworks_url(&self) -> Option { unsafe { let fw_url = CFBundleCopyPrivateFrameworksURL(self.0); if fw_url.is_null() { None } else { Some(TCFType::wrap_under_create_rule(fw_url)) } } } pub fn shared_support_url(&self) -> Option { unsafe { let fw_url = CFBundleCopySharedSupportURL(self.0); if fw_url.is_null() { None } else { Some(TCFType::wrap_under_create_rule(fw_url)) } } } } #[test] fn safari_executable_url() { use string::CFString; use url::{CFURL, kCFURLPOSIXPathStyle}; let cfstr_path = CFString::from_static_string("/Applications/Safari.app"); let cfurl_path = CFURL::from_file_system_path(cfstr_path, kCFURLPOSIXPathStyle, true); let cfurl_executable = CFBundle::new(cfurl_path) .expect("Safari not present") .executable_url(); assert!(cfurl_executable.is_some()); assert_eq!(cfurl_executable .unwrap() .absolute() .get_file_system_path(kCFURLPOSIXPathStyle) .to_string(), "/Applications/Safari.app/Contents/MacOS/Safari"); } #[test] fn safari_private_frameworks_url() { use string::CFString; use url::{CFURL, kCFURLPOSIXPathStyle}; let cfstr_path = 
CFString::from_static_string("/Applications/Safari.app"); let cfurl_path = CFURL::from_file_system_path(cfstr_path, kCFURLPOSIXPathStyle, true); let cfurl_executable = CFBundle::new(cfurl_path) .expect("Safari not present") .private_frameworks_url(); assert!(cfurl_executable.is_some()); assert_eq!(cfurl_executable .unwrap() .absolute() .get_file_system_path(kCFURLPOSIXPathStyle) .to_string(), "/Applications/Safari.app/Contents/Frameworks"); } #[test] fn non_existant_bundle() { use string::CFString; use url::{CFURL, kCFURLPOSIXPathStyle}; let cfstr_path = CFString::from_static_string("/usr/local/foo"); let cfurl_path = CFURL::from_file_system_path(cfstr_path, kCFURLPOSIXPathStyle, true); assert!(CFBundle::new(cfurl_path).is_none()); } vendor/core-foundation/src/timezone.rs0000664000175000017500000000564314160055207020742 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation time zone objects. pub use core_foundation_sys::timezone::*; use core_foundation_sys::base::kCFAllocatorDefault; use base::TCFType; use date::{CFDate, CFTimeInterval}; use string::CFString; #[cfg(feature = "with-chrono")] use chrono::{FixedOffset, NaiveDateTime}; declare_TCFType!{ /// A time zone. CFTimeZone, CFTimeZoneRef } impl_TCFType!(CFTimeZone, CFTimeZoneRef, CFTimeZoneGetTypeID); impl_CFTypeDescription!(CFTimeZone); impl Default for CFTimeZone { fn default() -> CFTimeZone { unsafe { let tz_ref = CFTimeZoneCopyDefault(); TCFType::wrap_under_create_rule(tz_ref) } } } impl CFTimeZone { #[inline] pub fn new(interval: CFTimeInterval) -> CFTimeZone { unsafe { let tz_ref = CFTimeZoneCreateWithTimeIntervalFromGMT(kCFAllocatorDefault, interval); TCFType::wrap_under_create_rule(tz_ref) } } #[inline] pub fn system() -> CFTimeZone { unsafe { let tz_ref = CFTimeZoneCopySystem(); TCFType::wrap_under_create_rule(tz_ref) } } pub fn seconds_from_gmt(&self, date: CFDate) -> CFTimeInterval { unsafe { CFTimeZoneGetSecondsFromGMT(self.0, date.abs_time()) } } #[cfg(feature = "with-chrono")] pub fn offset_at_date(&self, date: NaiveDateTime) -> FixedOffset { let date = CFDate::from_naive_utc(date); FixedOffset::east(self.seconds_from_gmt(date) as i32) } #[cfg(feature = "with-chrono")] pub fn from_offset(offset: FixedOffset) -> CFTimeZone { CFTimeZone::new(offset.local_minus_utc() as f64) } /// The timezone database ID that identifies the time zone. E.g. "America/Los_Angeles" or /// "Europe/Paris". pub fn name(&self) -> CFString { unsafe { CFString::wrap_under_get_rule(CFTimeZoneGetName(self.0)) } } } #[cfg(test)] mod test { use super::CFTimeZone; #[cfg(feature = "with-chrono")] use chrono::{NaiveDateTime, FixedOffset}; #[test] fn timezone_comparison() { let system = CFTimeZone::system(); let default = CFTimeZone::default(); assert_eq!(system, default); } #[test] #[cfg(feature = "with-chrono")] fn timezone_chrono_conversion() { let offset = FixedOffset::west(28800); let tz = CFTimeZone::from_offset(offset); let converted = tz.offset_at_date(NaiveDateTime::from_timestamp(0, 0)); assert_eq!(offset, converted); } } vendor/core-foundation/src/data.rs0000664000175000017500000001024614160055207020014 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation byte buffers. pub use core_foundation_sys::data::*; use core_foundation_sys::base::CFIndex; use core_foundation_sys::base::{kCFAllocatorDefault}; use std::ops::Deref; use std::slice; use std::sync::Arc; use base::{CFIndexConvertible, TCFType}; declare_TCFType!{ /// A byte buffer. CFData, CFDataRef } impl_TCFType!(CFData, CFDataRef, CFDataGetTypeID); impl_CFTypeDescription!(CFData); impl CFData { /// Creates a CFData around a copy `buffer` pub fn from_buffer(buffer: &[u8]) -> CFData { unsafe { let data_ref = CFDataCreate(kCFAllocatorDefault, buffer.as_ptr(), buffer.len().to_CFIndex()); TCFType::wrap_under_create_rule(data_ref) } } /// Creates a CFData referencing `buffer` without creating a copy pub fn from_arc + Sync + Send>(buffer: Arc) -> Self { use std::os::raw::c_void; use crate::base::{CFAllocator, CFAllocatorContext}; unsafe { let ptr = (*buffer).as_ref().as_ptr() as *const _; let len = (*buffer).as_ref().len().to_CFIndex(); let info = Arc::into_raw(buffer) as *mut c_void; extern "C" fn deallocate(_: *mut c_void, info: *mut c_void) { unsafe { drop(Arc::from_raw(info as *mut T)); } } // Use a separate allocator for each allocation because // we need `info` to do the deallocation vs. `ptr` let allocator = CFAllocator::new(CFAllocatorContext { info, version: 0, retain: None, reallocate: None, release: None, copyDescription: None, allocate: None, deallocate: Some(deallocate::), preferredSize: None, }); let data_ref = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, ptr, len, allocator.as_CFTypeRef()); TCFType::wrap_under_create_rule(data_ref) } } /// Returns a pointer to the underlying bytes in this data. Note that this byte buffer is /// read-only. #[inline] pub fn bytes<'a>(&'a self) -> &'a [u8] { unsafe { slice::from_raw_parts(CFDataGetBytePtr(self.0), self.len() as usize) } } /// Returns the length of this byte buffer. #[inline] pub fn len(&self) -> CFIndex { unsafe { CFDataGetLength(self.0) } } } impl Deref for CFData { type Target = [u8]; #[inline] fn deref(&self) -> &[u8] { self.bytes() } } #[cfg(test)] mod test { use super::CFData; use std::sync::Arc; #[test] fn test_data_provider() { let l = vec![5]; CFData::from_arc(Arc::new(l)); let l = vec![5]; CFData::from_arc(Arc::new(l.into_boxed_slice())); // Make sure the buffer is actually dropped use std::sync::atomic::{AtomicBool, Ordering::SeqCst}; struct VecWrapper { inner: Vec, dropped: Arc, } impl Drop for VecWrapper { fn drop(&mut self) { self.dropped.store(true, SeqCst) } } impl std::convert::AsRef<[u8]> for VecWrapper { fn as_ref(&self) -> &[u8] { &self.inner } } let dropped = Arc::new(AtomicBool::default()); let l = Arc::new(VecWrapper {inner: vec![5], dropped: dropped.clone() }); let m = l.clone(); let dp = CFData::from_arc(l); drop(m); assert!(!dropped.load(SeqCst)); drop(dp); assert!(dropped.load(SeqCst)) } } vendor/core-foundation/src/uuid.rs0000664000175000017500000000532514160055207020053 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation UUID objects. 
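//!
//! A minimal usage sketch (CFUUIDs are randomly generated, so only reflexive equality is asserted here):
//!
//! ```
//! # use core_foundation::uuid::CFUUID;
//! let id = CFUUID::new();
//! assert_eq!(id, id.clone());
//! ```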
#[cfg(feature = "with-uuid")] extern crate uuid; pub use core_foundation_sys::uuid::*; use core_foundation_sys::base::kCFAllocatorDefault; use base::TCFType; #[cfg(feature = "with-uuid")] use self::uuid::Uuid; declare_TCFType! { /// A UUID. CFUUID, CFUUIDRef } impl_TCFType!(CFUUID, CFUUIDRef, CFUUIDGetTypeID); impl_CFTypeDescription!(CFUUID); impl CFUUID { #[inline] pub fn new() -> CFUUID { unsafe { let uuid_ref = CFUUIDCreate(kCFAllocatorDefault); TCFType::wrap_under_create_rule(uuid_ref) } } } impl Default for CFUUID { fn default() -> Self { Self::new() } } #[cfg(feature = "with-uuid")] impl Into for CFUUID { fn into(self) -> Uuid { let b = unsafe { CFUUIDGetUUIDBytes(self.0) }; let bytes = [ b.byte0, b.byte1, b.byte2, b.byte3, b.byte4, b.byte5, b.byte6, b.byte7, b.byte8, b.byte9, b.byte10, b.byte11, b.byte12, b.byte13, b.byte14, b.byte15, ]; Uuid::from_slice(&bytes).unwrap() } } #[cfg(feature = "with-uuid")] impl From for CFUUID { fn from(uuid: Uuid) -> CFUUID { let b = uuid.as_bytes(); let bytes = CFUUIDBytes { byte0: b[0], byte1: b[1], byte2: b[2], byte3: b[3], byte4: b[4], byte5: b[5], byte6: b[6], byte7: b[7], byte8: b[8], byte9: b[9], byte10: b[10], byte11: b[11], byte12: b[12], byte13: b[13], byte14: b[14], byte15: b[15], }; unsafe { let uuid_ref = CFUUIDCreateFromUUIDBytes(kCFAllocatorDefault, bytes); TCFType::wrap_under_create_rule(uuid_ref) } } } #[cfg(test)] #[cfg(feature = "with-uuid")] mod test { use super::CFUUID; use uuid::Uuid; #[test] fn uuid_conversion() { let cf_uuid = CFUUID::new(); let uuid: Uuid = cf_uuid.clone().into(); let converted = CFUUID::from(uuid); assert_eq!(cf_uuid, converted); } } vendor/core-foundation/src/array.rs0000664000175000017500000002062114160055207020217 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Heterogeneous immutable arrays. pub use core_foundation_sys::array::*; pub use core_foundation_sys::base::CFIndex; use core_foundation_sys::base::{CFTypeRef, CFRelease, kCFAllocatorDefault}; use std::mem; use std::marker::PhantomData; use std::os::raw::c_void; use std::ptr; use ConcreteCFType; use base::{CFIndexConvertible, TCFType, CFRange}; use base::{FromVoid, ItemRef}; /// A heterogeneous immutable array. pub struct CFArray(CFArrayRef, PhantomData); impl Drop for CFArray { fn drop(&mut self) { unsafe { CFRelease(self.as_CFTypeRef()) } } } pub struct CFArrayIterator<'a, T: 'a> { array: &'a CFArray, index: CFIndex, len: CFIndex, } impl<'a, T: FromVoid> Iterator for CFArrayIterator<'a, T> { type Item = ItemRef<'a, T>; fn next(&mut self) -> Option> { if self.index >= self.len { None } else { let value = unsafe { self.array.get_unchecked(self.index) }; self.index += 1; Some(value) } } } impl<'a, T: FromVoid> ExactSizeIterator for CFArrayIterator<'a, T> { fn len(&self) -> usize { (self.array.len() - self.index) as usize } } impl_TCFType!(CFArray, CFArrayRef, CFArrayGetTypeID); impl_CFTypeDescription!(CFArray); unsafe impl ConcreteCFType for CFArray<*const c_void> {} impl CFArray { /// Creates a new `CFArray` with the given elements, which must implement `Copy`. 
pub fn from_copyable(elems: &[T]) -> CFArray where T: Copy { unsafe { let array_ref = CFArrayCreate(kCFAllocatorDefault, elems.as_ptr() as *const *const c_void, elems.len().to_CFIndex(), ptr::null()); TCFType::wrap_under_create_rule(array_ref) } } /// Creates a new `CFArray` with the given elements, which must be `CFType` objects. pub fn from_CFTypes(elems: &[T]) -> CFArray where T: TCFType { unsafe { let elems: Vec = elems.iter().map(|elem| elem.as_CFTypeRef()).collect(); let array_ref = CFArrayCreate(kCFAllocatorDefault, elems.as_ptr(), elems.len().to_CFIndex(), &kCFTypeArrayCallBacks); TCFType::wrap_under_create_rule(array_ref) } } #[inline] pub fn to_untyped(&self) -> CFArray { unsafe { CFArray::wrap_under_get_rule(self.0) } } /// Returns the same array, but with the type reset to void pointers. /// Equal to `to_untyped`, but is faster since it does not increment the retain count. #[inline] pub fn into_untyped(self) -> CFArray { let reference = self.0; mem::forget(self); unsafe { CFArray::wrap_under_create_rule(reference) } } /// Iterates over the elements of this `CFArray`. /// /// Careful; the loop body must wrap the reference properly. Generally, when array elements are /// Core Foundation objects (not always true), they need to be wrapped with /// `TCFType::wrap_under_get_rule()`. #[inline] pub fn iter<'a>(&'a self) -> CFArrayIterator<'a, T> { CFArrayIterator { array: self, index: 0, len: self.len(), } } #[inline] pub fn len(&self) -> CFIndex { unsafe { CFArrayGetCount(self.0) } } #[inline] pub unsafe fn get_unchecked<'a>(&'a self, index: CFIndex) -> ItemRef<'a, T> where T: FromVoid { T::from_void(CFArrayGetValueAtIndex(self.0, index)) } #[inline] pub fn get<'a>(&'a self, index: CFIndex) -> Option> where T: FromVoid { if index < self.len() { Some(unsafe { T::from_void(CFArrayGetValueAtIndex(self.0, index)) } ) } else { None } } pub fn get_values(&self, range: CFRange) -> Vec<*const c_void> { let mut vec = Vec::with_capacity(range.length as usize); unsafe { CFArrayGetValues(self.0, range, vec.as_mut_ptr()); vec.set_len(range.length as usize); vec } } pub fn get_all_values(&self) -> Vec<*const c_void> { self.get_values(CFRange { location: 0, length: self.len() }) } } impl<'a, T: FromVoid> IntoIterator for &'a CFArray { type Item = ItemRef<'a, T>; type IntoIter = CFArrayIterator<'a, T>; fn into_iter(self) -> CFArrayIterator<'a, T> { self.iter() } } #[cfg(test)] mod tests { use super::*; use std::mem; use base::CFType; #[test] fn to_untyped_correct_retain_count() { let array = CFArray::::from_CFTypes(&[]); assert_eq!(array.retain_count(), 1); let untyped_array = array.to_untyped(); assert_eq!(array.retain_count(), 2); assert_eq!(untyped_array.retain_count(), 2); mem::drop(array); assert_eq!(untyped_array.retain_count(), 1); } #[test] fn into_untyped() { let array = CFArray::::from_CFTypes(&[]); let array2 = array.to_untyped(); assert_eq!(array.retain_count(), 2); let untyped_array = array.into_untyped(); assert_eq!(untyped_array.retain_count(), 2); mem::drop(array2); assert_eq!(untyped_array.retain_count(), 1); } #[test] fn borrow() { use string::CFString; let string = CFString::from_static_string("bar"); assert_eq!(string.retain_count(), 1); let x; { let arr: CFArray = CFArray::from_CFTypes(&[string]); { let p = arr.get(0).unwrap(); assert_eq!(p.retain_count(), 1); } { x = arr.get(0).unwrap().clone(); assert_eq!(x.retain_count(), 2); assert_eq!(x.to_string(), "bar"); } } assert_eq!(x.retain_count(), 1); } #[test] fn iter_untyped_array() { use string::{CFString, CFStringRef}; use 
base::TCFTypeRef; let cf_string = CFString::from_static_string("bar"); let array: CFArray = CFArray::from_CFTypes(&[cf_string.clone()]).into_untyped(); let cf_strings = array.iter().map(|ptr| { unsafe { CFString::wrap_under_get_rule(CFStringRef::from_void_ptr(*ptr)) } }).collect::>(); let strings = cf_strings.iter().map(|s| s.to_string()).collect::>(); assert_eq!(cf_string.retain_count(), 3); assert_eq!(&strings[..], &["bar"]); } #[test] fn should_box_and_unbox() { use number::CFNumber; let n0 = CFNumber::from(0); let n1 = CFNumber::from(1); let n2 = CFNumber::from(2); let n3 = CFNumber::from(3); let n4 = CFNumber::from(4); let n5 = CFNumber::from(5); let arr = CFArray::from_CFTypes(&[ n0.as_CFType(), n1.as_CFType(), n2.as_CFType(), n3.as_CFType(), n4.as_CFType(), n5.as_CFType(), ]); assert_eq!( arr.get_all_values(), &[ n0.as_CFTypeRef(), n1.as_CFTypeRef(), n2.as_CFTypeRef(), n3.as_CFTypeRef(), n4.as_CFTypeRef(), n5.as_CFTypeRef() ] ); let mut sum = 0; let mut iter = arr.iter(); assert_eq!(iter.len(), 6); assert!(iter.next().is_some()); assert_eq!(iter.len(), 5); for elem in iter { let number: CFNumber = elem.downcast::().unwrap(); sum += number.to_i64().unwrap() } assert_eq!(sum, 15); for elem in arr.iter() { let number: CFNumber = elem.downcast::().unwrap(); sum += number.to_i64().unwrap() } assert_eq!(sum, 30); } } vendor/core-foundation/src/dictionary.rs0000664000175000017500000003347414160055207021260 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Dictionaries of key-value pairs. pub use core_foundation_sys::dictionary::*; use core_foundation_sys::base::{CFTypeRef, CFRelease, kCFAllocatorDefault}; use std::mem; use std::os::raw::c_void; use std::ptr; use std::marker::PhantomData; use base::{ItemRef, FromVoid, ToVoid}; use base::{CFIndexConvertible, TCFType}; use ConcreteCFType; // consume the type parameters with PhantomDatas pub struct CFDictionary(CFDictionaryRef, PhantomData, PhantomData); impl Drop for CFDictionary { fn drop(&mut self) { unsafe { CFRelease(self.as_CFTypeRef()) } } } impl_TCFType!(CFDictionary, CFDictionaryRef, CFDictionaryGetTypeID); impl_CFTypeDescription!(CFDictionary); unsafe impl ConcreteCFType for CFDictionary<*const c_void, *const c_void> {} impl CFDictionary { pub fn from_CFType_pairs(pairs: &[(K, V)]) -> CFDictionary where K: TCFType, V: TCFType { let (keys, values): (Vec, Vec) = pairs .iter() .map(|&(ref key, ref value)| (key.as_CFTypeRef(), value.as_CFTypeRef())) .unzip(); unsafe { let dictionary_ref = CFDictionaryCreate(kCFAllocatorDefault, keys.as_ptr(), values.as_ptr(), keys.len().to_CFIndex(), &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); TCFType::wrap_under_create_rule(dictionary_ref) } } #[inline] pub fn to_untyped(&self) -> CFDictionary { unsafe { CFDictionary::wrap_under_get_rule(self.0) } } /// Returns a `CFMutableDictionary` pointing to the same underlying dictionary as this immutable one. /// This should only be used when the underlying dictionary is mutable. #[inline] pub unsafe fn to_mutable(&self) -> CFMutableDictionary { CFMutableDictionary::wrap_under_get_rule(self.0 as CFMutableDictionaryRef) } /// Returns the same dictionary, but with the types reset to void pointers. 
/// Equal to `to_untyped`, but is faster since it does not increment the retain count. #[inline] pub fn into_untyped(self) -> CFDictionary { let reference = self.0; mem::forget(self); unsafe { CFDictionary::wrap_under_create_rule(reference) } } #[inline] pub fn len(&self) -> usize { unsafe { CFDictionaryGetCount(self.0) as usize } } #[inline] pub fn is_empty(&self) -> bool { self.len() == 0 } #[inline] pub fn contains_key(&self, key: &K) -> bool where K: ToVoid { unsafe { CFDictionaryContainsKey(self.0, key.to_void()) != 0 } } #[inline] pub fn find<'a, T: ToVoid>(&'a self, key: T) -> Option> where V: FromVoid, K: ToVoid { unsafe { let mut value: *const c_void = ptr::null(); if CFDictionaryGetValueIfPresent(self.0, key.to_void(), &mut value) != 0 { Some(V::from_void(value)) } else { None } } } /// # Panics /// /// Panics if the key is not present in the dictionary. Use `find` to get an `Option` instead /// of panicking. #[inline] pub fn get<'a, T: ToVoid>(&'a self, key: T) -> ItemRef<'a, V> where V: FromVoid, K: ToVoid { let ptr = key.to_void(); self.find(key).unwrap_or_else(|| panic!("No entry found for key {:p}", ptr)) } pub fn get_keys_and_values(&self) -> (Vec<*const c_void>, Vec<*const c_void>) { let length = self.len(); let mut keys = Vec::with_capacity(length); let mut values = Vec::with_capacity(length); unsafe { CFDictionaryGetKeysAndValues(self.0, keys.as_mut_ptr(), values.as_mut_ptr()); keys.set_len(length); values.set_len(length); } (keys, values) } } // consume the type parameters with PhantomDatas pub struct CFMutableDictionary(CFMutableDictionaryRef, PhantomData, PhantomData); impl Drop for CFMutableDictionary { fn drop(&mut self) { unsafe { CFRelease(self.as_CFTypeRef()) } } } impl_TCFType!(CFMutableDictionary, CFMutableDictionaryRef, CFDictionaryGetTypeID); impl_CFTypeDescription!(CFMutableDictionary); impl CFMutableDictionary { pub fn new() -> Self { Self::with_capacity(0) } pub fn with_capacity(capacity: isize) -> Self { unsafe { let dictionary_ref = CFDictionaryCreateMutable(kCFAllocatorDefault, capacity as _, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); TCFType::wrap_under_create_rule(dictionary_ref) } } pub fn copy_with_capacity(&self, capacity: isize) -> Self { unsafe { let dictionary_ref = CFDictionaryCreateMutableCopy(kCFAllocatorDefault, capacity as _, self.0); TCFType::wrap_under_get_rule(dictionary_ref) } } pub fn from_CFType_pairs(pairs: &[(K, V)]) -> CFMutableDictionary where K: ToVoid, V: ToVoid { let mut result = Self::with_capacity(pairs.len() as _); for &(ref key, ref value) in pairs { result.add(key, value); } result } #[inline] pub fn to_untyped(&self) -> CFMutableDictionary { unsafe { CFMutableDictionary::wrap_under_get_rule(self.0) } } /// Returns the same dictionary, but with the types reset to void pointers. /// Equal to `to_untyped`, but is faster since it does not increment the retain count. #[inline] pub fn into_untyped(self) -> CFMutableDictionary { let reference = self.0; mem::forget(self); unsafe { CFMutableDictionary::wrap_under_create_rule(reference) } } /// Returns a `CFDictionary` pointing to the same underlying dictionary as this mutable one. 
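    ///
    /// A brief sketch, mirroring the `mutable_dictionary_as_immutable` test below:
    ///
    /// ```
    /// # use core_foundation::boolean::CFBoolean;
    /// # use core_foundation::dictionary::CFMutableDictionary;
    /// # use core_foundation::string::CFString;
    /// let mut dict: CFMutableDictionary<CFString, CFBoolean> = CFMutableDictionary::new();
    /// dict.add(&CFString::from_static_string("Bar"), &CFBoolean::false_value());
    /// let snapshot = dict.to_immutable();
    /// assert_eq!(*snapshot.get(&CFString::from_static_string("Bar")), CFBoolean::false_value());
    /// ```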
#[inline] pub fn to_immutable(&self) -> CFDictionary { unsafe { CFDictionary::wrap_under_get_rule(self.0) } } // Immutable interface #[inline] pub fn len(&self) -> usize { unsafe { CFDictionaryGetCount(self.0) as usize } } #[inline] pub fn is_empty(&self) -> bool { self.len() == 0 } #[inline] pub fn contains_key(&self, key: *const c_void) -> bool { unsafe { CFDictionaryContainsKey(self.0, key) != 0 } } #[inline] pub fn find<'a>(&'a self, key: &K) -> Option> where V: FromVoid, K: ToVoid { unsafe { let mut value: *const c_void = ptr::null(); if CFDictionaryGetValueIfPresent(self.0, key.to_void(), &mut value) != 0 { Some(V::from_void(value)) } else { None } } } /// # Panics /// /// Panics if the key is not present in the dictionary. Use `find` to get an `Option` instead /// of panicking. #[inline] pub fn get<'a>(&'a self, key: &K) -> ItemRef<'a, V> where V: FromVoid, K: ToVoid { let ptr = key.to_void(); self.find(&key).unwrap_or_else(|| panic!("No entry found for key {:p}", ptr)) } pub fn get_keys_and_values(&self) -> (Vec<*const c_void>, Vec<*const c_void>) { let length = self.len(); let mut keys = Vec::with_capacity(length); let mut values = Vec::with_capacity(length); unsafe { CFDictionaryGetKeysAndValues(self.0, keys.as_mut_ptr(), values.as_mut_ptr()); keys.set_len(length); values.set_len(length); } (keys, values) } // Mutable interface /// Adds the key-value pair to the dictionary if no such key already exist. #[inline] pub fn add(&mut self, key: &K, value: &V) where K: ToVoid, V: ToVoid { unsafe { CFDictionaryAddValue(self.0, key.to_void(), value.to_void()) } } /// Sets the value of the key in the dictionary. #[inline] pub fn set(&mut self, key: K, value: V) where K: ToVoid, V: ToVoid { unsafe { CFDictionarySetValue(self.0, key.to_void(), value.to_void()) } } /// Replaces the value of the key in the dictionary. #[inline] pub fn replace(&mut self, key: K, value: V) where K: ToVoid, V: ToVoid { unsafe { CFDictionaryReplaceValue(self.0, key.to_void(), value.to_void()) } } /// Removes the value of the key from the dictionary. #[inline] pub fn remove(&mut self, key: K) where K: ToVoid { unsafe { CFDictionaryRemoveValue(self.0, key.to_void()) } } #[inline] pub fn remove_all(&mut self) { unsafe { CFDictionaryRemoveAllValues(self.0) } } } impl Default for CFMutableDictionary { fn default() -> Self { Self::new() } } impl<'a, K, V> From<&'a CFDictionary> for CFMutableDictionary { /// Creates a new mutable dictionary with the key-value pairs from another dictionary. /// The capacity of the new mutable dictionary is not limited. 
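    ///
    /// A small sketch of the conversion (mirroring the `convert_immutable_to_mutable_dict` test):
    ///
    /// ```
    /// # use core_foundation::boolean::CFBoolean;
    /// # use core_foundation::dictionary::{CFDictionary, CFMutableDictionary};
    /// # use core_foundation::string::CFString;
    /// let dict: CFDictionary<CFString, CFBoolean> = CFDictionary::from_CFType_pairs(&[
    ///     (CFString::from_static_string("Foo"), CFBoolean::true_value()),
    /// ]);
    /// let mut mutable = CFMutableDictionary::from(&dict);
    /// mutable.add(&CFString::from_static_string("Bar"), &CFBoolean::false_value());
    /// assert_eq!(dict.len(), 1);
    /// assert_eq!(mutable.len(), 2);
    /// ```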
fn from(dict: &'a CFDictionary) -> Self { unsafe { let mut_dict_ref = CFDictionaryCreateMutableCopy(kCFAllocatorDefault, 0, dict.0); TCFType::wrap_under_create_rule(mut_dict_ref) } } } #[cfg(test)] pub mod test { use super::*; use base::{CFType, TCFType}; use boolean::CFBoolean; use number::CFNumber; use string::CFString; #[test] fn dictionary() { let bar = CFString::from_static_string("Bar"); let baz = CFString::from_static_string("Baz"); let boo = CFString::from_static_string("Boo"); let foo = CFString::from_static_string("Foo"); let tru = CFBoolean::true_value(); let n42 = CFNumber::from(42); let d = CFDictionary::from_CFType_pairs(&[ (bar.as_CFType(), boo.as_CFType()), (baz.as_CFType(), tru.as_CFType()), (foo.as_CFType(), n42.as_CFType()), ]); let (v1, v2) = d.get_keys_and_values(); assert_eq!(v1, &[bar.as_CFTypeRef(), baz.as_CFTypeRef(), foo.as_CFTypeRef()]); assert_eq!(v2, &[boo.as_CFTypeRef(), tru.as_CFTypeRef(), n42.as_CFTypeRef()]); } #[test] fn mutable_dictionary() { let bar = CFString::from_static_string("Bar"); let baz = CFString::from_static_string("Baz"); let boo = CFString::from_static_string("Boo"); let foo = CFString::from_static_string("Foo"); let tru = CFBoolean::true_value(); let n42 = CFNumber::from(42); let mut d = CFMutableDictionary::::new(); d.add(&bar, &boo.as_CFType()); d.add(&baz, &tru.as_CFType()); d.add(&foo, &n42.as_CFType()); assert_eq!(d.len(), 3); let (v1, v2) = d.get_keys_and_values(); assert_eq!(v1, &[bar.as_CFTypeRef(), baz.as_CFTypeRef(), foo.as_CFTypeRef()]); assert_eq!(v2, &[boo.as_CFTypeRef(), tru.as_CFTypeRef(), n42.as_CFTypeRef()]); d.remove(baz); assert_eq!(d.len(), 2); let (v1, v2) = d.get_keys_and_values(); assert_eq!(v1, &[bar.as_CFTypeRef(), foo.as_CFTypeRef()]); assert_eq!(v2, &[boo.as_CFTypeRef(), n42.as_CFTypeRef()]); d.remove_all(); assert_eq!(d.len(), 0) } #[test] fn dict_find_and_contains_key() { let dict = CFDictionary::from_CFType_pairs(&[ ( CFString::from_static_string("hello"), CFBoolean::true_value(), ), ]); let key = CFString::from_static_string("hello"); let invalid_key = CFString::from_static_string("foobar"); assert!(dict.contains_key(&key)); assert!(!dict.contains_key(&invalid_key)); let value = dict.find(&key).unwrap().clone(); assert_eq!(value, CFBoolean::true_value()); assert_eq!(dict.find(&invalid_key), None); } #[test] fn convert_immutable_to_mutable_dict() { let dict: CFDictionary = CFDictionary::from_CFType_pairs(&[ (CFString::from_static_string("Foo"), CFBoolean::true_value()), ]); let mut mut_dict = CFMutableDictionary::from(&dict); assert_eq!(dict.retain_count(), 1); assert_eq!(mut_dict.retain_count(), 1); assert_eq!(mut_dict.len(), 1); assert_eq!(*mut_dict.get(&CFString::from_static_string("Foo")), CFBoolean::true_value()); mut_dict.add(&CFString::from_static_string("Bar"), &CFBoolean::false_value()); assert_eq!(dict.len(), 1); assert_eq!(mut_dict.len(), 2); } #[test] fn mutable_dictionary_as_immutable() { let mut mut_dict: CFMutableDictionary = CFMutableDictionary::new(); mut_dict.add(&CFString::from_static_string("Bar"), &CFBoolean::false_value()); assert_eq!(mut_dict.retain_count(), 1); let dict = mut_dict.to_immutable(); assert_eq!(mut_dict.retain_count(), 2); assert_eq!(dict.retain_count(), 2); assert_eq!(*dict.get(&CFString::from_static_string("Bar")), CFBoolean::false_value()); mem::drop(dict); assert_eq!(mut_dict.retain_count(), 1); } } vendor/core-foundation/src/characterset.rs0000664000175000017500000000136714160055207021557 0ustar mwhudsonmwhudson// Copyright 2019 The Servo Project Developers. 
See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! A set of Unicode compliant characters. pub use core_foundation_sys::characterset::*; use base::TCFType; declare_TCFType!{ /// An immutable set of Unicde characters. CFCharacterSet, CFCharacterSetRef } impl_TCFType!(CFCharacterSet, CFCharacterSetRef, CFCharacterSetGetTypeID); impl_CFTypeDescription!(CFCharacterSet); vendor/core-foundation/src/url.rs0000664000175000017500000001217414160055207017707 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! A URL type for Core Foundation. pub use core_foundation_sys::url::*; use base::{TCFType, CFIndex}; use string::{CFString}; use core_foundation_sys::base::{kCFAllocatorDefault, Boolean}; use std::fmt; use std::ptr; use std::path::{Path, PathBuf}; use libc::{c_char, strlen, PATH_MAX}; #[cfg(unix)] use std::os::unix::ffi::OsStrExt; #[cfg(unix)] use std::ffi::OsStr; declare_TCFType!(CFURL, CFURLRef); impl_TCFType!(CFURL, CFURLRef, CFURLGetTypeID); impl fmt::Debug for CFURL { #[inline] fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { unsafe { let string: CFString = TCFType::wrap_under_get_rule(CFURLGetString(self.0)); write!(f, "{}", string.to_string()) } } } impl CFURL { pub fn from_path>(path: P, isDirectory: bool) -> Option { let path_bytes; #[cfg(unix)] { path_bytes = path.as_ref().as_os_str().as_bytes() } #[cfg(not(unix))] { // XXX: Getting non-valid UTF8 paths into CoreFoundation on Windows is going to be unpleasant // CFURLGetWideFileSystemRepresentation might help path_bytes = match path.as_ref().to_str() { Some(path) => path, None => return None, } } unsafe { let url_ref = CFURLCreateFromFileSystemRepresentation(ptr::null_mut(), path_bytes.as_ptr(), path_bytes.len() as CFIndex, isDirectory as u8); if url_ref.is_null() { return None; } Some(TCFType::wrap_under_create_rule(url_ref)) } } pub fn from_file_system_path(filePath: CFString, pathStyle: CFURLPathStyle, isDirectory: bool) -> CFURL { unsafe { let url_ref = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, filePath.as_concrete_TypeRef(), pathStyle, isDirectory as u8); TCFType::wrap_under_create_rule(url_ref) } } #[cfg(unix)] pub fn to_path(&self) -> Option { // implementing this on Windows is more complicated because of the different OsStr representation unsafe { let mut buf = [0u8; PATH_MAX as usize]; let result = CFURLGetFileSystemRepresentation(self.0, true as Boolean, buf.as_mut_ptr(), buf.len() as CFIndex); if result == false as Boolean { return None; } let len = strlen(buf.as_ptr() as *const c_char); let path = OsStr::from_bytes(&buf[0..len]); Some(PathBuf::from(path)) } } pub fn get_string(&self) -> CFString { unsafe { TCFType::wrap_under_get_rule(CFURLGetString(self.0)) } } pub fn get_file_system_path(&self, pathStyle: CFURLPathStyle) -> CFString { unsafe { TCFType::wrap_under_create_rule(CFURLCopyFileSystemPath(self.as_concrete_TypeRef(), pathStyle)) } } pub fn absolute(&self) -> CFURL { unsafe { TCFType::wrap_under_create_rule(CFURLCopyAbsoluteURL(self.as_concrete_TypeRef())) } } } #[test] fn file_url_from_path() { let path = 
"/usr/local/foo/"; let cfstr_path = CFString::from_static_string(path); let cfurl = CFURL::from_file_system_path(cfstr_path, kCFURLPOSIXPathStyle, true); assert_eq!(cfurl.get_string().to_string(), "file:///usr/local/foo/"); } #[cfg(unix)] #[test] fn non_utf8() { use std::ffi::OsStr; let path = Path::new(OsStr::from_bytes(b"/\xC0/blame")); let cfurl = CFURL::from_path(path, false).unwrap(); assert_eq!(cfurl.to_path().unwrap(), path); let len = unsafe { CFURLGetBytes(cfurl.as_concrete_TypeRef(), ptr::null_mut(), 0) }; assert_eq!(len, 17); } #[test] fn absolute_file_url() { use core_foundation_sys::url::CFURLCreateWithFileSystemPathRelativeToBase; use std::path::PathBuf; let path = "/usr/local/foo"; let file = "bar"; let cfstr_path = CFString::from_static_string(path); let cfstr_file = CFString::from_static_string(file); let cfurl_base = CFURL::from_file_system_path(cfstr_path, kCFURLPOSIXPathStyle, true); let cfurl_relative: CFURL = unsafe { let url_ref = CFURLCreateWithFileSystemPathRelativeToBase(kCFAllocatorDefault, cfstr_file.as_concrete_TypeRef(), kCFURLPOSIXPathStyle, false as u8, cfurl_base.as_concrete_TypeRef()); TCFType::wrap_under_create_rule(url_ref) }; let mut absolute_path = PathBuf::from(path); absolute_path.push(file); assert_eq!(cfurl_relative.get_file_system_path(kCFURLPOSIXPathStyle).to_string(), file); assert_eq!(cfurl_relative.absolute().get_file_system_path(kCFURLPOSIXPathStyle).to_string(), absolute_path.to_str().unwrap()); } vendor/core-foundation/src/number.rs0000664000175000017500000000650714160055207020400 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Immutable numbers. use core_foundation_sys::base::kCFAllocatorDefault; pub use core_foundation_sys::number::*; use std::os::raw::c_void; use base::TCFType; declare_TCFType!{ /// An immutable numeric value. 
CFNumber, CFNumberRef } impl_TCFType!(CFNumber, CFNumberRef, CFNumberGetTypeID); impl_CFTypeDescription!(CFNumber); impl_CFComparison!(CFNumber, CFNumberCompare); impl CFNumber { #[inline] pub fn to_i32(&self) -> Option { unsafe { let mut value: i32 = 0; let ok = CFNumberGetValue(self.0, kCFNumberSInt32Type, &mut value as *mut i32 as *mut c_void); if ok { Some(value) } else { None } } } #[inline] pub fn to_i64(&self) -> Option { unsafe { let mut value: i64 = 0; let ok = CFNumberGetValue(self.0, kCFNumberSInt64Type, &mut value as *mut i64 as *mut c_void); if ok { Some(value) } else { None } } } #[inline] pub fn to_f32(&self) -> Option { unsafe { let mut value: f32 = 0.0; let ok = CFNumberGetValue(self.0, kCFNumberFloat32Type, &mut value as *mut f32 as *mut c_void); if ok { Some(value) } else { None } } } #[inline] pub fn to_f64(&self) -> Option { unsafe { let mut value: f64 = 0.0; let ok = CFNumberGetValue(self.0, kCFNumberFloat64Type, &mut value as *mut f64 as *mut c_void); if ok { Some(value) } else { None } } } } impl From for CFNumber { #[inline] fn from(value: i32) -> Self { unsafe { let number_ref = CFNumberCreate( kCFAllocatorDefault, kCFNumberSInt32Type, &value as *const i32 as *const c_void, ); TCFType::wrap_under_create_rule(number_ref) } } } impl From for CFNumber { #[inline] fn from(value: i64) -> Self { unsafe { let number_ref = CFNumberCreate( kCFAllocatorDefault, kCFNumberSInt64Type, &value as *const i64 as *const c_void, ); TCFType::wrap_under_create_rule(number_ref) } } } impl From for CFNumber { #[inline] fn from(value: f32) -> Self { unsafe { let number_ref = CFNumberCreate( kCFAllocatorDefault, kCFNumberFloat32Type, &value as *const f32 as *const c_void, ); TCFType::wrap_under_create_rule(number_ref) } } } impl From for CFNumber { #[inline] fn from(value: f64) -> Self { unsafe { let number_ref = CFNumberCreate( kCFAllocatorDefault, kCFNumberFloat64Type, &value as *const f64 as *const c_void, ); TCFType::wrap_under_create_rule(number_ref) } } } vendor/core-foundation/src/filedescriptor.rs0000664000175000017500000001444614160055207022127 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
pub use core_foundation_sys::filedescriptor::*; use core_foundation_sys::base::{Boolean, CFIndex}; use core_foundation_sys::base::{kCFAllocatorDefault, CFOptionFlags}; use base::TCFType; use runloop::CFRunLoopSource; use std::mem::MaybeUninit; use std::os::unix::io::{AsRawFd, RawFd}; use std::ptr; declare_TCFType!{ CFFileDescriptor, CFFileDescriptorRef } impl_TCFType!(CFFileDescriptor, CFFileDescriptorRef, CFFileDescriptorGetTypeID); impl CFFileDescriptor { pub fn new(fd: RawFd, closeOnInvalidate: bool, callout: CFFileDescriptorCallBack, context: Option<&CFFileDescriptorContext>) -> Option { let context = context.map_or(ptr::null(), |c| c as *const _); unsafe { let fd_ref = CFFileDescriptorCreate(kCFAllocatorDefault, fd, closeOnInvalidate as Boolean, callout, context); if fd_ref.is_null() { None } else { Some(TCFType::wrap_under_create_rule(fd_ref)) } } } pub fn context(&self) -> CFFileDescriptorContext { unsafe { let mut context = MaybeUninit::::uninit(); CFFileDescriptorGetContext(self.0, context.as_mut_ptr()); context.assume_init() } } pub fn enable_callbacks(&self, callback_types: CFOptionFlags) { unsafe { CFFileDescriptorEnableCallBacks(self.0, callback_types) } } pub fn disable_callbacks(&self, callback_types: CFOptionFlags) { unsafe { CFFileDescriptorDisableCallBacks(self.0, callback_types) } } pub fn valid(&self) -> bool { unsafe { CFFileDescriptorIsValid(self.0) != 0 } } pub fn invalidate(&self) { unsafe { CFFileDescriptorInvalidate(self.0) } } pub fn to_run_loop_source(&self, order: CFIndex) -> Option { unsafe { let source_ref = CFFileDescriptorCreateRunLoopSource( kCFAllocatorDefault, self.0, order ); if source_ref.is_null() { None } else { Some(TCFType::wrap_under_create_rule(source_ref)) } } } } impl AsRawFd for CFFileDescriptor { fn as_raw_fd(&self) -> RawFd { unsafe { CFFileDescriptorGetNativeDescriptor(self.0) } } } #[cfg(test)] mod test { extern crate libc; use super::*; use std::ffi::CString; use std::os::raw::c_void; use core_foundation_sys::base::{CFOptionFlags}; use core_foundation_sys::runloop::{kCFRunLoopDefaultMode}; use libc::O_RDWR; use runloop::{CFRunLoop}; #[test] fn test_consumed() { let path = CString::new("/dev/null").unwrap(); let raw_fd = unsafe { libc::open(path.as_ptr(), O_RDWR, 0) }; let cf_fd = CFFileDescriptor::new(raw_fd, true, never_callback, None); assert!(cf_fd.is_some()); let cf_fd = cf_fd.unwrap(); assert!(cf_fd.valid()); cf_fd.invalidate(); assert!(!cf_fd.valid()); // close() should fail assert_eq!(unsafe { libc::close(raw_fd) }, -1); } #[test] fn test_unconsumed() { let path = CString::new("/dev/null").unwrap(); let raw_fd = unsafe { libc::open(path.as_ptr(), O_RDWR, 0) }; let cf_fd = CFFileDescriptor::new(raw_fd, false, never_callback, None); assert!(cf_fd.is_some()); let cf_fd = cf_fd.unwrap(); assert!(cf_fd.valid()); cf_fd.invalidate(); assert!(!cf_fd.valid()); // close() should succeed assert_eq!(unsafe { libc::close(raw_fd) }, 0); } extern "C" fn never_callback(_f: CFFileDescriptorRef, _callback_types: CFOptionFlags, _info_ptr: *mut c_void) { unreachable!(); } struct TestInfo { value: CFOptionFlags } #[test] fn test_callback() { let mut info = TestInfo { value: 0 }; let context = CFFileDescriptorContext { version: 0, info: &mut info as *mut _ as *mut c_void, retain: None, release: None, copyDescription: None }; let path = CString::new("/dev/null").unwrap(); let raw_fd = unsafe { libc::open(path.as_ptr(), O_RDWR, 0) }; let cf_fd = CFFileDescriptor::new(raw_fd, true, callback, Some(&context)); assert!(cf_fd.is_some()); let cf_fd = 
cf_fd.unwrap(); assert!(cf_fd.valid()); let run_loop = CFRunLoop::get_current(); let source = CFRunLoopSource::from_file_descriptor(&cf_fd, 0); assert!(source.is_some()); unsafe { run_loop.add_source(&source.unwrap(), kCFRunLoopDefaultMode); } info.value = 0; cf_fd.enable_callbacks(kCFFileDescriptorReadCallBack); CFRunLoop::run_current(); assert_eq!(info.value, kCFFileDescriptorReadCallBack); info.value = 0; cf_fd.enable_callbacks(kCFFileDescriptorWriteCallBack); CFRunLoop::run_current(); assert_eq!(info.value, kCFFileDescriptorWriteCallBack); info.value = 0; cf_fd.disable_callbacks(kCFFileDescriptorReadCallBack | kCFFileDescriptorWriteCallBack); cf_fd.invalidate(); assert!(!cf_fd.valid()); } extern "C" fn callback(_f: CFFileDescriptorRef, callback_types: CFOptionFlags, info_ptr: *mut c_void) { assert!(!info_ptr.is_null()); let info: *mut TestInfo = info_ptr as *mut TestInfo; unsafe { (*info).value = callback_types }; CFRunLoop::get_current().stop(); } } vendor/core-foundation/src/lib.rs0000664000175000017500000001632014160055207017650 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![allow(non_snake_case)] //! This crate provides wrappers around the underlying CoreFoundation //! types and functions that are available on Apple's operating systems. //! //! It also provides a framework for other crates to use when wrapping //! other frameworks that use the CoreFoundation framework. extern crate core_foundation_sys; extern crate libc; #[cfg(feature = "with-chrono")] extern crate chrono; use base::TCFType; pub unsafe trait ConcreteCFType: TCFType {} /// Declare a Rust type that wraps an underlying CoreFoundation type. /// /// This will provide an implementation of `Drop` using [`CFRelease`]. /// The type must have an implementation of the [`TCFType`] trait, usually /// provided using the [`impl_TCFType`] macro. /// /// ``` /// #[macro_use] extern crate core_foundation; /// // Make sure that the `TCFType` trait is in scope. /// use core_foundation::base::{CFTypeID, TCFType}; /// /// extern "C" { /// // We need a function that returns the `CFTypeID`. /// pub fn ShrubberyGetTypeID() -> CFTypeID; /// } /// /// pub struct __Shrubbery {} /// // The ref type must be a pointer to the underlying struct. /// pub type ShrubberyRef = *const __Shrubbery; /// /// declare_TCFType!(Shrubbery, ShrubberyRef); /// impl_TCFType!(Shrubbery, ShrubberyRef, ShrubberyGetTypeID); /// # fn main() {} /// ``` /// /// [`CFRelease`]: https://developer.apple.com/documentation/corefoundation/1521153-cfrelease /// [`TCFType`]: base/trait.TCFType.html /// [`impl_TCFType`]: macro.impl_TCFType.html #[macro_export] macro_rules! declare_TCFType { ( $(#[$doc:meta])* $ty:ident, $raw:ident ) => { $(#[$doc])* pub struct $ty($raw); impl Drop for $ty { fn drop(&mut self) { unsafe { $crate::base::CFRelease(self.as_CFTypeRef()) } } } } } /// Provide an implementation of the [`TCFType`] trait for the Rust /// wrapper type around an underlying CoreFoundation type. /// /// See [`declare_TCFType`] for details. /// /// [`declare_TCFType`]: macro.declare_TCFType.html /// [`TCFType`]: base/trait.TCFType.html #[macro_export] macro_rules! 
impl_TCFType { ($ty:ident, $ty_ref:ident, $ty_id:ident) => { impl_TCFType!($ty<>, $ty_ref, $ty_id); unsafe impl $crate::ConcreteCFType for $ty { } }; ($ty:ident<$($p:ident $(: $bound:path)*),*>, $ty_ref:ident, $ty_id:ident) => { impl<$($p $(: $bound)*),*> $crate::base::TCFType for $ty<$($p),*> { type Ref = $ty_ref; #[inline] fn as_concrete_TypeRef(&self) -> $ty_ref { self.0 } #[inline] unsafe fn wrap_under_get_rule(reference: $ty_ref) -> Self { assert!(!reference.is_null(), "Attempted to create a NULL object."); let reference = $crate::base::CFRetain(reference as *const ::std::os::raw::c_void) as $ty_ref; $crate::base::TCFType::wrap_under_create_rule(reference) } #[inline] fn as_CFTypeRef(&self) -> $crate::base::CFTypeRef { self.as_concrete_TypeRef() as $crate::base::CFTypeRef } #[inline] unsafe fn wrap_under_create_rule(reference: $ty_ref) -> Self { assert!(!reference.is_null(), "Attempted to create a NULL object."); // we need one PhantomData for each type parameter so call ourselves // again with @Phantom $p to produce that $ty(reference $(, impl_TCFType!(@Phantom $p))*) } #[inline] fn type_id() -> $crate::base::CFTypeID { unsafe { $ty_id() } } } impl Clone for $ty { #[inline] fn clone(&self) -> $ty { unsafe { $ty::wrap_under_get_rule(self.0) } } } impl PartialEq for $ty { #[inline] fn eq(&self, other: &$ty) -> bool { self.as_CFType().eq(&other.as_CFType()) } } impl Eq for $ty { } unsafe impl<'a> $crate::base::ToVoid<$ty> for &'a $ty { fn to_void(&self) -> *const ::std::os::raw::c_void { use $crate::base::TCFTypeRef; self.as_concrete_TypeRef().as_void_ptr() } } unsafe impl $crate::base::ToVoid<$ty> for $ty { fn to_void(&self) -> *const ::std::os::raw::c_void { use $crate::base::TCFTypeRef; self.as_concrete_TypeRef().as_void_ptr() } } unsafe impl $crate::base::ToVoid<$ty> for $ty_ref { fn to_void(&self) -> *const ::std::os::raw::c_void { use $crate::base::TCFTypeRef; self.as_void_ptr() } } }; (@Phantom $x:ident) => { ::std::marker::PhantomData }; } /// Implement `std::fmt::Debug` for the given type. /// /// This will invoke the implementation of `Debug` for [`CFType`] /// which invokes [`CFCopyDescription`]. /// /// The type must have an implementation of the [`TCFType`] trait, usually /// provided using the [`impl_TCFType`] macro. /// /// [`CFType`]: base/struct.CFType.html#impl-Debug /// [`CFCopyDescription`]: https://developer.apple.com/documentation/corefoundation/1521252-cfcopydescription?language=objc /// [`TCFType`]: base/trait.TCFType.html /// [`impl_TCFType`]: macro.impl_TCFType.html #[macro_export] macro_rules! impl_CFTypeDescription { ($ty:ident) => { // it's fine to use an empty <> list impl_CFTypeDescription!($ty<>); }; ($ty:ident<$($p:ident $(: $bound:path)*),*>) => { impl<$($p $(: $bound)*),*> ::std::fmt::Debug for $ty<$($p),*> { fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result { self.as_CFType().fmt(f) } } } } #[macro_export] macro_rules! 
impl_CFComparison { ($ty:ident, $compare:ident) => { impl PartialOrd for $ty { #[inline] fn partial_cmp(&self, other: &$ty) -> Option<::std::cmp::Ordering> { unsafe { Some($compare(self.as_concrete_TypeRef(), other.as_concrete_TypeRef(), ::std::ptr::null_mut()).into()) } } } impl Ord for $ty { #[inline] fn cmp(&self, other: &$ty) -> ::std::cmp::Ordering { self.partial_cmp(other).unwrap() } } } } pub mod array; pub mod attributed_string; pub mod base; pub mod boolean; pub mod characterset; pub mod data; pub mod date; pub mod dictionary; pub mod error; pub mod filedescriptor; pub mod number; pub mod set; pub mod string; pub mod url; pub mod bundle; pub mod propertylist; pub mod runloop; pub mod timezone; pub mod uuid; pub mod mach_port; vendor/core-foundation/src/propertylist.rs0000664000175000017500000002656214160055207021673 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Core Foundation property lists use std::ptr; use std::mem; use std::os::raw::c_void; use error::CFError; use data::CFData; use base::{CFType, TCFType, TCFTypeRef}; pub use core_foundation_sys::propertylist::*; use core_foundation_sys::error::CFErrorRef; use core_foundation_sys::base::{CFGetRetainCount, CFGetTypeID, CFIndex, CFRetain, CFShow, CFTypeID, kCFAllocatorDefault}; pub fn create_with_data(data: CFData, options: CFPropertyListMutabilityOptions) -> Result<(*const c_void, CFPropertyListFormat), CFError> { unsafe { let mut error: CFErrorRef = ptr::null_mut(); let mut format: CFPropertyListFormat = 0; let property_list = CFPropertyListCreateWithData(kCFAllocatorDefault, data.as_concrete_TypeRef(), options, &mut format, &mut error); if property_list.is_null() { Err(TCFType::wrap_under_create_rule(error)) } else { Ok((property_list, format)) } } } pub fn create_data(property_list: *const c_void, format: CFPropertyListFormat) -> Result { unsafe { let mut error: CFErrorRef = ptr::null_mut(); let data_ref = CFPropertyListCreateData(kCFAllocatorDefault, property_list, format, 0, &mut error); if data_ref.is_null() { Err(TCFType::wrap_under_create_rule(error)) } else { Ok(TCFType::wrap_under_create_rule(data_ref)) } } } /// Trait for all subclasses of [`CFPropertyList`]. /// /// [`CFPropertyList`]: struct.CFPropertyList.html pub trait CFPropertyListSubClass: TCFType { /// Create an instance of the superclass type [`CFPropertyList`] for this instance. /// /// [`CFPropertyList`]: struct.CFPropertyList.html #[inline] fn to_CFPropertyList(&self) -> CFPropertyList { unsafe { CFPropertyList::wrap_under_get_rule(self.as_concrete_TypeRef().as_void_ptr()) } } /// Equal to [`to_CFPropertyList`], but consumes self and avoids changing the reference count. 
/// /// [`to_CFPropertyList`]: #method.to_CFPropertyList #[inline] fn into_CFPropertyList(self) -> CFPropertyList where Self: Sized, { let reference = self.as_concrete_TypeRef().as_void_ptr(); mem::forget(self); unsafe { CFPropertyList::wrap_under_create_rule(reference) } } } impl CFPropertyListSubClass for ::data::CFData {} impl CFPropertyListSubClass for ::string::CFString {} impl CFPropertyListSubClass for ::array::CFArray {} impl CFPropertyListSubClass for ::dictionary::CFDictionary {} impl CFPropertyListSubClass for ::date::CFDate {} impl CFPropertyListSubClass for ::boolean::CFBoolean {} impl CFPropertyListSubClass for ::number::CFNumber {} declare_TCFType!{ /// A CFPropertyList struct. This is superclass to [`CFData`], [`CFString`], [`CFArray`], /// [`CFDictionary`], [`CFDate`], [`CFBoolean`], and [`CFNumber`]. /// /// This superclass type does not have its own `CFTypeID`, instead each instance has the `CFTypeID` /// of the subclass it is an instance of. Thus, this type cannot implement the [`TCFType`] trait, /// since it cannot implement the static [`TCFType::type_id()`] method. /// /// [`CFData`]: ../data/struct.CFData.html /// [`CFString`]: ../string/struct.CFString.html /// [`CFArray`]: ../array/struct.CFArray.html /// [`CFDictionary`]: ../dictionary/struct.CFDictionary.html /// [`CFDate`]: ../date/struct.CFDate.html /// [`CFBoolean`]: ../boolean/struct.CFBoolean.html /// [`CFNumber`]: ../number/struct.CFNumber.html /// [`TCFType`]: ../base/trait.TCFType.html /// [`TCFType::type_id()`]: ../base/trait.TCFType.html#method.type_of CFPropertyList, CFPropertyListRef } impl_CFTypeDescription!(CFPropertyList); impl CFPropertyList { #[inline] pub fn as_concrete_TypeRef(&self) -> CFPropertyListRef { self.0 } #[inline] pub unsafe fn wrap_under_get_rule(reference: CFPropertyListRef) -> CFPropertyList { assert!(!reference.is_null(), "Attempted to create a NULL object."); let reference = CFRetain(reference); CFPropertyList(reference) } #[inline] pub fn as_CFType(&self) -> CFType { unsafe { CFType::wrap_under_get_rule(self.as_CFTypeRef()) } } #[inline] pub fn into_CFType(self) -> CFType where Self: Sized, { let reference = self.as_CFTypeRef(); mem::forget(self); unsafe { TCFType::wrap_under_create_rule(reference) } } #[inline] pub fn as_CFTypeRef(&self) -> ::core_foundation_sys::base::CFTypeRef { self.as_concrete_TypeRef() } #[inline] pub unsafe fn wrap_under_create_rule(obj: CFPropertyListRef) -> CFPropertyList { assert!(!obj.is_null(), "Attempted to create a NULL object."); CFPropertyList(obj) } /// Returns the reference count of the object. It is unwise to do anything other than test /// whether the return value of this method is greater than zero. #[inline] pub fn retain_count(&self) -> CFIndex { unsafe { CFGetRetainCount(self.as_CFTypeRef()) } } /// Returns the type ID of this object. Will be one of CFData, CFString, CFArray, CFDictionary, /// CFDate, CFBoolean, or CFNumber. #[inline] pub fn type_of(&self) -> CFTypeID { unsafe { CFGetTypeID(self.as_CFTypeRef()) } } /// Writes a debugging version of this object on standard error. pub fn show(&self) { unsafe { CFShow(self.as_CFTypeRef()) } } /// Returns true if this value is an instance of another type. 
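    ///
    /// A small sketch, along the same lines as the `downcast` example below:
    ///
    /// ```
    /// # use core_foundation::boolean::CFBoolean;
    /// # use core_foundation::propertylist::CFPropertyListSubClass;
    /// # use core_foundation::string::CFString;
    /// let plist = CFString::from_static_string("FooBar").to_CFPropertyList();
    /// assert!(plist.instance_of::<CFString>());
    /// assert!(!plist.instance_of::<CFBoolean>());
    /// ```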
#[inline] pub fn instance_of(&self) -> bool { self.type_of() == OtherCFType::type_id() } } impl Clone for CFPropertyList { #[inline] fn clone(&self) -> CFPropertyList { unsafe { CFPropertyList::wrap_under_get_rule(self.0) } } } impl PartialEq for CFPropertyList { #[inline] fn eq(&self, other: &CFPropertyList) -> bool { self.as_CFType().eq(&other.as_CFType()) } } impl Eq for CFPropertyList {} impl CFPropertyList { /// Try to downcast the [`CFPropertyList`] to a subclass. Checking if the instance is the /// correct subclass happens at runtime and `None` is returned if it is not the correct type. /// Works similar to [`Box::downcast`] and [`CFType::downcast`]. /// /// # Examples /// /// ``` /// # use core_foundation::string::CFString; /// # use core_foundation::propertylist::{CFPropertyList, CFPropertyListSubClass}; /// # /// // Create a string. /// let string: CFString = CFString::from_static_string("FooBar"); /// // Cast it up to a property list. /// let propertylist: CFPropertyList = string.to_CFPropertyList(); /// // Cast it down again. /// assert_eq!(propertylist.downcast::().unwrap().to_string(), "FooBar"); /// ``` /// /// [`CFPropertyList`]: struct.CFPropertyList.html /// [`Box::downcast`]: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.downcast pub fn downcast(&self) -> Option { if self.instance_of::() { unsafe { let subclass_ref = T::Ref::from_void_ptr(self.0); Some(T::wrap_under_get_rule(subclass_ref)) } } else { None } } /// Similar to [`downcast`], but consumes self and can thus avoid touching the retain count. /// /// [`downcast`]: #method.downcast pub fn downcast_into(self) -> Option { if self.instance_of::() { unsafe { let subclass_ref = T::Ref::from_void_ptr(self.0); mem::forget(self); Some(T::wrap_under_create_rule(subclass_ref)) } } else { None } } } #[cfg(test)] pub mod test { use super::*; use string::CFString; use boolean::CFBoolean; #[test] fn test_property_list_serialization() { use base::{TCFType, CFEqual}; use boolean::CFBoolean; use number::CFNumber; use dictionary::CFDictionary; use string::CFString; use super::*; let bar = CFString::from_static_string("Bar"); let baz = CFString::from_static_string("Baz"); let boo = CFString::from_static_string("Boo"); let foo = CFString::from_static_string("Foo"); let tru = CFBoolean::true_value(); let n42 = CFNumber::from(1i64<<33); let dict1 = CFDictionary::from_CFType_pairs(&[(bar.as_CFType(), boo.as_CFType()), (baz.as_CFType(), tru.as_CFType()), (foo.as_CFType(), n42.as_CFType())]); let data = create_data(dict1.as_CFTypeRef(), kCFPropertyListXMLFormat_v1_0).unwrap(); let (dict2, _) = create_with_data(data, kCFPropertyListImmutable).unwrap(); unsafe { assert_eq!(CFEqual(dict1.as_CFTypeRef(), dict2), 1); } } #[test] fn to_propertylist_retain_count() { let string = CFString::from_static_string("Bar"); assert_eq!(string.retain_count(), 1); let propertylist = string.to_CFPropertyList(); assert_eq!(string.retain_count(), 2); assert_eq!(propertylist.retain_count(), 2); mem::drop(string); assert_eq!(propertylist.retain_count(), 1); } #[test] fn downcast_string() { let propertylist = CFString::from_static_string("Bar").to_CFPropertyList(); assert_eq!(propertylist.downcast::().unwrap().to_string(), "Bar"); assert!(propertylist.downcast::().is_none()); } #[test] fn downcast_boolean() { let propertylist = CFBoolean::true_value().to_CFPropertyList(); assert!(propertylist.downcast::().is_some()); assert!(propertylist.downcast::().is_none()); } #[test] fn downcast_into_fail() { let string = CFString::from_static_string("Bar"); let 
propertylist = string.to_CFPropertyList(); assert_eq!(string.retain_count(), 2); assert!(propertylist.downcast_into::().is_none()); assert_eq!(string.retain_count(), 1); } #[test] fn downcast_into() { let string = CFString::from_static_string("Bar"); let propertylist = string.to_CFPropertyList(); assert_eq!(string.retain_count(), 2); let string2 = propertylist.downcast_into::().unwrap(); assert_eq!(string2.to_string(), "Bar"); assert_eq!(string2.retain_count(), 2); } } vendor/core-foundation/src/string.rs0000664000175000017500000001536314160055207020416 0ustar mwhudsonmwhudson// Copyright 2013 The Servo Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Immutable strings. pub use core_foundation_sys::string::*; use base::{CFIndexConvertible, TCFType}; use core_foundation_sys::base::{Boolean, CFIndex, CFRange}; use core_foundation_sys::base::{kCFAllocatorDefault, kCFAllocatorNull}; use std::borrow::Cow; use std::fmt; use std::str::{self, FromStr}; use std::ptr; use std::ffi::CStr; declare_TCFType!{ /// An immutable string in one of a variety of encodings. CFString, CFStringRef } impl_TCFType!(CFString, CFStringRef, CFStringGetTypeID); impl FromStr for CFString { type Err = (); /// See also CFString::new for a variant of this which does not return a Result #[inline] fn from_str(string: &str) -> Result { Ok(CFString::new(string)) } } impl<'a> From<&'a str> for CFString { #[inline] fn from(string: &'a str) -> CFString { CFString::new(string) } } impl<'a> From<&'a CFString> for Cow<'a, str> { fn from(cf_str: &'a CFString) -> Cow<'a, str> { unsafe { // Do this without allocating if we can get away with it let c_string = CFStringGetCStringPtr(cf_str.0, kCFStringEncodingUTF8); if !c_string.is_null() { let c_str = CStr::from_ptr(c_string); Cow::Borrowed(str::from_utf8_unchecked(c_str.to_bytes())) } else { let char_len = cf_str.char_len(); // First, ask how big the buffer ought to be. let mut bytes_required: CFIndex = 0; CFStringGetBytes(cf_str.0, CFRange { location: 0, length: char_len }, kCFStringEncodingUTF8, 0, false as Boolean, ptr::null_mut(), 0, &mut bytes_required); // Then, allocate the buffer and actually copy. let mut buffer = vec![b'\x00'; bytes_required as usize]; let mut bytes_used: CFIndex = 0; let chars_written = CFStringGetBytes(cf_str.0, CFRange { location: 0, length: char_len }, kCFStringEncodingUTF8, 0, false as Boolean, buffer.as_mut_ptr(), buffer.len().to_CFIndex(), &mut bytes_used); assert_eq!(chars_written, char_len); // This is dangerous; we over-allocate and null-terminate the string (during // initialization). assert_eq!(bytes_used, buffer.len().to_CFIndex()); Cow::Owned(String::from_utf8_unchecked(buffer)) } } } } impl fmt::Display for CFString { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.write_str(&Cow::from(self)) } } impl fmt::Debug for CFString { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "\"{}\"", self) } } impl CFString { /// Creates a new `CFString` instance from a Rust string. 
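    ///
    /// A short sketch (the length assertion assumes ASCII input, for which `char_len`
    /// matches the byte length):
    ///
    /// ```
    /// # use core_foundation::string::CFString;
    /// let s = CFString::new("hello");
    /// assert_eq!(s.char_len(), 5);
    /// assert_eq!(s.to_string(), "hello");
    /// ```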
#[inline] pub fn new(string: &str) -> CFString { unsafe { let string_ref = CFStringCreateWithBytes(kCFAllocatorDefault, string.as_ptr(), string.len().to_CFIndex(), kCFStringEncodingUTF8, false as Boolean); CFString::wrap_under_create_rule(string_ref) } } /// Like `CFString::new`, but references a string that can be used as a backing store /// by virtue of being statically allocated. #[inline] pub fn from_static_string(string: &'static str) -> CFString { unsafe { let string_ref = CFStringCreateWithBytesNoCopy(kCFAllocatorDefault, string.as_ptr(), string.len().to_CFIndex(), kCFStringEncodingUTF8, false as Boolean, kCFAllocatorNull); TCFType::wrap_under_create_rule(string_ref) } } /// Returns the number of characters in the string. #[inline] pub fn char_len(&self) -> CFIndex { unsafe { CFStringGetLength(self.0) } } } impl<'a> PartialEq<&'a str> for CFString { fn eq(&self, other: &&str) -> bool { unsafe { let temp = CFStringCreateWithBytesNoCopy(kCFAllocatorDefault, other.as_ptr(), other.len().to_CFIndex(), kCFStringEncodingUTF8, false as Boolean, kCFAllocatorNull); self.eq(&CFString::wrap_under_create_rule(temp)) } } } impl<'a> PartialEq for &'a str { #[inline] fn eq(&self, other: &CFString) -> bool { other.eq(self) } } impl PartialEq for String { #[inline] fn eq(&self, other: &CFString) -> bool { other.eq(&self.as_str()) } } impl PartialEq for CFString { #[inline] fn eq(&self, other: &String) -> bool { self.eq(&other.as_str()) } } #[test] fn str_cmp() { let cfstr = CFString::new("hello"); assert_eq!("hello", cfstr); assert_eq!(cfstr, "hello"); assert_ne!(cfstr, "wrong"); assert_ne!("wrong", cfstr); let hello = String::from("hello"); assert_eq!(hello, cfstr); assert_eq!(cfstr, hello); } #[test] fn string_and_back() { let original = "The quick brown fox jumped over the slow lazy dog."; let cfstr = CFString::from_static_string(original); let converted = cfstr.to_string(); assert_eq!(converted, original); } vendor/core-foundation/tests/0000775000175000017500000000000014160055207017105 5ustar mwhudsonmwhudsonvendor/core-foundation/tests/use_macro_outside_crate.rs0000664000175000017500000000130414160055207024340 0ustar mwhudsonmwhudson#[macro_use] extern crate core_foundation; use core_foundation::base::{CFComparisonResult, TCFType}; use std::os::raw::c_void; // sys equivalent stuff that must be declared #[repr(C)] pub struct __CFFooBar(c_void); pub type CFFooBarRef = *const __CFFooBar; extern "C" { pub fn CFFooBarGetTypeID() -> core_foundation::base::CFTypeID; pub fn fake_compare( this: CFFooBarRef, other: CFFooBarRef, context: *mut c_void, ) -> CFComparisonResult; } // Try to use the macros outside of the crate declare_TCFType!(CFFooBar, CFFooBarRef); impl_TCFType!(CFFooBar, CFFooBarRef, CFFooBarGetTypeID); impl_CFTypeDescription!(CFFooBar); impl_CFComparison!(CFFooBar, fake_compare); vendor/core-foundation/LICENSE-MIT0000664000175000017500000000205314160055207017377 0ustar mwhudsonmwhudsonCopyright (c) 2012-2013 Mozilla Foundation Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/vec_map/0000775000175000017500000000000014160055207014261 5ustar mwhudsonmwhudsonvendor/vec_map/.cargo-checksum.json0000664000175000017500000000013114160055207020120 0ustar mwhudsonmwhudson{"files":{},"package":"f1bddf1187be692e79c5ffeab891132dfb0f236ed36a43c7ed39f1165ee20191"}vendor/vec_map/LICENSE-APACHE0000664000175000017500000002514214160055207016211 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/vec_map/Cargo.toml0000664000175000017500000000354514160055207016220 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "vec_map" version = "0.8.2" authors = ["Alex Crichton ", "Jorge Aparicio ", "Alexis Beingessner ", "Brian Anderson <>", "tbu- <>", "Manish Goregaokar <>", "Aaron Turon ", "Adolfo Ochagavía <>", "Niko Matsakis <>", "Steven Fackler <>", "Chase Southwood ", "Eduard Burtescu <>", "Florian Wilkens <>", "Félix Raimundo <>", "Tibor Benke <>", "Markus Siemens ", "Josh Branchaud ", "Huon Wilson ", "Corey Farwell ", "Aaron Liblong <>", "Nick Cameron ", "Patrick Walton ", "Felix S Klock II <>", "Andrew Paseltiner ", "Sean McArthur ", "Vadim Petrochenkov <>"] exclude = ["/.travis.yml", "/deploy-docs.sh"] description = "A simple map based on a vector for small integer keys" homepage = "https://github.com/contain-rs/vec-map" documentation = "https://contain-rs.github.io/vec-map/vec_map" readme = "README.md" keywords = ["data-structures", "collections", "vecmap", "vec_map", "contain-rs"] license = "MIT/Apache-2.0" repository = "https://github.com/contain-rs/vec-map" [dependencies.serde] version = "1.0" features = ["derive"] optional = true [features] eders = ["serde"] vendor/vec_map/src/0000775000175000017500000000000014160055207015050 5ustar mwhudsonmwhudsonvendor/vec_map/src/lib.rs0000664000175000017500000012545214160055207016175 0ustar mwhudsonmwhudson// Copyright 2012-2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![deny(missing_docs)] //! A simple map based on a vector for small integer keys. Space requirements //! are O(highest integer key). 
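//!
//! # Example
//!
//! A minimal sketch of the space behaviour noted above: only the inserted
//! entries count towards `len`, but the backing vector has to cover every
//! index up to the highest key.
//!
//! ```
//! use vec_map::VecMap;
//!
//! let mut map: VecMap<&str> = VecMap::new();
//! map.insert(0, "zero");
//! map.insert(9, "nine");
//!
//! // Two entries stored, ten slots allocated (indices 0..=9).
//! assert_eq!(map.len(), 2);
//! assert!(map.capacity() >= 10);
//! ```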
// optional serde support #[cfg(feature = "serde")] #[macro_use] extern crate serde; use self::Entry::*; use std::cmp::{Ordering, max}; use std::fmt; use std::hash::{Hash, Hasher}; use std::iter::{Enumerate, FilterMap, FromIterator}; use std::mem::{replace, swap}; use std::ops::{Index, IndexMut}; use std::slice; use std::vec; /// A map optimized for small integer keys. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut months = VecMap::new(); /// months.insert(1, "Jan"); /// months.insert(2, "Feb"); /// months.insert(3, "Mar"); /// /// if !months.contains_key(12) { /// println!("The end is near!"); /// } /// /// assert_eq!(months.get(1), Some(&"Jan")); /// /// if let Some(value) = months.get_mut(3) { /// *value = "Venus"; /// } /// /// assert_eq!(months.get(3), Some(&"Venus")); /// /// // Print out all months /// for (key, value) in &months { /// println!("month {} is {}", key, value); /// } /// /// months.clear(); /// assert!(months.is_empty()); /// ``` #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] pub struct VecMap { n: usize, v: Vec>, } /// A view into a single entry in a map, which may either be vacant or occupied. pub enum Entry<'a, V: 'a> { /// A vacant Entry Vacant(VacantEntry<'a, V>), /// An occupied Entry Occupied(OccupiedEntry<'a, V>), } /// A vacant Entry. pub struct VacantEntry<'a, V: 'a> { map: &'a mut VecMap, index: usize, } /// An occupied Entry. pub struct OccupiedEntry<'a, V: 'a> { map: &'a mut VecMap, index: usize, } impl Default for VecMap { #[inline] fn default() -> Self { Self::new() } } impl Hash for VecMap { fn hash(&self, state: &mut H) { // In order to not traverse the `VecMap` twice, count the elements // during iteration. let mut count: usize = 0; for elt in self { elt.hash(state); count += 1; } count.hash(state); } } impl VecMap { /// Creates an empty `VecMap`. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let mut map: VecMap<&str> = VecMap::new(); /// ``` pub fn new() -> Self { VecMap { n: 0, v: vec![] } } /// Creates an empty `VecMap` with space for at least `capacity` /// elements before resizing. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let mut map: VecMap<&str> = VecMap::with_capacity(10); /// ``` pub fn with_capacity(capacity: usize) -> Self { VecMap { n: 0, v: Vec::with_capacity(capacity) } } /// Returns the number of elements the `VecMap` can hold without /// reallocating. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let map: VecMap = VecMap::with_capacity(10); /// assert!(map.capacity() >= 10); /// ``` #[inline] pub fn capacity(&self) -> usize { self.v.capacity() } /// Reserves capacity for the given `VecMap` to contain `len` distinct keys. /// In the case of `VecMap` this means reallocations will not occur as long /// as all inserted keys are less than `len`. /// /// The collection may reserve more space to avoid frequent reallocations. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let mut map: VecMap<&str> = VecMap::new(); /// map.reserve_len(10); /// assert!(map.capacity() >= 10); /// ``` pub fn reserve_len(&mut self, len: usize) { let cur_len = self.v.len(); if len >= cur_len { self.v.reserve(len - cur_len); } } /// Reserves the minimum capacity for the given `VecMap` to contain `len` distinct keys. /// In the case of `VecMap` this means reallocations will not occur as long as all inserted /// keys are less than `len`. /// /// Note that the allocator may give the collection more space than it requests. 
/// Therefore capacity cannot be relied upon to be precisely minimal. Prefer /// `reserve_len` if future insertions are expected. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let mut map: VecMap<&str> = VecMap::new(); /// map.reserve_len_exact(10); /// assert!(map.capacity() >= 10); /// ``` pub fn reserve_len_exact(&mut self, len: usize) { let cur_len = self.v.len(); if len >= cur_len { self.v.reserve_exact(len - cur_len); } } /// Trims the `VecMap` of any excess capacity. /// /// The collection may reserve more space to avoid frequent reallocations. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// let mut map: VecMap<&str> = VecMap::with_capacity(10); /// map.shrink_to_fit(); /// assert_eq!(map.capacity(), 0); /// ``` pub fn shrink_to_fit(&mut self) { // strip off trailing `None`s if let Some(idx) = self.v.iter().rposition(Option::is_some) { self.v.truncate(idx + 1); } else { self.v.clear(); } self.v.shrink_to_fit() } /// Returns an iterator visiting all keys in ascending order of the keys. /// The iterator's element type is `usize`. pub fn keys(&self) -> Keys { Keys { iter: self.iter() } } /// Returns an iterator visiting all values in ascending order of the keys. /// The iterator's element type is `&'r V`. pub fn values(&self) -> Values { Values { iter: self.iter() } } /// Returns an iterator visiting all values in ascending order of the keys. /// The iterator's element type is `&'r mut V`. pub fn values_mut(&mut self) -> ValuesMut { ValuesMut { iter_mut: self.iter_mut() } } /// Returns an iterator visiting all key-value pairs in ascending order of the keys. /// The iterator's element type is `(usize, &'r V)`. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// map.insert(3, "c"); /// map.insert(2, "b"); /// /// // Print `1: a` then `2: b` then `3: c` /// for (key, value) in map.iter() { /// println!("{}: {}", key, value); /// } /// ``` pub fn iter(&self) -> Iter { Iter { front: 0, back: self.v.len(), n: self.n, yielded: 0, iter: self.v.iter() } } /// Returns an iterator visiting all key-value pairs in ascending order of the keys, /// with mutable references to the values. /// The iterator's element type is `(usize, &'r mut V)`. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// map.insert(2, "b"); /// map.insert(3, "c"); /// /// for (key, value) in map.iter_mut() { /// *value = "x"; /// } /// /// for (key, value) in &map { /// assert_eq!(value, &"x"); /// } /// ``` pub fn iter_mut(&mut self) -> IterMut { IterMut { front: 0, back: self.v.len(), n: self.n, yielded: 0, iter: self.v.iter_mut() } } /// Moves all elements from `other` into the map while overwriting existing keys. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut a = VecMap::new(); /// a.insert(1, "a"); /// a.insert(2, "b"); /// /// let mut b = VecMap::new(); /// b.insert(3, "c"); /// b.insert(4, "d"); /// /// a.append(&mut b); /// /// assert_eq!(a.len(), 4); /// assert_eq!(b.len(), 0); /// assert_eq!(a[1], "a"); /// assert_eq!(a[2], "b"); /// assert_eq!(a[3], "c"); /// assert_eq!(a[4], "d"); /// ``` pub fn append(&mut self, other: &mut Self) { self.extend(other.drain()); } /// Splits the collection into two at the given key. /// /// Returns a newly allocated `Self`. `self` contains elements `[0, at)`, /// and the returned `Self` contains elements `[at, max_key)`. /// /// Note that the capacity of `self` does not change. 
/// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut a = VecMap::new(); /// a.insert(1, "a"); /// a.insert(2, "b"); /// a.insert(3, "c"); /// a.insert(4, "d"); /// /// let b = a.split_off(3); /// /// assert_eq!(a[1], "a"); /// assert_eq!(a[2], "b"); /// /// assert_eq!(b[3], "c"); /// assert_eq!(b[4], "d"); /// ``` pub fn split_off(&mut self, at: usize) -> Self { let mut other = VecMap::new(); if at == 0 { // Move all elements to other // The swap will also fix .n swap(self, &mut other); return other } else if at >= self.v.len() { // No elements to copy return other; } // Look up the index of the first non-None item let first_index = self.v.iter().position(|el| el.is_some()); let start_index = match first_index { Some(index) => max(at, index), None => { // self has no elements return other; } }; // Fill the new VecMap with `None`s until `start_index` other.v.extend((0..start_index).map(|_| None)); // Move elements beginning with `start_index` from `self` into `other` let mut taken = 0; other.v.extend(self.v[start_index..].iter_mut().map(|el| { let el = el.take(); if el.is_some() { taken += 1; } el })); other.n = taken; self.n -= taken; other } /// Returns an iterator visiting all key-value pairs in ascending order of /// the keys, emptying (but not consuming) the original `VecMap`. /// The iterator's element type is `(usize, &'r V)`. Keeps the allocated memory for reuse. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// map.insert(3, "c"); /// map.insert(2, "b"); /// /// let vec: Vec<(usize, &str)> = map.drain().collect(); /// /// assert_eq!(vec, [(1, "a"), (2, "b"), (3, "c")]); /// ``` pub fn drain(&mut self) -> Drain { fn filter((i, v): (usize, Option)) -> Option<(usize, A)> { v.map(|v| (i, v)) } let filter: fn((usize, Option)) -> Option<(usize, V)> = filter; // coerce to fn ptr self.n = 0; Drain { iter: self.v.drain(..).enumerate().filter_map(filter) } } /// Returns the number of elements in the map. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut a = VecMap::new(); /// assert_eq!(a.len(), 0); /// a.insert(1, "a"); /// assert_eq!(a.len(), 1); /// ``` pub fn len(&self) -> usize { self.n } /// Returns true if the map contains no elements. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut a = VecMap::new(); /// assert!(a.is_empty()); /// a.insert(1, "a"); /// assert!(!a.is_empty()); /// ``` pub fn is_empty(&self) -> bool { self.n == 0 } /// Clears the map, removing all key-value pairs. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut a = VecMap::new(); /// a.insert(1, "a"); /// a.clear(); /// assert!(a.is_empty()); /// ``` pub fn clear(&mut self) { self.n = 0; self.v.clear() } /// Returns a reference to the value corresponding to the key. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// assert_eq!(map.get(1), Some(&"a")); /// assert_eq!(map.get(2), None); /// ``` pub fn get(&self, key: usize) -> Option<&V> { if key < self.v.len() { self.v[key].as_ref() } else { None } } /// Returns true if the map contains a value for the specified key. 
/// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// assert_eq!(map.contains_key(1), true); /// assert_eq!(map.contains_key(2), false); /// ``` #[inline] pub fn contains_key(&self, key: usize) -> bool { self.get(key).is_some() } /// Returns a mutable reference to the value corresponding to the key. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// if let Some(x) = map.get_mut(1) { /// *x = "b"; /// } /// assert_eq!(map[1], "b"); /// ``` pub fn get_mut(&mut self, key: usize) -> Option<&mut V> { if key < self.v.len() { self.v[key].as_mut() } else { None } } /// Inserts a key-value pair into the map. If the key already had a value /// present in the map, that value is returned. Otherwise, `None` is returned. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// assert_eq!(map.insert(37, "a"), None); /// assert_eq!(map.is_empty(), false); /// /// map.insert(37, "b"); /// assert_eq!(map.insert(37, "c"), Some("b")); /// assert_eq!(map[37], "c"); /// ``` pub fn insert(&mut self, key: usize, value: V) -> Option { let len = self.v.len(); if len <= key { self.v.extend((0..key - len + 1).map(|_| None)); } let was = replace(&mut self.v[key], Some(value)); if was.is_none() { self.n += 1; } was } /// Removes a key from the map, returning the value at the key if the key /// was previously in the map. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// assert_eq!(map.remove(1), Some("a")); /// assert_eq!(map.remove(1), None); /// ``` pub fn remove(&mut self, key: usize) -> Option { if key >= self.v.len() { return None; } let result = &mut self.v[key]; let was = result.take(); if was.is_some() { self.n -= 1; } was } /// Gets the given key's corresponding entry in the map for in-place manipulation. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut count: VecMap = VecMap::new(); /// /// // count the number of occurrences of numbers in the vec /// for x in vec![1, 2, 1, 2, 3, 4, 1, 2, 4] { /// *count.entry(x).or_insert(0) += 1; /// } /// /// assert_eq!(count[1], 3); /// ``` pub fn entry(&mut self, key: usize) -> Entry { // FIXME(Gankro): this is basically the dumbest implementation of // entry possible, because weird non-lexical borrows issues make it // completely insane to do any other way. That said, Entry is a border-line // useless construct on VecMap, so it's hardly a big loss. if self.contains_key(key) { Occupied(OccupiedEntry { map: self, index: key, }) } else { Vacant(VacantEntry { map: self, index: key, }) } } /// Retains only the elements specified by the predicate. /// /// In other words, remove all pairs `(k, v)` such that `f(&k, &mut v)` returns `false`. /// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map: VecMap = (0..8).map(|x|(x, x*10)).collect(); /// map.retain(|k, _| k % 2 == 0); /// assert_eq!(map.len(), 4); /// ``` pub fn retain(&mut self, mut f: F) where F: FnMut(usize, &mut V) -> bool { for (i, e) in self.v.iter_mut().enumerate() { let remove = match *e { Some(ref mut value) => !f(i, value), None => false, }; if remove { *e = None; self.n -= 1; } } } } impl<'a, V> Entry<'a, V> { /// Ensures a value is in the entry by inserting the default if empty, and /// returns a mutable reference to the value in the entry. 
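///
/// # Example
///
/// A minimal sketch of the usual counting idiom, mirroring the example on
/// `entry`:
///
/// ```
/// use vec_map::VecMap;
///
/// let mut map: VecMap<u32> = VecMap::new();
/// *map.entry(3).or_insert(0) += 1;
/// *map.entry(3).or_insert(0) += 1;
/// assert_eq!(map[3], 2);
/// ```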
pub fn or_insert(self, default: V) -> &'a mut V { match self { Occupied(entry) => entry.into_mut(), Vacant(entry) => entry.insert(default), } } /// Ensures a value is in the entry by inserting the result of the default /// function if empty, and returns a mutable reference to the value in the /// entry. pub fn or_insert_with V>(self, default: F) -> &'a mut V { match self { Occupied(entry) => entry.into_mut(), Vacant(entry) => entry.insert(default()), } } } impl<'a, V> VacantEntry<'a, V> { /// Sets the value of the entry with the VacantEntry's key, /// and returns a mutable reference to it. pub fn insert(self, value: V) -> &'a mut V { let index = self.index; self.map.insert(index, value); &mut self.map[index] } } impl<'a, V> OccupiedEntry<'a, V> { /// Gets a reference to the value in the entry. pub fn get(&self) -> &V { let index = self.index; &self.map[index] } /// Gets a mutable reference to the value in the entry. pub fn get_mut(&mut self) -> &mut V { let index = self.index; &mut self.map[index] } /// Converts the entry into a mutable reference to its value. pub fn into_mut(self) -> &'a mut V { let index = self.index; &mut self.map[index] } /// Sets the value of the entry with the OccupiedEntry's key, /// and returns the entry's old value. pub fn insert(&mut self, value: V) -> V { let index = self.index; self.map.insert(index, value).unwrap() } /// Takes the value of the entry out of the map, and returns it. pub fn remove(self) -> V { let index = self.index; self.map.remove(index).unwrap() } } impl Clone for VecMap { #[inline] fn clone(&self) -> Self { VecMap { n: self.n, v: self.v.clone() } } #[inline] fn clone_from(&mut self, source: &Self) { self.v.clone_from(&source.v); self.n = source.n; } } impl PartialEq for VecMap { fn eq(&self, other: &Self) -> bool { self.n == other.n && self.iter().eq(other.iter()) } } impl Eq for VecMap {} impl PartialOrd for VecMap { #[inline] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl Ord for VecMap { #[inline] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl fmt::Debug for VecMap { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_map().entries(self).finish() } } impl FromIterator<(usize, V)> for VecMap { fn from_iter>(iter: I) -> Self { let mut map = Self::new(); map.extend(iter); map } } impl IntoIterator for VecMap { type Item = (usize, T); type IntoIter = IntoIter; /// Returns an iterator visiting all key-value pairs in ascending order of /// the keys, consuming the original `VecMap`. /// The iterator's element type is `(usize, &'r V)`. 
/// /// # Examples /// /// ``` /// use vec_map::VecMap; /// /// let mut map = VecMap::new(); /// map.insert(1, "a"); /// map.insert(3, "c"); /// map.insert(2, "b"); /// /// let vec: Vec<(usize, &str)> = map.into_iter().collect(); /// /// assert_eq!(vec, [(1, "a"), (2, "b"), (3, "c")]); /// ``` fn into_iter(self) -> IntoIter { IntoIter { n: self.n, yielded: 0, iter: self.v.into_iter().enumerate() } } } impl<'a, T> IntoIterator for &'a VecMap { type Item = (usize, &'a T); type IntoIter = Iter<'a, T>; fn into_iter(self) -> Iter<'a, T> { self.iter() } } impl<'a, T> IntoIterator for &'a mut VecMap { type Item = (usize, &'a mut T); type IntoIter = IterMut<'a, T>; fn into_iter(self) -> IterMut<'a, T> { self.iter_mut() } } impl Extend<(usize, V)> for VecMap { fn extend>(&mut self, iter: I) { for (k, v) in iter { self.insert(k, v); } } } impl<'a, V: Copy> Extend<(usize, &'a V)> for VecMap { fn extend>(&mut self, iter: I) { self.extend(iter.into_iter().map(|(key, &value)| (key, value))); } } impl Index for VecMap { type Output = V; #[inline] fn index(&self, i: usize) -> &V { self.get(i).expect("key not present") } } impl<'a, V> Index<&'a usize> for VecMap { type Output = V; #[inline] fn index(&self, i: &usize) -> &V { self.get(*i).expect("key not present") } } impl IndexMut for VecMap { #[inline] fn index_mut(&mut self, i: usize) -> &mut V { self.get_mut(i).expect("key not present") } } impl<'a, V> IndexMut<&'a usize> for VecMap { #[inline] fn index_mut(&mut self, i: &usize) -> &mut V { self.get_mut(*i).expect("key not present") } } macro_rules! iterator { (impl $name:ident -> $elem:ty, $($getter:ident),+) => { impl<'a, V> Iterator for $name<'a, V> { type Item = $elem; #[inline] fn next(&mut self) -> Option<$elem> { while self.front < self.back { if let Some(elem) = self.iter.next() { if let Some(x) = elem$(. $getter ())+ { let index = self.front; self.front += 1; self.yielded += 1; return Some((index, x)); } } self.front += 1; } None } #[inline] fn size_hint(&self) -> (usize, Option) { (self.n - self.yielded, Some(self.n - self.yielded)) } } } } macro_rules! double_ended_iterator { (impl $name:ident -> $elem:ty, $($getter:ident),+) => { impl<'a, V> DoubleEndedIterator for $name<'a, V> { #[inline] fn next_back(&mut self) -> Option<$elem> { while self.front < self.back { if let Some(elem) = self.iter.next_back() { if let Some(x) = elem$(. $getter ())+ { self.back -= 1; return Some((self.back, x)); } } self.back -= 1; } None } } } } /// An iterator over the key-value pairs of a map. pub struct Iter<'a, V: 'a> { front: usize, back: usize, n: usize, yielded: usize, iter: slice::Iter<'a, Option> } // FIXME(#19839) Remove in favor of `#[derive(Clone)]` impl<'a, V> Clone for Iter<'a, V> { fn clone(&self) -> Iter<'a, V> { Iter { front: self.front, back: self.back, n: self.n, yielded: self.yielded, iter: self.iter.clone() } } } iterator! { impl Iter -> (usize, &'a V), as_ref } impl<'a, V> ExactSizeIterator for Iter<'a, V> {} double_ended_iterator! { impl Iter -> (usize, &'a V), as_ref } /// An iterator over the key-value pairs of a map, with the /// values being mutable. pub struct IterMut<'a, V: 'a> { front: usize, back: usize, n: usize, yielded: usize, iter: slice::IterMut<'a, Option> } iterator! { impl IterMut -> (usize, &'a mut V), as_mut } impl<'a, V> ExactSizeIterator for IterMut<'a, V> {} double_ended_iterator! { impl IterMut -> (usize, &'a mut V), as_mut } /// An iterator over the keys of a map. 
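///
/// # Example
///
/// A minimal sketch; values of this type are normally obtained through
/// `VecMap::keys`:
///
/// ```
/// use vec_map::VecMap;
///
/// let mut map = VecMap::new();
/// map.insert(1, "a");
/// map.insert(3, "c");
///
/// // Keys are yielded in ascending order.
/// let keys: Vec<usize> = map.keys().collect();
/// assert_eq!(keys, [1, 3]);
/// ```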
pub struct Keys<'a, V: 'a> { iter: Iter<'a, V>, } // FIXME(#19839) Remove in favor of `#[derive(Clone)]` impl<'a, V> Clone for Keys<'a, V> { fn clone(&self) -> Keys<'a, V> { Keys { iter: self.iter.clone() } } } /// An iterator over the values of a map. pub struct Values<'a, V: 'a> { iter: Iter<'a, V>, } // FIXME(#19839) Remove in favor of `#[derive(Clone)]` impl<'a, V> Clone for Values<'a, V> { fn clone(&self) -> Values<'a, V> { Values { iter: self.iter.clone() } } } /// An iterator over the values of a map. pub struct ValuesMut<'a, V: 'a> { iter_mut: IterMut<'a, V>, } /// A consuming iterator over the key-value pairs of a map. pub struct IntoIter { n: usize, yielded: usize, iter: Enumerate>>, } /// A draining iterator over the key-value pairs of a map. pub struct Drain<'a, V: 'a> { iter: FilterMap< Enumerate>>, fn((usize, Option)) -> Option<(usize, V)>> } impl<'a, V> Iterator for Drain<'a, V> { type Item = (usize, V); fn next(&mut self) -> Option<(usize, V)> { self.iter.next() } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a, V> ExactSizeIterator for Drain<'a, V> {} impl<'a, V> DoubleEndedIterator for Drain<'a, V> { fn next_back(&mut self) -> Option<(usize, V)> { self.iter.next_back() } } impl<'a, V> Iterator for Keys<'a, V> { type Item = usize; fn next(&mut self) -> Option { self.iter.next().map(|e| e.0) } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a, V> ExactSizeIterator for Keys<'a, V> {} impl<'a, V> DoubleEndedIterator for Keys<'a, V> { fn next_back(&mut self) -> Option { self.iter.next_back().map(|e| e.0) } } impl<'a, V> Iterator for Values<'a, V> { type Item = &'a V; fn next(&mut self) -> Option<(&'a V)> { self.iter.next().map(|e| e.1) } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a, V> ExactSizeIterator for Values<'a, V> {} impl<'a, V> DoubleEndedIterator for Values<'a, V> { fn next_back(&mut self) -> Option<(&'a V)> { self.iter.next_back().map(|e| e.1) } } impl<'a, V> Iterator for ValuesMut<'a, V> { type Item = &'a mut V; fn next(&mut self) -> Option<(&'a mut V)> { self.iter_mut.next().map(|e| e.1) } fn size_hint(&self) -> (usize, Option) { self.iter_mut.size_hint() } } impl<'a, V> ExactSizeIterator for ValuesMut<'a, V> {} impl<'a, V> DoubleEndedIterator for ValuesMut<'a, V> { fn next_back(&mut self) -> Option<&'a mut V> { self.iter_mut.next_back().map(|e| e.1) } } impl Iterator for IntoIter { type Item = (usize, V); fn next(&mut self) -> Option<(usize, V)> { loop { match self.iter.next() { None => return None, Some((i, Some(value))) => { self.yielded += 1; return Some((i, value)) }, _ => {} } } } fn size_hint(&self) -> (usize, Option) { (self.n - self.yielded, Some(self.n - self.yielded)) } } impl ExactSizeIterator for IntoIter {} impl DoubleEndedIterator for IntoIter { fn next_back(&mut self) -> Option<(usize, V)> { loop { match self.iter.next_back() { None => return None, Some((i, Some(value))) => return Some((i, value)), _ => {} } } } } #[allow(dead_code)] fn assert_properties() { fn vec_map_covariant<'a, T>(map: VecMap<&'static T>) -> VecMap<&'a T> { map } fn into_iter_covariant<'a, T>(iter: IntoIter<&'static T>) -> IntoIter<&'a T> { iter } fn iter_covariant<'i, 'a, T>(iter: Iter<'i, &'static T>) -> Iter<'i, &'a T> { iter } fn keys_covariant<'i, 'a, T>(iter: Keys<'i, &'static T>) -> Keys<'i, &'a T> { iter } fn values_covariant<'i, 'a, T>(iter: Values<'i, &'static T>) -> Values<'i, &'a T> { iter } } #[cfg(test)] mod test { use super::VecMap; use super::Entry::{Occupied, Vacant}; use 
std::hash::{Hash, Hasher}; use std::collections::hash_map::DefaultHasher; #[test] fn test_get_mut() { let mut m = VecMap::new(); assert!(m.insert(1, 12).is_none()); assert!(m.insert(2, 8).is_none()); assert!(m.insert(5, 14).is_none()); let new = 100; match m.get_mut(5) { None => panic!(), Some(x) => *x = new } assert_eq!(m.get(5), Some(&new)); } #[test] fn test_len() { let mut map = VecMap::new(); assert_eq!(map.len(), 0); assert!(map.is_empty()); assert!(map.insert(5, 20).is_none()); assert_eq!(map.len(), 1); assert!(!map.is_empty()); assert!(map.insert(11, 12).is_none()); assert_eq!(map.len(), 2); assert!(!map.is_empty()); assert!(map.insert(14, 22).is_none()); assert_eq!(map.len(), 3); assert!(!map.is_empty()); } #[test] fn test_clear() { let mut map = VecMap::new(); assert!(map.insert(5, 20).is_none()); assert!(map.insert(11, 12).is_none()); assert!(map.insert(14, 22).is_none()); map.clear(); assert!(map.is_empty()); assert!(map.get(5).is_none()); assert!(map.get(11).is_none()); assert!(map.get(14).is_none()); } #[test] fn test_insert() { let mut m = VecMap::new(); assert_eq!(m.insert(1, 2), None); assert_eq!(m.insert(1, 3), Some(2)); assert_eq!(m.insert(1, 4), Some(3)); } #[test] fn test_remove() { let mut m = VecMap::new(); m.insert(1, 2); assert_eq!(m.remove(1), Some(2)); assert_eq!(m.remove(1), None); } #[test] fn test_keys() { let mut map = VecMap::new(); map.insert(1, 'a'); map.insert(2, 'b'); map.insert(3, 'c'); let keys: Vec<_> = map.keys().collect(); assert_eq!(keys.len(), 3); assert!(keys.contains(&1)); assert!(keys.contains(&2)); assert!(keys.contains(&3)); } #[test] fn test_values() { let mut map = VecMap::new(); map.insert(1, 'a'); map.insert(2, 'b'); map.insert(3, 'c'); let values: Vec<_> = map.values().cloned().collect(); assert_eq!(values.len(), 3); assert!(values.contains(&'a')); assert!(values.contains(&'b')); assert!(values.contains(&'c')); } #[test] fn test_iterator() { let mut m = VecMap::new(); assert!(m.insert(0, 1).is_none()); assert!(m.insert(1, 2).is_none()); assert!(m.insert(3, 5).is_none()); assert!(m.insert(6, 10).is_none()); assert!(m.insert(10, 11).is_none()); let mut it = m.iter(); assert_eq!(it.size_hint(), (5, Some(5))); assert_eq!(it.next().unwrap(), (0, &1)); assert_eq!(it.size_hint(), (4, Some(4))); assert_eq!(it.next().unwrap(), (1, &2)); assert_eq!(it.size_hint(), (3, Some(3))); assert_eq!(it.next().unwrap(), (3, &5)); assert_eq!(it.size_hint(), (2, Some(2))); assert_eq!(it.next().unwrap(), (6, &10)); assert_eq!(it.size_hint(), (1, Some(1))); assert_eq!(it.next().unwrap(), (10, &11)); assert_eq!(it.size_hint(), (0, Some(0))); assert!(it.next().is_none()); } #[test] fn test_iterator_size_hints() { let mut m = VecMap::new(); assert!(m.insert(0, 1).is_none()); assert!(m.insert(1, 2).is_none()); assert!(m.insert(3, 5).is_none()); assert!(m.insert(6, 10).is_none()); assert!(m.insert(10, 11).is_none()); assert_eq!(m.iter().size_hint(), (5, Some(5))); assert_eq!(m.iter().rev().size_hint(), (5, Some(5))); assert_eq!(m.iter_mut().size_hint(), (5, Some(5))); assert_eq!(m.iter_mut().rev().size_hint(), (5, Some(5))); } #[test] fn test_mut_iterator() { let mut m = VecMap::new(); assert!(m.insert(0, 1).is_none()); assert!(m.insert(1, 2).is_none()); assert!(m.insert(3, 5).is_none()); assert!(m.insert(6, 10).is_none()); assert!(m.insert(10, 11).is_none()); for (k, v) in &mut m { *v += k as isize; } let mut it = m.iter(); assert_eq!(it.next().unwrap(), (0, &1)); assert_eq!(it.next().unwrap(), (1, &3)); assert_eq!(it.next().unwrap(), (3, &8)); 
assert_eq!(it.next().unwrap(), (6, &16)); assert_eq!(it.next().unwrap(), (10, &21)); assert!(it.next().is_none()); } #[test] fn test_rev_iterator() { let mut m = VecMap::new(); assert!(m.insert(0, 1).is_none()); assert!(m.insert(1, 2).is_none()); assert!(m.insert(3, 5).is_none()); assert!(m.insert(6, 10).is_none()); assert!(m.insert(10, 11).is_none()); let mut it = m.iter().rev(); assert_eq!(it.next().unwrap(), (10, &11)); assert_eq!(it.next().unwrap(), (6, &10)); assert_eq!(it.next().unwrap(), (3, &5)); assert_eq!(it.next().unwrap(), (1, &2)); assert_eq!(it.next().unwrap(), (0, &1)); assert!(it.next().is_none()); } #[test] fn test_mut_rev_iterator() { let mut m = VecMap::new(); assert!(m.insert(0, 1).is_none()); assert!(m.insert(1, 2).is_none()); assert!(m.insert(3, 5).is_none()); assert!(m.insert(6, 10).is_none()); assert!(m.insert(10, 11).is_none()); for (k, v) in m.iter_mut().rev() { *v += k as isize; } let mut it = m.iter(); assert_eq!(it.next().unwrap(), (0, &1)); assert_eq!(it.next().unwrap(), (1, &3)); assert_eq!(it.next().unwrap(), (3, &8)); assert_eq!(it.next().unwrap(), (6, &16)); assert_eq!(it.next().unwrap(), (10, &21)); assert!(it.next().is_none()); } #[test] fn test_move_iter() { let mut m: VecMap> = VecMap::new(); m.insert(1, Box::new(2)); let mut called = false; for (k, v) in m { assert!(!called); called = true; assert_eq!(k, 1); assert_eq!(v, Box::new(2)); } assert!(called); } #[test] fn test_drain_iterator() { let mut map = VecMap::new(); map.insert(1, "a"); map.insert(3, "c"); map.insert(2, "b"); let vec: Vec<_> = map.drain().collect(); assert_eq!(vec, [(1, "a"), (2, "b"), (3, "c")]); assert_eq!(map.len(), 0); } #[test] fn test_append() { let mut a = VecMap::new(); a.insert(1, "a"); a.insert(2, "b"); a.insert(3, "c"); let mut b = VecMap::new(); b.insert(3, "d"); // Overwrite element from a b.insert(4, "e"); b.insert(5, "f"); a.append(&mut b); assert_eq!(a.len(), 5); assert_eq!(b.len(), 0); // Capacity shouldn't change for possible reuse assert!(b.capacity() >= 4); assert_eq!(a[1], "a"); assert_eq!(a[2], "b"); assert_eq!(a[3], "d"); assert_eq!(a[4], "e"); assert_eq!(a[5], "f"); } #[test] fn test_split_off() { // Split within the key range let mut a = VecMap::new(); a.insert(1, "a"); a.insert(2, "b"); a.insert(3, "c"); a.insert(4, "d"); let b = a.split_off(3); assert_eq!(a.len(), 2); assert_eq!(b.len(), 2); assert_eq!(a[1], "a"); assert_eq!(a[2], "b"); assert_eq!(b[3], "c"); assert_eq!(b[4], "d"); // Split at 0 a.clear(); a.insert(1, "a"); a.insert(2, "b"); a.insert(3, "c"); a.insert(4, "d"); let b = a.split_off(0); assert_eq!(a.len(), 0); assert_eq!(b.len(), 4); assert_eq!(b[1], "a"); assert_eq!(b[2], "b"); assert_eq!(b[3], "c"); assert_eq!(b[4], "d"); // Split behind max_key a.clear(); a.insert(1, "a"); a.insert(2, "b"); a.insert(3, "c"); a.insert(4, "d"); let b = a.split_off(5); assert_eq!(a.len(), 4); assert_eq!(b.len(), 0); assert_eq!(a[1], "a"); assert_eq!(a[2], "b"); assert_eq!(a[3], "c"); assert_eq!(a[4], "d"); } #[test] fn test_show() { let mut map = VecMap::new(); let empty = VecMap::::new(); map.insert(1, 2); map.insert(3, 4); let map_str = format!("{:?}", map); assert!(map_str == "{1: 2, 3: 4}" || map_str == "{3: 4, 1: 2}"); assert_eq!(format!("{:?}", empty), "{}"); } #[test] fn test_clone() { let mut a = VecMap::new(); a.insert(1, 'x'); a.insert(4, 'y'); a.insert(6, 'z'); assert_eq!(a.clone().iter().collect::>(), [(1, &'x'), (4, &'y'), (6, &'z')]); } #[test] fn test_eq() { let mut a = VecMap::new(); let mut b = VecMap::new(); assert!(a == b); 
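// Diverge the two maps step by step, then bring them back into agreement key by key.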
assert!(a.insert(0, 5).is_none()); assert!(a != b); assert!(b.insert(0, 4).is_none()); assert!(a != b); assert!(a.insert(5, 19).is_none()); assert!(a != b); assert!(!b.insert(0, 5).is_none()); assert!(a != b); assert!(b.insert(5, 19).is_none()); assert!(a == b); a = VecMap::new(); b = VecMap::with_capacity(1); assert!(a == b); } #[test] fn test_lt() { let mut a = VecMap::new(); let mut b = VecMap::new(); assert!(!(a < b) && !(b < a)); assert!(b.insert(2, 5).is_none()); assert!(a < b); assert!(a.insert(2, 7).is_none()); assert!(!(a < b) && b < a); assert!(b.insert(1, 0).is_none()); assert!(b < a); assert!(a.insert(0, 6).is_none()); assert!(a < b); assert!(a.insert(6, 2).is_none()); assert!(a < b && !(b < a)); } #[test] fn test_ord() { let mut a = VecMap::new(); let mut b = VecMap::new(); assert!(a <= b && a >= b); assert!(a.insert(1, 1).is_none()); assert!(a > b && a >= b); assert!(b < a && b <= a); assert!(b.insert(2, 2).is_none()); assert!(b > a && b >= a); assert!(a < b && a <= b); } #[test] fn test_hash() { fn hash(t: &T) -> u64 { let mut s = DefaultHasher::new(); t.hash(&mut s); s.finish() } let mut x = VecMap::new(); let mut y = VecMap::new(); assert!(hash(&x) == hash(&y)); x.insert(1, 'a'); x.insert(2, 'b'); x.insert(3, 'c'); y.insert(3, 'c'); y.insert(2, 'b'); y.insert(1, 'a'); assert!(hash(&x) == hash(&y)); x.insert(1000, 'd'); x.remove(1000); assert!(hash(&x) == hash(&y)); } #[test] fn test_from_iter() { let xs = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]; let map: VecMap<_> = xs.iter().cloned().collect(); for &(k, v) in &xs { assert_eq!(map.get(k), Some(&v)); } } #[test] fn test_index() { let mut map = VecMap::new(); map.insert(1, 2); map.insert(2, 1); map.insert(3, 4); assert_eq!(map[3], 4); } #[test] #[should_panic] fn test_index_nonexistent() { let mut map = VecMap::new(); map.insert(1, 2); map.insert(2, 1); map.insert(3, 4); map[4]; } #[test] fn test_entry() { let xs = [(1, 10), (2, 20), (3, 30), (4, 40), (5, 50), (6, 60)]; let mut map: VecMap<_> = xs.iter().cloned().collect(); // Existing key (insert) match map.entry(1) { Vacant(_) => unreachable!(), Occupied(mut view) => { assert_eq!(view.get(), &10); assert_eq!(view.insert(100), 10); } } assert_eq!(map.get(1).unwrap(), &100); assert_eq!(map.len(), 6); // Existing key (update) match map.entry(2) { Vacant(_) => unreachable!(), Occupied(mut view) => { let v = view.get_mut(); *v *= 10; } } assert_eq!(map.get(2).unwrap(), &200); assert_eq!(map.len(), 6); // Existing key (take) match map.entry(3) { Vacant(_) => unreachable!(), Occupied(view) => { assert_eq!(view.remove(), 30); } } assert_eq!(map.get(3), None); assert_eq!(map.len(), 5); // Inexistent key (insert) match map.entry(10) { Occupied(_) => unreachable!(), Vacant(view) => { assert_eq!(*view.insert(1000), 1000); } } assert_eq!(map.get(10).unwrap(), &1000); assert_eq!(map.len(), 6); } #[test] fn test_extend_ref() { let mut a = VecMap::new(); a.insert(1, "one"); let mut b = VecMap::new(); b.insert(2, "two"); b.insert(3, "three"); a.extend(&b); assert_eq!(a.len(), 3); assert_eq!(a[&1], "one"); assert_eq!(a[&2], "two"); assert_eq!(a[&3], "three"); } #[test] #[cfg(feature = "serde")] fn test_serde() { use serde::{Serialize, Deserialize}; fn impls_serde_traits<'de, S: Serialize + Deserialize<'de>>() {} impls_serde_traits::>(); } #[test] fn test_retain() { let mut map = VecMap::new(); map.insert(1, "one"); map.insert(2, "two"); map.insert(3, "three"); map.retain(|k, v| match k { 1 => false, 2 => { *v = "two changed"; true }, 3 => false, _ => panic!(), }); 
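// Only key 2 should survive the predicate, and its value should have been rewritten in place.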
assert_eq!(map.len(), 1); assert_eq!(map.get(1), None); assert_eq!(map[2], "two changed"); assert_eq!(map.get(3), None); } } vendor/vec_map/LICENSE-MIT0000664000175000017500000000205714160055207015721 0ustar mwhudsonmwhudsonCopyright (c) 2015 The Rust Project Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/vec_map/README.md0000664000175000017500000000077514160055207015551 0ustar mwhudsonmwhudson**WARNING: THIS PROJECT IS IN MAINTENANCE MODE, DUE TO INSUFFICIENT MAINTAINER RESOURCES** It works fine, but will generally no longer be improved. We are currently only accepting changes which: * keep this compiling with the latest versions of Rust or its dependencies. * have minimal review requirements, such as documentation changes (so not totally new APIs). ------ A simple map based on a vector for small integer keys. Documentation is available at https://contain-rs.github.io/vec-map/vec_map. vendor/pretty_env_logger/0000775000175000017500000000000014160055207016405 5ustar mwhudsonmwhudsonvendor/pretty_env_logger/.cargo-checksum.json0000664000175000017500000000013114160055207022244 0ustar mwhudsonmwhudson{"files":{},"package":"926d36b9553851b8b0005f1275891b392ee4d2d833852c417ed025477350fb9d"}vendor/pretty_env_logger/LICENSE-APACHE0000664000175000017500000002513714160055207020341 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/pretty_env_logger/Cargo.toml0000664000175000017500000000177414160055207020346 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "pretty_env_logger" version = "0.4.0" authors = ["Sean McArthur "] include = ["Cargo.toml", "LICENSE-APACHE", "LICENSE-MIT", "src/**/*"] description = "a visually pretty env_logger" readme = "README.md" keywords = ["log", "logger", "logging"] categories = ["development-tools::debugging"] license = "MIT/Apache-2.0" repository = "https://github.com/seanmonstar/pretty-env-logger" [dependencies.env_logger] version = "0.7.0" [dependencies.log] version = "0.4" vendor/pretty_env_logger/src/0000775000175000017500000000000014160055207017174 5ustar mwhudsonmwhudsonvendor/pretty_env_logger/src/lib.rs0000664000175000017500000001665214160055207020322 0ustar mwhudsonmwhudson#![cfg_attr(test, deny(warnings))] #![deny(missing_docs)] #![doc(html_root_url = "https://docs.rs/pretty_env_logger/0.4.0")] //! A logger configured via an environment variable which writes to standard //! error with nice colored output for log levels. //! //! ## Example //! //! ``` //! extern crate pretty_env_logger; //! #[macro_use] extern crate log; //! //! fn main() { //! pretty_env_logger::init(); //! //! trace!("a trace example"); //! debug!("deboogging"); //! info!("such information"); //! warn!("o_O"); //! error!("boom"); //! } //! ``` //! //! Run the program with the environment variable `RUST_LOG=trace`. //! //! ## Defaults //! //! The defaults can be setup by calling `init()` or `try_init()` at the start //! of the program. //! //! ## Enable logging //! //! This crate uses [env_logger][] internally, so the same ways of enabling //! logs through an environment variable are supported. //! //! [env_logger]: https://docs.rs/env_logger #[doc(hidden)] pub extern crate env_logger; extern crate log; use std::fmt; use std::sync::atomic::{AtomicUsize, Ordering}; use env_logger::{fmt::{Color, Style, StyledValue}, Builder}; use log::Level; /// Initializes the global logger with a pretty env logger. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Panics /// /// This function fails to set the global logger if one has already been set. pub fn init() { try_init().unwrap(); } /// Initializes the global logger with a timed pretty env logger. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Panics /// /// This function fails to set the global logger if one has already been set. pub fn init_timed() { try_init_timed().unwrap(); } /// Initializes the global logger with a pretty env logger. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Errors /// /// This function fails to set the global logger if one has already been set. pub fn try_init() -> Result<(), log::SetLoggerError> { try_init_custom_env("RUST_LOG") } /// Initializes the global logger with a timed pretty env logger. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Errors /// /// This function fails to set the global logger if one has already been set. 
pub fn try_init_timed() -> Result<(), log::SetLoggerError> { try_init_timed_custom_env("RUST_LOG") } /// Initialized the global logger with a pretty env logger, with a custom variable name. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Panics /// /// This function fails to set the global logger if one has already been set. pub fn init_custom_env(environment_variable_name: &str) { try_init_custom_env(environment_variable_name).unwrap(); } /// Initialized the global logger with a pretty env logger, with a custom variable name. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Errors /// /// This function fails to set the global logger if one has already been set. pub fn try_init_custom_env(environment_variable_name: &str) -> Result<(), log::SetLoggerError> { let mut builder = formatted_builder(); if let Ok(s) = ::std::env::var(environment_variable_name) { builder.parse_filters(&s); } builder.try_init() } /// Initialized the global logger with a timed pretty env logger, with a custom variable name. /// /// This should be called early in the execution of a Rust program, and the /// global logger may only be initialized once. Future initialization attempts /// will return an error. /// /// # Errors /// /// This function fails to set the global logger if one has already been set. pub fn try_init_timed_custom_env(environment_variable_name: &str) -> Result<(), log::SetLoggerError> { let mut builder = formatted_timed_builder(); if let Ok(s) = ::std::env::var(environment_variable_name) { builder.parse_filters(&s); } builder.try_init() } /// Returns a `env_logger::Builder` for further customization. /// /// This method will return a colored and formatted `env_logger::Builder` /// for further customization. Refer to env_logger::Build crate documentation /// for further details and usage. pub fn formatted_builder() -> Builder { let mut builder = Builder::new(); builder.format(|f, record| { use std::io::Write; let target = record.target(); let max_width = max_target_width(target); let mut style = f.style(); let level = colored_level(&mut style, record.level()); let mut style = f.style(); let target = style.set_bold(true).value(Padded { value: target, width: max_width, }); writeln!( f, " {} {} > {}", level, target, record.args(), ) }); builder } /// Returns a `env_logger::Builder` for further customization. /// /// This method will return a colored and time formatted `env_logger::Builder` /// for further customization. Refer to env_logger::Build crate documentation /// for further details and usage. 
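///
/// A minimal sketch of customizing the returned builder before initializing it
/// (assuming `env_logger`'s `filter_level` and `filter_module` builder methods
/// and `log::LevelFilter`; the module name is illustrative):
///
/// ```rust,no_run
/// # extern crate pretty_env_logger;
/// # extern crate log;
/// let mut builder = pretty_env_logger::formatted_timed_builder();
/// // Log `info` and above globally, plus `trace` output for one module.
/// builder.filter_level(log::LevelFilter::Info);
/// builder.filter_module("my_crate::net", log::LevelFilter::Trace);
/// builder.init();
/// ```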
pub fn formatted_timed_builder() -> Builder { let mut builder = Builder::new(); builder.format(|f, record| { use std::io::Write; let target = record.target(); let max_width = max_target_width(target); let mut style = f.style(); let level = colored_level(&mut style, record.level()); let mut style = f.style(); let target = style.set_bold(true).value(Padded { value: target, width: max_width, }); let time = f.timestamp_millis(); writeln!( f, " {} {} {} > {}", time, level, target, record.args(), ) }); builder } struct Padded<T> { value: T, width: usize, } impl<T: fmt::Display> fmt::Display for Padded<T> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{: <width$}", self.value, width=self.width) } } static MAX_MODULE_WIDTH: AtomicUsize = AtomicUsize::new(0); fn max_target_width(target: &str) -> usize { let max_width = MAX_MODULE_WIDTH.load(Ordering::Relaxed); if max_width < target.len() { MAX_MODULE_WIDTH.store(target.len(), Ordering::Relaxed); target.len() } else { max_width } } fn colored_level<'a>(style: &'a mut Style, level: Level) -> StyledValue<'a, &'static str> { match level { Level::Trace => style.set_color(Color::Magenta).value("TRACE"), Level::Debug => style.set_color(Color::Blue).value("DEBUG"), Level::Info => style.set_color(Color::Green).value("INFO "), Level::Warn => style.set_color(Color::Yellow).value("WARN "), Level::Error => style.set_color(Color::Red).value("ERROR"), } } vendor/pretty_env_logger/LICENSE-MIT0000664000175000017500000000204214160055207020037 0ustar mwhudsonmwhudsonCopyright (c) 2017 Sean McArthur Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/sized-chunks/0000775000175000017500000000000014160055207015256 5ustar mwhudsonmwhudsonvendor/sized-chunks/.cargo-checksum.json0000664000175000017500000000013114160055207021115 0ustar mwhudsonmwhudson{"files":{},"package":"16d69225bde7a69b235da73377861095455d298f2b970996eec25ddbb42b3d1e"}vendor/sized-chunks/CODE_OF_CONDUCT.md0000664000175000017500000000623214160055207020060 0ustar mwhudsonmwhudson# Contributor Covenant Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 
## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at admin@immutable.rs. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html [homepage]: https://www.contributor-covenant.org vendor/sized-chunks/Cargo.toml0000664000175000017500000000246014160055207017210 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "sized-chunks" version = "0.6.5" authors = ["Bodil Stokke "] exclude = ["release.toml", "proptest-regressions/**"] description = "Efficient sized chunk datatypes" documentation = "http://docs.rs/sized-chunks" readme = "./README.md" keywords = ["sparse-array"] categories = ["data-structures"] license = "MPL-2.0+" repository = "https://github.com/bodil/sized-chunks" [package.metadata.docs.rs] all-features = true #[dependencies.arbitrary] #version = "1.0.0" #optional = true #[dependencies.array-ops] #version = "0.1.0" #optional = true [dependencies.bitmaps] version = "2.1.0" #[dependencies.refpool] #version = "0.4.3" #optional = true [dependencies.typenum] version = "1.12.0" [features] default = ["std"] #ringbuffer = ["array-ops"] std = [] vendor/sized-chunks/LICENCE.md0000664000175000017500000003627614160055207016660 0ustar mwhudsonmwhudsonMozilla Public License Version 2.0 ================================== ### 1. Definitions **1.1. “Contributorâ€** means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. **1.2. “Contributor Versionâ€** means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. **1.3. “Contributionâ€** means Covered Software of a particular Contributor. **1.4. “Covered Softwareâ€** means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. **1.5. “Incompatible With Secondary Licensesâ€** means * **(a)** that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or * **(b)** that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. **1.6. “Executable Formâ€** means any form of the work other than Source Code Form. **1.7. “Larger Workâ€** means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. **1.8. “Licenseâ€** means this document. **1.9. “Licensableâ€** means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. **1.10. “Modificationsâ€** means any of the following: * **(a)** any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or * **(b)** any new file in Source Code Form that contains any Covered Software. **1.11. “Patent Claims†of a Contributor** means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. **1.12. “Secondary Licenseâ€** means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. **1.13. “Source Code Formâ€** means the form of the work preferred for making modifications. **1.14. 
“You†(or “Yourâ€)** means an individual or a legal entity exercising rights under this License. For legal entities, “You†includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control†means **(a)** the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or **(b)** ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. ### 2. License Grants and Conditions #### 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: * **(a)** under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and * **(b)** under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. #### 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. #### 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: * **(a)** for any code that a Contributor has removed from Covered Software; or * **(b)** for infringements caused by: **(i)** Your and any other third party's modifications of Covered Software, or **(ii)** the combination of its Contributions with other software (except as part of its Contributor Version); or * **(c)** under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). #### 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). #### 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. #### 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. #### 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. ### 3. Responsibilities #### 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. #### 3.2. 
Distribution of Executable Form If You distribute Covered Software in Executable Form then: * **(a)** such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and * **(b)** You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. #### 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). #### 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. #### 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. ### 4. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: **(a)** comply with the terms of this License to the maximum extent possible; and **(b)** describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. ### 5. Termination **5.1.** The rights granted under this License will terminate automatically if You fail to comply with any of its terms. 
However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated **(a)** provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and **(b)** on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. **5.2.** If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. **5.3.** In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. ### 6. Disclaimer of Warranty > Covered Software is provided under this License on an “as is†> basis, without warranty of any kind, either expressed, implied, or > statutory, including, without limitation, warranties that the > Covered Software is free of defects, merchantable, fit for a > particular purpose or non-infringing. The entire risk as to the > quality and performance of the Covered Software is with You. > Should any Covered Software prove defective in any respect, You > (not any Contributor) assume the cost of any necessary servicing, > repair, or correction. This disclaimer of warranty constitutes an > essential part of this License. No use of any Covered Software is > authorized under this License except under this disclaimer. ### 7. Limitation of Liability > Under no circumstances and under no legal theory, whether tort > (including negligence), contract, or otherwise, shall any > Contributor, or anyone who distributes Covered Software as > permitted above, be liable to You for any direct, indirect, > special, incidental, or consequential damages of any character > including, without limitation, damages for lost profits, loss of > goodwill, work stoppage, computer failure or malfunction, or any > and all other commercial damages or losses, even if such party > shall have been informed of the possibility of such damages. This > limitation of liability shall not apply to liability for death or > personal injury resulting from such party's negligence to the > extent applicable law prohibits such limitation. Some > jurisdictions do not allow the exclusion or limitation of > incidental or consequential damages, so this exclusion and > limitation may not apply to You. ### 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. ### 9. 
Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. ### 10. Versions of the License #### 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. #### 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. #### 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). #### 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. ## Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. ## Exhibit B - “Incompatible With Secondary Licenses†Notice This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0. vendor/sized-chunks/CHANGELOG.md0000664000175000017500000001676114160055207017102 0ustar mwhudsonmwhudson# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). ## [0.6.5] - 2021-04-16 - When `InlineArray` cannot hold any values because of misalignment, report it as capacity 0 instead of panicking at runtime. (#22) ## [0.6.4] - 2021-02-17 ### FIXED - `InlineArray` can be used in recursive types again. ### CHANGED - `InlineArray::new()` now panics when it can't store elements with large alignment (this was UB prior to 0.6.3). Alignments of `usize` and smaller are always supported. Larger alignments are supported if the capacity-providing type has sufficient alignment. ## [0.6.3] - 2021-02-14 ### FIXED - Multilple soundness fixes: `InlineArray` handles large alignment, panic safety in `Chunk`'s `clone` and `from_iter`, capacity checks in `unit()`, `pair()` and `from()`. - `InlineArray` can now handle zero sized values. 
This relies on conditionals in const functions, a feature which was introduced in Rust 1.46.0, which means this is now the minimum Rust version this crate will work on. ## [0.6.2] - 2020-05-15 ### FIXED - This release exists for no other purpose than to bump the `refpool` optional dependency. ## [0.6.1] - 2020-03-26 ### ADDED - The crate now has a `std` feature flag, which is on by default, and will make the crate `no_std` if disabled. ### FIXED - Fixed a compilation error if you had the `arbitrary` feature flag enabled without the `ringbuffer` flag. ## [0.6.0] - 2020-03-24 ### CHANGED - `RingBuffer` and its accompanying slice types `Slice` and `SliceMut` now implement `Array` and `ArrayMut` from [`array-ops`](http://docs.rs/array-ops), giving them most of the methods that would be available on primitive slice types and cutting down on code duplication in the implementation, but at the price of having to pull `Array` et al into scope when you need them. Because this means adding a dependency to `array-ops`, `RingBuffer` has now been moved behind the `ringbuffer` feature flag. `Chunk` and `InlineArray` don't and won't implement `Array`, because they are both able to implement `Deref<[A]>`, which provides the same functionality more efficiently. ### ADDED - The `insert_from` and `insert_ordered` methods recently added to `Chunk` have now also been added to `RingBuffer`. - `RingBuffer`'s `Slice` and `SliceMut` now also have the three `binary_search` methods regular slices have. - `SparseChunk`, `RingBuffer`, `Slice` and `SliceMut` now have unsafe `get_unchecked` and `get_unchecked_mut` methods. - `PartialEq` implementations allowing you to compare `RingBuffer`s, `Slice`s and `SliceMut`s interchangeably have been added. ### FIXED - Fixed an aliasing issue in `RingBuffer`'s mutable iterator, as uncovered by Miri. Behind the scenes, the full non-fuzzing unit test suite is now able to run on Miri without crashing it (after migrating the last Proptest tests away from the test suite into the fuzz targets), and this has been included in its CI build. (#6) ## [0.5.3] - 2020-03-11 ### FIXED - Debug only assertions made it into the previous release by accident, and this change has been reverted. (#7) ## [0.5.2] - 2020-03-10 ### ADDED - `Chunk` now has an `insert_from` method for inserting multiple values at an index in one go. - `Chunk` now also has an `insert_ordered` method for inserting values into a sorted chunk. - `SparseChunk` now has the methods `option_iter()`, `option_iter_mut()` and `option_drain()` with their corresponding iterators to iterate over a chunk as if it were an array of `Option`s. - [`Arbitrary`](https://docs.rs/arbitrary/latest/arbitrary/trait.Arbitrary.html) implementations for all data types have been added behind the `arbitrary` feature flag. ### FIXED - Internal consistency assertions are now only performed in debug mode (like with `debug_assert!`). This means `sized_chunks` will no longer cause panics in release mode when you do things like pushing to a full chunk, but do bad and undefined things instead. It also means a very slight performance gain. ## [0.5.1] - 2019-12-12 ### ADDED - `PoolDefault` and `PoolClone` implementations, from the [`refpool`](https://crates.io/crates/refpool) crate, are available for `Chunk`, `SparseChunk` and `RingBuffer`, behind the `refpool` feature flag. ## [0.5.0] - 2019-09-09 ### CHANGED - The `Bitmap` type (and its helper type, `Bits`) has been split off into a separate crate, named `bitmaps`. If you need it, it's in that crate now. 
`sized-chunks` does not re-export it. Of course, this means `sized-chunks` has gained `bitmaps` as its second hard dependency. ## [0.4.0] - 2019-09-02 ### CHANGED - The 0.3.2 release increased the minimum rustc version required, which should have been a major version bump, so 0.3.2 is being yanked and re-tagged as 0.4.0. ## [0.3.2] - 2019-08-29 ### ADDED - Chunk/bitmap sizes up to 1024 are now supported. ### FIXED - Replaced `ManuallyDrop` in implementations with `MaybeUninit`, along with a general unsafe code cleanup. (#3) ## [0.3.1] - 2019-08-03 ### ADDED - Chunk sizes up to 256 are now supported. ## [0.3.0] - 2019-05-18 ### ADDED - A new data structure, `InlineArray`, which is a stack allocated array matching the size of a given type, intended for optimising for the case of very small vectors. - `Chunk` has an implementation of `From` which is considerably faster than going via iterators. ## [0.2.2] - 2019-05-10 ### ADDED - `Slice::get` methods now return references with the lifetime of the underlying `RingBuffer` rather than the lifetime of the slice. ## [0.2.1] - 2019-04-15 ### ADDED - A lot of documentation. - `std::io::Read` implementations for `Chunk` and `RingBuffer` to match their `Write` implementations. ## [0.2.0] - 2019-04-14 ### CHANGED - The `capacity()` method has been replacied with a `CAPACITY` const on each type. ### ADDED - There is now a `RingBuffer` implementation, which should be nearly a drop-in replacement for `SizedChunk` but is always O(1) on push and cannot be dereferenced to slices (but it has a set of custom slice-like implementations to make that less of a drawback). - The `Drain` iterator for `SizedChunk` now implements `DoubleEndedIterator`. ### FIXED - `SizedChunk::drain_from_front/back` will now always panic if the iterator underflows, instead of only doing it in debug mode. ## [0.1.3] - 2019-04-12 ### ADDED - `SparseChunk` now has a default length of `U64`. - `Chunk` now has `PartialEq` defined for anything that can be borrowed as a slice. - `SparseChunk` likewise has `PartialEq` defined for `BTreeMap` and `HashMap`. These are intended for debugging and aren't optimally `efficient. - `Chunk` and `SparseChunk` now have a new method `capacity()` which returns its maximum capacity (the number in the type) as a usize. - Added an `entries()` method to `SparseChunk`. - `SparseChunk` now has a `Debug` implementation. ### FIXED - Extensive integration tests were added for `Chunk` and `SparseChunk`. - `Chunk::clear` is now very slightly faster. ## [0.1.2] - 2019-03-11 ### FIXED - Fixed an alignment issue in `Chunk::drain_from_back`. (#1) ## [0.1.1] - 2019-02-19 ### FIXED - Some 2018 edition issues. ## [0.1.0] - 2019-02-19 Initial release. 
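For reference, a minimal sketch of the `insert_ordered` and `insert_from` APIs added in 0.5.2 (assuming the default `std` feature and the `U64` capacity used in the crate's own doc examples; the spliced values and the final assertion are illustrative, not taken from upstream):

```rust
use std::iter::FromIterator;
use sized_chunks::Chunk;
use typenum::U64;

fn main() {
    // A chunk with a fixed capacity of 64 elements, seeded from an iterator.
    let mut chunk = Chunk::<i32, U64>::from_iter(0..5);
    // Insert into an already sorted chunk, keeping it sorted.
    chunk.insert_ordered(3);
    assert_eq!(&[0, 1, 2, 3, 3, 4], chunk.as_slice());
    // Splice several values in at one index in a single O(m+n) pass.
    chunk.insert_from(2, vec![10, 11]);
    assert_eq!(&[0, 1, 10, 11, 2, 3, 3, 4], chunk.as_slice());
}
```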
vendor/sized-chunks/debian/0000775000175000017500000000000014160055207016500 5ustar mwhudsonmwhudsonvendor/sized-chunks/debian/patches/0000775000175000017500000000000014160055207020127 5ustar mwhudsonmwhudsonvendor/sized-chunks/debian/patches/series0000664000175000017500000000002614160055207021342 0ustar mwhudsonmwhudsondisable-features.diff vendor/sized-chunks/debian/patches/disable-features.diff0000664000175000017500000000155414160055207024205 0ustar mwhudsonmwhudsonIndex: sized-chunks/Cargo.toml =================================================================== --- sized-chunks.orig/Cargo.toml +++ sized-chunks/Cargo.toml @@ -25,25 +25,25 @@ license = "MPL-2.0+" repository = "https://github.com/bodil/sized-chunks" [package.metadata.docs.rs] all-features = true -[dependencies.arbitrary] -version = "1.0.0" -optional = true +#[dependencies.arbitrary] +#version = "1.0.0" +#optional = true -[dependencies.array-ops] -version = "0.1.0" -optional = true +#[dependencies.array-ops] +#version = "0.1.0" +#optional = true [dependencies.bitmaps] version = "2.1.0" -[dependencies.refpool] -version = "0.4.3" -optional = true +#[dependencies.refpool] +#version = "0.4.3" +#optional = true [dependencies.typenum] version = "1.12.0" [features] default = ["std"] -ringbuffer = ["array-ops"] +#ringbuffer = ["array-ops"] std = [] vendor/sized-chunks/src/0000775000175000017500000000000014160055207016045 5ustar mwhudsonmwhudsonvendor/sized-chunks/src/ring_buffer/0000775000175000017500000000000014160055207020335 5ustar mwhudsonmwhudsonvendor/sized-chunks/src/ring_buffer/mod.rs0000664000175000017500000010440114160055207021462 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity ring buffer. //! //! See [`RingBuffer`](struct.RingBuffer.html) use core::borrow::Borrow; use core::cmp::Ordering; use core::fmt::{Debug, Error, Formatter}; use core::hash::{Hash, Hasher}; use core::iter::FromIterator; use core::mem::{replace, MaybeUninit}; use core::ops::{Bound, Range, RangeBounds}; use core::ops::{Index, IndexMut}; use typenum::U64; pub use array_ops::{Array, ArrayMut, HasLength}; use crate::types::ChunkLength; mod index; use index::{IndexIter, RawIndex}; mod iter; pub use iter::{Drain, Iter, IterMut, OwnedIter}; mod slice; pub use slice::{Slice, SliceMut}; #[cfg(feature = "refpool")] mod refpool; /// A fixed capacity ring buffer. /// /// A ring buffer is an array where the first logical index is at some arbitrary /// location inside the array, and the indices wrap around to the start of the /// array once they overflow its bounds. /// /// This gives us the ability to push to either the front or the end of the /// array in constant time, at the cost of losing the ability to get a single /// contiguous slice reference to the contents. /// /// It differs from the [`Chunk`][Chunk] in that the latter will have mostly /// constant time pushes, but may occasionally need to shift its contents around /// to make room. They both have constant time pop, and they both have linear /// time insert and remove. 
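///
/// A minimal usage sketch (assuming the `ringbuffer` feature is enabled, the
/// type is reachable at `sized_chunks::ring_buffer::RingBuffer`, and the
/// default `U64` capacity parameter; it exercises the constant-time push/pop
/// operations described above):
///
/// ```rust
/// # use sized_chunks::ring_buffer::RingBuffer;
/// # use typenum::U64;
/// let mut buffer: RingBuffer<i32, U64> = RingBuffer::new();
/// buffer.push_back(2);
/// buffer.push_front(1);
/// assert_eq!(Some(1), buffer.pop_front());
/// assert_eq!(Some(2), buffer.pop_front());
/// assert_eq!(None, buffer.pop_front());
/// ```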
/// /// The `RingBuffer` offers its own [`Slice`][Slice] and [`SliceMut`][SliceMut] /// types to compensate for the loss of being able to take a slice, but they're /// somewhat less efficient, so the general rule should be that you shouldn't /// choose a `RingBuffer` if you rely heavily on slices - but if you don't, /// it's probably a marginally better choice overall than [`Chunk`][Chunk]. /// /// # Feature Flag /// /// To use this data structure, you need to enable the `ringbuffer` feature. /// /// [Chunk]: ../sized_chunk/struct.Chunk.html /// [Slice]: struct.Slice.html /// [SliceMut]: struct.SliceMut.html pub struct RingBuffer where N: ChunkLength, { origin: RawIndex, length: usize, data: MaybeUninit, } impl> Drop for RingBuffer { #[inline] fn drop(&mut self) { if core::mem::needs_drop::() { for i in self.range() { unsafe { self.force_drop(i) } } } } } impl HasLength for RingBuffer where N: ChunkLength, { /// Get the length of the ring buffer. #[inline] #[must_use] fn len(&self) -> usize { self.length } } impl Array for RingBuffer where N: ChunkLength, { /// Get a reference to the value at a given index. #[must_use] fn get(&self, index: usize) -> Option<&A> { if index >= self.len() { None } else { Some(unsafe { self.get_unchecked(index) }) } } } impl ArrayMut for RingBuffer where N: ChunkLength, { /// Get a mutable reference to the value at a given index. #[must_use] fn get_mut(&mut self, index: usize) -> Option<&mut A> { if index >= self.len() { None } else { Some(unsafe { self.get_unchecked_mut(index) }) } } } impl RingBuffer where N: ChunkLength, { /// The capacity of this ring buffer, as a `usize`. pub const CAPACITY: usize = N::USIZE; /// Get the raw index for a logical index. #[inline] fn raw(&self, index: usize) -> RawIndex { self.origin + index } #[inline] unsafe fn ptr(&self, index: RawIndex) -> *const A { debug_assert!(index.to_usize() < Self::CAPACITY); (&self.data as *const _ as *const A).add(index.to_usize()) } #[inline] unsafe fn mut_ptr(&mut self, index: RawIndex) -> *mut A { debug_assert!(index.to_usize() < Self::CAPACITY); (&mut self.data as *mut _ as *mut A).add(index.to_usize()) } /// Drop the value at a raw index. #[inline] unsafe fn force_drop(&mut self, index: RawIndex) { core::ptr::drop_in_place(self.mut_ptr(index)) } /// Copy the value at a raw index, discarding ownership of the copied value #[inline] unsafe fn force_read(&self, index: RawIndex) -> A { core::ptr::read(self.ptr(index)) } /// Write a value at a raw index without trying to drop what's already there #[inline] unsafe fn force_write(&mut self, index: RawIndex, value: A) { core::ptr::write(self.mut_ptr(index), value) } /// Copy a range of raw indices from another buffer. 
unsafe fn copy_from( &mut self, source: &mut Self, from: RawIndex, to: RawIndex, count: usize, ) { #[inline] unsafe fn force_copy_to>( source: &mut RingBuffer, from: RawIndex, target: &mut RingBuffer, to: RawIndex, count: usize, ) { if count > 0 { debug_assert!(from.to_usize() + count <= RingBuffer::::CAPACITY); debug_assert!(to.to_usize() + count <= RingBuffer::::CAPACITY); core::ptr::copy_nonoverlapping(source.mut_ptr(from), target.mut_ptr(to), count) } } if from.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - from.to_usize(); let last_length = count - first_length; self.copy_from(source, from, to, first_length); self.copy_from(source, 0.into(), to + first_length, last_length); } else if to.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - to.to_usize(); let last_length = count - first_length; force_copy_to(source, from, self, to, first_length); force_copy_to(source, from + first_length, self, 0.into(), last_length); } else { force_copy_to(source, from, self, to, count); } } /// Copy values from a slice. #[allow(dead_code)] unsafe fn copy_from_slice(&mut self, source: &[A], to: RawIndex) { let count = source.len(); debug_assert!(to.to_usize() + count <= Self::CAPACITY); if to.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - to.to_usize(); let first_slice = &source[..first_length]; let last_slice = &source[first_length..]; core::ptr::copy_nonoverlapping( first_slice.as_ptr(), self.mut_ptr(to), first_slice.len(), ); core::ptr::copy_nonoverlapping( last_slice.as_ptr(), self.mut_ptr(0.into()), last_slice.len(), ); } else { core::ptr::copy_nonoverlapping(source.as_ptr(), self.mut_ptr(to), count) } } /// Get an iterator over the raw indices of the buffer from left to right. #[inline] fn range(&self) -> IndexIter { IndexIter { remaining: self.len(), left_index: self.origin, right_index: self.origin + self.len(), } } /// Construct an empty ring buffer. #[inline] #[must_use] pub fn new() -> Self { Self { origin: 0.into(), length: 0, data: MaybeUninit::uninit(), } } /// Construct a ring buffer with a single item. #[inline] #[must_use] pub fn unit(value: A) -> Self { assert!(Self::CAPACITY >= 1); let mut buffer = Self { origin: 0.into(), length: 1, data: MaybeUninit::uninit(), }; unsafe { buffer.force_write(0.into(), value); } buffer } /// Construct a ring buffer with two items. #[inline] #[must_use] pub fn pair(value1: A, value2: A) -> Self { assert!(Self::CAPACITY >= 2); let mut buffer = Self { origin: 0.into(), length: 2, data: MaybeUninit::uninit(), }; unsafe { buffer.force_write(0.into(), value1); buffer.force_write(1.into(), value2); } buffer } /// Construct a new ring buffer and move every item from `other` into the /// new buffer. /// /// Time: O(n) #[inline] #[must_use] pub fn drain_from(other: &mut Self) -> Self { Self::from_front(other, other.len()) } /// Construct a new ring buffer and populate it by taking `count` items from /// the iterator `iter`. /// /// Panics if the iterator contains less than `count` items. /// /// Time: O(n) #[must_use] pub fn collect_from(iter: &mut I, count: usize) -> Self where I: Iterator, { let buffer = Self::from_iter(iter.take(count)); if buffer.len() < count { panic!("RingBuffer::collect_from: underfull iterator"); } buffer } /// Construct a new ring buffer and populate it by taking `count` items from /// the front of `other`. 
/// /// Time: O(n) for the number of items moved #[must_use] pub fn from_front(other: &mut Self, count: usize) -> Self { let mut buffer = Self::new(); buffer.drain_from_front(other, count); buffer } /// Construct a new ring buffer and populate it by taking `count` items from /// the back of `other`. /// /// Time: O(n) for the number of items moved #[must_use] pub fn from_back(other: &mut Self, count: usize) -> Self { let mut buffer = Self::new(); buffer.drain_from_back(other, count); buffer } /// Test if the ring buffer is full. #[inline] #[must_use] pub fn is_full(&self) -> bool { self.len() == Self::CAPACITY } /// Get an iterator over references to the items in the ring buffer in /// order. #[inline] #[must_use] pub fn iter(&self) -> Iter<'_, A, N> { Iter { buffer: self, left_index: self.origin, right_index: self.origin + self.len(), remaining: self.len(), } } /// Get an iterator over mutable references to the items in the ring buffer /// in order. #[inline] #[must_use] pub fn iter_mut(&mut self) -> IterMut<'_, A, N> { IterMut::new(self) } #[must_use] fn parse_range>(&self, range: R) -> Range { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => 0, Bound::Included(index) => *index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.len(), Bound::Included(index) => *index + 1, Bound::Excluded(index) => *index, }, }; if new_range.end > self.len() || new_range.start > new_range.end { panic!("Slice::parse_range: index out of bounds"); } new_range } /// Get a `Slice` for a subset of the ring buffer. #[must_use] pub fn slice>(&self, range: R) -> Slice<'_, A, N> { Slice { buffer: self, range: self.parse_range(range), } } /// Get a `SliceMut` for a subset of the ring buffer. #[must_use] pub fn slice_mut>(&mut self, range: R) -> SliceMut<'_, A, N> { SliceMut { range: self.parse_range(range), buffer: self, } } /// Get an unchecked reference to the value at the given index. /// /// # Safety /// /// You must ensure the index is not out of bounds. #[must_use] pub unsafe fn get_unchecked(&self, index: usize) -> &A { &*self.ptr(self.raw(index)) } /// Get an unchecked mutable reference to the value at the given index. /// /// # Safety /// /// You must ensure the index is not out of bounds. #[must_use] pub unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut A { &mut *self.mut_ptr(self.raw(index)) } /// Push a value to the back of the buffer. /// /// Panics if the capacity of the buffer is exceeded. /// /// Time: O(1) pub fn push_back(&mut self, value: A) { if self.is_full() { panic!("RingBuffer::push_back: can't push to a full buffer") } else { unsafe { self.force_write(self.raw(self.length), value) } self.length += 1; } } /// Push a value to the front of the buffer. /// /// Panics if the capacity of the buffer is exceeded. /// /// Time: O(1) pub fn push_front(&mut self, value: A) { if self.is_full() { panic!("RingBuffer::push_front: can't push to a full buffer") } else { let origin = self.origin.dec(); self.length += 1; unsafe { self.force_write(origin, value) } } } /// Pop a value from the back of the buffer. /// /// Returns `None` if the buffer is empty. /// /// Time: O(1) pub fn pop_back(&mut self) -> Option { if self.is_empty() { None } else { self.length -= 1; Some(unsafe { self.force_read(self.raw(self.length)) }) } } /// Pop a value from the front of the buffer. /// /// Returns `None` if the buffer is empty. 
/// /// Time: O(1) pub fn pop_front(&mut self) -> Option { if self.is_empty() { None } else { self.length -= 1; let index = self.origin.inc(); Some(unsafe { self.force_read(index) }) } } /// Discard all items up to but not including `index`. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_left(&mut self, index: usize) { if index > 0 { if index > self.len() { panic!("RingBuffer::drop_left: index out of bounds"); } for i in self.range().take(index) { unsafe { self.force_drop(i) } } self.origin += index; self.length -= index; } } /// Discard all items from `index` onward. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_right(&mut self, index: usize) { if index > self.len() { panic!("RingBuffer::drop_right: index out of bounds"); } if index == self.len() { return; } for i in self.range().skip(index) { unsafe { self.force_drop(i) } } self.length = index; } /// Split a buffer into two, the original buffer containing /// everything up to `index` and the returned buffer containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items in the new buffer #[must_use] pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("RingBuffer::split: index out of bounds"); } if index == self.len() { return Self::new(); } let mut right = Self::new(); let length = self.length - index; unsafe { right.copy_from(self, self.raw(index), 0.into(), length) }; self.length = index; right.length = length; right } /// Remove all items from `other` and append them to the back of `self`. /// /// Panics if the capacity of `self` is exceeded. /// /// `other` will be an empty buffer after this operation. /// /// Time: O(n) for the number of items moved #[inline] pub fn append(&mut self, other: &mut Self) { self.drain_from_front(other, other.len()); } /// Remove `count` items from the front of `other` and append them to the /// back of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_front(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); if self_len + count > Self::CAPACITY { panic!("RingBuffer::drain_from_front: chunk size overflow"); } if other_len < count { panic!("RingBuffer::drain_from_front: index out of bounds"); } unsafe { self.copy_from(other, other.origin, self.raw(self.len()), count) }; other.origin += count; other.length -= count; self.length += count; } /// Remove `count` items from the back of `other` and append them to the /// front of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_back(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); if self_len + count > Self::CAPACITY { panic!("RingBuffer::drain_from_back: chunk size overflow"); } if other_len < count { panic!("RingBuffer::drain_from_back: index out of bounds"); } self.origin -= count; let source_index = other.origin + (other.len() - count); unsafe { self.copy_from(other, source_index, self.origin, count) }; other.length -= count; self.length += count; } /// Insert a new value at index `index`, shifting all the following values /// to the right. 
/// /// Panics if the index is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("RingBuffer::insert: chunk size overflow"); } if index > self.len() { panic!("RingBuffer::insert: index out of bounds"); } if index == 0 { return self.push_front(value); } if index == self.len() { return self.push_back(value); } let right_count = self.len() - index; // Check which side has fewer elements to shift. if right_count < index { // Shift to the right. let mut i = self.raw(self.len() - 1); let target = self.raw(index); while i != target { unsafe { self.force_write(i + 1, self.force_read(i)) }; i -= 1; } unsafe { self.force_write(target + 1, self.force_read(target)) }; self.length += 1; } else { // Shift to the left. self.origin -= 1; self.length += 1; for i in self.range().take(index) { unsafe { self.force_write(i, self.force_read(i + 1)) }; } } unsafe { self.force_write(self.raw(index), value) }; } /// Insert a new value into the buffer in sorted order. /// /// This assumes every element of the buffer is already in sorted order. /// If not, the value will still be inserted but the ordering is not /// guaranteed. /// /// Time: O(log n) to find the insert position, then O(n) for the number /// of elements shifted. /// /// # Examples /// /// ```rust /// # use std::iter::FromIterator; /// # use sized_chunks::Chunk; /// # use typenum::U64; /// let mut chunk = Chunk::::from_iter(0..5); /// chunk.insert_ordered(3); /// assert_eq!(&[0, 1, 2, 3, 3, 4], chunk.as_slice()); /// ``` pub fn insert_ordered(&mut self, value: A) where A: Ord, { if self.is_full() { panic!("Chunk::insert: chunk is full"); } match self.slice(..).binary_search(&value) { Ok(index) => self.insert(index, value), Err(index) => self.insert(index, value), } } /// Insert multiple values at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds or the chunk doesn't have room for /// all the values. /// /// Time: O(m+n) where m is the number of elements inserted and n is the number /// of elements following the insertion index. Calling `insert` /// repeatedly would be O(m*n). pub fn insert_from(&mut self, index: usize, iter: Iterable) where Iterable: IntoIterator, I: ExactSizeIterator, { let iter = iter.into_iter(); let insert_size = iter.len(); if self.len() + insert_size > Self::CAPACITY { panic!( "Chunk::insert_from: chunk cannot fit {} elements", insert_size ); } if index > self.len() { panic!("Chunk::insert_from: index out of bounds"); } if index == self.len() { self.extend(iter); return; } let right_count = self.len() - index; // Check which side has fewer elements to shift. if right_count < index { // Shift to the right. let mut i = self.raw(self.len() - 1); let target = self.raw(index); while i != target { unsafe { self.force_write(i + insert_size, self.force_read(i)) }; i -= 1; } unsafe { self.force_write(target + insert_size, self.force_read(target)) }; self.length += insert_size; } else { // Shift to the left. self.origin -= insert_size; self.length += insert_size; for i in self.range().take(index) { unsafe { self.force_write(i, self.force_read(i + insert_size)) }; } } let mut index = self.raw(index); // Panic safety: unless and until we fill it fully, there's a hole somewhere in the middle // and the destructor would drop non-existing elements. Therefore we pretend to be empty // for a while (and leak the elements instead in case something bad happens). 
        let mut inserted = 0;
        let length = replace(&mut self.length, 0);
        for value in iter.take(insert_size) {
            unsafe { self.force_write(index, value) };
            index += 1;
            inserted += 1;
        }
        // This would/could create a hole in the middle if it was less
        assert_eq!(
            inserted, insert_size,
            "Iterator has fewer elements than advertised",
        );
        self.length = length;
    }

    /// Remove the value at index `index`, shifting all the following values to
    /// the left.
    ///
    /// Returns the removed value.
    ///
    /// Panics if the index is out of bounds.
    ///
    /// Time: O(n) for the number of items shifted
    pub fn remove(&mut self, index: usize) -> A {
        if index >= self.len() {
            panic!("RingBuffer::remove: index out of bounds");
        }
        let value = unsafe { self.force_read(self.raw(index)) };
        let right_count = self.len() - index;
        // Check which side has fewer elements to shift.
        if right_count < index {
            // Shift from the right.
            self.length -= 1;
            let mut i = self.raw(index);
            let target = self.raw(self.len());
            while i != target {
                unsafe { self.force_write(i, self.force_read(i + 1)) };
                i += 1;
            }
        } else {
            // Shift from the left.
            let mut i = self.raw(index);
            while i != self.origin {
                unsafe { self.force_write(i, self.force_read(i - 1)) };
                i -= 1;
            }
            self.origin += 1;
            self.length -= 1;
        }
        value
    }

    /// Construct an iterator that drains values from the front of the buffer.
    pub fn drain(&mut self) -> Drain<'_, A, N> {
        Drain { buffer: self }
    }

    /// Discard the contents of the buffer.
    ///
    /// Time: O(n)
    pub fn clear(&mut self) {
        for i in self.range() {
            unsafe { self.force_drop(i) };
        }
        self.origin = 0.into();
        self.length = 0;
    }
}

impl<A, N: ChunkLength<A>> Default for RingBuffer<A, N> {
    #[inline]
    #[must_use]
    fn default() -> Self {
        Self::new()
    }
}

impl<A: Clone, N: ChunkLength<A>> Clone for RingBuffer<A, N> {
    fn clone(&self) -> Self {
        let mut out = Self::new();
        out.origin = self.origin;
        out.length = self.length;
        let range = self.range();
        // Panic safety. If we panic, we don't want to drop more than we have initialized.
out.length = 0; for index in range { unsafe { out.force_write(index, (&*self.ptr(index)).clone()) }; out.length += 1; } out } } impl Index for RingBuffer where N: ChunkLength, { type Output = A; #[must_use] fn index(&self, index: usize) -> &Self::Output { if index >= self.len() { panic!( "RingBuffer::index: index out of bounds {} >= {}", index, self.len() ); } unsafe { &*self.ptr(self.raw(index)) } } } impl IndexMut for RingBuffer where N: ChunkLength, { #[must_use] fn index_mut(&mut self, index: usize) -> &mut Self::Output { if index >= self.len() { panic!( "RingBuffer::index_mut: index out of bounds {} >= {}", index, self.len() ); } unsafe { &mut *self.mut_ptr(self.raw(index)) } } } impl> PartialEq for RingBuffer { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl PartialEq for RingBuffer where PrimSlice: Borrow<[A]>, A: PartialEq, N: ChunkLength, { #[inline] #[must_use] fn eq(&self, other: &PrimSlice) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl PartialEq> for RingBuffer where A: PartialEq, N: ChunkLength, { fn eq(&self, other: &Slice<'_, A, N>) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl PartialEq> for RingBuffer where A: PartialEq, N: ChunkLength, { fn eq(&self, other: &SliceMut<'_, A, N>) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl> Eq for RingBuffer {} impl> PartialOrd for RingBuffer { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl> Ord for RingBuffer { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl> Extend for RingBuffer { #[inline] fn extend>(&mut self, iter: I) { for item in iter { self.push_back(item); } } } impl<'a, A: Clone + 'a, N: ChunkLength> Extend<&'a A> for RingBuffer { #[inline] fn extend>(&mut self, iter: I) { for item in iter { self.push_back(item.clone()); } } } impl> Debug for RingBuffer { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl> Hash for RingBuffer { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } #[cfg(feature = "std")] impl> std::io::Write for RingBuffer { fn write(&mut self, mut buf: &[u8]) -> std::io::Result { let max_new = Self::CAPACITY - self.len(); if buf.len() > max_new { buf = &buf[..max_new]; } unsafe { self.copy_from_slice(buf, self.origin + self.len()) }; self.length += buf.len(); Ok(buf.len()) } #[inline] fn flush(&mut self) -> std::io::Result<()> { Ok(()) } } #[cfg(feature = "std")] impl> std::io::Read for RingBuffer { fn read(&mut self, buf: &mut [u8]) -> std::io::Result { let read_size = buf.len().min(self.len()); if read_size == 0 { Ok(0) } else { for p in buf.iter_mut().take(read_size) { *p = self.pop_front().unwrap(); } Ok(read_size) } } } impl> FromIterator for RingBuffer { #[must_use] fn from_iter>(iter: I) -> Self { let mut buffer = RingBuffer::new(); buffer.extend(iter); buffer } } impl> IntoIterator for RingBuffer { type Item = A; type IntoIter = OwnedIter; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { OwnedIter { buffer: self } } } impl<'a, A, N: ChunkLength> IntoIterator for &'a RingBuffer { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, N: ChunkLength> IntoIterator for &'a 
mut RingBuffer { type Item = &'a mut A; type IntoIter = IterMut<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } // Tests #[cfg(test)] mod test { use typenum::U0; use super::*; #[test] fn validity_invariant() { assert!(Some(RingBuffer::>::new()).is_some()); } #[test] fn is_full() { let mut chunk = RingBuffer::<_, U64>::new(); for i in 0..64 { assert_eq!(false, chunk.is_full()); chunk.push_back(i); } assert_eq!(true, chunk.is_full()); } #[test] fn ref_iter() { let chunk: RingBuffer = (0..64).collect(); let out_vec: Vec<&i32> = chunk.iter().collect(); let should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&i32> = should_vec_p.iter().collect(); assert_eq!(should_vec, out_vec); } #[test] fn mut_ref_iter() { let mut chunk: RingBuffer = (0..64).collect(); let out_vec: Vec<&mut i32> = chunk.iter_mut().collect(); let mut should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&mut i32> = should_vec_p.iter_mut().collect(); assert_eq!(should_vec, out_vec); } #[test] fn consuming_iter() { let chunk: RingBuffer = (0..64).collect(); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn draining_iter() { let mut chunk: RingBuffer = (0..64).collect(); let mut half: RingBuffer = chunk.drain().take(16).collect(); half.extend(chunk.drain().rev().take(16)); let should: Vec = (16..48).collect(); assert_eq!(chunk, should); let should: Vec = (0..16).chain((48..64).rev()).collect(); assert_eq!(half, should); } #[cfg(feature = "std")] #[test] fn io_write() { use std::io::Write; let mut buffer: RingBuffer = (0..32).collect(); let to_write: Vec = (32..128).collect(); assert_eq!(32, buffer.write(&to_write).unwrap()); assert_eq!(buffer, (0..64).collect::>()); } #[cfg(feature = "std")] #[test] fn io_read() { use std::io::Read; let mut buffer: RingBuffer = (16..48).collect(); let mut read_buf: Vec = (0..16).collect(); assert_eq!(16, buffer.read(&mut read_buf).unwrap()); assert_eq!(read_buf, (16..32).collect::>()); assert_eq!(buffer, (32..48).collect::>()); assert_eq!(16, buffer.read(&mut read_buf).unwrap()); assert_eq!(read_buf, (32..48).collect::>()); assert_eq!(buffer, vec![]); assert_eq!(0, buffer.read(&mut read_buf).unwrap()); } #[test] fn clone() { let buffer: RingBuffer = (0..50).collect(); assert_eq!(buffer, buffer.clone()); } #[test] fn failing() { let mut buffer: RingBuffer = RingBuffer::new(); buffer.push_front(0); let mut add: RingBuffer = vec![1, 0, 0, 0, 0, 0].into_iter().collect(); buffer.append(&mut add); assert_eq!(1, buffer.remove(1)); let expected = vec![0, 0, 0, 0, 0, 0]; assert_eq!(buffer, expected); } use crate::tests::DropTest; use std::sync::atomic::{AtomicUsize, Ordering}; #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: RingBuffer> = RingBuffer::new(); for _i in 0..20 { chunk.push_back(DropTest::new(&counter)) } for _i in 0..20 { chunk.push_front(DropTest::new(&counter)) } assert_eq!(40, counter.load(Ordering::Relaxed)); for _i in 0..10 { chunk.pop_back(); } assert_eq!(30, counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } #[test] #[should_panic(expected = "assertion failed: Self::CAPACITY >= 1")] fn unit_on_empty() { let _ = RingBuffer::::unit(1); } #[test] #[should_panic(expected = "assertion failed: Self::CAPACITY >= 2")] fn pair_on_empty() { let _ = RingBuffer::::pair(1, 2); } } vendor/sized-chunks/src/ring_buffer/refpool.rs0000664000175000017500000000411614160055207022353 0ustar 
mwhudsonmwhudsonuse core::mem::MaybeUninit; use ::refpool::{PoolClone, PoolDefault}; use crate::ring_buffer::index::RawIndex; use crate::types::ChunkLength; use crate::RingBuffer; impl PoolDefault for RingBuffer where N: ChunkLength, { unsafe fn default_uninit(target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let origin_ptr: *mut RawIndex = &mut (*ptr).origin; let length_ptr: *mut usize = &mut (*ptr).length; origin_ptr.write(0.into()); length_ptr.write(0); } } impl PoolClone for RingBuffer where A: Clone, N: ChunkLength, { unsafe fn clone_uninit(&self, target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let origin_ptr: *mut RawIndex = &mut (*ptr).origin; let length_ptr: *mut usize = &mut (*ptr).length; let data_ptr: *mut _ = &mut (*ptr).data; let data_ptr: *mut A = (*data_ptr).as_mut_ptr().cast(); origin_ptr.write(self.origin); length_ptr.write(self.length); for index in self.range() { data_ptr .add(index.to_usize()) .write((*self.ptr(index)).clone()); } } } #[cfg(test)] mod test { use super::*; use ::refpool::{Pool, PoolRef}; use std::iter::FromIterator; #[test] fn default_and_clone() { let pool: Pool> = Pool::new(16); let mut ref1 = PoolRef::default(&pool); { let chunk = PoolRef::make_mut(&pool, &mut ref1); chunk.push_back(1); chunk.push_back(2); chunk.push_back(3); } let ref2 = PoolRef::cloned(&pool, &ref1); let ref3 = PoolRef::clone_from(&pool, &RingBuffer::from_iter(1..=3)); assert_eq!(RingBuffer::::from_iter(1..=3), *ref1); assert_eq!(RingBuffer::::from_iter(1..=3), *ref2); assert_eq!(RingBuffer::::from_iter(1..=3), *ref3); assert_eq!(ref1, ref2); assert_eq!(ref1, ref3); assert_eq!(ref2, ref3); assert!(!PoolRef::ptr_eq(&ref1, &ref2)); } } vendor/sized-chunks/src/ring_buffer/iter.rs0000664000175000017500000001207714160055207021655 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use core::iter::FusedIterator; use core::marker::PhantomData; use crate::types::ChunkLength; use super::{index::RawIndex, RingBuffer}; use array_ops::HasLength; /// A reference iterator over a `RingBuffer`. pub struct Iter<'a, A, N> where N: ChunkLength, { pub(crate) buffer: &'a RingBuffer, pub(crate) left_index: RawIndex, pub(crate) right_index: RawIndex, pub(crate) remaining: usize, } impl<'a, A, N> Iterator for Iter<'a, A, N> where N: ChunkLength, { type Item = &'a A; fn next(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &*self.buffer.ptr(self.left_index.inc()) }) } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl<'a, A, N> DoubleEndedIterator for Iter<'a, A, N> where N: ChunkLength, { fn next_back(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &*self.buffer.ptr(self.right_index.dec()) }) } } } impl<'a, A, N> ExactSizeIterator for Iter<'a, A, N> where N: ChunkLength {} impl<'a, A, N> FusedIterator for Iter<'a, A, N> where N: ChunkLength {} /// A mutable reference iterator over a `RingBuffer`. 
pub struct IterMut<'a, A, N> where N: ChunkLength, { data: *mut A, left_index: RawIndex, right_index: RawIndex, remaining: usize, phantom: PhantomData<&'a ()>, } impl<'a, A, N> IterMut<'a, A, N> where N: ChunkLength, A: 'a, { pub(crate) fn new(buffer: &mut RingBuffer) -> Self { Self::new_slice(buffer, buffer.origin, buffer.len()) } pub(crate) fn new_slice( buffer: &mut RingBuffer, origin: RawIndex, len: usize, ) -> Self { Self { left_index: origin, right_index: origin + len, remaining: len, phantom: PhantomData, data: buffer.data.as_mut_ptr().cast(), } } unsafe fn mut_ptr(&mut self, index: RawIndex) -> *mut A { self.data.add(index.to_usize()) } } impl<'a, A, N> Iterator for IterMut<'a, A, N> where N: ChunkLength, A: 'a, { type Item = &'a mut A; fn next(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; let index = self.left_index.inc(); Some(unsafe { &mut *self.mut_ptr(index) }) } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl<'a, A, N> DoubleEndedIterator for IterMut<'a, A, N> where N: ChunkLength, A: 'a, { fn next_back(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; let index = self.right_index.dec(); Some(unsafe { &mut *self.mut_ptr(index) }) } } } impl<'a, A, N> ExactSizeIterator for IterMut<'a, A, N> where N: ChunkLength, A: 'a, { } impl<'a, A, N> FusedIterator for IterMut<'a, A, N> where N: ChunkLength, A: 'a, { } /// A draining iterator over a `RingBuffer`. pub struct Drain<'a, A, N: ChunkLength> { pub(crate) buffer: &'a mut RingBuffer, } impl<'a, A: 'a, N: ChunkLength + 'a> Iterator for Drain<'a, A, N> { type Item = A; #[inline] fn next(&mut self) -> Option { self.buffer.pop_front() } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.buffer.len(), Some(self.buffer.len())) } } impl<'a, A: 'a, N: ChunkLength + 'a> DoubleEndedIterator for Drain<'a, A, N> { #[inline] fn next_back(&mut self) -> Option { self.buffer.pop_back() } } impl<'a, A: 'a, N: ChunkLength + 'a> ExactSizeIterator for Drain<'a, A, N> {} impl<'a, A: 'a, N: ChunkLength + 'a> FusedIterator for Drain<'a, A, N> {} /// A consuming iterator over a `RingBuffer`. pub struct OwnedIter> { pub(crate) buffer: RingBuffer, } impl> Iterator for OwnedIter { type Item = A; #[inline] fn next(&mut self) -> Option { self.buffer.pop_front() } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.buffer.len(), Some(self.buffer.len())) } } impl> DoubleEndedIterator for OwnedIter { #[inline] fn next_back(&mut self) -> Option { self.buffer.pop_back() } } impl> ExactSizeIterator for OwnedIter {} impl> FusedIterator for OwnedIter {} vendor/sized-chunks/src/ring_buffer/index.rs0000664000175000017500000000776414160055207022030 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use core::iter::FusedIterator; use core::marker::PhantomData; use core::ops::{Add, AddAssign, Sub, SubAssign}; use typenum::Unsigned; pub(crate) struct RawIndex(usize, PhantomData); impl Clone for RawIndex { #[inline] #[must_use] fn clone(&self) -> Self { self.0.into() } } impl Copy for RawIndex where N: Unsigned {} impl RawIndex { #[inline] #[must_use] pub(crate) fn to_usize(self) -> usize { self.0 } /// Increments the index and returns a copy of the index /before/ incrementing. 
#[inline] #[must_use] pub(crate) fn inc(&mut self) -> Self { let old = *self; self.0 = if self.0 == N::USIZE - 1 { 0 } else { self.0 + 1 }; old } /// Decrements the index and returns a copy of the new value. #[inline] #[must_use] pub(crate) fn dec(&mut self) -> Self { self.0 = if self.0 == 0 { N::USIZE - 1 } else { self.0 - 1 }; *self } } impl From for RawIndex { #[inline] #[must_use] fn from(index: usize) -> Self { debug_assert!(index < N::USIZE); RawIndex(index, PhantomData) } } impl PartialEq for RawIndex { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.0 == other.0 } } impl Eq for RawIndex {} impl Add for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn add(self, other: Self) -> Self::Output { self + other.0 } } impl Add for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn add(self, other: usize) -> Self::Output { let mut result = self.0 + other; while result >= N::USIZE { result -= N::USIZE; } result.into() } } impl AddAssign for RawIndex { #[inline] fn add_assign(&mut self, other: usize) { self.0 += other; while self.0 >= N::USIZE { self.0 -= N::USIZE; } } } impl Sub for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn sub(self, other: Self) -> Self::Output { self - other.0 } } impl Sub for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn sub(self, other: usize) -> Self::Output { let mut start = self.0; while other > start { start += N::USIZE; } (start - other).into() } } impl SubAssign for RawIndex { #[inline] fn sub_assign(&mut self, other: usize) { while other > self.0 { self.0 += N::USIZE; } self.0 -= other; } } pub(crate) struct IndexIter { pub(crate) remaining: usize, pub(crate) left_index: RawIndex, pub(crate) right_index: RawIndex, } impl Iterator for IndexIter { type Item = RawIndex; #[inline] fn next(&mut self) -> Option { if self.remaining > 0 { self.remaining -= 1; Some(self.left_index.inc()) } else { None } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl DoubleEndedIterator for IndexIter { #[inline] fn next_back(&mut self) -> Option { if self.remaining > 0 { self.remaining -= 1; Some(self.right_index.dec()) } else { None } } } impl ExactSizeIterator for IndexIter {} impl FusedIterator for IndexIter {} vendor/sized-chunks/src/ring_buffer/slice.rs0000664000175000017500000003665614160055207022022 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use core::borrow::Borrow; use core::cmp::Ordering; use core::fmt::Debug; use core::fmt::Error; use core::fmt::Formatter; use core::hash::Hash; use core::hash::Hasher; use core::ops::IndexMut; use core::ops::{Bound, Index, Range, RangeBounds}; use super::{Iter, IterMut, RingBuffer}; use crate::types::ChunkLength; use array_ops::{Array, ArrayMut, HasLength}; /// An indexable representation of a subset of a `RingBuffer`. pub struct Slice<'a, A, N: ChunkLength> { pub(crate) buffer: &'a RingBuffer, pub(crate) range: Range, } impl<'a, A: 'a, N: ChunkLength + 'a> HasLength for Slice<'a, A, N> { /// Get the length of the slice. #[inline] #[must_use] fn len(&self) -> usize { self.range.end - self.range.start } } impl<'a, A: 'a, N: ChunkLength + 'a> Array for Slice<'a, A, N> { /// Get a reference to the value at a given index. 
#[inline] #[must_use] fn get(&self, index: usize) -> Option<&A> { if index >= self.len() { None } else { Some(unsafe { self.get_unchecked(index) }) } } } impl<'a, A: 'a, N: ChunkLength + 'a> Slice<'a, A, N> { /// Get an unchecked reference to the value at the given index. /// /// # Safety /// /// You must ensure the index is not out of bounds. #[must_use] pub unsafe fn get_unchecked(&self, index: usize) -> &A { self.buffer.get_unchecked(self.range.start + index) } /// Get an iterator over references to the items in the slice in order. #[inline] #[must_use] pub fn iter(&self) -> Iter<'_, A, N> { Iter { buffer: self.buffer, left_index: self.buffer.origin + self.range.start, right_index: self.buffer.origin + self.range.start + self.len(), remaining: self.len(), } } /// Create a subslice of this slice. /// /// This consumes the slice. To create a subslice without consuming it, /// clone it first: `my_slice.clone().slice(1..2)`. #[must_use] pub fn slice>(self, range: R) -> Slice<'a, A, N> { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => self.range.start, Bound::Included(index) => self.range.start + index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.range.end, Bound::Included(index) => self.range.start + index + 1, Bound::Excluded(index) => self.range.start + index, }, }; if new_range.start < self.range.start || new_range.end > self.range.end || new_range.start > new_range.end { panic!("Slice::slice: index out of bounds"); } Slice { buffer: self.buffer, range: new_range, } } /// Split the slice into two subslices at the given index. #[must_use] pub fn split_at(self, index: usize) -> (Slice<'a, A, N>, Slice<'a, A, N>) { if index > self.len() { panic!("Slice::split_at: index out of bounds"); } let index = self.range.start + index; ( Slice { buffer: self.buffer, range: Range { start: self.range.start, end: index, }, }, Slice { buffer: self.buffer, range: Range { start: index, end: self.range.end, }, }, ) } /// Construct a new `RingBuffer` by copying the elements in this slice. 
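    ///
    /// # Examples
    ///
    /// A brief sketch, assuming `RingBuffer` is exported from the crate root
    /// and using a `U64` capacity:
    ///
    /// ```rust
    /// # use sized_chunks::RingBuffer;
    /// # use typenum::U64;
    /// let buffer: RingBuffer<i32, U64> = (0..5).collect();
    /// let slice = buffer.slice(1..4);
    /// // Copying the slice leaves the original buffer untouched.
    /// let copy = slice.to_owned();
    /// assert_eq!(copy, vec![1, 2, 3]);
    /// ```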
#[inline] #[must_use] pub fn to_owned(&self) -> RingBuffer where A: Clone, { self.iter().cloned().collect() } } impl<'a, A: 'a, N: ChunkLength + 'a> From<&'a RingBuffer> for Slice<'a, A, N> { #[inline] #[must_use] fn from(buffer: &'a RingBuffer) -> Self { Slice { range: Range { start: 0, end: buffer.len(), }, buffer, } } } impl<'a, A: 'a, N: ChunkLength + 'a> Clone for Slice<'a, A, N> { #[inline] #[must_use] fn clone(&self) -> Self { Slice { buffer: self.buffer, range: self.range.clone(), } } } impl<'a, A: 'a, N: ChunkLength + 'a> Index for Slice<'a, A, N> { type Output = A; #[inline] #[must_use] fn index(&self, index: usize) -> &Self::Output { self.buffer.index(self.range.start + index) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq for Slice<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq> for Slice<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &SliceMut<'a, A, N>) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq> for Slice<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &RingBuffer) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a, S> PartialEq for Slice<'a, A, N> where S: Borrow<[A]>, { #[inline] #[must_use] fn eq(&self, other: &S) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: Eq + 'a, N: ChunkLength + 'a> Eq for Slice<'a, A, N> {} impl<'a, A: PartialOrd + 'a, N: ChunkLength + 'a> PartialOrd for Slice<'a, A, N> { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl<'a, A: Ord + 'a, N: ChunkLength + 'a> Ord for Slice<'a, A, N> { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl<'a, A: Debug + 'a, N: ChunkLength + 'a> Debug for Slice<'a, A, N> { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl<'a, A: Hash + 'a, N: ChunkLength + 'a> Hash for Slice<'a, A, N> { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } impl<'a, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a Slice<'a, A, N> { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } // Mutable slice /// An indexable representation of a mutable subset of a `RingBuffer`. pub struct SliceMut<'a, A, N: ChunkLength> { pub(crate) buffer: &'a mut RingBuffer, pub(crate) range: Range, } impl<'a, A: 'a, N: ChunkLength + 'a> HasLength for SliceMut<'a, A, N> { /// Get the length of the slice. #[inline] #[must_use] fn len(&self) -> usize { self.range.end - self.range.start } } impl<'a, A: 'a, N: ChunkLength + 'a> Array for SliceMut<'a, A, N> { /// Get a reference to the value at a given index. #[inline] #[must_use] fn get(&self, index: usize) -> Option<&A> { if index >= self.len() { None } else { Some(unsafe { self.get_unchecked(index) }) } } } impl<'a, A: 'a, N: ChunkLength + 'a> ArrayMut for SliceMut<'a, A, N> { /// Get a mutable reference to the value at a given index. 
#[inline] #[must_use] fn get_mut(&mut self, index: usize) -> Option<&mut A> { if index >= self.len() { None } else { Some(unsafe { self.get_unchecked_mut(index) }) } } } impl<'a, A: 'a, N: ChunkLength + 'a> SliceMut<'a, A, N> { /// Downgrade this slice into a non-mutable slice. #[inline] #[must_use] pub fn unmut(self) -> Slice<'a, A, N> { Slice { buffer: self.buffer, range: self.range, } } /// Get an unchecked reference to the value at the given index. /// /// # Safety /// /// You must ensure the index is not out of bounds. #[must_use] pub unsafe fn get_unchecked(&self, index: usize) -> &A { self.buffer.get_unchecked(self.range.start + index) } /// Get an unchecked mutable reference to the value at the given index. /// /// # Safety /// /// You must ensure the index is not out of bounds. #[must_use] pub unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut A { self.buffer.get_unchecked_mut(self.range.start + index) } /// Get an iterator over references to the items in the slice in order. #[inline] #[must_use] pub fn iter(&self) -> Iter<'_, A, N> { Iter { buffer: self.buffer, left_index: self.buffer.origin + self.range.start, right_index: self.buffer.origin + self.range.start + self.len(), remaining: self.len(), } } /// Get an iterator over mutable references to the items in the slice in /// order. #[inline] #[must_use] pub fn iter_mut(&mut self) -> IterMut<'_, A, N> { IterMut::new_slice( self.buffer, self.buffer.origin + self.range.start, self.len(), ) } /// Create a subslice of this slice. /// /// This consumes the slice. Because the slice works like a mutable /// reference, you can only have one slice over a given subset of a /// `RingBuffer` at any one time, so that's just how it's got to be. #[must_use] pub fn slice>(self, range: R) -> SliceMut<'a, A, N> { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => self.range.start, Bound::Included(index) => self.range.start + index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.range.end, Bound::Included(index) => self.range.start + index + 1, Bound::Excluded(index) => self.range.start + index, }, }; if new_range.start < self.range.start || new_range.end > self.range.end || new_range.start > new_range.end { panic!("Slice::slice: index out of bounds"); } SliceMut { buffer: self.buffer, range: new_range, } } /// Split the slice into two subslices at the given index. #[must_use] pub fn split_at(self, index: usize) -> (SliceMut<'a, A, N>, SliceMut<'a, A, N>) { if index > self.len() { panic!("SliceMut::split_at: index out of bounds"); } let index = self.range.start + index; let ptr: *mut RingBuffer = self.buffer; ( SliceMut { buffer: unsafe { &mut *ptr }, range: Range { start: self.range.start, end: index, }, }, SliceMut { buffer: unsafe { &mut *ptr }, range: Range { start: index, end: self.range.end, }, }, ) } /// Construct a new `RingBuffer` by copying the elements in this slice. 
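    ///
    /// # Examples
    ///
    /// An illustrative sketch; the buffer size (`U64`) and the values are
    /// arbitrary choices:
    ///
    /// ```rust
    /// # use sized_chunks::RingBuffer;
    /// # use typenum::U64;
    /// let mut buffer: RingBuffer<i32, U64> = (0..5).collect();
    /// let mut slice = buffer.slice_mut(1..4);
    /// slice[0] = 100;
    /// // The copy is a fresh `RingBuffer`; the mutation is visible in both.
    /// let copy = slice.to_owned();
    /// assert_eq!(copy, vec![100, 2, 3]);
    /// assert_eq!(buffer, vec![0, 100, 2, 3, 4]);
    /// ```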
#[inline] #[must_use] pub fn to_owned(&self) -> RingBuffer where A: Clone, { self.iter().cloned().collect() } } impl<'a, A: 'a, N: ChunkLength + 'a> From<&'a mut RingBuffer> for SliceMut<'a, A, N> { #[must_use] fn from(buffer: &'a mut RingBuffer) -> Self { SliceMut { range: Range { start: 0, end: buffer.len(), }, buffer, } } } impl<'a, A: 'a, N: ChunkLength + 'a> Into> for SliceMut<'a, A, N> { #[inline] #[must_use] fn into(self) -> Slice<'a, A, N> { self.unmut() } } impl<'a, A: 'a, N: ChunkLength + 'a> Index for SliceMut<'a, A, N> { type Output = A; #[inline] #[must_use] fn index(&self, index: usize) -> &Self::Output { self.buffer.index(self.range.start + index) } } impl<'a, A: 'a, N: ChunkLength + 'a> IndexMut for SliceMut<'a, A, N> { #[inline] #[must_use] fn index_mut(&mut self, index: usize) -> &mut Self::Output { self.buffer.index_mut(self.range.start + index) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq for SliceMut<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq> for SliceMut<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &Slice<'a, A, N>) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq> for SliceMut<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &RingBuffer) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a, S> PartialEq for SliceMut<'a, A, N> where S: Borrow<[A]>, { #[inline] #[must_use] fn eq(&self, other: &S) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: Eq + 'a, N: ChunkLength + 'a> Eq for SliceMut<'a, A, N> {} impl<'a, A: PartialOrd + 'a, N: ChunkLength + 'a> PartialOrd for SliceMut<'a, A, N> { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl<'a, A: Ord + 'a, N: ChunkLength + 'a> Ord for SliceMut<'a, A, N> { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl<'a, A: Debug + 'a, N: ChunkLength + 'a> Debug for SliceMut<'a, A, N> { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl<'a, A: Hash + 'a, N: ChunkLength + 'a> Hash for SliceMut<'a, A, N> { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } impl<'a, 'b, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a SliceMut<'a, A, N> { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, 'b, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a mut SliceMut<'a, A, N> { type Item = &'a mut A; type IntoIter = IterMut<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } vendor/sized-chunks/src/inline_array/0000775000175000017500000000000014160055207020521 5ustar mwhudsonmwhudsonvendor/sized-chunks/src/inline_array/mod.rs0000664000175000017500000005324314160055207021655 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity array sized to match some other type `T`. //! //! 
See [`InlineArray`](struct.InlineArray.html) use core::borrow::{Borrow, BorrowMut}; use core::cmp::Ordering; use core::fmt::{Debug, Error, Formatter}; use core::hash::{Hash, Hasher}; use core::iter::FromIterator; use core::marker::PhantomData; use core::mem::{self, MaybeUninit}; use core::ops::{Deref, DerefMut}; use core::ptr; use core::ptr::NonNull; use core::slice::{from_raw_parts, from_raw_parts_mut, Iter as SliceIter, IterMut as SliceIterMut}; mod iter; pub use self::iter::{Drain, Iter}; /// A fixed capacity array sized to match some other type `T`. /// /// This works like a vector, but allocated on the stack (and thus marginally /// faster than `Vec`), with the allocated space exactly matching the size of /// the given type `T`. The vector consists of a `usize` tracking its current /// length and zero or more elements of type `A`. The capacity is thus /// `( size_of::() - size_of::() ) / size_of::()`. This could lead /// to situations where the capacity is zero, if `size_of::()` is greater /// than `size_of::() - size_of::()`, which is not an error and /// handled properly by the data structure. /// /// If `size_of::()` is less than `size_of::()`, meaning the vector /// has no space to store its length, `InlineArray::new()` will panic. /// /// This is meant to facilitate optimisations where a list data structure /// allocates a fairly large struct for itself, allowing you to replace it with /// an `InlineArray` until it grows beyond its capacity. This not only gives you /// a performance boost at very small sizes, it also saves you from having to /// allocate anything on the heap until absolutely necessary. /// /// For instance, `im::Vector` in its final form currently looks like this /// (approximately): /// /// ```rust, ignore /// struct RRB { /// length: usize, /// tree_height: usize, /// outer_head: Rc>, /// inner_head: Rc>, /// tree: Rc>, /// inner_tail: Rc>, /// outer_tail: Rc>, /// } /// ``` /// /// That's two `usize`s and five `Rc`s, which comes in at 56 bytes on x86_64 /// architectures. With `InlineArray`, that leaves us with 56 - /// `size_of::()` = 48 bytes we can use before having to expand into the /// full data struture. If `A` is `u8`, that's 48 elements, and even if `A` is a /// pointer we can still keep 6 of them inline before we run out of capacity. /// /// We can declare an enum like this: /// /// ```rust, ignore /// enum VectorWrapper { /// Inline(InlineArray>), /// Full(RRB), /// } /// ``` /// /// Both of these will have the same size, and we can swap the `Inline` case out /// with the `Full` case once the `InlineArray` runs out of capacity. #[repr(C)] pub struct InlineArray { // Alignment tricks // // We need both the `_header_align` and `data` to be properly aligned in memory. We do a few tricks // to handle that. // // * An alignment is always power of 2. Therefore, with a pair of alignments, one is always // a multiple of the other (one way or the other). // * A struct is aligned to at least the max alignment of each of its fields. // * A `repr(C)` struct follows the order of fields and pushes each as close to the previous one // as allowed by alignment. // // By placing two "fake" fields that have 0 size, but an alignment first, we make sure that all // 3 start at the beginning of the struct and that all of them are aligned to their maximum // alignment. // // Unfortunately, we can't use `[A; 0]` to align to actual alignment of the type `A`, because // it prevents use of `InlineArray` in recursive types. 
// We rely on alignment of `u64`/`usize` or `T` to be sufficient, and panic otherwise. We use // `u64` to handle all common types on 32-bit systems too. // // Furthermore, because we don't know if `u64` or `A` has bigger alignment, we decide on case by // case basis if the header or the elements go first. By placing the one with higher alignment // requirements first, we align that one and the other one will be aligned "automatically" when // placed just after it. // // To the best of our knowledge, this is all guaranteed by the compiler. But just to make sure, // we have bunch of asserts in the constructor to check; as these are invariants enforced by // the compiler, it should be trivial for it to remove the checks so they are for free (if we // are correct) or will save us (if we are not). _header_align: [(u64, usize); 0], _phantom: PhantomData, data: MaybeUninit, } const fn capacity( host_size: usize, header_size: usize, element_size: usize, element_align: usize, container_align: usize, ) -> usize { if element_size == 0 { usize::MAX } else if element_align <= container_align && host_size > header_size { (host_size - header_size) / element_size } else { 0 // larger alignment can't be guaranteed, so it'd be unsafe to store any elements } } impl InlineArray { const HOST_SIZE: usize = mem::size_of::(); const ELEMENT_SIZE: usize = mem::size_of::(); const HEADER_SIZE: usize = mem::size_of::(); // Do we place the header before the elements or the other way around? const HEADER_FIRST: bool = mem::align_of::() >= mem::align_of::(); // Note: one of the following is always 0 // How many usizes to skip before the first element? const ELEMENT_SKIP: usize = Self::HEADER_FIRST as usize; // How many elements to skip before the header const HEADER_SKIP: usize = Self::CAPACITY * (1 - Self::ELEMENT_SKIP); /// The maximum number of elements the `InlineArray` can hold. pub const CAPACITY: usize = capacity( Self::HOST_SIZE, Self::HEADER_SIZE, Self::ELEMENT_SIZE, mem::align_of::(), mem::align_of::(), ); #[inline] #[must_use] unsafe fn len_const(&self) -> *const usize { let ptr = self .data .as_ptr() .cast::() .add(Self::HEADER_SKIP) .cast::(); debug_assert!(ptr as usize % mem::align_of::() == 0); ptr } #[inline] #[must_use] pub(crate) unsafe fn len_mut(&mut self) -> *mut usize { let ptr = self .data .as_mut_ptr() .cast::() .add(Self::HEADER_SKIP) .cast::(); debug_assert!(ptr as usize % mem::align_of::() == 0); ptr } #[inline] #[must_use] pub(crate) unsafe fn data(&self) -> *const A { if Self::CAPACITY == 0 { return NonNull::::dangling().as_ptr(); } let ptr = self .data .as_ptr() .cast::() .add(Self::ELEMENT_SKIP) .cast::(); debug_assert!(ptr as usize % mem::align_of::() == 0); ptr } #[inline] #[must_use] unsafe fn data_mut(&mut self) -> *mut A { if Self::CAPACITY == 0 { return NonNull::::dangling().as_ptr(); } let ptr = self .data .as_mut_ptr() .cast::() .add(Self::ELEMENT_SKIP) .cast::(); debug_assert!(ptr as usize % mem::align_of::() == 0); ptr } #[inline] #[must_use] unsafe fn ptr_at(&self, index: usize) -> *const A { debug_assert!(index < Self::CAPACITY); self.data().add(index) } #[inline] #[must_use] unsafe fn ptr_at_mut(&mut self, index: usize) -> *mut A { debug_assert!(index < Self::CAPACITY); self.data_mut().add(index) } #[inline] unsafe fn read_at(&self, index: usize) -> A { ptr::read(self.ptr_at(index)) } #[inline] unsafe fn write_at(&mut self, index: usize, value: A) { ptr::write(self.ptr_at_mut(index), value); } /// Get the length of the array. 
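    ///
    /// # Examples
    ///
    /// A small sketch; the `[usize; 32]` backing type is an arbitrary choice
    /// borrowed from this module's tests:
    ///
    /// ```rust
    /// # use sized_chunks::InlineArray;
    /// let mut array: InlineArray<usize, [usize; 32]> = InlineArray::new();
    /// assert_eq!(0, array.len());
    /// array.push(123);
    /// assert_eq!(1, array.len());
    /// ```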
#[inline] #[must_use] pub fn len(&self) -> usize { unsafe { *self.len_const() } } /// Test if the array is empty. #[inline] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Test if the array is at capacity. #[inline] #[must_use] pub fn is_full(&self) -> bool { self.len() >= Self::CAPACITY } /// Construct a new empty array. /// /// # Panics /// /// If the element type requires large alignment, which is larger than /// both alignment of `usize` and alignment of the type that provides the capacity. #[inline] #[must_use] pub fn new() -> Self { assert!(Self::HOST_SIZE > Self::HEADER_SIZE); assert!( (Self::CAPACITY == 0) || (mem::align_of::() % mem::align_of::() == 0), "InlineArray can't satisfy alignment of {}", core::any::type_name::() ); let mut self_ = Self { _header_align: [], _phantom: PhantomData, data: MaybeUninit::uninit(), }; // Sanity check our assumptions about what is guaranteed by the compiler. If we are right, // these should completely optimize out of the resulting binary. assert_eq!( &self_ as *const _ as usize, self_.data.as_ptr() as usize, "Padding at the start of struct", ); assert_eq!( self_.data.as_ptr() as usize % mem::align_of::(), 0, "Unaligned header" ); assert!(mem::size_of::() == mem::size_of::() || mem::align_of::() < mem::align_of::()); assert_eq!(0, unsafe { self_.data() } as usize % mem::align_of::()); assert_eq!(0, unsafe { self_.data_mut() } as usize % mem::align_of::()); assert!(Self::ELEMENT_SKIP == 0 || Self::HEADER_SKIP == 0); unsafe { ptr::write(self_.len_mut(), 0usize) }; self_ } /// Push an item to the back of the array. /// /// Panics if the capacity of the array is exceeded. /// /// Time: O(1) pub fn push(&mut self, value: A) { if self.is_full() { panic!("InlineArray::push: chunk size overflow"); } unsafe { self.write_at(self.len(), value); *self.len_mut() += 1; } } /// Pop an item from the back of the array. /// /// Returns `None` if the array is empty. /// /// Time: O(1) pub fn pop(&mut self) -> Option { if self.is_empty() { None } else { unsafe { *self.len_mut() -= 1; } Some(unsafe { self.read_at(self.len()) }) } } /// Insert a new value at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds or the array is at capacity. /// /// Time: O(n) for the number of items shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("InlineArray::push: chunk size overflow"); } if index > self.len() { panic!("InlineArray::insert: index out of bounds"); } unsafe { let src = self.ptr_at_mut(index); ptr::copy(src, src.add(1), self.len() - index); ptr::write(src, value); *self.len_mut() += 1; } } /// Remove the value at index `index`, shifting all the following values to /// the left. /// /// Returns the removed value, or `None` if the array is empty or the index /// is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn remove(&mut self, index: usize) -> Option { if index >= self.len() { None } else { unsafe { let src = self.ptr_at_mut(index); let value = ptr::read(src); *self.len_mut() -= 1; ptr::copy(src.add(1), src, self.len() - index); Some(value) } } } /// Split an array into two, the original array containing /// everything up to `index` and the returned array containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. 
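    ///
    /// # Examples
    ///
    /// An illustrative sketch, using the same arbitrary `[usize; 32]` backing
    /// type as this module's tests:
    ///
    /// ```rust
    /// # use sized_chunks::InlineArray;
    /// let mut left: InlineArray<usize, [usize; 32]> = (0..6).collect();
    /// let right = left.split_off(4);
    /// assert_eq!(4, left.len());
    /// assert_eq!(2, right.len());
    /// assert_eq!(0, left[0]);
    /// assert_eq!(4, right[0]);
    /// ```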
/// /// Time: O(n) for the number of items in the new chunk pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("InlineArray::split_off: index out of bounds"); } let mut out = Self::new(); if index < self.len() { unsafe { ptr::copy(self.ptr_at(index), out.data_mut(), self.len() - index); *out.len_mut() = self.len() - index; *self.len_mut() = index; } } out } #[inline] unsafe fn drop_contents(&mut self) { ptr::drop_in_place::<[A]>(&mut **self) // uses DerefMut } /// Discard the contents of the array. /// /// Time: O(n) pub fn clear(&mut self) { unsafe { self.drop_contents(); *self.len_mut() = 0; } } /// Construct an iterator that drains values from the front of the array. pub fn drain(&mut self) -> Drain<'_, A, T> { Drain { array: self } } } impl Drop for InlineArray { fn drop(&mut self) { unsafe { self.drop_contents() } } } impl Default for InlineArray { fn default() -> Self { Self::new() } } // WANT: // impl Copy for InlineArray where A: Copy {} impl Clone for InlineArray where A: Clone, { fn clone(&self) -> Self { let mut copy = Self::new(); for i in 0..self.len() { unsafe { copy.write_at(i, self.get_unchecked(i).clone()); } } unsafe { *copy.len_mut() = self.len(); } copy } } impl Deref for InlineArray { type Target = [A]; fn deref(&self) -> &Self::Target { unsafe { from_raw_parts(self.data(), self.len()) } } } impl DerefMut for InlineArray { fn deref_mut(&mut self) -> &mut Self::Target { unsafe { from_raw_parts_mut(self.data_mut(), self.len()) } } } impl Borrow<[A]> for InlineArray { fn borrow(&self) -> &[A] { self.deref() } } impl BorrowMut<[A]> for InlineArray { fn borrow_mut(&mut self) -> &mut [A] { self.deref_mut() } } impl AsRef<[A]> for InlineArray { fn as_ref(&self) -> &[A] { self.deref() } } impl AsMut<[A]> for InlineArray { fn as_mut(&mut self) -> &mut [A] { self.deref_mut() } } impl PartialEq for InlineArray where Slice: Borrow<[A]>, A: PartialEq, { fn eq(&self, other: &Slice) -> bool { self.deref() == other.borrow() } } impl Eq for InlineArray where A: Eq {} impl PartialOrd for InlineArray where A: PartialOrd, { fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl Ord for InlineArray where A: Ord, { fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl Debug for InlineArray where A: Debug, { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("Chunk")?; f.debug_list().entries(self.iter()).finish() } } impl Hash for InlineArray where A: Hash, { fn hash(&self, hasher: &mut H) where H: Hasher, { for item in self { item.hash(hasher) } } } impl IntoIterator for InlineArray { type Item = A; type IntoIter = Iter; fn into_iter(self) -> Self::IntoIter { Iter { array: self } } } impl FromIterator for InlineArray { fn from_iter(it: I) -> Self where I: IntoIterator, { let mut chunk = Self::new(); for item in it { chunk.push(item); } chunk } } impl<'a, A, T> IntoIterator for &'a InlineArray { type Item = &'a A; type IntoIter = SliceIter<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, T> IntoIterator for &'a mut InlineArray { type Item = &'a mut A; type IntoIter = SliceIterMut<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl Extend for InlineArray { /// Append the contents of the iterator to the back of the array. /// /// Panics if the array exceeds its capacity. 
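    ///
    /// # Examples
    ///
    /// A short sketch with an arbitrary `[usize; 32]` backing type:
    ///
    /// ```rust
    /// # use sized_chunks::InlineArray;
    /// let mut array: InlineArray<usize, [usize; 32]> = InlineArray::new();
    /// array.extend(1..=3);
    /// assert_eq!(3, array.len());
    /// assert_eq!(2, array[1]);
    /// ```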
/// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push(item); } } } impl<'a, A, T> Extend<&'a A> for InlineArray where A: 'a + Copy, { /// Append the contents of the iterator to the back of the array. /// /// Panics if the array exceeds its capacity. /// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push(*item); } } } #[cfg(test)] mod test { use super::*; use crate::tests::DropTest; use std::sync::atomic::{AtomicUsize, Ordering}; #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: InlineArray, [usize; 32]> = InlineArray::new(); for _i in 0..16 { chunk.push(DropTest::new(&counter)); } assert_eq!(16, counter.load(Ordering::Relaxed)); for _i in 0..8 { chunk.pop(); } assert_eq!(8, counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } #[test] fn zero_sized_values() { let mut chunk: InlineArray<(), [usize; 32]> = InlineArray::new(); for _i in 0..65536 { chunk.push(()); } assert_eq!(65536, chunk.len()); assert_eq!(Some(()), chunk.pop()); } #[test] fn low_align_base() { let mut chunk: InlineArray = InlineArray::new(); chunk.push("Hello".to_owned()); assert_eq!(chunk[0], "Hello"); let mut chunk: InlineArray = InlineArray::new(); chunk.push("Hello".to_owned()); assert_eq!(chunk[0], "Hello"); } #[test] fn float_align() { let mut chunk: InlineArray = InlineArray::new(); chunk.push(1234.); assert_eq!(chunk[0], 1234.); let mut chunk: InlineArray = InlineArray::new(); chunk.push(1234.); assert_eq!(chunk[0], 1234.); } #[test] fn recursive_types_compile() { #[allow(dead_code)] enum Recursive { A(InlineArray), B, } } #[test] fn insufficient_alignment1() { #[repr(align(256))] struct BigAlign(u8); #[repr(align(32))] struct MediumAlign(u8); assert_eq!(0, InlineArray::::CAPACITY); assert_eq!(0, InlineArray::::CAPACITY); assert_eq!(0, InlineArray::::CAPACITY); assert_eq!(0, InlineArray::::CAPACITY); } #[test] fn insufficient_alignment2() { #[repr(align(256))] struct BigAlign(usize); let mut bad: InlineArray = InlineArray::new(); assert_eq!(0, InlineArray::::CAPACITY); assert_eq!(0, bad.len()); assert_eq!(0, bad[..].len()); assert_eq!(true, bad.is_full()); assert_eq!(0, bad.drain().count()); assert!(bad.pop().is_none()); assert!(bad.remove(0).is_none()); assert!(bad.split_off(0).is_full()); bad.clear(); } #[test] fn sufficient_alignment1() { #[repr(align(256))] struct BigAlign(u8); assert_eq!(13, InlineArray::::CAPACITY); assert_eq!(1, InlineArray::::CAPACITY); assert_eq!(0, InlineArray::::CAPACITY); let mut chunk: InlineArray = InlineArray::new(); chunk.push(BigAlign(42)); assert_eq!( chunk.get(0).unwrap() as *const _ as usize % mem::align_of::(), 0 ); } #[test] fn sufficient_alignment2() { #[repr(align(128))] struct BigAlign([u8; 64]); #[repr(align(256))] struct BiggerAlign(u8); assert_eq!(128, mem::align_of::()); assert_eq!(256, mem::align_of::()); assert_eq!(199, InlineArray::::CAPACITY); assert_eq!(3, InlineArray::::CAPACITY); assert_eq!(1, InlineArray::::CAPACITY); assert_eq!(0, InlineArray::::CAPACITY); let mut chunk: InlineArray = InlineArray::new(); chunk.push(BigAlign([0; 64])); assert_eq!( chunk.get(0).unwrap() as *const _ as usize % mem::align_of::(), 0 ); } } vendor/sized-chunks/src/inline_array/iter.rs0000664000175000017500000000326414160055207022037 0ustar mwhudsonmwhudsonuse core::iter::FusedIterator; use crate::InlineArray; /// A consuming iterator over the elements of an `InlineArray`. 
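///
/// # Examples
///
/// A minimal sketch of consuming iteration, assuming `InlineArray` is
/// exported from the crate root and using an arbitrary backing type:
///
/// ```rust
/// # use sized_chunks::InlineArray;
/// let array: InlineArray<usize, [usize; 32]> = (1..=3).collect();
/// let values: Vec<usize> = array.into_iter().collect();
/// assert_eq!(vec![1, 2, 3], values);
/// ```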
pub struct Iter { pub(crate) array: InlineArray, } impl Iterator for Iter { type Item = A; fn next(&mut self) -> Option { self.array.remove(0) } fn size_hint(&self) -> (usize, Option) { (self.array.len(), Some(self.array.len())) } } impl DoubleEndedIterator for Iter { fn next_back(&mut self) -> Option { self.array.pop() } } impl ExactSizeIterator for Iter {} impl FusedIterator for Iter {} /// A draining iterator over the elements of an `InlineArray`. /// /// "Draining" means that as the iterator yields each element, it's removed from /// the `InlineArray`. When the iterator terminates, the array will be empty. /// This is different from the consuming iterator `Iter` in that `Iter` will /// take ownership of the `InlineArray` and discard it when you're done /// iterating, while `Drain` leaves you still owning the drained `InlineArray`. pub struct Drain<'a, A, T> { pub(crate) array: &'a mut InlineArray, } impl<'a, A, T> Iterator for Drain<'a, A, T> { type Item = A; fn next(&mut self) -> Option { self.array.remove(0) } fn size_hint(&self) -> (usize, Option) { (self.array.len(), Some(self.array.len())) } } impl<'a, A, T> DoubleEndedIterator for Drain<'a, A, T> { fn next_back(&mut self) -> Option { self.array.pop() } } impl<'a, A, T> ExactSizeIterator for Drain<'a, A, T> {} impl<'a, A, T> FusedIterator for Drain<'a, A, T> {} vendor/sized-chunks/src/sparse_chunk/0000775000175000017500000000000014160055207020532 5ustar mwhudsonmwhudsonvendor/sized-chunks/src/sparse_chunk/mod.rs0000664000175000017500000003513714160055207021670 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity sparse array. //! //! See [`SparseChunk`](struct.SparseChunk.html) use core::fmt::{Debug, Error, Formatter}; use core::iter::FromIterator; use core::mem::{self, MaybeUninit}; use core::ops::Index; use core::ops::IndexMut; use core::ptr; use core::slice::{from_raw_parts, from_raw_parts_mut}; #[cfg(feature = "std")] use std::collections::{BTreeMap, HashMap}; use typenum::U64; use bitmaps::{Bitmap, Bits, Iter as BitmapIter}; use crate::types::ChunkLength; mod iter; pub use self::iter::{Drain, Iter, IterMut, OptionDrain, OptionIter, OptionIterMut}; #[cfg(feature = "refpool")] mod refpool; /// A fixed capacity sparse array. /// /// An inline sparse array of up to `N` items of type `A`, where `N` is an /// [`Unsigned`][Unsigned] type level numeral. You can think of it as an array /// of `Option`, where the discriminant (whether the value is `Some` or /// `None`) is kept in a bitmap instead of adjacent to the value. /// /// Because the bitmap is kept in a primitive type, the maximum value of `N` is /// currently 128, corresponding to a type of `u128`. The type of the bitmap /// will be the minimum unsigned integer type required to fit the number of bits /// required. Thus, disregarding memory alignment rules, the allocated size of a /// `SparseChunk` will be `uX` + `A` * `N` where `uX` is the type of the /// discriminant bitmap, either `u8`, `u16`, `u32`, `u64` or `u128`. /// /// # Examples /// /// ```rust /// # #[macro_use] extern crate sized_chunks; /// # extern crate typenum; /// # use sized_chunks::SparseChunk; /// # use typenum::U20; /// // Construct a chunk with a 20 item capacity /// let mut chunk = SparseChunk::::new(); /// // Set the 18th index to the value 5. /// chunk.insert(18, 5); /// // Set the 5th index to the value 23. 
/// chunk.insert(5, 23); /// /// assert_eq!(chunk.len(), 2); /// assert_eq!(chunk.get(5), Some(&23)); /// assert_eq!(chunk.get(6), None); /// assert_eq!(chunk.get(18), Some(&5)); /// ``` /// /// [Unsigned]: https://docs.rs/typenum/1.10.0/typenum/marker_traits/trait.Unsigned.html pub struct SparseChunk = U64> { map: Bitmap, data: MaybeUninit, } impl> Drop for SparseChunk { fn drop(&mut self) { if mem::needs_drop::() { let bits = self.map; for index in &bits { unsafe { ptr::drop_in_place(&mut self.values_mut()[index]) } } } } } impl> Clone for SparseChunk { fn clone(&self) -> Self { let mut out = Self::new(); for index in &self.map { out.insert(index, self[index].clone()); } out } } impl SparseChunk where N: Bits + ChunkLength, { /// The maximum number of elements a `SparseChunk` can contain. pub const CAPACITY: usize = N::USIZE; #[inline] fn values(&self) -> &[A] { unsafe { from_raw_parts(&self.data as *const _ as *const A, N::USIZE) } } #[inline] fn values_mut(&mut self) -> &mut [A] { unsafe { from_raw_parts_mut(&mut self.data as *mut _ as *mut A, N::USIZE) } } /// Copy the value at an index, discarding ownership of the copied value #[inline] unsafe fn force_read(index: usize, chunk: &Self) -> A { ptr::read(&chunk.values()[index as usize]) } /// Write a value at an index without trying to drop what's already there #[inline] unsafe fn force_write(index: usize, value: A, chunk: &mut Self) { ptr::write(&mut chunk.values_mut()[index as usize], value) } /// Construct a new empty chunk. pub fn new() -> Self { Self { map: Bitmap::default(), data: MaybeUninit::uninit(), } } /// Construct a new chunk with one item. pub fn unit(index: usize, value: A) -> Self { let mut chunk = Self::new(); chunk.insert(index, value); chunk } /// Construct a new chunk with two items. pub fn pair(index1: usize, value1: A, index2: usize, value2: A) -> Self { let mut chunk = Self::new(); chunk.insert(index1, value1); chunk.insert(index2, value2); chunk } /// Get the length of the chunk. #[inline] pub fn len(&self) -> usize { self.map.len() } /// Test if the chunk is empty. #[inline] pub fn is_empty(&self) -> bool { self.map.len() == 0 } /// Test if the chunk is at capacity. #[inline] pub fn is_full(&self) -> bool { self.len() == N::USIZE } /// Insert a new value at a given index. /// /// Returns the previous value at that index, if any. pub fn insert(&mut self, index: usize, value: A) -> Option { if index >= N::USIZE { panic!("SparseChunk::insert: index out of bounds"); } if self.map.set(index, true) { Some(mem::replace(&mut self.values_mut()[index], value)) } else { unsafe { SparseChunk::force_write(index, value, self) }; None } } /// Remove the value at a given index. /// /// Returns the value, or `None` if the index had no value. pub fn remove(&mut self, index: usize) -> Option { if index >= N::USIZE { panic!("SparseChunk::remove: index out of bounds"); } if self.map.set(index, false) { Some(unsafe { SparseChunk::force_read(index, self) }) } else { None } } /// Remove the first value present in the array. /// /// Returns the value that was removed, or `None` if the array was empty. pub fn pop(&mut self) -> Option { self.first_index().and_then(|index| self.remove(index)) } /// Get the value at a given index. pub fn get(&self, index: usize) -> Option<&A> { if index >= N::USIZE { return None; } if self.map.get(index) { Some(unsafe { self.get_unchecked(index) }) } else { None } } /// Get a mutable reference to the value at a given index. 
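    ///
    /// # Examples
    ///
    /// An illustrative sketch, reusing the arbitrary `U32` capacity from this
    /// module's tests:
    ///
    /// ```rust
    /// # use sized_chunks::SparseChunk;
    /// # use typenum::U32;
    /// let mut chunk: SparseChunk<i32, U32> = SparseChunk::new();
    /// chunk.insert(3, 5);
    /// if let Some(value) = chunk.get_mut(3) {
    ///     *value += 1;
    /// }
    /// assert_eq!(Some(&6), chunk.get(3));
    /// assert_eq!(None, chunk.get_mut(10));
    /// ```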
pub fn get_mut(&mut self, index: usize) -> Option<&mut A> { if index >= N::USIZE { return None; } if self.map.get(index) { Some(unsafe { self.get_unchecked_mut(index) }) } else { None } } /// Get an unchecked reference to the value at a given index. /// /// # Safety /// /// Uninhabited indices contain uninitialised data, so make sure you validate /// the index before using this method. pub unsafe fn get_unchecked(&self, index: usize) -> &A { self.values().get_unchecked(index) } /// Get an unchecked mutable reference to the value at a given index. /// /// # Safety /// /// Uninhabited indices contain uninitialised data, so make sure you validate /// the index before using this method. pub unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut A { self.values_mut().get_unchecked_mut(index) } /// Make an iterator over the indices which contain values. pub fn indices(&self) -> BitmapIter<'_, N> { self.map.into_iter() } /// Find the first index which contains a value. pub fn first_index(&self) -> Option { self.map.first_index() } /// Make an iterator of references to the values contained in the array. pub fn iter(&self) -> Iter<'_, A, N> { Iter { indices: self.indices(), chunk: self, } } /// Make an iterator of mutable references to the values contained in the /// array. pub fn iter_mut(&mut self) -> IterMut<'_, A, N> { IterMut { bitmap: self.map, chunk: self, } } /// Turn the chunk into an iterator over the values contained within it. pub fn drain(self) -> Drain { Drain { chunk: self } } /// Make an iterator of pairs of indices and references to the values /// contained in the array. pub fn entries(&self) -> impl Iterator { self.indices().zip(self.iter()) } /// Make an iterator of `Option`s of references to the values contained in the array. /// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option<&A>` for each index. pub fn option_iter(&self) -> OptionIter<'_, A, N> { OptionIter { chunk: self, index: 0, } } /// Make an iterator of `Option`s of mutable references to the values contained in the array. /// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option<&mut A>` for each index. pub fn option_iter_mut(&mut self) -> OptionIterMut<'_, A, N> { OptionIterMut { chunk: self, index: 0, } } /// Make a draining iterator of `Option's of the values contained in the array. /// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option` for each index. 
pub fn option_drain(self) -> OptionDrain { OptionDrain { chunk: self, index: 0, } } } impl> Default for SparseChunk { fn default() -> Self { Self::new() } } impl> Index for SparseChunk { type Output = A; #[inline] fn index(&self, index: usize) -> &Self::Output { self.get(index).unwrap() } } impl> IndexMut for SparseChunk { #[inline] fn index_mut(&mut self, index: usize) -> &mut Self::Output { self.get_mut(index).unwrap() } } impl> IntoIterator for SparseChunk { type Item = A; type IntoIter = Drain; #[inline] fn into_iter(self) -> Self::IntoIter { self.drain() } } impl> FromIterator> for SparseChunk { fn from_iter(iter: I) -> Self where I: IntoIterator>, { let mut out = Self::new(); for (index, value) in iter.into_iter().enumerate() { if let Some(value) = value { out.insert(index, value); } } out } } impl PartialEq for SparseChunk where A: PartialEq, N: Bits + ChunkLength, { fn eq(&self, other: &Self) -> bool { if self.map != other.map { return false; } for index in self.indices() { if self.get(index) != other.get(index) { return false; } } true } } #[cfg(feature = "std")] impl PartialEq> for SparseChunk where A: PartialEq, N: Bits + ChunkLength, { fn eq(&self, other: &BTreeMap) -> bool { if self.len() != other.len() { return false; } for index in self.indices() { if self.get(index) != other.get(&index) { return false; } } true } } #[cfg(feature = "std")] impl PartialEq> for SparseChunk where A: PartialEq, N: Bits + ChunkLength, { fn eq(&self, other: &HashMap) -> bool { if self.len() != other.len() { return false; } for index in self.indices() { if self.get(index) != other.get(&index) { return false; } } true } } impl Eq for SparseChunk where A: Eq, N: Bits + ChunkLength, { } impl Debug for SparseChunk where A: Debug, N: Bits + ChunkLength, { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("SparseChunk")?; f.debug_map().entries(self.entries()).finish() } } #[cfg(test)] mod test { use super::*; use typenum::U32; #[test] fn insert_remove_iterate() { let mut chunk: SparseChunk<_, U32> = SparseChunk::new(); assert_eq!(None, chunk.insert(5, 5)); assert_eq!(None, chunk.insert(1, 1)); assert_eq!(None, chunk.insert(24, 42)); assert_eq!(None, chunk.insert(22, 22)); assert_eq!(Some(42), chunk.insert(24, 24)); assert_eq!(None, chunk.insert(31, 31)); assert_eq!(Some(24), chunk.remove(24)); assert_eq!(4, chunk.len()); let indices: Vec<_> = chunk.indices().collect(); assert_eq!(vec![1, 5, 22, 31], indices); let values: Vec<_> = chunk.into_iter().collect(); assert_eq!(vec![1, 5, 22, 31], values); } #[test] fn clone_chunk() { let mut chunk: SparseChunk<_, U32> = SparseChunk::new(); assert_eq!(None, chunk.insert(5, 5)); assert_eq!(None, chunk.insert(1, 1)); assert_eq!(None, chunk.insert(24, 42)); assert_eq!(None, chunk.insert(22, 22)); let cloned = chunk.clone(); let right_indices: Vec<_> = chunk.indices().collect(); let left_indices: Vec<_> = cloned.indices().collect(); let right: Vec<_> = chunk.into_iter().collect(); let left: Vec<_> = cloned.into_iter().collect(); assert_eq!(left, right); assert_eq!(left_indices, right_indices); assert_eq!(vec![1, 5, 22, 24], left_indices); assert_eq!(vec![1, 5, 22, 24], right_indices); } use crate::tests::DropTest; use std::sync::atomic::{AtomicUsize, Ordering}; #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: SparseChunk> = SparseChunk::new(); for i in 0..40 { chunk.insert(i, DropTest::new(&counter)); } assert_eq!(40, counter.load(Ordering::Relaxed)); for i in 0..20 { chunk.remove(i); } assert_eq!(20, 
counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } #[test] fn equality() { let mut c1 = SparseChunk::::new(); for i in 0..32 { c1.insert(i, i); } let mut c2 = c1.clone(); assert_eq!(c1, c2); for i in 4..8 { c2.insert(i, 0); } assert_ne!(c1, c2); c2 = c1.clone(); for i in 0..16 { c2.remove(i); } assert_ne!(c1, c2); } } vendor/sized-chunks/src/sparse_chunk/refpool.rs0000664000175000017500000000274514160055207022556 0ustar mwhudsonmwhudsonuse core::mem::MaybeUninit; use bitmaps::{Bitmap, Bits}; use ::refpool::{PoolClone, PoolDefault}; use crate::types::ChunkLength; use crate::SparseChunk; impl PoolDefault for SparseChunk where N: Bits + ChunkLength, { unsafe fn default_uninit(target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let map_ptr: *mut Bitmap = &mut (*ptr).map; map_ptr.write(Bitmap::new()); } } impl PoolClone for SparseChunk where A: Clone, N: Bits + ChunkLength, { unsafe fn clone_uninit(&self, target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let map_ptr: *mut Bitmap = &mut (*ptr).map; let data_ptr: *mut _ = &mut (*ptr).data; let data_ptr: *mut A = (*data_ptr).as_mut_ptr().cast(); map_ptr.write(self.map); for index in &self.map { data_ptr.add(index).write(self[index].clone()); } } } #[cfg(test)] mod test { use super::*; use ::refpool::{Pool, PoolRef}; #[test] fn default_and_clone() { let pool: Pool> = Pool::new(16); let mut ref1 = PoolRef::default(&pool); { let chunk = PoolRef::make_mut(&pool, &mut ref1); chunk.insert(5, 13); chunk.insert(10, 37); chunk.insert(31, 337); } let ref2 = PoolRef::cloned(&pool, &ref1); assert_eq!(ref1, ref2); assert!(!PoolRef::ptr_eq(&ref1, &ref2)); } } vendor/sized-chunks/src/sparse_chunk/iter.rs0000664000175000017500000001640014160055207022044 0ustar mwhudsonmwhudsonuse bitmaps::{Bitmap, Bits, Iter as BitmapIter}; use super::SparseChunk; use crate::types::ChunkLength; /// An iterator over references to the elements of a `SparseChunk`. pub struct Iter<'a, A, N: Bits + ChunkLength> { pub(crate) indices: BitmapIter<'a, N>, pub(crate) chunk: &'a SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for Iter<'a, A, N> { type Item = &'a A; fn next(&mut self) -> Option { self.indices.next().map(|index| &self.chunk.values()[index]) } fn size_hint(&self) -> (usize, Option) { (0, Some(SparseChunk::::CAPACITY)) } } /// An iterator over mutable references to the elements of a `SparseChunk`. pub struct IterMut<'a, A, N: Bits + ChunkLength> { pub(crate) bitmap: Bitmap, pub(crate) chunk: &'a mut SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for IterMut<'a, A, N> { type Item = &'a mut A; fn next(&mut self) -> Option { if let Some(index) = self.bitmap.first_index() { self.bitmap.set(index, false); unsafe { let p: *mut A = &mut self.chunk.values_mut()[index]; Some(&mut *p) } } else { None } } fn size_hint(&self) -> (usize, Option) { (0, Some(SparseChunk::::CAPACITY)) } } /// A draining iterator over the elements of a `SparseChunk`. /// /// "Draining" means that as the iterator yields each element, it's removed from /// the `SparseChunk`. When the iterator terminates, the chunk will be empty. pub struct Drain> { pub(crate) chunk: SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for Drain { type Item = A; fn next(&mut self) -> Option { self.chunk.pop() } fn size_hint(&self) -> (usize, Option) { let len = self.chunk.len(); (len, Some(len)) } } /// An iterator over `Option`s of references to the elements of a `SparseChunk`. 
/// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option<&A>` for each index. pub struct OptionIter<'a, A, N: Bits + ChunkLength> { pub(crate) index: usize, pub(crate) chunk: &'a SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for OptionIter<'a, A, N> { type Item = Option<&'a A>; fn next(&mut self) -> Option { if self.index < N::USIZE { let result = self.chunk.get(self.index); self.index += 1; Some(result) } else { None } } fn size_hint(&self) -> (usize, Option) { ( SparseChunk::::CAPACITY - self.index, Some(SparseChunk::::CAPACITY - self.index), ) } } /// An iterator over `Option`s of mutable references to the elements of a `SparseChunk`. /// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option<&mut A>` for each index. pub struct OptionIterMut<'a, A, N: Bits + ChunkLength> { pub(crate) index: usize, pub(crate) chunk: &'a mut SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for OptionIterMut<'a, A, N> { type Item = Option<&'a mut A>; fn next(&mut self) -> Option { if self.index < N::USIZE { let result = if self.chunk.map.get(self.index) { unsafe { let p: *mut A = &mut self.chunk.values_mut()[self.index]; Some(Some(&mut *p)) } } else { Some(None) }; self.index += 1; result } else { None } } fn size_hint(&self) -> (usize, Option) { ( SparseChunk::::CAPACITY - self.index, Some(SparseChunk::::CAPACITY - self.index), ) } } /// A draining iterator over `Option`s of the elements of a `SparseChunk`. /// /// Iterates over every index in the `SparseChunk`, from zero to its full capacity, /// returning an `Option` for each index. pub struct OptionDrain> { pub(crate) index: usize, pub(crate) chunk: SparseChunk, } impl<'a, A, N: Bits + ChunkLength> Iterator for OptionDrain { type Item = Option; fn next(&mut self) -> Option { if self.index < N::USIZE { let result = self.chunk.remove(self.index); self.index += 1; Some(result) } else { None } } fn size_hint(&self) -> (usize, Option) { ( SparseChunk::::CAPACITY - self.index, Some(SparseChunk::::CAPACITY - self.index), ) } } #[cfg(test)] mod test { use super::*; use std::iter::FromIterator; use typenum::U64; #[test] fn iter() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let chunk: SparseChunk = vec.iter().cloned().collect(); let vec: Vec = vec .iter() .cloned() .filter(|v| v.is_some()) .map(|v| v.unwrap()) .collect(); assert!(vec.iter().eq(chunk.iter())); } #[test] fn iter_mut() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let mut chunk: SparseChunk<_, U64> = vec.iter().cloned().collect(); let mut vec: Vec = vec .iter() .cloned() .filter(|v| v.is_some()) .map(|v| v.unwrap()) .collect(); assert!(vec.iter_mut().eq(chunk.iter_mut())); } #[test] fn drain() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let chunk: SparseChunk<_, U64> = vec.iter().cloned().collect(); let vec: Vec = vec .iter() .cloned() .filter(|v| v.is_some()) .map(|v| v.unwrap()) .collect(); assert!(vec.into_iter().eq(chunk.into_iter())); } #[test] fn option_iter() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let chunk: SparseChunk<_, U64> = vec.iter().cloned().collect(); assert!(vec .iter() .cloned() .eq(chunk.option_iter().map(|v| v.cloned()))); } #[test] fn option_iter_mut() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let mut 
chunk: SparseChunk<_, U64> = vec.iter().cloned().collect(); assert!(vec .iter() .cloned() .eq(chunk.option_iter_mut().map(|v| v.cloned()))); } #[test] fn option_drain() { let vec: Vec> = Vec::from_iter((0..64).map(|i| if i % 2 == 0 { Some(i) } else { None })); let chunk: SparseChunk<_, U64> = vec.iter().cloned().collect(); assert!(vec.iter().cloned().eq(chunk.option_drain())); } } vendor/sized-chunks/src/sized_chunk/0000775000175000017500000000000014160055207020353 5ustar mwhudsonmwhudsonvendor/sized-chunks/src/sized_chunk/mod.rs0000664000175000017500000012054114160055207021503 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity smart array. //! //! See [`Chunk`](struct.Chunk.html) use crate::inline_array::InlineArray; use core::borrow::{Borrow, BorrowMut}; use core::cmp::Ordering; use core::fmt::{Debug, Error, Formatter}; use core::hash::{Hash, Hasher}; use core::iter::FromIterator; use core::mem::{replace, MaybeUninit}; use core::ops::{Deref, DerefMut, Index, IndexMut}; use core::ptr; use core::slice::{ from_raw_parts, from_raw_parts_mut, Iter as SliceIter, IterMut as SliceIterMut, SliceIndex, }; #[cfg(feature = "std")] use std::io; use typenum::U64; use crate::types::ChunkLength; mod iter; pub use self::iter::{Drain, Iter}; #[cfg(feature = "refpool")] mod refpool; /// A fixed capacity smart array. /// /// An inline array of items with a variable length but a fixed, preallocated /// capacity given by the `N` type, which must be an [`Unsigned`][Unsigned] type /// level numeral. /// /// It's 'smart' because it's able to reorganise its contents based on expected /// behaviour. If you construct one using `push_back`, it will be laid out like /// a `Vec` with space at the end. If you `push_front` it will start filling in /// values from the back instead of the front, so that you still get linear time /// push as long as you don't reverse direction. If you do, and there's no room /// at the end you're pushing to, it'll shift its contents over to the other /// side, creating more space to push into. This technique is tuned for /// `Chunk`'s expected use case in [im::Vector]: usually, chunks always see /// either `push_front` or `push_back`, but not both unless they move around /// inside the tree, in which case they're able to reorganise themselves with /// reasonable efficiency to suit their new usage patterns. /// /// It maintains a `left` index and a `right` index instead of a simple length /// counter in order to accomplish this, much like a ring buffer would, except /// that the `Chunk` keeps all its items sequentially in memory so that you can /// always get a `&[A]` slice for them, at the price of the occasional /// reordering operation. The allocated size of a `Chunk` is thus `usize` * 2 + /// `A` * `N`. /// /// This technique also lets us choose to shift the shortest side to account for /// the inserted or removed element when performing insert and remove /// operations, unlike `Vec` where you always need to shift the right hand side. /// /// Unlike a `Vec`, the `Chunk` has a fixed capacity and cannot grow beyond it. /// Being intended for low level use, it expects you to know or test whether /// you're pushing to a full array, and has an API more geared towards panics /// than returning `Option`s, on the assumption that you know what you're doing. 
/// Of course, if you don't, you can expect it to panic immediately rather than /// do something undefined and usually bad. /// /// ## Isn't this just a less efficient ring buffer? /// /// You might be wondering why you would want to use this data structure rather /// than a [`RingBuffer`][RingBuffer], which is similar but doesn't need to /// shift its content around when it hits the sides of the allocated buffer. The /// answer is that `Chunk` can be dereferenced into a slice, while a ring buffer /// can not. You'll also save a few cycles on index lookups, as a `Chunk`'s data /// is guaranteed to be contiguous in memory, so there's no need to remap logical /// indices to a ring buffer's physical layout. /// /// # Examples /// /// ```rust /// # #[macro_use] extern crate sized_chunks; /// # extern crate typenum; /// # use sized_chunks::Chunk; /// # use typenum::U64; /// // Construct a chunk with a 64 item capacity /// let mut chunk = Chunk::::new(); /// // Fill it with descending numbers /// chunk.extend((0..64).rev()); /// // It derefs to a slice so we can use standard slice methods /// chunk.sort(); /// // It's got all the amenities like `FromIterator` and `Eq` /// let expected: Chunk = (0..64).collect(); /// assert_eq!(expected, chunk); /// ``` /// /// [Unsigned]: https://docs.rs/typenum/1.10.0/typenum/marker_traits/trait.Unsigned.html /// [im::Vector]: https://docs.rs/im/latest/im/vector/enum.Vector.html /// [RingBuffer]: ../ring_buffer/struct.RingBuffer.html pub struct Chunk where N: ChunkLength, { left: usize, right: usize, data: MaybeUninit, } impl Drop for Chunk where N: ChunkLength, { fn drop(&mut self) { unsafe { ptr::drop_in_place(self.as_mut_slice()) } } } impl Clone for Chunk where A: Clone, N: ChunkLength, { fn clone(&self) -> Self { let mut out = Self::new(); out.left = self.left; out.right = self.left; for index in self.left..self.right { unsafe { Chunk::force_write(index, (*self.ptr(index)).clone(), &mut out) } // Panic safety, move the right index to cover only the really initialized things. This // way we don't try to drop uninitialized, but also don't leak if we panic in the // middle. out.right = index + 1; } out } } impl Chunk where N: ChunkLength, { /// The maximum number of elements this `Chunk` can contain. pub const CAPACITY: usize = N::USIZE; /// Construct a new empty chunk. pub fn new() -> Self { Self { left: 0, right: 0, data: MaybeUninit::uninit(), } } /// Construct a new chunk with one item. pub fn unit(value: A) -> Self { assert!(Self::CAPACITY >= 1); let mut chunk = Self { left: 0, right: 1, data: MaybeUninit::uninit(), }; unsafe { Chunk::force_write(0, value, &mut chunk); } chunk } /// Construct a new chunk with two items. pub fn pair(left: A, right: A) -> Self { assert!(Self::CAPACITY >= 2); let mut chunk = Self { left: 0, right: 2, data: MaybeUninit::uninit(), }; unsafe { Chunk::force_write(0, left, &mut chunk); Chunk::force_write(1, right, &mut chunk); } chunk } /// Construct a new chunk and move every item from `other` into the new /// chunk. /// /// Time: O(n) pub fn drain_from(other: &mut Self) -> Self { let other_len = other.len(); Self::from_front(other, other_len) } /// Construct a new chunk and populate it by taking `count` items from the /// iterator `iter`. /// /// Panics if the iterator contains less than `count` items. 
/// /// Time: O(n) pub fn collect_from(iter: &mut I, mut count: usize) -> Self where I: Iterator, { let mut chunk = Self::new(); while count > 0 { count -= 1; chunk.push_back( iter.next() .expect("Chunk::collect_from: underfull iterator"), ); } chunk } /// Construct a new chunk and populate it by taking `count` items from the /// front of `other`. /// /// Time: O(n) for the number of items moved pub fn from_front(other: &mut Self, count: usize) -> Self { let other_len = other.len(); debug_assert!(count <= other_len); let mut chunk = Self::new(); unsafe { Chunk::force_copy_to(other.left, 0, count, other, &mut chunk) }; chunk.right = count; other.left += count; chunk } /// Construct a new chunk and populate it by taking `count` items from the /// back of `other`. /// /// Time: O(n) for the number of items moved pub fn from_back(other: &mut Self, count: usize) -> Self { let other_len = other.len(); debug_assert!(count <= other_len); let mut chunk = Self::new(); unsafe { Chunk::force_copy_to(other.right - count, 0, count, other, &mut chunk) }; chunk.right = count; other.right -= count; chunk } /// Get the length of the chunk. #[inline] pub fn len(&self) -> usize { self.right - self.left } /// Test if the chunk is empty. #[inline] pub fn is_empty(&self) -> bool { self.left == self.right } /// Test if the chunk is at capacity. #[inline] pub fn is_full(&self) -> bool { self.left == 0 && self.right == Self::CAPACITY } #[inline] unsafe fn ptr(&self, index: usize) -> *const A { (&self.data as *const _ as *const A).add(index) } /// It has no bounds checks #[inline] unsafe fn mut_ptr(&mut self, index: usize) -> *mut A { (&mut self.data as *mut _ as *mut A).add(index) } /// Copy the value at an index, discarding ownership of the copied value #[inline] unsafe fn force_read(index: usize, chunk: &mut Self) -> A { chunk.ptr(index).read() } /// Write a value at an index without trying to drop what's already there. /// It has no bounds checks. #[inline] unsafe fn force_write(index: usize, value: A, chunk: &mut Self) { chunk.mut_ptr(index).write(value) } /// Copy a range within a chunk #[inline] unsafe fn force_copy(from: usize, to: usize, count: usize, chunk: &mut Self) { if count > 0 { ptr::copy(chunk.ptr(from), chunk.mut_ptr(to), count) } } /// Write values from iterator into range starting at write_index. /// /// Will overwrite values at the relevant range without dropping even in case the values were /// already initialized (it is expected they are empty). Does not update the left or right /// index. /// /// # Safety /// /// Range checks must already have been performed. /// /// # Panics /// /// If the iterator panics, the chunk becomes conceptually empty and will leak any previous /// elements (even the ones outside the range). #[inline] unsafe fn write_from_iter(mut write_index: usize, iter: I, chunk: &mut Self) where I: ExactSizeIterator, { // Panic safety. We make the array conceptually empty, so we never ever drop anything that // is unitialized. We do so because we expect to be called when there's a potential "hole" // in the array that makes the space for the new elements to be written. We return it back // to original when everything goes fine, but leak any elements on panic. This is bad, but // better than dropping non-existing stuff. // // Should we worry about some better panic recovery than this? 
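// Concretely: `left` and `right` are stashed and zeroed up front, so a panicking
// iterator leaves behind a chunk that merely looks empty; the new values are then
// written in place, the advertised length is verified, and only after that are
// the indices restored.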
let left = replace(&mut chunk.left, 0); let right = replace(&mut chunk.right, 0); let len = iter.len(); let expected_end = write_index + len; for value in iter.take(len) { Chunk::force_write(write_index, value, chunk); write_index += 1; } // Oops, we have a hole in here now. That would be bad, give up. assert_eq!( expected_end, write_index, "ExactSizeIterator yielded fewer values than advertised", ); chunk.left = left; chunk.right = right; } /// Copy a range between chunks #[inline] unsafe fn force_copy_to( from: usize, to: usize, count: usize, chunk: &mut Self, other: &mut Self, ) { if count > 0 { ptr::copy_nonoverlapping(chunk.ptr(from), other.mut_ptr(to), count) } } /// Push an item to the front of the chunk. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(1) if there's room at the front, O(n) otherwise pub fn push_front(&mut self, value: A) { if self.is_full() { panic!("Chunk::push_front: can't push to full chunk"); } if self.is_empty() { self.left = N::USIZE; self.right = N::USIZE; } else if self.left == 0 { self.left = N::USIZE - self.right; unsafe { Chunk::force_copy(0, self.left, self.right, self) }; self.right = N::USIZE; } self.left -= 1; unsafe { Chunk::force_write(self.left, value, self) } } /// Push an item to the back of the chunk. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(1) if there's room at the back, O(n) otherwise pub fn push_back(&mut self, value: A) { if self.is_full() { panic!("Chunk::push_back: can't push to full chunk"); } if self.is_empty() { self.left = 0; self.right = 0; } else if self.right == N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self.len(), self) }; self.right = N::USIZE - self.left; self.left = 0; } unsafe { Chunk::force_write(self.right, value, self) } self.right += 1; } /// Pop an item off the front of the chunk. /// /// Panics if the chunk is empty. /// /// Time: O(1) pub fn pop_front(&mut self) -> A { if self.is_empty() { panic!("Chunk::pop_front: can't pop from empty chunk"); } else { let value = unsafe { Chunk::force_read(self.left, self) }; self.left += 1; value } } /// Pop an item off the back of the chunk. /// /// Panics if the chunk is empty. /// /// Time: O(1) pub fn pop_back(&mut self) -> A { if self.is_empty() { panic!("Chunk::pop_back: can't pop from empty chunk"); } else { self.right -= 1; unsafe { Chunk::force_read(self.right, self) } } } /// Discard all items up to but not including `index`. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_left(&mut self, index: usize) { if index > 0 { unsafe { ptr::drop_in_place(&mut self[..index]) } self.left += index; } } /// Discard all items from `index` onward. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_right(&mut self, index: usize) { if index != self.len() { unsafe { ptr::drop_in_place(&mut self[index..]) } self.right = self.left + index; } } /// Split a chunk into two, the original chunk containing /// everything up to `index` and the returned chunk containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. 
/// /// Time: O(n) for the number of items in the new chunk pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("Chunk::split_off: index out of bounds"); } if index == self.len() { return Self::new(); } let mut right_chunk = Self::new(); let start = self.left + index; let len = self.right - start; unsafe { Chunk::force_copy_to(start, 0, len, self, &mut right_chunk) }; right_chunk.right = len; self.right = start; right_chunk } /// Remove all items from `other` and append them to the back of `self`. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(n) for the number of items moved pub fn append(&mut self, other: &mut Self) { let self_len = self.len(); let other_len = other.len(); if self_len + other_len > N::USIZE { panic!("Chunk::append: chunk size overflow"); } if self.right + other_len > N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self_len, self) }; self.right -= self.left; self.left = 0; } unsafe { Chunk::force_copy_to(other.left, self.right, other_len, other, self) }; self.right += other_len; other.left = 0; other.right = 0; } /// Remove `count` items from the front of `other` and append them to the /// back of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_front(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); assert!(self_len + count <= N::USIZE); assert!(other_len >= count); if self.right + count > N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self_len, self) }; self.right -= self.left; self.left = 0; } unsafe { Chunk::force_copy_to(other.left, self.right, count, other, self) }; self.right += count; other.left += count; } /// Remove `count` items from the back of `other` and append them to the /// front of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_back(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); assert!(self_len + count <= N::USIZE); assert!(other_len >= count); if self.left < count { unsafe { Chunk::force_copy(self.left, N::USIZE - self_len, self_len, self) }; self.left = N::USIZE - self_len; self.right = N::USIZE; } unsafe { Chunk::force_copy_to(other.right - count, self.left - count, count, other, self) }; self.left -= count; other.right -= count; } /// Update the value at index `index`, returning the old value. /// /// Panics if `index` is out of bounds. /// /// Time: O(1) pub fn set(&mut self, index: usize, value: A) -> A { replace(&mut self[index], value) } /// Insert a new value at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds or the chunk is full. 
/// /// Time: O(n) for the number of elements shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("Chunk::insert: chunk is full"); } if index > self.len() { panic!("Chunk::insert: index out of bounds"); } let real_index = index + self.left; let left_size = index; let right_size = self.right - real_index; if self.right == N::USIZE || (self.left > 0 && left_size < right_size) { unsafe { Chunk::force_copy(self.left, self.left - 1, left_size, self); Chunk::force_write(real_index - 1, value, self); } self.left -= 1; } else { unsafe { Chunk::force_copy(real_index, real_index + 1, right_size, self); Chunk::force_write(real_index, value, self); } self.right += 1; } } /// Insert a new value into the chunk in sorted order. /// /// This assumes every element of the chunk is already in sorted order. /// If not, the value will still be inserted but the ordering is not /// guaranteed. /// /// Time: O(log n) to find the insert position, then O(n) for the number /// of elements shifted. /// /// # Examples /// /// ```rust /// # use std::iter::FromIterator; /// # use sized_chunks::Chunk; /// # use typenum::U64; /// let mut chunk = Chunk::::from_iter(0..5); /// chunk.insert_ordered(3); /// assert_eq!(&[0, 1, 2, 3, 3, 4], chunk.as_slice()); /// ``` pub fn insert_ordered(&mut self, value: A) where A: Ord, { if self.is_full() { panic!("Chunk::insert: chunk is full"); } match self.binary_search(&value) { Ok(index) => self.insert(index, value), Err(index) => self.insert(index, value), } } /// Insert multiple values at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds or the chunk doesn't have room for /// all the values. /// /// Time: O(m+n) where m is the number of elements inserted and n is the number /// of elements following the insertion index. Calling `insert` /// repeatedly would be O(m*n). pub fn insert_from(&mut self, index: usize, iter: Iterable) where Iterable: IntoIterator, I: ExactSizeIterator, { let iter = iter.into_iter(); let insert_size = iter.len(); if self.len() + insert_size > Self::CAPACITY { panic!( "Chunk::insert_from: chunk cannot fit {} elements", insert_size ); } if index > self.len() { panic!("Chunk::insert_from: index out of bounds"); } let real_index = index + self.left; let left_size = index; let right_size = self.right - real_index; if self.right == N::USIZE || (self.left >= insert_size && left_size < right_size) { unsafe { Chunk::force_copy(self.left, self.left - insert_size, left_size, self); let write_index = real_index - insert_size; Chunk::write_from_iter(write_index, iter, self); } self.left -= insert_size; } else if self.left == 0 || (self.right + insert_size <= Self::CAPACITY) { unsafe { Chunk::force_copy(real_index, real_index + insert_size, right_size, self); let write_index = real_index; Chunk::write_from_iter(write_index, iter, self); } self.right += insert_size; } else { unsafe { Chunk::force_copy(self.left, 0, left_size, self); Chunk::force_copy(real_index, left_size + insert_size, right_size, self); let write_index = left_size; Chunk::write_from_iter(write_index, iter, self); } self.right -= self.left; self.right += insert_size; self.left = 0; } } /// Remove the value at index `index`, shifting all the following values to /// the left. /// /// Returns the removed value. /// /// Panics if the index is out of bounds. 
/// /// Time: O(n) for the number of items shifted pub fn remove(&mut self, index: usize) -> A { if index >= self.len() { panic!("Chunk::remove: index out of bounds"); } let real_index = index + self.left; let value = unsafe { Chunk::force_read(real_index, self) }; let left_size = index; let right_size = self.right - real_index - 1; if left_size < right_size { unsafe { Chunk::force_copy(self.left, self.left + 1, left_size, self) }; self.left += 1; } else { unsafe { Chunk::force_copy(real_index + 1, real_index, right_size, self) }; self.right -= 1; } value } /// Construct an iterator that drains values from the front of the chunk. pub fn drain(&mut self) -> Drain<'_, A, N> { Drain { chunk: self } } /// Discard the contents of the chunk. /// /// Time: O(n) pub fn clear(&mut self) { unsafe { ptr::drop_in_place(self.as_mut_slice()) } self.left = 0; self.right = 0; } /// Get a reference to the contents of the chunk as a slice. pub fn as_slice(&self) -> &[A] { unsafe { from_raw_parts( (&self.data as *const MaybeUninit as *const A).add(self.left), self.len(), ) } } /// Get a reference to the contents of the chunk as a mutable slice. pub fn as_mut_slice(&mut self) -> &mut [A] { unsafe { from_raw_parts_mut( (&mut self.data as *mut MaybeUninit as *mut A).add(self.left), self.len(), ) } } } impl Default for Chunk where N: ChunkLength, { fn default() -> Self { Self::new() } } impl Index for Chunk where I: SliceIndex<[A]>, N: ChunkLength, { type Output = I::Output; fn index(&self, index: I) -> &Self::Output { self.as_slice().index(index) } } impl IndexMut for Chunk where I: SliceIndex<[A]>, N: ChunkLength, { fn index_mut(&mut self, index: I) -> &mut Self::Output { self.as_mut_slice().index_mut(index) } } impl Debug for Chunk where A: Debug, N: ChunkLength, { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { f.write_str("Chunk")?; f.debug_list().entries(self.iter()).finish() } } impl Hash for Chunk where A: Hash, N: ChunkLength, { fn hash(&self, hasher: &mut H) where H: Hasher, { for item in self { item.hash(hasher) } } } impl PartialEq for Chunk where Slice: Borrow<[A]>, A: PartialEq, N: ChunkLength, { fn eq(&self, other: &Slice) -> bool { self.as_slice() == other.borrow() } } impl Eq for Chunk where A: Eq, N: ChunkLength, { } impl PartialOrd for Chunk where A: PartialOrd, N: ChunkLength, { fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl Ord for Chunk where A: Ord, N: ChunkLength, { fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } #[cfg(feature = "std")] impl io::Write for Chunk where N: ChunkLength, { fn write(&mut self, buf: &[u8]) -> io::Result { let old_len = self.len(); self.extend(buf.iter().cloned().take(N::USIZE - old_len)); Ok(self.len() - old_len) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } #[cfg(feature = "std")] impl> std::io::Read for Chunk { fn read(&mut self, buf: &mut [u8]) -> std::io::Result { let read_size = buf.len().min(self.len()); if read_size == 0 { Ok(0) } else { for p in buf.iter_mut().take(read_size) { *p = self.pop_front(); } Ok(read_size) } } } impl From> for Chunk where N: ChunkLength, { #[inline] fn from(mut array: InlineArray) -> Self { Self::from(&mut array) } } impl<'a, A, N, T> From<&'a mut InlineArray> for Chunk where N: ChunkLength, { fn from(array: &mut InlineArray) -> Self { // The first capacity comparison is to help optimize it out assert!( InlineArray::::CAPACITY <= Self::CAPACITY || array.len() <= Self::CAPACITY, "CAPACITY too small" ); let mut out = Self::new(); 
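// The contents are moved with a single non-overlapping copy, and the source
// array's length is zeroed immediately afterwards so the moved values cannot
// be dropped twice.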
out.left = 0; out.right = array.len(); unsafe { ptr::copy_nonoverlapping(array.data(), out.mut_ptr(0), out.right); *array.len_mut() = 0; } out } } impl Borrow<[A]> for Chunk where N: ChunkLength, { fn borrow(&self) -> &[A] { self.as_slice() } } impl BorrowMut<[A]> for Chunk where N: ChunkLength, { fn borrow_mut(&mut self) -> &mut [A] { self.as_mut_slice() } } impl AsRef<[A]> for Chunk where N: ChunkLength, { fn as_ref(&self) -> &[A] { self.as_slice() } } impl AsMut<[A]> for Chunk where N: ChunkLength, { fn as_mut(&mut self) -> &mut [A] { self.as_mut_slice() } } impl Deref for Chunk where N: ChunkLength, { type Target = [A]; fn deref(&self) -> &Self::Target { self.as_slice() } } impl DerefMut for Chunk where N: ChunkLength, { fn deref_mut(&mut self) -> &mut Self::Target { self.as_mut_slice() } } impl FromIterator for Chunk where N: ChunkLength, { fn from_iter(it: I) -> Self where I: IntoIterator, { let mut chunk = Self::new(); for item in it { chunk.push_back(item); } chunk } } impl<'a, A, N> IntoIterator for &'a Chunk where N: ChunkLength, { type Item = &'a A; type IntoIter = SliceIter<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, N> IntoIterator for &'a mut Chunk where N: ChunkLength, { type Item = &'a mut A; type IntoIter = SliceIterMut<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl Extend for Chunk where N: ChunkLength, { /// Append the contents of the iterator to the back of the chunk. /// /// Panics if the chunk exceeds its capacity. /// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push_back(item); } } } impl<'a, A, N> Extend<&'a A> for Chunk where A: 'a + Copy, N: ChunkLength, { /// Append the contents of the iterator to the back of the chunk. /// /// Panics if the chunk exceeds its capacity. 
/// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push_back(*item); } } } impl IntoIterator for Chunk where N: ChunkLength, { type Item = A; type IntoIter = Iter; fn into_iter(self) -> Self::IntoIter { Iter { chunk: self } } } #[cfg(test)] #[rustfmt::skip] mod test { use super::*; use typenum::{U0, U1, U2, U3, U5}; #[test] #[should_panic(expected = "Chunk::push_back: can't push to full chunk")] fn issue_11_testcase1d() { let mut chunk = Chunk::::pair(123, 456); chunk.push_back(789); } #[test] #[should_panic(expected = "CAPACITY too small")] fn issue_11_testcase2a() { let mut from = InlineArray::::new(); from.push(1); let _ = Chunk::::from(from); } #[test] fn issue_11_testcase2b() { let mut from = InlineArray::::new(); from.push(1); let _ = Chunk::::from(from); } struct DropDetector(u32); impl DropDetector { fn new(num: u32) -> Self { DropDetector(num) } } impl Drop for DropDetector { fn drop(&mut self) { assert!(self.0 == 42 || self.0 == 43); } } impl Clone for DropDetector { fn clone(&self) -> Self { if self.0 == 42 { panic!("panic on clone") } DropDetector::new(self.0) } } /// This is for miri to catch #[test] fn issue_11_testcase3a() { let mut chunk = Chunk::::new(); chunk.push_back(DropDetector::new(42)); chunk.push_back(DropDetector::new(42)); chunk.push_back(DropDetector::new(43)); let _ = chunk.pop_front(); let _ = std::panic::catch_unwind(|| { let _ = chunk.clone(); }); } struct PanickingIterator { current: u32, panic_at: u32, len: usize, } impl Iterator for PanickingIterator { type Item = DropDetector; fn next(&mut self) -> Option { let num = self.current; if num == self.panic_at { panic!("panicking index") } self.current += 1; Some(DropDetector::new(num)) } fn size_hint(&self) -> (usize, Option) { (self.len, Some(self.len)) } } impl ExactSizeIterator for PanickingIterator {} #[test] fn issue_11_testcase3b() { let _ = std::panic::catch_unwind(|| { let mut chunk = Chunk::::new(); chunk.push_back(DropDetector::new(1)); chunk.push_back(DropDetector::new(2)); chunk.push_back(DropDetector::new(3)); chunk.insert_from( 1, PanickingIterator { current: 1, panic_at: 1, len: 1, }, ); }); } struct FakeSizeIterator { reported: usize, actual: usize } impl Iterator for FakeSizeIterator { type Item = u8; fn next(&mut self) -> Option { if self.actual == 0 { None } else { self.actual -= 1; Some(1) } } fn size_hint(&self) -> (usize, Option) { (self.reported, Some(self.reported)) } } impl ExactSizeIterator for FakeSizeIterator { fn len(&self) -> usize { self.reported } } #[test] fn iterator_too_long() { let mut chunk = Chunk::::new(); chunk.push_back(0); chunk.push_back(1); chunk.push_back(2); chunk.insert_from(1, FakeSizeIterator { reported: 1, actual: 10 }); let mut chunk = Chunk::::new(); chunk.push_back(1); chunk.insert_from(0, FakeSizeIterator { reported: 1, actual: 10 }); let mut chunk = Chunk::::new(); chunk.insert_from(0, FakeSizeIterator { reported: 1, actual: 10 }); } #[test] #[should_panic(expected = "ExactSizeIterator yielded fewer values than advertised")] fn iterator_too_short1() { let mut chunk = Chunk::::new(); chunk.push_back(0); chunk.push_back(1); chunk.push_back(2); chunk.insert_from(1, FakeSizeIterator { reported: 2, actual: 0 }); } #[test] #[should_panic(expected = "ExactSizeIterator yielded fewer values than advertised")] fn iterator_too_short2() { let mut chunk = Chunk::::new(); chunk.push_back(1); chunk.insert_from(1, FakeSizeIterator { reported: 4, actual: 2 }); } #[test] fn is_full() { let mut chunk 
= Chunk::<_, U64>::new(); for i in 0..64 { assert_eq!(false, chunk.is_full()); chunk.push_back(i); } assert_eq!(true, chunk.is_full()); } #[test] fn push_back_front() { let mut chunk = Chunk::<_, U64>::new(); for i in 12..20 { chunk.push_back(i); } assert_eq!(8, chunk.len()); for i in (0..12).rev() { chunk.push_front(i); } assert_eq!(20, chunk.len()); for i in 20..32 { chunk.push_back(i); } assert_eq!(32, chunk.len()); let right: Vec = chunk.into_iter().collect(); let left: Vec = (0..32).collect(); assert_eq!(left, right); } #[test] fn push_and_pop() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } for i in 0..64 { assert_eq!(i, chunk.pop_front()); } for i in 0..64 { chunk.push_front(i); } for i in 0..64 { assert_eq!(i, chunk.pop_back()); } } #[test] fn drop_left() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..6 { chunk.push_back(i); } chunk.drop_left(3); let vec: Vec = chunk.into_iter().collect(); assert_eq!(vec![3, 4, 5], vec); } #[test] fn drop_right() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..6 { chunk.push_back(i); } chunk.drop_right(3); let vec: Vec = chunk.into_iter().collect(); assert_eq!(vec![0, 1, 2], vec); } #[test] fn split_off() { let mut left = Chunk::<_, U64>::new(); for i in 0..6 { left.push_back(i); } let right = left.split_off(3); let left_vec: Vec = left.into_iter().collect(); let right_vec: Vec = right.into_iter().collect(); assert_eq!(vec![0, 1, 2], left_vec); assert_eq!(vec![3, 4, 5], right_vec); } #[test] fn append() { let mut left = Chunk::<_, U64>::new(); for i in 0..32 { left.push_back(i); } let mut right = Chunk::<_, U64>::new(); for i in (32..64).rev() { right.push_front(i); } left.append(&mut right); let out_vec: Vec = left.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn ref_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec<&i32> = chunk.iter().collect(); let should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&i32> = should_vec_p.iter().collect(); assert_eq!(should_vec, out_vec); } #[test] fn mut_ref_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec<&mut i32> = chunk.iter_mut().collect(); let mut should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&mut i32> = should_vec_p.iter_mut().collect(); assert_eq!(should_vec, out_vec); } #[test] fn consuming_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_middle() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..32 { chunk.push_back(i); } for i in 33..64 { chunk.push_back(i); } chunk.insert(32, 32); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_back() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..63 { chunk.push_back(i); } chunk.insert(63, 63); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_front() { let mut chunk = Chunk::<_, U64>::new(); for i in 1..64 { chunk.push_front(64 - i); } chunk.insert(0, 0); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn remove_value() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { 
chunk.push_back(i); } chunk.remove(32); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..32).chain(33..64).collect(); assert_eq!(should_vec, out_vec); } use crate::tests::DropTest; use std::sync::atomic::{AtomicUsize, Ordering}; #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: Chunk> = Chunk::new(); for _i in 0..20 { chunk.push_back(DropTest::new(&counter)) } for _i in 0..20 { chunk.push_front(DropTest::new(&counter)) } assert_eq!(40, counter.load(Ordering::Relaxed)); for _i in 0..10 { chunk.pop_back(); } assert_eq!(30, counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } #[test] #[should_panic(expected = "assertion failed: Self::CAPACITY >= 1")] fn unit_on_empty() { Chunk::::unit(1); } #[test] #[should_panic(expected = "assertion failed: Self::CAPACITY >= 2")] fn pair_on_empty() { Chunk::::pair(1, 2); } } vendor/sized-chunks/src/sized_chunk/refpool.rs0000664000175000017500000000366114160055207022375 0ustar mwhudsonmwhudsonuse core::mem::MaybeUninit; use ::refpool::{PoolClone, PoolDefault}; use crate::types::ChunkLength; use crate::Chunk; impl PoolDefault for Chunk where N: ChunkLength, { unsafe fn default_uninit(target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let left_ptr: *mut usize = &mut (*ptr).left; let right_ptr: *mut usize = &mut (*ptr).right; left_ptr.write(0); right_ptr.write(0); } } impl PoolClone for Chunk where A: Clone, N: ChunkLength, { unsafe fn clone_uninit(&self, target: &mut MaybeUninit) { let ptr = target.as_mut_ptr(); let left_ptr: *mut usize = &mut (*ptr).left; let right_ptr: *mut usize = &mut (*ptr).right; let data_ptr: *mut _ = &mut (*ptr).data; let data_ptr: *mut A = (*data_ptr).as_mut_ptr().cast(); left_ptr.write(self.left); right_ptr.write(self.right); for index in self.left..self.right { data_ptr.add(index).write((*self.ptr(index)).clone()); } } } #[cfg(test)] mod test { use super::*; use ::refpool::{Pool, PoolRef}; use std::iter::FromIterator; #[test] fn default_and_clone() { let pool: Pool> = Pool::new(16); let mut ref1 = PoolRef::default(&pool); { let chunk = PoolRef::make_mut(&pool, &mut ref1); chunk.push_back(1); chunk.push_back(2); chunk.push_back(3); } let ref2 = PoolRef::cloned(&pool, &ref1); let ref3 = PoolRef::clone_from(&pool, &Chunk::from_iter(1..=3)); assert_eq!(Chunk::::from_iter(1..=3), *ref1); assert_eq!(Chunk::::from_iter(1..=3), *ref2); assert_eq!(Chunk::::from_iter(1..=3), *ref3); assert_eq!(ref1, ref2); assert_eq!(ref1, ref3); assert_eq!(ref2, ref3); assert!(!PoolRef::ptr_eq(&ref1, &ref2)); } } vendor/sized-chunks/src/sized_chunk/iter.rs0000664000175000017500000000455414160055207021674 0ustar mwhudsonmwhudsonuse core::iter::FusedIterator; use super::Chunk; use crate::types::ChunkLength; /// A consuming iterator over the elements of a `Chunk`. pub struct Iter where N: ChunkLength, { pub(crate) chunk: Chunk, } impl Iterator for Iter where N: ChunkLength, { type Item = A; fn next(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_front()) } } fn size_hint(&self) -> (usize, Option) { (self.chunk.len(), Some(self.chunk.len())) } } impl DoubleEndedIterator for Iter where N: ChunkLength, { fn next_back(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_back()) } } } impl ExactSizeIterator for Iter where N: ChunkLength {} impl FusedIterator for Iter where N: ChunkLength {} /// A draining iterator over the elements of a `Chunk`. 
/// /// "Draining" means that as the iterator yields each element, it's removed from /// the `Chunk`. When the iterator terminates, the chunk will be empty. This is /// different from the consuming iterator `Iter` in that `Iter` will take /// ownership of the `Chunk` and discard it when you're done iterating, while /// `Drain` leaves you still owning the drained `Chunk`. pub struct Drain<'a, A, N> where N: ChunkLength, { pub(crate) chunk: &'a mut Chunk, } impl<'a, A, N> Iterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { type Item = A; fn next(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_front()) } } fn size_hint(&self) -> (usize, Option) { (self.chunk.len(), Some(self.chunk.len())) } } impl<'a, A, N> DoubleEndedIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { fn next_back(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_back()) } } } impl<'a, A, N> ExactSizeIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { } impl<'a, A, N> FusedIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { } vendor/sized-chunks/src/types.rs0000664000175000017500000000217014160055207017557 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! Helper types for chunks. use core::marker::PhantomData; use typenum::*; // Chunk sizes /// A trait used to decide the size of an array. /// /// `>::SizedType` for a type level integer N will have the /// same size as `[A; N]`. pub trait ChunkLength: Unsigned { /// A `Sized` type matching the size of an array of `Self` elements of `A`. type SizedType; } impl ChunkLength for UTerm { type SizedType = (); } #[doc(hidden)] #[allow(dead_code)] pub struct SizeEven { parent1: B, parent2: B, _marker: PhantomData, } #[doc(hidden)] #[allow(dead_code)] pub struct SizeOdd { parent1: B, parent2: B, data: A, } impl ChunkLength for UInt where N: ChunkLength, { type SizedType = SizeEven; } impl ChunkLength for UInt where N: ChunkLength, { type SizedType = SizeOdd; } vendor/sized-chunks/src/arbitrary.rs0000664000175000017500000000560214160055207020415 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. 
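// `Arbitrary` implementations for fuzz testing: each container below fills
// itself by drawing up to `CAPACITY` elements from an `Unstructured` byte
// stream, and reports a size hint ranging from zero to `CAPACITY` times the
// element type's own upper bound.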
use bitmaps::Bits;

use ::arbitrary::{size_hint, Arbitrary, Result, Unstructured};

use crate::{types::ChunkLength, Chunk, InlineArray, SparseChunk};

#[cfg(feature = "ringbuffer")]
use crate::RingBuffer;

impl<'a, A, N> Arbitrary<'a> for Chunk<A, N>
where
    A: Arbitrary<'a>,
    N: ChunkLength<A> + 'static,
{
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        u.arbitrary_iter()?.take(Self::CAPACITY).collect()
    }

    fn arbitrary_take_rest(u: Unstructured<'a>) -> Result<Self> {
        u.arbitrary_take_rest_iter()?.take(Self::CAPACITY).collect()
    }

    fn size_hint(depth: usize) -> (usize, Option<usize>) {
        size_hint::recursion_guard(depth, |depth| {
            let (_, upper) = A::size_hint(depth);
            (0, upper.map(|upper| upper * Self::CAPACITY))
        })
    }
}

#[cfg(feature = "ringbuffer")]
impl<'a, A, N> Arbitrary<'a> for RingBuffer<A, N>
where
    A: Arbitrary<'a>,
    N: ChunkLength<A> + 'static,
{
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        u.arbitrary_iter()?.take(Self::CAPACITY).collect()
    }

    fn arbitrary_take_rest(u: Unstructured<'a>) -> Result<Self> {
        u.arbitrary_take_rest_iter()?.take(Self::CAPACITY).collect()
    }

    fn size_hint(depth: usize) -> (usize, Option<usize>) {
        size_hint::recursion_guard(depth, |depth| {
            let (_, upper) = A::size_hint(depth);
            (0, upper.map(|upper| upper * Self::CAPACITY))
        })
    }
}

impl<'a, A, N> Arbitrary<'a> for SparseChunk<A, N>
where
    A: Clone,
    Option<A>: Arbitrary<'a>,
    N: ChunkLength<A> + Bits + 'static,
{
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        u.arbitrary_iter()?.take(Self::CAPACITY).collect()
    }

    fn arbitrary_take_rest(u: Unstructured<'a>) -> Result<Self> {
        u.arbitrary_take_rest_iter()?.take(Self::CAPACITY).collect()
    }

    fn size_hint(depth: usize) -> (usize, Option<usize>) {
        size_hint::recursion_guard(depth, |depth| {
            let (_, upper) = Option::<A>::size_hint(depth);
            (0, upper.map(|upper| upper * Self::CAPACITY))
        })
    }
}

impl<'a, A, T> Arbitrary<'a> for InlineArray<A, T>
where
    A: Arbitrary<'a>,
    T: 'static,
{
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        u.arbitrary_iter()?.take(Self::CAPACITY).collect()
    }

    fn arbitrary_take_rest(u: Unstructured<'a>) -> Result<Self> {
        u.arbitrary_take_rest_iter()?.take(Self::CAPACITY).collect()
    }

    fn size_hint(depth: usize) -> (usize, Option<usize>) {
        size_hint::recursion_guard(depth, |depth| {
            let (_, upper) = A::size_hint(depth);
            (0, upper.map(|upper| upper * Self::CAPACITY))
        })
    }
}
vendor/sized-chunks/src/tests.rs0000664000175000017500000000062414160055207017557 0ustar mwhudsonmwhudson
use std::sync::atomic::{AtomicUsize, Ordering};

pub(crate) struct DropTest<'a> {
    counter: &'a AtomicUsize,
}

impl<'a> DropTest<'a> {
    pub(crate) fn new(counter: &'a AtomicUsize) -> Self {
        counter.fetch_add(1, Ordering::Relaxed);
        DropTest { counter }
    }
}

impl<'a> Drop for DropTest<'a> {
    fn drop(&mut self) {
        self.counter.fetch_sub(1, Ordering::Relaxed);
    }
}
vendor/sized-chunks/src/lib.rs0000664000175000017500000001377714160055207017174 0ustar mwhudsonmwhudson
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.

//! # Sized Chunks
//!
//! This crate contains three fixed size low level array like data structures,
//! primarily intended for use in [immutable.rs], but fully supported as a
//! standalone crate.
//!
//! Their sizing information is encoded in the type using the
//! [`typenum`][typenum] crate, which you may want to take a look at before
//! reading on, but usually all you need to know about it is that it provides
//! types `U1` to `U128` to represent numbers, which the data types take as type
//! parameters, eg.
`SparseChunk` would give you a sparse array with //! room for 32 elements of type `A`. You can also omit the size, as they all //! default to a size of 64, so `SparseChunk` would be a sparse array with a //! capacity of 64. //! //! All data structures always allocate the same amount of space, as determined //! by their capacity, regardless of how many elements they contain, and when //! they run out of space, they will panic. //! //! ## Data Structures //! //! | Type | Description | Push | Pop | Deref to `&[A]` | //! | ---- | ----------- | ---- | --- | --------------- | //! | [`Chunk`][Chunk] | Contiguous array | O(1)/O(n) | O(1) | Yes | //! | [`RingBuffer`][RingBuffer] | Non-contiguous array | O(1) | O(1) | No | //! | [`SparseChunk`][SparseChunk] | Sparse array | N/A | N/A | No | //! //! The [`Chunk`][Chunk] and [`RingBuffer`][RingBuffer] are very similar in //! practice, in that they both work like a plain array, except that you can //! push to either end with some expectation of performance. The difference is //! that [`RingBuffer`][RingBuffer] always allows you to do this in constant //! time, but in order to give that guarantee, it doesn't lay out its elements //! contiguously in memory, which means that you can't dereference it to a slice //! `&[A]`. //! //! [`Chunk`][Chunk], on the other hand, will shift its contents around when //! necessary to accommodate a push to a full side, but is able to guarantee a //! contiguous memory layout in this way, so it can always be dereferenced into //! a slice. Performance wise, repeated pushes to the same side will always run //! in constant time, but a push to one side followed by a push to the other //! side will cause the latter to run in linear time if there's no room (which //! there would only be if you've popped from that side). //! //! To choose between them, you can use the following rules: //! - I only ever want to push to the back: you don't need this crate, try //! [`ArrayVec`][ArrayVec]. //! - I need to push to either side but probably not both on the same array: use //! [`Chunk`][Chunk]. //! - I need to push to both sides and I don't need slices: use //! [`RingBuffer`][RingBuffer]. //! - I need to push to both sides but I do need slices: use [`Chunk`][Chunk]. //! //! Finally, [`SparseChunk`][SparseChunk] is a more efficient version of //! `Vec>`: each index is either inhabited or not, but instead of //! using the `Option` discriminant to decide which is which, it uses a compact //! bitmap. You can also think of `SparseChunk` as a `BTreeMap` //! where the `usize` must be less than `N`, but without the performance //! overhead. Its API is also more consistent with a map than an array - there's //! no push, pop, append, etc, just insert, remove and lookup. //! //! # [`InlineArray`][InlineArray] //! //! Finally, there's [`InlineArray`][InlineArray], which is a simple vector that's //! sized to fit inside any `Sized` type that's big enough to hold a size counter //! and at least one instance of the array element type. This can be a useful //! optimisation when implementing a list like data structure with a nontrivial //! set of pointers in its full form, where you could plausibly fit several //! elements inside the space allocated for the pointers. `im::Vector` is a //! good example of that, and the use case for which [`InlineArray`][InlineArray] //! was implemented. //! //! # Feature Flags //! //! The following feature flags are available: //! //! | Feature | Description | //! | ------- | ----------- | //! 
//! | `arbitrary` | Provides [`Arbitrary`][Arbitrary] implementations from the [`arbitrary`][arbitrary_crate] crate. Requires the `std` flag. |
//! | `refpool` | Provides [`PoolDefault`][PoolDefault] and [`PoolClone`][PoolClone] implementations from the [`refpool`][refpool] crate. |
//! | `ringbuffer` | Enables the [`RingBuffer`][RingBuffer] data structure. |
//! | `std` | Without this flag (enabled by default), the crate will be `no_std`, and absent traits relating to `std::collections` and `std::io`. |
//!
//! [immutable.rs]: https://immutable.rs/
//! [typenum]: https://docs.rs/typenum/
//! [Chunk]: struct.Chunk.html
//! [RingBuffer]: struct.RingBuffer.html
//! [SparseChunk]: struct.SparseChunk.html
//! [InlineArray]: struct.InlineArray.html
//! [ArrayVec]: https://docs.rs/arrayvec/
//! [Arbitrary]: https://docs.rs/arbitrary/latest/arbitrary/trait.Arbitrary.html
//! [arbitrary_crate]: https://docs.rs/arbitrary
//! [refpool]: https://docs.rs/refpool
//! [PoolDefault]: https://docs.rs/refpool/latest/refpool/trait.PoolDefault.html
//! [PoolClone]: https://docs.rs/refpool/latest/refpool/trait.PoolClone.html

#![forbid(rust_2018_idioms)]
#![deny(nonstandard_style)]
#![warn(unreachable_pub, missing_docs)]
#![cfg_attr(test, deny(warnings))]
#![cfg_attr(not(any(feature = "std", test)), no_std)]
// Jeremy Francis Corbyn, clippy devs need to calm down 🤦‍♀️
#![allow(clippy::suspicious_op_assign_impl, clippy::suspicious_arithmetic_impl)]

pub mod inline_array;
pub mod sized_chunk;
pub mod sparse_chunk;
pub mod types;

#[cfg(test)]
mod tests;

#[cfg(feature = "arbitrary")]
mod arbitrary;

pub use crate::inline_array::InlineArray;
pub use crate::sized_chunk::Chunk;
pub use crate::sparse_chunk::SparseChunk;

#[cfg(feature = "ringbuffer")]
pub mod ring_buffer;

#[cfg(feature = "ringbuffer")]
pub use crate::ring_buffer::RingBuffer;
vendor/sized-chunks/README.md0000664000175000017500000000215014160055207016533 0ustar mwhudsonmwhudson
# sized-chunks

Various fixed length array data types, designed for [immutable.rs].

## Overview

This crate provides the core building blocks for the immutable data structures in [immutable.rs]: a sized array with O(1) amortised double ended push/pop and smarter insert/remove performance (used by `im::Vector` and `im::OrdMap`), and a fixed size sparse array (used by `im::HashMap`).

In a nutshell, this crate contains the unsafe bits from [immutable.rs], which may or may not be useful to anyone else, and have been split out for ease of auditing.

## Documentation

* [API docs](https://docs.rs/sized-chunks)

## Licence

Copyright 2019 Bodil Stokke

This software is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

## Code of Conduct

Please note that this project is released with a [Contributor Code of Conduct][coc]. By participating in this project you agree to abide by its terms.
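As a quick illustration of the behaviour described above, here is a minimal downstream-style sketch (not part of the vendored sources). It assumes the documented sized-chunks 0.6 API, namely `push_back`, `push_front` and `pop_front` on `Chunk` plus `insert`, `get` and `remove` on `SparseChunk`, and it uses the default capacity of 64 rather than an explicit `typenum` size.

```rust
// Illustrative sketch only; assumes the documented sized-chunks 0.6 API.
use sized_chunks::{Chunk, SparseChunk};

fn main() {
    // Chunk: contiguous storage, push to either end, derefs to a slice.
    // Omitting the size parameter uses the default capacity of 64.
    let mut chunk = Chunk::<i32>::new();
    chunk.push_back(2);
    chunk.push_back(3);
    chunk.push_front(1);
    assert_eq!(&[1, 2, 3], &chunk[..]); // Deref<Target = [A]>
    assert_eq!(1, chunk.pop_front()); // repeated pushes/pops at one end stay O(1)

    // SparseChunk: indices 0..64, map-style insert/remove/lookup, no push/pop.
    let mut sparse = SparseChunk::<&str>::new();
    sparse.insert(3, "three");
    sparse.insert(40, "forty");
    assert_eq!(Some(&"three"), sparse.get(3));
    assert_eq!(None, sparse.get(4));
    assert_eq!(Some("forty"), sparse.remove(40));
}
```

A `RingBuffer` (behind the `ringbuffer` feature flag) would be used in the same way as `Chunk` here, except that it cannot be dereferenced to a slice.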
[immutable.rs]: https://immutable.rs/ [coc]: https://github.com/bodil/sized-chunks/blob/master/CODE_OF_CONDUCT.md vendor/clap/0000775000175000017500000000000014172417313013571 5ustar mwhudsonmwhudsonvendor/clap/.cargo-checksum.json0000664000175000017500000000013114172417313017430 0ustar mwhudsonmwhudson{"files":{},"package":"a0610544180c38b88101fecf2dd634b174a62eef6946f84dfc6a7127512b381c"}vendor/clap/Cargo.toml0000664000175000017500000000542214172417313015524 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "clap" version = "2.34.0" authors = ["Kevin K. "] exclude = ["examples/*", "clap-test/*", "tests/*", "benches/*", "*.png", "clap-perf/*", "*.dot"] description = "A simple to use, efficient, and full-featured Command Line Argument Parser\n" homepage = "https://clap.rs/" documentation = "https://docs.rs/clap/" readme = "README.md" keywords = ["argument", "cli", "arg", "parser", "parse"] categories = ["command-line-interface"] license = "MIT" repository = "https://github.com/clap-rs/clap" [package.metadata.docs.rs] features = ["doc"] [profile.bench] opt-level = 3 lto = true codegen-units = 1 debug = false debug-assertions = false rpath = false [profile.dev] opt-level = 0 lto = false codegen-units = 4 debug = true debug-assertions = true rpath = false [profile.release] opt-level = 3 lto = true codegen-units = 1 debug = false debug-assertions = false rpath = false [profile.test] opt-level = 1 lto = false codegen-units = 4 debug = true debug-assertions = true rpath = false [dependencies.atty] version = "0.2.2" optional = true [dependencies.bitflags] version = "1.0" [dependencies.strsim] version = ">= 0.7, < 0.10" optional = true [dependencies.term_size] version = "0.3.0" optional = true [dependencies.textwrap] version = "0.11.0" [dependencies.unicode-width] version = "0.1.4" [dependencies.vec_map] version = "0.8" optional = true [dependencies.yaml-rust] version = ">= 0.3.5, < 0.5" optional = true [dev-dependencies.lazy_static] version = "1.3" [dev-dependencies.regex] version = "1" [dev-dependencies.version-sync] version = "0.8" [features] color = ["ansi_term", "atty"] debug = [] default = ["suggestions", "color", "vec_map"] doc = ["yaml"] nightly = [] no_cargo = [] suggestions = ["strsim"] unstable = [] wrap_help = ["term_size", "textwrap/term_size"] yaml = ["yaml-rust"] [target."cfg(not(windows))".dependencies.ansi_term] version = ">= 0.11, < 0.13" optional = true [badges.appveyor] repository = "clap-rs/clap" [badges.coveralls] branch = "master" repository = "clap-rs/clap" [badges.is-it-maintained-issue-resolution] repository = "clap-rs/clap" [badges.is-it-maintained-open-issues] repository = "clap-rs/clap" [badges.maintenance] status = "actively-developed" [badges.travis-ci] repository = "clap-rs/clap" vendor/clap/CHANGELOG.md0000664000175000017500000043774414172417313015425 0ustar mwhudsonmwhudson ## v2.34.0 (2021-11-30) - Updates to Rust 2018 edition and bumps the MSRV to Rust 1.46 ### v2.33.4 (2021-11-29) #### Bug Fixes * **prevents `panic`:** swallows broken pipe 
errors on error output ([7a729bc4](https://github.com/kbknapp/clap-rs/commit/7a729bc4df2646b05f6bf15f001124cd39d076ce)) ### v2.33.3 (2020-08-13) #### Improvements * Suppress deprecation warnings when using `crate_*` macros. ### v2.33.2 (2020-08-5) #### Documentation * Fixed links to `2.x` examples. Now they point to the right place. ### v2.33.1 (2020-05-11) #### Bug Fixes * Windows: Prevent some panics when parsing invalid Unicode on Windows ([922c645](https://github.com/clap-rs/clap/commit/922c64508389170c9c77f1c8a4e597d14d3ed2f0), closes [#1905](https://github.com/clap-rs/clap/issues/1905)) #### Documentation * fixes versions referenced in the README ([d307466a](https://github.com/kbknapp/clap-rs/commit/d307466af1013f172b8ec0252f01a473e2192d6b)) * **README.md:** * cuts down the number of examples to reduce confusion ([6e508ee0](https://github.com/kbknapp/clap-rs/commit/6e508ee09e7153de4adf4e88b0aa6418a537dadd)) #### Improvements * **Deps:** doesnt compile ansi_term on Windows since its not used ([b57ee946](https://github.com/kbknapp/clap-rs/commit/b57ee94609da3ddc897286cfba968f26ff961491), closes [#1155](https://github.com/kbknapp/clap-rs/issues/1155)) #### Minimum Required Rust * As of this release, `clap` requires `rustc 1.36.0` or greater. ## v2.33.0 (2019-04-06) #### New Sponsor * Stephen Oats is now a sponsor \o/ ([823457c0](https://github.com/kbknapp/clap-rs/commit/823457c0ef5e994ed7080cf62addbfe1aa3b1833)) * **SPONSORS.md:** fixes Josh Triplett's info in the sponsor document ([24cb5740](https://github.com/kbknapp/clap-rs/commit/24cb574090a11159b48bba105d5ec2dfb0a20e4e)) #### Features * **Completions:** adds completion support for Elvish. ([e9d0562a](https://github.com/kbknapp/clap-rs/commit/e9d0562a1dc5dfe731ed7c767e6cee0af08f0cf9)) * There is a new setting to disable automatic building of `--help` and `-h` flags (`AppSettings::DisableAutoHelp`) #### Improvements * **arg_matches.rs:** add Debug implementations ([47192b7a](https://github.com/kbknapp/clap-rs/commit/47192b7a2d84ec716b81ae4af621e008a8762dc9)) * **macros:** Support shorthand syntax for ArgGroups ([df9095e7](https://github.com/kbknapp/clap-rs/commit/df9095e75bb1e7896415251d0d4ffd8a0ebcd559)) #### Documentation * Refer to macOS rather than OSX. ([ab0d767f](https://github.com/kbknapp/clap-rs/commit/ab0d767f3a5a57e2bbb97d0183c2ef63c8c77a6c)) * **README.md:** use https for all links ([96a7639a](https://github.com/kbknapp/clap-rs/commit/96a7639a36bcb184c3f45348986883115ef1ab3a)) #### Bug Fixes * add debug assertion for missing args in subcommand ArgGroup ([2699d9e5](https://github.com/kbknapp/clap-rs/commit/2699d9e51e7eadc258ba64c4e347c5d1fef61343)) * Restore compat with Rust 1.21 ([6b263de1](https://github.com/kbknapp/clap-rs/commit/6b263de1d42ede692ec5ee55019ad2fc6386f92e)) * Dont mention unused subcommands ([ef92e2b6](https://github.com/kbknapp/clap-rs/commit/ef92e2b639ed305bdade4741f60fa85cb0101c5a)) * **OsValues:** Add `ExactSizeIterator` implementation ([356c69e5](https://github.com/kbknapp/clap-rs/commit/356c69e508fd25a9f0ea2d27bf80ae1d9a8d88f4)) * **arg_enum!:** * Fix comma position for valid values. 
([1f1f9ff3](https://github.com/kbknapp/clap-rs/commit/1f1f9ff3fa38a43231ef8be9cfea89a32e53f518)) * Invalid expansions of some trailing-comma patterns ([7023184f](https://github.com/kbknapp/clap-rs/commit/7023184fca04e852c270341548d6a16207d13862)) * **completions:** improve correctness of completions when whitespace is involved ([5a08ff29](https://github.com/kbknapp/clap-rs/commit/5a08ff295b2aa6ce29420df6252a0e3ff4441bdc)) * **help message:** Unconditionally uses long description for subcommands ([6acc8b6a](https://github.com/kbknapp/clap-rs/commit/6acc8b6a621a765cbf513450188000d943676a30), closes [#897](https://github.com/kbknapp/clap-rs/issues/897)) * **macros:** fixes broken pattern which prevented calling multi-argument Arg methods ([9e7a352e](https://github.com/kbknapp/clap-rs/commit/9e7a352e13aaf8025d80f2bac5c47fb32528672b)) * **parser:** Better interaction between AllowExternalSubcommands and SubcommandRequired ([9601c95a](https://github.com/kbknapp/clap-rs/commit/9601c95a03d2b82bf265c328b4769238f1b79002)) #### Minimum Required Rust * As of this release, `clap` requires `rustc 1.31.0` or greater. ## v2.32.0 (2018-06-26) #### Minimum Required Rust * As of this release, `clap` requires `rustc 1.21.0` or greater. #### Features * **Completions:** adds completion support for Elvish. ([e9d0562a](https://github.com/kbknapp/clap-rs/commit/e9d0562a1dc5dfe731ed7c767e6cee0af08f0cf9)) #### Improvements * **macros:** Support shorthand syntax for ArgGroups ([df9095e7](https://github.com/kbknapp/clap-rs/commit/df9095e75bb1e7896415251d0d4ffd8a0ebcd559)) #### Bug Fixes * **OsValues:** Add `ExactSizeIterator` implementation ([356c69e5](https://github.com/kbknapp/clap-rs/commit/356c69e508fd25a9f0ea2d27bf80ae1d9a8d88f4)) * **arg_enum!:** Invalid expansions of some trailing-comma patterns ([7023184f](https://github.com/kbknapp/clap-rs/commit/7023184fca04e852c270341548d6a16207d13862)) * **help message:** Unconditionally uses long description for subcommands ([6acc8b6a](https://github.com/kbknapp/clap-rs/commit/6acc8b6a621a765cbf513450188000d943676a30), closes [#897](https://github.com/kbknapp/clap-rs/issues/897)) #### Documentation * Refer to macOS rather than OSX. ([ab0d767f](https://github.com/kbknapp/clap-rs/commit/ab0d767f3a5a57e2bbb97d0183c2ef63c8c77a6c)) ### v2.31.2 (2018-03-19) #### Bug Fixes * **Fish Completions:** fixes a bug that only allowed a single completion in in Fish Shell ([e8774a8](https://github.com/kbknapp/clap-rs/pull/1214/commits/e8774a84ee4a319c888036e7c595ab46451d8e48), closes [#1212](https://github.com/kbknapp/clap-rs/issues/1212)) * **AllowExternalSubcommands**: fixes a bug where external subcommands would be blocked by a similarly named subcommand (suggestions were getting in the way). 
([a410e85](https://github.com/kbknapp/clap-rs/pull/1215/commits/a410e855bcd82b05f9efa73fa8b9774dc8842c6b)) #### Documentation * Fixes some typos in the `README.md` ([c8e685d7](https://github.com/kbknapp/clap-rs/commit/c8e685d76adee2a3cc06cac6952ffcf6f9548089)) ### v2.31.1 (2018-03-06) #### Improvements * **AllowMissingPositional:** improves the ability of AllowMissingPositional to allow 'skipping' to the last positional arg with '--' ([df20e6e2](https://github.com/kbknapp/clap-rs/commit/df20e6e24b4e782be0b423b484b9798e3e2efe2f)) ## v2.31.0 (2018-03-04) #### Features * **Arg Indices:** adds the ability to query argument value indices ([f58d0576](https://github.com/kbknapp/clap-rs/commit/f58d05767ec8133c8eb2de117cb642b9ae29ccbc)) * **Indices:** implements an Indices iterator ([1e67be44](https://github.com/kbknapp/clap-rs/commit/1e67be44f0ccf161cc84c4e6082382072e89c302)) * **Raw Args** adds a convenience function to `Arg` that allows implying all of `Arg::last` `Arg::allow_hyphen_values` and `Arg::multiple(true)` ([66a78f29](https://github.com/kbknapp/clap-rs/commit/66a78f2972786f5fe7c07937a1ac23da2542afd2)) #### Documentation * Fix some typos and markdown issues. ([935ba0dd](https://github.com/kbknapp/clap-rs/commit/935ba0dd547a69c3f636c5486795012019408794)) * **Arg Indices:** adds the documentation for the arg index querying methods ([50bc0047](https://github.com/kbknapp/clap-rs/commit/50bc00477afa64dc6cdc5de161d3de3ba1d105a7)) * **CONTRIBUTING.md:** fix url to clippy upstream repo to point to https://github.com/rust-lang-nursery/rust-clippy instead of https://github.com/Manishearth/rust-clippy ([42407d7f](https://github.com/kbknapp/clap-rs/commit/42407d7f21d794103cda61f49d2615aae0a4bcd9)) * **Values:** improves the docs example of the Values iterator ([74075d65](https://github.com/kbknapp/clap-rs/commit/74075d65e8db1ddb5e2a4558009a5729d749d1b6)) * Updates readme to hint that the `wrap_help` feature is a thing ([fc7ab227](https://github.com/kbknapp/clap-rs/commit/66a78f2972786f5fe7c07937a1ac23da2542afd2)) ### Improvements * Cargo.toml: use codegen-units = 1 in release and bench profiles ([19f425ea](https://github.com/kbknapp/clap-rs/commit/66a78f2972786f5fe7c07937a1ac23da2542afd2)) * Adds WASM support (clap now compiles on WASM!) 
([689949e5](https://github.com/kbknapp/clap-rs/commit/689949e57d390bb61bc69f3ed91f60a2105738d0)) * Uses the short help tool-tip for PowerShell completion scripts ([ecda22ce](https://github.com/kbknapp/clap-rs/commit/ecda22ce7210ce56d7b2d1a5445dd1b8a2959656)) ## v2.30.0 (2018-02-13) #### Bug Fixes * **YAML:** Adds a missing conversion from `Arg::last` when instantiating from a YAML file ([aab77c81a5](https://github.com/kbknapp/clap-rs/pull/1175/commits/aab77c81a519b045f95946ae0dd3e850f9b93070), closes [#1160](https://github.com/kbknapp/clap-rs/issues/1173)) #### Improvements * **Bash Completions:** instead of completing a generic option name, all bash completions fall back to file completions UNLESS `Arg::possible_values` was used ([872f02ae](https://github.com/kbknapp/clap-rs/commit/872f02aea900ffa376850a279eb164645e1234fa)) * **Deps:** No longer needlessly compiles `ansi_term` on Windows since its not used ([b57ee946](https://github.com/kbknapp/clap-rs/commit/b57ee94609da3ddc897286cfba968f26ff961491), closes [#1155](https://github.com/kbknapp/clap-rs/issues/1155)) * **Help Message:** changes the `[values: foo bar baz]` array to `[possible values: foo bar baz]` for consistency with the API ([414707e4e97](https://github.com/kbknapp/clap-rs/pull/1176/commits/414707e4e979d07bfe555247e5d130c546673708), closes [#1160](https://github.com/kbknapp/clap-rs/issues/1160)) ### v2.29.4 (2018-02-06) #### Bug Fixes * **Overrides Self:** fixes a bug where options with multiple values couldnt ever have multiple values ([d95907cf](https://github.com/kbknapp/clap-rs/commit/d95907cff6d011a901fe35fa00b0f4e18547a1fb)) ### v2.29.3 (2018-02-05) #### Improvements * **Overrides:** clap now supports arguments which override with themselves ([6c7a0010](https://github.com/kbknapp/clap-rs/commit/6c7a001023ca1eac1cc6ffe6c936b4c4a2aa3c45), closes [#976](https://github.com/kbknapp/clap-rs/issues/976)) #### Bug Fixes * **Requirements:** fixes an issue where conflicting args would still show up as required ([e06cefac](https://github.com/kbknapp/clap-rs/commit/e06cefac97083838c0a4e1444dcad02a5c3f911e), closes [#1158](https://github.com/kbknapp/clap-rs/issues/1158)) * Fixes a bug which disallows proper nesting of `--` ([73993fe](https://github.com/kbknapp/clap-rs/commit/73993fe30d135f682e763ec93dcb0814ed518011), closes [#1161](https://github.com/kbknapp/clap-rs/issues/1161)) #### New Settings * **AllArgsOverrideSelf:** adds a new convenience setting to allow all args to override themselves ([4670325d](https://github.com/kbknapp/clap-rs/commit/4670325d1bf0369addec2ae2bcb56f1be054c924)) ### v2.29.2 (2018-01-16) #### Features * **completions/zsh.rs:** * Escape possible values for options ([25561dec](https://github.com/kbknapp/clap-rs/commit/25561decf147d329b64634a14d9695673c2fc78f)) * Implement postional argument possible values completion ([f3b0afd2](https://github.com/kbknapp/clap-rs/commit/f3b0afd2bef8b7be97162f8a7802ddf7603dff36)) * Complete positional arguments properly ([e39aeab8](https://github.com/kbknapp/clap-rs/commit/e39aeab8487596046fbdbc6a226e5c8820585245)) #### Bug Fixes * **completions/zsh.rs:** * Add missing autoload for is-at-least ([a6522607](https://github.com/kbknapp/clap-rs/commit/a652260795d1519f6ec2a7a09ccc1258499cad7b)) * Don't pass -S to _arguments if Zsh is too old ([16b4f143](https://github.com/kbknapp/clap-rs/commit/16b4f143ff466b7ef18a267bc44ade0f9639109b)) * Maybe fix completions with mixed positionals and subcommands 
([1146f0da](https://github.com/kbknapp/clap-rs/commit/1146f0da154d6796fbfcb09db8efa3593cb0d898)) * **completions/zsh.zsh:** Remove redundant code from output ([0e185b92](https://github.com/kbknapp/clap-rs/commit/0e185b922ed1e0fd653de00b4cd8d567d72ff68e), closes [#1142](https://github.com/kbknapp/clap-rs/issues/1142)) ### 2.29.1 (2018-01-09) #### Documentation * fixes broken links. ([56e734b8](https://github.com/kbknapp/clap-rs/commit/56e734b839303d733d2e5baf7dac39bd7b97b8e4)) * updates contributors list ([e1313a5a](https://github.com/kbknapp/clap-rs/commit/e1313a5a0f69d8f4016f73b860a63af8318a6676)) #### Performance * further debloating by removing generics from error cases ([eb8d919e](https://github.com/kbknapp/clap-rs/commit/eb8d919e6f3443db279ba0c902f15d76676c02dc)) * debloats clap by deduplicating logic and refactors ([03e413d7](https://github.com/kbknapp/clap-rs/commit/03e413d7175d35827cd7d8908d47dbae15a849a3)) #### Bug Fixes * fixes the ripgrep benchmark by adding a value to a flag that expects it ([d26ab2b9](https://github.com/kbknapp/clap-rs/commit/d26ab2b97cf9c0ea675b440b7b0eaf6ac3ad01f4)) * **bash completion:** Change the bash completion script code generation to support hyphens. ([ba7f1d18](https://github.com/kbknapp/clap-rs/commit/ba7f1d18eba7a07ce7f57e0981986f66c994b639)) * **completions/zsh.rs:** Fix completion of long option values ([46365cf8](https://github.com/kbknapp/clap-rs/commit/46365cf8be5331ba04c895eb183e2f230b5aad51)) ## 2.29.0 (2017-12-02) #### API Additions * **Arg:** adds Arg::hide_env_values(bool) which allows one to hide any current env values and display only the key in help messages ([fb41d062](https://github.com/kbknapp/clap-rs/commit/fb41d062eedf37cb4f805c90adca29909bd197d7)) ## 2.28.0 (2017-11-28) The minimum required Rust is now 1.20. This was done to start using bitflags 1.0 and having >1.0 deps is a *very good* thing! 
#### Documentation * changes the demo version to 2.28 to stay in sync ([ce6ca492](https://github.com/kbknapp/clap-rs/commit/ce6ca492c7510ab6474075806360b96081b021a9)) * Fix URL path to github hosted files ([ce72aada](https://github.com/kbknapp/clap-rs/commit/ce72aada56a9581d4a6cb4bf9bdb861c3906f8df), closes [#1106](https://github.com/kbknapp/clap-rs/issues/1106)) * fix typo ([002b07fc](https://github.com/kbknapp/clap-rs/commit/002b07fc98a1c85acb66296b1eec0b2aba906125)) * **README.md:** updates the readme and pulls out some redundant sections ([db6caf86](https://github.com/kbknapp/clap-rs/commit/db6caf8663747e679d2f4ed3bd127f33476754aa)) #### Improvements * adds '[SUBCOMMAND]' to usage strings with only AppSettings::AllowExternalSubcommands is used with no other subcommands ([e78bb757](https://github.com/kbknapp/clap-rs/commit/e78bb757a3df16e82d539e450c06767a6bfcf859), closes [#1093](https://github.com/kbknapp/clap-rs/issues/1093)) #### API Additions * Adds Arg::case_insensitive(bool) which allows matching Arg::possible_values without worrying about ASCII case ([1fec268e](https://github.com/kbknapp/clap-rs/commit/1fec268e51736602e38e67c76266f439e2e0ef12), closes [#1118](https://github.com/kbknapp/clap-rs/issues/1118)) * Adds the traits to be used with the clap-derive crate to be able to use Custom Derive ([6f4c3412](https://github.com/kbknapp/clap-rs/commit/6f4c3412415e882f5ca2cc3fbd6d4dce79440828)) #### Bug Fixes * Fixes a regression where --help couldn't be overridden ([a283d69f](https://github.com/kbknapp/clap-rs/commit/a283d69fc08aa016ae1bf9ba010012abecc7ba69), closes [#1112](https://github.com/kbknapp/clap-rs/issues/1112)) * fixes a bug that allowed options to pass parsing when no value was provided ([2fb75821](https://github.com/kbknapp/clap-rs/commit/2fb758219c7a60d639da67692e100b855a8165ac), closes [#1105](https://github.com/kbknapp/clap-rs/issues/1105)) * ignore PropagateGlobalValuesDown deprecation warning ([f61ce3f5](https://github.com/kbknapp/clap-rs/commit/f61ce3f55fe65e16b3db0bd4facdc4575de22767), closes [#1086](https://github.com/kbknapp/clap-rs/issues/1086)) #### Deps * Updates `bitflags` to 1.0 ## v2.27.1 (2017-10-24) #### Bug Fixes * Adds `term_size` as an optional dependency (with feature `wrap_help`) to fix compile bug ## v2.27.0 (2017-10-24) ** This release raises the minimum required version of Rust to 1.18 ** ** This release also contains a very minor breaking change to fix a bug ** The only CLIs affected will be those using unrestrained multiple values and subcommands where the subcommand name can coincide with one of the multiple values. See the commit [0c223f54](https://github.com/kbknapp/clap-rs/commit/0c223f54ed46da406bc8b43a5806e0b227863b31) for full details. #### Bug Fixes * Values from global args are now propagated UP and DOWN! 
* fixes a bug where using AppSettings::AllowHyphenValues would allow invalid arguments even when there is no way for them to be valid ([77ed4684](https://github.com/kbknapp/clap-rs/commit/77ed46841fc0263d7aa32fcc5cc49ef703b37c04), closes [#1066](https://github.com/kbknapp/clap-rs/issues/1066)) * when an argument requires a value and that value happens to match a subcommand name, its parsed as a value ([0c223f54](https://github.com/kbknapp/clap-rs/commit/0c223f54ed46da406bc8b43a5806e0b227863b31), closes [#1031](https://github.com/kbknapp/clap-rs/issues/1031), breaks [#](https://github.com/kbknapp/clap-rs/issues/), [#](https://github.com/kbknapp/clap-rs/issues/)) * fixes a bug that prevented number_of_values and default_values to be used together ([5eb342a9](https://github.com/kbknapp/clap-rs/commit/5eb342a99dde07b0f011048efde3e283bc1110fc), closes [#1050](https://github.com/kbknapp/clap-rs/issues/1050), [#1056](https://github.com/kbknapp/clap-rs/issues/1056)) * fixes a bug that didn't allow args with default values to have conflicts ([58b5b4be](https://github.com/kbknapp/clap-rs/commit/58b5b4be315280888d50d9b15119b91a9028f050), closes [#1071](https://github.com/kbknapp/clap-rs/issues/1071)) * fixes a panic when using global args and calling App::get_matches_from_safe_borrow multiple times ([d86ec797](https://github.com/kbknapp/clap-rs/commit/d86ec79742c77eb3f663fb30e225954515cf25bb), closes [#1076](https://github.com/kbknapp/clap-rs/issues/1076)) * fixes issues and potential regressions with global args values not being propagated properly or at all ([a43f9dd4](https://github.com/kbknapp/clap-rs/commit/a43f9dd4aaf1864dd14a3c28dec89ccdd70c61e5), closes [#1010](https://github.com/kbknapp/clap-rs/issues/1010), [#1061](https://github.com/kbknapp/clap-rs/issues/1061), [#978](https://github.com/kbknapp/clap-rs/issues/978)) * fixes a bug where default values are not applied if the option supports zero values ([9c248cbf](https://github.com/kbknapp/clap-rs/commit/9c248cbf7d8a825119bc387c23e9a1d1989682b0), closes [#1047](https://github.com/kbknapp/clap-rs/issues/1047)) #### Documentation * adds addtional blurbs about using multiples with subcommands ([03455b77](https://github.com/kbknapp/clap-rs/commit/03455b7751a757e7b2f6ffaf2d16168539c99661)) * updates the docs to reflect changes to global args and that global args values can now be propagated back up the stack ([ead076f0](https://github.com/kbknapp/clap-rs/commit/ead076f03ada4c322bf3e34203925561ec496d87)) * add html_root_url attribute ([e67a061b](https://github.com/kbknapp/clap-rs/commit/e67a061bcf567c6518d6c2f58852e01f02764b22)) * sync README version numbers with crate version ([5536361b](https://github.com/kbknapp/clap-rs/commit/5536361bcda29887ed86bb68e43d0b603cbc423f)) #### Improvements * args that have require_delimiter(true) is now reflected in help and usage strings ([dce61699](https://github.com/kbknapp/clap-rs/commit/dce616998ed9bd95e8ed3bec1f09a4883da47b85), closes [#1052](https://github.com/kbknapp/clap-rs/issues/1052)) * if all subcommands are hidden, the subcommands section of the help message is no longer displayed ([4ae7b046](https://github.com/kbknapp/clap-rs/commit/4ae7b0464750bc07ec80ece38e43f003fdd1b8ae), closes [#1046](https://github.com/kbknapp/clap-rs/issues/1046)) #### Breaking Changes * when an argument requires a value and that value happens to match a subcommand name, its parsed as a value ([0c223f54](https://github.com/kbknapp/clap-rs/commit/0c223f54ed46da406bc8b43a5806e0b227863b31), closes 
[#1031](https://github.com/kbknapp/clap-rs/issues/1031), breaks [#](https://github.com/kbknapp/clap-rs/issues/), [#](https://github.com/kbknapp/clap-rs/issues/)) #### Deprecations * **AppSettings::PropagateGlobalValuesDown:** this setting is no longer required to propagate values down or up ([2bb5ddce](https://github.com/kbknapp/clap-rs/commit/2bb5ddcee61c791ca1aaca494fbeb4bd5e277488)) ### v2.26.2 (2017-09-14) #### Improvements * if all subcommands are hidden, the subcommands section of the help message is no longer displayed ([4ae7b046](https://github.com/kbknapp/clap-rs/commit/4ae7b0464750bc07ec80ece38e43f003fdd1b8ae), closes [#1046](https://github.com/kbknapp/clap-rs/issues/1046)) #### Bug Fixes * fixes a bug where default values are not applied if the option supports zero values ([9c248cbf](https://github.com/kbknapp/clap-rs/commit/9c248cbf7d8a825119bc387c23e9a1d1989682b0), closes [#1047](https://github.com/kbknapp/clap-rs/issues/1047)) ### v2.26.1 (2017-09-14) #### Bug Fixes * fixes using require_equals(true) and min_values(0) together ([10ae208f](https://github.com/kbknapp/clap-rs/commit/10ae208f68518eff6e98166724065745f4083174), closes [#1044](https://github.com/kbknapp/clap-rs/issues/1044)) * escape special characters in zsh and fish completions ([87e019fc](https://github.com/kbknapp/clap-rs/commit/87e019fc84ba6193a8c4ddc26c61eb99efffcd25)) * avoid panic generating default help msg if term width set to 0 due to bug in textwrap 0.7.0 ([b3eadb0d](https://github.com/kbknapp/clap-rs/commit/b3eadb0de516106db4e08f078ad32e8f6d6e7a57)) * Change `who's` -> `whose` ([53c1ffe8](https://github.com/kbknapp/clap-rs/commit/53c1ffe87f38b05d8804a0f7832412a952845349)) * adds a debug assertion to ensure all args added to groups actually exist ([7ad123e2](https://github.com/kbknapp/clap-rs/commit/7ad123e2c02577e3ca30f7e205181e896b157d11), closes [#917](https://github.com/kbknapp/clap-rs/issues/917)) * fixes a bug where args that allow values to start with a hyphen couldnt contain a double hyphen -- as a value ([ab2f4c9e](https://github.com/kbknapp/clap-rs/commit/ab2f4c9e563e36ec739a4b55d5a5b76fdb9e9fa4), closes [#960](https://github.com/kbknapp/clap-rs/issues/960)) * fixes a bug where positional argument help text is misaligned ([54c16836](https://github.com/kbknapp/clap-rs/commit/54c16836dea4651806a2cfad53146a83fa3abf21)) * **Help Message:** fixes long_about not being usable ([a8257ea0](https://github.com/kbknapp/clap-rs/commit/a8257ea0ffb812e552aca256c4a3d2aebfd8065b), closes [#1043](https://github.com/kbknapp/clap-rs/issues/1043)) * **Suggestions:** output for flag after subcommand ([434ea5ba](https://github.com/kbknapp/clap-rs/commit/434ea5ba71395d8c1afcf88e69f0b0d8339b01a1)) ## v2.26.0 (2017-07-29) Minimum version of Rust is now v1.13.0 (Stable) #### Improvements * bumps unicode-segmentation to v1.2 ([cd7b40a2](https://github.com/kbknapp/clap-rs/commit/cd7b40a21c77bae17ba453c5512cb82b7d1ce474)) #### Performance * update textwrap to version 0.7.0 ([c2d4e637](https://github.com/kbknapp/clap-rs/commit/c2d4e63756a6f070e38c16dff846e9b0a53d6f93)) ### v2.25.1 (2017-07-21) #### Improvements * impl Default for Values + OsValues for any lifetime. 
([fb7d6231f1](https://github.com/kbknapp/clap-rs/commit/fb7d6231f13a2f79f411e62dca210b7dc9994c18)) #### Documentation * Various documentation typos and grammar fixes ### v2.25.0 (2017-06-20) #### Features * use textwrap crate for wrapping help texts ([b93870c1](https://github.com/kbknapp/clap-rs/commit/b93870c10ae3bd90d233c586a33e086803117285)) #### Improvements * **Suggestions:** suggests to use flag after subcommand when applicable ([2671ca72](https://github.com/kbknapp/clap-rs/commit/2671ca7260119d4311d21c4075466aafdd9da734)) * Bumps bitflags crate to v0.9 #### Documentation * Change `who's` -> `whose` ([53c1ffe8](https://github.com/kbknapp/clap-rs/commit/53c1ffe87f38b05d8804a0f7832412a952845349)) #### Documentation * **App::template:** adds details about the necessity to use AppSettings::UnifiedHelpMessage when using {unified} tags in the help template ([cbea3d5a](https://github.com/kbknapp/clap-rs/commit/cbea3d5acf3271a7a734498c4d99c709941c331e), closes [#949](https://github.com/kbknapp/clap-rs/issues/949)) * **Arg::allow_hyphen_values:** updates the docs to include warnings for allow_hyphen_values and multiple(true) used together ([f9b0d657](https://github.com/kbknapp/clap-rs/commit/f9b0d657835d3f517f313d70962177dc30acf4a7)) * **README.md:** * added a warning about using ~ deps ([821929b5](https://github.com/kbknapp/clap-rs/commit/821929b51bd60213955705900a436c9a64fcb79f), closes [#964](https://github.com/kbknapp/clap-rs/issues/964)) * **clap_app!:** adds using the @group specifier to the macro docs ([826048cb](https://github.com/kbknapp/clap-rs/commit/826048cb3cbc0280169303f1498ff0a2b7395883), closes [#932](https://github.com/kbknapp/clap-rs/issues/932)) ### v2.24.2 (2017-05-15) #### Bug Fixes * adds a debug assertion to ensure all args added to groups actually exist ([14f6b8f3](https://github.com/kbknapp/clap-rs/commit/14f6b8f3a2f6df73aeeec9c54a54909b1acfc158), closes [#917](https://github.com/kbknapp/clap-rs/issues/917)) * fixes a bug where args that allow values to start with a hyphen couldnt contain a double hyphen -- as a value ([ebf73a09](https://github.com/kbknapp/clap-rs/commit/ebf73a09db6f3c03c19cdd76b1ba6113930e1643), closes [#960](https://github.com/kbknapp/clap-rs/issues/960)) * fixes a bug where positional argument help text is misaligned ([54c16836](https://github.com/kbknapp/clap-rs/commit/54c16836dea4651806a2cfad53146a83fa3abf21)) #### Documentation * **App::template:** adds details about the necessity to use AppSettings::UnifiedHelpMessage when using {unified} tags in the help template ([cf569438](https://github.com/kbknapp/clap-rs/commit/cf569438f309c199800bb8e46c9f140187de69d7), closes [#949](https://github.com/kbknapp/clap-rs/issues/949)) * **Arg::allow_hyphen_values:** updates the docs to include warnings for allow_hyphen_values and multiple(true) used together ([ded5a2f1](https://github.com/kbknapp/clap-rs/commit/ded5a2f15474d4a5bd46a67b130ccb8b6781bd01)) * **clap_app!:** adds using the @group specifier to the macro docs ([fe85fcb1](https://github.com/kbknapp/clap-rs/commit/fe85fcb1772b61f13b20b7ea5290e2437a76190c), closes [#932](https://github.com/kbknapp/clap-rs/issues/932)) ### v2.24.0 (2017-05-07) #### Bug Fixes * fixes a bug where args with last(true) and required(true) set were not being printed in the usage string ([3ac533fe](https://github.com/kbknapp/clap-rs/commit/3ac533fedabf713943eedf006f830a5a486bbe80), closes [#944](https://github.com/kbknapp/clap-rs/issues/944)) * fixes a bug that was printing the arg name, instead of value name when 
Arg::last(true) was used ([e1fe8ac3](https://github.com/kbknapp/clap-rs/commit/e1fe8ac3bc1f9cf4e36df0d881f8419755f1787b), closes [#940](https://github.com/kbknapp/clap-rs/issues/940)) * fixes a bug where flags were parsed as flags AND positional values when specific combinations of settings were used ([20f83292](https://github.com/kbknapp/clap-rs/commit/20f83292d070038b8cee2a6b47e91f6b0a2f7871), closes [#946](https://github.com/kbknapp/clap-rs/issues/946)) ## v2.24.0 (2017-05-05) #### Documentation * **README.md:** fix some typos ([fa34deac](https://github.com/kbknapp/clap-rs/commit/fa34deac079f334c3af97bb7fb151880ba8887f8)) #### API Additions * **Arg:** add `default_value_os` ([d5ef8955](https://github.com/kbknapp/clap-rs/commit/d5ef8955414b1587060f7218385256105b639c88)) * **arg_matches.rs:** Added a Default implementation for Values and OsValues iterators. ([0a4384e3](https://github.com/kbknapp/clap-rs/commit/0a4384e350eed74c2a4dc8964c203f21ac64897f)) ### v2.23.2 (2017-04-19) #### Bug Fixes * **PowerShell Completions:** fixes a bug where powershells completions cant be used if no subcommands are defined ([a8bce558](https://github.com/kbknapp/clap-rs/commit/a8bce55837dc4e0fb187dc93180884a40ae09c6f), closes [#931](https://github.com/kbknapp/clap-rs/issues/931)) #### Improvements * bumps term_size to take advantage of better terminal dimension handling ([e05100b7](https://github.com/kbknapp/clap-rs/commit/e05100b73d74066a90876bf38f952adf5e8ee422)) * **PowerShell Completions:** massively dedups subcommand names in the generate script to make smaller scripts that are still functionally equiv ([85b0e1cc](https://github.com/kbknapp/clap-rs/commit/85b0e1cc4b9755dda75a93d898d79bc38631552b)) #### Documentation * Fix a typo the minimum rust version required ([71dabba3](https://github.com/kbknapp/clap-rs/commit/71dabba3ea0a17c88b0e2199c9d99f0acbf3bc17)) ### v2.23.1 (2017-04-05) #### Bug Fixes * fixes a missing newline character in the autogenerated help and version messages in some instances ([5ae9007d](https://github.com/kbknapp/clap-rs/commit/5ae9007d984ae94ae2752df51bcbaeb0ec89bc15)) ## v2.23.0 (2017-04-05) #### API Additions * `App::long_about` * `App::long_version` * `App::print_long_help` * `App::write_long_help` * `App::print_long_version` * `App::write_long_version` * `Arg::long_help` #### Features * allows distinguishing between short and long version messages (-V/short or --version/long) ([59272b06](https://github.com/kbknapp/clap-rs/commit/59272b06cc213289dc604dbc694cb95d383a5d68)) * allows distinguishing between short and long help with subcommands in the same manner as args ([6b371891](https://github.com/kbknapp/clap-rs/commit/6b371891a1702173a849d1e95f9fecb168bf6fc4)) * allows specifying a short help vs a long help (i.e. 
varying levels of detail depending on if -h or --help was used) ([ef1b24c3](https://github.com/kbknapp/clap-rs/commit/ef1b24c3a0dff2f58c5e2e90880fbc2b69df20ee)) * **clap_app!:** adds support for arg names with hyphens similar to longs with hyphens ([f7a88779](https://github.com/kbknapp/clap-rs/commit/f7a8877978c8f90e6543d4f0d9600c086cf92cd7), closes [#869](https://github.com/kbknapp/clap-rs/issues/869)) #### Bug Fixes * fixes a bug that wasn't allowing help and version to be properly overridden ([8b2ceb83](https://github.com/kbknapp/clap-rs/commit/8b2ceb8368bcb70689fadf1c7f4b9549184926c1), closes [#922](https://github.com/kbknapp/clap-rs/issues/922)) #### Documentation * **clap_app!:** documents the `--("some-arg")` method for using args with hyphens inside them ([bc08ef3e](https://github.com/kbknapp/clap-rs/commit/bc08ef3e185393073d969d301989b6319c616c1f), closes [#919](https://github.com/kbknapp/clap-rs/issues/919)) ### v2.22.2 (2017-03-30) #### Bug Fixes * **Custom Usage Strings:** fixes the usage string regression when using help templates ([0e4fd96d](https://github.com/kbknapp/clap-rs/commit/0e4fd96d74280d306d09e60ac44f938a82321769)) ### v2.22.1 (2017-03-24) #### Bug Fixes * **usage:** fixes a big regression with custom usage strings ([2c41caba](https://github.com/kbknapp/clap-rs/commit/2c41caba3c7d723a2894e315d04da796b0e97759)) ## v2.22.0 (2017-03-23) #### API Additions * **App::name:** adds the ability to change the name of the App instance after creation ([d49e8292](https://github.com/kbknapp/clap-rs/commit/d49e8292b026b06e2b70447cd9f08299f4fcba76), closes [#908](https://github.com/kbknapp/clap-rs/issues/908)) * **Arg::hide_default_value:** adds ability to hide the default value of an argument from the help string ([89e6ea86](https://github.com/kbknapp/clap-rs/commit/89e6ea861e16a1ad56757ca12f6b32d02253e44a), closes [#902](https://github.com/kbknapp/clap-rs/issues/902)) ### v2.21.3 (2017-03-23) #### Bug Fixes * **yaml:** adds support for loading author info from yaml ([e04c390c](https://github.com/kbknapp/clap-rs/commit/e04c390c597a55fa27e724050342f16c42f1c5c9)) ### v2.21.2 (2017-03-17) #### Improvements * add fish subcommand help support ([f8f68cf8](https://github.com/kbknapp/clap-rs/commit/f8f68cf8251669aef4539a25a7c1166f0ac81ea6)) * options that use `require_equals(true)` now display the equals sign in help messages, usage strings, and errors" ([c8eb0384](https://github.com/kbknapp/clap-rs/commit/c8eb0384d394d2900ccdc1593099c97808a3fa05), closes [#903](https://github.com/kbknapp/clap-rs/issues/903)) #### Bug Fixes * setting the max term width now correctly propagates down through child subcommands ### v2.21.1 (2017-03-12) #### Bug Fixes * **ArgRequiredElseHelp:** fixes the precedence of this error to prioritize over other error messages ([74b751ff](https://github.com/kbknapp/clap-rs/commit/74b751ff2e3631e337b7946347c1119829a41c53), closes [#895](https://github.com/kbknapp/clap-rs/issues/895)) * **Positionals:** fixes some regression bugs resulting from old asserts in debug mode. 
([9a3bc98e](https://github.com/kbknapp/clap-rs/commit/9a3bc98e9b55e7514b74b73374c5ac8b6e5e0508), closes [#896](https://github.com/kbknapp/clap-rs/issues/896)) ## v2.21.0 (2017-03-09) #### Performance * doesn't run `arg_post_processing` on multiple values anymore ([ec516182](https://github.com/kbknapp/clap-rs/commit/ec5161828729f6a53f0fccec8648f71697f01f78)) * changes internal use of `VecMap` to `Vec` for matched values of `Arg`s ([22bf137a](https://github.com/kbknapp/clap-rs/commit/22bf137ac581684c6ed460d2c3c640c503d62621)) * vastly reduces the amount of cloning when adding non-global args minus when they're added from `App::args` which is forced to clone ([8da0303b](https://github.com/kbknapp/clap-rs/commit/8da0303bc02db5fe047cfc0631a9da41d9dc60f7)) * refactor to remove unneeded vectors and allocations and checks for significant performance increases ([0efa4119](https://github.com/kbknapp/clap-rs/commit/0efa4119632f134fc5b8b9695b007dd94b76735d)) #### Documentation * Fix examples link in CONTRIBUTING.md ([60cf875d](https://github.com/kbknapp/clap-rs/commit/60cf875d67a252e19bb85054be57696fac2c57a1)) #### Improvements * when `AppSettings::SubcommandsNegateReqs` and `ArgsNegateSubcommands` are used, a new more accurate double line usage string is shown ([50f02300](https://github.com/kbknapp/clap-rs/commit/50f02300d81788817acefef0697e157e01b6ca32), closes [#871](https://github.com/kbknapp/clap-rs/issues/871)) #### API Additions * **Arg::last:** adds the ability to mark a positional argument as 'last' which means it should be used with `--` syntax and can be accessed early ([6a7aea90](https://github.com/kbknapp/clap-rs/commit/6a7aea9043b83badd9ab038b4ecc4c787716147e), closes [#888](https://github.com/kbknapp/clap-rs/issues/888)) * provides `default_value_os` and `default_value_if[s]_os` ([0f2a3782](https://github.com/kbknapp/clap-rs/commit/0f2a378219a6930748d178ba350fe5925be5dad5), closes [#849](https://github.com/kbknapp/clap-rs/issues/849)) * provides `App::help_message` and `App::version_message` which allows one to override the auto-generated help/version flag associated help ([389c413](https://github.com/kbknapp/clap-rs/commit/389c413b7023dccab8c76aa00577ea1d048e7a99), closes [#889](https://github.com/kbknapp/clap-rs/issues/889)) #### New Settings * **InferSubcommands:** adds a setting to allow one to infer shortened subcommands or aliases (i.e. 
for subcommmand "test", "t", "te", or "tes" would be allowed assuming no other ambiguities) ([11602032](https://github.com/kbknapp/clap-rs/commit/11602032f6ff05881e3adf130356e37d5e66e8f9), closes [#863](https://github.com/kbknapp/clap-rs/issues/863)) #### Bug Fixes * doesn't print the argument sections in the help message if all args in that section are hidden ([ce5ee5f5](https://github.com/kbknapp/clap-rs/commit/ce5ee5f5a76f838104aeddd01c8ec956dd347f50)) * doesn't include the various [ARGS] [FLAGS] or [OPTIONS] if the only ones available are hidden ([7b4000af](https://github.com/kbknapp/clap-rs/commit/7b4000af97637703645c5fb2ac8bb65bd546b95b), closes [#882](https://github.com/kbknapp/clap-rs/issues/882)) * now correctly shows subcommand as required in the usage string when AppSettings::SubcommandRequiredElseHelp is used ([8f0884c1](https://github.com/kbknapp/clap-rs/commit/8f0884c1764983a49b45de52a1eddf8d721564d8)) * fixes some memory leaks when an error is detected and clap exits ([8c2dd287](https://github.com/kbknapp/clap-rs/commit/8c2dd28718262ace4ae0db98563809548e02a86b)) * fixes a trait that's marked private accidentlly, but should be crate internal public ([1ae21108](https://github.com/kbknapp/clap-rs/commit/1ae21108015cea87e5360402e1747025116c7878)) * **Completions:** fixes a bug that tried to propogate global args multiple times when generating multiple completion scripts ([5e9b9cf4](https://github.com/kbknapp/clap-rs/commit/5e9b9cf4dd80fa66a624374fd04e6545635c1f94), closes [#846](https://github.com/kbknapp/clap-rs/issues/846)) #### Features * **Options:** adds the ability to require the equals syntax with options --opt=val ([f002693d](https://github.com/kbknapp/clap-rs/commit/f002693dec6a6959c4e9590cb7b7bfffd6d6e5bc), closes [#833](https://github.com/kbknapp/clap-rs/issues/833)) ### v2.20.5 (2017-02-18) #### Bug Fixes * **clap_app!:** fixes a critical bug of a missing fragment specifier when using `!property` style tags. 
([5635c1f94](https://github.com/kbknapp/clap-rs/commit/5e9b9cf4dd80fa66a624374fd04e6545635c1f94)) ### v2.20.4 (2017-02-15) #### Bug Fixes * **Completions:** fixes a bug that tried to propogate global args multiple times when generating multiple completion scripts ([5e9b9cf4](https://github.com/kbknapp/clap-rs/commit/5e9b9cf4dd80fa66a624374fd04e6545635c1f94), closes [#846](https://github.com/kbknapp/clap-rs/issues/846)) #### Documentation * Fix examples link in CONTRIBUTING.md ([60cf875d](https://github.com/kbknapp/clap-rs/commit/60cf875d67a252e19bb85054be57696fac2c57a1)) ### v2.20.3 (2017-02-03) #### Documentation * **Macros:** adds a warning about changing values in Cargo.toml not triggering a rebuild automatically ([112aea3e](https://github.com/kbknapp/clap-rs/commit/112aea3e42ae9e0c0a2d33ebad89496dbdd95e5d), closes [#838](https://github.com/kbknapp/clap-rs/issues/838)) #### Bug Fixes * fixes a println->debugln typo ([279aa62e](https://github.com/kbknapp/clap-rs/commit/279aa62eaf08f56ce090ba16b937bc763cbb45be)) * fixes bash completions for commands that have an underscore in the name ([7f5cfa72](https://github.com/kbknapp/clap-rs/commit/7f5cfa724f0ac4e098f5fe466c903febddb2d994), closes [#581](https://github.com/kbknapp/clap-rs/issues/581)) * fixes a bug where ZSH completions would panic if the binary name had an underscore in it ([891a2a00](https://github.com/kbknapp/clap-rs/commit/891a2a006f775e92c556dda48bb32fac9807c4fb), closes [#581](https://github.com/kbknapp/clap-rs/issues/581)) * allow final word to be wrapped in wrap_help ([564c5f0f](https://github.com/kbknapp/clap-rs/commit/564c5f0f1730f4a2c1cdd128664f1a981c31dcd4), closes [#828](https://github.com/kbknapp/clap-rs/issues/828)) * fixes a bug where global args weren't included in the generated completion scripts ([9a1e006e](https://github.com/kbknapp/clap-rs/commit/9a1e006eb75ad5a6057ebd119aa90f7e06c0ace8), closes [#841](https://github.com/kbknapp/clap-rs/issues/841)) ### v2.20.2 (2017-02-03) #### Bug Fixes * fixes a critical bug where subcommand settings were being propogated too far ([74648c94](https://github.com/kbknapp/clap-rs/commit/74648c94b893df542bfa5bb595e68c7bb8167e36), closes [#832](https://github.com/kbknapp/clap-rs/issues/832)) #### Improvements * adds ArgGroup::multiple to the supported YAML fields for building ArgGroups from YAML ([d8590037](https://github.com/kbknapp/clap-rs/commit/d8590037ce07dafd8cd5b26928aa4a9fd3018288), closes [#840](https://github.com/kbknapp/clap-rs/issues/840)) ### v2.20.1 (2017-01-31) #### Bug Fixes * allow final word to be wrapped in wrap_help ([564c5f0f](https://github.com/kbknapp/clap-rs/commit/564c5f0f1730f4a2c1cdd128664f1a981c31dcd4), closes [#828](https://github.com/kbknapp/clap-rs/issues/828)) * actually show character in debug output ([84d8c547](https://github.com/kbknapp/clap-rs/commit/84d8c5476de95b7f37d61888bc4f13688b712434)) * include final character in line lenght ([aff4ba18](https://github.com/kbknapp/clap-rs/commit/aff4ba18da8147e1259b04b0bfbc1fcb5c78a3c0)) #### Improvements * updates libc and term_size deps for the libc version conflict ([6802ac4a](https://github.com/kbknapp/clap-rs/commit/6802ac4a59c142cda9ec55ca0c45ae5cb9a6ab55)) #### Documentation * fix link from app_from_crate! to crate_authors! 
(#822) ([5b29be9b](https://github.com/kbknapp/clap-rs/commit/5b29be9b073330ab1f7227cdd19fe4aab39d5dcb)) * fix spelling of "guaranteed" ([4f30a65b](https://github.com/kbknapp/clap-rs/commit/4f30a65b9c03eb09607eb91a929a6396637dc105)) #### New Settings * **ArgsNegateSubcommands:** disables args being allowed between subcommands ([5e2af8c9](https://github.com/kbknapp/clap-rs/commit/5e2af8c96adb5ab75fa2d1536237ebcb41869494), closes [#793](https://github.com/kbknapp/clap-rs/issues/793)) * **DontCollapseArgsInUsage:** disables the collapsing of positional args into `[ARGS]` in the usage string ([c2978afc](https://github.com/kbknapp/clap-rs/commit/c2978afc61fb46d5263ab3b2d87ecde1c9ce1553), closes [#769](https://github.com/kbknapp/clap-rs/issues/769)) * **DisableHelpSubcommand:** disables building the `help` subcommand ([a10fc859](https://github.com/kbknapp/clap-rs/commit/a10fc859ee20159fbd9ff4337be59b76467a64f2)) * **AllowMissingPositional:** allows one to implement `$ prog [optional] ` style CLIs where the second postional argument is required, but the first is optional ([1110fdc7](https://github.com/kbknapp/clap-rs/commit/1110fdc7a345c108820dc45783a9bf893fa4c214), closes [#636](https://github.com/kbknapp/clap-rs/issues/636)) * **PropagateGlobalValuesDown:** automatically propagats global arg's values down through *used* subcommands ([985536c8](https://github.com/kbknapp/clap-rs/commit/985536c8ebcc09af98aac835f42a8072ad58c262), closes [#694](https://github.com/kbknapp/clap-rs/issues/694)) #### API Additions ##### Arg * **Arg::value_terminator:** adds the ability to terminate multiple values with a given string or char ([be64ce0c](https://github.com/kbknapp/clap-rs/commit/be64ce0c373efc106384baca3f487ea99fe7b8cf), closes [#782](https://github.com/kbknapp/clap-rs/issues/782)) * **Arg::default_value_if[s]:** adds new methods for *conditional* default values (such as a particular value from another argument was used) ([eb4010e7](https://github.com/kbknapp/clap-rs/commit/eb4010e7b21724447ef837db11ac441915728f22)) * **Arg::requires_if[s]:** adds the ability to *conditionally* require additional args (such as if a particular value was used) ([198449d6](https://github.com/kbknapp/clap-rs/commit/198449d64393c265f0bc327aaeac23ec4bb97226)) * **Arg::required_if[s]:** adds the ability for an arg to be *conditionally* required (i.e. 
"arg X is only required if arg Y was used with value Z") ([ee9cfddf](https://github.com/kbknapp/clap-rs/commit/ee9cfddf345a6b5ae2af42ba72aa5c89e2ca7f59)) * **Arg::validator_os:** adds ability to validate values which may contain invalid UTF-8 ([47232498](https://github.com/kbknapp/clap-rs/commit/47232498a813db4f3366ccd3e9faf0bff56433a4)) ##### Macros * **crate_description!:** Uses the `Cargo.toml` description field to fill in the `App::about` method at compile time ([4d9a82db](https://github.com/kbknapp/clap-rs/commit/4d9a82db8e875e9b64a9c2a5c6e22c25afc1279d), closes [#778](https://github.com/kbknapp/clap-rs/issues/778)) * **crate_name!:** Uses the `Cargo.toml` name field to fill in the `App::new` method at compile time ([4d9a82db](https://github.com/kbknapp/clap-rs/commit/4d9a82db8e875e9b64a9c2a5c6e22c25afc1279d), closes [#778](https://github.com/kbknapp/clap-rs/issues/778)) * **app_from_crate!:** Combines `crate_version!`, `crate_name!`, `crate_description!`, and `crate_authors!` into a single macro call to build a default `App` instance from the `Cargo.toml` fields ([4d9a82db](https://github.com/kbknapp/clap-rs/commit/4d9a82db8e875e9b64a9c2a5c6e22c25afc1279d), closes [#778](https://github.com/kbknapp/clap-rs/issues/778)) #### Features * **no_cargo:** adds a `no_cargo` feature to disable Cargo-env-var-dependent macros for those *not* using `cargo` to build their crates (#786) ([6fdd2f9d](https://github.com/kbknapp/clap-rs/commit/6fdd2f9d693aaf1118fc61bd362273950703f43d)) #### Bug Fixes * **Options:** fixes a critical bug where options weren't forced to have a value ([5a5f2b1e](https://github.com/kbknapp/clap-rs/commit/5a5f2b1e9f598a0d0280ef3e98abbbba2bc41132), closes [#665](https://github.com/kbknapp/clap-rs/issues/665)) * fixes a bug where calling the help of a subcommand wasn't ignoring required args of parent commands ([d3d34a2b](https://github.com/kbknapp/clap-rs/commit/d3d34a2b51ef31004055b0ab574f766d801c3adf), closes [#789](https://github.com/kbknapp/clap-rs/issues/789)) * **Help Subcommand:** fixes a bug where the help subcommand couldn't be overriden ([d34ec3e0](https://github.com/kbknapp/clap-rs/commit/d34ec3e032d03e402d8e87af9b2942fe2819b2da), closes [#787](https://github.com/kbknapp/clap-rs/issues/787)) * **Low Index Multiples:** fixes a bug which caused combinations of LowIndexMultiples and `Arg::allow_hyphen_values` to fail parsing ([26c670ca](https://github.com/kbknapp/clap-rs/commit/26c670ca16d2c80dc26d5c1ce83380ace6357318)) #### Improvements * **Default Values:** improves the error message when default values are involved ([1f33de54](https://github.com/kbknapp/clap-rs/commit/1f33de545036e7fd2f80faba251fca009bd519b8), closes [#774](https://github.com/kbknapp/clap-rs/issues/774)) * **YAML:** adds conditional requirements and conditional default values to YAML ([9a4df327](https://github.com/kbknapp/clap-rs/commit/9a4df327893486adb5558ffefba790c634ccdc6e), closes [#764](https://github.com/kbknapp/clap-rs/issues/764)) * Support `--("some-arg-name")` syntax for defining long arg names when using `clap_app!` macro ([f41ec962](https://github.com/kbknapp/clap-rs/commit/f41ec962c243a5ffff8b1be1ae2ad63970d3d1d4)) * Support `("some app name")` syntax for defining app names when using `clap_app!` macro ([9895b671](https://github.com/kbknapp/clap-rs/commit/9895b671cff784f35cf56abcd8270f7c2ba09699), closes [#759](https://github.com/kbknapp/clap-rs/issues/759)) * **Help Wrapping:** long app names (with spaces), authors, and descriptions are now wrapped appropriately 
([ad4691b7](https://github.com/kbknapp/clap-rs/commit/ad4691b71a63e951ace346318238d8834e04ad8a), closes [#777](https://github.com/kbknapp/clap-rs/issues/777)) #### Documentation * **Conditional Default Values:** fixes the failing doc tests of Arg::default_value_ifs ([4ef09101](https://github.com/kbknapp/clap-rs/commit/4ef091019c083b4db1a0c13f1c1e95ac363259f2)) * **Conditional Requirements:** adds docs for Arg::requires_ifs ([7f296e29](https://github.com/kbknapp/clap-rs/commit/7f296e29db7d9036e76e5dbcc9c8b20dfe7b25bd)) * **README.md:** fix some typos ([f22c21b4](https://github.com/kbknapp/clap-rs/commit/f22c21b422d5b287d1a1ac183a379ee02eebf54f)) * **src/app/mod.rs:** fix some typos ([5c9b0d47](https://github.com/kbknapp/clap-rs/commit/5c9b0d47ca78dea285c5b9dec79063d24c3e451a)) ### v2.19.3 (2016-12-28) #### Bug Fixes * fixes a bug where calling the help of a subcommand wasn't ignoring required args of parent commands ([a0ee4993](https://github.com/kbknapp/clap-rs/commit/a0ee4993015ea97b06b5bc9f378d8bcb18f1c51c), closes [#789](https://github.com/kbknapp/clap-rs/issues/789)) ### v2.19.2 (2016-12-08) #### Bug Fixes * **ZSH Completions:** escapes square brackets in ZSH completions ([7e17d5a3](https://github.com/kbknapp/clap-rs/commit/7e17d5a36b2cc2cc77e7b15796b14d639ed3cbf7), closes [#771](https://github.com/kbknapp/clap-rs/issues/771)) #### Documentation * **Examples:** adds subcommand examples ([0e0f3354](https://github.com/kbknapp/clap-rs/commit/0e0f33547a6901425afc1d9fbe19f7ae3832d9a4), closes [#766](https://github.com/kbknapp/clap-rs/issues/766)) * **README.md:** adds guidance on when to use ~ in version pinning, and clarifies breaking change policy ([591eaefc](https://github.com/kbknapp/clap-rs/commit/591eaefc7319142ba921130e502bb0729feed907), closes [#765](https://github.com/kbknapp/clap-rs/issues/765)) ### v2.19.1 (2016-12-01) #### Bug Fixes * **Help Messages:** fixes help message alignment when specific settings are used on options ([cd94b318](https://github.com/kbknapp/clap-rs/commit/cd94b3188d63b63295a319e90e826bca46befcd2), closes [#760](https://github.com/kbknapp/clap-rs/issues/760)) #### Improvements * **Bash Completion:** allows bash completion to fall back to traidtional bash completion upon no matching completing function ([b1b16d56](https://github.com/kbknapp/clap-rs/commit/b1b16d56d8fddf819bdbe24b3724bb6a9f3fa613))) ## v2.19.0 (2016-11-21) #### Features * allows specifying AllowLeadingHyphen style values, but only for specific args vice command wide ([c0d70feb](https://github.com/kbknapp/clap-rs/commit/c0d70febad9996a77a54107054daf1914c50d4ef), closes [#742](https://github.com/kbknapp/clap-rs/issues/742)) #### Bug Fixes * **Required Unless:** fixes a bug where having required_unless set doesn't work when conflicts are also set ([d20331b6](https://github.com/kbknapp/clap-rs/commit/d20331b6f7940ac3a4e919999f8bb4780875125d), closes [#753](https://github.com/kbknapp/clap-rs/issues/753)) * **ZSH Completions:** fixes an issue where zsh completions caused panics if there were no subcommands ([49e7cdab](https://github.com/kbknapp/clap-rs/commit/49e7cdab76dd1ccc07221e360f07808ec62648aa), closes [#754](https://github.com/kbknapp/clap-rs/issues/754)) #### Improvements * **Validators:** improves the error messages for validators ([65eb3385](https://github.com/kbknapp/clap-rs/commit/65eb33859d3ff53e7d3277f02a9d3fd9038a9dfb), closes [#744](https://github.com/kbknapp/clap-rs/issues/744)) #### Documentation * updates the docs landing page 
([01e1e33f](https://github.com/kbknapp/clap-rs/commit/01e1e33f377934099a4a725fab5cd6c5ff50eaa2)) * adds the macro version back to the readme ([45eb9bf1](https://github.com/kbknapp/clap-rs/commit/45eb9bf130329c3f3853aba0342c2fe3c64ff80f)) * fix broken docs links ([808e7cee](https://github.com/kbknapp/clap-rs/commit/808e7ceeb86d4a319bdc270f51c23a64621dbfb3)) * **Compatibility Policy:** adds an official compatibility policy to ([760d66dc](https://github.com/kbknapp/clap-rs/commit/760d66dc17310b357f257776624151da933cd25d), closes [#740](https://github.com/kbknapp/clap-rs/issues/740)) * **Contributing:** updates the readme to improve the readability and contributing sections ([eb51316c](https://github.com/kbknapp/clap-rs/commit/eb51316cdfdc7258d287ba13b67ef2f42bd2b8f6)) ## v2.18.0 (2016-11-05) #### Features * **Completions:** adds completion support for PowerShell. ([cff82c88](https://github.com/kbknapp/clap-rs/commit/cff82c880e21064fca63351507b80350df6caadf), closes [#729](https://github.com/kbknapp/clap-rs/issues/729)) ### v2.17.1 (2016-11-02) #### Bug Fixes * **Low Index Multiples:** fixes a bug where using low index multiples was propagated to subcommands ([33924e88](https://github.com/kbknapp/clap-rs/commit/33924e884461983c4e6b5ea1330fecc769a4ade7), closes [#725](https://github.com/kbknapp/clap-rs/issues/725)) ## v2.17.0 (2016-11-01) #### Features * **Positional Args:** allows specifying the second to last positional argument as multiple(true) ([1ced2a74](https://github.com/kbknapp/clap-rs/commit/1ced2a7433ea8937a1b260ea65d708f32ca7c95e), closes [#725](https://github.com/kbknapp/clap-rs/issues/725)) ### v2.16.4 (2016-10-31) #### Improvements * **Error Output:** conflicting errors are now symetrical, meaning more consistent and less confusing ([3d37001d](https://github.com/kbknapp/clap-rs/commit/3d37001d1dc647d73cc597ff172f1072d4beb80d), closes [#718](https://github.com/kbknapp/clap-rs/issues/718)) #### Documentation * Fix typo in example `13a_enum_values_automatic` ([c22fbc07](https://github.com/kbknapp/clap-rs/commit/c22fbc07356e556ffb5d1a79ec04597d149b915e)) * **README.md:** fixes failing yaml example (#715) ([21fba9e6](https://github.com/kbknapp/clap-rs/commit/21fba9e6cd8c163012999cd0ce271ec8780c5695)) #### Bug Fixes * **ZSH Completions:** fixes bug that caused panic on subcommands with aliases ([5c70e1a0](https://github.com/kbknapp/clap-rs/commit/5c70e1a01bc977e44c10015d18bb8e215c32dfc8), closes [#714](https://github.com/kbknapp/clap-rs/issues/714)) * **debug:** fixes the debug feature (#716) ([6c11ccf4](https://github.com/kbknapp/clap-rs/commit/6c11ccf443d46258d51f7cda33fbcc81e7fe8e90)) ### v2.16.3 (2016-10-28) #### Bug Fixes * Derive display order after propagation ([9cb6facf](https://github.com/kbknapp/clap-rs/commit/9cb6facf507aff7cddd124b8c29714d2b0e7bd13), closes [#706](https://github.com/kbknapp/clap-rs/issues/706)) * **yaml-example:** inconsistent args ([847f7199](https://github.com/kbknapp/clap-rs/commit/847f7199219ead5065561d91d64780d99ae4b587)) ### v2.16.2 (2016-10-25) #### Bug Fixes * **Fish Completions:** fixes a bug where single quotes are not escaped ([780b4a18](https://github.com/kbknapp/clap-rs/commit/780b4a18281b6f7f7071e1b9db2290fae653c406), closes [#704](https://github.com/kbknapp/clap-rs/issues/704)) ### v2.16.1 (2016-10-24) #### Bug Fixes * **Help Message:** fixes a regression bug where args with multiple(true) threw off alignment ([ebddac79](https://github.com/kbknapp/clap-rs/commit/ebddac791f3ceac193d5ad833b4b734b9643a7af), closes 
## v2.16.0 (2016-10-23)

#### Features
* **Completions:** adds ZSH completion support ([3e36b0ba](https://github.com/kbknapp/clap-rs/commit/3e36b0bac491d3f6194aee14604caf7be26b3d56), closes [#699](https://github.com/kbknapp/clap-rs/issues/699))

## v2.15.0 (2016-10-21)

#### Features
* **AppSettings:** adds new setting `AppSettings::AllowNegativeNumbers` ([ab064546](https://github.com/kbknapp/clap-rs/commit/ab06454677fb6aa9b9f804644fcca2168b1eaee3), closes [#696](https://github.com/kbknapp/clap-rs/issues/696))

#### Documentation
* **app/settings.rs:** moves variants to roughly alphabetical order ([9ed4d4d7](https://github.com/kbknapp/clap-rs/commit/9ed4d4d7957a23357aef60081e45639ab9e3905f))

### v2.14.1 (2016-10-20)

#### Documentation
* Improve documentation around features ([4ee85b95](https://github.com/kbknapp/clap-rs/commit/4ee85b95d2d16708a016a3ba4e6e2c93b89b7fad))
* reword docs for ErrorKind and app::Settings ([3ccde7a4](https://github.com/kbknapp/clap-rs/commit/3ccde7a4b8f7a2ea8b916a5415c04a8ff4b5cb7a))
* fix tests that fail when the "suggestions" feature is disabled ([996fc381](https://github.com/kbknapp/clap-rs/commit/996fc381763a48d125c7ea8a58fed057fd0b4ac6))
* fix the OsString-using doc-tests ([af9e1a39](https://github.com/kbknapp/clap-rs/commit/af9e1a393ce6cdda46a03c8a4f48df222b015a24))
* tag non-rust code blocks as such instead of ignoring them ([0ba9f4b1](https://github.com/kbknapp/clap-rs/commit/0ba9f4b123f281952581b6dec948f7e51dd22890))
* **ErrorKind:** improve some errors about subcommands ([9f6217a4](https://github.com/kbknapp/clap-rs/commit/9f6217a424da823343d7b801b9c350dee3cd1906))
* **yaml:** make sure the doc-tests don't fail before "missing file" ([8c0f5551](https://github.com/kbknapp/clap-rs/commit/8c0f55516f4910c78c9f8a2bdbd822729574f95b))

#### Improvements
* Stabilize clap_app! ([cd516006](https://github.com/kbknapp/clap-rs/commit/cd516006e35c37b005f329338560a0a53d1f3e00))
* **with_defaults:** Deprecate App::with_defaults() ([26085409](https://github.com/kbknapp/clap-rs/commit/2608540940c8bb66e517b65706bc7dea55510682), closes [#638](https://github.com/kbknapp/clap-rs/issues/638))

#### Bug Fixes
* fixes a bug that made determining when to auto-wrap long help messages inconsistent ([468baadb](https://github.com/kbknapp/clap-rs/commit/468baadb8398fc1d37897b0c49374aef4cf97dca), closes [#688](https://github.com/kbknapp/clap-rs/issues/688))
* **Completions:** fish completions for nested subcommands ([a61eaf8a](https://github.com/kbknapp/clap-rs/commit/a61eaf8aade76cfe90ccc0f7125751ebf60e3254))
* **features:** Make lints not enable other nightly-requiring features ([835f75e3](https://github.com/kbknapp/clap-rs/commit/835f75e3ba20999117363ed9f916464d777f36ef))

## v2.14.0 (2016-10-05)

#### Features
* **arg_aliases:** Ability to alias arguments ([33b5f6ef](https://github.com/kbknapp/clap-rs/commit/33b5f6ef2c9612ecabb31f96b824793e46bfd3dd), closes [#669](https://github.com/kbknapp/clap-rs/issues/669))
* **flag_aliases:** Ability to alias flags ([40d6dac9](https://github.com/kbknapp/clap-rs/commit/40d6dac973927dded6ab423481634ef47ee7bfd7))

#### Bug Fixes
* **UsageParser:** Handle non-ascii names / options. ([1d6a7c6e](https://github.com/kbknapp/clap-rs/commit/1d6a7c6e7e6aadc527346aa822f19d8587f714f3), closes [#664](https://github.com/kbknapp/clap-rs/issues/664))
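Two of the features above, `AppSettings::AllowNegativeNumbers` (v2.15.0) and argument aliases (v2.14.0), combine naturally. A minimal sketch, assuming clap 2.x and with the `calc`/`--offset`/`--shift` names invented here:

```rust
extern crate clap;
use clap::{App, AppSettings, Arg};

fn main() {
    let m = App::new("calc")
        // Values such as "-3" are treated as values rather than unknown flags.
        .setting(AppSettings::AllowNegativeNumbers)
        .arg(Arg::with_name("offset")
            .long("offset")
            // A hidden second long name, per the v2.14.0 arg_aliases feature.
            .alias("shift")
            .takes_value(true))
        .get_matches();
    println!("offset = {:?}", m.value_of("offset"));
}
```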
#### Documentation
* typo ([bac417fa](https://github.com/kbknapp/clap-rs/commit/bac417fa1cea3d32308334c7cccfcf54546cd9d8))

## v2.13.0 (2016-09-18)

#### Documentation
* updates README.md with new website information and updated video tutorials info ([0c19c580](https://github.com/kbknapp/clap-rs/commit/0c19c580cf50f1b82ff32f70b36708ae2bcac132))
* updates the docs about removing implicit value_delimiter(true) ([c81bc722](https://github.com/kbknapp/clap-rs/commit/c81bc722ebb8a86d22be89b5aec98df9fe222a08))
* **Default Values:** adds better examples on using default values ([57a8d9ab](https://github.com/kbknapp/clap-rs/commit/57a8d9abb2f973c235a8a14f8fc031673d7a7460), closes [#418](https://github.com/kbknapp/clap-rs/issues/418))

#### Bug Fixes
* **Value Delimiters:** fixes the confusion around implicitly setting value delimiters. (default is now `false`) ([09d4d0a9](https://github.com/kbknapp/clap-rs/commit/09d4d0a9038d7ce2df55c2aec95e16f36189fcee), closes [#666](https://github.com/kbknapp/clap-rs/issues/666))

### v2.12.1 (2016-09-13)

#### Bug Fixes
* **Help Wrapping:** fixes a regression bug where the old {n} newline char stopped working ([92ac353b](https://github.com/kbknapp/clap-rs/commit/92ac353b48b7caa2511ad2a046d94da93c236cf6), closes [#661](https://github.com/kbknapp/clap-rs/issues/661))

## v2.12.0 (2016-09-13)

#### Features
* **Help:** adds ability to hide the possible values on a per-argument basis ([9151ef73](https://github.com/kbknapp/clap-rs/commit/9151ef739871f2e74910c342299c0de196b95dec), closes [#640](https://github.com/kbknapp/clap-rs/issues/640))
* **help:** allow for limiting detected terminal width ([a43e28af](https://github.com/kbknapp/clap-rs/commit/a43e28af85c9a9deaedd5ef735f4f13008daab29), closes [#653](https://github.com/kbknapp/clap-rs/issues/653))

#### Documentation
* **Help Wrapping:** removes the verbiage about using `'{n}'` to insert newlines in help text ([c5a2b352](https://github.com/kbknapp/clap-rs/commit/c5a2b352ca600f5b802290ad945731066cd53611))
* **Value Delimiters:** updates the docs for the Arg::multiple method WRT value delimiters and default settings ([f9d17a06](https://github.com/kbknapp/clap-rs/commit/f9d17a060aa53f10d0a6e1a7eed5d989d1a59533))
* **appsettings:** Document AppSetting::DisableVersion ([94501965](https://github.com/kbknapp/clap-rs/commit/945019654d2ca67eb2b1d6014fdf80b84d528d30), closes [#589](https://github.com/kbknapp/clap-rs/issues/589))

#### Bug Fixes
* **AllowLeadingHyphen:** fixes a bug where valid args aren't recognized with this setting ([a9699e4d](https://github.com/kbknapp/clap-rs/commit/a9699e4d7cdc9a06e73b845933ff1fe6d76f016a), closes [#588](https://github.com/kbknapp/clap-rs/issues/588))

#### Improvements
* **Help Wrapping:**
  * clap now ignores hard newlines in help messages and properly re-aligns text, but still wraps if the term width is too small ([c7678523](https://github.com/kbknapp/clap-rs/commit/c76785239fd42adc8ca04f9202b6fec615aa9f14), closes [#617](https://github.com/kbknapp/clap-rs/issues/617))
  * makes some minor changes to when next line help is automatically used ([01cae799](https://github.com/kbknapp/clap-rs/commit/01cae7990a33167ac35103fb36c811b4fe6eb98f))
* **Value Delimiters:** changes the default value delimiter rules ([f9e69254](https://github.com/kbknapp/clap-rs/commit/f9e692548e8c94de15f909432de301407d6bb834), closes [#655](https://github.com/kbknapp/clap-rs/issues/655))
* **YAML:** supports setting Arg::require_delimiter from YAML ([b9b55a39](https://github.com/kbknapp/clap-rs/commit/b9b55a39dfebcdbdc05dca2692927e503db50816))

#### Performance
* **help:** fix redundant contains() checks ([a8afed74](https://github.com/kbknapp/clap-rs/commit/a8afed7428bf0733f8e93bb11ad6c00d9e970fcc))

### v2.11.3 (2016-09-07)

#### Documentation
* **Help Wrapping:** removes the verbiage about using `'{n}'` to insert newlines in help text ([c5a2b352](https://github.com/kbknapp/clap-rs/commit/c5a2b352ca600f5b802290ad945731066cd53611))

#### Improvements
* **Help Wrapping:**
  * clap now ignores hard newlines in help messages and properly re-aligns text, but still wraps if the term width is too small ([c7678523](https://github.com/kbknapp/clap-rs/commit/c76785239fd42adc8ca04f9202b6fec615aa9f14), closes [#617](https://github.com/kbknapp/clap-rs/issues/617))
  * makes some minor changes to when next line help is automatically used ([01cae799](https://github.com/kbknapp/clap-rs/commit/01cae7990a33167ac35103fb36c811b4fe6eb98f))
* **YAML:** supports setting Arg::require_delimiter from YAML ([b9b55a39](https://github.com/kbknapp/clap-rs/commit/b9b55a39dfebcdbdc05dca2692927e503db50816))

### v2.11.2 (2016-09-06)

#### Improvements
* **Help Wrapping:** makes some minor changes to when next line help is automatically used ([5658b117](https://github.com/kbknapp/clap-rs/commit/5658b117aec3e03adff9c8c52a4c4bc1fcb4e1ff))

### v2.11.1 (2016-09-05)

#### Bug Fixes
* **Settings:** fixes an issue where settings weren't propagated down through grandchild subcommands ([b3efc107](https://github.com/kbknapp/clap-rs/commit/b3efc107515d78517b20798ff3890b8a2b04498e), closes [#638](https://github.com/kbknapp/clap-rs/issues/638))

#### Features
* **Errors:** Errors with custom description ([58512f2f](https://github.com/kbknapp/clap-rs/commit/58512f2fcb430745f1ee6ee8f1c67f62dc216c73))

#### Improvements
* **help:** use term_size instead of home-grown solution ([fc7327e9](https://github.com/kbknapp/clap-rs/commit/fc7327e9dcf4258ef2baebf0a8714d9c0622855b))

### v2.11.0 (2016-08-28)

#### Bug Fixes
* **Groups:** fixes some usage strings that contain both args in groups and ones that conflict with each other ([3d782def](https://github.com/kbknapp/clap-rs/commit/3d782def57725e2de26ca5a5bc5cc2e40ddebefb), closes [#616](https://github.com/kbknapp/clap-rs/issues/616))

#### Documentation
* moves docs to docs.rs ([03209d5e](https://github.com/kbknapp/clap-rs/commit/03209d5e1300906f00bafec1869c2047a92e5071), closes [#634](https://github.com/kbknapp/clap-rs/issues/634))

#### Improvements
* **Completions:** uses standard conventions for bash completion files, namely '{bin}.bash-completion' ([27f5bbfb](https://github.com/kbknapp/clap-rs/commit/27f5bbfbcc9474c2f57c2b92b1feb898ae46ee70), closes [#567](https://github.com/kbknapp/clap-rs/issues/567))
* **Help:** automatically moves help text to the next line and wraps when term width is determined to be too small, or help text is too long ([150964c4](https://github.com/kbknapp/clap-rs/commit/150964c4e7124d54476c9d9b4b3f2406f0fd00e5), closes [#597](https://github.com/kbknapp/clap-rs/issues/597))
* **YAML Errors:** vastly improves error messages when using YAML ([f43b7c65](https://github.com/kbknapp/clap-rs/commit/f43b7c65941c53adc0616b8646a21dc255862eb2), closes [#574](https://github.com/kbknapp/clap-rs/issues/574))

#### Features
* adds App::with_defaults to automatically use crate_authors! and crate_version! macros ([5520bb01](https://github.com/kbknapp/clap-rs/commit/5520bb012c127dfd299fd55699443c744d8dcd5b), closes [#600](https://github.com/kbknapp/clap-rs/issues/600))
### v2.10.4 (2016-08-25)

#### Bug Fixes
* **Help Wrapping:** fixes a bug where help is wrapped incorrectly and causing a panic with some non-English characters ([d0b442c7](https://github.com/kbknapp/clap-rs/commit/d0b442c7beeecac9764406bc3bd171ced0b8825e), closes [#626](https://github.com/kbknapp/clap-rs/issues/626))

### v2.10.3 (2016-08-25)

#### Features
* **Help:** adds new shorthand way to use source formatting and ignore term width in help messages ([7dfdaf20](https://github.com/kbknapp/clap-rs/commit/7dfdaf200ebb5c431351a045b48f5e0f0d3f31db), closes [#625](https://github.com/kbknapp/clap-rs/issues/625))

#### Documentation
* **Term Width:** adds details about set_term_width(0) ([00b8205d](https://github.com/kbknapp/clap-rs/commit/00b8205d22639d1b54b9c453c55c785aace52cb2))

#### Bug Fixes
* **Unicode:** fixes two bugs where non-English characters were stripped or caused a panic with help wrapping ([763a5c92](https://github.com/kbknapp/clap-rs/commit/763a5c920e23efc74d190af0cb8b5dd714b2d67a), closes [#626](https://github.com/kbknapp/clap-rs/issues/626))

### v2.10.2 (2016-08-22)

#### Bug Fixes
* fixes a bug where the help is printed twice ([a643fb28](https://github.com/kbknapp/clap-rs/commit/a643fb283acd9905dc727c4579c5c9fa2ceaa7e7), closes [#623](https://github.com/kbknapp/clap-rs/issues/623))

### v2.10.1 (2016-08-21)

#### Bug Fixes
* **Help Subcommand:** fixes misleading usage string when using multi-level subcommands ([e203515e](https://github.com/kbknapp/clap-rs/commit/e203515e3ac495b405dbba4f78fb6af148fd282e), closes [#618](https://github.com/kbknapp/clap-rs/issues/618))

#### Features
* **YAML:** allows using lists or single values with arg declarations ([9ade2cd4](https://github.com/kbknapp/clap-rs/commit/9ade2cd4b268d6d7fe828319ce6a523c641b9c38), closes [#614](https://github.com/kbknapp/clap-rs/issues/614), [#613](https://github.com/kbknapp/clap-rs/issues/613))

## v2.10.0 (2016-07-29)

#### Features
* **Completions:** one can generate a basic fish completions script at compile time ([1979d2f2](https://github.com/kbknapp/clap-rs/commit/1979d2f2f3216e57d02a97e624a8a8f6cf867ed9))

#### Bug Fixes
* **parser:** preserve external subcommand name ([875df243](https://github.com/kbknapp/clap-rs/commit/875df24316c266920a073c13bbefbf546bc1f635))

#### Breaking Changes
* **parser:** preserve external subcommand name ([875df243](https://github.com/kbknapp/clap-rs/commit/875df24316c266920a073c13bbefbf546bc1f635))

#### Documentation
* **YAML:** fixes example 17's incorrect reference to arg_groups instead of groups ([b6c99e13](https://github.com/kbknapp/clap-rs/commit/b6c99e1377f918e78c16c8faced70a71607da931), closes [#601](https://github.com/kbknapp/clap-rs/issues/601))

### 2.9.3 (2016-07-24)

#### Bug Fixes
* fixes bug where only the first arg in a list of required_unless_one is recognized ([1fc3b55b](https://github.com/kbknapp/clap-rs/commit/1fc3b55bd6c8653b02e7c4253749c6b77737d2ac), closes [#575](https://github.com/kbknapp/clap-rs/issues/575))
* **Settings:** fixes typo subcommandsrequired->subcommandrequired ([fc72cdf5](https://github.com/kbknapp/clap-rs/commit/fc72cdf591d30f5d9375d0b5cc2a2ff3e812f9f6), closes [#593](https://github.com/kbknapp/clap-rs/issues/593))

#### Features
* **Completions:** adds the ability to generate completions to io::Write object ([9f62cf73](https://github.com/kbknapp/clap-rs/commit/9f62cf7378ba5acb5ce8c5bac89b4aa60c30755f))
* **Settings:** Add unset_setting and unset_settings fns to App (#598) ([0ceba231](https://github.com/kbknapp/clap-rs/commit/0ceba231c6767cd6d88fdb1feeeea41deadf77ff), closes [#590](https://github.com/kbknapp/clap-rs/issues/590))

### 2.9.2 (2016-07-03)

#### Documentation
* **Completions:** fixes the formatting of the Cargo.toml excerpt in the completions example ([722f2607](https://github.com/kbknapp/clap-rs/commit/722f2607beaef56b6a0e433db5fd09492d9f028c))

#### Bug Fixes
* **Completions:** fixes bug where --help and --version short weren't added to the completion list ([e9f2438e](https://github.com/kbknapp/clap-rs/commit/e9f2438e2ce99af0ae570a2eaf541fc7f55b771b), closes [#536](https://github.com/kbknapp/clap-rs/issues/536))

### 2.9.1 (2016-07-02)

#### Improvements
* **Completions:** allows multiple completions to be built by namespacing with bin name ([57484b2d](https://github.com/kbknapp/clap-rs/commit/57484b2daeaac01c1026e8c84efc8bf099e0eb31))

## v2.9.0 (2016-07-01)

#### Documentation
* **Completions:**
  * fixes some errors in the completion docs ([9b359bf0](https://github.com/kbknapp/clap-rs/commit/9b359bf06255d3dad8f489308044b60a9d1e6a87))
  * adds documentation for completion scripts ([c6c519e4](https://github.com/kbknapp/clap-rs/commit/c6c519e40efd6c4533a9ef5efe8e74fd150391b7))

#### Features
* **Completions:**
  * one can now generate a bash completions script at compile time! ([e75b6c7b](https://github.com/kbknapp/clap-rs/commit/e75b6c7b75f729afb9eb1d2a2faf61dca7674634), closes [#376](https://github.com/kbknapp/clap-rs/issues/376))
  * completions now include aliases to subcommands, including all subcommand options ([0ab9f840](https://github.com/kbknapp/clap-rs/commit/0ab9f84052a8cf65b5551657f46c0c270841e634), closes [#556](https://github.com/kbknapp/clap-rs/issues/556))
  * completions now continue completing even after first completion ([18fc2e5b](https://github.com/kbknapp/clap-rs/commit/18fc2e5b5af63bf54a94b72cec5e1223d49f4806))
  * allows matching on possible values in options ([89cc2026](https://github.com/kbknapp/clap-rs/commit/89cc2026ba9ac69cf44c5254360bbf99236d4f89), closes [#557](https://github.com/kbknapp/clap-rs/issues/557))

#### Bug Fixes
* **AllowLeadingHyphen:** fixes an issue where isn't ignored like it should be with this setting ([96c24c9a](https://github.com/kbknapp/clap-rs/commit/96c24c9a8fa1f85e06138d3cdd133e51659e19d2), closes [#558](https://github.com/kbknapp/clap-rs/issues/558))

## v2.8.0 (2016-06-30)

#### Features
* **Arg:** adds new setting `Arg::require_delimiter` which requires val delimiter to parse multiple values ([920b5595](https://github.com/kbknapp/clap-rs/commit/920b5595ed72abfb501ce054ab536067d8df2a66))

#### Bug Fixes
* Declare term::Winsize as repr(C) ([5d663d90](https://github.com/kbknapp/clap-rs/commit/5d663d905c9829ce6e7a164f1f0896cdd70236dd))

#### Documentation
* **Arg:** adds docs for ([49af4e38](https://github.com/kbknapp/clap-rs/commit/49af4e38a5dae2ab0a7fc3b4147e2c053d532484))

### v2.7.1 (2016-06-29)

#### Bug Fixes
* **Options:**
  * options with multiple values and using delimiters no longer parse additional values after a trailing space ([cdc500bd](https://github.com/kbknapp/clap-rs/commit/cdc500bdde6abe238c36ade406ddafc2bafff583))
  * using options with multiple values and with an = no longer parses args after the trailing space as values ([290f61d0](https://github.com/kbknapp/clap-rs/commit/290f61d07177413cf082ada55526d83405f6d011))

## v2.7.0 (2016-06-28)
#### Documentation
* fix typos ([43b3d40b](https://github.com/kbknapp/clap-rs/commit/43b3d40b8c38b1571da75af86b5088be96cccec2))
* **ArgGroup:** vastly improves ArgGroup docs by adding better examples ([9e5f4f5d](https://github.com/kbknapp/clap-rs/commit/9e5f4f5d734d630bca5535c3a0aa4fd4f9db3e39), closes [#534](https://github.com/kbknapp/clap-rs/issues/534))

#### Features
* **ArgGroup:** one can now specify groups which require AT LEAST one of the args ([33689acc](https://github.com/kbknapp/clap-rs/commit/33689acc689b217a8c0ee439f1b1225590c38355), closes [#533](https://github.com/kbknapp/clap-rs/issues/533))

#### Bug Fixes
* **App:** using `App::print_help` now prints the same as would have been printed by `--help` or the like ([e84cc018](https://github.com/kbknapp/clap-rs/commit/e84cc01836bbe0527e97de6db9889bd9e0fd6ba1), closes [#536](https://github.com/kbknapp/clap-rs/issues/536))
* **Help:**
  * prevents invoking `help help` and displaying an incorrect help message ([e3d2893f](https://github.com/kbknapp/clap-rs/commit/e3d2893f377942a2d4cf3c6ff04524d0346e6fdb), closes [#538](https://github.com/kbknapp/clap-rs/issues/538))
  * subcommand help messages requested via help now correctly match --help ([08ad1cff](https://github.com/kbknapp/clap-rs/commit/08ad1cff4fec57224ea957a2891a057b323c01bc), closes [#539](https://github.com/kbknapp/clap-rs/issues/539))

#### Improvements
* **ArgGroup:** Add multiple ArgGroups per Arg ([902e182f](https://github.com/kbknapp/clap-rs/commit/902e182f7a58aff11ff01e0a452abcdbdb2262aa), closes [#426](https://github.com/kbknapp/clap-rs/issues/426))
* **Usage Strings:** `[FLAGS]` and `[ARGS]` are no longer blindly added to usage strings ([9b2e45b1](https://github.com/kbknapp/clap-rs/commit/9b2e45b170aff567b038d8b3368880b6046c10c6), closes [#537](https://github.com/kbknapp/clap-rs/issues/537))
* **arg_enum!:** allows using meta items like repr(C) with arg_enum!s ([edf9b233](https://github.com/kbknapp/clap-rs/commit/edf9b2331c17a2cbcc13f961add4c55c2778e773), closes [#543](https://github.com/kbknapp/clap-rs/issues/543))

## v2.6.0 (2016-06-14)

#### Improvements
* removes extra newline from help output ([86e61d19](https://github.com/kbknapp/clap-rs/commit/86e61d19a748fb9870fcf1175308984e51ca1115))
* allows printing version to any io::Write object ([921f5f79](https://github.com/kbknapp/clap-rs/commit/921f5f7916597f1d028cd4a65bfe76a01c801724))
* removes extra newline when printing version ([7e2e2cbb](https://github.com/kbknapp/clap-rs/commit/7e2e2cbb4a8a0f050bb8072a376f742fc54b8589))
* **Aliases:** improves readability of aliases in help messages ([ca511de7](https://github.com/kbknapp/clap-rs/commit/ca511de71f5b8c2ac419f1b188658e8c63b67846), closes [#526](https://github.com/kbknapp/clap-rs/issues/526), [#529](https://github.com/kbknapp/clap-rs/issues/529))
* **Usage Strings:** improves the default usage string when only a single positional arg is present ([ec86f2da](https://github.com/kbknapp/clap-rs/commit/ec86f2dada1545a63fc72355e22fcdc4c466c215), closes [#518](https://github.com/kbknapp/clap-rs/issues/518))

#### Features
* **Help:** allows wrapping at specified term width (Even on Windows!) ([1761dc0d](https://github.com/kbknapp/clap-rs/commit/1761dc0d27d0d621229d792be40c36fbf65c3014), closes [#451](https://github.com/kbknapp/clap-rs/issues/451))
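The v2.7.0 ArgGroup feature ("require AT LEAST one of the args") is normally expressed with `required(true)` plus `multiple(true)` on the group. A minimal sketch with made-up flag names, assuming clap 2.x:

```rust
extern crate clap;
use clap::{App, Arg, ArgGroup};

fn main() {
    let m = App::new("convert")
        .arg(Arg::with_name("json").long("json"))
        .arg(Arg::with_name("yaml").long("yaml"))
        // required(true): at least one member of the group must be supplied;
        // multiple(true): supplying more than one is also allowed.
        .group(ArgGroup::with_name("format")
            .args(&["json", "yaml"])
            .required(true)
            .multiple(true))
        .get_matches();
    println!("json? {} yaml? {}", m.is_present("json"), m.is_present("yaml"));
}
```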
* **Settings:**
  * adds new setting to stop delimiting values with -- or TrailingVarArg ([fc3e0f5a](https://github.com/kbknapp/clap-rs/commit/fc3e0f5afda6d24cdb3c4676614beebe13e1e870), closes [#511](https://github.com/kbknapp/clap-rs/issues/511))
  * one can now set an AppSetting which is propagated down through child subcommands ([e2341835](https://github.com/kbknapp/clap-rs/commit/e23418351a3b98bf08dfd7744bc14377c70d59ee), closes [#519](https://github.com/kbknapp/clap-rs/issues/519))
* **Subcommands:** adds support for visible aliases ([7b10e7f8](https://github.com/kbknapp/clap-rs/commit/7b10e7f8937a07fdb8d16a6d8df79ce78d080cd3), closes [#522](https://github.com/kbknapp/clap-rs/issues/522))

#### Bug Fixes
* fixes bug where args are printed out of order with templates ([05abb534](https://github.com/kbknapp/clap-rs/commit/05abb534864764102031a0d402e64ac65867aa87))
* fixes bug where one can't override version or help flags ([90d7d6a2](https://github.com/kbknapp/clap-rs/commit/90d7d6a2ea8240122dd9bf8d82d3c4f5ebb5c703), closes [#514](https://github.com/kbknapp/clap-rs/issues/514))
* fixes issue where before_help wasn't printed ([b3faff60](https://github.com/kbknapp/clap-rs/commit/b3faff6030f76a23f26afcfa6a90169002ed7106))
* **Help:** `App::before_help` and `App::after_help` now correctly wrap ([1f4da767](https://github.com/kbknapp/clap-rs/commit/1f4da7676e6e71aa8dda799f3eeefad105a47819), closes [#516](https://github.com/kbknapp/clap-rs/issues/516))
* **Settings:** fixes bug where new color settings couldn't be converted from strs ([706a7c11](https://github.com/kbknapp/clap-rs/commit/706a7c11b0900be594de6d5a3121938eff197602))
* **Subcommands:** subcommands with aliases now display help of the aliased subcommand ([5354d14b](https://github.com/kbknapp/clap-rs/commit/5354d14b51f189885ba110e01e6b76cca3752992), closes [#521](https://github.com/kbknapp/clap-rs/issues/521))
* **Windows:** fixes a failing Windows build ([01e7dfd6](https://github.com/kbknapp/clap-rs/commit/01e7dfd6c07228c0be6695b3c7bf9370d82860d4))
* **YAML:** adds missing YAML methods for App and Arg ([e468faf3](https://github.com/kbknapp/clap-rs/commit/e468faf3f05950fd9f72d84b69aa2061e91c6c64), closes [#528](https://github.com/kbknapp/clap-rs/issues/528))

### v2.5.2 (2016-05-31)

#### Improvements
* removes extra newline from help output ([86e61d19](https://github.com/kbknapp/clap-rs/commit/86e61d19a748fb9870fcf1175308984e51ca1115))
* allows printing version to any io::Write object ([921f5f79](https://github.com/kbknapp/clap-rs/commit/921f5f7916597f1d028cd4a65bfe76a01c801724))
* removes extra newline when printing version ([7e2e2cbb](https://github.com/kbknapp/clap-rs/commit/7e2e2cbb4a8a0f050bb8072a376f742fc54b8589))

#### Bug Fixes
* fixes bug where args are printed out of order with templates ([3935431d](https://github.com/kbknapp/clap-rs/commit/3935431d5633f577c0826ae2142794b301f4b8ca))
* fixes bug where one can't override version or help flags ([90d7d6a2](https://github.com/kbknapp/clap-rs/commit/90d7d6a2ea8240122dd9bf8d82d3c4f5ebb5c703), closes [#514](https://github.com/kbknapp/clap-rs/issues/514))
* fixes issue where before_help wasn't printed ([b3faff60](https://github.com/kbknapp/clap-rs/commit/b3faff6030f76a23f26afcfa6a90169002ed7106))

#### Documentation
* inter-links all types and pages ([3312893d](https://github.com/kbknapp/clap-rs/commit/3312893ddaef3f44d68d8d26ed3d08010be50d97), closes [#505](https://github.com/kbknapp/clap-rs/issues/505))
* makes all publicly available types viewable in docs ([52ca6505](https://github.com/kbknapp/clap-rs/commit/52ca6505b4fec7b5c2d53d160c072d395eb21da6))

### v2.5.1 (2016-05-11)

#### Bug Fixes
* **Subcommand Aliases**: fixes lifetime issue when setting multiple aliases at once ([ac42f6cf0](https://github.com/kbknapp/clap-rs/commit/ac42f6cf0de6c4920f703807d63061803930b18d))

## v2.5.0 (2016-05-10)

#### Improvements
* **SubCommand Aliases:** adds feature to yaml configs too ([69592195](https://github.com/kbknapp/clap-rs/commit/695921954dde46dfd483399dcdef482c9dd7f34a))

#### Features
* **SubCommands:** adds support for subcommand aliases ([66b4dea6](https://github.com/kbknapp/clap-rs/commit/66b4dea65c44d8f77ff522238a9237aed1bcab6d), closes [#469](https://github.com/kbknapp/clap-rs/issues/469))

### v2.4.3 (2016-05-10)

#### Bug Fixes
* **Usage Strings:**
  * now properly dedups args that are also in groups ([3ca0947c](https://github.com/kbknapp/clap-rs/commit/3ca0947c166b4f8525752255e3a4fa6565eb9689), closes [#498](https://github.com/kbknapp/clap-rs/issues/498))
  * removes duplicate groups from usage strings ([f574fb8a](https://github.com/kbknapp/clap-rs/commit/f574fb8a7cde4d4a2fa4c4481d59be2d0f135427))

#### Improvements
* **Groups:** formats positional args in groups in a better way ([fef11154](https://github.com/kbknapp/clap-rs/commit/fef11154fb7430d1cbf04a672aabb366e456a368))
* **Help:**
  * moves positionals to standard <> formatting ([03dfe5ce](https://github.com/kbknapp/clap-rs/commit/03dfe5ceff1d63f172788ff688567ddad9fe119b))
  * default help subcommand string has been shortened ([5b7fe8e4](https://github.com/kbknapp/clap-rs/commit/5b7fe8e4161e43ab19e2e5fcf55fbe46791134e9), closes [#494](https://github.com/kbknapp/clap-rs/issues/494))

### v2.4.3 (2016-05-10)
* Ghost Release

### v2.4.3 (2016-05-10)
* Ghost Release

## v2.4.0 (2016-05-02)

#### Features
* **Help:** adds support for displaying info before help message ([29fbfa3b](https://github.com/kbknapp/clap-rs/commit/29fbfa3b963f2f3ca7704bf5d3e1201531baa373))
* **Required:** adds support for args that are required unless certain args are present ([af1f7916](https://github.com/kbknapp/clap-rs/commit/af1f79168390ea7da4074d0d9777de458ea64971))

#### Documentation
* hides formatting from docs ([cb708093](https://github.com/kbknapp/clap-rs/commit/cb708093a7cd057f08c98b7bd1ed54c2db86ae7e))
* **required_unless:** adds docs and examples for required_unless ([ca727b52](https://github.com/kbknapp/clap-rs/commit/ca727b52423b9883acd88b2f227b2711bc144573))

#### Bug Fixes
* **Required Args:** fixes issue where missing required args are sometimes duplicated in error messages ([3beebd81](https://github.com/kbknapp/clap-rs/commit/3beebd81e7bc2faa4115ac109cf570e512c5477f), closes [#492](https://github.com/kbknapp/clap-rs/issues/492))

## v2.3.0 (2016-04-18)

#### Improvements
* **macros.rs:** Added write_nspaces macro (a new version of write_spaces) ([9d757e86](https://github.com/kbknapp/clap-rs/commit/9d757e8678e334e5a740ac750c76a9ed4e785cba))
* **parser.rs:**
  * Provide a way to create a usage string without the USAGE: title ([a91d378b](https://github.com/kbknapp/clap-rs/commit/a91d378ba0c91b5796457f8c6e881b13226ab735))
  * Make Parser's create_usage public, allowing functions outside the parser to generate the help ([d51945f8](https://github.com/kbknapp/clap-rs/commit/d51945f8b82ebb0963f4f40b384a9e8335783091))
  * Expose Parser's flags, opts and positionals arguments as iterators ([9b23e7ee](https://github.com/kbknapp/clap-rs/commit/9b23e7ee40e51f7a823644c4496be955dc6c9d3a))
* **src/args:** Exposes argument display order by introducing a new Trait ([1321630e](https://github.com/kbknapp/clap-rs/commit/1321630ef56955f152c73376d4d85cceb0bb4a12))
* **src/args:** Added longest_filter to AnyArg trait ([65b3f667](https://github.com/kbknapp/clap-rs/commit/65b3f667532685f854c699ddd264d326599cf7e5))

#### Features
* **Authors Macro:** adds a crate_authors macro ([38fb59ab](https://github.com/kbknapp/clap-rs/commit/38fb59abf480eb2b6feca269097412f8b00b5b54), closes [#447](https://github.com/kbknapp/clap-rs/issues/447))
* **HELP:**
  * implements optional colored help messages ([abc8f669](https://github.com/kbknapp/clap-rs/commit/abc8f669c3c8193ffc3a3b0ac6c3ac2198794d4f), closes [#483](https://github.com/kbknapp/clap-rs/issues/483))
  * Add a Templated Help system. ([81e121ed](https://github.com/kbknapp/clap-rs/commit/81e121edd616f7285593f11120c63bcccae0d23e))

#### Bug Fixes
* **HELP:** Adjust Help to semantic changes introduced in 6933b84 ([8d23806b](https://github.com/kbknapp/clap-rs/commit/8d23806bd67530ad412c34a1dcdcb1435555573d))

### v2.2.6 (2016-04-11)

#### Bug Fixes
* **Arg Groups**: fixes bug where arg name isn't printed properly ([3019a685](https://github.com/kbknapp/clap-rs/commit/3019a685eee747ccbe6be09ad5dddce0b1d1d4db), closes [#476](https://github.com/kbknapp/clap-rs/issues/476))

### v2.2.5 (2016-04-03)

#### Bug Fixes
* **Empty Values:** fixes bug where empty values weren't stored ([885d166f](https://github.com/kbknapp/clap-rs/commit/885d166f04eb3fb581898ae5818c6c8032e5a686), closes [#470](https://github.com/kbknapp/clap-rs/issues/470))
* **Help Message:** fixes bug where arg name is printed twice ([71acf1d5](https://github.com/kbknapp/clap-rs/commit/71acf1d576946658b8bbdb5ae79e6716c43a030f), closes [#472](https://github.com/kbknapp/clap-rs/issues/472))

### v2.2.4 (2016-03-30)

#### Bug Fixes
* fixes compiling with debug cargo feature ([d4b55450](https://github.com/kbknapp/clap-rs/commit/d4b554509928031ac0808076178075bb21f8c1da))
* **Empty Values:** fixes bug where empty values weren't stored ([885d166f](https://github.com/kbknapp/clap-rs/commit/885d166f04eb3fb581898ae5818c6c8032e5a686), closes [#470](https://github.com/kbknapp/clap-rs/issues/470))

### v2.2.3 (2016-03-28)

#### Bug Fixes
* **Help Subcommand:** fixes issue where help and version flags weren't properly displayed ([205b07bf](https://github.com/kbknapp/clap-rs/commit/205b07bf2e6547851f1290f8cd6b169145e144f1), closes [#466](https://github.com/kbknapp/clap-rs/issues/466))

### v2.2.2 (2016-03-27)

#### Bug Fixes
* **Help Message:** fixes bug with wrapping in the middle of a unicode sequence ([05365ddc](https://github.com/kbknapp/clap-rs/commit/05365ddcc252e4b49e7a75e199d6001a430bd84d), closes [#456](https://github.com/kbknapp/clap-rs/issues/456))
* **Usage Strings:** fixes small bug where -- would appear needlessly in usage strings ([6933b849](https://github.com/kbknapp/clap-rs/commit/6933b8491c2a7e28cdb61b47dcf10caf33c2f78a), closes [#461](https://github.com/kbknapp/clap-rs/issues/461))

### 2.2.1 (2016-03-16)

#### Features
* **Help Message:** wraps and aligns the help message of subcommands ([813d75d0](https://github.com/kbknapp/clap-rs/commit/813d75d06fbf077c65762608c0fa5e941cfc393c), closes [#452](https://github.com/kbknapp/clap-rs/issues/452))

#### Bug Fixes
* **Help Message:** fixes a bug where small terminal sizes caused a loop ([1d73b035](https://github.com/kbknapp/clap-rs/commit/1d73b0355236923aeaf6799abc759762ded7e1d0), closes [#453](https://github.com/kbknapp/clap-rs/issues/453))
## v2.2.0 (2016-03-15)

#### Features
* **Help Message:** can auto-wrap and align help text to term width ([e36af026](https://github.com/kbknapp/clap-rs/commit/e36af0266635f23e85e951b9088d561e9a5d1bf6), closes [#428](https://github.com/kbknapp/clap-rs/issues/428))
* **Help Subcommand:** adds support for passing additional subcommands to the help subcommand ([2c12757b](https://github.com/kbknapp/clap-rs/commit/2c12757bbdf34ce481f3446c074e24c09c2e60fd), closes [#416](https://github.com/kbknapp/clap-rs/issues/416))
* **Opts and Flags:** adds support for custom ordering in help messages ([9803b51e](https://github.com/kbknapp/clap-rs/commit/9803b51e799904c0befaac457418ee766ccc1ab9))
* **Settings:** adds support for automatically deriving custom display order of args ([ad86e433](https://github.com/kbknapp/clap-rs/commit/ad86e43334c4f70e86909689a088fb87e26ff95a), closes [#444](https://github.com/kbknapp/clap-rs/issues/444))
* **Subcommands:** adds support for custom ordering in help messages ([7d2a2ed4](https://github.com/kbknapp/clap-rs/commit/7d2a2ed413f5517d45988eef0765cdcd663b6372), closes [#442](https://github.com/kbknapp/clap-rs/issues/442))

#### Bug Fixes
* **From Usage:** fixes a bug where added empty lines weren't ignored ([c5c58c86](https://github.com/kbknapp/clap-rs/commit/c5c58c86b9c503d8de19da356a5a5cffb59fbe84))

#### Documentation
* **Groups:** explains required ArgGroups better ([4ff0205b](https://github.com/kbknapp/clap-rs/commit/4ff0205b85a45151b59bbaf090a89df13438380f), closes [#439](https://github.com/kbknapp/clap-rs/issues/439))

### v2.1.2 (2016-02-24)

#### Bug Fixes
* **Nightly:** fixes failing nightly build ([d752c170](https://github.com/kbknapp/clap-rs/commit/d752c17029598b19037710f204b7943f0830ae75), closes [#434](https://github.com/kbknapp/clap-rs/issues/434))

### v2.1.1 (2016-02-19)

#### Documentation
* **AppSettings:** clarifies that AppSettings do not propagate ([3c8db0e9](https://github.com/kbknapp/clap-rs/commit/3c8db0e9be1d24edaad364359513cbb02abb4186), closes [#429](https://github.com/kbknapp/clap-rs/issues/429))
* **Arg Examples:** adds better examples ([1e79cccc](https://github.com/kbknapp/clap-rs/commit/1e79cccc12937bc0e7cd2aad8e404410798e9fff))

#### Improvements
* **Help:** adds setting for next line help by arg ([066df748](https://github.com/kbknapp/clap-rs/commit/066df7486e684cf50a8479a356a12ba972c34ce1), closes [#427](https://github.com/kbknapp/clap-rs/issues/427))

## v2.1.0 (2016-02-10)

#### Features
* **Default Values:** adds support for default values in args ([73211952](https://github.com/kbknapp/clap-rs/commit/73211952964a79d97b434dd567e6d7d34be7feb5), closes [#418](https://github.com/kbknapp/clap-rs/issues/418))

#### Documentation
* **Default Values:** adds better examples and notes for default values ([9facd74f](https://github.com/kbknapp/clap-rs/commit/9facd74f843ef3807c5d35259558a344e6c25905))

### v2.0.6 (2016-02-09)

#### Improvements
* **Positional Arguments:** now displays value name if appropriate ([f0a99916](https://github.com/kbknapp/clap-rs/commit/f0a99916c59ce675515c6dcdfe9a40b130510908), closes [#420](https://github.com/kbknapp/clap-rs/issues/420))

### v2.0.5 (2016-02-05)

#### Bug Fixes
* **Multiple Values:** fixes bug where number_of_values wasn't respected ([72c387da](https://github.com/kbknapp/clap-rs/commit/72c387da0bb8a6f526f863770f08bb8ca0d3de03))

### v2.0.4 (2016-02-04)

#### Bug Fixes
* adds support for building ArgGroups from standalone YAML ([fcbc7e12](https://github.com/kbknapp/clap-rs/commit/fcbc7e12f5d7b023b8f30cba8cad28a01cf6cd26))
* Stop lonely hyphens from causing panic ([85b11468](https://github.com/kbknapp/clap-rs/commit/85b11468b0189d5cc15f1cfac5db40d17a0077dc), closes [#410](https://github.com/kbknapp/clap-rs/issues/410))
* **AppSettings:** fixes bug where subcommands didn't receive the parent version ([a62e4527](https://github.com/kbknapp/clap-rs/commit/a62e452754b3b0e3ac9a15aa8b5330636229ead1))

### v2.0.3 (2016-02-02)

#### Improvements
* **values:** adds support for up to u64::max values per arg ([c7abf7d7](https://github.com/kbknapp/clap-rs/commit/c7abf7d7611e317b0d31d97632e3d2e13570947c))
* **occurrences:** Allow for more than 256 occurrences of an argument. ([3731ddb3](https://github.com/kbknapp/clap-rs/commit/3731ddb361163f3d6b86844362871e48c80fa530))

#### Features
* **AppSettings:** adds HidePossibleValuesInHelp to skip writing those values ([cdee7a0e](https://github.com/kbknapp/clap-rs/commit/cdee7a0eb2beeec723cb98acfacf03bf629c1da3))

#### Bug Fixes
* **value_t_or_exit:** fixes typo which caused value_t_or_exit to return a Result ([ee96baff](https://github.com/kbknapp/clap-rs/commit/ee96baffd306cb8d20ddc5575cf739bb1a6354e8))

### v2.0.2 (2016-01-31)

#### Improvements
* **arg_enum:** enum declared with arg_enum returns [&'static str; #] instead of Vec ([9c4b8a1a](https://github.com/kbknapp/clap-rs/commit/9c4b8a1a6b12949222f17d1074578ad7676b9c0d))

#### Bug Fixes
* clap_app! should be gated by unstable, not nightly feature ([0c8b84af](https://github.com/kbknapp/clap-rs/commit/0c8b84af6161d5baf683688eafc00874846f83fa))
* **SubCommands:** fixed a bug where subcommands weren't recognized after multiple args ([c19c17a8](https://github.com/kbknapp/clap-rs/commit/c19c17a8850602990e24347aeb4427cf43316223), closes [#405](https://github.com/kbknapp/clap-rs/issues/405))
* **Usage Parser:** fixes a bug where literal single quotes weren't allowed in help strings ([0bcc7120](https://github.com/kbknapp/clap-rs/commit/0bcc71206478074769e311479b34a9f74fe80f5c), closes [#406](https://github.com/kbknapp/clap-rs/issues/406))

### v2.0.1 (2016-01-30)

#### Bug Fixes
* fixes cargo features to NOT require nightly with unstable features ([dcbcc60c](https://github.com/kbknapp/clap-rs/commit/dcbcc60c9ba17894be636472ea4b07a82d86a9db), closes [#402](https://github.com/kbknapp/clap-rs/issues/402))

## v2.0.0 (2016-01-28)

#### Improvements
* **From Usage:** vastly improves the usage parser ([fa3a2f86](https://github.com/kbknapp/clap-rs/commit/fa3a2f86bd674c5eb07128c95098fab7d1437247), closes [#350](https://github.com/kbknapp/clap-rs/issues/350))

#### Features
* adds support for external subcommands ([177fe5cc](https://github.com/kbknapp/clap-rs/commit/177fe5cce745c2164a8e38c23be4c4460d2d7211), closes [#372](https://github.com/kbknapp/clap-rs/issues/372))
* adds support for values with a leading hyphen ([e4d429b9](https://github.com/kbknapp/clap-rs/commit/e4d429b9d52e95197bd0b572d59efacecf305a59), closes [#385](https://github.com/kbknapp/clap-rs/issues/385))
* adds support for turning off the value delimiter ([508db850](https://github.com/kbknapp/clap-rs/commit/508db850a87c2e251cf6b6ddead9ad56b29f9e57), closes [#352](https://github.com/kbknapp/clap-rs/issues/352))
* adds support for changing the value delimiter ([dafeae8a](https://github.com/kbknapp/clap-rs/commit/dafeae8a526162640f6a68da434370c64d190889), closes [#353](https://github.com/kbknapp/clap-rs/issues/353))
* adds support for comma-separated values ([e69da6af](https://github.com/kbknapp/clap-rs/commit/e69da6afcd2fe48a3c458ca031db40997f860eda), closes [#348](https://github.com/kbknapp/clap-rs/issues/348))
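The `arg_enum!` change noted in v2.0.2 (having `variants()` return a fixed-size `[&'static str; #]`) is easiest to see in use. A small sketch with an invented `Mode` enum and `demo` binary, assuming clap 2.x:

```rust
#[macro_use]
extern crate clap;
use clap::{App, Arg};

arg_enum! {
    #[derive(Debug)]
    enum Mode { Fast, Slow }
}

fn main() {
    let m = App::new("demo")
        .arg(Arg::with_name("mode")
            .long("mode")
            .takes_value(true)
            // variants() yields the enum's names as a fixed-size array of &str.
            .possible_values(&Mode::variants()))
        .get_matches();
    // value_t! parses the string back into the enum, yielding a Result.
    let mode = value_t!(m, "mode", Mode).unwrap_or(Mode::Fast);
    println!("{:?}", mode);
}
```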
* adds support for options with optional values ([4555736c](https://github.com/kbknapp/clap-rs/commit/4555736cad01441dcde4ea84a285227e0844c16e), closes [#367](https://github.com/kbknapp/clap-rs/issues/367))
* **UTF-8:** adds support for invalid UTF-8 in values ([c5c59dec](https://github.com/kbknapp/clap-rs/commit/c5c59dec0bc33b86b2e99d30741336f17ec84282), closes [#269](https://github.com/kbknapp/clap-rs/issues/269))
* **v2:** implementing the base of 2.x ([a3536054](https://github.com/kbknapp/clap-rs/commit/a3536054512ba833533dc56615ce3663d884381c))

#### Bug Fixes
* fixes nightly build with new lints ([17599195](https://github.com/kbknapp/clap-rs/commit/175991956c37dc83ba9c49396e927a1cb65c5b11))
* fixes Windows build for 2x release ([674c9b48](https://github.com/kbknapp/clap-rs/commit/674c9b48c7c92079cb180cc650a9e39f34781c32), closes [#392](https://github.com/kbknapp/clap-rs/issues/392))
* fixes yaml build for 2x base ([adceae64](https://github.com/kbknapp/clap-rs/commit/adceae64c8556d00ab715677377b216f9f468ad7))

#### Documentation
* updates examples for 2x release ([1303b360](https://github.com/kbknapp/clap-rs/commit/1303b3607468f362ab1b452d5614c1a064dc69b4), closes [#394](https://github.com/kbknapp/clap-rs/issues/394))
* updates examples for 2x release ([0a011f31](https://github.com/kbknapp/clap-rs/commit/0a011f3142aec338d388a6c8bfe22fa7036021bb), closes [#394](https://github.com/kbknapp/clap-rs/issues/394))
* updates documentation for v2 release ([8d51724e](https://github.com/kbknapp/clap-rs/commit/8d51724ef73dfde5bb94fb9466bc5463a1cc1502))
* updating docs for 2x release ([576d0e0e](https://github.com/kbknapp/clap-rs/commit/576d0e0e2c7b8f386589179bbf7419b93abacf1c))
* **README.md:**
  * updates readme for v2 release ([acaba01a](https://github.com/kbknapp/clap-rs/commit/acaba01a353c12144b9cd9a3ce447400691849b0), closes [#393](https://github.com/kbknapp/clap-rs/issues/393))
  * fix typo and make documentation conspicuous ([07b9f614](https://github.com/kbknapp/clap-rs/commit/07b9f61495d927f69f7abe6c0d85253f0f4e6107))

#### BREAKING CHANGES
* **Fewer lifetimes! Yay!**
  * `App<'a, 'b, 'c, 'd, 'e, 'f>` => `App<'a, 'b>`
  * `Arg<'a, 'b, 'c, 'd, 'e, 'f>` => `Arg<'a, 'b>`
  * `ArgMatches<'a, 'b>` => `ArgMatches<'a>`
* **Simply Renamed**
  * `App::arg_group` => `App::group`
  * `App::arg_groups` => `App::groups`
  * `ArgGroup::add` => `ArgGroup::arg`
  * `ArgGroup::add_all` => `ArgGroup::args`
  * `ClapError` => `Error`
  * struct field `ClapError::error_type` => `Error::kind`
  * `ClapResult` => `Result`
  * `ClapErrorType` => `ErrorKind`
* **Removed Deprecated Functions and Methods**
  * `App::subcommands_negate_reqs`
  * `App::subcommand_required`
  * `App::arg_required_else_help`
  * `App::global_version(bool)`
  * `App::versionless_subcommands`
  * `App::unified_help_messages`
  * `App::wait_on_error`
  * `App::subcommand_required_else_help`
  * `SubCommand::new`
  * `App::error_on_no_subcommand`
  * `Arg::new`
  * `Arg::mutually_excludes`
  * `Arg::mutually_excludes_all`
  * `Arg::mutually_overrides_with`
  * `simple_enum!`
* **Renamed Error Variants**
  * `InvalidUnicode` => `InvalidUtf8`
  * `InvalidArgument` => `UnknownArgument`
* **Usage Parser**
  * Value names can now be specified inline, i.e. `-o, --option 'some option which takes two files'`
  * **There is now a priority of order to determine the name** - This is perhaps the biggest breaking change. See the documentation for full details. Prior to this change, the value name took precedence. **Ensure your args are using the proper names (i.e. typically the long or short and NOT the value name) throughout the code**
* `ArgMatches::values_of` returns an `Values` now which implements `Iterator` (should not break any code)
* `crate_version!` returns `&'static str` instead of `String`
* Using the `clap_app!` macro requires compiling with the `unstable` feature because the syntax could change slightly in the future

### v1.5.5 (2016-01-04)

#### Bug Fixes
* fixes an issue where invalid short args didn't cause an error ([c9bf7e44](https://github.com/kbknapp/clap-rs/commit/c9bf7e4440bd2f9b524ea955311d433c40a7d1e0))
* prints the name in version and help instead of the binary name ([8f3817f6](https://github.com/kbknapp/clap-rs/commit/8f3817f665c0cab6726bc16c56a53b6a61e44448), closes [#368](https://github.com/kbknapp/clap-rs/issues/368))
* fixes an intentional panic issue discovered via clippy ([ea83a3d4](https://github.com/kbknapp/clap-rs/commit/ea83a3d421ea8856d4cac763942834d108b71406))

### v1.5.4 (2015-12-18)

#### Examples
* **17_yaml:** conditionally compile 17_yaml example ([575de089](https://github.com/kbknapp/clap-rs/commit/575de089a3e240c398cb10e6cf5a5c6b68662c01))

#### Improvements
* clippy improvements ([99cdebc2](https://github.com/kbknapp/clap-rs/commit/99cdebc23da3a45a165f14b27bebeb2ed828a2ce))

#### Bug Fixes
* **errors:** return correct error type in WrongNumValues error builder ([5ba8ba9d](https://github.com/kbknapp/clap-rs/commit/5ba8ba9dcccdfa74dd1c44260e64b359bbb36be6))
* ArgRequiredElseHelp setting now takes precedence over missing required args ([faad83fb](https://github.com/kbknapp/clap-rs/commit/faad83fbef6752f3093b6e98fca09a9449b830f4), closes [#362](https://github.com/kbknapp/clap-rs/issues/362))

### v1.5.3 (2015-11-20)

#### Bug Fixes
* **Errors:** fixes some instances where errors are missing a final newline ([c4d2b171](https://github.com/kbknapp/clap-rs/commit/c4d2b1711994479ad64ee52b6b49d2ceccbf2118))

### v1.5.2 (2015-11-14)

#### Bug Fixes
* **Errors:** fixes a compiling bug when built on Windows or without the color feature ([a35f7634](https://github.com/kbknapp/clap-rs/commit/a35f76346fe6ecc88dda6a1eb13627186e7ce185))

### v1.5.1 (2015-11-13)

#### Bug Fixes
* **Required Args:** fixes a bug where required args are not correctly accounted for ([f03b88a9](https://github.com/kbknapp/clap-rs/commit/f03b88a9766b331a63879bcd747687f2e5a2661b), closes [#343](https://github.com/kbknapp/clap-rs/issues/343))

## v1.5.0 (2015-11-13)

#### Bug Fixes
* fixes a bug with required positional args in usage strings ([c6858f78](https://github.com/kbknapp/clap-rs/commit/c6858f78755f8e860204323c828c8355a066dc83))

#### Documentation
* **FAQ:** updates readme with slight changes to FAQ ([a4ef0fab](https://github.com/kbknapp/clap-rs/commit/a4ef0fab73c8dc68f1b138965d1340459c113398))

#### Improvements
* massive errors overhaul ([cdc29175](https://github.com/kbknapp/clap-rs/commit/cdc29175bc9c53e5b4aec86cbc04c1743154dae6))
* **ArgMatcher:** huge refactor and deduplication of code ([8988853f](https://github.com/kbknapp/clap-rs/commit/8988853fb8825e8f841fde349834cc12cdbad081))
* **Errors:** errors have been vastly improved ([e59bc0c1](https://github.com/kbknapp/clap-rs/commit/e59bc0c16046db156a88ba71a037db05028e995c))
* **Traits:** refactoring some configuration into traits ([5800cdec](https://github.com/kbknapp/clap-rs/commit/5800cdec6dce3def4242b9f7bd136308afb19685))

#### Performance
* **App:**
  * more BTreeMap->Vec, Opts and SubCmds ([bc4495b3](https://github.com/kbknapp/clap-rs/commit/bc4495b32ec752b6c4b29719e831c043ef2a26ce))
  * changes flags BTreeMap->Vec ([d357640f](https://github.com/kbknapp/clap-rs/commit/d357640fab55e5964fe83efc3c771e53aa3222fd))
  * removed unneeded BTreeMap ([78971fd6](https://github.com/kbknapp/clap-rs/commit/78971fd68d7dc5c8e6811b4520cdc54e4188f733))
  * changes BTreeMap to VecMap in some instances ([64b921d0](https://github.com/kbknapp/clap-rs/commit/64b921d087fdd03775c95ba0bcf65d3f5d36f812))
  * removed excess clones ([ec0089d4](https://github.com/kbknapp/clap-rs/commit/ec0089d42ed715d293fb668d3a90b0db0aa3ec39))

### v1.4.7 (2015-11-03)

#### Documentation
* Clarify behavior of Arg::multiple with options. ([434f497a](https://github.com/kbknapp/clap-rs/commit/434f497ab6d831f8145cf09278c97ca6ee6c6fe7))
* Fix typos and improve grammar. ([c1f66b5d](https://github.com/kbknapp/clap-rs/commit/c1f66b5de7b5269fbf8760a005ef8c645edd3229))

#### Bug Fixes
* **Error Status:** fixes bug where --help and --version returned a non-zero exit code ([89b51fdf](https://github.com/kbknapp/clap-rs/commit/89b51fdf8b1ab67607567344e2317ff1a757cb12))

### v1.4.6 (2015-10-29)

#### Features
* allows parsing without a binary name for daemons and interactive CLIs ([aff89d57](https://github.com/kbknapp/clap-rs/commit/aff89d579b5b85c3dc81b64f16d5865299ec39a2), closes [#318](https://github.com/kbknapp/clap-rs/issues/318))

#### Bug Fixes
* **Errors:** tones down quoting in some error messages ([34ce59ed](https://github.com/kbknapp/clap-rs/commit/34ce59ede53bfa2eef722c74881cdba7419fd9c7), closes [#309](https://github.com/kbknapp/clap-rs/issues/309))
* **Help and Version:** only builds help and version once ([e3be87cf](https://github.com/kbknapp/clap-rs/commit/e3be87cfc095fc41c9811adcdc6d2b079f237d5e))
* **Option Args:** fixes bug with args and multiple values ([c9a9548a](https://github.com/kbknapp/clap-rs/commit/c9a9548a8f96cef8a3dd9a980948325fbbc1b91b), closes [#323](https://github.com/kbknapp/clap-rs/issues/323))
* **POSIX Overrides:** fixes bug where required args are overridden ([40ed2b50](https://github.com/kbknapp/clap-rs/commit/40ed2b50c3a9fe88bfdbaa43cef9fd6493ecaa8e))
* **Safe Matches:** using 'safe' forms of the get_matches family no longer exits the process ([c47025dc](https://github.com/kbknapp/clap-rs/commit/c47025dca2b3305dea0a0acfdd741b09af0c0d05), closes [#256](https://github.com/kbknapp/clap-rs/issues/256))
* **Versionless SubCommands:** fixes a bug where the -V flag was needlessly built ([27df8b9d](https://github.com/kbknapp/clap-rs/commit/27df8b9d98d13709dad3929a009f40ebff089a1a), closes [#329](https://github.com/kbknapp/clap-rs/issues/329))

#### Documentation
* adds a comparison to the readme ([1a8bf31e](https://github.com/kbknapp/clap-rs/commit/1a8bf31e7a6b87ce48a66af2cde1645b2dd5bc95), closes [#325](https://github.com/kbknapp/clap-rs/issues/325))

### v1.4.5 (2015-10-06)

#### Bug Fixes
* fixes crash on invalid arg error ([c78ce128](https://github.com/kbknapp/clap-rs/commit/c78ce128ebbe7b8f730815f8176c29d76f4ade8c))

### v1.4.4 (2015-10-06)

#### Documentation
* clean up some formatting ([b7df92d7](https://github.com/kbknapp/clap-rs/commit/b7df92d7ea25835701dd22ddff984b9749f48a00))
* move the crate-level docs to top of the lib.rs file ([d7233bf1](https://github.com/kbknapp/clap-rs/commit/d7233bf122dbf80ba8fc79e5641be2df8af10e7a))
* changes doc comments to rustdoc comments ([34b601be](https://github.com/kbknapp/clap-rs/commit/34b601be5fdde76c1a0859385b359b96d66b8732))
* fixes panic in 14_groups example ([945b00a0](https://github.com/kbknapp/clap-rs/commit/945b00a0c27714b63bdca48d003fe205fcfdc578), closes [#295](https://github.com/kbknapp/clap-rs/issues/295))
* avoid suggesting star dependencies. ([d33228f4](https://github.com/kbknapp/clap-rs/commit/d33228f40b5fefb84cf3dd51546bfb340dcd9f5a))
* **Rustdoc:** adds portions of the readme to main rustdoc page ([6f9ee181](https://github.com/kbknapp/clap-rs/commit/6f9ee181e69d90bd4206290e59d6f3f1e8f0cbb2), closes [#293](https://github.com/kbknapp/clap-rs/issues/293))

#### Bug Fixes
* fixes a grammar error in some conflicting-option errors ([e73b07e1](https://github.com/kbknapp/clap-rs/commit/e73b07e19474323ad2260da66abbf6a6d4ecbd4f))
* **Unified Help:** sorts both flags and options as a unified category ([2a223dad](https://github.com/kbknapp/clap-rs/commit/2a223dad82901fa2e74baad3bfc4c7b94509300f))
* **Usage:** fixes a bug where required args aren't filtered properly ([72b453dc](https://github.com/kbknapp/clap-rs/commit/72b453dc170af3050bb123d35364f6da77fc06d7), closes [#277](https://github.com/kbknapp/clap-rs/issues/277))
* **Usage Strings:** fixes a bug in the ordering of elements in usage strings ([aaf0d6fe](https://github.com/kbknapp/clap-rs/commit/aaf0d6fe7aa2403e76096c16204d254a9ee61ee2), closes [#298](https://github.com/kbknapp/clap-rs/issues/298))

#### Features
* supports -aValue style options ([0e3733e4](https://github.com/kbknapp/clap-rs/commit/0e3733e4fec2015c2d566a51432dcd92cb69cad3))
* **Trailing VarArg:** adds opt-in setting for final arg being vararg ([27018b18](https://github.com/kbknapp/clap-rs/commit/27018b1821a4bcd5235cfe92abe71b3c99efc24d), closes [#278](https://github.com/kbknapp/clap-rs/issues/278))

### v1.4.3 (2015-09-30)

#### Features
* allows accessing arg values by group name ([c92a4b9e](https://github.com/kbknapp/clap-rs/commit/c92a4b9eff2d679957f61c0c41ff404b40d38a91))

#### Documentation
* use links to examples instead of plain text ([bb4fe237](https://github.com/kbknapp/clap-rs/commit/bb4fe237858535627271465147add537e4556b43))

#### Bug Fixes
* **Help Message:** required args no longer double-list in usage ([1412e639](https://github.com/kbknapp/clap-rs/commit/1412e639e0a79df84936d1101a837f90077d1c83), closes [#277](https://github.com/kbknapp/clap-rs/issues/277))
* **Possible Values:** possible value validation is restored ([f121ae74](https://github.com/kbknapp/clap-rs/commit/f121ae749f8f4bfe754ef2e8a6dfc286504b5b75), closes [#287](https://github.com/kbknapp/clap-rs/issues/287))

### v1.4.2 (2015-09-23)

#### Bug Fixes
* **Conflicts:** fixes bug with conflicts not removing required args ([e17fcec5](https://github.com/kbknapp/clap-rs/commit/e17fcec53b3216ad047a13dddc6f740473fad1a1), closes [#271](https://github.com/kbknapp/clap-rs/issues/271))

### v1.4.1 (2015-09-22)

#### Examples
* add clap_app quick example ([4ba6249c](https://github.com/kbknapp/clap-rs/commit/4ba6249c3cf4d2e083370d1fe4dcc7025282c28a))

#### Features
* **Unicode:** allows non-panicking on invalid unicode characters ([c5bf7ddc](https://github.com/kbknapp/clap-rs/commit/c5bf7ddc8cfb876ec928a5aaf5591232bbb32e5d))

#### Documentation
* properly names Examples section for rustdoc ([87ba5445](https://github.com/kbknapp/clap-rs/commit/87ba54451d7ec7b1c9b9ef134f90bbe39e6fac69))
* fixes various typos and spelling ([f85640f9](https://github.com/kbknapp/clap-rs/commit/f85640f9f6d8fd3821a40e9b8b7a34fabb789d02))
* **Arg:** unhides fields of the Arg struct ([931aea88](https://github.com/kbknapp/clap-rs/commit/931aea88427edf43a3da90d5a500c1ff2b2c3614))

#### Bug Fixes
* flush the buffer in App::print_version() ([cbc42a37](https://github.com/kbknapp/clap-rs/commit/cbc42a37d212d84d22b1777d08e584ff191934e7))
* Macro benchmarks ([13712da1](https://github.com/kbknapp/clap-rs/commit/13712da1d36dc7614eec3a10ad488257ba615751))

## v1.4.0 (2015-09-09)

#### Features
* allows printing help message by library consumers ([56b95f32](https://github.com/kbknapp/clap-rs/commit/56b95f320875c62dda82cb91b29059671e120ed1))
* allows defining hidden args and subcmds ([2cab4d03](https://github.com/kbknapp/clap-rs/commit/2cab4d0334ea3c2439a1d4bfca5bf9905c7ea9ac), closes [#231](https://github.com/kbknapp/clap-rs/issues/231))
* Builder macro to assist with App/Arg/Group/SubCommand building ([443841b0](https://github.com/kbknapp/clap-rs/commit/443841b012a8d795cd5c2bd69ae6e23ef9b16477))
* **Errors:** allows consumers to write to stderr and exit on error ([1e6403b6](https://github.com/kbknapp/clap-rs/commit/1e6403b6a863574fa3cb6946b1fb58f034e8664c))

### v1.3.2 (2015-09-08)

#### Documentation
* fixed ErrorKind docs ([dd057843](https://github.com/kbknapp/clap-rs/commit/dd05784327fa070eb6ce5ce89a8507e011d8db94))
* **ErrorKind:** changed examples content ([b9ca2616](https://github.com/kbknapp/clap-rs/commit/b9ca261634b89613bbf3d98fd74d55cefbb31a8c))

#### Bug Fixes
* fixes a bug where the help subcommand wasn't overridable ([94003db4](https://github.com/kbknapp/clap-rs/commit/94003db4b5eebe552ca337521c1c001295822745))

#### Features
* adds ability to not consume self when parsing matches and/or exit on help ([94003db4](https://github.com/kbknapp/clap-rs/commit/94003db4b5eebe552ca337521c1c001295822745))
* **App:** Added ability for users to handle errors themselves ([934e6fbb](https://github.com/kbknapp/clap-rs/commit/934e6fbb643b2385efc23444fe6fce31494dc288))

### v1.3.1 (2015-09-04)

#### Examples
* **17_yaml:** fixed example ([9b848622](https://github.com/kbknapp/clap-rs/commit/9b848622296c8c5c7b9a39b93ddd41f51df790b5))

#### Performance
* changes ArgGroup HashSets to Vec ([3cb4a48e](https://github.com/kbknapp/clap-rs/commit/3cb4a48ebd15c20692f4f3a2a924284dc7fd5e10))
* changes BTreeSet to Vec in some instances ([baab2e3f](https://github.com/kbknapp/clap-rs/commit/baab2e3f4060e811abee14b1654cbcd5cf3b5fea))

## v1.3.0 (2015-09-01)

#### Features
* **YAML:** allows building a CLI from YAML files ([86cf4c45](https://github.com/kbknapp/clap-rs/commit/86cf4c45626a36b8115446952f9069f73c1debc3))
* **ArgGroups:** adds support for building ArgGroups from YAML ([ecf88665](https://github.com/kbknapp/clap-rs/commit/ecf88665cbff367018b29161a1b75d44a212707d))
* **Subcommands:** adds support for subcommands from YAML ([e415cf78](https://github.com/kbknapp/clap-rs/commit/e415cf78ba916052d118a8648deba2b9c16b1530))

#### Documentation
* **YAML:** adds examples for using YAML to build a CLI ([ab41d7f3](https://github.com/kbknapp/clap-rs/commit/ab41d7f38219544750e6e1426076dc498073191b))
* **Args from YAML:** fixes doc examples ([19b348a1](https://github.com/kbknapp/clap-rs/commit/19b348a10050404cd93888dbbbe4f396681b67d0))
* **Examples:** adds better usage examples instead of having unused variables ([8cbacd88](https://github.com/kbknapp/clap-rs/commit/8cbacd8883004fe71a8ea036ec4391c7dd8efe94))

#### Examples
* Add AppSettings example ([12705079](https://github.com/kbknapp/clap-rs/commit/12705079ca96a709b4dd94f7ddd20a833b26838c))

#### Bug Fixes
* **Unified Help Messages:** fixes a crash when this setting is used with no opts ([169ffec1](https://github.com/kbknapp/clap-rs/commit/169ffec1003d58d105d7ef2585b3425e57980000), closes [#210](https://github.com/kbknapp/clap-rs/issues/210))
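For the v1.3.0 YAML support, the usual pattern is a `load_yaml!`/`App::from_yaml` pair. A hedged sketch (the `cli.yml` file name and its contents are hypothetical, and clap's `yaml` cargo feature must be enabled):

```rust
#[macro_use]
extern crate clap;
use clap::App;

fn main() {
    // cli.yml (hypothetical) might contain:
    //   name: demo
    //   args:
    //       - input:
    //           help: the input file to use
    //           index: 1
    let yaml = load_yaml!("cli.yml");
    let matches = App::from_yaml(yaml).get_matches();
    if let Some(input) = matches.value_of("input") {
        println!("input: {}", input);
    }
}
```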
### v1.2.5 (2015-08-27)

#### Examples
* add custom validator example ([b9997d1f](https://github.com/kbknapp/clap-rs/commit/b9997d1fca74d4d8f93971f2a01bdf9798f913d5))
* fix indentation ([d4f1b740](https://github.com/kbknapp/clap-rs/commit/d4f1b740ede410fd2528b9ecd89592c2fd8b1e20))

#### Features
* **Args:** allows opts and args to define a name for help and usage messages ([ad962ec4](https://github.com/kbknapp/clap-rs/commit/ad962ec478da999c7dba0afdb84c266f4d09b1bd))

### v1.2.4 (2015-08-26)

#### Bug Fixes
* **Possible Values:** fixes a bug where suggestions aren't made when using --long=value format ([3d5e9a6c](https://github.com/kbknapp/clap-rs/commit/3d5e9a6cedb26668839b481c9978e2fbbab8be6f), closes [#192](https://github.com/kbknapp/clap-rs/issues/192))

### v1.2.3 (2015-08-24)

#### Bug Fixes
* **App, Args:** fixed subcommand requirements negation ([b41afa8c](https://github.com/kbknapp/clap-rs/commit/b41afa8c3ded3d1be12f7a2f8ea06cc44afc9458), closes [#188](https://github.com/kbknapp/clap-rs/issues/188))

### v1.2.2 (2015-08-23)

#### Bug Fixes
* fixed a confusing error message and added a test for it ([fc7a31a7](https://github.com/kbknapp/clap-rs/commit/fc7a31a745efbf1768ee2c62cd3bb72bfe30c708))
* **App:** fixed requirements overriding ([9c135eb7](https://github.com/kbknapp/clap-rs/commit/9c135eb790fa16183e5bdb2009ddc3cf9e25f99f))

### v1.2.1 (2015-08-20)

#### Documentation
* **README.md:** updates for new features ([16cf9245](https://github.com/kbknapp/clap-rs/commit/16cf9245fb5fc4cf6face898e358368bf9961cbb))

#### Features
* implements POSIX-compatible conflicts for long args ([8c2d48ac](https://github.com/kbknapp/clap-rs/commit/8c2d48acf5473feebd721a9049a9c9b7051e70f9))
* added overrides to support conflicts in a POSIX-compatible manner ([0b916a00](https://github.com/kbknapp/clap-rs/commit/0b916a00de26f6941538f6bc5f3365fa302083c1))
* **Args:** allows defining POSIX-compatible argument conflicts ([d715646e](https://github.com/kbknapp/clap-rs/commit/d715646e69759ccd95e01f49b04f489827ecf502))

#### Bug Fixes
* fixed links in cargo and license buttons ([6d9837ad](https://github.com/kbknapp/clap-rs/commit/6d9837ad9a9e006117cd7372fdc60f9a3889c7e2))

#### Performance
* **Args and Apps:** changes HashSet->Vec in some instances for increased performance ([d0c3b379](https://github.com/kbknapp/clap-rs/commit/d0c3b379700757e0a9b0c40af709f8af1f5b4949))

### v1.2.0 (2015-08-15)

#### Bug Fixes
* fixed a misspelling and an enum name ([7df170d7](https://github.com/kbknapp/clap-rs/commit/7df170d7f4ecff06608317655d1e0c4298f62076))
* fixed use for clap crate ([dc3ada73](https://github.com/kbknapp/clap-rs/commit/dc3ada738667d4b689678f79d14251ee82004ece))

#### Documentation
* updates docs for new features ([03496547](https://github.com/kbknapp/clap-rs/commit/034965471782d872ca495045b58d34b31807c5b1))
* fixed docs for previous changes ([ade36778](https://github.com/kbknapp/clap-rs/commit/ade367780c366425de462506d256e0f554ed3b9c))

#### Improvements
* **AppSettings:** adds ability to add multiple settings at once ([4a00e251](https://github.com/kbknapp/clap-rs/commit/4a00e2510d0ca8d095d5257d51691ba3b61c1374))

#### Features
* Replace application-level settings with enum variants ([618dc4e2](https://github.com/kbknapp/clap-rs/commit/618dc4e2c205bf26bc43146164e65eb1f6b920ed))
* **Args:** allows for custom argument value validations to be defined ([84ae2ddb](https://github.com/kbknapp/clap-rs/commit/84ae2ddbceda34b5cbda98a6959edaa52fde2e1a), closes [#170](https://github.com/kbknapp/clap-rs/issues/170))
[#170](https://github.com/kbknapp/clap-rs/issues/170)) ### v1.1.6 (2015-08-01) #### Bug Fixes * fixes two bugs in App when printing newlines in help and subcommands required error ([d63c0136](https://github.com/kbknapp/clap-rs/commit/d63c0136310db9dd2b1c7b4745938311601d8938)) ### v1.1.5 (2015-07-29) #### Performance * removes some unneeded allocations ([93e915df](https://github.com/kbknapp/clap-rs/commit/93e915dfe300f7b7d6209ca93323c6a46f89a8c1)) ### v1.1.4 (2015-07-20) #### Improvements * **Usage Strings** displays a [--] when it may be helpful ([86c3be85](https://github.com/kbknapp/clap-rs/commit/86c3be85fb6f77f83b5a6d2df40ae60937486984)) #### Bug Fixes * **Macros** fixes a typo in a macro generated error message ([c9195c5f](https://github.com/kbknapp/clap-rs/commit/c9195c5f92abb8cd6a37b4f4fbb2f1fee2a8e368)) * **Type Errors** fixes formatting of error output when failed type parsing ([fe5d95c6](https://github.com/kbknapp/clap-rs/commit/fe5d95c64f3296e6eddcbec0cb8b86659800145f)) ### v1.1.3 (2015-07-18) #### Documentation * updates README.md to include lack of color support on Windows ([52f81e17](https://github.com/kbknapp/clap-rs/commit/52f81e17377b18d2bd0f34693b642b7f358998ee)) #### Bug Fixes * fixes formatting bug which prevented compiling on windows ([9cb5dceb](https://github.com/kbknapp/clap-rs/commit/9cb5dceb3e5fe5e0e7b24619ff77e5040672b723), closes [#163](https://github.com/kbknapp/clap-rs/issues/163)) ### v1.1.2 (2015-07-17) #### Bug Fixes * fixes a bug when parsing multiple {n} newlines inside help strings ([6d214b54](https://github.com/kbknapp/clap-rs/commit/6d214b549a9b7e189a94e5fa2b7c92cc333ca637)) ## v1.1.1 (2015-07-17) #### Bug Fixes * fixes a logic bug and allows setting Arg::number_of_values() < 2 ([42b6d1fc](https://github.com/kbknapp/clap-rs/commit/42b6d1fc3c519c92dfb3af15276e7d3b635e6cfe), closes [#161](https://github.com/kbknapp/clap-rs/issues/161)) ## v1.1.0 (2015-07-16) #### Features * allows creating unified help messages, a la docopt or getopts ([52bcd892](https://github.com/kbknapp/clap-rs/commit/52bcd892ea51564ce463bc5865acd64f8fe91cb1), closes [#158](https://github.com/kbknapp/clap-rs/issues/158)) * allows stating all subcommands should *not* have --version flags ([336c476f](https://github.com/kbknapp/clap-rs/commit/336c476f631d512b54ac56fdca6f29ebdc2c00c5), closes [#156](https://github.com/kbknapp/clap-rs/issues/156)) * allows setting version number to auto-propagate through subcommands ([bc66d3c6](https://github.com/kbknapp/clap-rs/commit/bc66d3c6deedeca62463fff95369ab1cfcdd366b), closes [#157](https://github.com/kbknapp/clap-rs/issues/157)) #### Improvements * **Help Strings** properly aligns and handles newlines in long help strings ([f9800a29](https://github.com/kbknapp/clap-rs/commit/f9800a29696dd2cc0b0284bf693b3011831e556f), closes [#145](https://github.com/kbknapp/clap-rs/issues/145)) #### Performance * **Help Messages** big performance improvements when printing help messages ([52bcd892](https://github.com/kbknapp/clap-rs/commit/52bcd892ea51564ce463bc5865acd64f8fe91cb1)) #### Documentation * updates readme with new features ([8232f7bb](https://github.com/kbknapp/clap-rs/commit/8232f7bb52e88862bc13c3d4f99ee4f56cfe4bc0)) * fix incorrect code example for `App::subcommand_required` ([8889689d](https://github.com/kbknapp/clap-rs/commit/8889689dc6336ccc45b2c9f2cf8e2e483a639e93)) ### v1.0.3 (2015-07-11) #### Improvements * **Errors** writes errors to stderr ([cc76ab8c](https://github.com/kbknapp/clap-rs/commit/cc76ab8c2b77c67b42f4717ded530df7806142cf), closes 
[#154](https://github.com/kbknapp/clap-rs/issues/154)) #### Documentation * **README.md** updates example help message to new format ([0aca29bd](https://github.com/kbknapp/clap-rs/commit/0aca29bd5d6d1a4e9971bdc88d946ffa58606efa)) ### v1.0.2 (2015-07-09) #### Improvements * **Usage** re-orders optional and required arguments to the natural standard ([dc7e1fce](https://github.com/kbknapp/clap-rs/commit/dc7e1fcea5c85d317018fb201d2a9262249131b4), closes [#147](https://github.com/kbknapp/clap-rs/issues/147)) ### v1.0.1 (2015-07-08) #### Bug Fixes * allows empty values when using --long='' syntax ([083f82d3](https://github.com/kbknapp/clap-rs/commit/083f82d333b69720a6ef30074875310921d964d1), closes [#151](https://github.com/kbknapp/clap-rs/issues/151)) ## v1.0.0 (2015-07-08) #### Documentation * **README.md** adds new features to what's new list ([938f7f01](https://github.com/kbknapp/clap-rs/commit/938f7f01340f521969376cf4e2e3d9436bca21f7)) * **README.md** use with_name for subcommands ([28b7e316](https://github.com/kbknapp/clap-rs/commit/28b7e3161fb772e5309042648fe8c3a420645bac)) #### Features * args can now be parsed from arbitrary locations, not just std::env::args() ([75312528](https://github.com/kbknapp/clap-rs/commit/753125282b1b9bfff875f1557ce27610edcc59e1)) ## v1.0.0-beta (2015-06-30) #### Features * allows waiting for user input on error ([d0da3bdd](https://github.com/kbknapp/clap-rs/commit/d0da3bdd9d1871541907ea9c645322a74d260e07), closes [#140](https://github.com/kbknapp/clap-rs/issues/140)) * **Help** allows one to fully override the auto-generated help message ([26d5ae3e](https://github.com/kbknapp/clap-rs/commit/26d5ae3e330d1e150811d5b60b2b01a8f8df854e), closes [#141](https://github.com/kbknapp/clap-rs/issues/141)) #### Documentation * adds "what's new" section to readme ([ff149a29](https://github.com/kbknapp/clap-rs/commit/ff149a29dd9e179865e6d577cd7dc87c54f8f95c)) #### Improvements * removes deprecated functions in prep for 1.0 ([274484df](https://github.com/kbknapp/clap-rs/commit/274484dfd08fff4859cefd7e9bef3b73d3a9cb5f)) ## v0.11.0 (2015-06-17) - BREAKING CHANGE #### Documentation * updates docs to new version flag defaults ([ebf442eb](https://github.com/kbknapp/clap-rs/commit/ebf442ebebbcd2ec6bfe2c06566c9d362bccb112)) #### Features * **Help and Version** default short for version is now `-V` but can be overridden (only breaks manual documentation) (**BREAKING CHANGE** [eb1d9320](https://github.com/kbknapp/clap-rs/commit/eb1d9320c509c1e4e57d7c7959da82bcfe06ada0)) ### v0.10.5 (2015-06-06) #### Bug Fixes * **Global Args** global arguments propagate fully now ([1f377960](https://github.com/kbknapp/clap-rs/commit/1f377960a48c82f54ca5f39eb56bcb393140b046), closes [#137](https://github.com/kbknapp/clap-rs/issues/137)) ### v0.10.4 (2015-06-06) #### Bug Fixes * **Global Args** global arguments propagate fully now ([8f2c0160](https://github.com/kbknapp/clap-rs/commit/8f2c0160c8d844daef375a33dbaec7d89de00a00), closes [#137](https://github.com/kbknapp/clap-rs/issues/137)) ### v0.10.3 (2015-05-31) #### Bug Fixes * **Global Args** fixes a bug where globals only transfer to one subcommand ([a37842ee](https://github.com/kbknapp/clap-rs/commit/a37842eec1ee3162b86fdbda23420b221cdb1e3b), closes [#135](https://github.com/kbknapp/clap-rs/issues/135)) ### v0.10.2 (2015-05-30) #### Improvements * **Binary Names** allows users to override the system determined bin name ([2191fe94](https://github.com/kbknapp/clap-rs/commit/2191fe94bda35771383b52872fb7f5421b178be1), closes
[#134](https://github.com/kbknapp/clap-rs/issues/134)) #### Documentation * adds contributing guidelines ([6f76bd0a](https://github.com/kbknapp/clap-rs/commit/6f76bd0a07e8b7419b391243ab2d6687cd8a9c5f)) ### v0.10.1 (2015-05-26) #### Features * can now specify that an app or subcommand should display help on no args or subcommands ([29ca7b2f](https://github.com/kbknapp/clap-rs/commit/29ca7b2f74376ca0cdb9d8ee3bfa99f7640cc404), closes [#133](https://github.com/kbknapp/clap-rs/issues/133)) ## v0.10.0 (2015-05-23) #### Features * **Global Args** allows args that propagate down to child commands ([2bcc6137](https://github.com/kbknapp/clap-rs/commit/2bcc6137a83cb07757771a0afea953e68e692f0b), closes [#131](https://github.com/kbknapp/clap-rs/issues/131)) #### Improvements * **Colors** implements more structured colored output ([d6c3ed54](https://github.com/kbknapp/clap-rs/commit/d6c3ed54d21cf7b40d9f130d4280ff5448522fc5), closes [#129](https://github.com/kbknapp/clap-rs/issues/129)) #### Deprecations * **SubCommand/App** several methods and functions for stable release ([28b73855](https://github.com/kbknapp/clap-rs/commit/28b73855523ad170544afdb20665db98702fbe70)) #### Documentation * updates for deprecations and new features ([743eefe8](https://github.com/kbknapp/clap-rs/commit/743eefe8dd40c1260065ce086d572e9e9358bc4c)) ## v0.9.2 (2015-05-20) #### Bug Fixes * **help** allows parent requirements to be ignored with help and version ([52218cc1](https://github.com/kbknapp/clap-rs/commit/52218cc1fdb06a42456c964d98cc2c7ac3432412), closes [#124](https://github.com/kbknapp/clap-rs/issues/124)) ## v0.9.1 (2015-05-18) #### Bug Fixes * **help** fixes a bug where requirements are included as program name in help and version ([08ba3f25](https://github.com/kbknapp/clap-rs/commit/08ba3f25cf38b149229ba8b9cb37a5804fe6b789)) ## v0.9.0 (2015-05-17) #### Improvements * **usage** usage strings now include parent command requirements ([dd8f21c7](https://github.com/kbknapp/clap-rs/commit/dd8f21c7c15cde348fdcf44fa7c205f0e98d2e4a), closes [#125](https://github.com/kbknapp/clap-rs/issues/125)) * **args** allows consumer of clap to decide if empty values are allowed or not ([ab4ec609](https://github.com/kbknapp/clap-rs/commit/ab4ec609ccf692b9b72cccef5c9f74f5577e360d), closes [#122](https://github.com/kbknapp/clap-rs/issues/122)) #### Features * **subcommands** * allows optionally specifying that no subcommand is an error ([7554f238](https://github.com/kbknapp/clap-rs/commit/7554f238fd3afdd60b7e4dcf00ff4a9eccf842c1), closes [#126](https://github.com/kbknapp/clap-rs/issues/126)) * subcommands can optionally negate parent requirements ([4a4229f5](https://github.com/kbknapp/clap-rs/commit/4a4229f500e21c350e1ef78dd09ef27559653288), closes [#123](https://github.com/kbknapp/clap-rs/issues/123)) ## v0.8.6 (2015-05-17) #### Bug Fixes * **args** `-` can now be parsed as a value for an argument ([bc12e78e](https://github.com/kbknapp/clap-rs/commit/bc12e78eadd7eaf9d008a8469fdd2dfd7990cb5d), closes [#121](https://github.com/kbknapp/clap-rs/issues/121)) ## v0.8.5 (2015-05-15) #### Bug Fixes * **macros** makes macro errors consistent with others ([0c264a8c](https://github.com/kbknapp/clap-rs/commit/0c264a8ca57ec1cfdcb74dae79145d766cdc9b97), closes [#118](https://github.com/kbknapp/clap-rs/issues/118)) #### Features * **macros** * arg_enum! and simple_enum! 
provide a Vec<&str> of variant names ([30fa87ba](https://github.com/kbknapp/clap-rs/commit/30fa87ba4e0f3189351d8f4f78b72e616a30d0bd), closes [#119](https://github.com/kbknapp/clap-rs/issues/119)) * arg_enum! and simple_enum! auto-implement Display ([d1219f0d](https://github.com/kbknapp/clap-rs/commit/d1219f0d1371d872061bd0718057eca4ef47b739), closes [#120](https://github.com/kbknapp/clap-rs/issues/120)) ## v0.8.4 (2015-05-12) #### Bug Fixes * **suggestions** --help and --version now get suggestions ([d2b3b1fa](https://github.com/kbknapp/clap-rs/commit/d2b3b1faa0bdc1c5d2350cc4635aba81e02e9d96), closes [#116](https://github.com/kbknapp/clap-rs/issues/116)) ## v0.8.3 (2015-05-10) #### Bug Fixes * **usage** groups unfold their members in usage strings ([55d15582](https://github.com/kbknapp/clap-rs/commit/55d155827ea4a6b077a83669701e797ce1ad68f4), closes [#114](https://github.com/kbknapp/clap-rs/issues/114)) #### Performance * **usage** removes unneeded allocations ([fd53cd18](https://github.com/kbknapp/clap-rs/commit/fd53cd188555f5c3dc8bc341c5d7eb04b761a70f)) ## v0.8.2 (2015-05-08) #### Bug Fixes * **usage strings** positional arguments are presented in index order ([eb0e374e](https://github.com/kbknapp/clap-rs/commit/eb0e374ecf952f1eefbc73113f21e0705936e40b), closes [#112](https://github.com/kbknapp/clap-rs/issues/112)) ## v0.8.1 (2015-05-06) #### Bug Fixes * **subcommands** stops parsing multiple values when subcommands are found ([fc79017e](https://github.com/kbknapp/clap-rs/commit/fc79017eced04fd41cc1801331e5054df41fac17), closes [#109](https://github.com/kbknapp/clap-rs/issues/109)) #### Improvements * **color** reduces color in error messages ([aab44cca](https://github.com/kbknapp/clap-rs/commit/aab44cca6352f47e280c296e50c535f5d752dd46), closes [#110](https://github.com/kbknapp/clap-rs/issues/110)) * **suggestions** adds suggested arguments to usage strings ([99447414](https://github.com/kbknapp/clap-rs/commit/994474146e9fb8b701af773a52da71553d74d4b7)) ## v0.8.0 (2015-05-06) #### Bug Fixes * **did-you-mean** for review ([0535cfb0](https://github.com/kbknapp/clap-rs/commit/0535cfb0c711331568b4de8080eeef80bd254b68)) * **Positional** positionals were ignored if they matched a subcmd, even after '--' ([90e7b081](https://github.com/kbknapp/clap-rs/commit/90e7b0818741668b47cbe3becd029bab588e3553)) * **help** fixes bug where space between arg and help is too long ([632fb115](https://github.com/kbknapp/clap-rs/commit/632fb11514c504999ea86bdce47cdd34f8ebf646)) #### Features * **from_usage** adds ability to add value names or num of vals in usage string ([3d581976](https://github.com/kbknapp/clap-rs/commit/3d58197674ed7886ca315efb76e411608a327501), closes [#98](https://github.com/kbknapp/clap-rs/issues/98)) * **did-you-mean** * gate it behind 'suggestions' ([c0e38351](https://github.com/kbknapp/clap-rs/commit/c0e383515d01bdd5ca459af9c2f7e2cf49e2488b)) * for possible values ([1cc2deb2](https://github.com/kbknapp/clap-rs/commit/1cc2deb29158e0e4e8b434e4ce26b3d819301a7d)) * for long flags (i.e. 
--long) ([52a0b850](https://github.com/kbknapp/clap-rs/commit/52a0b8505c99354bdf5fd1cd256cf41197ac2d81)) * for subcommands ([06e869b5](https://github.com/kbknapp/clap-rs/commit/06e869b5180258047ed3c60ba099de818dd25fff)) * **Flags** adds sugestions functionality ([8745071c](https://github.com/kbknapp/clap-rs/commit/8745071c3257dd327c497013516f12a823df9530)) * **errors** colorizes output red on error ([f8b26b13](https://github.com/kbknapp/clap-rs/commit/f8b26b13da82ba3ba9a932d3d1ab4ea45d1ab036)) #### Improvements * **arg_enum** allows ascii case insensitivity for enum variants ([b249f965](https://github.com/kbknapp/clap-rs/commit/b249f9657c6921c004764bd80d13ebca81585eec), closes [#104](https://github.com/kbknapp/clap-rs/issues/104)) * **clap-test** simplified `make test` invocation ([d17dcb29](https://github.com/kbknapp/clap-rs/commit/d17dcb2920637a1f58c61c596b7bd362fd53047c)) #### Documentation * **README** adds details about optional and new features ([960389de](https://github.com/kbknapp/clap-rs/commit/960389de02c9872aaee9adabe86987f71f986e39)) * **clap** fix typos caught by codespell ([8891d929](https://github.com/kbknapp/clap-rs/commit/8891d92917aa1a069cca67272be41b99e548356e)) * **from_usage** explains new usage strings with multiple values ([05476fc6](https://github.com/kbknapp/clap-rs/commit/05476fc61cd1e5f4a4e750d258c878732a3a9c64)) ## v0.7.6 (2015-05-05) #### Improvements * **Options** adds number of values to options in help/usage ([c1c993c4](https://github.com/kbknapp/clap-rs/commit/c1c993c419d18e35c443785053d8de9a2ef88073)) #### Features * **from_usage** adds ability to add value names or num of vals in usage string ([ad55748c](https://github.com/kbknapp/clap-rs/commit/ad55748c265cf27935c7b210307d2040b6a09125), closes [#98](https://github.com/kbknapp/clap-rs/issues/98)) #### Bug Fixes * **MultipleValues** properly distinguishes between multiple values and multiple occurrences ([dd2a7564](https://github.com/kbknapp/clap-rs/commit/dd2a75640ca68a91b973faad15f04df891356cef), closes [#99](https://github.com/kbknapp/clap-rs/issues/99)) * **help** fixes tab alignment with multiple values ([847001ff](https://github.com/kbknapp/clap-rs/commit/847001ff6d8f4d9518e810fefb8edf746dd0f31e)) #### Documentation * **from_usage** explains new usage strings with multiple values ([5a3a42df](https://github.com/kbknapp/clap-rs/commit/5a3a42dfa3a783537f88dedc0fd5f0edcb8ea372)) ## v0.7.5 (2015-05-04) #### Bug Fixes * **Options** fixes bug where options with no value don't error out ([a1fb94be](https://github.com/kbknapp/clap-rs/commit/a1fb94be53141572ffd97aad037295d4ffec82d0)) ## v0.7.4 (2015-05-03) #### Bug Fixes * **Options** fixes a bug where option arguments in succession get their values skipped ([f66334d0](https://github.com/kbknapp/clap-rs/commit/f66334d0ce984e2b56e5c19abb1dd536fae9342a)) ## v0.7.3 (2015-05-03) #### Bug Fixes * **RequiredValues** fixes a bug where missing values are parsed as missing arguments ([93c4a723](https://github.com/kbknapp/clap-rs/commit/93c4a7231ba1a08152648598f7aa4503ea82e4de)) #### Improvements * **ErrorMessages** improves error messages and corrections ([a29c3983](https://github.com/kbknapp/clap-rs/commit/a29c3983c4229906655a29146ec15a0e46dd942d)) * **ArgGroups** improves requirement and confliction support for groups ([c236dc5f](https://github.com/kbknapp/clap-rs/commit/c236dc5ff475110d2a1b80e62903f80296163ad3)) ## v0.7.2 (2015-05-03) #### Bug Fixes * **RequiredArgs** fixes bug where required-by-default arguments are not listed in usage 
([12aea961](https://github.com/kbknapp/clap-rs/commit/12aea9612d290845ba86515c240aeeb0a21198db), closes [#96](https://github.com/kbknapp/clap-rs/issues/96)) ## v0.7.1 (2015-05-01) #### Bug Fixes * **MultipleValues** stops evaluating values if the max or exact number of values was reached ([86d92c9f](https://github.com/kbknapp/clap-rs/commit/86d92c9fdbf9f422442e9562977bbaf268dbbae1)) ## v0.7.0 (2015-04-30) - BREAKING CHANGE #### Bug Fixes * **from_usage** removes bug where usage strings have no help text ([ad4e5451](https://github.com/kbknapp/clap-rs/commit/ad4e54510739aeabf75f0da3278fb0952db531b3), closes [#83](https://github.com/kbknapp/clap-rs/issues/83)) #### Features * **MultipleValues** * add support for minimum and maximum number of values ([53f6b8c9](https://github.com/kbknapp/clap-rs/commit/53f6b8c9d8dc408b4fa9f833fc3a63683873c42f)) * adds support limited number and named values ([ae09f05e](https://github.com/kbknapp/clap-rs/commit/ae09f05e92251c1b39a83d372736fcc7b504e432)) * implement shorthand for options with multiple values ([6669f0a9](https://github.com/kbknapp/clap-rs/commit/6669f0a9687d4f668523145d7bd5c007d1eb59a8)) * **arg** allow other types besides Vec for multiple value settings (**BREAKING CHANGE** [0cc2f698](https://github.com/kbknapp/clap-rs/commit/0cc2f69839b9b1db5d06330771b494783049a88e), closes [#87](https://github.com/kbknapp/clap-rs/issues/87)) * **usage** implement smart usage strings on errors ([d77048ef](https://github.com/kbknapp/clap-rs/commit/d77048efb1e595ffe831f1a2bea2f2700db53b9f), closes [#88](https://github.com/kbknapp/clap-rs/issues/88)) ## v0.6.9 (2015-04-29) #### Bug Fixes * **from_usage** removes bug where usage strings have no help text ([ad4e5451](https://github.com/kbknapp/clap-rs/commit/ad4e54510739aeabf75f0da3278fb0952db531b3), closes [#83](https://github.com/kbknapp/clap-rs/issues/83)) ## 0.6.8 (2015-04-27) #### Bug Fixes * **help** change long help --long=long -> --long ([1e25abfc](https://github.com/kbknapp/clap-rs/commit/1e25abfc36679ab89eae71bf98ced4de81992d00)) * **RequiredArgs** required by default args should no longer be required when their exclusions are present ([4bb4c3cc](https://github.com/kbknapp/clap-rs/commit/4bb4c3cc076b49e86720e882bf8c489877199f2d)) #### Features * **ArgGroups** add ability to create arg groups ([09eb4d98](https://github.com/kbknapp/clap-rs/commit/09eb4d9893af40c347e50e2b717e1adef552357d)) ## v0.6.7 (2015-04-22) #### Bug Fixes * **from_usage** fix bug causing args to not be required ([b76129e9](https://github.com/kbknapp/clap-rs/commit/b76129e9b71a63365d5c77a7f57b58dbd1e94d49)) #### Features * **apps** add ability to display additional help info after auto-gen'ed help msg ([65cc259e](https://github.com/kbknapp/clap-rs/commit/65cc259e4559cbe3653c865ec0c4b1e42a389b07)) ## v0.6.6 (2015-04-19) #### Bug Fixes * **from_usage** tabs and spaces should be treated equally ([4fd44181](https://github.com/kbknapp/clap-rs/commit/4fd44181d55d8eb88caab1e625231cfa3129e347)) #### Features * **macros.rs** add macro to get version from Cargo.toml ([c630969a](https://github.com/kbknapp/clap-rs/commit/c630969aa3bbd386379219cae27ba1305b117f3e)) ## v0.6.5 (2015-04-19) #### Bug Fixes * **macros.rs** fix use statements for trait impls ([86e4075e](https://github.com/kbknapp/clap-rs/commit/86e4075eb111937c8a7bdb344e866e350429f042)) ## v0.6.4 (2015-04-17) #### Features * **macros** add ability to create enums pub or priv with derives ([2c499f80](https://github.com/kbknapp/clap-rs/commit/2c499f8015a199827cdf1fa3ec4f6f171722f8c7)) ## 
v0.6.3 (2015-04-16) #### Features * **macros** add macro to create custom enums to use as types ([fb672aff](https://github.com/kbknapp/clap-rs/commit/fb672aff561c29db2e343d6c607138f141aca8b6)) ## v0.6.2 (2015-04-14) #### Features * **macros** * add ability to get multiple typed values or exit ([0b87251f](https://github.com/kbknapp/clap-rs/commit/0b87251fc088234bee51c323c2b652d7254f7a59)) * add ability to get a typed multiple values ([e243fe38](https://github.com/kbknapp/clap-rs/commit/e243fe38ddbbf845a46c0b9baebaac3778c80927)) * add convenience macro to get a typed value or exit ([4b7cd3ea](https://github.com/kbknapp/clap-rs/commit/4b7cd3ea4947780d9daa39f3e1ddab53ad4c7fef)) * add convenience macro to get a typed value ([8752700f](https://github.com/kbknapp/clap-rs/commit/8752700fbb30e89ee68adbce24489ae9a24d33a9)) ## v0.6.1 (2015-04-13) #### Bug Fixes * **from_usage** trim all whitespace before parsing ([91d29045](https://github.com/kbknapp/clap-rs/commit/91d2904599bd602deef2e515dfc65dc2863bdea0)) ## v0.6.0 (2015-04-13) #### Bug Fixes * **tests** fix failing doc tests ([3710cd69](https://github.com/kbknapp/clap-rs/commit/3710cd69162f87221a62464f63437c1ce843ad3c)) #### Features * **app** add support for building args from usage strings ([d5d48bcf](https://github.com/kbknapp/clap-rs/commit/d5d48bcf463a4e494ef758836bd69a4c220bbbb5)) * **args** add ability to create basic arguments from a usage string ([ab409a8f](https://github.com/kbknapp/clap-rs/commit/ab409a8f1db9e37cc70200f6f4a84a162692e618)) ## v0.5.14 (2015-04-10) #### Bug Fixes * **usage** * remove unneeded space ([51372789](https://github.com/kbknapp/clap-rs/commit/5137278942121bc2593ce6e5dc224ec2682549e6)) * remove warning about unused variables ([ba817b9d](https://github.com/kbknapp/clap-rs/commit/ba817b9d815e37320650973f1bea0e7af3030fd7)) #### Features * **usage** add ability to get usage string for subcommands too ([3636afc4](https://github.com/kbknapp/clap-rs/commit/3636afc401c2caa966efb5b1869ef4f1ed3384aa)) ## v0.5.13 (2015-04-09) #### Features * **SubCommands** add method to get name and subcommand matches together ([64e53928](https://github.com/kbknapp/clap-rs/commit/64e539280e23e567cf5de393b346eb0ca20e7eb5)) * **ArgMatches** add method to get default usage string ([02462150](https://github.com/kbknapp/clap-rs/commit/02462150ca750bdc7012627d7e8d96379d494d7f)) ## v0.5.12 (2015-04-08) #### Features * **help** sort arguments by name so as to not display a random order ([f4b2bf57](https://github.com/kbknapp/clap-rs/commit/f4b2bf5767386013069fb74862e6e938dacf44d2)) ## v0.5.11 (2015-04-08) #### Bug Fixes * **flags** fix bug not allowing users to specify -v or -h ([90e72cff](https://github.com/kbknapp/clap-rs/commit/90e72cffdee321b79eea7a2207119533540062b4)) ## v0.5.10 (2015-04-08) #### Bug Fixes * **help** fix spacing when option argument has not long version ([ca17fa49](https://github.com/kbknapp/clap-rs/commit/ca17fa494b68e92da83ee364bf64b0687006824b)) ## v0.5.9 (2015-04-08) #### Bug Fixes * **positional args** all previous positional args become required when a latter one is required ([c14c3f31](https://github.com/kbknapp/clap-rs/commit/c14c3f31fd557c165570b60911d8ee483d89d6eb), closes [#50](https://github.com/kbknapp/clap-rs/issues/50)) * **clap** remove unstable features for Rust 1.0 ([9abdb438](https://github.com/kbknapp/clap-rs/commit/9abdb438e36e364d41550e7f5d44ebcaa8ee6b10)) * **args** improve error messages for arguments with mutual exclusions 
([18dbcf37](https://github.com/kbknapp/clap-rs/commit/18dbcf37024daf2b76ca099a6f118b53827aa339), closes [#51](https://github.com/kbknapp/clap-rs/issues/51)) ## v0.5.8 (2015-04-08) #### Bug Fixes * **option args** fix bug in getting the wrong number of occurrences for options ([82ad6ad7](https://github.com/kbknapp/clap-rs/commit/82ad6ad77539cf9f9a03b78db466f575ebd972cc)) * **help** fix formatting for option arguments with no long ([e8691004](https://github.com/kbknapp/clap-rs/commit/e869100423d93fa3acff03c4620cbcc0d0e790a1)) * **flags** add assertion to catch flags with specific value sets ([a0a2a40f](https://github.com/kbknapp/clap-rs/commit/a0a2a40fed57f7c5ad9d68970d090e9856306c7d), closes [#52](https://github.com/kbknapp/clap-rs/issues/52)) * **args** improve error messages for arguments with mutual exclusions ([bff945fc](https://github.com/kbknapp/clap-rs/commit/bff945fc5d03bba4266533340adcffb002508d1b), closes [#51](https://github.com/kbknapp/clap-rs/issues/51)) * **tests** add missing .takes_value(true) to option2 ([bdb0e88f](https://github.com/kbknapp/clap-rs/commit/bdb0e88f696c8595c3def3bfb0e52d538c7be085)) * **positional args** all previous positional args become required when a latter one is required ([343d47dc](https://github.com/kbknapp/clap-rs/commit/343d47dcbf83786a45c0d0f01b27fd9dd76725de), closes [#50](https://github.com/kbknapp/clap-rs/issues/50)) ## v0.5.7 (2015-04-08) #### Bug Fixes * **args** fix bug in arguments who are required and mutually exclusive ([6ceb88a5](https://github.com/kbknapp/clap-rs/commit/6ceb88a594caae825605abc1cdad95204996bf29)) ## v0.5.6 (2015-04-08) #### Bug Fixes * **help** fix formatting of help and usage ([28691b52](https://github.com/kbknapp/clap-rs/commit/28691b52f67e65c599e10e4ea2a0f6f9765a06b8)) ## v0.5.5 (2015-04-08) #### Bug Fixes * **help** fix formatting of help for flags and options ([6ec10115](https://github.com/kbknapp/clap-rs/commit/6ec1011563a746f0578a93b76d45e63878e0f9a8)) ## v0.5.4 (2015-04-08) #### Features * **help** add '...' 
to indicate multiple values supported ([297ddba7](https://github.com/kbknapp/clap-rs/commit/297ddba77000e2228762ab0eca50b480f7467386)) ## v0.5.3 (2015-04-08) #### Features * **positionals** * add assertions for positional args with multiple vals ([b7fa72d4](https://github.com/kbknapp/clap-rs/commit/b7fa72d40f18806ec2042dd67a518401c2cf5681)) * add support for multiple values ([80784009](https://github.com/kbknapp/clap-rs/commit/807840094109fbf90b348039ae22669ef27889ba)) ## v0.5.2 (2015-04-08) #### Bug Fixes * **apps** allow use of hyphens in application and subcommand names ([da549dcb](https://github.com/kbknapp/clap-rs/commit/da549dcb6c7e0d773044ab17829744483a8b0f7f)) ## v0.5.1 (2015-04-08) #### Bug Fixes * **args** determine if the only arguments allowed are also required ([0a09eb36](https://github.com/kbknapp/clap-rs/commit/0a09eb365ced9a03faf8ed24f083ef730acc90e8)) ## v0.5.0 (2015-04-08) #### Features * **args** add support for a specific set of allowed values on options or positional arguments ([270eb889](https://github.com/kbknapp/clap-rs/commit/270eb88925b6dc2881bff1f31ee344f085d31809)) ## v0.4.18 (2015-04-08) #### Bug Fixes * **usage** display required args in usage, even if only required by others ([1b7316d4](https://github.com/kbknapp/clap-rs/commit/1b7316d4a8df70b0aa584ccbfd33f68966ad2a54)) #### Features * **subcommands** properly list subcommands in help and usage ([4ee02344](https://github.com/kbknapp/clap-rs/commit/4ee023442abc3dba54b68138006a52b714adf331)) ## v0.4.17 (2015-04-08) #### Bug Fixes * **tests** remove cargo test from claptests makefile ([1cf73817](https://github.com/kbknapp/clap-rs/commit/1cf73817d6fb1dccb5b6a23b46c2efa8b567ad62)) ## v0.4.16 (2015-04-08) #### Bug Fixes * **option** fix bug with option occurrence values ([9af52e93](https://github.com/kbknapp/clap-rs/commit/9af52e93cef9e17ac9974963f132013d0b97b946)) * **tests** fix testing script bug and formatting ([d8f03a55](https://github.com/kbknapp/clap-rs/commit/d8f03a55c4f74d126710ee06aad5a667246a8001)) #### Features * **arg** allow lifetimes other than 'static in arguments ([9e8c1fb9](https://github.com/kbknapp/clap-rs/commit/9e8c1fb9406f8448873ca58bab07fe905f1551e5)) vendor/clap/SPONSORS.md0000664000175000017500000000174414160055207015404 0ustar mwhudsonmwhudsonBelow is a list of sponsors for the clap-rs project. If you are interested in becoming a sponsor for this project please visit our [sponsorship page](https://clap.rs/sponsorship/).
## Recurring Sponsors: | [Noelia Seva-Gonzalez](https://noeliasg.com/about/) | [messense](https://github.com/messense) | [Josh](https://joshtriplett.org) | Stephen Oats | |:-:|:-:|:-:|:-:| |Noelia Seva-Gonzalez | Messense | Josh Triplett | Stephen Oats | ## Single-Donation and Former Sponsors: | [Rob Tsuk](https://github.com/rtsuk)| | | |:-:|:-:|:-:| |Rob Tsuk| | | vendor/clap/debian/0000775000175000017500000000000014160055207015010 5ustar mwhudsonmwhudsonvendor/clap/debian/patches/0000775000175000017500000000000014172417313016442 5ustar mwhudsonmwhudsonvendor/clap/debian/patches/no-clippy.patch0000664000175000017500000000033214172417313021373 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -64,10 +64,6 @@ [dependencies.bitflags] version = "1.0" -[dependencies.clippy] -version = "~0.0.166" -optional = true - [dependencies.strsim] version = "0.8" optional = true vendor/clap/debian/patches/series0000664000175000017500000000005114160055207017650 0ustar mwhudsonmwhudsonno-clippy.patch relax-dep-versions.patch vendor/clap/debian/patches/relax-dep-versions.patch0000664000175000017500000000115114172417313023210 0ustar mwhudsonmwhudson--- a/Cargo.toml +++ b/Cargo.toml @@ -65,7 +65,7 @@ version = "1.0" [dependencies.strsim] -version = "0.8" +version = ">= 0.7, < 0.10" optional = true [dependencies.term_size] @@ -83,7 +83,7 @@ optional = true [dependencies.yaml-rust] -version = "0.3.5" +version = ">= 0.3.5, < 0.5" optional = true [dev-dependencies.lazy_static] version = "1.3" @@ -106,7 +106,7 @@ wrap_help = ["term_size", "textwrap/term_size"] yaml = ["yaml-rust"] [target."cfg(not(windows))".dependencies.ansi_term] -version = "0.12" +version = ">= 0.11, < 0.13" optional = true [badges.appveyor] repository = "clap-rs/clap" vendor/clap/src/0000775000175000017500000000000014172417313014360 5ustar mwhudsonmwhudsonvendor/clap/src/errors.rs0000664000175000017500000007404714172417313016256 0ustar mwhudsonmwhudson// Std use std::{ convert::From, error::Error as StdError, fmt as std_fmt, fmt::Display, io::{self, Write}, process, result::Result as StdResult, }; // Internal use crate::{ args::AnyArg, fmt::{ColorWhen, Colorizer, ColorizerOption}, suggestions, }; /// Short hand for [`Result`] type /// /// [`Result`]: https://doc.rust-lang.org/std/result/enum.Result.html pub type Result = StdResult; /// Command line argument parser kind of error #[derive(Debug, Copy, Clone, PartialEq)] pub enum ErrorKind { /// Occurs when an [`Arg`] has a set of possible values, /// and the user provides a value which isn't in that set. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("speed") /// .possible_value("fast") /// .possible_value("slow")) /// .get_matches_from_safe(vec!["prog", "other"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::InvalidValue); /// ``` /// [`Arg`]: ./struct.Arg.html InvalidValue, /// Occurs when a user provides a flag, option, argument or subcommand which isn't defined. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::from_usage("--flag 'some flag'")) /// .get_matches_from_safe(vec!["prog", "--other"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::UnknownArgument); /// ``` UnknownArgument, /// Occurs when the user provides an unrecognized [`SubCommand`] which meets the threshold for /// being similar enough to an existing subcommand. 
/// If it doesn't meet the threshold, or the 'suggestions' feature is disabled, /// the more general [`UnknownArgument`] error is returned. /// /// # Examples /// #[cfg_attr(not(feature = "suggestions"), doc = " ```no_run")] #[cfg_attr(feature = "suggestions", doc = " ```")] /// # use clap::{App, Arg, ErrorKind, SubCommand}; /// let result = App::new("prog") /// .subcommand(SubCommand::with_name("config") /// .about("Used for configuration") /// .arg(Arg::with_name("config_file") /// .help("The configuration file to use") /// .index(1))) /// .get_matches_from_safe(vec!["prog", "confi"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::InvalidSubcommand); /// ``` /// [`SubCommand`]: ./struct.SubCommand.html /// [`UnknownArgument`]: ./enum.ErrorKind.html#variant.UnknownArgument InvalidSubcommand, /// Occurs when the user provides an unrecognized [`SubCommand`] which either /// doesn't meet the threshold for being similar enough to an existing subcommand, /// or the 'suggestions' feature is disabled. /// Otherwise the more detailed [`InvalidSubcommand`] error is returned. /// /// This error typically happens when passing additional subcommand names to the `help` /// subcommand. Otherwise, the more general [`UnknownArgument`] error is used. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind, SubCommand}; /// let result = App::new("prog") /// .subcommand(SubCommand::with_name("config") /// .about("Used for configuration") /// .arg(Arg::with_name("config_file") /// .help("The configuration file to use") /// .index(1))) /// .get_matches_from_safe(vec!["prog", "help", "nothing"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::UnrecognizedSubcommand); /// ``` /// [`SubCommand`]: ./struct.SubCommand.html /// [`InvalidSubcommand`]: ./enum.ErrorKind.html#variant.InvalidSubcommand /// [`UnknownArgument`]: ./enum.ErrorKind.html#variant.UnknownArgument UnrecognizedSubcommand, /// Occurs when the user provides an empty value for an option that does not allow empty /// values. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("color") /// .long("color") /// .empty_values(false)) /// .get_matches_from_safe(vec!["prog", "--color="]); /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::EmptyValue); /// ``` EmptyValue, /// Occurs when the user provides a value for an argument with a custom validation and the /// value fails that validation. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// fn is_numeric(val: String) -> Result<(), String> { /// match val.parse::() { /// Ok(..) => Ok(()), /// Err(..) => Err(String::from("Value wasn't a number!")), /// } /// } /// /// let result = App::new("prog") /// .arg(Arg::with_name("num") /// .validator(is_numeric)) /// .get_matches_from_safe(vec!["prog", "NotANumber"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::ValueValidation); /// ``` ValueValidation, /// Occurs when a user provides more values for an argument than were defined by setting /// [`Arg::max_values`]. 
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("arg") /// .multiple(true) /// .max_values(2)) /// .get_matches_from_safe(vec!["prog", "too", "many", "values"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::TooManyValues); /// ``` /// [`Arg::max_values`]: ./struct.Arg.html#method.max_values TooManyValues, /// Occurs when the user provides fewer values for an argument than were defined by setting /// [`Arg::min_values`]. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("some_opt") /// .long("opt") /// .min_values(3)) /// .get_matches_from_safe(vec!["prog", "--opt", "too", "few"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::TooFewValues); /// ``` /// [`Arg::min_values`]: ./struct.Arg.html#method.min_values TooFewValues, /// Occurs when the user provides a different number of values for an argument than what's /// been defined by setting [`Arg::number_of_values`] or than was implicitly set by /// [`Arg::value_names`]. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("some_opt") /// .long("opt") /// .takes_value(true) /// .number_of_values(2)) /// .get_matches_from_safe(vec!["prog", "--opt", "wrong"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::WrongNumberOfValues); /// ``` /// /// [`Arg::number_of_values`]: ./struct.Arg.html#method.number_of_values /// [`Arg::value_names`]: ./struct.Arg.html#method.value_names WrongNumberOfValues, /// Occurs when the user provides two values which conflict with each other and can't be used /// together. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("debug") /// .long("debug") /// .conflicts_with("color")) /// .arg(Arg::with_name("color") /// .long("color")) /// .get_matches_from_safe(vec!["prog", "--debug", "--color"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::ArgumentConflict); /// ``` ArgumentConflict, /// Occurs when the user does not provide one or more required arguments. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("debug") /// .required(true)) /// .get_matches_from_safe(vec!["prog"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` MissingRequiredArgument, /// Occurs when a subcommand is required (as defined by [`AppSettings::SubcommandRequired`]), /// but the user does not provide one. /// /// # Examples /// /// ```rust /// # use clap::{App, AppSettings, SubCommand, ErrorKind}; /// let err = App::new("prog") /// .setting(AppSettings::SubcommandRequired) /// .subcommand(SubCommand::with_name("test")) /// .get_matches_from_safe(vec![ /// "myprog", /// ]); /// assert!(err.is_err()); /// assert_eq!(err.unwrap_err().kind, ErrorKind::MissingSubcommand); /// # ; /// ``` /// [`AppSettings::SubcommandRequired`]: ./enum.AppSettings.html#variant.SubcommandRequired MissingSubcommand, /// Occurs when either an argument or [`SubCommand`] is required, as defined by /// [`AppSettings::ArgRequiredElseHelp`], but the user did not provide one. 
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg, AppSettings, ErrorKind, SubCommand}; /// let result = App::new("prog") /// .setting(AppSettings::ArgRequiredElseHelp) /// .subcommand(SubCommand::with_name("config") /// .about("Used for configuration") /// .arg(Arg::with_name("config_file") /// .help("The configuration file to use"))) /// .get_matches_from_safe(vec!["prog"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::MissingArgumentOrSubcommand); /// ``` /// [`SubCommand`]: ./struct.SubCommand.html /// [`AppSettings::ArgRequiredElseHelp`]: ./enum.AppSettings.html#variant.ArgRequiredElseHelp MissingArgumentOrSubcommand, /// Occurs when the user provides multiple values to an argument which doesn't allow that. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .arg(Arg::with_name("debug") /// .long("debug") /// .multiple(false)) /// .get_matches_from_safe(vec!["prog", "--debug", "--debug"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::UnexpectedMultipleUsage); /// ``` UnexpectedMultipleUsage, /// Occurs when the user provides a value containing invalid UTF-8 for an argument and /// [`AppSettings::StrictUtf8`] is set. /// /// # Platform Specific /// /// Non-Windows platforms only (such as Linux, Unix, macOS, etc.) /// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg, ErrorKind, AppSettings}; /// # use std::os::unix::ffi::OsStringExt; /// # use std::ffi::OsString; /// let result = App::new("prog") /// .setting(AppSettings::StrictUtf8) /// .arg(Arg::with_name("utf8") /// .short("u") /// .takes_value(true)) /// .get_matches_from_safe(vec![OsString::from("myprog"), /// OsString::from("-u"), /// OsString::from_vec(vec![0xE9])]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::InvalidUtf8); /// ``` /// [`AppSettings::StrictUtf8`]: ./enum.AppSettings.html#variant.StrictUtf8 InvalidUtf8, /// Not a true "error" as it means `--help` or similar was used. /// The help message will be sent to `stdout`. /// /// **Note**: If the help is displayed due to an error (such as missing subcommands) it will /// be sent to `stderr` instead of `stdout`. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .get_matches_from_safe(vec!["prog", "--help"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::HelpDisplayed); /// ``` HelpDisplayed, /// Not a true "error" as it means `--version` or similar was used. /// The message will be sent to `stdout`. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let result = App::new("prog") /// .get_matches_from_safe(vec!["prog", "--version"]); /// assert!(result.is_err()); /// assert_eq!(result.unwrap_err().kind, ErrorKind::VersionDisplayed); /// ``` VersionDisplayed, /// Occurs when using the [`value_t!`] and [`values_t!`] macros to convert an argument value /// into type `T`, but the argument you requested wasn't used. I.e. you asked for an argument /// with name `config` to be converted, but `config` wasn't used by the user. /// [`value_t!`]: ./macro.value_t!.html /// [`values_t!`]: ./macro.values_t!.html ArgumentNotFound, /// Represents an [I/O error]. /// Can occur when writing to `stderr` or `stdout` or reading a configuration file. 
/// [I/O error]: https://doc.rust-lang.org/std/io/struct.Error.html Io, /// Represents a [Format error] (which is a part of [`Display`]). /// Typically caused by writing to `stderr` or `stdout`. /// /// [`Display`]: https://doc.rust-lang.org/std/fmt/trait.Display.html /// [Format error]: https://doc.rust-lang.org/std/fmt/struct.Error.html Format, } /// Command Line Argument Parser Error #[derive(Debug)] pub struct Error { /// Formatted error message pub message: String, /// The type of error pub kind: ErrorKind, /// Any additional information passed along, such as the argument name that caused the error pub info: Option>, } impl Error { /// Should the message be written to `stdout` or not pub fn use_stderr(&self) -> bool { !matches!( self.kind, ErrorKind::HelpDisplayed | ErrorKind::VersionDisplayed ) } /// Prints the error message and exits. If `Error::use_stderr` evaluates to `true`, the message /// will be written to `stderr` and exits with a status of `1`. Otherwise, `stdout` is used /// with a status of `0`. pub fn exit(&self) -> ! { if self.use_stderr() { wlnerr!(@nopanic "{}", self.message); process::exit(1); } // We are deliberately dropping errors here. We could match on the error kind, and only // drop things such as `std::io::ErrorKind::BrokenPipe`, however nothing is being bubbled // up or reported back to the caller and we will be exit'ing the process anyways. // Additionally, changing this API to bubble up the result would be a breaking change. // // Another approach could be to try and write to stdout, if that fails due to a broken pipe // then use stderr. However, that would change the semantics in what could be argued is a // breaking change. Simply dropping the error, can always be changed to this "use stderr if // stdout is closed" approach later if desired. // // A good explanation of the types of errors are SIGPIPE where the read side of the pipe // closes before the write side. 
See the README in `calm_io` for a good explanation: // // https://github.com/myrrlyn/calm_io/blob/a42845575a04cd8b65e92c19d104627f5fcad3d7/README.md let _ = writeln!(&mut io::stdout().lock(), "{}", self.message); process::exit(0); } #[doc(hidden)] pub fn write_to(&self, w: &mut W) -> io::Result<()> { write!(w, "{}", self.message) } #[doc(hidden)] pub fn argument_conflict( arg: &AnyArg, other: Option, usage: U, color: ColorWhen, ) -> Self where O: Into, U: Display, { let mut v = vec![arg.name().to_owned()]; let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The argument '{}' cannot be used with {}\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(&*arg.to_string()), match other { Some(name) => { let n = name.into(); v.push(n.clone()); c.warning(format!("'{}'", n)) } None => c.none("one or more of the other specified arguments".to_owned()), }, usage, c.good("--help") ), kind: ErrorKind::ArgumentConflict, info: Some(v), } } #[doc(hidden)] pub fn empty_value(arg: &AnyArg, usage: U, color: ColorWhen) -> Self where U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The argument '{}' requires a value but none was supplied\ \n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(arg.to_string()), usage, c.good("--help") ), kind: ErrorKind::EmptyValue, info: Some(vec![arg.name().to_owned()]), } } #[doc(hidden)] pub fn invalid_value( bad_val: B, good_vals: &[G], arg: &AnyArg, usage: U, color: ColorWhen, ) -> Self where B: AsRef, G: AsRef + Display, U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); let suffix = suggestions::did_you_mean_value_suffix(bad_val.as_ref(), good_vals.iter()); let mut sorted = vec![]; for v in good_vals { let val = format!("{}", c.good(v)); sorted.push(val); } sorted.sort(); let valid_values = sorted.join(", "); Error { message: format!( "{} '{}' isn't a valid value for '{}'\n\t\ [possible values: {}]\n\ {}\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(bad_val.as_ref()), c.warning(arg.to_string()), valid_values, suffix.0, usage, c.good("--help") ), kind: ErrorKind::InvalidValue, info: Some(vec![arg.name().to_owned(), bad_val.as_ref().to_owned()]), } } #[doc(hidden)] pub fn invalid_subcommand( subcmd: S, did_you_mean: D, name: N, usage: U, color: ColorWhen, ) -> Self where S: Into, D: AsRef + Display, N: Display, U: Display, { let s = subcmd.into(); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The subcommand '{}' wasn't recognized\n\t\ Did you mean '{}'?\n\n\ If you believe you received this message in error, try \ re-running with '{} {} {}'\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(&*s), c.good(did_you_mean.as_ref()), name, c.good("--"), &*s, usage, c.good("--help") ), kind: ErrorKind::InvalidSubcommand, info: Some(vec![s]), } } #[doc(hidden)] pub fn unrecognized_subcommand(subcmd: S, name: N, color: ColorWhen) -> Self where S: Into, N: Display, { let s = subcmd.into(); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The subcommand '{}' wasn't recognized\n\n\ {}\n\t\ {} help ...\n\n\ For more information try {}", c.error("error:"), c.warning(&*s), c.warning("USAGE:"), name, c.good("--help") ), kind: ErrorKind::UnrecognizedSubcommand, info: Some(vec![s]), } } #[doc(hidden)] pub fn 
missing_required_argument(required: R, usage: U, color: ColorWhen) -> Self where R: Display, U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The following required arguments were not provided:{}\n\n\ {}\n\n\ For more information try {}", c.error("error:"), required, usage, c.good("--help") ), kind: ErrorKind::MissingRequiredArgument, info: None, } } #[doc(hidden)] pub fn missing_subcommand(name: N, usage: U, color: ColorWhen) -> Self where N: AsRef + Display, U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} '{}' requires a subcommand, but one was not provided\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(name), usage, c.good("--help") ), kind: ErrorKind::MissingSubcommand, info: None, } } #[doc(hidden)] pub fn invalid_utf8(usage: U, color: ColorWhen) -> Self where U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} Invalid UTF-8 was detected in one or more arguments\n\n\ {}\n\n\ For more information try {}", c.error("error:"), usage, c.good("--help") ), kind: ErrorKind::InvalidUtf8, info: None, } } #[doc(hidden)] pub fn too_many_values(val: V, arg: &AnyArg, usage: U, color: ColorWhen) -> Self where V: AsRef + Display + ToOwned, U: Display, { let v = val.as_ref(); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The value '{}' was provided to '{}', but it wasn't expecting \ any more values\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(v), c.warning(arg.to_string()), usage, c.good("--help") ), kind: ErrorKind::TooManyValues, info: Some(vec![arg.name().to_owned(), v.to_owned()]), } } #[doc(hidden)] pub fn too_few_values( arg: &AnyArg, min_vals: u64, curr_vals: usize, usage: U, color: ColorWhen, ) -> Self where U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The argument '{}' requires at least {} values, but only {} w{} \ provided\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(arg.to_string()), c.warning(min_vals.to_string()), c.warning(curr_vals.to_string()), if curr_vals > 1 { "ere" } else { "as" }, usage, c.good("--help") ), kind: ErrorKind::TooFewValues, info: Some(vec![arg.name().to_owned()]), } } #[doc(hidden)] pub fn value_validation(arg: Option<&AnyArg>, err: String, color: ColorWhen) -> Self { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} Invalid value{}: {}", c.error("error:"), if let Some(a) = arg { format!(" for '{}'", c.warning(a.to_string())) } else { "".to_string() }, err ), kind: ErrorKind::ValueValidation, info: None, } } #[doc(hidden)] pub fn value_validation_auto(err: String) -> Self { let n: Option<&AnyArg> = None; Error::value_validation(n, err, ColorWhen::Auto) } #[doc(hidden)] pub fn wrong_number_of_values( arg: &AnyArg, num_vals: u64, curr_vals: usize, suffix: S, usage: U, color: ColorWhen, ) -> Self where S: Display, U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The argument '{}' requires {} values, but {} w{} \ provided\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(arg.to_string()), c.warning(num_vals.to_string()), c.warning(curr_vals.to_string()), suffix, usage, c.good("--help") ), kind: 
ErrorKind::WrongNumberOfValues, info: Some(vec![arg.name().to_owned()]), } } #[doc(hidden)] pub fn unexpected_multiple_usage(arg: &AnyArg, usage: U, color: ColorWhen) -> Self where U: Display, { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} The argument '{}' was provided more than once, but cannot \ be used multiple times\n\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(arg.to_string()), usage, c.good("--help") ), kind: ErrorKind::UnexpectedMultipleUsage, info: Some(vec![arg.name().to_owned()]), } } #[doc(hidden)] pub fn unknown_argument(arg: A, did_you_mean: &str, usage: U, color: ColorWhen) -> Self where A: Into, U: Display, { let a = arg.into(); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!( "{} Found argument '{}' which wasn't expected, or isn't valid in \ this context{}\n\ {}\n\n\ For more information try {}", c.error("error:"), c.warning(&*a), if did_you_mean.is_empty() { "\n".to_owned() } else { format!("{}\n", did_you_mean) }, usage, c.good("--help") ), kind: ErrorKind::UnknownArgument, info: Some(vec![a]), } } #[doc(hidden)] pub fn io_error(e: &Error, color: ColorWhen) -> Self { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: color, }); Error { message: format!("{} {}", c.error("error:"), e.description()), kind: ErrorKind::Io, info: None, } } #[doc(hidden)] pub fn argument_not_found_auto(arg: A) -> Self where A: Into, { let a = arg.into(); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: ColorWhen::Auto, }); Error { message: format!("{} The argument '{}' wasn't found", c.error("error:"), a), kind: ErrorKind::ArgumentNotFound, info: Some(vec![a]), } } /// Create an error with a custom description. /// /// This can be used in combination with `Error::exit` to exit your program /// with a custom error message. 
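///
/// # Examples
///
/// A minimal illustrative sketch (the message text and the choice of
/// `ErrorKind::Io` are arbitrary placeholders, not taken from elsewhere in
/// this crate):
///
/// ```rust,no_run
/// use clap::{Error, ErrorKind};
///
/// // Build a custom error, then print it and terminate the process.
/// // For a non-help/version kind such as `Io`, `exit` writes the message
/// // to stderr and exits with status 1.
/// let err = Error::with_description("could not read the configuration file", ErrorKind::Io);
/// err.exit();
/// ```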
pub fn with_description(description: &str, kind: ErrorKind) -> Self { let c = Colorizer::new(ColorizerOption { use_stderr: true, when: ColorWhen::Auto, }); Error { message: format!("{} {}", c.error("error:"), description), kind, info: None, } } } impl StdError for Error { fn description(&self) -> &str { &*self.message } } impl Display for Error { fn fmt(&self, f: &mut std_fmt::Formatter) -> std_fmt::Result { writeln!(f, "{}", self.message) } } impl From for Error { fn from(e: io::Error) -> Self { Error::with_description(e.description(), ErrorKind::Io) } } impl From for Error { fn from(e: std_fmt::Error) -> Self { Error::with_description(e.description(), ErrorKind::Format) } } vendor/clap/src/usage_parser.rs0000664000175000017500000013621114172417313017412 0ustar mwhudsonmwhudson// Internal use crate::{ args::{settings::ArgSettings, Arg}, map::VecMap, INTERNAL_ERROR_MSG, }; #[derive(PartialEq, Debug)] enum UsageToken { Name, ValName, Short, Long, Help, Multiple, Unknown, } #[doc(hidden)] #[derive(Debug)] pub struct UsageParser<'a> { usage: &'a str, pos: usize, start: usize, prev: UsageToken, explicit_name_set: bool, } impl<'a> UsageParser<'a> { fn new(usage: &'a str) -> Self { debugln!("UsageParser::new: usage={:?}", usage); UsageParser { usage, pos: 0, start: 0, prev: UsageToken::Unknown, explicit_name_set: false, } } pub fn from_usage(usage: &'a str) -> Self { debugln!("UsageParser::from_usage;"); UsageParser::new(usage) } pub fn parse(mut self) -> Arg<'a, 'a> { debugln!("UsageParser::parse;"); let mut arg = Arg::default(); loop { debugln!("UsageParser::parse:iter: pos={};", self.pos); self.stop_at(token); if let Some(&c) = self.usage.as_bytes().get(self.pos) { match c { b'-' => self.short_or_long(&mut arg), b'.' => self.multiple(&mut arg), b'\'' => self.help(&mut arg), _ => self.name(&mut arg), } } else { break; } } debug_assert!( !arg.b.name.is_empty(), "No name found for Arg when parsing usage string: {}", self.usage ); arg.v.num_vals = match arg.v.val_names { Some(ref v) if v.len() >= 2 => Some(v.len() as u64), _ => None, }; debugln!("UsageParser::parse: vals...{:?}", arg.v.val_names); arg } fn name(&mut self, arg: &mut Arg<'a, 'a>) { debugln!("UsageParser::name;"); if *self .usage .as_bytes() .get(self.pos) .expect(INTERNAL_ERROR_MSG) == b'<' && !self.explicit_name_set { arg.setb(ArgSettings::Required); } self.pos += 1; self.stop_at(name_end); let name = &self.usage[self.start..self.pos]; if self.prev == UsageToken::Unknown { debugln!("UsageParser::name: setting name...{}", name); arg.b.name = name; if arg.s.long.is_none() && arg.s.short.is_none() { debugln!("UsageParser::name: explicit name set..."); self.explicit_name_set = true; self.prev = UsageToken::Name; } } else { debugln!("UsageParser::name: setting val name...{}", name); if let Some(ref mut v) = arg.v.val_names { let len = v.len(); v.insert(len, name); } else { let mut v = VecMap::new(); v.insert(0, name); arg.v.val_names = Some(v); arg.setb(ArgSettings::TakesValue); } self.prev = UsageToken::ValName; } } fn stop_at(&mut self, f: F) where F: Fn(u8) -> bool, { debugln!("UsageParser::stop_at;"); self.start = self.pos; self.pos += self.usage[self.start..] 
.bytes() .take_while(|&b| f(b)) .count(); } fn short_or_long(&mut self, arg: &mut Arg<'a, 'a>) { debugln!("UsageParser::short_or_long;"); self.pos += 1; if *self .usage .as_bytes() .get(self.pos) .expect(INTERNAL_ERROR_MSG) == b'-' { self.pos += 1; self.long(arg); return; } self.short(arg) } fn long(&mut self, arg: &mut Arg<'a, 'a>) { debugln!("UsageParser::long;"); self.stop_at(long_end); let name = &self.usage[self.start..self.pos]; if !self.explicit_name_set { debugln!("UsageParser::long: setting name...{}", name); arg.b.name = name; } debugln!("UsageParser::long: setting long...{}", name); arg.s.long = Some(name); self.prev = UsageToken::Long; } fn short(&mut self, arg: &mut Arg<'a, 'a>) { debugln!("UsageParser::short;"); let start = &self.usage[self.pos..]; let short = start.chars().next().expect(INTERNAL_ERROR_MSG); debugln!("UsageParser::short: setting short...{}", short); arg.s.short = Some(short); if arg.b.name.is_empty() { // --long takes precedence but doesn't set self.explicit_name_set let name = &start[..short.len_utf8()]; debugln!("UsageParser::short: setting name...{}", name); arg.b.name = name; } self.prev = UsageToken::Short; } // "something..." fn multiple(&mut self, arg: &mut Arg) { debugln!("UsageParser::multiple;"); let mut dot_counter = 1; let start = self.pos; let mut bytes = self.usage[start..].bytes(); while bytes.next() == Some(b'.') { dot_counter += 1; self.pos += 1; if dot_counter == 3 { debugln!("UsageParser::multiple: setting multiple"); arg.setb(ArgSettings::Multiple); if arg.is_set(ArgSettings::TakesValue) { arg.setb(ArgSettings::UseValueDelimiter); arg.unsetb(ArgSettings::ValueDelimiterNotSet); if arg.v.val_delim.is_none() { arg.v.val_delim = Some(','); } } self.prev = UsageToken::Multiple; self.pos += 1; break; } } } fn help(&mut self, arg: &mut Arg<'a, 'a>) { debugln!("UsageParser::help;"); self.stop_at(help_start); self.start = self.pos + 1; self.pos = self.usage.len() - 1; debugln!( "UsageParser::help: setting help...{}", &self.usage[self.start..self.pos] ); arg.b.help = Some(&self.usage[self.start..self.pos]); self.pos += 1; // Move to next byte to keep from thinking ending ' is a start self.prev = UsageToken::Help; } } #[inline] fn name_end(b: u8) -> bool { b != b']' && b != b'>' } #[inline] fn token(b: u8) -> bool { b != b'\'' && b != b'.' && b != b'<' && b != b'[' && b != b'-' } #[inline] fn long_end(b: u8) -> bool { b != b'\'' && b != b'.' 
&& b != b'<' && b != b'[' && b != b'=' && b != b' ' } #[inline] fn help_start(b: u8) -> bool { b != b'\'' } #[cfg(test)] mod test { use crate::args::{Arg, ArgSettings}; #[test] fn create_flag_usage() { let a = Arg::from_usage("[flag] -f 'some help info'"); assert_eq!(a.b.name, "flag"); assert_eq!(a.s.short.unwrap(), 'f'); assert!(a.s.long.is_none()); assert_eq!(a.b.help.unwrap(), "some help info"); assert!(!a.is_set(ArgSettings::Multiple)); assert!(a.v.val_names.is_none()); assert!(a.v.num_vals.is_none()); let b = Arg::from_usage("[flag] --flag 'some help info'"); assert_eq!(b.b.name, "flag"); assert_eq!(b.s.long.unwrap(), "flag"); assert!(b.s.short.is_none()); assert_eq!(b.b.help.unwrap(), "some help info"); assert!(!b.is_set(ArgSettings::Multiple)); assert!(a.v.val_names.is_none()); assert!(a.v.num_vals.is_none()); let b = Arg::from_usage("--flag 'some help info'"); assert_eq!(b.b.name, "flag"); assert_eq!(b.s.long.unwrap(), "flag"); assert!(b.s.short.is_none()); assert_eq!(b.b.help.unwrap(), "some help info"); assert!(!b.is_set(ArgSettings::Multiple)); assert!(b.v.val_names.is_none()); assert!(b.v.num_vals.is_none()); let c = Arg::from_usage("[flag] -f --flag 'some help info'"); assert_eq!(c.b.name, "flag"); assert_eq!(c.s.short.unwrap(), 'f'); assert_eq!(c.s.long.unwrap(), "flag"); assert_eq!(c.b.help.unwrap(), "some help info"); assert!(!c.is_set(ArgSettings::Multiple)); assert!(c.v.val_names.is_none()); assert!(c.v.num_vals.is_none()); let d = Arg::from_usage("[flag] -f... 'some help info'"); assert_eq!(d.b.name, "flag"); assert_eq!(d.s.short.unwrap(), 'f'); assert!(d.s.long.is_none()); assert_eq!(d.b.help.unwrap(), "some help info"); assert!(d.is_set(ArgSettings::Multiple)); assert!(d.v.val_names.is_none()); assert!(d.v.num_vals.is_none()); let e = Arg::from_usage("[flag] -f --flag... 'some help info'"); assert_eq!(e.b.name, "flag"); assert_eq!(e.s.long.unwrap(), "flag"); assert_eq!(e.s.short.unwrap(), 'f'); assert_eq!(e.b.help.unwrap(), "some help info"); assert!(e.is_set(ArgSettings::Multiple)); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("-f --flag... 
'some help info'"); assert_eq!(e.b.name, "flag"); assert_eq!(e.s.long.unwrap(), "flag"); assert_eq!(e.s.short.unwrap(), 'f'); assert_eq!(e.b.help.unwrap(), "some help info"); assert!(e.is_set(ArgSettings::Multiple)); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("--flags"); assert_eq!(e.b.name, "flags"); assert_eq!(e.s.long.unwrap(), "flags"); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("--flags..."); assert_eq!(e.b.name, "flags"); assert_eq!(e.s.long.unwrap(), "flags"); assert!(e.is_set(ArgSettings::Multiple)); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("[flags] -f"); assert_eq!(e.b.name, "flags"); assert_eq!(e.s.short.unwrap(), 'f'); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("[flags] -f..."); assert_eq!(e.b.name, "flags"); assert_eq!(e.s.short.unwrap(), 'f'); assert!(e.is_set(ArgSettings::Multiple)); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let a = Arg::from_usage("-f 'some help info'"); assert_eq!(a.b.name, "f"); assert_eq!(a.s.short.unwrap(), 'f'); assert!(a.s.long.is_none()); assert_eq!(a.b.help.unwrap(), "some help info"); assert!(!a.is_set(ArgSettings::Multiple)); assert!(a.v.val_names.is_none()); assert!(a.v.num_vals.is_none()); let e = Arg::from_usage("-f"); assert_eq!(e.b.name, "f"); assert_eq!(e.s.short.unwrap(), 'f'); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); let e = Arg::from_usage("-f..."); assert_eq!(e.b.name, "f"); assert_eq!(e.s.short.unwrap(), 'f'); assert!(e.is_set(ArgSettings::Multiple)); assert!(e.v.val_names.is_none()); assert!(e.v.num_vals.is_none()); } #[test] fn create_option_usage0() { // Short only let a = Arg::from_usage("[option] -o [opt] 'some help info'"); assert_eq!(a.b.name, "option"); assert_eq!(a.s.short.unwrap(), 'o'); assert!(a.s.long.is_none()); assert_eq!(a.b.help.unwrap(), "some help info"); assert!(!a.is_set(ArgSettings::Multiple)); assert!(a.is_set(ArgSettings::TakesValue)); assert!(!a.is_set(ArgSettings::Required)); assert_eq!( a.v.val_names.unwrap().values().collect::>(), [&"opt"] ); assert!(a.v.num_vals.is_none()); } #[test] fn create_option_usage1() { let b = Arg::from_usage("-o [opt] 'some help info'"); assert_eq!(b.b.name, "o"); assert_eq!(b.s.short.unwrap(), 'o'); assert!(b.s.long.is_none()); assert_eq!(b.b.help.unwrap(), "some help info"); assert!(!b.is_set(ArgSettings::Multiple)); assert!(b.is_set(ArgSettings::TakesValue)); assert!(!b.is_set(ArgSettings::Required)); assert_eq!( b.v.val_names.unwrap().values().collect::>(), [&"opt"] ); assert!(b.v.num_vals.is_none()); } #[test] fn create_option_usage2() { let c = Arg::from_usage("( &self, arg: &A, val: &OsStr, matcher: &mut ArgMatcher<'a>, ) -> ClapResult> where A: AnyArg<'a, 'b> + Display, { debugln!("Parser::add_val_to_arg; arg={}, val={:?}", arg.name(), val); debugln!( "Parser::add_val_to_arg; trailing_vals={:?}, DontDelimTrailingVals={:?}", self.is_set(AS::TrailingValues), self.is_set(AS::DontDelimitTrailingValues) ); if !(self.is_set(AS::TrailingValues) && self.is_set(AS::DontDelimitTrailingValues)) { if let Some(delim) = arg.val_delim() { if val.is_empty() { Ok(self.add_single_val_to_arg(arg, val, matcher)?) 
} else { let mut iret = ParseResult::ValuesDone; for v in val.split(delim as u32 as u8) { iret = self.add_single_val_to_arg(arg, v, matcher)?; } // If there was a delimiter used, we're not looking for more values if val.contains_byte(delim as u32 as u8) || arg.is_set(ArgSettings::RequireDelimiter) { iret = ParseResult::ValuesDone; } Ok(iret) } } else { self.add_single_val_to_arg(arg, val, matcher) } } else { self.add_single_val_to_arg(arg, val, matcher) } } fn add_single_val_to_arg( &self, arg: &A, v: &OsStr, matcher: &mut ArgMatcher<'a>, ) -> ClapResult> where A: AnyArg<'a, 'b> + Display, { debugln!("Parser::add_single_val_to_arg;"); debugln!("Parser::add_single_val_to_arg: adding val...{:?}", v); // update the current index because each value is a distinct index to clap self.cur_idx.set(self.cur_idx.get() + 1); // @TODO @docs @p4: docs for indices should probably note that a terminator isn't a value // and therefore not reported in indices if let Some(t) = arg.val_terminator() { if t == v { return Ok(ParseResult::ValuesDone); } } matcher.add_val_to(arg.name(), v); matcher.add_index_to(arg.name(), self.cur_idx.get()); // Increment or create the group "args" if let Some(grps) = self.groups_for_arg(arg.name()) { for grp in grps { matcher.add_val_to(&*grp, v); } } if matcher.needs_more_vals(arg) { return Ok(ParseResult::Opt(arg.name())); } Ok(ParseResult::ValuesDone) } fn parse_flag( &self, flag: &FlagBuilder<'a, 'b>, matcher: &mut ArgMatcher<'a>, ) -> ClapResult> { debugln!("Parser::parse_flag;"); matcher.inc_occurrence_of(flag.b.name); matcher.add_index_to(flag.b.name, self.cur_idx.get()); // Increment or create the group "args" if let Some(vec) = self.groups_for_arg(flag.b.name) { matcher.inc_occurrences_of(&*vec); } Ok(ParseResult::Flag) } fn did_you_mean_error( &self, arg: &str, matcher: &mut ArgMatcher<'a>, args_rest: &[&str], ) -> ClapResult<()> { // Didn't match a flag or option let suffix = suggestions::did_you_mean_flag_suffix(arg, args_rest, longs!(self), &self.subcommands); // Add the arg to the matches to build a proper usage string if let Some(name) = suffix.1 { if let Some(opt) = find_opt_by_long!(self, name) { if let Some(grps) = self.groups_for_arg(&*opt.b.name) { matcher.inc_occurrences_of(&*grps); } matcher.insert(&*opt.b.name); } else if let Some(flg) = find_flag_by_long!(self, name) { if let Some(grps) = self.groups_for_arg(&*flg.b.name) { matcher.inc_occurrences_of(&*grps); } matcher.insert(&*flg.b.name); } } let used_arg = format!("--{}", arg); Err(Error::unknown_argument( &*used_arg, &*suffix.0, &*usage::create_error_usage(self, matcher, None), self.color(), )) } // Prints the version to the user and exits if quit=true fn print_version(&self, w: &mut W, use_long: bool) -> ClapResult<()> { self.write_version(w, use_long)?; w.flush().map_err(Error::from) } pub fn write_version(&self, w: &mut W, use_long: bool) -> io::Result<()> { let ver = if use_long { self.meta .long_version .unwrap_or_else(|| self.meta.version.unwrap_or("")) } else { self.meta .version .unwrap_or_else(|| self.meta.long_version.unwrap_or("")) }; if let Some(bn) = self.meta.bin_name.as_ref() { if bn.contains(' ') { // Incase we're dealing with subcommands i.e. 
git mv is translated to git-mv write!(w, "{} {}", bn.replace(" ", "-"), ver) } else { write!(w, "{} {}", &self.meta.name[..], ver) } } else { write!(w, "{} {}", &self.meta.name[..], ver) } } pub fn print_help(&self) -> ClapResult<()> { let out = io::stdout(); let mut buf_w = BufWriter::new(out.lock()); self.write_help(&mut buf_w) } pub fn write_help(&self, w: &mut W) -> ClapResult<()> { Help::write_parser_help(w, self, false) } pub fn write_long_help(&self, w: &mut W) -> ClapResult<()> { Help::write_parser_help(w, self, true) } pub fn write_help_err(&self, w: &mut W) -> ClapResult<()> { Help::write_parser_help_to_stderr(w, self) } pub fn add_defaults(&mut self, matcher: &mut ArgMatcher<'a>) -> ClapResult<()> { debugln!("Parser::add_defaults;"); macro_rules! add_val { (@default $_self:ident, $a:ident, $m:ident) => { if let Some(ref val) = $a.v.default_val { debugln!("Parser::add_defaults:iter:{}: has default vals", $a.b.name); if $m.get($a.b.name).map(|ma| ma.vals.len()).map(|len| len == 0).unwrap_or(false) { debugln!("Parser::add_defaults:iter:{}: has no user defined vals", $a.b.name); $_self.add_val_to_arg($a, OsStr::new(val), $m)?; if $_self.cache.map_or(true, |name| name != $a.name()) { $_self.cache = Some($a.name()); } } else if $m.get($a.b.name).is_some() { debugln!("Parser::add_defaults:iter:{}: has user defined vals", $a.b.name); } else { debugln!("Parser::add_defaults:iter:{}: wasn't used", $a.b.name); $_self.add_val_to_arg($a, OsStr::new(val), $m)?; if $_self.cache.map_or(true, |name| name != $a.name()) { $_self.cache = Some($a.name()); } } } else { debugln!("Parser::add_defaults:iter:{}: doesn't have default vals", $a.b.name); } }; ($_self:ident, $a:ident, $m:ident) => { if let Some(ref vm) = $a.v.default_vals_ifs { sdebugln!(" has conditional defaults"); let mut done = false; if $m.get($a.b.name).is_none() { for &(arg, val, default) in vm.values() { let add = if let Some(a) = $m.get(arg) { if let Some(v) = val { a.vals.iter().any(|value| v == value) } else { true } } else { false }; if add { $_self.add_val_to_arg($a, OsStr::new(default), $m)?; if $_self.cache.map_or(true, |name| name != $a.name()) { $_self.cache = Some($a.name()); } done = true; break; } } } if done { continue; // outer loop (outside macro) } } else { sdebugln!(" doesn't have conditional defaults"); } add_val!(@default $_self, $a, $m) }; } for o in &self.opts { debug!("Parser::add_defaults:iter:{}:", o.b.name); add_val!(self, o, matcher); } for p in self.positionals.values() { debug!("Parser::add_defaults:iter:{}:", p.b.name); add_val!(self, p, matcher); } Ok(()) } pub fn add_env(&mut self, matcher: &mut ArgMatcher<'a>) -> ClapResult<()> { macro_rules! 
add_val { ($_self:ident, $a:ident, $m:ident) => { if let Some(ref val) = $a.v.env { if $m .get($a.b.name) .map(|ma| ma.vals.len()) .map(|len| len == 0) .unwrap_or(false) { if let Some(ref val) = val.1 { $_self.add_val_to_arg($a, OsStr::new(val), $m)?; if $_self.cache.map_or(true, |name| name != $a.name()) { $_self.cache = Some($a.name()); } } } else { if let Some(ref val) = val.1 { $_self.add_val_to_arg($a, OsStr::new(val), $m)?; if $_self.cache.map_or(true, |name| name != $a.name()) { $_self.cache = Some($a.name()); } } } } }; } for o in &self.opts { add_val!(self, o, matcher); } for p in self.positionals.values() { add_val!(self, p, matcher); } Ok(()) } pub fn flags(&self) -> Iter> { self.flags.iter() } pub fn opts(&self) -> Iter> { self.opts.iter() } pub fn positionals(&self) -> map::Values> { self.positionals.values() } pub fn subcommands(&self) -> Iter { self.subcommands.iter() } // Should we color the output? None=determined by output location, true=yes, false=no #[doc(hidden)] pub fn color(&self) -> ColorWhen { debugln!("Parser::color;"); debug!("Parser::color: Color setting..."); if self.is_set(AS::ColorNever) { sdebugln!("Never"); ColorWhen::Never } else if self.is_set(AS::ColorAlways) { sdebugln!("Always"); ColorWhen::Always } else { sdebugln!("Auto"); ColorWhen::Auto } } pub fn find_any_arg(&self, name: &str) -> Option<&AnyArg<'a, 'b>> { if let Some(f) = find_by_name!(self, name, flags, iter) { return Some(f); } if let Some(o) = find_by_name!(self, name, opts, iter) { return Some(o); } if let Some(p) = find_by_name!(self, name, positionals, values) { return Some(p); } None } /// Check is a given string matches the binary name for this parser fn is_bin_name(&self, value: &str) -> bool { self.meta .bin_name .as_ref() .map(|name| value == name) .unwrap_or(false) } /// Check is a given string is an alias for this parser fn is_alias(&self, value: &str) -> bool { self.meta .aliases .as_ref() .map(|aliases| { for alias in aliases { if alias.0 == value { return true; } } false }) .unwrap_or(false) } // Only used for completion scripts due to bin_name messiness #[cfg_attr(feature = "lints", allow(block_in_if_condition_stmt))] pub fn find_subcommand(&'b self, sc: &str) -> Option<&'b App<'a, 'b>> { debugln!("Parser::find_subcommand: sc={}", sc); debugln!( "Parser::find_subcommand: Currently in Parser...{}", self.meta.bin_name.as_ref().unwrap() ); for s in &self.subcommands { if s.p.is_bin_name(sc) { return Some(s); } // XXX: why do we split here? // isn't `sc` supposed to be single word already? 
let last = sc.split(' ').rev().next().expect(INTERNAL_ERROR_MSG); if s.p.is_alias(last) { return Some(s); } if let Some(app) = s.p.find_subcommand(sc) { return Some(app); } } None } #[inline] fn contains_long(&self, l: &str) -> bool { longs!(self).any(|al| al == &l) } #[inline] fn contains_short(&self, s: char) -> bool { shorts!(self).any(|arg_s| arg_s == &s) } } vendor/clap/src/app/validator.rs0000664000175000017500000005303114172417313017475 0ustar mwhudsonmwhudson// std #[allow(deprecated, unused_imports)] use std::{ascii::AsciiExt, fmt::Display}; // Internal use crate::{ app::{ parser::{ParseResult, Parser}, settings::AppSettings as AS, usage, }, args::{settings::ArgSettings, AnyArg, ArgMatcher, MatchedArg}, errors::{Error, ErrorKind, Result as ClapResult}, fmt::{Colorizer, ColorizerOption}, INTERNAL_ERROR_MSG, INVALID_UTF8, }; pub struct Validator<'a, 'b, 'z>(&'z mut Parser<'a, 'b>) where 'a: 'b, 'b: 'z; impl<'a, 'b, 'z> Validator<'a, 'b, 'z> { pub fn new(p: &'z mut Parser<'a, 'b>) -> Self { Validator(p) } pub fn validate( &mut self, needs_val_of: ParseResult<'a>, subcmd_name: Option, matcher: &mut ArgMatcher<'a>, ) -> ClapResult<()> { debugln!("Validator::validate;"); let mut reqs_validated = false; self.0.add_env(matcher)?; self.0.add_defaults(matcher)?; if let ParseResult::Opt(a) = needs_val_of { debugln!("Validator::validate: needs_val_of={:?}", a); let o = { self.0 .opts .iter() .find(|o| o.b.name == a) .expect(INTERNAL_ERROR_MSG) .clone() }; self.validate_required(matcher)?; reqs_validated = true; let should_err = if let Some(v) = matcher.0.args.get(&*o.b.name) { v.vals.is_empty() && !(o.v.min_vals.is_some() && o.v.min_vals.unwrap() == 0) } else { true }; if should_err { return Err(Error::empty_value( &o, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } } if matcher.is_empty() && matcher.subcommand_name().is_none() && self.0.is_set(AS::ArgRequiredElseHelp) { let mut out = vec![]; self.0.write_help_err(&mut out)?; return Err(Error { message: String::from_utf8_lossy(&*out).into_owned(), kind: ErrorKind::MissingArgumentOrSubcommand, info: None, }); } self.validate_blacklist(matcher)?; if !(reqs_validated || self.0.is_set(AS::SubcommandsNegateReqs) && subcmd_name.is_some()) { self.validate_required(matcher)?; } self.validate_matched_args(matcher)?; matcher.usage(usage::create_usage_with_title(self.0, &[])); Ok(()) } fn validate_arg_values( &self, arg: &A, ma: &MatchedArg, matcher: &ArgMatcher<'a>, ) -> ClapResult<()> where A: AnyArg<'a, 'b> + Display, { debugln!("Validator::validate_arg_values: arg={:?}", arg.name()); for val in &ma.vals { if self.0.is_set(AS::StrictUtf8) && val.to_str().is_none() { debugln!( "Validator::validate_arg_values: invalid UTF-8 found in val {:?}", val ); return Err(Error::invalid_utf8( &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } if let Some(p_vals) = arg.possible_vals() { debugln!("Validator::validate_arg_values: possible_vals={:?}", p_vals); let val_str = val.to_string_lossy(); let ok = if arg.is_set(ArgSettings::CaseInsensitive) { p_vals.iter().any(|pv| pv.eq_ignore_ascii_case(&*val_str)) } else { p_vals.contains(&&*val_str) }; if !ok { return Err(Error::invalid_value( val_str, p_vals, arg, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } } if !arg.is_set(ArgSettings::EmptyValues) && val.is_empty() && matcher.contains(&*arg.name()) { debugln!("Validator::validate_arg_values: illegal empty val found"); return Err(Error::empty_value( arg, &*usage::create_error_usage(self.0, 
matcher, None), self.0.color(), )); } if let Some(vtor) = arg.validator() { debug!("Validator::validate_arg_values: checking validator..."); if let Err(e) = vtor(val.to_string_lossy().into_owned()) { sdebugln!("error"); return Err(Error::value_validation(Some(arg), e, self.0.color())); } else { sdebugln!("good"); } } if let Some(vtor) = arg.validator_os() { debug!("Validator::validate_arg_values: checking validator_os..."); if let Err(e) = vtor(val) { sdebugln!("error"); return Err(Error::value_validation( Some(arg), (*e).to_string_lossy().to_string(), self.0.color(), )); } else { sdebugln!("good"); } } } Ok(()) } fn build_err(&self, name: &str, matcher: &ArgMatcher) -> ClapResult<()> { debugln!("build_err!: name={}", name); let mut c_with = find_from!(self.0, &name, blacklist, matcher); c_with = c_with.or_else(|| { self.0 .find_any_arg(name) .and_then(|aa| aa.blacklist()) .and_then(|bl| bl.iter().find(|arg| matcher.contains(arg))) .and_then(|an| self.0.find_any_arg(an)) .map(|aa| format!("{}", aa)) }); debugln!("build_err!: '{:?}' conflicts with '{}'", c_with, &name); // matcher.remove(&name); let usg = usage::create_error_usage(self.0, matcher, None); if let Some(f) = find_by_name!(self.0, name, flags, iter) { debugln!("build_err!: It was a flag..."); Err(Error::argument_conflict(f, c_with, &*usg, self.0.color())) } else if let Some(o) = find_by_name!(self.0, name, opts, iter) { debugln!("build_err!: It was an option..."); Err(Error::argument_conflict(o, c_with, &*usg, self.0.color())) } else { match find_by_name!(self.0, name, positionals, values) { Some(p) => { debugln!("build_err!: It was a positional..."); Err(Error::argument_conflict(p, c_with, &*usg, self.0.color())) } None => panic!("{}", INTERNAL_ERROR_MSG), } } } fn validate_blacklist(&self, matcher: &mut ArgMatcher) -> ClapResult<()> { debugln!("Validator::validate_blacklist;"); let mut conflicts: Vec<&str> = vec![]; for (&name, _) in matcher.iter() { debugln!("Validator::validate_blacklist:iter:{};", name); if let Some(grps) = self.0.groups_for_arg(name) { for grp in &grps { if let Some(g) = self.0.groups.iter().find(|g| &g.name == grp) { if !g.multiple { for arg in &g.args { if arg == &name { continue; } conflicts.push(arg); } } if let Some(ref gc) = g.conflicts { conflicts.extend(&*gc); } } } } if let Some(arg) = find_any_by_name!(self.0, name) { if let Some(bl) = arg.blacklist() { for conf in bl { if matcher.get(conf).is_some() { conflicts.push(conf); } } } } else { debugln!("Validator::validate_blacklist:iter:{}:group;", name); let args = self.0.arg_names_in_group(name); for arg in &args { debugln!( "Validator::validate_blacklist:iter:{}:group:iter:{};", name, arg ); if let Some(bl) = find_any_by_name!(self.0, *arg).unwrap().blacklist() { for conf in bl { if matcher.get(conf).is_some() { conflicts.push(conf); } } } } } } for name in &conflicts { debugln!( "Validator::validate_blacklist:iter:{}: Checking blacklisted arg", name ); let mut should_err = false; if self.0.groups.iter().any(|g| &g.name == name) { debugln!( "Validator::validate_blacklist:iter:{}: groups contains it...", name ); for n in self.0.arg_names_in_group(name) { debugln!( "Validator::validate_blacklist:iter:{}:iter:{}: looking in group...", name, n ); if matcher.contains(n) { debugln!( "Validator::validate_blacklist:iter:{}:iter:{}: matcher contains it...", name, n ); return self.build_err(n, matcher); } } } else if let Some(ma) = matcher.get(name) { debugln!( "Validator::validate_blacklist:iter:{}: matcher contains it...", name ); should_err = ma.occurs > 
0; } if should_err { return self.build_err(*name, matcher); } } Ok(()) } fn validate_matched_args(&self, matcher: &mut ArgMatcher<'a>) -> ClapResult<()> { debugln!("Validator::validate_matched_args;"); for (name, ma) in matcher.iter() { debugln!( "Validator::validate_matched_args:iter:{}: vals={:#?}", name, ma.vals ); if let Some(opt) = find_by_name!(self.0, *name, opts, iter) { self.validate_arg_num_vals(opt, ma, matcher)?; self.validate_arg_values(opt, ma, matcher)?; self.validate_arg_requires(opt, ma, matcher)?; self.validate_arg_num_occurs(opt, ma, matcher)?; } else if let Some(flag) = find_by_name!(self.0, *name, flags, iter) { self.validate_arg_requires(flag, ma, matcher)?; self.validate_arg_num_occurs(flag, ma, matcher)?; } else if let Some(pos) = find_by_name!(self.0, *name, positionals, values) { self.validate_arg_num_vals(pos, ma, matcher)?; self.validate_arg_num_occurs(pos, ma, matcher)?; self.validate_arg_values(pos, ma, matcher)?; self.validate_arg_requires(pos, ma, matcher)?; } else { let grp = self .0 .groups .iter() .find(|g| &g.name == name) .expect(INTERNAL_ERROR_MSG); if let Some(ref g_reqs) = grp.requires { if g_reqs.iter().any(|&n| !matcher.contains(n)) { return self.missing_required_error(matcher, None); } } } } Ok(()) } fn validate_arg_num_occurs( &self, a: &A, ma: &MatchedArg, matcher: &ArgMatcher, ) -> ClapResult<()> where A: AnyArg<'a, 'b> + Display, { debugln!("Validator::validate_arg_num_occurs: a={};", a.name()); if ma.occurs > 1 && !a.is_set(ArgSettings::Multiple) { // Not the first time, and we don't allow multiples return Err(Error::unexpected_multiple_usage( a, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } Ok(()) } fn validate_arg_num_vals( &self, a: &A, ma: &MatchedArg, matcher: &ArgMatcher, ) -> ClapResult<()> where A: AnyArg<'a, 'b> + Display, { debugln!("Validator::validate_arg_num_vals:{}", a.name()); if let Some(num) = a.num_vals() { debugln!("Validator::validate_arg_num_vals: num_vals set...{}", num); let should_err = if a.is_set(ArgSettings::Multiple) { ((ma.vals.len() as u64) % num) != 0 } else { num != (ma.vals.len() as u64) }; if should_err { debugln!("Validator::validate_arg_num_vals: Sending error WrongNumberOfValues"); return Err(Error::wrong_number_of_values( a, num, if a.is_set(ArgSettings::Multiple) { ma.vals.len() % num as usize } else { ma.vals.len() }, if ma.vals.len() == 1 || (a.is_set(ArgSettings::Multiple) && (ma.vals.len() % num as usize) == 1) { "as" } else { "ere" }, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } } if let Some(num) = a.max_vals() { debugln!("Validator::validate_arg_num_vals: max_vals set...{}", num); if (ma.vals.len() as u64) > num { debugln!("Validator::validate_arg_num_vals: Sending error TooManyValues"); return Err(Error::too_many_values( ma.vals .iter() .last() .expect(INTERNAL_ERROR_MSG) .to_str() .expect(INVALID_UTF8), a, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } } let min_vals_zero = if let Some(num) = a.min_vals() { debugln!("Validator::validate_arg_num_vals: min_vals set: {}", num); if (ma.vals.len() as u64) < num && num != 0 { debugln!("Validator::validate_arg_num_vals: Sending error TooFewValues"); return Err(Error::too_few_values( a, num, ma.vals.len(), &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } num == 0 } else { false }; // Issue 665 (https://github.com/clap-rs/clap/issues/665) // Issue 1105 (https://github.com/clap-rs/clap/issues/1105) if a.takes_value() && !min_vals_zero && 
ma.vals.is_empty() { return Err(Error::empty_value( a, &*usage::create_error_usage(self.0, matcher, None), self.0.color(), )); } Ok(()) } fn validate_arg_requires( &self, a: &A, ma: &MatchedArg, matcher: &ArgMatcher, ) -> ClapResult<()> where A: AnyArg<'a, 'b> + Display, { debugln!("Validator::validate_arg_requires:{};", a.name()); if let Some(a_reqs) = a.requires() { for &(val, name) in a_reqs.iter().filter(|&&(val, _)| val.is_some()) { let missing_req = |v| v == val.expect(INTERNAL_ERROR_MSG) && !matcher.contains(name); if ma.vals.iter().any(missing_req) { return self.missing_required_error(matcher, None); } } for &(_, name) in a_reqs.iter().filter(|&&(val, _)| val.is_none()) { if !matcher.contains(name) { return self.missing_required_error(matcher, Some(name)); } } } Ok(()) } fn validate_required(&mut self, matcher: &ArgMatcher) -> ClapResult<()> { debugln!( "Validator::validate_required: required={:?};", self.0.required ); let mut should_err = false; let mut to_rem = Vec::new(); for name in &self.0.required { debugln!("Validator::validate_required:iter:{}:", name); if matcher.contains(name) { continue; } if to_rem.contains(name) { continue; } else if let Some(a) = find_any_by_name!(self.0, *name) { if self.is_missing_required_ok(a, matcher) { to_rem.push(a.name()); if let Some(reqs) = a.requires() { for r in reqs .iter() .filter(|&&(val, _)| val.is_none()) .map(|&(_, name)| name) { to_rem.push(r); } } continue; } } should_err = true; break; } if should_err { for r in &to_rem { 'inner: for i in (0..self.0.required.len()).rev() { if &self.0.required[i] == r { self.0.required.swap_remove(i); break 'inner; } } } return self.missing_required_error(matcher, None); } // Validate the conditionally required args for &(a, v, r) in &self.0.r_ifs { if let Some(ma) = matcher.get(a) { if matcher.get(r).is_none() && ma.vals.iter().any(|val| val == v) { return self.missing_required_error(matcher, Some(r)); } } } Ok(()) } fn validate_arg_conflicts(&self, a: &AnyArg, matcher: &ArgMatcher) -> Option { debugln!("Validator::validate_arg_conflicts: a={:?};", a.name()); a.blacklist().map(|bl| { bl.iter().any(|conf| { matcher.contains(conf) || self .0 .groups .iter() .find(|g| &g.name == conf) .map_or(false, |g| g.args.iter().any(|arg| matcher.contains(arg))) }) }) } fn validate_required_unless(&self, a: &AnyArg, matcher: &ArgMatcher) -> Option { debugln!("Validator::validate_required_unless: a={:?};", a.name()); macro_rules! check { ($how:ident, $_self:expr, $a:ident, $m:ident) => {{ $a.required_unless().map(|ru| { ru.iter().$how(|n| { $m.contains(n) || { if let Some(grp) = $_self.groups.iter().find(|g| &g.name == n) { grp.args.iter().any(|arg| $m.contains(arg)) } else { false } } }) }) }}; } if a.is_set(ArgSettings::RequiredUnlessAll) { check!(all, self.0, a, matcher) } else { check!(any, self.0, a, matcher) } } fn missing_required_error(&self, matcher: &ArgMatcher, extra: Option<&str>) -> ClapResult<()> { debugln!("Validator::missing_required_error: extra={:?}", extra); let c = Colorizer::new(ColorizerOption { use_stderr: true, when: self.0.color(), }); let mut reqs = self.0.required.iter().map(|&r| &*r).collect::>(); if let Some(r) = extra { reqs.push(r); } reqs.retain(|n| !matcher.contains(n)); reqs.dedup(); debugln!("Validator::missing_required_error: reqs={:#?}", reqs); let req_args = usage::get_required_usage_from(self.0, &reqs[..], Some(matcher), extra, true) .iter() .fold(String::new(), |acc, s| { acc + &format!("\n {}", c.error(s))[..] 
}); debugln!( "Validator::missing_required_error: req_args={:#?}", req_args ); Err(Error::missing_required_argument( &*req_args, &*usage::create_error_usage(self.0, matcher, extra), self.0.color(), )) } #[inline] fn is_missing_required_ok(&self, a: &AnyArg, matcher: &ArgMatcher) -> bool { debugln!("Validator::is_missing_required_ok: a={}", a.name()); self.validate_arg_conflicts(a, matcher).unwrap_or(false) || self.validate_required_unless(a, matcher).unwrap_or(false) } } vendor/clap/src/fmt.rs0000664000175000017500000001162514172417313015521 0ustar mwhudsonmwhudson#[cfg(all(feature = "color", not(target_os = "windows")))] use ansi_term::ANSIString; #[cfg(all(feature = "color", not(target_os = "windows")))] use ansi_term::Colour::{Green, Red, Yellow}; use std::env; use std::fmt; #[doc(hidden)] #[derive(Debug, Copy, Clone, PartialEq)] pub enum ColorWhen { Auto, Always, Never, } #[cfg(feature = "color")] pub fn is_a_tty(stderr: bool) -> bool { debugln!("is_a_tty: stderr={:?}", stderr); let stream = if stderr { atty::Stream::Stderr } else { atty::Stream::Stdout }; atty::is(stream) } #[cfg(not(feature = "color"))] pub fn is_a_tty(_: bool) -> bool { debugln!("is_a_tty;"); false } pub fn is_term_dumb() -> bool { env::var("TERM").ok() == Some(String::from("dumb")) } #[doc(hidden)] pub struct ColorizerOption { pub use_stderr: bool, pub when: ColorWhen, } #[doc(hidden)] pub struct Colorizer { when: ColorWhen, } macro_rules! color { ($_self:ident, $c:ident, $m:expr) => { match $_self.when { ColorWhen::Auto => Format::$c($m), ColorWhen::Always => Format::$c($m), ColorWhen::Never => Format::None($m), } }; } impl Colorizer { pub fn new(option: ColorizerOption) -> Colorizer { let is_a_tty = is_a_tty(option.use_stderr); let is_term_dumb = is_term_dumb(); Colorizer { when: match option.when { ColorWhen::Auto if is_a_tty && !is_term_dumb => ColorWhen::Auto, ColorWhen::Auto => ColorWhen::Never, when => when, }, } } pub fn good(&self, msg: T) -> Format where T: fmt::Display + AsRef, { debugln!("Colorizer::good;"); color!(self, Good, msg) } pub fn warning(&self, msg: T) -> Format where T: fmt::Display + AsRef, { debugln!("Colorizer::warning;"); color!(self, Warning, msg) } pub fn error(&self, msg: T) -> Format where T: fmt::Display + AsRef, { debugln!("Colorizer::error;"); color!(self, Error, msg) } pub fn none(&self, msg: T) -> Format where T: fmt::Display + AsRef, { debugln!("Colorizer::none;"); Format::None(msg) } } impl Default for Colorizer { fn default() -> Self { Colorizer::new(ColorizerOption { use_stderr: true, when: ColorWhen::Auto, }) } } /// Defines styles for different types of error messages. 
Defaults to Error=Red, Warning=Yellow, /// and Good=Green #[derive(Debug)] #[doc(hidden)] pub enum Format { /// Defines the style used for errors, defaults to Red Error(T), /// Defines the style used for warnings, defaults to Yellow Warning(T), /// Defines the style used for good values, defaults to Green Good(T), /// Defines no formatting style None(T), } #[cfg(all(feature = "color", not(target_os = "windows")))] impl> Format { fn format(&self) -> ANSIString { match *self { Format::Error(ref e) => Red.bold().paint(e.as_ref()), Format::Warning(ref e) => Yellow.paint(e.as_ref()), Format::Good(ref e) => Green.paint(e.as_ref()), Format::None(ref e) => ANSIString::from(e.as_ref()), } } } #[cfg(any(not(feature = "color"), target_os = "windows"))] #[cfg_attr(feature = "cargo-clippy", allow(clippy::match_same_arms))] impl Format { fn format(&self) -> &T { match *self { Format::Error(ref e) => e, Format::Warning(ref e) => e, Format::Good(ref e) => e, Format::None(ref e) => e, } } } #[cfg(all(feature = "color", not(target_os = "windows")))] impl> fmt::Display for Format { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{}", &self.format()) } } #[cfg(any(not(feature = "color"), target_os = "windows"))] impl fmt::Display for Format { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{}", &self.format()) } } #[cfg(all(test, feature = "color", not(target_os = "windows")))] mod test { use super::Format; use ansi_term::ANSIString; use ansi_term::Colour::{Green, Red, Yellow}; #[test] fn colored_output() { let err = Format::Error("error"); assert_eq!( &*format!("{}", err), &*format!("{}", Red.bold().paint("error")) ); let good = Format::Good("good"); assert_eq!(&*format!("{}", good), &*format!("{}", Green.paint("good"))); let warn = Format::Warning("warn"); assert_eq!(&*format!("{}", warn), &*format!("{}", Yellow.paint("warn"))); let none = Format::None("none"); assert_eq!( &*format!("{}", none), &*format!("{}", ANSIString::from("none")) ); } } vendor/clap/src/osstringext.rs0000664000175000017500000001516114172417313017323 0ustar mwhudsonmwhudsonuse std::ffi::OsStr; #[cfg(not(any(target_os = "windows", target_arch = "wasm32")))] use std::os::unix::ffi::OsStrExt; #[cfg(any(target_os = "windows", target_arch = "wasm32"))] use crate::INVALID_UTF8; #[cfg(any(target_os = "windows", target_arch = "wasm32"))] pub trait OsStrExt3 { fn from_bytes(b: &[u8]) -> &Self; fn as_bytes(&self) -> &[u8]; } #[doc(hidden)] pub trait OsStrExt2 { fn starts_with(&self, s: &[u8]) -> bool; fn split_at_byte(&self, b: u8) -> (&OsStr, &OsStr); fn split_at(&self, i: usize) -> (&OsStr, &OsStr); fn trim_left_matches(&self, b: u8) -> &OsStr; fn contains_byte(&self, b: u8) -> bool; fn split(&self, b: u8) -> OsSplit; } // A starts-with implementation that does not panic when the OsStr contains // invalid Unicode. // // A Windows OsStr is usually UTF-16. If `prefix` is valid UTF-8, we can // re-encode it as UTF-16, and ask whether `osstr` starts with the same series // of u16 code units. If `prefix` is not valid UTF-8, then this comparison // isn't meaningful, and we just return false. #[cfg(target_os = "windows")] fn windows_osstr_starts_with(osstr: &OsStr, prefix: &[u8]) -> bool { use std::os::windows::ffi::OsStrExt; let prefix_str = if let Ok(s) = std::str::from_utf8(prefix) { s } else { return false; }; let mut osstr_units = osstr.encode_wide(); let mut prefix_units = prefix_str.encode_utf16(); loop { match (osstr_units.next(), prefix_units.next()) { // These code units match. Keep looping. 
(Some(o), Some(p)) if o == p => continue, // We've reached the end of the prefix. It's a match. (_, None) => return true, // Otherwise, it's not a match. _ => return false, } } } #[test] #[cfg(target_os = "windows")] fn test_windows_osstr_starts_with() { use std::ffi::OsString; use std::os::windows::ffi::OsStringExt; fn from_ascii(ascii: &[u8]) -> OsString { let u16_vec: Vec = ascii.iter().map(|&c| c as u16).collect(); OsString::from_wide(&u16_vec) } // Test all the basic cases. assert!(windows_osstr_starts_with(&from_ascii(b"abcdef"), b"abc")); assert!(windows_osstr_starts_with(&from_ascii(b"abcdef"), b"abcdef")); assert!(!windows_osstr_starts_with(&from_ascii(b"abcdef"), b"def")); assert!(!windows_osstr_starts_with(&from_ascii(b"abc"), b"abcd")); // Test the case where the candidate prefix is not valid UTF-8. Note that a // standalone \xff byte is valid ASCII but not valid UTF-8. Thus although // these strings look identical, they do not match. assert!(!windows_osstr_starts_with(&from_ascii(b"\xff"), b"\xff")); // Test the case where the OsString is not valid UTF-16. It should still be // possible to match the valid characters at the front. // // UTF-16 surrogate characters are only valid in pairs. Including one on // the end by itself makes this invalid UTF-16. let surrogate_char: u16 = 0xDC00; let invalid_unicode = OsString::from_wide(&['a' as u16, 'b' as u16, 'c' as u16, surrogate_char]); assert!( invalid_unicode.to_str().is_none(), "This string is invalid Unicode, and conversion to &str should fail.", ); assert!(windows_osstr_starts_with(&invalid_unicode, b"abc")); assert!(!windows_osstr_starts_with(&invalid_unicode, b"abcd")); } #[cfg(any(target_os = "windows", target_arch = "wasm32"))] impl OsStrExt3 for OsStr { fn from_bytes(b: &[u8]) -> &Self { use std::mem; unsafe { mem::transmute(b) } } fn as_bytes(&self) -> &[u8] { self.to_str().map(|s| s.as_bytes()).expect(INVALID_UTF8) } } impl OsStrExt2 for OsStr { fn starts_with(&self, s: &[u8]) -> bool { #[cfg(target_os = "windows")] { // On Windows, the as_bytes() method will panic if the OsStr // contains invalid Unicode. To avoid this, we use a // Windows-specific starts-with function that doesn't rely on // as_bytes(). This is necessary for Windows command line // applications to handle non-Unicode arguments successfully. This // allows common cases like `clap.exe [invalid]` to succeed, though // cases that require string splitting will still fail, like // `clap.exe --arg=[invalid]`. Note that this entire module is // replaced in Clap 3.x, so this workaround is specific to the 2.x // branch. 
windows_osstr_starts_with(self, s) } #[cfg(not(target_os = "windows"))] { self.as_bytes().starts_with(s) } } fn contains_byte(&self, byte: u8) -> bool { for b in self.as_bytes() { if b == &byte { return true; } } false } fn split_at_byte(&self, byte: u8) -> (&OsStr, &OsStr) { for (i, b) in self.as_bytes().iter().enumerate() { if b == &byte { return ( OsStr::from_bytes(&self.as_bytes()[..i]), OsStr::from_bytes(&self.as_bytes()[i + 1..]), ); } } ( &*self, OsStr::from_bytes(&self.as_bytes()[self.len()..self.len()]), ) } fn trim_left_matches(&self, byte: u8) -> &OsStr { let mut found = false; for (i, b) in self.as_bytes().iter().enumerate() { if b != &byte { return OsStr::from_bytes(&self.as_bytes()[i..]); } else { found = true; } } if found { return OsStr::from_bytes(&self.as_bytes()[self.len()..]); } &*self } fn split_at(&self, i: usize) -> (&OsStr, &OsStr) { ( OsStr::from_bytes(&self.as_bytes()[..i]), OsStr::from_bytes(&self.as_bytes()[i..]), ) } fn split(&self, b: u8) -> OsSplit { OsSplit { sep: b, val: self.as_bytes(), pos: 0, } } } #[doc(hidden)] #[derive(Clone, Debug)] pub struct OsSplit<'a> { sep: u8, val: &'a [u8], pos: usize, } impl<'a> Iterator for OsSplit<'a> { type Item = &'a OsStr; fn next(&mut self) -> Option<&'a OsStr> { debugln!("OsSplit::next: self={:?}", self); if self.pos == self.val.len() { return None; } let start = self.pos; for b in &self.val[start..] { self.pos += 1; if *b == self.sep { return Some(OsStr::from_bytes(&self.val[start..self.pos - 1])); } } Some(OsStr::from_bytes(&self.val[start..])) } } vendor/clap/src/strext.rs0000664000175000017500000000057114172417313016262 0ustar mwhudsonmwhudsonpub trait _StrExt { fn _is_char_boundary(&self, index: usize) -> bool; } impl _StrExt for str { #[inline] fn _is_char_boundary(&self, index: usize) -> bool { if index == self.len() { return true; } match self.as_bytes().get(index) { None => false, Some(&b) => !(128..192).contains(&b), } } } vendor/clap/src/macros.rs0000664000175000017500000011226214172417313016216 0ustar mwhudsonmwhudson/// A convenience macro for loading the YAML file at compile time (relative to the current file, /// like modules work). That YAML object can then be passed to this function. /// /// # Panics /// /// The YAML file must be properly formatted or this function will panic!(). A good way to /// ensure this doesn't happen is to run your program with the `--help` switch. If this passes /// without error, you needn't worry because the YAML is properly formatted. /// /// # Examples /// /// The following example shows how to load a properly formatted YAML file to build an instance /// of an `App` struct. /// /// ```ignore /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let yml = load_yaml!("app.yml"); /// let app = App::from_yaml(yml); /// /// // continued logic goes here, such as `app.get_matches()` etc. /// # } /// ``` #[cfg(feature = "yaml")] #[macro_export] macro_rules! load_yaml { ($yml:expr) => { &::clap::YamlLoader::load_from_str(include_str!($yml)).expect("failed to load YAML file")[0] }; } /// Convenience macro getting a typed value `T` where `T` implements [`std::str::FromStr`] from an /// argument value. This macro returns a `Result` which allows you as the developer to /// decide what you'd like to do on a failed parse. There are two types of errors, parse failures /// and those where the argument wasn't present (such as a non-required argument). 
You can use /// it to get a single value, or a iterator as with the [`ArgMatches::values_of`] /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let matches = App::new("myapp") /// .arg_from_usage("[length] 'Set the length to use as a pos whole num, i.e. 20'") /// .get_matches(); /// /// let len = value_t!(matches.value_of("length"), u32).unwrap_or_else(|e| e.exit()); /// let also_len = value_t!(matches, "length", u32).unwrap_or_else(|e| e.exit()); /// /// println!("{} + 2: {}", len, len + 2); /// # } /// ``` /// [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`ArgMatches::values_of`]: ./struct.ArgMatches.html#method.values_of /// [`Result`]: https://doc.rust-lang.org/std/result/enum.Result.html #[macro_export] macro_rules! value_t { ($m:ident, $v:expr, $t:ty) => { value_t!($m.value_of($v), $t) }; ($m:ident.value_of($v:expr), $t:ty) => { if let Some(v) = $m.value_of($v) { match v.parse::<$t>() { Ok(val) => Ok(val), Err(_) => Err(::clap::Error::value_validation_auto(format!( "The argument '{}' isn't a valid value", v ))), } } else { Err(::clap::Error::argument_not_found_auto($v)) } }; } /// Convenience macro getting a typed value `T` where `T` implements [`std::str::FromStr`] or /// exiting upon error, instead of returning a [`Result`] type. /// /// **NOTE:** This macro is for backwards compatibility sake. Prefer /// [`value_t!(/* ... */).unwrap_or_else(|e| e.exit())`] /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let matches = App::new("myapp") /// .arg_from_usage("[length] 'Set the length to use as a pos whole num, i.e. 20'") /// .get_matches(); /// /// let len = value_t_or_exit!(matches.value_of("length"), u32); /// let also_len = value_t_or_exit!(matches, "length", u32); /// /// println!("{} + 2: {}", len, len + 2); /// # } /// ``` /// [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`Result`]: https://doc.rust-lang.org/std/result/enum.Result.html /// [`value_t!(/* ... */).unwrap_or_else(|e| e.exit())`]: ./macro.value_t!.html #[macro_export] macro_rules! value_t_or_exit { ($m:ident, $v:expr, $t:ty) => { value_t_or_exit!($m.value_of($v), $t) }; ($m:ident.value_of($v:expr), $t:ty) => { if let Some(v) = $m.value_of($v) { match v.parse::<$t>() { Ok(val) => val, Err(_) => ::clap::Error::value_validation_auto(format!( "The argument '{}' isn't a valid value", v )) .exit(), } } else { ::clap::Error::argument_not_found_auto($v).exit() } }; } /// Convenience macro getting a typed value [`Vec`] where `T` implements [`std::str::FromStr`] /// This macro returns a [`clap::Result>`] which allows you as the developer to decide /// what you'd like to do on a failed parse. /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let matches = App::new("myapp") /// .arg_from_usage("[seq]... 'A sequence of pos whole nums, i.e. 
20 45'") /// .get_matches(); /// /// let vals = values_t!(matches.values_of("seq"), u32).unwrap_or_else(|e| e.exit()); /// for v in &vals { /// println!("{} + 2: {}", v, v + 2); /// } /// /// let vals = values_t!(matches, "seq", u32).unwrap_or_else(|e| e.exit()); /// for v in &vals { /// println!("{} + 2: {}", v, v + 2); /// } /// # } /// ``` /// [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html /// [`clap::Result>`]: ./type.Result.html #[macro_export] macro_rules! values_t { ($m:ident, $v:expr, $t:ty) => { values_t!($m.values_of($v), $t) }; ($m:ident.values_of($v:expr), $t:ty) => { if let Some(vals) = $m.values_of($v) { let mut tmp = vec![]; let mut err = None; for pv in vals { match pv.parse::<$t>() { Ok(rv) => tmp.push(rv), Err(..) => { err = Some(::clap::Error::value_validation_auto(format!( "The argument '{}' isn't a valid value", pv ))); break; } } } match err { Some(e) => Err(e), None => Ok(tmp), } } else { Err(::clap::Error::argument_not_found_auto($v)) } }; } /// Convenience macro getting a typed value [`Vec`] where `T` implements [`std::str::FromStr`] /// or exiting upon error. /// /// **NOTE:** This macro is for backwards compatibility sake. Prefer /// [`values_t!(/* ... */).unwrap_or_else(|e| e.exit())`] /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let matches = App::new("myapp") /// .arg_from_usage("[seq]... 'A sequence of pos whole nums, i.e. 20 45'") /// .get_matches(); /// /// let vals = values_t_or_exit!(matches.values_of("seq"), u32); /// for v in &vals { /// println!("{} + 2: {}", v, v + 2); /// } /// /// // type for example only /// let vals: Vec = values_t_or_exit!(matches, "seq", u32); /// for v in &vals { /// println!("{} + 2: {}", v, v + 2); /// } /// # } /// ``` /// [`values_t!(/* ... */).unwrap_or_else(|e| e.exit())`]: ./macro.values_t!.html /// [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html #[macro_export] macro_rules! values_t_or_exit { ($m:ident, $v:expr, $t:ty) => { values_t_or_exit!($m.values_of($v), $t) }; ($m:ident.values_of($v:expr), $t:ty) => { if let Some(vals) = $m.values_of($v) { vals.map(|v| { v.parse::<$t>().unwrap_or_else(|_| { ::clap::Error::value_validation_auto(format!( "One or more arguments aren't valid values" )) .exit() }) }) .collect::>() } else { ::clap::Error::argument_not_found_auto($v).exit() } }; } // _clap_count_exprs! is derived from https://github.com/DanielKeep/rust-grabbag // commit: 82a35ca5d9a04c3b920622d542104e3310ee5b07 // License: MIT // Copyright â“’ 2015 grabbag contributors. // Licensed under the MIT license (see LICENSE or ) or the Apache License, Version 2.0 (see LICENSE of // ), at your option. All // files in the project carrying such notice may not be copied, modified, // or distributed except according to those terms. // /// Counts the number of comma-delimited expressions passed to it. The result is a compile-time /// evaluable expression, suitable for use as a static array size, or the value of a `const`. /// /// # Examples /// /// ``` /// # #[macro_use] extern crate clap; /// # fn main() { /// const COUNT: usize = _clap_count_exprs!(a, 5+1, "hi there!".into_string()); /// assert_eq!(COUNT, 3); /// # } /// ``` #[macro_export] macro_rules! 
_clap_count_exprs { () => { 0 }; ($e:expr) => { 1 }; ($e:expr, $($es:expr),+) => { 1 + $crate::_clap_count_exprs!($($es),*) }; } /// Convenience macro to generate more complete enums with variants to be used as a type when /// parsing arguments. This enum also provides a `variants()` function which can be used to /// retrieve a `Vec<&'static str>` of the variant names, as well as implementing [`FromStr`] and /// [`Display`] automatically. /// /// **NOTE:** Case insensitivity is supported for ASCII characters only. It's highly recommended to /// use [`Arg::case_insensitive(true)`] for args that will be used with these enums /// /// **NOTE:** This macro automatically implements [`std::str::FromStr`] and [`std::fmt::Display`] /// /// **NOTE:** These enums support pub (or not) and uses of the `#[derive()]` traits /// /// # Examples /// /// ```rust /// # #[macro_use] /// # extern crate clap; /// # use clap::{App, Arg}; /// arg_enum!{ /// #[derive(PartialEq, Debug)] /// pub enum Foo { /// Bar, /// Baz, /// Qux /// } /// } /// // Foo enum can now be used via Foo::Bar, or Foo::Baz, etc /// // and implements std::str::FromStr to use with the value_t! macros /// fn main() { /// let m = App::new("app") /// .arg(Arg::from_usage(" 'the foo'") /// .possible_values(&Foo::variants()) /// .case_insensitive(true)) /// .get_matches_from(vec![ /// "app", "baz" /// ]); /// let f = value_t!(m, "foo", Foo).unwrap_or_else(|e| e.exit()); /// /// assert_eq!(f, Foo::Baz); /// } /// ``` /// [`FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html /// [`Display`]: https://doc.rust-lang.org/std/fmt/trait.Display.html /// [`std::fmt::Display`]: https://doc.rust-lang.org/std/fmt/trait.Display.html /// [`Arg::case_insensitive(true)`]: ./struct.Arg.html#method.case_insensitive #[macro_export] macro_rules! 
arg_enum { (@as_item $($i:item)*) => ($($i)*); (@impls ( $($tts:tt)* ) -> ($e:ident, $($v:ident),+)) => { arg_enum!(@as_item $($tts)* impl ::std::str::FromStr for $e { type Err = String; fn from_str(s: &str) -> ::std::result::Result { #[allow(deprecated, unused_imports)] use ::std::ascii::AsciiExt; match s { $(stringify!($v) | _ if s.eq_ignore_ascii_case(stringify!($v)) => Ok($e::$v)),+, _ => Err({ let v = vec![ $(stringify!($v),)+ ]; format!("valid values: {}", v.join(", ")) }), } } } impl ::std::fmt::Display for $e { fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result { match *self { $($e::$v => write!(f, stringify!($v)),)+ } } } impl $e { #[allow(dead_code)] pub fn variants() -> [&'static str; $crate::_clap_count_exprs!($(stringify!($v)),+)] { [ $(stringify!($v),)+ ] } }); }; ($(#[$($m:meta),+])+ pub enum $e:ident { $($v:ident $(=$val:expr)*,)+ } ) => { arg_enum!(@impls ($(#[$($m),+])+ pub enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; ($(#[$($m:meta),+])+ pub enum $e:ident { $($v:ident $(=$val:expr)*),+ } ) => { arg_enum!(@impls ($(#[$($m),+])+ pub enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; ($(#[$($m:meta),+])+ enum $e:ident { $($v:ident $(=$val:expr)*,)+ } ) => { arg_enum!(@impls ($(#[$($m),+])+ enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; ($(#[$($m:meta),+])+ enum $e:ident { $($v:ident $(=$val:expr)*),+ } ) => { arg_enum!(@impls ($(#[$($m),+])+ enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; (pub enum $e:ident { $($v:ident $(=$val:expr)*,)+ } ) => { arg_enum!(@impls (pub enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; (pub enum $e:ident { $($v:ident $(=$val:expr)*),+ } ) => { arg_enum!(@impls (pub enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; (enum $e:ident { $($v:ident $(=$val:expr)*,)+ } ) => { arg_enum!(@impls (enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; (enum $e:ident { $($v:ident $(=$val:expr)*),+ } ) => { arg_enum!(@impls (enum $e { $($v$(=$val)*),+ }) -> ($e, $($v),+) ); }; } /// Allows you to pull the version from your Cargo.toml at compile time as /// `MAJOR.MINOR.PATCH_PKGVERSION_PRE` /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let m = App::new("app") /// .version(crate_version!()) /// .get_matches(); /// # } /// ``` #[cfg(not(feature = "no_cargo"))] #[macro_export] macro_rules! crate_version { () => { env!("CARGO_PKG_VERSION") }; } /// Allows you to pull the authors for the app from your Cargo.toml at /// compile time in the form: /// `"author1 lastname :author2 lastname "` /// /// You can replace the colons with a custom separator by supplying a /// replacement string, so, for example, /// `crate_authors!(",\n")` would become /// `"author1 lastname ,\nauthor2 lastname ,\nauthor3 lastname "` /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let m = App::new("app") /// .author(crate_authors!("\n")) /// .get_matches(); /// # } /// ``` #[cfg(not(feature = "no_cargo"))] #[macro_export] macro_rules! 
crate_authors { ($sep:expr) => {{ use std::ops::Deref; #[allow(deprecated)] use std::sync::{Once, ONCE_INIT}; #[allow(missing_copy_implementations)] #[allow(dead_code)] struct CargoAuthors { __private_field: (), }; impl Deref for CargoAuthors { type Target = str; #[allow(unsafe_code)] fn deref(&self) -> &'static str { #[allow(deprecated)] static ONCE: Once = ONCE_INIT; static mut VALUE: *const String = 0 as *const String; unsafe { ONCE.call_once(|| { let s = env!("CARGO_PKG_AUTHORS").replace(':', $sep); VALUE = Box::into_raw(Box::new(s)); }); &(*VALUE)[..] } } } &*CargoAuthors { __private_field: (), } }}; () => { env!("CARGO_PKG_AUTHORS") }; } /// Allows you to pull the description from your Cargo.toml at compile time. /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let m = App::new("app") /// .about(crate_description!()) /// .get_matches(); /// # } /// ``` #[cfg(not(feature = "no_cargo"))] #[macro_export] macro_rules! crate_description { () => { env!("CARGO_PKG_DESCRIPTION") }; } /// Allows you to pull the name from your Cargo.toml at compile time. /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # use clap::App; /// # fn main() { /// let m = App::new(crate_name!()) /// .get_matches(); /// # } /// ``` #[cfg(not(feature = "no_cargo"))] #[macro_export] macro_rules! crate_name { () => { env!("CARGO_PKG_NAME") }; } /// Allows you to build the `App` instance from your Cargo.toml at compile time. /// /// Equivalent to using the `crate_*!` macros with their respective fields. /// /// Provided separator is for the [`crate_authors!`](macro.crate_authors.html) macro, /// refer to the documentation therefor. /// /// **NOTE:** Changing the values in your `Cargo.toml` does not trigger a re-build automatically, /// and therefore won't change the generated output until you recompile. /// /// **Pro Tip:** In some cases you can "trick" the compiler into triggering a rebuild when your /// `Cargo.toml` is changed by including this in your `src/main.rs` file /// `include_str!("../Cargo.toml");` /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # fn main() { /// let m = app_from_crate!().get_matches(); /// # } /// ``` #[cfg(not(feature = "no_cargo"))] #[macro_export] macro_rules! app_from_crate { () => { $crate::App::new(crate_name!()) .version(crate_version!()) .author(crate_authors!()) .about(crate_description!()) }; ($sep:expr) => { $crate::App::new(crate_name!()) .version(crate_version!()) .author(crate_authors!($sep)) .about(crate_description!()) }; } /// Build `App`, `Arg`s, `SubCommand`s and `Group`s with Usage-string like input /// but without the associated parsing runtime cost. /// /// `clap_app!` also supports several shorthand syntaxes. /// /// # Examples /// /// ```no_run /// # #[macro_use] /// # extern crate clap; /// # fn main() { /// let matches = clap_app!(myapp => /// (version: "1.0") /// (author: "Kevin K. ") /// (about: "Does awesome things") /// (@arg CONFIG: -c --config +takes_value "Sets a custom config file") /// (@arg INPUT: +required "Sets the input file to use") /// (@arg debug: -d ... "Sets the level of debugging information") /// (@group difficulty => /// (@arg hard: -h --hard "Sets hard mode") /// (@arg normal: -n --normal "Sets normal mode") /// (@arg easy: -e --easy "Sets easy mode") /// ) /// (@subcommand test => /// (about: "controls testing features") /// (version: "1.3") /// (author: "Someone E. 
") /// (@arg verbose: -v --verbose "Print test information verbosely") /// ) /// ) /// .get_matches(); /// # } /// ``` /// # Shorthand Syntax for Args /// /// * A single hyphen followed by a character (such as `-c`) sets the [`Arg::short`] /// * A double hyphen followed by a character or word (such as `--config`) sets [`Arg::long`] /// * If one wishes to use a [`Arg::long`] with a hyphen inside (i.e. `--config-file`), you /// must use `--("config-file")` due to limitations of the Rust macro system. /// * Three dots (`...`) sets [`Arg::multiple(true)`] /// * Angled brackets after either a short or long will set [`Arg::value_name`] and /// `Arg::required(true)` such as `--config ` = `Arg::value_name("FILE")` and /// `Arg::required(true)` /// * Square brackets after either a short or long will set [`Arg::value_name`] and /// `Arg::required(false)` such as `--config [FILE]` = `Arg::value_name("FILE")` and /// `Arg::required(false)` /// * There are short hand syntaxes for Arg methods that accept booleans /// * A plus sign will set that method to `true` such as `+required` = `Arg::required(true)` /// * An exclamation will set that method to `false` such as `!required` = `Arg::required(false)` /// * A `#{min, max}` will set [`Arg::min_values(min)`] and [`Arg::max_values(max)`] /// * An asterisk (`*`) will set `Arg::required(true)` /// * Curly brackets around a `fn` will set [`Arg::validator`] as in `{fn}` = `Arg::validator(fn)` /// * An Arg method that accepts a string followed by square brackets will set that method such as /// `conflicts_with[FOO]` will set `Arg::conflicts_with("FOO")` (note the lack of quotes around /// `FOO` in the macro) /// * An Arg method that takes a string and can be set multiple times (such as /// [`Arg::conflicts_with`]) followed by square brackets and a list of values separated by spaces /// will set that method such as `conflicts_with[FOO BAR BAZ]` will set /// `Arg::conflicts_with("FOO")`, `Arg::conflicts_with("BAR")`, and `Arg::conflicts_with("BAZ")` /// (note the lack of quotes around the values in the macro) /// /// # Shorthand Syntax for Groups /// /// * There are short hand syntaxes for `ArgGroup` methods that accept booleans /// * A plus sign will set that method to `true` such as `+required` = `ArgGroup::required(true)` /// * An exclamation will set that method to `false` such as `!required` = `ArgGroup::required(false)` /// /// [`Arg::short`]: ./struct.Arg.html#method.short /// [`Arg::long`]: ./struct.Arg.html#method.long /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple /// [`Arg::value_name`]: ./struct.Arg.html#method.value_name /// [`Arg::min_values(min)`]: ./struct.Arg.html#method.min_values /// [`Arg::max_values(max)`]: ./struct.Arg.html#method.max_values /// [`Arg::validator`]: ./struct.Arg.html#method.validator /// [`Arg::conflicts_with`]: ./struct.Arg.html#method.conflicts_with #[macro_export] macro_rules! 
clap_app { (@app ($builder:expr)) => { $builder }; (@app ($builder:expr) (@arg ($name:expr): $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app ($builder.arg( clap_app!{ @arg ($crate::Arg::with_name($name)) (-) $($tail)* })) $($tt)* } }; (@app ($builder:expr) (@arg $name:ident: $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app ($builder.arg( clap_app!{ @arg ($crate::Arg::with_name(stringify!($name))) (-) $($tail)* })) $($tt)* } }; (@app ($builder:expr) (@setting $setting:ident) $($tt:tt)*) => { clap_app!{ @app ($builder.setting($crate::AppSettings::$setting)) $($tt)* } }; // Treat the application builder as an argument to set its attributes (@app ($builder:expr) (@attributes $($attr:tt)*) $($tt:tt)*) => { clap_app!{ @app (clap_app!{ @arg ($builder) $($attr)* }) $($tt)* } }; (@app ($builder:expr) (@group $name:ident => $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app (clap_app!{ @group ($builder, $crate::ArgGroup::with_name(stringify!($name))) $($tail)* }) $($tt)* } }; (@app ($builder:expr) (@group $name:ident !$ident:ident => $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app (clap_app!{ @group ($builder, $crate::ArgGroup::with_name(stringify!($name)).$ident(false)) $($tail)* }) $($tt)* } }; (@app ($builder:expr) (@group $name:ident +$ident:ident => $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app (clap_app!{ @group ($builder, $crate::ArgGroup::with_name(stringify!($name)).$ident(true)) $($tail)* }) $($tt)* } }; // Handle subcommand creation (@app ($builder:expr) (@subcommand $name:ident => $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @app ($builder.subcommand( clap_app!{ @app ($crate::SubCommand::with_name(stringify!($name))) $($tail)* } )) $($tt)* } }; // Yaml like function calls - used for setting various meta directly against the app (@app ($builder:expr) ($ident:ident: $($v:expr),*) $($tt:tt)*) => { // clap_app!{ @app ($builder.$ident($($v),*)) $($tt)* } clap_app!{ @app ($builder.$ident($($v),*)) $($tt)* } }; // Add members to group and continue argument handling with the parent builder (@group ($builder:expr, $group:expr)) => { $builder.group($group) }; // Treat the group builder as an argument to set its attributes (@group ($builder:expr, $group:expr) (@attributes $($attr:tt)*) $($tt:tt)*) => { clap_app!{ @group ($builder, clap_app!{ @arg ($group) (-) $($attr)* }) $($tt)* } }; (@group ($builder:expr, $group:expr) (@arg $name:ident: $($tail:tt)*) $($tt:tt)*) => { clap_app!{ @group (clap_app!{ @app ($builder) (@arg $name: $($tail)*) }, $group.arg(stringify!($name))) $($tt)* } }; // No more tokens to munch (@arg ($arg:expr) $modes:tt) => { $arg }; // Shorthand tokens influenced by the usage_string (@arg ($arg:expr) $modes:tt --($long:expr) $($tail:tt)*) => { clap_app!{ @arg ($arg.long($long)) $modes $($tail)* } }; (@arg ($arg:expr) $modes:tt --$long:ident $($tail:tt)*) => { clap_app!{ @arg ($arg.long(stringify!($long))) $modes $($tail)* } }; (@arg ($arg:expr) $modes:tt -$short:ident $($tail:tt)*) => { clap_app!{ @arg ($arg.short(stringify!($short))) $modes $($tail)* } }; (@arg ($arg:expr) (-) <$var:ident> $($tail:tt)*) => { clap_app!{ @arg ($arg.value_name(stringify!($var))) (+) +takes_value +required $($tail)* } }; (@arg ($arg:expr) (+) <$var:ident> $($tail:tt)*) => { clap_app!{ @arg ($arg.value_name(stringify!($var))) (+) $($tail)* } }; (@arg ($arg:expr) (-) [$var:ident] $($tail:tt)*) => { clap_app!{ @arg ($arg.value_name(stringify!($var))) (+) +takes_value $($tail)* } }; (@arg ($arg:expr) (+) [$var:ident] $($tail:tt)*) => { clap_app!{ @arg ($arg.value_name(stringify!($var))) (+) $($tail)* } }; (@arg 
($arg:expr) $modes:tt ... $($tail:tt)*) => { clap_app!{ @arg ($arg) $modes +multiple $($tail)* } }; // Shorthand magic (@arg ($arg:expr) $modes:tt #{$n:expr, $m:expr} $($tail:tt)*) => { clap_app!{ @arg ($arg) $modes min_values($n) max_values($m) $($tail)* } }; (@arg ($arg:expr) $modes:tt * $($tail:tt)*) => { clap_app!{ @arg ($arg) $modes +required $($tail)* } }; // !foo -> .foo(false) (@arg ($arg:expr) $modes:tt !$ident:ident $($tail:tt)*) => { clap_app!{ @arg ($arg.$ident(false)) $modes $($tail)* } }; // +foo -> .foo(true) (@arg ($arg:expr) $modes:tt +$ident:ident $($tail:tt)*) => { clap_app!{ @arg ($arg.$ident(true)) $modes $($tail)* } }; // Validator (@arg ($arg:expr) $modes:tt {$fn_:expr} $($tail:tt)*) => { clap_app!{ @arg ($arg.validator($fn_)) $modes $($tail)* } }; (@as_expr $expr:expr) => { $expr }; // Help (@arg ($arg:expr) $modes:tt $desc:tt) => { $arg.help(clap_app!{ @as_expr $desc }) }; // Handle functions that need to be called multiple times for each argument (@arg ($arg:expr) $modes:tt $ident:ident[$($target:ident)*] $($tail:tt)*) => { clap_app!{ @arg ($arg $( .$ident(stringify!($target)) )*) $modes $($tail)* } }; // Inherit builder's functions, e.g. `index(2)`, `requires_if("val", "arg")` (@arg ($arg:expr) $modes:tt $ident:ident($($expr:expr),*) $($tail:tt)*) => { clap_app!{ @arg ($arg.$ident($($expr),*)) $modes $($tail)* } }; // Inherit builder's functions with trailing comma, e.g. `index(2,)`, `requires_if("val", "arg",)` (@arg ($arg:expr) $modes:tt $ident:ident($($expr:expr,)*) $($tail:tt)*) => { clap_app!{ @arg ($arg.$ident($($expr),*)) $modes $($tail)* } }; // Build a subcommand outside of an app. (@subcommand $name:ident => $($tail:tt)*) => { clap_app!{ @app ($crate::SubCommand::with_name(stringify!($name))) $($tail)* } }; // Start the magic (($name:expr) => $($tail:tt)*) => {{ clap_app!{ @app ($crate::App::new($name)) $($tail)*} }}; ($name:ident => $($tail:tt)*) => {{ clap_app!{ @app ($crate::App::new(stringify!($name))) $($tail)*} }}; } macro_rules! impl_settings { ($n:ident, $($v:ident => $c:path),+) => { pub fn set(&mut self, s: $n) { match s { $($n::$v => self.0.insert($c)),+ } } pub fn unset(&mut self, s: $n) { match s { $($n::$v => self.0.remove($c)),+ } } pub fn is_set(&self, s: $n) -> bool { match s { $($n::$v => self.0.contains($c)),+ } } }; } // Convenience for writing to stderr thanks to https://github.com/BurntSushi macro_rules! wlnerr( (@nopanic $($arg:tt)*) => ({ use std::io::{Write, stderr}; let _ = writeln!(&mut stderr().lock(), $($arg)*); }); ($($arg:tt)*) => ({ use std::io::{Write, stderr}; writeln!(&mut stderr(), $($arg)*).ok(); }) ); #[cfg(feature = "debug")] #[cfg_attr(feature = "debug", macro_use)] #[cfg_attr(feature = "debug", allow(unused_macros))] mod debug_macros { macro_rules! debugln { ($fmt:expr) => (println!(concat!("DEBUG:clap:", $fmt))); ($fmt:expr, $($arg:tt)*) => (println!(concat!("DEBUG:clap:",$fmt), $($arg)*)); } macro_rules! sdebugln { ($fmt:expr) => (println!($fmt)); ($fmt:expr, $($arg:tt)*) => (println!($fmt, $($arg)*)); } macro_rules! debug { ($fmt:expr) => (print!(concat!("DEBUG:clap:", $fmt))); ($fmt:expr, $($arg:tt)*) => (print!(concat!("DEBUG:clap:",$fmt), $($arg)*)); } macro_rules! sdebug { ($fmt:expr) => (print!($fmt)); ($fmt:expr, $($arg:tt)*) => (print!($fmt, $($arg)*)); } } #[cfg(not(feature = "debug"))] #[cfg_attr(not(feature = "debug"), macro_use)] mod debug_macros { macro_rules! debugln { ($fmt:expr) => {}; ($fmt:expr, $($arg:tt)*) => {}; } macro_rules! 
sdebugln { ($fmt:expr) => {}; ($fmt:expr, $($arg:tt)*) => {}; } macro_rules! debug { ($fmt:expr) => {}; ($fmt:expr, $($arg:tt)*) => {}; } } // Helper/deduplication macro for printing the correct number of spaces in help messages // used in: // src/args/arg_builder/*.rs // src/app/mod.rs macro_rules! write_nspaces { ($dst:expr, $num:expr) => {{ debugln!("write_spaces!: num={}", $num); for _ in 0..$num { $dst.write_all(b" ")?; } }}; } // convenience macro for remove an item from a vec //macro_rules! vec_remove_all { // ($vec:expr, $to_rem:expr) => { // debugln!("vec_remove_all! to_rem={:?}", $to_rem); // for i in (0 .. $vec.len()).rev() { // let should_remove = $to_rem.any(|name| name == &$vec[i]); // if should_remove { $vec.swap_remove(i); } // } // }; //} macro_rules! find_from { ($_self:expr, $arg_name:expr, $from:ident, $matcher:expr) => {{ let mut ret = None; for k in $matcher.arg_names() { if let Some(f) = find_by_name!($_self, k, flags, iter) { if let Some(ref v) = f.$from() { if v.contains($arg_name) { ret = Some(f.to_string()); } } } if let Some(o) = find_by_name!($_self, k, opts, iter) { if let Some(ref v) = o.$from() { if v.contains(&$arg_name) { ret = Some(o.to_string()); } } } if let Some(pos) = find_by_name!($_self, k, positionals, values) { if let Some(ref v) = pos.$from() { if v.contains($arg_name) { ret = Some(pos.b.name.to_owned()); } } } } ret }}; } //macro_rules! find_name_from { // ($_self:expr, $arg_name:expr, $from:ident, $matcher:expr) => {{ // let mut ret = None; // for k in $matcher.arg_names() { // if let Some(f) = find_by_name!($_self, k, flags, iter) { // if let Some(ref v) = f.$from() { // if v.contains($arg_name) { // ret = Some(f.b.name); // } // } // } // if let Some(o) = find_by_name!($_self, k, opts, iter) { // if let Some(ref v) = o.$from() { // if v.contains(&$arg_name) { // ret = Some(o.b.name); // } // } // } // if let Some(pos) = find_by_name!($_self, k, positionals, values) { // if let Some(ref v) = pos.$from() { // if v.contains($arg_name) { // ret = Some(pos.b.name); // } // } // } // } // ret // }}; //} macro_rules! find_any_by_name { ($p:expr, $name:expr) => {{ fn as_trait_obj<'a, 'b, T: AnyArg<'a, 'b>>(x: &T) -> &AnyArg<'a, 'b> { x } find_by_name!($p, $name, flags, iter) .map(as_trait_obj) .or(find_by_name!($p, $name, opts, iter) .map(as_trait_obj) .or(find_by_name!($p, $name, positionals, values).map(as_trait_obj))) }}; } // Finds an arg by name macro_rules! find_by_name { ($p:expr, $name:expr, $what:ident, $how:ident) => { $p.$what.$how().find(|o| o.b.name == $name) }; } // Finds an option including if it's aliased macro_rules! find_opt_by_long { (@os $_self:ident, $long:expr) => {{ _find_by_long!($_self, $long, opts) }}; ($_self:ident, $long:expr) => {{ _find_by_long!($_self, $long, opts) }}; } macro_rules! find_flag_by_long { (@os $_self:ident, $long:expr) => {{ _find_by_long!($_self, $long, flags) }}; ($_self:ident, $long:expr) => {{ _find_by_long!($_self, $long, flags) }}; } macro_rules! _find_by_long { ($_self:ident, $long:expr, $what:ident) => {{ $_self .$what .iter() .filter(|a| a.s.long.is_some()) .find(|a| { a.s.long.unwrap() == $long || (a.s.aliases.is_some() && a.s .aliases .as_ref() .unwrap() .iter() .any(|&(alias, _)| alias == $long)) }) }}; } // Finds an option macro_rules! find_opt_by_short { ($_self:ident, $short:expr) => {{ _find_by_short!($_self, $short, opts) }}; } macro_rules! find_flag_by_short { ($_self:ident, $short:expr) => {{ _find_by_short!($_self, $short, flags) }}; } macro_rules! 
_find_by_short { ($_self:ident, $short:expr, $what:ident) => {{ $_self .$what .iter() .filter(|a| a.s.short.is_some()) .find(|a| a.s.short.unwrap() == $short) }}; } macro_rules! find_subcmd { ($_self:expr, $sc:expr) => {{ $_self.subcommands.iter().find(|s| { &*s.p.meta.name == $sc || (s.p.meta.aliases.is_some() && s.p .meta .aliases .as_ref() .unwrap() .iter() .any(|&(n, _)| n == $sc)) }) }}; } macro_rules! shorts { ($_self:ident) => {{ _shorts_longs!($_self, short) }}; } macro_rules! longs { ($_self:ident) => {{ _shorts_longs!($_self, long) }}; } macro_rules! _shorts_longs { ($_self:ident, $what:ident) => {{ $_self .flags .iter() .filter(|f| f.s.$what.is_some()) .map(|f| f.s.$what.as_ref().unwrap()) .chain( $_self .opts .iter() .filter(|o| o.s.$what.is_some()) .map(|o| o.s.$what.as_ref().unwrap()), ) }}; } macro_rules! arg_names { ($_self:ident) => {{ _names!(@args $_self) }}; } macro_rules! sc_names { ($_self:ident) => {{ _names!(@sc $_self) }}; } macro_rules! _names { (@args $_self:ident) => {{ $_self.flags.iter().map(|f| &*f.b.name).chain( $_self .opts .iter() .map(|o| &*o.b.name) .chain($_self.positionals.values().map(|p| &*p.b.name)), ) }}; (@sc $_self:ident) => {{ $_self.subcommands.iter().map(|s| &*s.p.meta.name).chain( $_self .subcommands .iter() .filter(|s| s.p.meta.aliases.is_some()) .flat_map(|s| s.p.meta.aliases.as_ref().unwrap().iter().map(|&(n, _)| n)), ) }}; } vendor/clap/src/suggestions.rs0000664000175000017500000001032614172417313017302 0ustar mwhudsonmwhudson// Internal use crate::{app::App, fmt::Format}; /// Produces a string from a given list of possible values which is similar to /// the passed in value `v` with a certain confidence. /// Thus in a list of possible values like ["foo", "bar"], the value "fop" will yield /// `Some("foo")`, whereas "blark" would yield `None`. 
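// A minimal usage sketch of the matching rule described above, assuming the
// "suggestions" feature is enabled (the non-"suggestions" fallback defined
// further down always returns `None`); the `vals` binding is only illustrative:
//
//     let vals = ["test", "temp"];
//     assert_eq!(did_you_mean("tst", vals.iter()), Some("test")); // jaro_winkler score clears 0.8
//     assert_eq!(did_you_mean("zzz", vals.iter()), None);         // no candidate clears the threshold
//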
#[cfg(feature = "suggestions")] #[cfg_attr(feature = "cargo-clippy", allow(clippy::needless_lifetimes))] pub fn did_you_mean<'a, T: ?Sized, I>(v: &str, possible_values: I) -> Option<&'a str> where T: AsRef + 'a, I: IntoIterator, { let mut candidate: Option<(f64, &str)> = None; for pv in possible_values { let confidence = strsim::jaro_winkler(v, pv.as_ref()); if confidence > 0.8 && (candidate.is_none() || (candidate.as_ref().unwrap().0 < confidence)) { candidate = Some((confidence, pv.as_ref())); } } match candidate { None => None, Some((_, candidate)) => Some(candidate), } } #[cfg(not(feature = "suggestions"))] pub fn did_you_mean<'a, T: ?Sized, I>(_: &str, _: I) -> Option<&'a str> where T: AsRef + 'a, I: IntoIterator, { None } /// Returns a suffix that can be empty, or is the standard 'did you mean' phrase pub fn did_you_mean_flag_suffix<'z, T, I>( arg: &str, args_rest: &'z [&str], longs: I, subcommands: &'z [App], ) -> (String, Option<&'z str>) where T: AsRef + 'z, I: IntoIterator, { if let Some(candidate) = did_you_mean(arg, longs) { let suffix = format!( "\n\tDid you mean {}{}?", Format::Good("--"), Format::Good(candidate) ); return (suffix, Some(candidate)); } subcommands .iter() .filter_map(|subcommand| { let opts = subcommand .p .flags .iter() .filter_map(|f| f.s.long) .chain(subcommand.p.opts.iter().filter_map(|o| o.s.long)); let candidate = match did_you_mean(arg, opts) { Some(candidate) => candidate, None => return None, }; let score = match args_rest.iter().position(|x| *x == subcommand.get_name()) { Some(score) => score, None => return None, }; let suffix = format!( "\n\tDid you mean to put '{}{}' after the subcommand '{}'?", Format::Good("--"), Format::Good(candidate), Format::Good(subcommand.get_name()) ); Some((score, (suffix, Some(candidate)))) }) .min_by_key(|&(score, _)| score) .map(|(_, suggestion)| suggestion) .unwrap_or_else(|| (String::new(), None)) } /// Returns a suffix that can be empty, or is the standard 'did you mean' phrase pub fn did_you_mean_value_suffix<'z, T, I>(arg: &str, values: I) -> (String, Option<&'z str>) where T: AsRef + 'z, I: IntoIterator, { match did_you_mean(arg, values) { Some(candidate) => { let suffix = format!("\n\tDid you mean '{}'?", Format::Good(candidate)); (suffix, Some(candidate)) } None => (String::new(), None), } } #[cfg(all(test, features = "suggestions"))] mod test { use super::*; #[test] fn possible_values_match() { let p_vals = ["test", "possible", "values"]; assert_eq!(did_you_mean("tst", p_vals.iter()), Some("test")); } #[test] fn possible_values_nomatch() { let p_vals = ["test", "possible", "values"]; assert!(did_you_mean("hahaahahah", p_vals.iter()).is_none()); } #[test] fn suffix_long() { let p_vals = ["test", "possible", "values"]; let suffix = "\n\tDid you mean \'--test\'?"; assert_eq!( did_you_mean_flag_suffix("tst", p_vals.iter(), []), (suffix, Some("test")) ); } #[test] fn suffix_enum() { let p_vals = ["test", "possible", "values"]; let suffix = "\n\tDid you mean \'test\'?"; assert_eq!( did_you_mean_value_suffix("tst", p_vals.iter()), (suffix, Some("test")) ); } } vendor/clap/src/args/0000775000175000017500000000000014172417313015314 5ustar mwhudsonmwhudsonvendor/clap/src/args/arg_builder/0000775000175000017500000000000014172417313017573 5ustar mwhudsonmwhudsonvendor/clap/src/args/arg_builder/base.rs0000664000175000017500000000215414172417313021055 0ustar mwhudsonmwhudsonuse crate::args::{Arg, ArgFlags, ArgSettings}; #[derive(Debug, Clone, Default)] pub struct Base<'a, 'b> where 'a: 'b, { pub name: &'a str, pub help: 
Option<&'b str>, pub long_help: Option<&'b str>, pub blacklist: Option>, pub settings: ArgFlags, pub r_unless: Option>, pub overrides: Option>, pub groups: Option>, pub requires: Option, &'a str)>>, } impl<'n, 'e> Base<'n, 'e> { pub fn new(name: &'n str) -> Self { Base { name, ..Default::default() } } pub fn set(&mut self, s: ArgSettings) { self.settings.set(s); } pub fn unset(&mut self, s: ArgSettings) { self.settings.unset(s); } pub fn is_set(&self, s: ArgSettings) -> bool { self.settings.is_set(s) } } impl<'n, 'e, 'z> From<&'z Arg<'n, 'e>> for Base<'n, 'e> { fn from(a: &'z Arg<'n, 'e>) -> Self { a.b.clone() } } impl<'n, 'e> PartialEq for Base<'n, 'e> { fn eq(&self, other: &Base<'n, 'e>) -> bool { self.name == other.name } } vendor/clap/src/args/arg_builder/mod.rs0000664000175000017500000000041614160055207020716 0ustar mwhudsonmwhudsonpub use self::base::Base; pub use self::flag::FlagBuilder; pub use self::option::OptBuilder; pub use self::positional::PosBuilder; pub use self::switched::Switched; pub use self::valued::Valued; mod base; mod flag; mod option; mod positional; mod switched; mod valued; vendor/clap/src/args/arg_builder/option.rs0000664000175000017500000001756514172417313021467 0ustar mwhudsonmwhudson// Std use std::{ ffi::{OsStr, OsString}, fmt::{Display, Formatter, Result}, mem, rc::Rc, result::Result as StdResult, }; // Internal use crate::{ args::{AnyArg, Arg, ArgSettings, Base, DispOrder, Switched, Valued}, map::{self, VecMap}, INTERNAL_ERROR_MSG, }; #[allow(missing_debug_implementations)] #[doc(hidden)] #[derive(Default, Clone)] pub struct OptBuilder<'n, 'e> where 'n: 'e, { pub b: Base<'n, 'e>, pub s: Switched<'e>, pub v: Valued<'n, 'e>, } impl<'n, 'e> OptBuilder<'n, 'e> { pub fn new(name: &'n str) -> Self { OptBuilder { b: Base::new(name), ..Default::default() } } } impl<'n, 'e, 'z> From<&'z Arg<'n, 'e>> for OptBuilder<'n, 'e> { fn from(a: &'z Arg<'n, 'e>) -> Self { OptBuilder { b: Base::from(a), s: Switched::from(a), v: Valued::from(a), } } } impl<'n, 'e> From> for OptBuilder<'n, 'e> { fn from(mut a: Arg<'n, 'e>) -> Self { a.v.fill_in(); OptBuilder { b: mem::take(&mut a.b), s: mem::take(&mut a.s), v: mem::take(&mut a.v), } } } impl<'n, 'e> Display for OptBuilder<'n, 'e> { fn fmt(&self, f: &mut Formatter) -> Result { debugln!("OptBuilder::fmt:{}", self.b.name); let sep = if self.b.is_set(ArgSettings::RequireEquals) { "=" } else { " " }; // Write the name such --long or -l if let Some(l) = self.s.long { write!(f, "--{}{}", l, sep)?; } else { write!(f, "-{}{}", self.s.short.unwrap(), sep)?; } let delim = if self.is_set(ArgSettings::RequireDelimiter) { self.v.val_delim.expect(INTERNAL_ERROR_MSG) } else { ' ' }; // Write the values such as if let Some(ref vec) = self.v.val_names { let mut it = vec.iter().peekable(); while let Some((_, val)) = it.next() { write!(f, "<{}>", val)?; if it.peek().is_some() { write!(f, "{}", delim)?; } } let num = vec.len(); if self.is_set(ArgSettings::Multiple) && num == 1 { write!(f, "...")?; } } else if let Some(num) = self.v.num_vals { let mut it = (0..num).peekable(); while let Some(_) = it.next() { write!(f, "<{}>", self.b.name)?; if it.peek().is_some() { write!(f, "{}", delim)?; } } if self.is_set(ArgSettings::Multiple) && num == 1 { write!(f, "...")?; } } else { write!( f, "<{}>{}", self.b.name, if self.is_set(ArgSettings::Multiple) { "..." 
} else { "" } )?; } Ok(()) } } impl<'n, 'e> AnyArg<'n, 'e> for OptBuilder<'n, 'e> { fn name(&self) -> &'n str { self.b.name } fn overrides(&self) -> Option<&[&'e str]> { self.b.overrides.as_ref().map(|o| &o[..]) } fn requires(&self) -> Option<&[(Option<&'e str>, &'n str)]> { self.b.requires.as_ref().map(|o| &o[..]) } fn blacklist(&self) -> Option<&[&'e str]> { self.b.blacklist.as_ref().map(|o| &o[..]) } fn required_unless(&self) -> Option<&[&'e str]> { self.b.r_unless.as_ref().map(|o| &o[..]) } fn val_names(&self) -> Option<&VecMap<&'e str>> { self.v.val_names.as_ref() } fn is_set(&self, s: ArgSettings) -> bool { self.b.settings.is_set(s) } fn has_switch(&self) -> bool { true } fn set(&mut self, s: ArgSettings) { self.b.settings.set(s) } fn max_vals(&self) -> Option { self.v.max_vals } fn val_terminator(&self) -> Option<&'e str> { self.v.terminator } fn num_vals(&self) -> Option { self.v.num_vals } fn possible_vals(&self) -> Option<&[&'e str]> { self.v.possible_vals.as_ref().map(|o| &o[..]) } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator(&self) -> Option<&Rc StdResult<(), String>>> { self.v.validator.as_ref() } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator_os(&self) -> Option<&Rc StdResult<(), OsString>>> { self.v.validator_os.as_ref() } fn min_vals(&self) -> Option { self.v.min_vals } fn short(&self) -> Option { self.s.short } fn long(&self) -> Option<&'e str> { self.s.long } fn val_delim(&self) -> Option { self.v.val_delim } fn takes_value(&self) -> bool { true } fn help(&self) -> Option<&'e str> { self.b.help } fn long_help(&self) -> Option<&'e str> { self.b.long_help } fn default_val(&self) -> Option<&'e OsStr> { self.v.default_val } fn default_vals_ifs(&self) -> Option, &'e OsStr)>> { self.v.default_vals_ifs.as_ref().map(|vm| vm.values()) } fn env<'s>(&'s self) -> Option<(&'n OsStr, Option<&'s OsString>)> { self.v .env .as_ref() .map(|&(key, ref value)| (key, value.as_ref())) } fn longest_filter(&self) -> bool { true } fn aliases(&self) -> Option> { if let Some(ref aliases) = self.s.aliases { let vis_aliases: Vec<_> = aliases .iter() .filter_map(|&(n, v)| if v { Some(n) } else { None }) .collect(); if vis_aliases.is_empty() { None } else { Some(vis_aliases) } } else { None } } } impl<'n, 'e> DispOrder for OptBuilder<'n, 'e> { fn disp_ord(&self) -> usize { self.s.disp_ord } } impl<'n, 'e> PartialEq for OptBuilder<'n, 'e> { fn eq(&self, other: &OptBuilder<'n, 'e>) -> bool { self.b == other.b } } #[cfg(test)] mod test { use super::OptBuilder; use crate::{args::settings::ArgSettings, map::VecMap}; #[test] fn optbuilder_display1() { let mut o = OptBuilder::new("opt"); o.s.long = Some("option"); o.b.settings.set(ArgSettings::Multiple); assert_eq!(&*format!("{}", o), "--option ..."); } #[test] fn optbuilder_display2() { let mut v_names = VecMap::new(); v_names.insert(0, "file"); v_names.insert(1, "name"); let mut o2 = OptBuilder::new("opt"); o2.s.short = Some('o'); o2.v.val_names = Some(v_names); assert_eq!(&*format!("{}", o2), "-o "); } #[test] fn optbuilder_display3() { let mut v_names = VecMap::new(); v_names.insert(0, "file"); v_names.insert(1, "name"); let mut o2 = OptBuilder::new("opt"); o2.s.short = Some('o'); o2.v.val_names = Some(v_names); o2.b.settings.set(ArgSettings::Multiple); assert_eq!(&*format!("{}", o2), "-o "); } #[test] fn optbuilder_display_single_alias() { let mut o = OptBuilder::new("opt"); o.s.long = Some("option"); o.s.aliases = Some(vec![("als", true)]); assert_eq!(&*format!("{}", o), "--option 
"); } #[test] fn optbuilder_display_multiple_aliases() { let mut o = OptBuilder::new("opt"); o.s.long = Some("option"); o.s.aliases = Some(vec![ ("als_not_visible", false), ("als2", true), ("als3", true), ("als4", true), ]); assert_eq!(&*format!("{}", o), "--option "); } } vendor/clap/src/args/arg_builder/switched.rs0000664000175000017500000000157114172417313021757 0ustar mwhudsonmwhudsonuse crate::Arg; #[derive(Debug)] pub struct Switched<'b> { pub short: Option, pub long: Option<&'b str>, pub aliases: Option>, // (name, visible) pub disp_ord: usize, pub unified_ord: usize, } impl<'e> Default for Switched<'e> { fn default() -> Self { Switched { short: None, long: None, aliases: None, disp_ord: 999, unified_ord: 999, } } } impl<'n, 'e, 'z> From<&'z Arg<'n, 'e>> for Switched<'e> { fn from(a: &'z Arg<'n, 'e>) -> Self { a.s.clone() } } impl<'e> Clone for Switched<'e> { fn clone(&self) -> Self { Switched { short: self.short, long: self.long, aliases: self.aliases.clone(), disp_ord: self.disp_ord, unified_ord: self.unified_ord, } } } vendor/clap/src/args/arg_builder/positional.rs0000664000175000017500000001707114172417313022330 0ustar mwhudsonmwhudson// Std use std::{ borrow::Cow, ffi::{OsStr, OsString}, fmt::{Display, Formatter, Result}, mem, rc::Rc, result::Result as StdResult, }; // Internal use crate::{ args::{AnyArg, Arg, ArgSettings, Base, DispOrder, Valued}, map::{self, VecMap}, INTERNAL_ERROR_MSG, }; #[allow(missing_debug_implementations)] #[doc(hidden)] #[derive(Clone, Default)] pub struct PosBuilder<'n, 'e> where 'n: 'e, { pub b: Base<'n, 'e>, pub v: Valued<'n, 'e>, pub index: u64, } impl<'n, 'e> PosBuilder<'n, 'e> { pub fn new(name: &'n str, idx: u64) -> Self { PosBuilder { b: Base::new(name), index: idx, ..Default::default() } } pub fn from_arg_ref(a: &Arg<'n, 'e>, idx: u64) -> Self { let mut pb = PosBuilder { b: Base::from(a), v: Valued::from(a), index: idx, }; if a.v.max_vals.is_some() || a.v.min_vals.is_some() || (a.v.num_vals.is_some() && a.v.num_vals.unwrap() > 1) { pb.b.settings.set(ArgSettings::Multiple); } pb } pub fn from_arg(mut a: Arg<'n, 'e>, idx: u64) -> Self { if a.v.max_vals.is_some() || a.v.min_vals.is_some() || (a.v.num_vals.is_some() && a.v.num_vals.unwrap() > 1) { a.b.settings.set(ArgSettings::Multiple); } PosBuilder { b: mem::take(&mut a.b), v: mem::take(&mut a.v), index: idx, } } pub fn multiple_str(&self) -> &str { let mult_vals = self .v .val_names .as_ref() .map_or(true, |names| names.len() < 2); if self.is_set(ArgSettings::Multiple) && mult_vals { "..." 
} else { "" } } pub fn name_no_brackets(&self) -> Cow { debugln!("PosBuilder::name_no_brackets;"); let mut delim = String::new(); delim.push(if self.is_set(ArgSettings::RequireDelimiter) { self.v.val_delim.expect(INTERNAL_ERROR_MSG) } else { ' ' }); if let Some(ref names) = self.v.val_names { debugln!("PosBuilder:name_no_brackets: val_names={:#?}", names); if names.len() > 1 { Cow::Owned( names .values() .map(|n| format!("<{}>", n)) .collect::>() .join(&*delim), ) } else { Cow::Borrowed(names.values().next().expect(INTERNAL_ERROR_MSG)) } } else { debugln!("PosBuilder:name_no_brackets: just name"); Cow::Borrowed(self.b.name) } } } impl<'n, 'e> Display for PosBuilder<'n, 'e> { fn fmt(&self, f: &mut Formatter) -> Result { let mut delim = String::new(); delim.push(if self.is_set(ArgSettings::RequireDelimiter) { self.v.val_delim.expect(INTERNAL_ERROR_MSG) } else { ' ' }); if let Some(ref names) = self.v.val_names { write!( f, "{}", names .values() .map(|n| format!("<{}>", n)) .collect::>() .join(&*delim) )?; } else { write!(f, "<{}>", self.b.name)?; } if self.b.settings.is_set(ArgSettings::Multiple) && (self.v.val_names.is_none() || self.v.val_names.as_ref().unwrap().len() == 1) { write!(f, "...")?; } Ok(()) } } impl<'n, 'e> AnyArg<'n, 'e> for PosBuilder<'n, 'e> { fn name(&self) -> &'n str { self.b.name } fn overrides(&self) -> Option<&[&'e str]> { self.b.overrides.as_ref().map(|o| &o[..]) } fn requires(&self) -> Option<&[(Option<&'e str>, &'n str)]> { self.b.requires.as_ref().map(|o| &o[..]) } fn blacklist(&self) -> Option<&[&'e str]> { self.b.blacklist.as_ref().map(|o| &o[..]) } fn required_unless(&self) -> Option<&[&'e str]> { self.b.r_unless.as_ref().map(|o| &o[..]) } fn val_names(&self) -> Option<&VecMap<&'e str>> { self.v.val_names.as_ref() } fn is_set(&self, s: ArgSettings) -> bool { self.b.settings.is_set(s) } fn set(&mut self, s: ArgSettings) { self.b.settings.set(s) } fn has_switch(&self) -> bool { false } fn max_vals(&self) -> Option { self.v.max_vals } fn val_terminator(&self) -> Option<&'e str> { self.v.terminator } fn num_vals(&self) -> Option { self.v.num_vals } fn possible_vals(&self) -> Option<&[&'e str]> { self.v.possible_vals.as_ref().map(|o| &o[..]) } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator(&self) -> Option<&Rc StdResult<(), String>>> { self.v.validator.as_ref() } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator_os(&self) -> Option<&Rc StdResult<(), OsString>>> { self.v.validator_os.as_ref() } fn min_vals(&self) -> Option { self.v.min_vals } fn short(&self) -> Option { None } fn long(&self) -> Option<&'e str> { None } fn val_delim(&self) -> Option { self.v.val_delim } fn takes_value(&self) -> bool { true } fn help(&self) -> Option<&'e str> { self.b.help } fn long_help(&self) -> Option<&'e str> { self.b.long_help } fn default_vals_ifs(&self) -> Option, &'e OsStr)>> { self.v.default_vals_ifs.as_ref().map(|vm| vm.values()) } fn default_val(&self) -> Option<&'e OsStr> { self.v.default_val } fn env<'s>(&'s self) -> Option<(&'n OsStr, Option<&'s OsString>)> { self.v .env .as_ref() .map(|&(key, ref value)| (key, value.as_ref())) } fn longest_filter(&self) -> bool { true } fn aliases(&self) -> Option> { None } } impl<'n, 'e> DispOrder for PosBuilder<'n, 'e> { fn disp_ord(&self) -> usize { self.index as usize } } impl<'n, 'e> PartialEq for PosBuilder<'n, 'e> { fn eq(&self, other: &PosBuilder<'n, 'e>) -> bool { self.b == other.b } } #[cfg(test)] mod test { use super::PosBuilder; use 
crate::{args::settings::ArgSettings, map::VecMap}; #[test] fn display_mult() { let mut p = PosBuilder::new("pos", 1); p.b.settings.set(ArgSettings::Multiple); assert_eq!(&*format!("{}", p), "..."); } #[test] fn display_required() { let mut p2 = PosBuilder::new("pos", 1); p2.b.settings.set(ArgSettings::Required); assert_eq!(&*format!("{}", p2), ""); } #[test] fn display_val_names() { let mut p2 = PosBuilder::new("pos", 1); let mut vm = VecMap::new(); vm.insert(0, "file1"); vm.insert(1, "file2"); p2.v.val_names = Some(vm); assert_eq!(&*format!("{}", p2), " "); } #[test] fn display_val_names_req() { let mut p2 = PosBuilder::new("pos", 1); p2.b.settings.set(ArgSettings::Required); let mut vm = VecMap::new(); vm.insert(0, "file1"); vm.insert(1, "file2"); p2.v.val_names = Some(vm); assert_eq!(&*format!("{}", p2), " "); } } vendor/clap/src/args/arg_builder/flag.rs0000664000175000017500000001230214172417313021050 0ustar mwhudsonmwhudson// Std use std::{ convert::From, ffi::{OsStr, OsString}, fmt::{Display, Formatter, Result}, mem, rc::Rc, result::Result as StdResult, }; // Internal use crate::{ args::{AnyArg, Arg, ArgSettings, Base, DispOrder, Switched}, map::{self, VecMap}, }; #[derive(Default, Clone, Debug)] #[doc(hidden)] pub struct FlagBuilder<'n, 'e> where 'n: 'e, { pub b: Base<'n, 'e>, pub s: Switched<'e>, } impl<'n, 'e> FlagBuilder<'n, 'e> { pub fn new(name: &'n str) -> Self { FlagBuilder { b: Base::new(name), ..Default::default() } } } impl<'a, 'b, 'z> From<&'z Arg<'a, 'b>> for FlagBuilder<'a, 'b> { fn from(a: &'z Arg<'a, 'b>) -> Self { FlagBuilder { b: Base::from(a), s: Switched::from(a), } } } impl<'a, 'b> From> for FlagBuilder<'a, 'b> { fn from(mut a: Arg<'a, 'b>) -> Self { FlagBuilder { b: mem::take(&mut a.b), s: mem::take(&mut a.s), } } } impl<'n, 'e> Display for FlagBuilder<'n, 'e> { fn fmt(&self, f: &mut Formatter) -> Result { if let Some(l) = self.s.long { write!(f, "--{}", l)?; } else { write!(f, "-{}", self.s.short.unwrap())?; } Ok(()) } } impl<'n, 'e> AnyArg<'n, 'e> for FlagBuilder<'n, 'e> { fn name(&self) -> &'n str { self.b.name } fn overrides(&self) -> Option<&[&'e str]> { self.b.overrides.as_ref().map(|o| &o[..]) } fn requires(&self) -> Option<&[(Option<&'e str>, &'n str)]> { self.b.requires.as_ref().map(|o| &o[..]) } fn blacklist(&self) -> Option<&[&'e str]> { self.b.blacklist.as_ref().map(|o| &o[..]) } fn required_unless(&self) -> Option<&[&'e str]> { self.b.r_unless.as_ref().map(|o| &o[..]) } fn is_set(&self, s: ArgSettings) -> bool { self.b.settings.is_set(s) } fn has_switch(&self) -> bool { true } fn takes_value(&self) -> bool { false } fn set(&mut self, s: ArgSettings) { self.b.settings.set(s) } fn max_vals(&self) -> Option { None } fn val_names(&self) -> Option<&VecMap<&'e str>> { None } fn num_vals(&self) -> Option { None } fn possible_vals(&self) -> Option<&[&'e str]> { None } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator(&self) -> Option<&Rc StdResult<(), String>>> { None } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator_os(&self) -> Option<&Rc StdResult<(), OsString>>> { None } fn min_vals(&self) -> Option { None } fn short(&self) -> Option { self.s.short } fn long(&self) -> Option<&'e str> { self.s.long } fn val_delim(&self) -> Option { None } fn help(&self) -> Option<&'e str> { self.b.help } fn long_help(&self) -> Option<&'e str> { self.b.long_help } fn val_terminator(&self) -> Option<&'e str> { None } fn default_val(&self) -> Option<&'e OsStr> { None } fn default_vals_ifs(&self) -> Option, 
&'e OsStr)>> { None } fn env<'s>(&'s self) -> Option<(&'n OsStr, Option<&'s OsString>)> { None } fn longest_filter(&self) -> bool { self.s.long.is_some() } fn aliases(&self) -> Option> { if let Some(ref aliases) = self.s.aliases { let vis_aliases: Vec<_> = aliases .iter() .filter_map(|&(n, v)| if v { Some(n) } else { None }) .collect(); if vis_aliases.is_empty() { None } else { Some(vis_aliases) } } else { None } } } impl<'n, 'e> DispOrder for FlagBuilder<'n, 'e> { fn disp_ord(&self) -> usize { self.s.disp_ord } } impl<'n, 'e> PartialEq for FlagBuilder<'n, 'e> { fn eq(&self, other: &FlagBuilder<'n, 'e>) -> bool { self.b == other.b } } #[cfg(test)] mod test { use super::FlagBuilder; use crate::args::settings::ArgSettings; #[test] fn flagbuilder_display() { let mut f = FlagBuilder::new("flg"); f.b.settings.set(ArgSettings::Multiple); f.s.long = Some("flag"); assert_eq!(&*format!("{}", f), "--flag"); let mut f2 = FlagBuilder::new("flg"); f2.s.short = Some('f'); assert_eq!(&*format!("{}", f2), "-f"); } #[test] fn flagbuilder_display_single_alias() { let mut f = FlagBuilder::new("flg"); f.s.long = Some("flag"); f.s.aliases = Some(vec![("als", true)]); assert_eq!(&*format!("{}", f), "--flag"); } #[test] fn flagbuilder_display_multiple_aliases() { let mut f = FlagBuilder::new("flg"); f.s.short = Some('f'); f.s.aliases = Some(vec![ ("alias_not_visible", false), ("f2", true), ("f3", true), ("f4", true), ]); assert_eq!(&*format!("{}", f), "-f"); } } vendor/clap/src/args/arg_builder/valued.rs0000664000175000017500000000367014172417313021427 0ustar mwhudsonmwhudsonuse std::{ ffi::{OsStr, OsString}, rc::Rc, }; use crate::{map::VecMap, Arg}; #[allow(missing_debug_implementations)] #[derive(Clone)] pub struct Valued<'a, 'b> where 'a: 'b, { pub possible_vals: Option>, pub val_names: Option>, pub num_vals: Option, pub max_vals: Option, pub min_vals: Option, #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] pub validator: Option Result<(), String>>>, #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] pub validator_os: Option Result<(), OsString>>>, pub val_delim: Option, pub default_val: Option<&'b OsStr>, #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] pub default_vals_ifs: Option, &'b OsStr)>>, pub env: Option<(&'a OsStr, Option)>, pub terminator: Option<&'b str>, } impl<'n, 'e> Default for Valued<'n, 'e> { fn default() -> Self { Valued { possible_vals: None, num_vals: None, min_vals: None, max_vals: None, val_names: None, validator: None, validator_os: None, val_delim: None, default_val: None, default_vals_ifs: None, env: None, terminator: None, } } } impl<'n, 'e> Valued<'n, 'e> { pub fn fill_in(&mut self) { if let Some(ref vec) = self.val_names { if vec.len() > 1 { self.num_vals = Some(vec.len() as u64); } } } } impl<'n, 'e, 'z> From<&'z Arg<'n, 'e>> for Valued<'n, 'e> { fn from(a: &'z Arg<'n, 'e>) -> Self { let mut v = a.v.clone(); if let Some(ref vec) = a.v.val_names { if vec.len() > 1 { v.num_vals = Some(vec.len() as u64); } } v } } vendor/clap/src/args/settings.rs0000664000175000017500000002033614160055207017523 0ustar mwhudsonmwhudson// Std #[allow(deprecated, unused_imports)] use std::ascii::AsciiExt; use std::str::FromStr; bitflags! 
{ struct Flags: u32 { const REQUIRED = 1; const MULTIPLE = 1 << 1; const EMPTY_VALS = 1 << 2; const GLOBAL = 1 << 3; const HIDDEN = 1 << 4; const TAKES_VAL = 1 << 5; const USE_DELIM = 1 << 6; const NEXT_LINE_HELP = 1 << 7; const R_UNLESS_ALL = 1 << 8; const REQ_DELIM = 1 << 9; const DELIM_NOT_SET = 1 << 10; const HIDE_POS_VALS = 1 << 11; const ALLOW_TAC_VALS = 1 << 12; const REQUIRE_EQUALS = 1 << 13; const LAST = 1 << 14; const HIDE_DEFAULT_VAL = 1 << 15; const CASE_INSENSITIVE = 1 << 16; const HIDE_ENV_VALS = 1 << 17; const HIDDEN_SHORT_H = 1 << 18; const HIDDEN_LONG_H = 1 << 19; } } #[doc(hidden)] #[derive(Debug, Clone, Copy)] pub struct ArgFlags(Flags); impl ArgFlags { pub fn new() -> Self { ArgFlags::default() } impl_settings! {ArgSettings, Required => Flags::REQUIRED, Multiple => Flags::MULTIPLE, EmptyValues => Flags::EMPTY_VALS, Global => Flags::GLOBAL, Hidden => Flags::HIDDEN, TakesValue => Flags::TAKES_VAL, UseValueDelimiter => Flags::USE_DELIM, NextLineHelp => Flags::NEXT_LINE_HELP, RequiredUnlessAll => Flags::R_UNLESS_ALL, RequireDelimiter => Flags::REQ_DELIM, ValueDelimiterNotSet => Flags::DELIM_NOT_SET, HidePossibleValues => Flags::HIDE_POS_VALS, AllowLeadingHyphen => Flags::ALLOW_TAC_VALS, RequireEquals => Flags::REQUIRE_EQUALS, Last => Flags::LAST, CaseInsensitive => Flags::CASE_INSENSITIVE, HideEnvValues => Flags::HIDE_ENV_VALS, HideDefaultValue => Flags::HIDE_DEFAULT_VAL, HiddenShortHelp => Flags::HIDDEN_SHORT_H, HiddenLongHelp => Flags::HIDDEN_LONG_H } } impl Default for ArgFlags { fn default() -> Self { ArgFlags(Flags::EMPTY_VALS | Flags::DELIM_NOT_SET) } } /// Various settings that apply to arguments and may be set, unset, and checked via getter/setter /// methods [`Arg::set`], [`Arg::unset`], and [`Arg::is_set`] /// /// [`Arg::set`]: ./struct.Arg.html#method.set /// [`Arg::unset`]: ./struct.Arg.html#method.unset /// [`Arg::is_set`]: ./struct.Arg.html#method.is_set #[derive(Debug, PartialEq, Copy, Clone)] pub enum ArgSettings { /// The argument must be used Required, /// The argument may be used multiple times such as `--flag --flag` Multiple, /// The argument allows empty values such as `--option ""` EmptyValues, /// The argument should be propagated down through all child [`SubCommand`]s /// /// [`SubCommand`]: ./struct.SubCommand.html Global, /// The argument should **not** be shown in help text Hidden, /// The argument accepts a value, such as `--option ` TakesValue, /// Determines if the argument allows values to be grouped via a delimiter UseValueDelimiter, /// Prints the help text on the line after the argument NextLineHelp, /// Requires the use of a value delimiter for all multiple values RequireDelimiter, /// Hides the possible values from the help string HidePossibleValues, /// Allows vals that start with a '-' AllowLeadingHyphen, /// Require options use `--option=val` syntax RequireEquals, /// Specifies that the arg is the last positional argument and may be accessed early via `--` /// syntax Last, /// Hides the default value from the help string HideDefaultValue, /// Makes `Arg::possible_values` case insensitive CaseInsensitive, /// Hides ENV values in the help message HideEnvValues, /// The argument should **not** be shown in short help text HiddenShortHelp, /// The argument should **not** be shown in long help text HiddenLongHelp, #[doc(hidden)] RequiredUnlessAll, #[doc(hidden)] ValueDelimiterNotSet, } impl FromStr for ArgSettings { type Err = String; fn from_str(s: &str) -> Result::Err> { match &*s.to_ascii_lowercase() { "required" => 
Ok(ArgSettings::Required), "multiple" => Ok(ArgSettings::Multiple), "global" => Ok(ArgSettings::Global), "emptyvalues" => Ok(ArgSettings::EmptyValues), "hidden" => Ok(ArgSettings::Hidden), "takesvalue" => Ok(ArgSettings::TakesValue), "usevaluedelimiter" => Ok(ArgSettings::UseValueDelimiter), "nextlinehelp" => Ok(ArgSettings::NextLineHelp), "requiredunlessall" => Ok(ArgSettings::RequiredUnlessAll), "requiredelimiter" => Ok(ArgSettings::RequireDelimiter), "valuedelimiternotset" => Ok(ArgSettings::ValueDelimiterNotSet), "hidepossiblevalues" => Ok(ArgSettings::HidePossibleValues), "allowleadinghyphen" => Ok(ArgSettings::AllowLeadingHyphen), "requireequals" => Ok(ArgSettings::RequireEquals), "last" => Ok(ArgSettings::Last), "hidedefaultvalue" => Ok(ArgSettings::HideDefaultValue), "caseinsensitive" => Ok(ArgSettings::CaseInsensitive), "hideenvvalues" => Ok(ArgSettings::HideEnvValues), "hiddenshorthelp" => Ok(ArgSettings::HiddenShortHelp), "hiddenlonghelp" => Ok(ArgSettings::HiddenLongHelp), _ => Err("unknown ArgSetting, cannot convert from str".to_owned()), } } } #[cfg(test)] mod test { use super::ArgSettings; #[test] fn arg_settings_fromstr() { assert_eq!( "allowleadinghyphen".parse::().unwrap(), ArgSettings::AllowLeadingHyphen ); assert_eq!( "emptyvalues".parse::().unwrap(), ArgSettings::EmptyValues ); assert_eq!( "global".parse::().unwrap(), ArgSettings::Global ); assert_eq!( "hidepossiblevalues".parse::().unwrap(), ArgSettings::HidePossibleValues ); assert_eq!( "hidden".parse::().unwrap(), ArgSettings::Hidden ); assert_eq!( "multiple".parse::().unwrap(), ArgSettings::Multiple ); assert_eq!( "nextlinehelp".parse::().unwrap(), ArgSettings::NextLineHelp ); assert_eq!( "requiredunlessall".parse::().unwrap(), ArgSettings::RequiredUnlessAll ); assert_eq!( "requiredelimiter".parse::().unwrap(), ArgSettings::RequireDelimiter ); assert_eq!( "required".parse::().unwrap(), ArgSettings::Required ); assert_eq!( "takesvalue".parse::().unwrap(), ArgSettings::TakesValue ); assert_eq!( "usevaluedelimiter".parse::().unwrap(), ArgSettings::UseValueDelimiter ); assert_eq!( "valuedelimiternotset".parse::().unwrap(), ArgSettings::ValueDelimiterNotSet ); assert_eq!( "requireequals".parse::().unwrap(), ArgSettings::RequireEquals ); assert_eq!("last".parse::().unwrap(), ArgSettings::Last); assert_eq!( "hidedefaultvalue".parse::().unwrap(), ArgSettings::HideDefaultValue ); assert_eq!( "caseinsensitive".parse::().unwrap(), ArgSettings::CaseInsensitive ); assert_eq!( "hideenvvalues".parse::().unwrap(), ArgSettings::HideEnvValues ); assert_eq!( "hiddenshorthelp".parse::().unwrap(), ArgSettings::HiddenShortHelp ); assert_eq!( "hiddenlonghelp".parse::().unwrap(), ArgSettings::HiddenLongHelp ); assert!("hahahaha".parse::().is_err()); } } vendor/clap/src/args/mod.rs0000664000175000017500000000110214160055207016430 0ustar mwhudsonmwhudsonpub use self::any_arg::{AnyArg, DispOrder}; pub use self::arg::Arg; pub use self::arg_builder::{Base, FlagBuilder, OptBuilder, PosBuilder, Switched, Valued}; pub use self::arg_matcher::ArgMatcher; pub use self::arg_matches::{ArgMatches, OsValues, Values}; pub use self::group::ArgGroup; pub use self::matched_arg::MatchedArg; pub use self::settings::{ArgFlags, ArgSettings}; pub use self::subcommand::SubCommand; #[macro_use] mod macros; pub mod any_arg; mod arg; mod arg_builder; mod arg_matcher; mod arg_matches; mod group; mod matched_arg; pub mod settings; mod subcommand; vendor/clap/src/args/subcommand.rs0000664000175000017500000000364614172417313020023 0ustar mwhudsonmwhudson// Third Party 
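// A short consumption sketch for the SubCommand type defined in this file
// (names such as "myprog", "config", and "app.toml" are placeholders, not taken
// from this crate's tests): after parsing, a subcommand's own `ArgMatches` are
// reached via `ArgMatches::subcommand_matches`:
//
//     let m = App::new("myprog")
//         .subcommand(SubCommand::with_name("config")
//             .arg(Arg::with_name("file").index(1)))
//         .get_matches_from(vec!["myprog", "config", "app.toml"]);
//     if let Some(cfg) = m.subcommand_matches("config") {
//         println!("config file: {:?}", cfg.value_of("file"));
//     }
//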
#[cfg(feature = "yaml")] use yaml_rust::Yaml; // Internal use crate::{App, ArgMatches}; /// The abstract representation of a command line subcommand. /// /// This struct describes all the valid options of the subcommand for the program. Subcommands are /// essentially "sub-[`App`]s" and contain all the same possibilities (such as their own /// [arguments], subcommands, and settings). /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, SubCommand}; /// App::new("myprog") /// .subcommand( /// SubCommand::with_name("config") /// .about("Used for configuration") /// .arg(Arg::with_name("config_file") /// .help("The configuration file to use") /// .index(1))) /// # ; /// ``` /// [`App`]: ./struct.App.html /// [arguments]: ./struct.Arg.html #[derive(Debug, Clone)] pub struct SubCommand<'a> { #[doc(hidden)] pub name: String, #[doc(hidden)] pub matches: ArgMatches<'a>, } impl<'a> SubCommand<'a> { /// Creates a new instance of a subcommand requiring a name. The name will be displayed /// to the user when they print version or help and usage information. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, SubCommand}; /// App::new("myprog") /// .subcommand( /// SubCommand::with_name("config")) /// # ; /// ``` pub fn with_name<'b>(name: &str) -> App<'a, 'b> { App::new(name) } /// Creates a new instance of a subcommand from a YAML (.yml) document /// /// # Examples /// /// ```ignore /// # #[macro_use] /// # extern crate clap; /// # use clap::Subcommand; /// # fn main() { /// let sc_yaml = load_yaml!("test_subcommand.yml"); /// let sc = SubCommand::from_yaml(sc_yaml); /// # } /// ``` #[cfg(feature = "yaml")] pub fn from_yaml(yaml: &Yaml) -> App { App::from_yaml(yaml) } } vendor/clap/src/args/matched_arg.rs0000664000175000017500000000076414160055207020124 0ustar mwhudsonmwhudson// Std use std::ffi::OsString; #[doc(hidden)] #[derive(Debug, Clone)] pub struct MatchedArg { #[doc(hidden)] pub occurs: u64, #[doc(hidden)] pub indices: Vec, #[doc(hidden)] pub vals: Vec, } impl Default for MatchedArg { fn default() -> Self { MatchedArg { occurs: 1, indices: Vec::new(), vals: Vec::new(), } } } impl MatchedArg { pub fn new() -> Self { MatchedArg::default() } } vendor/clap/src/args/group.rs0000664000175000017500000005417314172417313017030 0ustar mwhudsonmwhudson#[cfg(feature = "yaml")] use std::collections::BTreeMap; use std::fmt::{Debug, Formatter, Result}; #[cfg(feature = "yaml")] use yaml_rust::Yaml; /// `ArgGroup`s are a family of related [arguments] and way for you to express, "Any of these /// arguments". By placing arguments in a logical group, you can create easier requirement and /// exclusion rules instead of having to list each argument individually, or when you want a rule /// to apply "any but not all" arguments. /// /// For instance, you can make an entire `ArgGroup` required. If [`ArgGroup::multiple(true)`] is /// set, this means that at least one argument from that group must be present. If /// [`ArgGroup::multiple(false)`] is set (the default), one and *only* one must be present. /// /// You can also do things such as name an entire `ArgGroup` as a [conflict] or [requirement] for /// another argument, meaning any of the arguments that belong to that group will cause a failure /// if present, or must present respectively. /// /// Perhaps the most common use of `ArgGroup`s is to require one and *only* one argument to be /// present out of a given set. 
Imagine that you had multiple arguments, and you want one of them /// to be required, but making all of them required isn't feasible because perhaps they conflict /// with each other. For example, lets say that you were building an application where one could /// set a given version number by supplying a string with an option argument, i.e. /// `--set-ver v1.2.3`, you also wanted to support automatically using a previous version number /// and simply incrementing one of the three numbers. So you create three flags `--major`, /// `--minor`, and `--patch`. All of these arguments shouldn't be used at one time but you want to /// specify that *at least one* of them is used. For this, you can create a group. /// /// Finally, you may use `ArgGroup`s to pull a value from a group of arguments when you don't care /// exactly which argument was actually used at runtime. /// /// # Examples /// /// The following example demonstrates using an `ArgGroup` to ensure that one, and only one, of /// the arguments from the specified group is present at runtime. /// /// ```rust /// # use clap::{App, ArgGroup, ErrorKind}; /// let result = App::new("app") /// .args_from_usage( /// "--set-ver [ver] 'set the version manually' /// --major 'auto increase major' /// --minor 'auto increase minor' /// --patch 'auto increase patch'") /// .group(ArgGroup::with_name("vers") /// .args(&["set-ver", "major", "minor", "patch"]) /// .required(true)) /// .get_matches_from_safe(vec!["app", "--major", "--patch"]); /// // Because we used two args in the group it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::ArgumentConflict); /// ``` /// This next example shows a passing parse of the same scenario /// /// ```rust /// # use clap::{App, ArgGroup}; /// let result = App::new("app") /// .args_from_usage( /// "--set-ver [ver] 'set the version manually' /// --major 'auto increase major' /// --minor 'auto increase minor' /// --patch 'auto increase patch'") /// .group(ArgGroup::with_name("vers") /// .args(&["set-ver", "major", "minor","patch"]) /// .required(true)) /// .get_matches_from_safe(vec!["app", "--major"]); /// assert!(result.is_ok()); /// let matches = result.unwrap(); /// // We may not know which of the args was used, so we can test for the group... /// assert!(matches.is_present("vers")); /// // we could also alternatively check each arg individually (not shown here) /// ``` /// [`ArgGroup::multiple(true)`]: ./struct.ArgGroup.html#method.multiple /// [arguments]: ./struct.Arg.html /// [conflict]: ./struct.Arg.html#method.conflicts_with /// [requirement]: ./struct.Arg.html#method.requires #[derive(Default)] pub struct ArgGroup<'a> { #[doc(hidden)] pub name: &'a str, #[doc(hidden)] pub args: Vec<&'a str>, #[doc(hidden)] pub required: bool, #[doc(hidden)] pub requires: Option>, #[doc(hidden)] pub conflicts: Option>, #[doc(hidden)] pub multiple: bool, } impl<'a> ArgGroup<'a> { /// Creates a new instance of `ArgGroup` using a unique string name. The name will be used to /// get values from the group or refer to the group inside of conflict and requirement rules. /// /// # Examples /// /// ```rust /// # use clap::{App, ArgGroup}; /// ArgGroup::with_name("config") /// # ; /// ``` pub fn with_name(n: &'a str) -> Self { ArgGroup { name: n, required: false, args: vec![], requires: None, conflicts: None, multiple: false, } } /// Creates a new instance of `ArgGroup` from a .yml (YAML) file. 
/// /// # Examples /// /// ```ignore /// # #[macro_use] /// # extern crate clap; /// # use clap::ArgGroup; /// # fn main() { /// let yml = load_yaml!("group.yml"); /// let ag = ArgGroup::from_yaml(yml); /// # } /// ``` #[cfg(feature = "yaml")] pub fn from_yaml(y: &'a Yaml) -> ArgGroup<'a> { ArgGroup::from(y.as_hash().unwrap()) } /// Adds an [argument] to this group by name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup}; /// let m = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .group(ArgGroup::with_name("req_flags") /// .arg("flag") /// .arg("color")) /// .get_matches_from(vec!["myprog", "-f"]); /// // maybe we don't know which of the two flags was used... /// assert!(m.is_present("req_flags")); /// // but we can also check individually if needed /// assert!(m.is_present("flag")); /// ``` /// [argument]: ./struct.Arg.html pub fn arg(mut self, n: &'a str) -> Self { assert!( self.name != n, "ArgGroup '{}' can not have same name as arg inside it", &*self.name ); self.args.push(n); self } /// Adds multiple [arguments] to this group by name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup}; /// let m = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"])) /// .get_matches_from(vec!["myprog", "-f"]); /// // maybe we don't know which of the two flags was used... /// assert!(m.is_present("req_flags")); /// // but we can also check individually if needed /// assert!(m.is_present("flag")); /// ``` /// [arguments]: ./struct.Arg.html pub fn args(mut self, ns: &[&'a str]) -> Self { for n in ns { self = self.arg(n); } self } /// Allows more than one of the ['Arg']s in this group to be used. (Default: `false`) /// /// # Examples /// /// Notice in this example we use *both* the `-f` and `-c` flags which are both part of the /// group /// /// ```rust /// # use clap::{App, Arg, ArgGroup}; /// let m = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .multiple(true)) /// .get_matches_from(vec!["myprog", "-f", "-c"]); /// // maybe we don't know which of the two flags was used... /// assert!(m.is_present("req_flags")); /// ``` /// In this next example, we show the default behavior (i.e. `multiple(false)) which will throw /// an error if more than one of the args in the group was used. /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"])) /// .get_matches_from_safe(vec!["myprog", "-f", "-c"]); /// // Because we used both args in the group it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::ArgumentConflict); /// ``` /// ['Arg']: ./struct.Arg.html pub fn multiple(mut self, m: bool) -> Self { self.multiple = m; self } /// Sets the group as required or not. A required group will be displayed in the usage string /// of the application in the format ``. A required `ArgGroup` simply states /// that one argument from this group *must* be present at runtime (unless /// conflicting with another argument). 
/// /// **NOTE:** This setting only applies to the current [`App`] / [`SubCommand`], and not /// globally. /// /// **NOTE:** By default, [`ArgGroup::multiple`] is set to `false` which when combined with /// `ArgGroup::required(true)` states, "One and *only one* arg must be used from this group. /// Use of more than one arg is an error." Vice setting `ArgGroup::multiple(true)` which /// states, '*At least* one arg from this group must be used. Using multiple is OK." /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .required(true)) /// .get_matches_from_safe(vec!["myprog"]); /// // Because we didn't use any of the args in the group, it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`App`]: ./struct.App.html /// [`SubCommand`]: ./struct.SubCommand.html /// [`ArgGroup::multiple`]: ./struct.ArgGroup.html#method.multiple pub fn required(mut self, r: bool) -> Self { self.required = r; self } /// Sets the requirement rules of this group. This is not to be confused with a /// [required group]. Requirement rules function just like [argument requirement rules], you /// can name other arguments or groups that must be present when any one of the arguments from /// this group is used. /// /// **NOTE:** The name provided may be an argument, or group name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .arg(Arg::with_name("debug") /// .short("d")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .requires("debug")) /// .get_matches_from_safe(vec!["myprog", "-c"]); /// // because we used an arg from the group, and the group requires "-d" to be used, it's an /// // error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::MissingRequiredArgument); /// ``` /// [required group]: ./struct.ArgGroup.html#method.required /// [argument requirement rules]: ./struct.Arg.html#method.requires pub fn requires(mut self, n: &'a str) -> Self { if let Some(ref mut reqs) = self.requires { reqs.push(n); } else { self.requires = Some(vec![n]); } self } /// Sets the requirement rules of this group. This is not to be confused with a /// [required group]. Requirement rules function just like [argument requirement rules], you /// can name other arguments or groups that must be present when one of the arguments from this /// group is used. 
/// /// **NOTE:** The names provided may be an argument, or group name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .arg(Arg::with_name("debug") /// .short("d")) /// .arg(Arg::with_name("verb") /// .short("v")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .requires_all(&["debug", "verb"])) /// .get_matches_from_safe(vec!["myprog", "-c", "-d"]); /// // because we used an arg from the group, and the group requires "-d" and "-v" to be used, /// // yet we only used "-d" it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::MissingRequiredArgument); /// ``` /// [required group]: ./struct.ArgGroup.html#method.required /// [argument requirement rules]: ./struct.Arg.html#method.requires_all pub fn requires_all(mut self, ns: &[&'a str]) -> Self { for n in ns { self = self.requires(n); } self } /// Sets the exclusion rules of this group. Exclusion (aka conflict) rules function just like /// [argument exclusion rules], you can name other arguments or groups that must *not* be /// present when one of the arguments from this group are used. /// /// **NOTE:** The name provided may be an argument, or group name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .arg(Arg::with_name("debug") /// .short("d")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .conflicts_with("debug")) /// .get_matches_from_safe(vec!["myprog", "-c", "-d"]); /// // because we used an arg from the group, and the group conflicts with "-d", it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::ArgumentConflict); /// ``` /// [argument exclusion rules]: ./struct.Arg.html#method.conflicts_with pub fn conflicts_with(mut self, n: &'a str) -> Self { if let Some(ref mut confs) = self.conflicts { confs.push(n); } else { self.conflicts = Some(vec![n]); } self } /// Sets the exclusion rules of this group. Exclusion rules function just like /// [argument exclusion rules], you can name other arguments or groups that must *not* be /// present when one of the arguments from this group are used. 
/// /// **NOTE:** The names provided may be an argument, or group name /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, ArgGroup, ErrorKind}; /// let result = App::new("myprog") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("color") /// .short("c")) /// .arg(Arg::with_name("debug") /// .short("d")) /// .arg(Arg::with_name("verb") /// .short("v")) /// .group(ArgGroup::with_name("req_flags") /// .args(&["flag", "color"]) /// .conflicts_with_all(&["debug", "verb"])) /// .get_matches_from_safe(vec!["myprog", "-c", "-v"]); /// // because we used an arg from the group, and the group conflicts with either "-v" or "-d" /// // it's an error /// assert!(result.is_err()); /// let err = result.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::ArgumentConflict); /// ``` /// [argument exclusion rules]: ./struct.Arg.html#method.conflicts_with_all pub fn conflicts_with_all(mut self, ns: &[&'a str]) -> Self { for n in ns { self = self.conflicts_with(n); } self } } impl<'a> Debug for ArgGroup<'a> { fn fmt(&self, f: &mut Formatter) -> Result { write!( f, "{{\n\ \tname: {:?},\n\ \targs: {:?},\n\ \trequired: {:?},\n\ \trequires: {:?},\n\ \tconflicts: {:?},\n\ }}", self.name, self.args, self.required, self.requires, self.conflicts ) } } impl<'a, 'z> From<&'z ArgGroup<'a>> for ArgGroup<'a> { fn from(g: &'z ArgGroup<'a>) -> Self { ArgGroup { name: g.name, required: g.required, args: g.args.clone(), requires: g.requires.clone(), conflicts: g.conflicts.clone(), multiple: g.multiple, } } } #[cfg(feature = "yaml")] impl<'a> From<&'a BTreeMap> for ArgGroup<'a> { fn from(b: &'a BTreeMap) -> Self { // We WANT this to panic on error...so expect() is good. let mut a = ArgGroup::default(); let group_settings = if b.len() == 1 { let name_yml = b.keys().nth(0).expect("failed to get name"); let name_str = name_yml .as_str() .expect("failed to convert arg YAML name to str"); a.name = name_str; b.get(name_yml) .expect("failed to get name_str") .as_hash() .expect("failed to convert to a hash") } else { b }; for (k, v) in group_settings { a = match k.as_str().unwrap() { "required" => a.required(v.as_bool().unwrap()), "multiple" => a.multiple(v.as_bool().unwrap()), "args" => yaml_vec_or_str!(v, a, arg), "arg" => { if let Some(ys) = v.as_str() { a = a.arg(ys); } a } "requires" => yaml_vec_or_str!(v, a, requires), "conflicts_with" => yaml_vec_or_str!(v, a, conflicts_with), "name" => { if let Some(ys) = v.as_str() { a.name = ys; } a } s => panic!( "Unknown ArgGroup setting '{}' in YAML file for \ ArgGroup '{}'", s, a.name ), } } a } } #[cfg(test)] mod test { use super::ArgGroup; #[cfg(feature = "yaml")] use yaml_rust::YamlLoader; #[test] fn groups() { let g = ArgGroup::with_name("test") .arg("a1") .arg("a4") .args(&["a2", "a3"]) .required(true) .conflicts_with("c1") .conflicts_with_all(&["c2", "c3"]) .conflicts_with("c4") .requires("r1") .requires_all(&["r2", "r3"]) .requires("r4"); let args = vec!["a1", "a4", "a2", "a3"]; let reqs = vec!["r1", "r2", "r3", "r4"]; let confs = vec!["c1", "c2", "c3", "c4"]; assert_eq!(g.args, args); assert_eq!(g.requires, Some(reqs)); assert_eq!(g.conflicts, Some(confs)); } #[test] fn test_debug() { let g = ArgGroup::with_name("test") .arg("a1") .arg("a4") .args(&["a2", "a3"]) .required(true) .conflicts_with("c1") .conflicts_with_all(&["c2", "c3"]) .conflicts_with("c4") .requires("r1") .requires_all(&["r2", "r3"]) .requires("r4"); let args = vec!["a1", "a4", "a2", "a3"]; let reqs = vec!["r1", "r2", "r3", "r4"]; let confs = vec!["c1", "c2", "c3", "c4"]; let 
debug_str = format!( "{{\n\ \tname: \"test\",\n\ \targs: {:?},\n\ \trequired: {:?},\n\ \trequires: {:?},\n\ \tconflicts: {:?},\n\ }}", args, true, Some(reqs), Some(confs) ); assert_eq!(&*format!("{:?}", g), &*debug_str); } #[test] fn test_from() { let g = ArgGroup::with_name("test") .arg("a1") .arg("a4") .args(&["a2", "a3"]) .required(true) .conflicts_with("c1") .conflicts_with_all(&["c2", "c3"]) .conflicts_with("c4") .requires("r1") .requires_all(&["r2", "r3"]) .requires("r4"); let args = vec!["a1", "a4", "a2", "a3"]; let reqs = vec!["r1", "r2", "r3", "r4"]; let confs = vec!["c1", "c2", "c3", "c4"]; let g2 = ArgGroup::from(&g); assert_eq!(g2.args, args); assert_eq!(g2.requires, Some(reqs)); assert_eq!(g2.conflicts, Some(confs)); } #[cfg(feature = "yaml")] #[cfg_attr(feature = "yaml", test)] fn test_yaml() { let g_yaml = "name: test args: - a1 - a4 - a2 - a3 conflicts_with: - c1 - c2 - c3 - c4 requires: - r1 - r2 - r3 - r4"; let yml = &YamlLoader::load_from_str(g_yaml).expect("failed to load YAML file")[0]; let g = ArgGroup::from_yaml(yml); let args = vec!["a1", "a4", "a2", "a3"]; let reqs = vec!["r1", "r2", "r3", "r4"]; let confs = vec!["c1", "c2", "c3", "c4"]; assert_eq!(g.args, args); assert_eq!(g.requires, Some(reqs)); assert_eq!(g.conflicts, Some(confs)); } } impl<'a> Clone for ArgGroup<'a> { fn clone(&self) -> Self { ArgGroup { name: self.name, required: self.required, args: self.args.clone(), requires: self.requires.clone(), conflicts: self.conflicts.clone(), multiple: self.multiple, } } } vendor/clap/src/args/arg_matches.rs0000664000175000017500000011124114172417313020137 0ustar mwhudsonmwhudson// Std use std::{ borrow::Cow, collections::HashMap, ffi::{OsStr, OsString}, iter::Map, slice::Iter, }; // Internal use crate::{ args::{MatchedArg, SubCommand}, INVALID_UTF8, }; /// Used to get information about the arguments that were supplied to the program at runtime by /// the user. New instances of this struct are obtained by using the [`App::get_matches`] family of /// methods. /// /// # Examples /// /// ```no_run /// # use clap::{App, Arg}; /// let matches = App::new("MyApp") /// .arg(Arg::with_name("out") /// .long("output") /// .required(true) /// .takes_value(true)) /// .arg(Arg::with_name("debug") /// .short("d") /// .multiple(true)) /// .arg(Arg::with_name("cfg") /// .short("c") /// .takes_value(true)) /// .get_matches(); // builds the instance of ArgMatches /// /// // to get information about the "cfg" argument we created, such as the value supplied we use /// // various ArgMatches methods, such as ArgMatches::value_of /// if let Some(c) = matches.value_of("cfg") { /// println!("Value for -c: {}", c); /// } /// /// // The ArgMatches::value_of method returns an Option because the user may not have supplied /// // that argument at runtime. But if we specified that the argument was "required" as we did /// // with the "out" argument, we can safely unwrap because `clap` verifies that was actually /// // used at runtime. /// println!("Value for --output: {}", matches.value_of("out").unwrap()); /// /// // You can check the presence of an argument /// if matches.is_present("out") { /// // Another way to check if an argument was present, or if it occurred multiple times is to /// // use occurrences_of() which returns 0 if an argument isn't found at runtime, or the /// // number of times that it occurred, if it was. To allow an argument to appear more than /// // once, you must use the .multiple(true) method, otherwise it will only return 1 or 0. 
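/// // (For example, if the user invoked the program with `-ddd`, the occurrences_of("debug") /// // call below would return 3 and the first branch would run.)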
/// if matches.occurrences_of("debug") > 2 { /// println!("Debug mode is REALLY on, don't be crazy"); /// } else { /// println!("Debug mode kind of on"); /// } /// } /// ``` /// [`App::get_matches`]: ./struct.App.html#method.get_matches #[derive(Debug, Clone)] pub struct ArgMatches<'a> { #[doc(hidden)] pub args: HashMap<&'a str, MatchedArg>, #[doc(hidden)] pub subcommand: Option>>, #[doc(hidden)] pub usage: Option, } impl<'a> Default for ArgMatches<'a> { fn default() -> Self { ArgMatches { args: HashMap::new(), subcommand: None, usage: None, } } } impl<'a> ArgMatches<'a> { #[doc(hidden)] pub fn new() -> Self { ArgMatches { ..Default::default() } } /// Gets the value of a specific [option] or [positional] argument (i.e. an argument that takes /// an additional value at runtime). If the option wasn't present at runtime /// it returns `None`. /// /// *NOTE:* If getting a value for an option or positional argument that allows multiples, /// prefer [`ArgMatches::values_of`] as `ArgMatches::value_of` will only return the *first* /// value. /// /// # Panics /// /// This method will [`panic!`] if the value contains invalid UTF-8 code points. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("output") /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "something"]); /// /// assert_eq!(m.value_of("output"), Some("something")); /// ``` /// [option]: ./struct.Arg.html#method.takes_value /// [positional]: ./struct.Arg.html#method.index /// [`ArgMatches::values_of`]: ./struct.ArgMatches.html#method.values_of /// [`panic!`]: https://doc.rust-lang.org/std/macro.panic!.html pub fn value_of>(&self, name: S) -> Option<&str> { if let Some(arg) = self.args.get(name.as_ref()) { if let Some(v) = arg.vals.get(0) { return Some(v.to_str().expect(INVALID_UTF8)); } } None } /// Gets the lossy value of a specific argument. If the argument wasn't present at runtime /// it returns `None`. A lossy value is one which contains invalid UTF-8 code points, those /// invalid points will be replaced with `\u{FFFD}` /// /// *NOTE:* If getting a value for an option or positional argument that allows multiples, /// prefer [`Arg::values_of_lossy`] as `value_of_lossy()` will only return the *first* value. /// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg}; /// use std::ffi::OsString; /// use std::os::unix::ffi::{OsStrExt,OsStringExt}; /// /// let m = App::new("utf8") /// .arg(Arg::from_usage(" 'some arg'")) /// .get_matches_from(vec![OsString::from("myprog"), /// // "Hi {0xe9}!" /// OsString::from_vec(vec![b'H', b'i', b' ', 0xe9, b'!'])]); /// assert_eq!(&*m.value_of_lossy("arg").unwrap(), "Hi \u{FFFD}!"); /// ``` /// [`Arg::values_of_lossy`]: ./struct.ArgMatches.html#method.values_of_lossy pub fn value_of_lossy>(&'a self, name: S) -> Option> { if let Some(arg) = self.args.get(name.as_ref()) { if let Some(v) = arg.vals.get(0) { return Some(v.to_string_lossy()); } } None } /// Gets the OS version of a string value of a specific argument. If the option wasn't present /// at runtime it returns `None`. An OS value on Unix-like systems is any series of bytes, /// regardless of whether or not they contain valid UTF-8 code points. Since [`String`]s in /// Rust are guaranteed to be valid UTF-8, a valid filename on a Unix system as an argument /// value may contain invalid UTF-8 code points. 
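/// /// A minimal, platform-independent sketch (the `file` argument name here is only illustrative): /// /// ```rust /// # use clap::{App, Arg}; /// use std::ffi::OsStr; /// let m = App::new("myprog") /// .arg(Arg::with_name("file") /// .index(1)) /// .get_matches_from(vec!["myprog", "output.txt"]); /// assert_eq!(m.value_of_os("file"), Some(OsStr::new("output.txt"))); /// ```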
/// /// *NOTE:* If getting a value for an option or positional argument that allows multiples, /// prefer [`ArgMatches::values_of_os`] as `Arg::value_of_os` will only return the *first* /// value. /// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg}; /// use std::ffi::OsString; /// use std::os::unix::ffi::{OsStrExt,OsStringExt}; /// /// let m = App::new("utf8") /// .arg(Arg::from_usage(" 'some arg'")) /// .get_matches_from(vec![OsString::from("myprog"), /// // "Hi {0xe9}!" /// OsString::from_vec(vec![b'H', b'i', b' ', 0xe9, b'!'])]); /// assert_eq!(&*m.value_of_os("arg").unwrap().as_bytes(), [b'H', b'i', b' ', 0xe9, b'!']); /// ``` /// [`String`]: https://doc.rust-lang.org/std/string/struct.String.html /// [`ArgMatches::values_of_os`]: ./struct.ArgMatches.html#method.values_of_os pub fn value_of_os>(&self, name: S) -> Option<&OsStr> { self.args .get(name.as_ref()) .and_then(|arg| arg.vals.get(0).map(|v| v.as_os_str())) } /// Gets a [`Values`] struct which implements [`Iterator`] for values of a specific argument /// (i.e. an argument that takes multiple values at runtime). If the option wasn't present at /// runtime it returns `None` /// /// # Panics /// /// This method will panic if any of the values contain invalid UTF-8 code points. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myprog") /// .arg(Arg::with_name("output") /// .multiple(true) /// .short("o") /// .takes_value(true)) /// .get_matches_from(vec![ /// "myprog", "-o", "val1", "val2", "val3" /// ]); /// let vals: Vec<&str> = m.values_of("output").unwrap().collect(); /// assert_eq!(vals, ["val1", "val2", "val3"]); /// ``` /// [`Values`]: ./struct.Values.html /// [`Iterator`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html pub fn values_of>(&'a self, name: S) -> Option> { if let Some(arg) = self.args.get(name.as_ref()) { fn to_str_slice(o: &OsString) -> &str { o.to_str().expect(INVALID_UTF8) } let to_str_slice: fn(&OsString) -> &str = to_str_slice; // coerce to fn pointer return Some(Values { iter: arg.vals.iter().map(to_str_slice), }); } None } /// Gets the lossy values of a specific argument. If the option wasn't present at runtime /// it returns `None`. A lossy value is one where if it contains invalid UTF-8 code points, /// those invalid points will be replaced with `\u{FFFD}` /// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg}; /// use std::ffi::OsString; /// use std::os::unix::ffi::OsStringExt; /// /// let m = App::new("utf8") /// .arg(Arg::from_usage("... 'some arg'")) /// .get_matches_from(vec![OsString::from("myprog"), /// // "Hi" /// OsString::from_vec(vec![b'H', b'i']), /// // "{0xe9}!" /// OsString::from_vec(vec![0xe9, b'!'])]); /// let mut itr = m.values_of_lossy("arg").unwrap().into_iter(); /// assert_eq!(&itr.next().unwrap()[..], "Hi"); /// assert_eq!(&itr.next().unwrap()[..], "\u{FFFD}!"); /// assert_eq!(itr.next(), None); /// ``` pub fn values_of_lossy>(&'a self, name: S) -> Option> { if let Some(arg) = self.args.get(name.as_ref()) { return Some( arg.vals .iter() .map(|v| v.to_string_lossy().into_owned()) .collect(), ); } None } /// Gets a [`OsValues`] struct which is implements [`Iterator`] for [`OsString`] values of a /// specific argument. If the option wasn't present at runtime it returns `None`. 
An OS value /// on Unix-like systems is any series of bytes, regardless of whether or not they contain /// valid UTF-8 code points. Since [`String`]s in Rust are guaranteed to be valid UTF-8, a valid /// filename as an argument value on Linux (for example) may contain invalid UTF-8 code points. /// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg}; /// use std::ffi::{OsStr,OsString}; /// use std::os::unix::ffi::{OsStrExt,OsStringExt}; /// /// let m = App::new("utf8") /// .arg(Arg::from_usage("... 'some arg'")) /// .get_matches_from(vec![OsString::from("myprog"), /// // "Hi" /// OsString::from_vec(vec![b'H', b'i']), /// // "{0xe9}!" /// OsString::from_vec(vec![0xe9, b'!'])]); /// /// let mut itr = m.values_of_os("arg").unwrap().into_iter(); /// assert_eq!(itr.next(), Some(OsStr::new("Hi"))); /// assert_eq!(itr.next(), Some(OsStr::from_bytes(&[0xe9, b'!']))); /// assert_eq!(itr.next(), None); /// ``` /// [`OsValues`]: ./struct.OsValues.html /// [`Iterator`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html /// [`OsString`]: https://doc.rust-lang.org/std/ffi/struct.OsString.html /// [`String`]: https://doc.rust-lang.org/std/string/struct.String.html pub fn values_of_os>(&'a self, name: S) -> Option> { fn to_str_slice(o: &OsString) -> &OsStr { &*o } let to_str_slice: fn(&'a OsString) -> &'a OsStr = to_str_slice; // coerce to fn pointer if let Some(arg) = self.args.get(name.as_ref()) { return Some(OsValues { iter: arg.vals.iter().map(to_str_slice), }); } None } /// Returns `true` if an argument was present at runtime, otherwise `false`. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myprog") /// .arg(Arg::with_name("debug") /// .short("d")) /// .get_matches_from(vec![ /// "myprog", "-d" /// ]); /// /// assert!(m.is_present("debug")); /// ``` pub fn is_present>(&self, name: S) -> bool { if let Some(ref sc) = self.subcommand { if sc.name == name.as_ref() { return true; } } self.args.contains_key(name.as_ref()) } /// Returns the number of times an argument was used at runtime. If an argument isn't present /// it will return `0`. /// /// **NOTE:** This returns the number of times the argument was used, *not* the number of /// values. For example, `-o val1 val2 val3 -o val4` would return `2` (2 occurrences, but 4 /// values). /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myprog") /// .arg(Arg::with_name("debug") /// .short("d") /// .multiple(true)) /// .get_matches_from(vec![ /// "myprog", "-d", "-d", "-d" /// ]); /// /// assert_eq!(m.occurrences_of("debug"), 3); /// ``` /// /// This next example shows that counts actual uses of the argument, not just `-`'s /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myprog") /// .arg(Arg::with_name("debug") /// .short("d") /// .multiple(true)) /// .arg(Arg::with_name("flag") /// .short("f")) /// .get_matches_from(vec![ /// "myprog", "-ddfd" /// ]); /// /// assert_eq!(m.occurrences_of("debug"), 3); /// assert_eq!(m.occurrences_of("flag"), 1); /// ``` pub fn occurrences_of>(&self, name: S) -> u64 { self.args.get(name.as_ref()).map_or(0, |a| a.occurs) } /// Gets the starting index of the argument in respect to all other arguments. Indices are /// similar to argv indices, but are not exactly 1:1. /// /// For flags (i.e. those arguments which don't have an associated value), indices refer /// to occurrence of the switch, such as `-f`, or `--flag`. 
However, for options the indices /// refer to the *values*; `-o val` would therefore not represent two distinct indices, only the /// index for `val` would be recorded. This is by design. /// /// Besides the flag/option discrepancy, the primary difference between an argv index and a clap /// index is that clap continues counting once all arguments have been properly separated, whereas /// an argv index does not. /// /// The examples should clear this up. /// /// *NOTE:* If an argument is allowed multiple times, this method will only give the *first* /// index. /// /// # Examples /// /// The argv indices are listed in the comments below. See how they correspond to the clap /// indices. Note that if it's not listed in a clap index, this is because it's not saved /// in an `ArgMatches` struct for querying. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "-f", "-o", "val"]); /// // ARGV indices: ^0 ^1 ^2 ^3 /// // clap indices: ^1 ^3 /// /// assert_eq!(m.index_of("flag"), Some(1)); /// assert_eq!(m.index_of("option"), Some(3)); /// ``` /// /// Now notice what happens if we use one of the other styles of options: /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "-f", "-o=val"]); /// // ARGV indices: ^0 ^1 ^2 /// // clap indices: ^1 ^3 /// /// assert_eq!(m.index_of("flag"), Some(1)); /// assert_eq!(m.index_of("option"), Some(3)); /// ``` /// /// Things become much more complicated (or much clearer) if we look at a more complex combination of /// flags. Let's also throw in the final option style for good measure. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("flag2") /// .short("F")) /// .arg(Arg::with_name("flag3") /// .short("z")) /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "-fzF", "-oval"]); /// // ARGV indices: ^0 ^1 ^2 /// // clap indices: ^1,2,3 ^5 /// // /// // clap sees the above as 'myapp -f -z -F -o val' /// // ^0 ^1 ^2 ^3 ^4 ^5 /// assert_eq!(m.index_of("flag"), Some(1)); /// assert_eq!(m.index_of("flag2"), Some(3)); /// assert_eq!(m.index_of("flag3"), Some(2)); /// assert_eq!(m.index_of("option"), Some(5)); /// ``` /// /// One final combination of flags/options to see how they combine: /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("flag") /// .short("f")) /// .arg(Arg::with_name("flag2") /// .short("F")) /// .arg(Arg::with_name("flag3") /// .short("z")) /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true) /// .multiple(true)) /// .get_matches_from(vec!["myapp", "-fzFoval"]); /// // ARGV indices: ^0 ^1 /// // clap indices: ^1,2,3^5 /// // /// // clap sees the above as 'myapp -f -z -F -o val' /// // ^0 ^1 ^2 ^3 ^4 ^5 /// assert_eq!(m.index_of("flag"), Some(1)); /// assert_eq!(m.index_of("flag2"), Some(3)); /// assert_eq!(m.index_of("flag3"), Some(2)); /// assert_eq!(m.index_of("option"), Some(5)); /// ``` /// /// The last part to mention is when values are sent in multiple groups with a [delimiter].
/// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true) /// .multiple(true)) /// .get_matches_from(vec!["myapp", "-o=val1,val2,val3"]); /// // ARGV indices: ^0 ^1 /// // clap indices: ^2 ^3 ^4 /// // /// // clap sees the above as 'myapp -o val1 val2 val3' /// // ^0 ^1 ^2 ^3 ^4 /// assert_eq!(m.index_of("option"), Some(2)); /// ``` /// [`ArgMatches`]: ./struct.ArgMatches.html /// [delimiter]: ./struct.Arg.html#method.value_delimiter pub fn index_of<S: AsRef<str>>(&self, name: S) -> Option<usize> { if let Some(arg) = self.args.get(name.as_ref()) { if let Some(i) = arg.indices.get(0) { return Some(*i); } } None } /// Gets all indices of the argument with respect to all other arguments. Indices are /// similar to argv indices, but are not exactly 1:1. /// /// For flags (i.e. those arguments which don't have an associated value), indices refer /// to the occurrence of the switch, such as `-f`, or `--flag`. However, for options the indices /// refer to the *values*; `-o val` would therefore not represent two distinct indices, only the /// index for `val` would be recorded. This is by design. /// /// *NOTE:* For more information about how clap indices compare to argv indices, see /// [`ArgMatches::index_of`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true) /// .use_delimiter(true) /// .multiple(true)) /// .get_matches_from(vec!["myapp", "-o=val1,val2,val3"]); /// // ARGV indices: ^0 ^1 /// // clap indices: ^2 ^3 ^4 /// // /// // clap sees the above as 'myapp -o val1 val2 val3' /// // ^0 ^1 ^2 ^3 ^4 /// assert_eq!(m.indices_of("option").unwrap().collect::<Vec<usize>>(), &[2, 3, 4]); /// ``` /// /// Another quick example is when flags and options are used together /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true) /// .multiple(true)) /// .arg(Arg::with_name("flag") /// .short("f") /// .multiple(true)) /// .get_matches_from(vec!["myapp", "-o", "val1", "-f", "-o", "val2", "-f"]); /// // ARGV indices: ^0 ^1 ^2 ^3 ^4 ^5 ^6 /// // clap indices: ^2 ^3 ^5 ^6 /// /// assert_eq!(m.indices_of("option").unwrap().collect::<Vec<usize>>(), &[2, 5]); /// assert_eq!(m.indices_of("flag").unwrap().collect::<Vec<usize>>(), &[3, 6]); /// ``` /// /// One final example, which is an odd case: if we *don't* use a value delimiter as we did in /// the first example above, then instead of `val1`, `val2` and `val3` all being distinct values, they /// would all be a single value of `val1,val2,val3`, in which case they'd only receive a /// single index.
/// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("option") /// .short("o") /// .takes_value(true) /// .multiple(true)) /// .get_matches_from(vec!["myapp", "-o=val1,val2,val3"]); /// // ARGV indices: ^0 ^1 /// // clap indices: ^2 /// // /// // clap sees the above as 'myapp -o "val1,val2,val3"' /// // ^0 ^1 ^2 /// assert_eq!(m.indices_of("option").unwrap().collect::<Vec<usize>>(), &[2]); /// ``` /// [`ArgMatches`]: ./struct.ArgMatches.html /// [`ArgMatches::index_of`]: ./struct.ArgMatches.html#method.index_of /// [delimiter]: ./struct.Arg.html#method.value_delimiter pub fn indices_of<S: AsRef<str>>(&'a self, name: S) -> Option<Indices<'a>> { if let Some(arg) = self.args.get(name.as_ref()) { fn to_usize(i: &usize) -> usize { *i } let to_usize: fn(&usize) -> usize = to_usize; // coerce to fn pointer return Some(Indices { iter: arg.indices.iter().map(to_usize), }); } None } /// Because [`Subcommand`]s are essentially "sub-[`App`]s" they have their own [`ArgMatches`] /// as well. This method returns the [`ArgMatches`] for a particular subcommand or `None` if /// the subcommand wasn't present at runtime. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg, SubCommand}; /// let app_m = App::new("myprog") /// .arg(Arg::with_name("debug") /// .short("d")) /// .subcommand(SubCommand::with_name("test") /// .arg(Arg::with_name("opt") /// .long("option") /// .takes_value(true))) /// .get_matches_from(vec![ /// "myprog", "-d", "test", "--option", "val" /// ]); /// /// // Both parent commands and child subcommands can have arguments present at the same time /// assert!(app_m.is_present("debug")); /// /// // Get the subcommand's ArgMatches instance /// if let Some(sub_m) = app_m.subcommand_matches("test") { /// // Use the struct like normal /// assert_eq!(sub_m.value_of("opt"), Some("val")); /// } /// ``` /// [`Subcommand`]: ./struct.SubCommand.html /// [`App`]: ./struct.App.html /// [`ArgMatches`]: ./struct.ArgMatches.html pub fn subcommand_matches<S: AsRef<str>>(&self, name: S) -> Option<&ArgMatches<'a>> { if let Some(ref s) = self.subcommand { if s.name == name.as_ref() { return Some(&s.matches); } } None } /// Because [`Subcommand`]s are essentially "sub-[`App`]s" they have their own [`ArgMatches`] /// as well. But simply getting the sub-[`ArgMatches`] doesn't help much if we don't also know /// which subcommand was actually used. This method returns the name of the subcommand that was /// used at runtime, or `None` if one wasn't. /// /// *NOTE*: Subcommands form a hierarchy, where multiple subcommands can be used at runtime, /// but only a single subcommand from any group of sibling commands may be used at once. /// /// An ASCII art depiction may help explain this better, using a fictional version of `git` as /// the demo subject.
Imagine the following are all subcommands of `git` (note, the author is /// aware these aren't actually all subcommands in the real `git` interface, but it makes /// explanation easier) /// /// ```notrust /// Top Level App (git) TOP /// | /// ----------------------------------------- /// / | \ \ /// clone push add commit LEVEL 1 /// | / \ / \ | /// url origin remote ref name message LEVEL 2 /// / /\ /// path remote local LEVEL 3 /// ``` /// /// Given the above fictional subcommand hierarchy, valid runtime uses would be (not an all /// inclusive list, and not including argument options per command for brevity and clarity): /// /// ```sh /// $ git clone url /// $ git push origin path /// $ git add ref local /// $ git commit message /// ``` /// /// Notice only one command per "level" may be used. You could not, for example, do `$ git /// clone url push origin path` /// /// # Examples /// /// ```no_run /// # use clap::{App, Arg, SubCommand}; /// let app_m = App::new("git") /// .subcommand(SubCommand::with_name("clone")) /// .subcommand(SubCommand::with_name("push")) /// .subcommand(SubCommand::with_name("commit")) /// .get_matches(); /// /// match app_m.subcommand_name() { /// Some("clone") => {}, // clone was used /// Some("push") => {}, // push was used /// Some("commit") => {}, // commit was used /// _ => {}, // Either no subcommand or one not tested for... /// } /// ``` /// [`Subcommand`]: ./struct.SubCommand.html /// [`App`]: ./struct.App.html /// [`ArgMatches`]: ./struct.ArgMatches.html pub fn subcommand_name(&self) -> Option<&str> { self.subcommand.as_ref().map(|sc| &sc.name[..]) } /// This brings together [`ArgMatches::subcommand_matches`] and [`ArgMatches::subcommand_name`] /// by returning a tuple with both pieces of information. /// /// # Examples /// /// ```no_run /// # use clap::{App, Arg, SubCommand}; /// let app_m = App::new("git") /// .subcommand(SubCommand::with_name("clone")) /// .subcommand(SubCommand::with_name("push")) /// .subcommand(SubCommand::with_name("commit")) /// .get_matches(); /// /// match app_m.subcommand() { /// ("clone", Some(sub_m)) => {}, // clone was used /// ("push", Some(sub_m)) => {}, // push was used /// ("commit", Some(sub_m)) => {}, // commit was used /// _ => {}, // Either no subcommand or one not tested for... /// } /// ``` /// /// Another useful scenario is when you want to support third party, or external, subcommands. /// In these cases you can't know the subcommand name ahead of time, so use a variable instead /// with pattern matching! 
/// /// ```rust /// # use clap::{App, AppSettings}; /// // Assume there is an external subcommand named "subcmd" /// let app_m = App::new("myprog") /// .setting(AppSettings::AllowExternalSubcommands) /// .get_matches_from(vec![ /// "myprog", "subcmd", "--option", "value", "-fff", "--flag" /// ]); /// /// // All trailing arguments will be stored under the subcommand's sub-matches using an empty /// // string argument name /// match app_m.subcommand() { /// (external, Some(sub_m)) => { /// let ext_args: Vec<&str> = sub_m.values_of("").unwrap().collect(); /// assert_eq!(external, "subcmd"); /// assert_eq!(ext_args, ["--option", "value", "-fff", "--flag"]); /// }, /// _ => {}, /// } /// ``` /// [`ArgMatches::subcommand_matches`]: ./struct.ArgMatches.html#method.subcommand_matches /// [`ArgMatches::subcommand_name`]: ./struct.ArgMatches.html#method.subcommand_name pub fn subcommand(&self) -> (&str, Option<&ArgMatches<'a>>) { self.subcommand .as_ref() .map_or(("", None), |sc| (&sc.name[..], Some(&sc.matches))) } /// Returns a string slice of the usage statement for the [`App`] or [`SubCommand`] /// /// # Examples /// /// ```no_run /// # use clap::{App, Arg, SubCommand}; /// let app_m = App::new("myprog") /// .subcommand(SubCommand::with_name("test")) /// .get_matches(); /// /// println!("{}", app_m.usage()); /// ``` /// [`Subcommand`]: ./struct.SubCommand.html /// [`App`]: ./struct.App.html pub fn usage(&self) -> &str { self.usage.as_ref().map_or("", |u| &u[..]) } } // The following were taken and adapated from vec_map source // repo: https://github.com/contain-rs/vec-map // commit: be5e1fa3c26e351761b33010ddbdaf5f05dbcc33 // license: MIT - Copyright (c) 2015 The Rust Project Developers /// An iterator for getting multiple values out of an argument via the [`ArgMatches::values_of`] /// method. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("output") /// .short("o") /// .multiple(true) /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "-o", "val1", "val2"]); /// /// let mut values = m.values_of("output").unwrap(); /// /// assert_eq!(values.next(), Some("val1")); /// assert_eq!(values.next(), Some("val2")); /// assert_eq!(values.next(), None); /// ``` /// [`ArgMatches::values_of`]: ./struct.ArgMatches.html#method.values_of #[derive(Debug, Clone)] pub struct Values<'a> { iter: Map, fn(&'a OsString) -> &'a str>, } impl<'a> Iterator for Values<'a> { type Item = &'a str; fn next(&mut self) -> Option<&'a str> { self.iter.next() } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a> DoubleEndedIterator for Values<'a> { fn next_back(&mut self) -> Option<&'a str> { self.iter.next_back() } } impl<'a> ExactSizeIterator for Values<'a> {} /// Creates an empty iterator. impl<'a> Default for Values<'a> { fn default() -> Self { static EMPTY: [OsString; 0] = []; // This is never called because the iterator is empty: fn to_str_slice(_: &OsString) -> &str { unreachable!() } Values { iter: EMPTY[..].iter().map(to_str_slice), } } } /// An iterator for getting multiple values out of an argument via the [`ArgMatches::values_of_os`] /// method. Usage of this iterator allows values which contain invalid UTF-8 code points unlike /// [`Values`]. 
/// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```")] /// # use clap::{App, Arg}; /// use std::ffi::OsString; /// use std::os::unix::ffi::{OsStrExt,OsStringExt}; /// /// let m = App::new("utf8") /// .arg(Arg::from_usage(" 'some arg'")) /// .get_matches_from(vec![OsString::from("myprog"), /// // "Hi {0xe9}!" /// OsString::from_vec(vec![b'H', b'i', b' ', 0xe9, b'!'])]); /// assert_eq!(&*m.value_of_os("arg").unwrap().as_bytes(), [b'H', b'i', b' ', 0xe9, b'!']); /// ``` /// [`ArgMatches::values_of_os`]: ./struct.ArgMatches.html#method.values_of_os /// [`Values`]: ./struct.Values.html #[derive(Debug, Clone)] pub struct OsValues<'a> { iter: Map, fn(&'a OsString) -> &'a OsStr>, } impl<'a> Iterator for OsValues<'a> { type Item = &'a OsStr; fn next(&mut self) -> Option<&'a OsStr> { self.iter.next() } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a> DoubleEndedIterator for OsValues<'a> { fn next_back(&mut self) -> Option<&'a OsStr> { self.iter.next_back() } } impl<'a> ExactSizeIterator for OsValues<'a> {} /// Creates an empty iterator. impl<'a> Default for OsValues<'a> { fn default() -> Self { static EMPTY: [OsString; 0] = []; // This is never called because the iterator is empty: fn to_str_slice(_: &OsString) -> &OsStr { unreachable!() } OsValues { iter: EMPTY[..].iter().map(to_str_slice), } } } /// An iterator for getting multiple indices out of an argument via the [`ArgMatches::indices_of`] /// method. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("myapp") /// .arg(Arg::with_name("output") /// .short("o") /// .multiple(true) /// .takes_value(true)) /// .get_matches_from(vec!["myapp", "-o", "val1", "val2"]); /// /// let mut indices = m.indices_of("output").unwrap(); /// /// assert_eq!(indices.next(), Some(2)); /// assert_eq!(indices.next(), Some(3)); /// assert_eq!(indices.next(), None); /// ``` /// [`ArgMatches::indices_of`]: ./struct.ArgMatches.html#method.indices_of #[derive(Debug, Clone)] pub struct Indices<'a> { // would rather use '_, but: https://github.com/rust-lang/rust/issues/48469 iter: Map, fn(&'a usize) -> usize>, } impl<'a> Iterator for Indices<'a> { type Item = usize; fn next(&mut self) -> Option { self.iter.next() } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } impl<'a> DoubleEndedIterator for Indices<'a> { fn next_back(&mut self) -> Option { self.iter.next_back() } } impl<'a> ExactSizeIterator for Indices<'a> {} /// Creates an empty iterator. 
impl<'a> Default for Indices<'a> { fn default() -> Self { static EMPTY: [usize; 0] = []; // This is never called because the iterator is empty: fn to_usize(_: &usize) -> usize { unreachable!() } Indices { iter: EMPTY[..].iter().map(to_usize), } } } #[cfg(test)] mod tests { use super::*; #[test] fn test_default_values() { let mut values: Values = Values::default(); assert_eq!(values.next(), None); } #[test] fn test_default_values_with_shorter_lifetime() { let matches = ArgMatches::new(); let mut values = matches.values_of("").unwrap_or_default(); assert_eq!(values.next(), None); } #[test] fn test_default_osvalues() { let mut values: OsValues = OsValues::default(); assert_eq!(values.next(), None); } #[test] fn test_default_osvalues_with_shorter_lifetime() { let matches = ArgMatches::new(); let mut values = matches.values_of_os("").unwrap_or_default(); assert_eq!(values.next(), None); } #[test] fn test_default_indices() { let mut indices: Indices = Indices::default(); assert_eq!(indices.next(), None); } #[test] fn test_default_indices_with_shorter_lifetime() { let matches = ArgMatches::new(); let mut indices = matches.indices_of("").unwrap_or_default(); assert_eq!(indices.next(), None); } } vendor/clap/src/args/macros.rs0000664000175000017500000000631014160055207017143 0ustar mwhudsonmwhudson#[cfg(feature = "yaml")] macro_rules! yaml_tuple2 { ($a:ident, $v:ident, $c:ident) => {{ if let Some(vec) = $v.as_vec() { for ys in vec { if let Some(tup) = ys.as_vec() { debug_assert_eq!(2, tup.len()); $a = $a.$c(yaml_str!(tup[0]), yaml_str!(tup[1])); } else { panic!("Failed to convert YAML value to vec"); } } } else { panic!("Failed to convert YAML value to vec"); } $a }}; } #[cfg(feature = "yaml")] macro_rules! yaml_tuple3 { ($a:ident, $v:ident, $c:ident) => {{ if let Some(vec) = $v.as_vec() { for ys in vec { if let Some(tup) = ys.as_vec() { debug_assert_eq!(3, tup.len()); $a = $a.$c(yaml_str!(tup[0]), yaml_opt_str!(tup[1]), yaml_str!(tup[2])); } else { panic!("Failed to convert YAML value to vec"); } } } else { panic!("Failed to convert YAML value to vec"); } $a }}; } #[cfg(feature = "yaml")] macro_rules! yaml_vec_or_str { ($v:ident, $a:ident, $c:ident) => {{ let maybe_vec = $v.as_vec(); if let Some(vec) = maybe_vec { for ys in vec { if let Some(s) = ys.as_str() { $a = $a.$c(s); } else { panic!("Failed to convert YAML value {:?} to a string", ys); } } } else { if let Some(s) = $v.as_str() { $a = $a.$c(s); } else { panic!( "Failed to convert YAML value {:?} to either a vec or string", $v ); } } $a }}; } #[cfg(feature = "yaml")] macro_rules! yaml_opt_str { ($v:expr) => {{ if $v.is_null() { Some( $v.as_str() .unwrap_or_else(|| panic!("failed to convert YAML {:?} value to a string", $v)), ) } else { None } }}; } #[cfg(feature = "yaml")] macro_rules! yaml_str { ($v:expr) => {{ $v.as_str() .unwrap_or_else(|| panic!("failed to convert YAML {:?} value to a string", $v)) }}; } #[cfg(feature = "yaml")] macro_rules! yaml_to_str { ($a:ident, $v:ident, $c:ident) => {{ $a.$c(yaml_str!($v)) }}; } #[cfg(feature = "yaml")] macro_rules! yaml_to_bool { ($a:ident, $v:ident, $c:ident) => {{ $a.$c($v .as_bool() .unwrap_or_else(|| panic!("failed to convert YAML {:?} value to a string", $v))) }}; } #[cfg(feature = "yaml")] macro_rules! yaml_to_u64 { ($a:ident, $v:ident, $c:ident) => {{ $a.$c($v .as_i64() .unwrap_or_else(|| panic!("failed to convert YAML {:?} value to a string", $v)) as u64) }}; } #[cfg(feature = "yaml")] macro_rules! 
yaml_to_usize { ($a:ident, $v:ident, $c:ident) => {{ $a.$c($v .as_i64() .unwrap_or_else(|| panic!("failed to convert YAML {:?} value to a string", $v)) as usize) }}; } vendor/clap/src/args/arg_matcher.rs0000664000175000017500000002137014172417313020141 0ustar mwhudsonmwhudson// Std use std::{ collections::{ hash_map::{Entry, Iter}, HashMap, }, ffi::OsStr, mem, ops::Deref, }; // Internal use crate::args::{settings::ArgSettings, AnyArg, ArgMatches, MatchedArg, SubCommand}; #[doc(hidden)] #[allow(missing_debug_implementations)] pub struct ArgMatcher<'a>(pub ArgMatches<'a>); impl<'a> Default for ArgMatcher<'a> { fn default() -> Self { ArgMatcher(ArgMatches::default()) } } impl<'a> ArgMatcher<'a> { pub fn new() -> Self { ArgMatcher::default() } pub fn process_arg_overrides<'b>( &mut self, a: Option<&AnyArg<'a, 'b>>, overrides: &mut Vec<(&'b str, &'a str)>, required: &mut Vec<&'a str>, check_all: bool, ) { debugln!( "ArgMatcher::process_arg_overrides:{:?};", a.map_or(None, |a| Some(a.name())) ); if let Some(aa) = a { let mut self_done = false; if let Some(a_overrides) = aa.overrides() { for overr in a_overrides { debugln!("ArgMatcher::process_arg_overrides:iter:{};", overr); if overr == &aa.name() { self_done = true; self.handle_self_overrides(a); } else if self.is_present(overr) { debugln!( "ArgMatcher::process_arg_overrides:iter:{}: removing from matches;", overr ); self.remove(overr); for i in (0..required.len()).rev() { if &required[i] == overr { debugln!( "ArgMatcher::process_arg_overrides:iter:{}: removing required;", overr ); required.swap_remove(i); break; } } overrides.push((overr, aa.name())); } else { overrides.push((overr, aa.name())); } } } if check_all && !self_done { self.handle_self_overrides(a); } } } pub fn handle_self_overrides<'b>(&mut self, a: Option<&AnyArg<'a, 'b>>) { debugln!( "ArgMatcher::handle_self_overrides:{:?};", a.map_or(None, |a| Some(a.name())) ); if let Some(aa) = a { if !aa.has_switch() || aa.is_set(ArgSettings::Multiple) { // positional args can't override self or else we would never advance to the next // Also flags with --multiple set are ignored otherwise we could never have more // than one return; } if let Some(ma) = self.get_mut(aa.name()) { if ma.vals.len() > 1 { // swap_remove(0) would be O(1) but does not preserve order, which // we need ma.vals.remove(0); ma.occurs = 1; } else if !aa.takes_value() && ma.occurs > 1 { ma.occurs = 1; } } } } pub fn is_present(&self, name: &str) -> bool { self.0.is_present(name) } pub fn propagate_globals(&mut self, global_arg_vec: &[&'a str]) { debugln!( "ArgMatcher::get_global_values: global_arg_vec={:?}", global_arg_vec ); let mut vals_map = HashMap::new(); self.fill_in_global_values(global_arg_vec, &mut vals_map); } fn fill_in_global_values( &mut self, global_arg_vec: &[&'a str], vals_map: &mut HashMap<&'a str, MatchedArg>, ) { for global_arg in global_arg_vec { if let Some(ma) = self.get(global_arg) { // We have to check if the parent's global arg wasn't used but still exists // such as from a default value. // // For example, `myprog subcommand --global-arg=value` where --global-arg defines // a default value of `other` myprog would have an existing MatchedArg for // --global-arg where the value is `other`, however the occurs will be 0. 
let to_update = if let Some(parent_ma) = vals_map.get(global_arg) { if parent_ma.occurs > 0 && ma.occurs == 0 { parent_ma.clone() } else { ma.clone() } } else { ma.clone() }; vals_map.insert(global_arg, to_update); } } if let Some(ref mut sc) = self.0.subcommand { let mut am = ArgMatcher(mem::replace(&mut sc.matches, ArgMatches::new())); am.fill_in_global_values(global_arg_vec, vals_map); mem::swap(&mut am.0, &mut sc.matches); } for (name, matched_arg) in vals_map.iter_mut() { self.0.args.insert(name, matched_arg.clone()); } } pub fn get_mut(&mut self, arg: &str) -> Option<&mut MatchedArg> { self.0.args.get_mut(arg) } pub fn get(&self, arg: &str) -> Option<&MatchedArg> { self.0.args.get(arg) } pub fn remove(&mut self, arg: &str) { self.0.args.remove(arg); } pub fn remove_all(&mut self, args: &[&str]) { for &arg in args { self.0.args.remove(arg); } } pub fn insert(&mut self, name: &'a str) { self.0.args.insert(name, MatchedArg::new()); } pub fn contains(&self, arg: &str) -> bool { self.0.args.contains_key(arg) } pub fn is_empty(&self) -> bool { self.0.args.is_empty() } pub fn usage(&mut self, usage: String) { self.0.usage = Some(usage); } pub fn arg_names(&'a self) -> Vec<&'a str> { self.0.args.keys().map(Deref::deref).collect() } pub fn entry(&mut self, arg: &'a str) -> Entry<&'a str, MatchedArg> { self.0.args.entry(arg) } pub fn subcommand(&mut self, sc: SubCommand<'a>) { self.0.subcommand = Some(Box::new(sc)); } pub fn subcommand_name(&self) -> Option<&str> { self.0.subcommand_name() } pub fn iter(&self) -> Iter<&str, MatchedArg> { self.0.args.iter() } pub fn inc_occurrence_of(&mut self, arg: &'a str) { debugln!("ArgMatcher::inc_occurrence_of: arg={}", arg); if let Some(a) = self.get_mut(arg) { a.occurs += 1; return; } debugln!("ArgMatcher::inc_occurrence_of: first instance"); self.insert(arg); } pub fn inc_occurrences_of(&mut self, args: &[&'a str]) { debugln!("ArgMatcher::inc_occurrences_of: args={:?}", args); for arg in args { self.inc_occurrence_of(arg); } } pub fn add_val_to(&mut self, arg: &'a str, val: &OsStr) { let ma = self.entry(arg).or_insert(MatchedArg { occurs: 0, indices: Vec::with_capacity(1), vals: Vec::with_capacity(1), }); ma.vals.push(val.to_owned()); } pub fn add_index_to(&mut self, arg: &'a str, idx: usize) { let ma = self.entry(arg).or_insert(MatchedArg { occurs: 0, indices: Vec::with_capacity(1), vals: Vec::new(), }); ma.indices.push(idx); } pub fn needs_more_vals<'b, A>(&self, o: &A) -> bool where A: AnyArg<'a, 'b>, { debugln!("ArgMatcher::needs_more_vals: o={}", o.name()); if let Some(ma) = self.get(o.name()) { if let Some(num) = o.num_vals() { debugln!("ArgMatcher::needs_more_vals: num_vals...{}", num); return if o.is_set(ArgSettings::Multiple) { ((ma.vals.len() as u64) % num) != 0 } else { num != (ma.vals.len() as u64) }; } else if let Some(num) = o.max_vals() { debugln!("ArgMatcher::needs_more_vals: max_vals...{}", num); return (ma.vals.len() as u64) <= num; } else if o.min_vals().is_some() { debugln!("ArgMatcher::needs_more_vals: min_vals...true"); return true; } return o.is_set(ArgSettings::Multiple); } true } } // Not changing to From just to not deal with possible breaking changes on v2 since v3 is coming // in the future anyways #[cfg_attr(feature = "cargo-clippy", allow(clippy::from_over_into))] impl<'a> Into> for ArgMatcher<'a> { fn into(self) -> ArgMatches<'a> { self.0 } } vendor/clap/src/args/arg.rs0000664000175000017500000043627414172417313016453 0ustar mwhudsonmwhudson#[cfg(feature = "yaml")] use std::collections::BTreeMap; #[cfg(not(any(target_os = 
"windows", target_arch = "wasm32")))] use std::os::unix::ffi::OsStrExt; use std::{ env, ffi::{OsStr, OsString}, rc::Rc, }; #[cfg(feature = "yaml")] use yaml_rust::Yaml; #[cfg(any(target_os = "windows", target_arch = "wasm32"))] use crate::osstringext::OsStrExt3; use crate::{ args::{ arg_builder::{Base, Switched, Valued}, settings::ArgSettings, }, map::VecMap, usage_parser::UsageParser, }; /// The abstract representation of a command line argument. Used to set all the options and /// relationships that define a valid argument for the program. /// /// There are two methods for constructing [`Arg`]s, using the builder pattern and setting options /// manually, or using a usage string which is far less verbose but has fewer options. You can also /// use a combination of the two methods to achieve the best of both worlds. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// // Using the traditional builder pattern and setting each option manually /// let cfg = Arg::with_name("config") /// .short("c") /// .long("config") /// .takes_value(true) /// .value_name("FILE") /// .help("Provides a config file to myprog"); /// // Using a usage string (setting a similar argument to the one above) /// let input = Arg::from_usage("-i, --input=[FILE] 'Provides an input file to the program'"); /// ``` /// [`Arg`]: ./struct.Arg.html #[allow(missing_debug_implementations)] #[derive(Default, Clone)] pub struct Arg<'a, 'b> where 'a: 'b, { #[doc(hidden)] pub b: Base<'a, 'b>, #[doc(hidden)] pub s: Switched<'b>, #[doc(hidden)] pub v: Valued<'a, 'b>, #[doc(hidden)] pub index: Option, #[doc(hidden)] pub r_ifs: Option>, } impl<'a, 'b> Arg<'a, 'b> { /// Creates a new instance of [`Arg`] using a unique string name. The name will be used to get /// information about whether or not the argument was used at runtime, get values, set /// relationships with other args, etc.. /// /// **NOTE:** In the case of arguments that take values (i.e. [`Arg::takes_value(true)`]) /// and positional arguments (i.e. those without a preceding `-` or `--`) the name will also /// be displayed when the user prints the usage/help information of the program. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// # ; /// ``` /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`Arg`]: ./struct.Arg.html pub fn with_name(n: &'a str) -> Self { Arg { b: Base::new(n), ..Default::default() } } /// Creates a new instance of [`Arg`] from a .yml (YAML) file. /// /// # Examples /// /// ```ignore /// # #[macro_use] /// # extern crate clap; /// # use clap::Arg; /// # fn main() { /// let yml = load_yaml!("arg.yml"); /// let arg = Arg::from_yaml(yml); /// # } /// ``` /// [`Arg`]: ./struct.Arg.html #[cfg(feature = "yaml")] pub fn from_yaml(y: &BTreeMap) -> Arg { // We WANT this to panic on error...so expect() is good. 
let name_yml = y.keys().nth(0).unwrap(); let name_str = name_yml.as_str().unwrap(); let mut a = Arg::with_name(name_str); let arg_settings = y.get(name_yml).unwrap().as_hash().unwrap(); for (k, v) in arg_settings.iter() { a = match k.as_str().unwrap() { "short" => yaml_to_str!(a, v, short), "long" => yaml_to_str!(a, v, long), "aliases" => yaml_vec_or_str!(v, a, alias), "help" => yaml_to_str!(a, v, help), "long_help" => yaml_to_str!(a, v, long_help), "required" => yaml_to_bool!(a, v, required), "required_if" => yaml_tuple2!(a, v, required_if), "required_ifs" => yaml_tuple2!(a, v, required_if), "takes_value" => yaml_to_bool!(a, v, takes_value), "index" => yaml_to_u64!(a, v, index), "global" => yaml_to_bool!(a, v, global), "multiple" => yaml_to_bool!(a, v, multiple), "hidden" => yaml_to_bool!(a, v, hidden), "next_line_help" => yaml_to_bool!(a, v, next_line_help), "empty_values" => yaml_to_bool!(a, v, empty_values), "group" => yaml_to_str!(a, v, group), "number_of_values" => yaml_to_u64!(a, v, number_of_values), "max_values" => yaml_to_u64!(a, v, max_values), "min_values" => yaml_to_u64!(a, v, min_values), "value_name" => yaml_to_str!(a, v, value_name), "use_delimiter" => yaml_to_bool!(a, v, use_delimiter), "allow_hyphen_values" => yaml_to_bool!(a, v, allow_hyphen_values), "last" => yaml_to_bool!(a, v, last), "require_delimiter" => yaml_to_bool!(a, v, require_delimiter), "value_delimiter" => yaml_to_str!(a, v, value_delimiter), "required_unless" => yaml_to_str!(a, v, required_unless), "display_order" => yaml_to_usize!(a, v, display_order), "default_value" => yaml_to_str!(a, v, default_value), "default_value_if" => yaml_tuple3!(a, v, default_value_if), "default_value_ifs" => yaml_tuple3!(a, v, default_value_if), "env" => yaml_to_str!(a, v, env), "value_names" => yaml_vec_or_str!(v, a, value_name), "groups" => yaml_vec_or_str!(v, a, group), "requires" => yaml_vec_or_str!(v, a, requires), "requires_if" => yaml_tuple2!(a, v, requires_if), "requires_ifs" => yaml_tuple2!(a, v, requires_if), "conflicts_with" => yaml_vec_or_str!(v, a, conflicts_with), "overrides_with" => yaml_vec_or_str!(v, a, overrides_with), "possible_values" => yaml_vec_or_str!(v, a, possible_value), "case_insensitive" => yaml_to_bool!(a, v, case_insensitive), "required_unless_one" => yaml_vec_or_str!(v, a, required_unless), "required_unless_all" => { a = yaml_vec_or_str!(v, a, required_unless); a.setb(ArgSettings::RequiredUnlessAll); a } s => panic!( "Unknown Arg setting '{}' in YAML file for arg '{}'", s, name_str ), } } a } /// Creates a new instance of [`Arg`] from a usage string. Allows creation of basic settings /// for the [`Arg`]. The syntax is flexible, but there are some rules to follow. /// /// **NOTE**: Not all settings may be set using the usage string method. Some properties are /// only available via the builder pattern. /// /// **NOTE**: Only ASCII values are officially supported in [`Arg::from_usage`] strings. Some /// UTF-8 codepoints may work just fine, but this is not guaranteed. /// /// # Syntax /// /// Usage strings typically following the form: /// /// ```notrust /// [explicit name] [short] [long] [value names] [help string] /// ``` /// /// This is not a hard rule as the attributes can appear in other orders. There are also /// several additional sigils which denote additional settings. Below are the details of each /// portion of the string. 
/// /// ### Explicit Name /// /// This is an optional field, if it's omitted the argument will use one of the additional /// fields as the name using the following priority order: /// /// * Explicit Name (This always takes precedence when present) /// * Long /// * Short /// * Value Name /// /// `clap` determines explicit names as the first string of characters between either `[]` or /// `<>` where `[]` has the dual notation of meaning the argument is optional, and `<>` meaning /// the argument is required. /// /// Explicit names may be followed by: /// * The multiple denotation `...` /// /// Example explicit names as follows (`ename` for an optional argument, and `rname` for a /// required argument): /// /// ```notrust /// [ename] -s, --long 'some flag' /// -r, --longer 'some other flag' /// ``` /// /// ### Short /// /// This is set by placing a single character after a leading `-`. /// /// Shorts may be followed by /// * The multiple denotation `...` /// * An optional comma `,` which is cosmetic only /// * Value notation /// /// Example shorts are as follows (`-s`, and `-r`): /// /// ```notrust /// -s, --long 'some flag' /// -r [val], --longer 'some option' /// ``` /// /// ### Long /// /// This is set by placing a word (no spaces) after a leading `--`. /// /// Shorts may be followed by /// * The multiple denotation `...` /// * Value notation /// /// Example longs are as follows (`--some`, and `--rapid`): /// /// ```notrust /// -s, --some 'some flag' /// --rapid=[FILE] 'some option' /// ``` /// /// ### Values (Value Notation) /// /// This is set by placing a word(s) between `[]` or `<>` optionally after `=` (although this /// is cosmetic only and does not affect functionality). If an explicit name has **not** been /// set, using `<>` will denote a required argument, and `[]` will denote an optional argument /// /// Values may be followed by /// * The multiple denotation `...` /// * More Value notation /// /// More than one value will also implicitly set the arguments number of values, i.e. having /// two values, `--option [val1] [val2]` specifies that in order for option to be satisified it /// must receive exactly two values /// /// Example values are as follows (`FILE`, and `SPEED`): /// /// ```notrust /// -s, --some [FILE] 'some option' /// --rapid=... 'some required multiple option' /// ``` /// /// ### Help String /// /// The help string is denoted between a pair of single quotes `''` and may contain any /// characters. /// /// Example help strings are as follows: /// /// ```notrust /// -s, --some [FILE] 'some option' /// --rapid=... 'some required multiple option' /// ``` /// /// ### Additional Sigils /// /// Multiple notation `...` (three consecutive dots/periods) specifies that this argument may /// be used multiple times. Do not confuse multiple occurrences (`...`) with multiple values. /// `--option val1 val2` is a single occurrence with multiple values. `--flag --flag` is /// multiple occurrences (and then you can obviously have instances of both as well) /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// App::new("prog") /// .args(&[ /// Arg::from_usage("--config 'a required file for the configuration and no short'"), /// Arg::from_usage("-d, --debug... 
'turns on debugging information and allows multiples'"), /// Arg::from_usage("[input] 'an optional input file to use'") /// ]) /// # ; /// ``` /// [`Arg`]: ./struct.Arg.html /// [`Arg::from_usage`]: ./struct.Arg.html#method.from_usage pub fn from_usage(u: &'a str) -> Self { let parser = UsageParser::from_usage(u); parser.parse() } /// Sets the short version of the argument without the preceding `-`. /// /// By default `clap` automatically assigns `V` and `h` to the auto-generated `version` and /// `help` arguments respectively. You may use the uppercase `V` or lowercase `h` for your own /// arguments, in which case `clap` simply will not assign those to the auto-generated /// `version` or `help` arguments. /// /// **NOTE:** Any leading `-` characters will be stripped, and only the first /// non `-` character will be used as the [`short`] version /// /// # Examples /// /// To set [`short`] use a single valid UTF-8 code point. If you supply a leading `-` such as /// `-c`, the `-` will be stripped. /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .short("c") /// # ; /// ``` /// /// Setting [`short`] allows using the argument via a single hyphen (`-`) such as `-c` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("config") /// .short("c")) /// .get_matches_from(vec![ /// "prog", "-c" /// ]); /// /// assert!(m.is_present("config")); /// ``` /// [`short`]: ./struct.Arg.html#method.short pub fn short>(mut self, s: S) -> Self { self.s.short = s.as_ref().trim_left_matches(|c| c == '-').chars().next(); self } /// Sets the long version of the argument without the preceding `--`. /// /// By default `clap` automatically assigns `version` and `help` to the auto-generated /// `version` and `help` arguments respectively. You may use the word `version` or `help` for /// the long form of your own arguments, in which case `clap` simply will not assign those to /// the auto-generated `version` or `help` arguments. /// /// **NOTE:** Any leading `-` characters will be stripped /// /// # Examples /// /// To set `long` use a word containing valid UTF-8 codepoints. If you supply a double leading /// `--` such as `--config` they will be stripped. Hyphens in the middle of the word, however, /// will *not* be stripped (i.e. `config-file` is allowed) /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("cfg") /// .long("config") /// # ; /// ``` /// /// Setting `long` allows using the argument via a double hyphen (`--`) such as `--config` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config")) /// .get_matches_from(vec![ /// "prog", "--config" /// ]); /// /// assert!(m.is_present("cfg")); /// ``` pub fn long(mut self, l: &'b str) -> Self { self.s.long = Some(l.trim_left_matches(|c| c == '-')); self } /// Allows adding a [`Arg`] alias, which function as "hidden" arguments that /// automatically dispatch as if this argument was used. This is more efficient, and easier /// than creating multiple hidden arguments as one only needs to check for the existence of /// this command, and not all variants. 
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("test") /// .long("test") /// .alias("alias") /// .takes_value(true)) /// .get_matches_from(vec![ /// "prog", "--alias", "cool" /// ]); /// assert!(m.is_present("test")); /// assert_eq!(m.value_of("test"), Some("cool")); /// ``` /// [`Arg`]: ./struct.Arg.html pub fn alias<S: Into<&'b str>>(mut self, name: S) -> Self { if let Some(ref mut als) = self.s.aliases { als.push((name.into(), false)); } else { self.s.aliases = Some(vec![(name.into(), false)]); } self } /// Allows adding [`Arg`] aliases, which function as "hidden" arguments that /// automatically dispatch as if this argument was used. This is more efficient, and easier /// than creating multiple hidden arguments as one only needs to check for the existence of /// this one argument, and not all variants. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("test") /// .long("test") /// .aliases(&["do-stuff", "do-tests", "tests"]) /// .help("the file to add") /// .required(false)) /// .get_matches_from(vec![ /// "prog", "--do-tests" /// ]); /// assert!(m.is_present("test")); /// ``` /// [`Arg`]: ./struct.Arg.html pub fn aliases(mut self, names: &[&'b str]) -> Self { if let Some(ref mut als) = self.s.aliases { for n in names { als.push((n, false)); } } else { self.s.aliases = Some(names.iter().map(|n| (*n, false)).collect::<Vec<_>>()); } self } /// Allows adding an [`Arg`] alias that functions exactly like those defined with /// [`Arg::alias`], except that they are visible inside the help message. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("test") /// .visible_alias("something-awesome") /// .long("test") /// .takes_value(true)) /// .get_matches_from(vec![ /// "prog", "--something-awesome", "coffee" /// ]); /// assert!(m.is_present("test")); /// assert_eq!(m.value_of("test"), Some("coffee")); /// ``` /// [`Arg`]: ./struct.Arg.html /// [`Arg::alias`]: ./struct.Arg.html#method.alias pub fn visible_alias<S: Into<&'b str>>(mut self, name: S) -> Self { if let Some(ref mut als) = self.s.aliases { als.push((name.into(), true)); } else { self.s.aliases = Some(vec![(name.into(), true)]); } self } /// Allows adding multiple [`Arg`] aliases that function exactly like those defined /// with [`Arg::aliases`], except that they are visible inside the help message. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("test") /// .long("test") /// .visible_aliases(&["something", "awesome", "cool"])) /// .get_matches_from(vec![ /// "prog", "--awesome" /// ]); /// assert!(m.is_present("test")); /// ``` /// [`Arg`]: ./struct.Arg.html /// [`Arg::aliases`]: ./struct.Arg.html#method.aliases pub fn visible_aliases(mut self, names: &[&'b str]) -> Self { if let Some(ref mut als) = self.s.aliases { for n in names { als.push((n, true)); } } else { self.s.aliases = Some(names.iter().map(|n| (*n, true)).collect::<Vec<_>>()); } self } /// Sets the short help text of the argument that will be displayed to the user when they print /// the help information with `-h`. Typically, this is a short (one line) description of the /// arg. 
/// /// **NOTE:** If only `Arg::help` is provided, and not [`Arg::long_help`] but the user requests /// `--help` clap will still display the contents of `help` appropriately /// /// **NOTE:** Only `Arg::help` is used in completion script generation in order to be concise /// /// # Examples /// /// Any valid UTF-8 is allowed in the help text. The one exception is when one wishes to /// include a newline in the help text and have the following text be properly aligned with all /// the other help text. /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .help("The config file used by the myprog") /// # ; /// ``` /// /// Setting `help` displays a short message to the side of the argument when the user passes /// `-h` or `--help` (by default). /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// --config Some help text describing the --config arg /// -h, --help Prints help information /// -V, --version Prints version information /// ``` /// [`Arg::long_help`]: ./struct.Arg.html#method.long_help pub fn help(mut self, h: &'b str) -> Self { self.b.help = Some(h); self } /// Sets the long help text of the argument that will be displayed to the user when they print /// the help information with `--help`. Typically this a more detailed (multi-line) message /// that describes the arg. /// /// **NOTE:** If only `long_help` is provided, and not [`Arg::help`] but the user requests `-h` /// clap will still display the contents of `long_help` appropriately /// /// **NOTE:** Only [`Arg::help`] is used in completion script generation in order to be concise /// /// # Examples /// /// Any valid UTF-8 is allowed in the help text. The one exception is when one wishes to /// include a newline in the help text and have the following text be properly aligned with all /// the other help text. /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .long_help( /// "The config file used by the myprog must be in JSON format /// with only valid keys and may not contain other nonsense /// that cannot be read by this program. Obviously I'm going on /// and on, so I'll stop now.") /// # ; /// ``` /// /// Setting `help` displays a short message to the side of the argument when the user passes /// `-h` or `--help` (by default). /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .long_help( /// "The config file used by the myprog must be in JSON format /// with only valid keys and may not contain other nonsense /// that cannot be read by this program. Obviously I'm going on /// and on, so I'll stop now.")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// --config /// The config file used by the myprog must be in JSON format /// with only valid keys and may not contain other nonsense /// that cannot be read by this program. Obviously I'm going on /// and on, so I'll stop now. 
/// /// -h, --help /// Prints help information /// /// -V, --version /// Prints version information /// ``` /// [`Arg::help`]: ./struct.Arg.html#method.help pub fn long_help(mut self, h: &'b str) -> Self { self.b.long_help = Some(h); self } /// Specifies that this arg is the last, or final, positional argument (i.e. has the highest /// index) and is *only* able to be accessed via the `--` syntax (i.e. `$ prog args -- /// last_arg`). Even if no other arguments are left to parse, if the user omits the `--` syntax /// they will receive an [`UnknownArgument`] error. Setting an argument to `.last(true)` also /// allows one to access this arg early using the `--` syntax. Accessing an arg early, even with /// the `--` syntax, is otherwise not possible. /// /// **NOTE:** This will change the usage string to look like `$ prog [FLAGS] [-- <ARG>]` if /// `ARG` is marked as `.last(true)`. /// /// **NOTE:** This setting will imply [`AppSettings::DontCollapseArgsInUsage`] because failing /// to set this can make the usage string very confusing. /// /// **NOTE**: This setting only applies to positional arguments, and has no effect on FLAGS / /// OPTIONS /// /// **CAUTION:** Setting an argument to `.last(true)` *and* having child subcommands is not /// recommended with the exception of *also* using [`AppSettings::ArgsNegateSubcommands`] /// (or [`AppSettings::SubcommandsNegateReqs`] if the argument marked `.last(true)` is also /// marked [`.required(true)`]) /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("args") /// .last(true) /// # ; /// ``` /// /// Setting [`Arg::last(true)`] ensures the arg has the highest [index] of all positional args /// and requires that the `--` syntax be used to access it early. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("first")) /// .arg(Arg::with_name("second")) /// .arg(Arg::with_name("third").last(true)) /// .get_matches_from_safe(vec![ /// "prog", "one", "--", "three" /// ]); /// /// assert!(res.is_ok()); /// let m = res.unwrap(); /// assert_eq!(m.value_of("third"), Some("three")); /// assert!(m.value_of("second").is_none()); /// ``` /// /// Even if the positional argument marked `.last(true)` is the only argument left to parse, /// failing to use the `--` syntax results in an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("first")) /// .arg(Arg::with_name("second")) /// .arg(Arg::with_name("third").last(true)) /// .get_matches_from_safe(vec![ /// "prog", "one", "two", "three" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::UnknownArgument); /// ``` /// [`Arg::last(true)`]: ./struct.Arg.html#method.last /// [index]: ./struct.Arg.html#method.index /// [`AppSettings::DontCollapseArgsInUsage`]: ./enum.AppSettings.html#variant.DontCollapseArgsInUsage /// [`AppSettings::ArgsNegateSubcommands`]: ./enum.AppSettings.html#variant.ArgsNegateSubcommands /// [`AppSettings::SubcommandsNegateReqs`]: ./enum.AppSettings.html#variant.SubcommandsNegateReqs /// [`.required(true)`]: ./struct.Arg.html#method.required /// [`UnknownArgument`]: ./enum.ErrorKind.html#variant.UnknownArgument pub fn last(self, l: bool) -> Self { if l { self.set(ArgSettings::Last) } else { self.unset(ArgSettings::Last) } } /// Sets whether or not the argument is required by default. Required by default means it is /// required when no other conflicting rules have been evaluated. 
Conflicting rules take /// precedence over being required. **Default:** `false` /// /// **NOTE:** Flags (i.e. not positional, or arguments that take values) cannot be required by /// default. This is simply because if a flag should be required, it should simply be implied /// as no additional information is required from user. Flags by their very nature are simply /// yes/no, or true/false. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required(true) /// # ; /// ``` /// /// Setting [`Arg::required(true)`] requires that the argument be used at runtime. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required(true) /// .takes_value(true) /// .long("config")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "file.conf" /// ]); /// /// assert!(res.is_ok()); /// ``` /// /// Setting [`Arg::required(true)`] and *not* supplying that argument is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required(true) /// .takes_value(true) /// .long("config")) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::required(true)`]: ./struct.Arg.html#method.required pub fn required(self, r: bool) -> Self { if r { self.set(ArgSettings::Required) } else { self.unset(ArgSettings::Required) } } /// Requires that options use the `--option=val` syntax (i.e. an equals between the option and /// associated value) **Default:** `false` /// /// **NOTE:** This setting also removes the default of allowing empty values and implies /// [`Arg::empty_values(false)`]. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .long("config") /// .takes_value(true) /// .require_equals(true) /// # ; /// ``` /// /// Setting [`Arg::require_equals(true)`] requires that the option have an equals sign between /// it and the associated value. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .require_equals(true) /// .takes_value(true) /// .long("config")) /// .get_matches_from_safe(vec![ /// "prog", "--config=file.conf" /// ]); /// /// assert!(res.is_ok()); /// ``` /// /// Setting [`Arg::require_equals(true)`] and *not* supplying the equals will cause an error /// unless [`Arg::empty_values(true)`] is set. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .require_equals(true) /// .takes_value(true) /// .long("config")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "file.conf" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::EmptyValue); /// ``` /// [`Arg::require_equals(true)`]: ./struct.Arg.html#method.require_equals /// [`Arg::empty_values(true)`]: ./struct.Arg.html#method.empty_values /// [`Arg::empty_values(false)`]: ./struct.Arg.html#method.empty_values pub fn require_equals(mut self, r: bool) -> Self { if r { self.unsetb(ArgSettings::EmptyValues); self.set(ArgSettings::RequireEquals) } else { self.unset(ArgSettings::RequireEquals) } } /// Allows values which start with a leading hyphen (`-`) /// /// **WARNING**: Take caution when using this setting combined with [`Arg::multiple(true)`], as /// this becomes ambiguous `$ prog --arg -- -- val`. 
All three `--, --, val` will be values /// when the user may have thought the second `--` would constitute the normal, "Only /// positional args follow" idiom. To fix this, consider using [`Arg::number_of_values(1)`] /// /// **WARNING**: When building your CLIs, consider the effects of allowing leading hyphens and /// the user passing in a value that matches a valid short. For example `prog -opt -F` where /// `-F` is supposed to be a value, yet `-F` is *also* a valid short for another arg. Care should /// be taken when designing these args. This is compounded by the ability to "stack" /// short args, i.e. when `-val` is supposed to be a value but `-v`, `-a`, and `-l` are all valid /// shorts. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("pattern") /// .allow_hyphen_values(true) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("pat") /// .allow_hyphen_values(true) /// .takes_value(true) /// .long("pattern")) /// .get_matches_from(vec![ /// "prog", "--pattern", "-file" /// ]); /// /// assert_eq!(m.value_of("pat"), Some("-file")); /// ``` /// /// Not setting [`Arg::allow_hyphen_values(true)`] and supplying a value which starts with a /// hyphen is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("pat") /// .takes_value(true) /// .long("pattern")) /// .get_matches_from_safe(vec![ /// "prog", "--pattern", "-file" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::UnknownArgument); /// ``` /// [`Arg::allow_hyphen_values(true)`]: ./struct.Arg.html#method.allow_hyphen_values /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple /// [`Arg::number_of_values(1)`]: ./struct.Arg.html#method.number_of_values pub fn allow_hyphen_values(self, a: bool) -> Self { if a { self.set(ArgSettings::AllowLeadingHyphen) } else { self.unset(ArgSettings::AllowLeadingHyphen) } } /// Sets an arg that overrides this arg's required setting. (i.e. this arg will be required /// unless this other argument is present). /// /// **Pro Tip:** Using [`Arg::required_unless`] implies [`Arg::required`] and is therefore not /// mandatory to also set. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required_unless("debug") /// # ; /// ``` /// /// Setting [`Arg::required_unless(name)`] requires that the argument be used at runtime /// *unless* `name` is present. In the following example, the required argument is *not* /// provided, but it's not an error because the `unless` arg has been supplied. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless("dbg") /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .get_matches_from_safe(vec![ /// "prog", "--debug" /// ]); /// /// assert!(res.is_ok()); /// ``` /// /// Setting [`Arg::required_unless(name)`] and *not* supplying `name` or this arg is an error. 
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless("dbg") /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::required_unless`]: ./struct.Arg.html#method.required_unless /// [`Arg::required`]: ./struct.Arg.html#method.required /// [`Arg::required_unless(name)`]: ./struct.Arg.html#method.required_unless pub fn required_unless(mut self, name: &'a str) -> Self { if let Some(ref mut vec) = self.b.r_unless { vec.push(name); } else { self.b.r_unless = Some(vec![name]); } self.required(true) } /// Sets args that override this arg's required setting. (i.e. this arg will be required unless /// all these other arguments are present). /// /// **NOTE:** If you wish for this argument to only be required if *one of* these args are /// present see [`Arg::required_unless_one`] /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required_unless_all(&["cfg", "dbg"]) /// # ; /// ``` /// /// Setting [`Arg::required_unless_all(names)`] requires that the argument be used at runtime /// *unless* *all* the args in `names` are present. In the following example, the required /// argument is *not* provided, but it's not an error because all the `unless` args have been /// supplied. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless_all(&["dbg", "infile"]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .arg(Arg::with_name("infile") /// .short("i") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog", "--debug", "-i", "file" /// ]); /// /// assert!(res.is_ok()); /// ``` /// /// Setting [`Arg::required_unless_all(names)`] and *not* supplying *all* of `names` or this /// arg is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless_all(&["dbg", "infile"]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .arg(Arg::with_name("infile") /// .short("i") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::required_unless_one`]: ./struct.Arg.html#method.required_unless_one /// [`Arg::required_unless_all(names)`]: ./struct.Arg.html#method.required_unless_all pub fn required_unless_all(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.r_unless { for s in names { vec.push(s); } } else { self.b.r_unless = Some(names.iter().copied().collect()); } self.setb(ArgSettings::RequiredUnlessAll); self.required(true) } /// Sets args that override this arg's [required] setting. (i.e. this arg will be required /// unless *at least one of* these other arguments are present). 
/// /// **NOTE:** If you wish for this argument to only be required if *all of* these args are /// present see [`Arg::required_unless_all`] /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required_unless_one(&["cfg", "dbg"]) /// # ; /// ``` /// /// Setting [`Arg::required_unless_one(names)`] requires that the argument be used at runtime /// *unless* *at least one of* the args in `names` is present. In the following example, the /// required argument is *not* provided, but it's not an error because one of the `unless` args /// has been supplied. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless_one(&["dbg", "infile"]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .arg(Arg::with_name("infile") /// .short("i") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog", "--debug" /// ]); /// /// assert!(res.is_ok()); /// ``` /// /// Setting [`Arg::required_unless_one(names)`] and *not* supplying *at least one of* `names` /// or this arg is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_unless_one(&["dbg", "infile"]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("dbg") /// .long("debug")) /// .arg(Arg::with_name("infile") /// .short("i") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [required]: ./struct.Arg.html#method.required /// [`Arg::required_unless_one(names)`]: ./struct.Arg.html#method.required_unless_one /// [`Arg::required_unless_all`]: ./struct.Arg.html#method.required_unless_all pub fn required_unless_one(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.r_unless { for s in names { vec.push(s); } } else { self.b.r_unless = Some(names.iter().copied().collect()); } self.required(true) } /// Sets a conflicting argument by name. I.e. when using this argument, /// the following argument can't be present and vice versa. /// /// **NOTE:** Conflicting rules take precedence over being required by default. Conflict rules /// only need to be set for one of the two arguments; they do not need to be set for each. /// /// **NOTE:** Defining a conflict is two-way, but does *not* need to be defined for both arguments /// (i.e. if A conflicts with B, defining A.conflicts_with(B) is sufficient. You do not /// need to also do B.conflicts_with(A)) /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .conflicts_with("debug") /// # ; /// ``` /// /// Setting a conflicting argument, and having both arguments present at runtime, is an error. 
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .conflicts_with("debug") /// .long("config")) /// .arg(Arg::with_name("debug") /// .long("debug")) /// .get_matches_from_safe(vec![ /// "prog", "--debug", "--config", "file.conf" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::ArgumentConflict); /// ``` pub fn conflicts_with(mut self, name: &'a str) -> Self { if let Some(ref mut vec) = self.b.blacklist { vec.push(name); } else { self.b.blacklist = Some(vec![name]); } self } /// The same as [`Arg::conflicts_with`] but allows specifying multiple two-way conflicts per /// argument. /// /// **NOTE:** Conflicting rules take precedence over being required by default. Conflict rules /// only need to be set for one of the two arguments; they do not need to be set for each. /// /// **NOTE:** Defining a conflict is two-way, but does *not* need to be defined for both arguments /// (i.e. if A conflicts with B, defining A.conflicts_with(B) is sufficient. You do not /// need to also do B.conflicts_with(A)) /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .conflicts_with_all(&["debug", "input"]) /// # ; /// ``` /// /// Setting conflicting arguments, and having this argument present at runtime alongside any of /// them, is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .conflicts_with_all(&["debug", "input"]) /// .long("config")) /// .arg(Arg::with_name("debug") /// .long("debug")) /// .arg(Arg::with_name("input") /// .index(1)) /// .get_matches_from_safe(vec![ /// "prog", "--config", "file.conf", "file.txt" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::ArgumentConflict); /// ``` /// [`Arg::conflicts_with`]: ./struct.Arg.html#method.conflicts_with pub fn conflicts_with_all(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.blacklist { for s in names { vec.push(s); } } else { self.b.blacklist = Some(names.iter().copied().collect()); } self } /// Sets an overridable argument by name. I.e. this argument and the following argument /// will override each other in POSIX style (whichever argument was specified at runtime /// **last** "wins") /// /// **NOTE:** When an argument is overridden it is essentially as if it never was used; any /// conflicts, requirements, etc. are evaluated **after** all "overrides" have been removed /// /// **WARNING:** Positional arguments cannot override themselves (or we would never be able /// to advance to the next positional). If a positional argument lists itself as an override, /// it is simply ignored. 
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::from_usage("-f, --flag 'some flag'") /// .conflicts_with("debug")) /// .arg(Arg::from_usage("-d, --debug 'other flag'")) /// .arg(Arg::from_usage("-c, --color 'third flag'") /// .overrides_with("flag")) /// .get_matches_from(vec![ /// "prog", "-f", "-d", "-c"]); /// // ^~~~~~~~~~~~^~~~~ flag is overridden by color /// /// assert!(m.is_present("color")); /// assert!(m.is_present("debug")); // even though flag conflicts with debug, it's as if flag /// // was never used because it was overridden with color /// assert!(!m.is_present("flag")); /// ``` /// Care must be taken when using this setting, and having an arg override itself. This /// is common practice when supporting things like shell aliases, config files, etc. /// However, when combined with multiple values, it can get dicey. /// Here is how clap handles such situations: /// /// When a flag overrides itself, it's as if the flag was only ever used once (essentially /// preventing an "Unexpected multiple usage" error): /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("posix") /// .arg(Arg::from_usage("--flag 'some flag'").overrides_with("flag")) /// .get_matches_from(vec!["posix", "--flag", "--flag"]); /// assert!(m.is_present("flag")); /// assert_eq!(m.occurrences_of("flag"), 1); /// ``` /// Making an arg `multiple(true)` and having it override itself is essentially meaningless. Therefore /// clap ignores an override of self if it's a flag and it already accepts multiple occurrences. /// /// ``` /// # use clap::{App, Arg}; /// let m = App::new("posix") /// .arg(Arg::from_usage("--flag... 'some flag'").overrides_with("flag")) /// .get_matches_from(vec!["", "--flag", "--flag", "--flag", "--flag"]); /// assert!(m.is_present("flag")); /// assert_eq!(m.occurrences_of("flag"), 4); /// ``` /// Now notice with options (which *do not* set `multiple(true)`), it's as if only the last /// occurrence happened. /// /// ``` /// # use clap::{App, Arg}; /// let m = App::new("posix") /// .arg(Arg::from_usage("--opt [val] 'some option'").overrides_with("opt")) /// .get_matches_from(vec!["", "--opt=some", "--opt=other"]); /// assert!(m.is_present("opt")); /// assert_eq!(m.occurrences_of("opt"), 1); /// assert_eq!(m.value_of("opt"), Some("other")); /// ``` /// /// Just like flags, options with `multiple(true)` set will ignore the "override self" setting. /// /// ``` /// # use clap::{App, Arg}; /// let m = App::new("posix") /// .arg(Arg::from_usage("--opt [val]... 'some option'") /// .overrides_with("opt")) /// .get_matches_from(vec!["", "--opt", "first", "over", "--opt", "other", "val"]); /// assert!(m.is_present("opt")); /// assert_eq!(m.occurrences_of("opt"), 2); /// assert_eq!(m.values_of("opt").unwrap().collect::<Vec<_>>(), &["first", "over", "other", "val"]); /// ``` /// /// A safe thing to do if you'd like to support an option which supports multiple values, but /// also is "overridable" by itself, is to use `use_delimiter(false)` and *not* use /// `multiple(true)` while telling users to separate values with a comma (i.e. 
`val1,val2`) /// /// ``` /// # use clap::{App, Arg}; /// let m = App::new("posix") /// .arg(Arg::from_usage("--opt [val] 'some option'") /// .overrides_with("opt") /// .use_delimiter(false)) /// .get_matches_from(vec!["", "--opt=some,other", "--opt=one,two"]); /// assert!(m.is_present("opt")); /// assert_eq!(m.occurrences_of("opt"), 1); /// assert_eq!(m.values_of("opt").unwrap().collect::<Vec<_>>(), &["one,two"]); /// ``` pub fn overrides_with(mut self, name: &'a str) -> Self { if let Some(ref mut vec) = self.b.overrides { vec.push(name); } else { self.b.overrides = Some(vec![name]); } self } /// Sets multiple mutually overridable arguments by name. I.e. this argument and the following /// arguments will override each other in POSIX style (whichever argument was specified at /// runtime **last** "wins") /// /// **NOTE:** When an argument is overridden it is essentially as if it never was used; any /// conflicts, requirements, etc. are evaluated **after** all "overrides" have been removed /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::from_usage("-f, --flag 'some flag'") /// .conflicts_with("color")) /// .arg(Arg::from_usage("-d, --debug 'other flag'")) /// .arg(Arg::from_usage("-c, --color 'third flag'") /// .overrides_with_all(&["flag", "debug"])) /// .get_matches_from(vec![ /// "prog", "-f", "-d", "-c"]); /// // ^~~~~~^~~~~~~~~ flag and debug are overridden by color /// /// assert!(m.is_present("color")); // even though flag conflicts with color, it's as if flag /// // and debug were never used because they were overridden /// // with color /// assert!(!m.is_present("debug")); /// assert!(!m.is_present("flag")); /// ``` pub fn overrides_with_all(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.overrides { for s in names { vec.push(s); } } else { self.b.overrides = Some(names.iter().copied().collect()); } self } /// Sets an argument by name that is required when this one is present. I.e. when /// using this argument, the following argument *must* be present. /// /// **NOTE:** [Conflicting] rules and [override] rules take precedence over being required /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .requires("input") /// # ; /// ``` /// /// Setting [`Arg::requires(name)`] requires that the argument be used at runtime if the /// defining argument is used. If the defining argument isn't used, the other argument isn't /// required /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires("input") /// .long("config")) /// .arg(Arg::with_name("input") /// .index(1)) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_ok()); // We didn't use cfg, so input wasn't required /// ``` /// /// Setting [`Arg::requires(name)`] and *not* supplying that argument is an error. 
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires("input") /// .long("config")) /// .arg(Arg::with_name("input") /// .index(1)) /// .get_matches_from_safe(vec![ /// "prog", "--config", "file.conf" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::requires(name)`]: ./struct.Arg.html#method.requires /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [override]: ./struct.Arg.html#method.overrides_with pub fn requires(mut self, name: &'a str) -> Self { if let Some(ref mut vec) = self.b.requires { vec.push((None, name)); } else { self.b.requires = Some(vec![(None, name)]); } self } /// Allows a conditional requirement. The requirement will only become valid if this arg's value /// equals `val`. /// /// **NOTE:** If using YAML the values should be laid out as follows /// /// ```yaml /// requires_if: /// - [val, arg] /// ``` /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .requires_if("val", "arg") /// # ; /// ``` /// /// Setting [`Arg::requires_if(val, arg)`] requires that the `arg` be used at runtime if the /// defining argument's value is equal to `val`. If the defining argument is anything other than /// `val`, the other argument isn't required. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires_if("my.cfg", "other") /// .long("config")) /// .arg(Arg::with_name("other")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "some.cfg" /// ]); /// /// assert!(res.is_ok()); // We didn't use --config=my.cfg, so other wasn't required /// ``` /// /// Setting [`Arg::requires_if(val, arg)`] and setting the value to `val` but *not* supplying /// `arg` is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires_if("my.cfg", "input") /// .long("config")) /// .arg(Arg::with_name("input")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "my.cfg" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::requires(name)`]: ./struct.Arg.html#method.requires /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [override]: ./struct.Arg.html#method.overrides_with pub fn requires_if(mut self, val: &'b str, arg: &'a str) -> Self { if let Some(ref mut vec) = self.b.requires { vec.push((Some(val), arg)); } else { self.b.requires = Some(vec![(Some(val), arg)]); } self } /// Allows multiple conditional requirements. The requirement will only become valid if this arg's value /// equals `val`. /// /// **NOTE:** If using YAML the values should be laid out as follows /// /// ```yaml /// requires_if: /// - [val, arg] /// - [val2, arg2] /// ``` /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .requires_ifs(&[ /// ("val", "arg"), /// ("other_val", "arg2"), /// ]) /// # ; /// ``` /// /// Setting [`Arg::requires_ifs(&["val", "arg"])`] requires that the `arg` be used at runtime if the /// defining argument's value is equal to `val`. If the defining argument's value is anything other /// than `val`, `arg` isn't required. 
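/// /// For contrast (a minimal sketch added for illustration, not part of the original docs), a value that matches none of the listed conditions leaves the other arguments optional, while the example below shows the error case when a condition is met: /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires_ifs(&[ /// ("special.conf", "opt"), /// ("other.conf", "other"), /// ]) /// .long("config")) /// .arg(Arg::with_name("opt") /// .long("option") /// .takes_value(true)) /// .arg(Arg::with_name("other")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "normal.conf" /// ]); /// /// // neither special.conf nor other.conf was given, so nothing else is required /// assert!(res.is_ok()); /// ``` 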
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires_ifs(&[ /// ("special.conf", "opt"), /// ("other.conf", "other"), /// ]) /// .long("config")) /// .arg(Arg::with_name("opt") /// .long("option") /// .takes_value(true)) /// .arg(Arg::with_name("other")) /// .get_matches_from_safe(vec![ /// "prog", "--config", "special.conf" /// ]); /// /// assert!(res.is_err()); // We used --config=special.conf so --option is required /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::requires(name)`]: ./struct.Arg.html#method.requires /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [override]: ./struct.Arg.html#method.overrides_with pub fn requires_ifs(mut self, ifs: &[(&'b str, &'a str)]) -> Self { if let Some(ref mut vec) = self.b.requires { for &(val, arg) in ifs { vec.push((Some(val), arg)); } } else { let mut vec = vec![]; for &(val, arg) in ifs { vec.push((Some(val), arg)); } self.b.requires = Some(vec); } self } /// Allows specifying that an argument is [required] conditionally. The requirement will only /// become valid if the specified `arg`'s value equals `val`. /// /// **NOTE:** If using YAML the values should be laid out as follows /// /// ```yaml /// required_if: /// - [arg, val] /// ``` /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required_if("other_arg", "value") /// # ; /// ``` /// /// Setting [`Arg::required_if(arg, val)`] makes this arg required if the `arg` is used at /// runtime and it's value is equal to `val`. If the `arg`'s value is anything other than `val`, /// this argument isn't required. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .required_if("other", "special") /// .long("config")) /// .arg(Arg::with_name("other") /// .long("other") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog", "--other", "not-special" /// ]); /// /// assert!(res.is_ok()); // We didn't use --other=special, so "cfg" wasn't required /// ``` /// /// Setting [`Arg::required_if(arg, val)`] and having `arg` used with a value of `val` but *not* /// using this arg is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .required_if("other", "special") /// .long("config")) /// .arg(Arg::with_name("other") /// .long("other") /// .takes_value(true)) /// .get_matches_from_safe(vec![ /// "prog", "--other", "special" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::requires(name)`]: ./struct.Arg.html#method.requires /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [required]: ./struct.Arg.html#method.required pub fn required_if(mut self, arg: &'a str, val: &'b str) -> Self { if let Some(ref mut vec) = self.r_ifs { vec.push((arg, val)); } else { self.r_ifs = Some(vec![(arg, val)]); } self } /// Allows specifying that an argument is [required] based on multiple conditions. The /// conditions are set up in a `(arg, val)` style tuple. The requirement will only become valid /// if one of the specified `arg`'s value equals it's corresponding `val`. 
/// /// **NOTE:** If using YAML the values should be laid out as follows /// /// ```yaml /// required_if: /// - [arg, val] /// - [arg2, val2] /// ``` /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .required_ifs(&[ /// ("extra", "val"), /// ("option", "spec") /// ]) /// # ; /// ``` /// /// Setting [`Arg::required_ifs(&[(arg, val)])`] makes this arg required if any of the `arg`s /// are used at runtime and it's corresponding value is equal to `val`. If the `arg`'s value is /// anything other than `val`, this argument isn't required. /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_ifs(&[ /// ("extra", "val"), /// ("option", "spec") /// ]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("extra") /// .takes_value(true) /// .long("extra")) /// .arg(Arg::with_name("option") /// .takes_value(true) /// .long("option")) /// .get_matches_from_safe(vec![ /// "prog", "--option", "other" /// ]); /// /// assert!(res.is_ok()); // We didn't use --option=spec, or --extra=val so "cfg" isn't required /// ``` /// /// Setting [`Arg::required_ifs(&[(arg, val)])`] and having any of the `arg`s used with it's /// value of `val` but *not* using this arg is an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .required_ifs(&[ /// ("extra", "val"), /// ("option", "spec") /// ]) /// .takes_value(true) /// .long("config")) /// .arg(Arg::with_name("extra") /// .takes_value(true) /// .long("extra")) /// .arg(Arg::with_name("option") /// .takes_value(true) /// .long("option")) /// .get_matches_from_safe(vec![ /// "prog", "--option", "spec" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [`Arg::requires(name)`]: ./struct.Arg.html#method.requires /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [required]: ./struct.Arg.html#method.required pub fn required_ifs(mut self, ifs: &[(&'a str, &'b str)]) -> Self { if let Some(ref mut vec) = self.r_ifs { for r_if in ifs { vec.push((r_if.0, r_if.1)); } } else { let mut vec = vec![]; for r_if in ifs { vec.push((r_if.0, r_if.1)); } self.r_ifs = Some(vec); } self } /// Sets multiple arguments by names that are required when this one is present I.e. when /// using this argument, the following arguments *must* be present. /// /// **NOTE:** [Conflicting] rules and [override] rules take precedence over being required /// by default. /// /// # Examples /// /// ```rust /// # use clap::Arg; /// Arg::with_name("config") /// .requires_all(&["input", "output"]) /// # ; /// ``` /// /// Setting [`Arg::requires_all(&[arg, arg2])`] requires that all the arguments be used at /// runtime if the defining argument is used. If the defining argument isn't used, the other /// argument isn't required /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires("input") /// .long("config")) /// .arg(Arg::with_name("input") /// .index(1)) /// .arg(Arg::with_name("output") /// .index(2)) /// .get_matches_from_safe(vec![ /// "prog" /// ]); /// /// assert!(res.is_ok()); // We didn't use cfg, so input and output weren't required /// ``` /// /// Setting [`Arg::requires_all(&[arg, arg2])`] and *not* supplying all the arguments is an /// error. 
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .takes_value(true) /// .requires_all(&["input", "output"]) /// .long("config")) /// .arg(Arg::with_name("input") /// .index(1)) /// .arg(Arg::with_name("output") /// .index(2)) /// .get_matches_from_safe(vec![ /// "prog", "--config", "file.conf", "in.txt" /// ]); /// /// assert!(res.is_err()); /// // We didn't use output /// assert_eq!(res.unwrap_err().kind, ErrorKind::MissingRequiredArgument); /// ``` /// [Conflicting]: ./struct.Arg.html#method.conflicts_with /// [override]: ./struct.Arg.html#method.overrides_with /// [`Arg::requires_all(&[arg, arg2])`]: ./struct.Arg.html#method.requires_all pub fn requires_all(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.requires { for s in names { vec.push((None, s)); } } else { let mut vec = vec![]; for s in names { vec.push((None, *s)); } self.b.requires = Some(vec); } self } /// Specifies that the argument takes a value at run time. /// /// **NOTE:** values for arguments may be specified in any of the following methods /// /// * Using a space such as `-o value` or `--option value` /// * Using an equals and no space such as `-o=value` or `--option=value` /// * Use a short and no space such as `-ovalue` /// /// **NOTE:** By default, args which allow [multiple values] are delimited by commas, meaning /// `--option=val1,val2,val3` is three values for the `--option` argument. If you wish to /// change the delimiter to another character you can use [`Arg::value_delimiter(char)`], /// alternatively you can turn delimiting values **OFF** by using [`Arg::use_delimiter(false)`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .takes_value(true) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .takes_value(true)) /// .get_matches_from(vec![ /// "prog", "--mode", "fast" /// ]); /// /// assert!(m.is_present("mode")); /// assert_eq!(m.value_of("mode"), Some("fast")); /// ``` /// [`Arg::value_delimiter(char)`]: ./struct.Arg.html#method.value_delimiter /// [`Arg::use_delimiter(false)`]: ./struct.Arg.html#method.use_delimiter /// [multiple values]: ./struct.Arg.html#method.multiple pub fn takes_value(self, tv: bool) -> Self { if tv { self.set(ArgSettings::TakesValue) } else { self.unset(ArgSettings::TakesValue) } } /// Specifies if the possible values of an argument should be displayed in the help text or /// not. Defaults to `false` (i.e. show possible values) /// /// This is useful for args with many values, or ones which are explained elsewhere in the /// help text. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .hide_possible_values(true) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .possible_values(&["fast", "slow"]) /// .takes_value(true) /// .hide_possible_values(true)); /// /// ``` /// /// If we were to run the above program with `--help` the `[values: fast, slow]` portion of /// the help text would be omitted. pub fn hide_possible_values(self, hide: bool) -> Self { if hide { self.set(ArgSettings::HidePossibleValues) } else { self.unset(ArgSettings::HidePossibleValues) } } /// Specifies if the default value of an argument should be displayed in the help text or /// not. Defaults to `false` (i.e. 
show default value) /// /// This is useful when default behavior of an arg is explained elsewhere in the help text. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .hide_default_value(true) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("connect") /// .arg(Arg::with_name("host") /// .long("host") /// .default_value("localhost") /// .hide_default_value(true)); /// /// ``` /// /// If we were to run the above program with `--help` the `[default: localhost]` portion of /// the help text would be omitted. pub fn hide_default_value(self, hide: bool) -> Self { if hide { self.set(ArgSettings::HideDefaultValue) } else { self.unset(ArgSettings::HideDefaultValue) } } /// Specifies the index of a positional argument **starting at** 1. /// /// **NOTE:** The index refers to position according to **other positional argument**. It does /// not define position in the argument list as a whole. /// /// **NOTE:** If no [`Arg::short`], or [`Arg::long`] have been defined, you can optionally /// leave off the `index` method, and the index will be assigned in order of evaluation. /// Utilizing the `index` method allows for setting indexes out of order /// /// **NOTE:** When utilized with [`Arg::multiple(true)`], only the **last** positional argument /// may be defined as multiple (i.e. with the highest index) /// /// # Panics /// /// Although not in this method directly, [`App`] will [`panic!`] if indexes are skipped (such /// as defining `index(1)` and `index(3)` but not `index(2)`, or a positional argument is /// defined as multiple and is not the highest index /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("config") /// .index(1) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("mode") /// .index(1)) /// .arg(Arg::with_name("debug") /// .long("debug")) /// .get_matches_from(vec![ /// "prog", "--debug", "fast" /// ]); /// /// assert!(m.is_present("mode")); /// assert_eq!(m.value_of("mode"), Some("fast")); // notice index(1) means "first positional" /// // *not* first argument /// ``` /// [`Arg::short`]: ./struct.Arg.html#method.short /// [`Arg::long`]: ./struct.Arg.html#method.long /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple /// [`App`]: ./struct.App.html /// [`panic!`]: https://doc.rust-lang.org/std/macro.panic!.html pub fn index(mut self, idx: u64) -> Self { self.index = Some(idx); self } /// Specifies that the argument may appear more than once. For flags, this results /// in the number of occurrences of the flag being recorded. For example `-ddd` or `-d -d -d` /// would count as three occurrences. For options there is a distinct difference in multiple /// occurrences vs multiple values. /// /// For example, `--opt val1 val2` is one occurrence, but two values. Whereas /// `--opt val1 --opt val2` is two occurrences. /// /// **WARNING:** /// /// Setting `multiple(true)` for an [option] with no other details, allows multiple values /// **and** multiple occurrences because it isn't possible to have more occurrences than values /// for options. Because multiple values are allowed, `--option val1 val2 val3` is perfectly /// valid, be careful when designing a CLI where positional arguments are expected after a /// option which accepts multiple values, as `clap` will continue parsing *values* until it /// reaches the max or specific number of values defined, or another flag or option. 
/// /// **Pro Tip**: /// /// It's possible to define an option which allows multiple occurrences, but only one value per /// occurrence. To do this use [`Arg::number_of_values(1)`] in coordination with /// [`Arg::multiple(true)`]. /// /// **WARNING:** /// /// When using args with `multiple(true)` on [options] or [positionals] (i.e. those args that /// accept values) and [subcommands], one needs to consider the possibility of an argument value /// being the same as a valid subcommand. By default `clap` will parse the argument in question /// as a value *only if* a value is possible at that moment. Otherwise it will be parsed as a /// subcommand. In effect, this means using `multiple(true)` with no additional parameters and /// a possible value that coincides with a subcommand name, the subcommand cannot be called /// unless another argument is passed first. /// /// As an example, consider a CLI with an option `--ui-paths=...` and subcommand `signer` /// /// The following would be parsed as values to `--ui-paths`. /// /// ```notrust /// $ program --ui-paths path1 path2 signer /// ``` /// /// This is because `--ui-paths` accepts multiple values. `clap` will continue parsing values /// until another argument is reached and it knows `--ui-paths` is done. /// /// By adding additional parameters to `--ui-paths` we can solve this issue. Consider adding /// [`Arg::number_of_values(1)`] as discussed above. The following are all valid, and `signer` /// is parsed as both a subcommand and a value in the second case. /// /// ```notrust /// $ program --ui-paths path1 signer /// $ program --ui-paths path1 --ui-paths signer signer /// ``` /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .short("d") /// .multiple(true) /// # ; /// ``` /// An example with flags /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("verbose") /// .multiple(true) /// .short("v")) /// .get_matches_from(vec![ /// "prog", "-v", "-v", "-v" // note, -vvv would have same result /// ]); /// /// assert!(m.is_present("verbose")); /// assert_eq!(m.occurrences_of("verbose"), 3); /// ``` /// /// An example with options /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("file") /// .multiple(true) /// .takes_value(true) /// .short("F")) /// .get_matches_from(vec![ /// "prog", "-F", "file1", "file2", "file3" /// ]); /// /// assert!(m.is_present("file")); /// assert_eq!(m.occurrences_of("file"), 1); // notice only one occurrence /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3"]); /// ``` /// This is functionally equivalent to the example above /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("file") /// .multiple(true) /// .takes_value(true) /// .short("F")) /// .get_matches_from(vec![ /// "prog", "-F", "file1", "-F", "file2", "-F", "file3" /// ]); /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3"]); /// /// assert!(m.is_present("file")); /// assert_eq!(m.occurrences_of("file"), 3); // Notice 3 occurrences /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3"]); /// ``` /// /// A common mistake is to define an option which allows multiples, and a positional argument /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("file") /// 
.multiple(true) /// .takes_value(true) /// .short("F")) /// .arg(Arg::with_name("word") /// .index(1)) /// .get_matches_from(vec![ /// "prog", "-F", "file1", "file2", "file3", "word" /// ]); /// /// assert!(m.is_present("file")); /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3", "word"]); // wait...what?! /// assert!(!m.is_present("word")); // but we clearly used word! /// ``` /// The problem is clap doesn't know when to stop parsing values for "files". This is further /// compounded by if we'd said `word -F file1 file2` it would have worked fine, so it would /// appear to only fail sometimes...not good! /// /// A solution for the example above is to specify that `-F` only accepts one value, but is /// allowed to appear multiple times /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("file") /// .multiple(true) /// .takes_value(true) /// .number_of_values(1) /// .short("F")) /// .arg(Arg::with_name("word") /// .index(1)) /// .get_matches_from(vec![ /// "prog", "-F", "file1", "-F", "file2", "-F", "file3", "word" /// ]); /// /// assert!(m.is_present("file")); /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3"]); /// assert!(m.is_present("word")); /// assert_eq!(m.value_of("word"), Some("word")); /// ``` /// As a final example, notice if we define [`Arg::number_of_values(1)`] and try to run the /// problem example above, it would have been a runtime error with a pretty message to the /// user :) /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .multiple(true) /// .takes_value(true) /// .number_of_values(1) /// .short("F")) /// .arg(Arg::with_name("word") /// .index(1)) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1", "file2", "file3", "word" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::UnknownArgument); /// ``` /// [option]: ./struct.Arg.html#method.takes_value /// [options]: ./struct.Arg.html#method.takes_value /// [subcommands]: ./struct.SubCommand.html /// [positionals]: ./struct.Arg.html#method.index /// [`Arg::number_of_values(1)`]: ./struct.Arg.html#method.number_of_values /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple pub fn multiple(self, multi: bool) -> Self { if multi { self.set(ArgSettings::Multiple) } else { self.unset(ArgSettings::Multiple) } } /// Specifies a value that *stops* parsing multiple values of a give argument. By default when /// one sets [`multiple(true)`] on an argument, clap will continue parsing values for that /// argument until it reaches another valid argument, or one of the other more specific settings /// for multiple values is used (such as [`min_values`], [`max_values`] or /// [`number_of_values`]). 
/// /// **NOTE:** This setting only applies to [options] and [positional arguments] /// /// **NOTE:** When the terminator is passed in on the command line, it is **not** stored as one /// of the values /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("vals") /// .takes_value(true) /// .multiple(true) /// .value_terminator(";") /// # ; /// ``` /// The following example uses two arguments, a sequence of commands, and the location in which /// to perform them /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cmds") /// .multiple(true) /// .allow_hyphen_values(true) /// .value_terminator(";")) /// .arg(Arg::with_name("location")) /// .get_matches_from(vec![ /// "prog", "find", "-type", "f", "-name", "special", ";", "/home/clap" /// ]); /// let cmds: Vec<_> = m.values_of("cmds").unwrap().collect(); /// assert_eq!(&cmds, &["find", "-type", "f", "-name", "special"]); /// assert_eq!(m.value_of("location"), Some("/home/clap")); /// ``` /// [options]: ./struct.Arg.html#method.takes_value /// [positional arguments]: ./struct.Arg.html#method.index /// [`multiple(true)`]: ./struct.Arg.html#method.multiple /// [`min_values`]: ./struct.Arg.html#method.min_values /// [`number_of_values`]: ./struct.Arg.html#method.number_of_values /// [`max_values`]: ./struct.Arg.html#method.max_values pub fn value_terminator(mut self, term: &'b str) -> Self { self.setb(ArgSettings::TakesValue); self.v.terminator = Some(term); self } /// Specifies that an argument can be matched to all child [`SubCommand`]s. /// /// **NOTE:** Global arguments *only* propagate down, **not** up (to parent commands), however /// their values once a user uses them will be propagated back up to parents. In effect, this /// means one should *define* all global arguments at the top level, however it doesn't matter /// where the user *uses* the global argument. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .short("d") /// .global(true) /// # ; /// ``` /// /// For example, assume an application with two subcommands, and you'd like to define a /// `--verbose` flag that can be called on any of the subcommands and parent, but you don't /// want to clutter the source with three duplicate [`Arg`] definitions. /// /// ```rust /// # use clap::{App, Arg, SubCommand}; /// let m = App::new("prog") /// .arg(Arg::with_name("verb") /// .long("verbose") /// .short("v") /// .global(true)) /// .subcommand(SubCommand::with_name("test")) /// .subcommand(SubCommand::with_name("do-stuff")) /// .get_matches_from(vec![ /// "prog", "do-stuff", "--verbose" /// ]); /// /// assert_eq!(m.subcommand_name(), Some("do-stuff")); /// let sub_m = m.subcommand_matches("do-stuff").unwrap(); /// assert!(sub_m.is_present("verb")); /// ``` /// [`SubCommand`]: ./struct.SubCommand.html /// [required]: ./struct.Arg.html#method.required /// [`ArgMatches`]: ./struct.ArgMatches.html /// [`ArgMatches::is_present("flag")`]: ./struct.ArgMatches.html#method.is_present /// [`Arg`]: ./struct.Arg.html pub fn global(self, g: bool) -> Self { if g { self.set(ArgSettings::Global) } else { self.unset(ArgSettings::Global) } } /// Allows an argument to accept explicitly empty values. 
An empty value must be specified at /// the command line with an explicit `""`, or `''` /// /// **NOTE:** Defaults to `true` (Explicitly empty values are allowed) /// /// **NOTE:** Implicitly sets [`Arg::takes_value(true)`] when set to `false` /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("file") /// .long("file") /// .empty_values(false) /// # ; /// ``` /// The default is to allow empty values, such as `--option ""` would be an empty value. But /// we can change to make empty values become an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .short("v") /// .empty_values(false)) /// .get_matches_from_safe(vec![ /// "prog", "--config=" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::EmptyValue); /// ``` /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value pub fn empty_values(mut self, ev: bool) -> Self { if ev { self.set(ArgSettings::EmptyValues) } else { self = self.set(ArgSettings::TakesValue); self.unset(ArgSettings::EmptyValues) } } /// Hides an argument from help message output. /// /// **NOTE:** Implicitly sets [`Arg::hidden_short_help(true)`] and [`Arg::hidden_long_help(true)`] /// when set to true /// /// **NOTE:** This does **not** hide the argument from usage strings on error /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .hidden(true) /// # ; /// ``` /// Setting `hidden(true)` will hide the argument when displaying help text /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .hidden(true) /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// ``` /// [`Arg::hidden_short_help(true)`]: ./struct.Arg.html#method.hidden_short_help /// [`Arg::hidden_long_help(true)`]: ./struct.Arg.html#method.hidden_long_help pub fn hidden(self, h: bool) -> Self { if h { self.set(ArgSettings::Hidden) } else { self.unset(ArgSettings::Hidden) } } /// Specifies a list of possible values for this argument. At runtime, `clap` verifies that /// only one of the specified values was used, or fails with an error message. /// /// **NOTE:** This setting only applies to [options] and [positional arguments] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("mode") /// .takes_value(true) /// .possible_values(&["fast", "slow", "medium"]) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .takes_value(true) /// .possible_values(&["fast", "slow", "medium"])) /// .get_matches_from(vec![ /// "prog", "--mode", "fast" /// ]); /// assert!(m.is_present("mode")); /// assert_eq!(m.value_of("mode"), Some("fast")); /// ``` /// /// The next example shows a failed parse from using a value which wasn't defined as one of the /// possible values. 
/// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .takes_value(true) /// .possible_values(&["fast", "slow", "medium"])) /// .get_matches_from_safe(vec![ /// "prog", "--mode", "wrong" /// ]); /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::InvalidValue); /// ``` /// [options]: ./struct.Arg.html#method.takes_value /// [positional arguments]: ./struct.Arg.html#method.index pub fn possible_values(mut self, names: &[&'b str]) -> Self { if let Some(ref mut vec) = self.v.possible_vals { for s in names { vec.push(s); } } else { self.v.possible_vals = Some(names.iter().copied().collect()); } self } /// Specifies a possible value for this argument, one at a time. At runtime, `clap` verifies /// that only one of the specified values was used, or fails with error message. /// /// **NOTE:** This setting only applies to [options] and [positional arguments] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("mode") /// .takes_value(true) /// .possible_value("fast") /// .possible_value("slow") /// .possible_value("medium") /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .takes_value(true) /// .possible_value("fast") /// .possible_value("slow") /// .possible_value("medium")) /// .get_matches_from(vec![ /// "prog", "--mode", "fast" /// ]); /// assert!(m.is_present("mode")); /// assert_eq!(m.value_of("mode"), Some("fast")); /// ``` /// /// The next example shows a failed parse from using a value which wasn't defined as one of the /// possible values. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("mode") /// .long("mode") /// .takes_value(true) /// .possible_value("fast") /// .possible_value("slow") /// .possible_value("medium")) /// .get_matches_from_safe(vec![ /// "prog", "--mode", "wrong" /// ]); /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::InvalidValue); /// ``` /// [options]: ./struct.Arg.html#method.takes_value /// [positional arguments]: ./struct.Arg.html#method.index pub fn possible_value(mut self, name: &'b str) -> Self { if let Some(ref mut vec) = self.v.possible_vals { vec.push(name); } else { self.v.possible_vals = Some(vec![name]); } self } /// When used with [`Arg::possible_values`] it allows the argument value to pass validation even if /// the case differs from that of the specified `possible_value`. 
/// /// **Pro Tip:** Use this setting with [`arg_enum!`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// # use std::ascii::AsciiExt; /// let m = App::new("pv") /// .arg(Arg::with_name("option") /// .long("--option") /// .takes_value(true) /// .possible_value("test123") /// .case_insensitive(true)) /// .get_matches_from(vec![ /// "pv", "--option", "TeSt123", /// ]); /// /// assert!(m.value_of("option").unwrap().eq_ignore_ascii_case("test123")); /// ``` /// /// This setting also works when multiple values can be defined: /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("pv") /// .arg(Arg::with_name("option") /// .short("-o") /// .long("--option") /// .takes_value(true) /// .possible_value("test123") /// .possible_value("test321") /// .multiple(true) /// .case_insensitive(true)) /// .get_matches_from(vec![ /// "pv", "--option", "TeSt123", "teST123", "tESt321" /// ]); /// /// let matched_vals = m.values_of("option").unwrap().collect::<Vec<_>>(); /// assert_eq!(&*matched_vals, &["TeSt123", "teST123", "tESt321"]); /// ``` /// [`Arg::case_insensitive(true)`]: ./struct.Arg.html#method.possible_values /// [`arg_enum!`]: ./macro.arg_enum.html pub fn case_insensitive(self, ci: bool) -> Self { if ci { self.set(ArgSettings::CaseInsensitive) } else { self.unset(ArgSettings::CaseInsensitive) } } /// Specifies the name of the [`ArgGroup`] the argument belongs to. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .long("debug") /// .group("mode") /// # ; /// ``` /// /// Multiple arguments can be a member of a single group and then the group checked as if it /// was one of said arguments. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("debug") /// .long("debug") /// .group("mode")) /// .arg(Arg::with_name("verbose") /// .long("verbose") /// .group("mode")) /// .get_matches_from(vec![ /// "prog", "--debug" /// ]); /// assert!(m.is_present("mode")); /// ``` /// [`ArgGroup`]: ./struct.ArgGroup.html pub fn group(mut self, name: &'a str) -> Self { if let Some(ref mut vec) = self.b.groups { vec.push(name); } else { self.b.groups = Some(vec![name]); } self } /// Specifies the names of multiple [`ArgGroup`]'s the argument belongs to. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .long("debug") /// .groups(&["mode", "verbosity"]) /// # ; /// ``` /// /// Arguments can be members of multiple groups and then the group checked as if it /// was one of said arguments. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("debug") /// .long("debug") /// .groups(&["mode", "verbosity"])) /// .arg(Arg::with_name("verbose") /// .long("verbose") /// .groups(&["mode", "verbosity"])) /// .get_matches_from(vec![ /// "prog", "--debug" /// ]); /// assert!(m.is_present("mode")); /// assert!(m.is_present("verbosity")); /// ``` /// [`ArgGroup`]: ./struct.ArgGroup.html pub fn groups(mut self, names: &[&'a str]) -> Self { if let Some(ref mut vec) = self.b.groups { for s in names { vec.push(s); } } else { self.b.groups = Some(names.iter().copied().collect()); } self } /// Specifies how many values are required to satisfy this argument. For example, if you had a /// `-f <file>` argument where you wanted exactly 3 'files' you would set /// `.number_of_values(3)`, and this argument wouldn't be satisfied unless the user provided /// 3 and only 3 values. /// /// **NOTE:** Does *not* require [`Arg::multiple(true)`] to be set.
Setting /// [`Arg::multiple(true)`] would allow `-f <file> <file> <file> -f <file> <file> <file>` whereas /// *not* setting [`Arg::multiple(true)`] would only allow one occurrence of this argument. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("file") /// .short("f") /// .number_of_values(3) /// # ; /// ``` /// /// Not supplying the correct number of values is an error /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .takes_value(true) /// .number_of_values(2) /// .short("F")) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::WrongNumberOfValues); /// ``` /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple pub fn number_of_values(mut self, qty: u64) -> Self { self.setb(ArgSettings::TakesValue); self.v.num_vals = Some(qty); self } /// Allows one to perform a custom validation on the argument value. You provide a closure /// which accepts a [`String`] value, and returns a [`Result`] where the [`Err(String)`] is a /// message displayed to the user. /// /// **NOTE:** The error message does *not* need to contain the `error:` portion, only the /// message as all errors will appear as /// `error: Invalid value for '<arg>': <error message>` where `<arg>` is replaced by the actual /// arg, and `<error message>` is the `String` you return as the error. /// /// **NOTE:** There is a small performance hit for using validators, as they are implemented /// with [`Rc`] pointers. And the value to be checked will be allocated an extra time in order /// to be passed to the closure. This performance hit is extremely minimal in the grand /// scheme of things. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// fn has_at(v: String) -> Result<(), String> { /// if v.contains("@") { return Ok(()); } /// Err(String::from("The value did not contain the required @ sigil")) /// } /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .index(1) /// .validator(has_at)) /// .get_matches_from_safe(vec![ /// "prog", "some@file" /// ]); /// assert!(res.is_ok()); /// assert_eq!(res.unwrap().value_of("file"), Some("some@file")); /// ``` /// [`String`]: https://doc.rust-lang.org/std/string/struct.String.html /// [`Result`]: https://doc.rust-lang.org/std/result/enum.Result.html /// [`Err(String)`]: https://doc.rust-lang.org/std/result/enum.Result.html#variant.Err /// [`Rc`]: https://doc.rust-lang.org/std/rc/struct.Rc.html pub fn validator<F>(mut self, f: F) -> Self where F: Fn(String) -> Result<(), String> + 'static, { self.v.validator = Some(Rc::new(f)); self } /// Works identically to Validator but is intended to be used with values that could /// contain non UTF-8 formatted strings.
/// /// # Examples /// #[cfg_attr(not(unix), doc = " ```ignore")] #[cfg_attr(unix, doc = " ```rust")] /// # use clap::{App, Arg}; /// # use std::ffi::{OsStr, OsString}; /// # use std::os::unix::ffi::OsStrExt; /// fn has_ampersand(v: &OsStr) -> Result<(), OsString> { /// if v.as_bytes().iter().any(|b| *b == b'&') { return Ok(()); } /// Err(OsString::from("The value did not contain the required & sigil")) /// } /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .index(1) /// .validator_os(has_ampersand)) /// .get_matches_from_safe(vec![ /// "prog", "Fish & chips" /// ]); /// assert!(res.is_ok()); /// assert_eq!(res.unwrap().value_of("file"), Some("Fish & chips")); /// ``` /// [`String`]: https://doc.rust-lang.org/std/string/struct.String.html /// [`OsStr`]: https://doc.rust-lang.org/std/ffi/struct.OsStr.html /// [`OsString`]: https://doc.rust-lang.org/std/ffi/struct.OsString.html /// [`Result`]: https://doc.rust-lang.org/std/result/enum.Result.html /// [`Err(String)`]: https://doc.rust-lang.org/std/result/enum.Result.html#variant.Err /// [`Rc`]: https://doc.rust-lang.org/std/rc/struct.Rc.html pub fn validator_os<F>(mut self, f: F) -> Self where F: Fn(&OsStr) -> Result<(), OsString> + 'static, { self.v.validator_os = Some(Rc::new(f)); self } /// Specifies the *maximum* number of values for this argument. For example, if you had a /// `-f <file>` argument where you wanted up to 3 'files' you would set `.max_values(3)`, and /// this argument would be satisfied if the user provided 1, 2, or 3 values. /// /// **NOTE:** This does *not* implicitly set [`Arg::multiple(true)`]. This is because /// `-o val -o val` is multiple occurrences but a single value and `-o val1 val2` is a single /// occurrence with multiple values. For positional arguments this **does** set /// [`Arg::multiple(true)`] because there is no way to determine the difference between multiple /// occurrences and multiple values. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("file") /// .short("f") /// .max_values(3) /// # ; /// ``` /// /// Supplying less than the maximum number of values is allowed /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .takes_value(true) /// .max_values(3) /// .short("F")) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1", "file2" /// ]); /// /// assert!(res.is_ok()); /// let m = res.unwrap(); /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2"]); /// ``` /// /// Supplying more than the maximum number of values is an error /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .takes_value(true) /// .max_values(2) /// .short("F")) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1", "file2", "file3" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::TooManyValues); /// ``` /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple pub fn max_values(mut self, qty: u64) -> Self { self.setb(ArgSettings::TakesValue); self.v.max_vals = Some(qty); self } /// Specifies the *minimum* number of values for this argument. For example, if you had a /// `-f <file>` argument where you wanted at least 2 'files' you would set /// `.min_values(2)`, and this argument would be satisfied if the user provided 2 or more /// values. /// /// **NOTE:** This does not implicitly set [`Arg::multiple(true)`].
This is because /// `-o val -o val` is multiple occurrences but a single value and `-o val1 val2` is a single /// occurrence with multiple values. For positional arguments this **does** set /// [`Arg::multiple(true)`] because there is no way to determine the difference between multiple /// occurrences and multiple values. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("file") /// .short("f") /// .min_values(3) /// # ; /// ``` /// /// Supplying more than the minimum number of values is allowed /// /// ```rust /// # use clap::{App, Arg}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .takes_value(true) /// .min_values(2) /// .short("F")) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1", "file2", "file3" /// ]); /// /// assert!(res.is_ok()); /// let m = res.unwrap(); /// let files: Vec<_> = m.values_of("file").unwrap().collect(); /// assert_eq!(files, ["file1", "file2", "file3"]); /// ``` /// /// Supplying less than the minimum number of values is an error /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("file") /// .takes_value(true) /// .min_values(2) /// .short("F")) /// .get_matches_from_safe(vec![ /// "prog", "-F", "file1" /// ]); /// /// assert!(res.is_err()); /// assert_eq!(res.unwrap_err().kind, ErrorKind::TooFewValues); /// ``` /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple pub fn min_values(mut self, qty: u64) -> Self { self.v.min_vals = Some(qty); self.set(ArgSettings::TakesValue) } /// Specifies whether or not an argument should allow grouping of multiple values via a /// delimiter. I.e. should `--option=val1,val2,val3` be parsed as three values (`val1`, `val2`, /// and `val3`) or as a single value (`val1,val2,val3`). When enabled, the default is to use `,` (comma) as the /// value delimiter for all arguments that accept values (options and positional arguments). /// /// **NOTE:** The default is `false`. When set to `true` the default [`Arg::value_delimiter`] /// is the comma `,`. /// /// # Examples /// /// The following example shows the behavior when the delimiter is enabled. /// /// ```rust /// # use clap::{App, Arg}; /// let delims = App::new("prog") /// .arg(Arg::with_name("option") /// .long("option") /// .use_delimiter(true) /// .takes_value(true)) /// .get_matches_from(vec![ /// "prog", "--option=val1,val2,val3", /// ]); /// /// assert!(delims.is_present("option")); /// assert_eq!(delims.occurrences_of("option"), 1); /// assert_eq!(delims.values_of("option").unwrap().collect::<Vec<_>>(), ["val1", "val2", "val3"]); /// ``` /// The next example shows the difference when turning delimiters off.
This is the default /// behavior. /// /// ```rust /// # use clap::{App, Arg}; /// let nodelims = App::new("prog") /// .arg(Arg::with_name("option") /// .long("option") /// .use_delimiter(false) /// .takes_value(true)) /// .get_matches_from(vec![ /// "prog", "--option=val1,val2,val3", /// ]); /// /// assert!(nodelims.is_present("option")); /// assert_eq!(nodelims.occurrences_of("option"), 1); /// assert_eq!(nodelims.value_of("option").unwrap(), "val1,val2,val3"); /// ``` /// [`Arg::value_delimiter`]: ./struct.Arg.html#method.value_delimiter pub fn use_delimiter(mut self, d: bool) -> Self { if d { if self.v.val_delim.is_none() { self.v.val_delim = Some(','); } self.setb(ArgSettings::TakesValue); self.setb(ArgSettings::UseValueDelimiter); } else { self.v.val_delim = None; self.unsetb(ArgSettings::UseValueDelimiter); } self.unset(ArgSettings::ValueDelimiterNotSet) } /// Specifies that *multiple values* may only be set using the delimiter. This means that if /// an option is encountered, and no delimiter is found, it is automatically assumed that no /// additional values for that option follow. This is unlike the default, where it is generally /// assumed that more values will follow regardless of whether or not a delimiter is used. /// /// **NOTE:** The default is `false`. /// /// **NOTE:** Setting this to true implies [`Arg::use_delimiter(true)`] /// /// **NOTE:** It's a good idea to inform the user that use of a delimiter is required, either /// through help text or other means. /// /// # Examples /// /// These examples demonstrate what happens when `require_delimiter(true)` is used. Notice /// everything works in this first example, as we use a delimiter, as expected. /// /// ```rust /// # use clap::{App, Arg}; /// let delims = App::new("prog") /// .arg(Arg::with_name("opt") /// .short("o") /// .takes_value(true) /// .multiple(true) /// .require_delimiter(true)) /// .get_matches_from(vec![ /// "prog", "-o", "val1,val2,val3", /// ]); /// /// assert!(delims.is_present("opt")); /// assert_eq!(delims.values_of("opt").unwrap().collect::<Vec<_>>(), ["val1", "val2", "val3"]); /// ``` /// In this next example, we will *not* use a delimiter. Notice it's now an error. /// /// ```rust /// # use clap::{App, Arg, ErrorKind}; /// let res = App::new("prog") /// .arg(Arg::with_name("opt") /// .short("o") /// .takes_value(true) /// .multiple(true) /// .require_delimiter(true)) /// .get_matches_from_safe(vec![ /// "prog", "-o", "val1", "val2", "val3", /// ]); /// /// assert!(res.is_err()); /// let err = res.unwrap_err(); /// assert_eq!(err.kind, ErrorKind::UnknownArgument); /// ``` /// What's happening is `-o` is getting `val1`, and because delimiters are required yet none /// were present, it stops parsing `-o`. At this point it reaches `val2` and because no /// positional arguments have been defined, it's an error of an unexpected argument. /// /// In this final example, we contrast the above with `clap`'s default behavior where the above /// is *not* an error.
/// /// ```rust /// # use clap::{App, Arg}; /// let delims = App::new("prog") /// .arg(Arg::with_name("opt") /// .short("o") /// .takes_value(true) /// .multiple(true)) /// .get_matches_from(vec![ /// "prog", "-o", "val1", "val2", "val3", /// ]); /// /// assert!(delims.is_present("opt")); /// assert_eq!(delims.values_of("opt").unwrap().collect::<Vec<_>>(), ["val1", "val2", "val3"]); /// ``` /// [`Arg::use_delimiter(true)`]: ./struct.Arg.html#method.use_delimiter pub fn require_delimiter(mut self, d: bool) -> Self { if d { self = self.use_delimiter(true); self.unsetb(ArgSettings::ValueDelimiterNotSet); self.setb(ArgSettings::UseValueDelimiter); self.set(ArgSettings::RequireDelimiter) } else { self = self.use_delimiter(false); self.unsetb(ArgSettings::UseValueDelimiter); self.unset(ArgSettings::RequireDelimiter) } } /// Specifies the separator to use when values are clumped together, defaults to `,` (comma). /// /// **NOTE:** implicitly sets [`Arg::use_delimiter(true)`] /// /// **NOTE:** implicitly sets [`Arg::takes_value(true)`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("config") /// .short("c") /// .long("config") /// .value_delimiter(";")) /// .get_matches_from(vec![ /// "prog", "--config=val1;val2;val3" /// ]); /// /// assert_eq!(m.values_of("config").unwrap().collect::<Vec<_>>(), ["val1", "val2", "val3"]) /// ``` /// [`Arg::use_delimiter(true)`]: ./struct.Arg.html#method.use_delimiter /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value pub fn value_delimiter(mut self, d: &str) -> Self { self.unsetb(ArgSettings::ValueDelimiterNotSet); self.setb(ArgSettings::TakesValue); self.setb(ArgSettings::UseValueDelimiter); self.v.val_delim = Some( d.chars() .next() .expect("Failed to get value_delimiter from arg"), ); self } /// Specify multiple names for values of option arguments. These names are cosmetic only; they /// are used in help and usage strings only. The names are **not** used to access arguments. The values /// of the arguments are accessed in numeric order (i.e. if you specify two names `one` and /// `two`, `one` will be the first matched value, `two` will be the second). /// /// This setting can be very helpful when describing the type of input the user should be /// using, such as `FILE`, `INTERFACE`, etc. Although not required, it's somewhat of a convention to /// use all capital letters for the value name. /// /// **Pro Tip:** It may help to use [`Arg::next_line_help(true)`] if there are long, or /// multiple value names in order to not throw off the help text alignment of all options. /// /// **NOTE:** This implicitly sets [`Arg::number_of_values`] if the number of value names is /// greater than one. I.e. be aware that the number of "names" you set for the values will be /// the *exact* number of values required to satisfy this argument. /// /// **NOTE:** implicitly sets [`Arg::takes_value(true)`] /// /// **NOTE:** Does *not* require or imply [`Arg::multiple(true)`].
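/// /// For instance, naming two values implies `.number_of_values(2)`, so exactly two values must /// be supplied at runtime. A minimal sketch of that behavior (the `--io-files` option name here /// is purely illustrative): /// /// ```rust /// # use clap::{App, Arg}; /// // Two value names => exactly two values are expected for this option. /// let m = App::new("prog") /// .arg(Arg::with_name("io") /// .long("io-files") /// .value_names(&["INFILE", "OUTFILE"])) /// .get_matches_from(vec![ /// "prog", "--io-files", "in.txt", "out.txt" /// ]); /// let vals: Vec<_> = m.values_of("io").unwrap().collect(); /// assert_eq!(vals, ["in.txt", "out.txt"]); /// ```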
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("speed") /// .short("s") /// .value_names(&["fast", "slow"]) /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("io") /// .long("io-files") /// .value_names(&["INFILE", "OUTFILE"])) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// Running the above program produces the following output /// /// ```notrust /// valnames /// /// USAGE: /// valnames [FLAGS] [OPTIONS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// /// OPTIONS: /// --io-files Some help text /// ``` /// [`Arg::next_line_help(true)`]: ./struct.Arg.html#method.next_line_help /// [`Arg::number_of_values`]: ./struct.Arg.html#method.number_of_values /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple pub fn value_names(mut self, names: &[&'b str]) -> Self { self.setb(ArgSettings::TakesValue); if self.is_set(ArgSettings::ValueDelimiterNotSet) { self.unsetb(ArgSettings::ValueDelimiterNotSet); self.setb(ArgSettings::UseValueDelimiter); } if let Some(ref mut vals) = self.v.val_names { let mut l = vals.len(); for s in names { vals.insert(l, s); l += 1; } } else { let mut vm = VecMap::new(); for (i, n) in names.iter().enumerate() { vm.insert(i, *n); } self.v.val_names = Some(vm); } self } /// Specifies the name for value of [option] or [positional] arguments inside of help /// documentation. This name is cosmetic only, the name is **not** used to access arguments. /// This setting can be very helpful when describing the type of input the user should be /// using, such as `FILE`, `INTERFACE`, etc. Although not required, it's somewhat convention to /// use all capital letters for the value name. /// /// **NOTE:** implicitly sets [`Arg::takes_value(true)`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("cfg") /// .long("config") /// .value_name("FILE") /// # ; /// ``` /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("config") /// .long("config") /// .value_name("FILE")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// Running the above program produces the following output /// /// ```notrust /// valnames /// /// USAGE: /// valnames [FLAGS] [OPTIONS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// /// OPTIONS: /// --config Some help text /// ``` /// [option]: ./struct.Arg.html#method.takes_value /// [positional]: ./struct.Arg.html#method.index /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value pub fn value_name(mut self, name: &'b str) -> Self { self.setb(ArgSettings::TakesValue); if let Some(ref mut vals) = self.v.val_names { let l = vals.len(); vals.insert(l, name); } else { let mut vm = VecMap::new(); vm.insert(0, name); self.v.val_names = Some(vm); } self } /// Specifies the value of the argument when *not* specified at runtime. /// /// **NOTE:** If the user *does not* use this argument at runtime, [`ArgMatches::occurrences_of`] /// will return `0` even though the [`ArgMatches::value_of`] will return the default specified. /// /// **NOTE:** If the user *does not* use this argument at runtime [`ArgMatches::is_present`] will /// still return `true`. 
If you wish to determine whether the argument was used at runtime or /// not, consider [`ArgMatches::occurrences_of`] which will return `0` if the argument was *not* /// used at runtime. /// /// **NOTE:** This setting is perfectly compatible with [`Arg::default_value_if`] but slightly /// different. `Arg::default_value` *only* takes affect when the user has not provided this arg /// at runtime. `Arg::default_value_if` however only takes affect when the user has not provided /// a value at runtime **and** these other conditions are met as well. If you have set /// `Arg::default_value` and `Arg::default_value_if`, and the user **did not** provide a this /// arg at runtime, nor did were the conditions met for `Arg::default_value_if`, the /// `Arg::default_value` will be applied. /// /// **NOTE:** This implicitly sets [`Arg::takes_value(true)`]. /// /// **NOTE:** This setting effectively disables `AppSettings::ArgRequiredElseHelp` if used in /// conjunction as it ensures that some argument will always be present. /// /// # Examples /// /// First we use the default value without providing any value at runtime. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("opt") /// .long("myopt") /// .default_value("myval")) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.value_of("opt"), Some("myval")); /// assert!(m.is_present("opt")); /// assert_eq!(m.occurrences_of("opt"), 0); /// ``` /// /// Next we provide a value at runtime to override the default. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("opt") /// .long("myopt") /// .default_value("myval")) /// .get_matches_from(vec![ /// "prog", "--myopt=non_default" /// ]); /// /// assert_eq!(m.value_of("opt"), Some("non_default")); /// assert!(m.is_present("opt")); /// assert_eq!(m.occurrences_of("opt"), 1); /// ``` /// [`ArgMatches::occurrences_of`]: ./struct.ArgMatches.html#method.occurrences_of /// [`ArgMatches::value_of`]: ./struct.ArgMatches.html#method.value_of /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`ArgMatches::is_present`]: ./struct.ArgMatches.html#method.is_present /// [`Arg::default_value_if`]: ./struct.Arg.html#method.default_value_if pub fn default_value(self, val: &'a str) -> Self { self.default_value_os(OsStr::from_bytes(val.as_bytes())) } /// Provides a default value in the exact same manner as [`Arg::default_value`] /// only using [`OsStr`]s instead. /// [`Arg::default_value`]: ./struct.Arg.html#method.default_value /// [`OsStr`]: https://doc.rust-lang.org/std/ffi/struct.OsStr.html pub fn default_value_os(mut self, val: &'a OsStr) -> Self { self.setb(ArgSettings::TakesValue); self.v.default_val = Some(val); self } /// Specifies the value of the argument if `arg` has been used at runtime. If `val` is set to /// `None`, `arg` only needs to be present. If `val` is set to `"some-val"` then `arg` must be /// present at runtime **and** have the value `val`. /// /// **NOTE:** This setting is perfectly compatible with [`Arg::default_value`] but slightly /// different. `Arg::default_value` *only* takes affect when the user has not provided this arg /// at runtime. This setting however only takes affect when the user has not provided a value at /// runtime **and** these other conditions are met as well. 
If you have set `Arg::default_value` /// and `Arg::default_value_if`, and the user **did not** provide a this arg at runtime, nor did /// were the conditions met for `Arg::default_value_if`, the `Arg::default_value` will be /// applied. /// /// **NOTE:** This implicitly sets [`Arg::takes_value(true)`]. /// /// **NOTE:** If using YAML the values should be laid out as follows (`None` can be represented /// as `null` in YAML) /// /// ```yaml /// default_value_if: /// - [arg, val, default] /// ``` /// /// # Examples /// /// First we use the default value only if another arg is present at runtime. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag")) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_if("flag", None, "default")) /// .get_matches_from(vec![ /// "prog", "--flag" /// ]); /// /// assert_eq!(m.value_of("other"), Some("default")); /// ``` /// /// Next we run the same test, but without providing `--flag`. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag")) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_if("flag", None, "default")) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.value_of("other"), None); /// ``` /// /// Now lets only use the default value if `--opt` contains the value `special`. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("opt") /// .takes_value(true) /// .long("opt")) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_if("opt", Some("special"), "default")) /// .get_matches_from(vec![ /// "prog", "--opt", "special" /// ]); /// /// assert_eq!(m.value_of("other"), Some("default")); /// ``` /// /// We can run the same test and provide any value *other than* `special` and we won't get a /// default value. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("opt") /// .takes_value(true) /// .long("opt")) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_if("opt", Some("special"), "default")) /// .get_matches_from(vec![ /// "prog", "--opt", "hahaha" /// ]); /// /// assert_eq!(m.value_of("other"), None); /// ``` /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`Arg::default_value`]: ./struct.Arg.html#method.default_value pub fn default_value_if(self, arg: &'a str, val: Option<&'b str>, default: &'b str) -> Self { self.default_value_if_os( arg, val.map(str::as_bytes).map(OsStr::from_bytes), OsStr::from_bytes(default.as_bytes()), ) } /// Provides a conditional default value in the exact same manner as [`Arg::default_value_if`] /// only using [`OsStr`]s instead. /// [`Arg::default_value_if`]: ./struct.Arg.html#method.default_value_if /// [`OsStr`]: https://doc.rust-lang.org/std/ffi/struct.OsStr.html pub fn default_value_if_os( mut self, arg: &'a str, val: Option<&'b OsStr>, default: &'b OsStr, ) -> Self { self.setb(ArgSettings::TakesValue); if let Some(ref mut vm) = self.v.default_vals_ifs { let l = vm.len(); vm.insert(l, (arg, val, default)); } else { let mut vm = VecMap::new(); vm.insert(0, (arg, val, default)); self.v.default_vals_ifs = Some(vm); } self } /// Specifies multiple values and conditions in the same manner as [`Arg::default_value_if`]. /// The method takes a slice of tuples in the `(arg, Option, default)` format. /// /// **NOTE**: The conditions are stored in order and evaluated in the same order. 
I.e. the first /// if multiple conditions are true, the first one found will be applied and the ultimate value. /// /// **NOTE:** If using YAML the values should be laid out as follows /// /// ```yaml /// default_value_if: /// - [arg, val, default] /// - [arg2, null, default2] /// ``` /// /// # Examples /// /// First we use the default value only if another arg is present at runtime. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag")) /// .arg(Arg::with_name("opt") /// .long("opt") /// .takes_value(true)) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_ifs(&[ /// ("flag", None, "default"), /// ("opt", Some("channal"), "chan"), /// ])) /// .get_matches_from(vec![ /// "prog", "--opt", "channal" /// ]); /// /// assert_eq!(m.value_of("other"), Some("chan")); /// ``` /// /// Next we run the same test, but without providing `--flag`. /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag")) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_ifs(&[ /// ("flag", None, "default"), /// ("opt", Some("channal"), "chan"), /// ])) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.value_of("other"), None); /// ``` /// /// We can also see that these values are applied in order, and if more than one condition is /// true, only the first evaluated "wins" /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag")) /// .arg(Arg::with_name("opt") /// .long("opt") /// .takes_value(true)) /// .arg(Arg::with_name("other") /// .long("other") /// .default_value_ifs(&[ /// ("flag", None, "default"), /// ("opt", Some("channal"), "chan"), /// ])) /// .get_matches_from(vec![ /// "prog", "--opt", "channal", "--flag" /// ]); /// /// assert_eq!(m.value_of("other"), Some("default")); /// ``` /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`Arg::default_value`]: ./struct.Arg.html#method.default_value pub fn default_value_ifs(mut self, ifs: &[(&'a str, Option<&'b str>, &'b str)]) -> Self { for &(arg, val, default) in ifs { self = self.default_value_if_os( arg, val.map(str::as_bytes).map(OsStr::from_bytes), OsStr::from_bytes(default.as_bytes()), ); } self } /// Provides multiple conditional default values in the exact same manner as /// [`Arg::default_value_ifs`] only using [`OsStr`]s instead. /// [`Arg::default_value_ifs`]: ./struct.Arg.html#method.default_value_ifs /// [`OsStr`]: https://doc.rust-lang.org/std/ffi/struct.OsStr.html pub fn default_value_ifs_os(mut self, ifs: &[(&'a str, Option<&'b OsStr>, &'b OsStr)]) -> Self { for &(arg, val, default) in ifs { self = self.default_value_if_os(arg, val, default); } self } /// Specifies that if the value is not passed in as an argument, that it should be retrieved /// from the environment, if available. If it is not present in the environment, then default /// rules will apply. /// /// **NOTE:** If the user *does not* use this argument at runtime, [`ArgMatches::occurrences_of`] /// will return `0` even though the [`ArgMatches::value_of`] will return the default specified. /// /// **NOTE:** If the user *does not* use this argument at runtime [`ArgMatches::is_present`] will /// return `true` if the variable is present in the environment . 
If you wish to determine whether /// the argument was used at runtime or not, consider [`ArgMatches::occurrences_of`] which will /// return `0` if the argument was *not* used at runtime. /// /// **NOTE:** This implicitly sets [`Arg::takes_value(true)`]. /// /// **NOTE:** If [`Arg::multiple(true)`] is set then [`Arg::use_delimiter(true)`] should also be /// set. Otherwise, only a single argument will be returned from the environment variable. The /// default delimiter is `,` and follows all the other delimiter rules. /// /// # Examples /// /// In this example, we show the variable coming from the environment: /// /// ```rust /// # use std::env; /// # use clap::{App, Arg}; /// /// env::set_var("MY_FLAG", "env"); /// /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag") /// .env("MY_FLAG")) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.value_of("flag"), Some("env")); /// ``` /// /// In this example, we show the variable coming from an option on the CLI: /// /// ```rust /// # use std::env; /// # use clap::{App, Arg}; /// /// env::set_var("MY_FLAG", "env"); /// /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag") /// .env("MY_FLAG")) /// .get_matches_from(vec![ /// "prog", "--flag", "opt" /// ]); /// /// assert_eq!(m.value_of("flag"), Some("opt")); /// ``` /// /// In this example, we show the variable coming from the environment even with the /// presence of a default: /// /// ```rust /// # use std::env; /// # use clap::{App, Arg}; /// /// env::set_var("MY_FLAG", "env"); /// /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag") /// .env("MY_FLAG") /// .default_value("default")) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.value_of("flag"), Some("env")); /// ``` /// /// In this example, we show the use of multiple values in a single environment variable: /// /// ```rust /// # use std::env; /// # use clap::{App, Arg}; /// /// env::set_var("MY_FLAG_MULTI", "env1,env2"); /// /// let m = App::new("prog") /// .arg(Arg::with_name("flag") /// .long("flag") /// .env("MY_FLAG_MULTI") /// .multiple(true) /// .use_delimiter(true)) /// .get_matches_from(vec![ /// "prog" /// ]); /// /// assert_eq!(m.values_of("flag").unwrap().collect::>(), vec!["env1", "env2"]); /// ``` /// [`ArgMatches::occurrences_of`]: ./struct.ArgMatches.html#method.occurrences_of /// [`ArgMatches::value_of`]: ./struct.ArgMatches.html#method.value_of /// [`ArgMatches::is_present`]: ./struct.ArgMatches.html#method.is_present /// [`Arg::takes_value(true)`]: ./struct.Arg.html#method.takes_value /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple /// [`Arg::use_delimiter(true)`]: ./struct.Arg.html#method.use_delimiter pub fn env(self, name: &'a str) -> Self { self.env_os(OsStr::new(name)) } /// Specifies that if the value is not passed in as an argument, that it should be retrieved /// from the environment if available in the exact same manner as [`Arg::env`] only using /// [`OsStr`]s instead. pub fn env_os(mut self, name: &'a OsStr) -> Self { self.setb(ArgSettings::TakesValue); self.v.env = Some((name, env::var_os(name))); self } /// @TODO @p2 @docs @release: write docs pub fn hide_env_values(self, hide: bool) -> Self { if hide { self.set(ArgSettings::HideEnvValues) } else { self.unset(ArgSettings::HideEnvValues) } } /// When set to `true` the help string will be displayed on the line after the argument and /// indented once. This can be helpful for arguments with very long or complex help messages. 
/// This can also be helpful for arguments with very long flag names, or many/long value names. /// /// **NOTE:** To apply this setting to all arguments consider using /// [`AppSettings::NextLineHelp`] /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("opt") /// .long("long-option-flag") /// .short("o") /// .takes_value(true) /// .value_names(&["value1", "value2"]) /// .help("Some really long help and complex\n\ /// help that makes more sense to be\n\ /// on a line after the option") /// .next_line_help(true)) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays the following help message /// /// ```notrust /// nlh /// /// USAGE: /// nlh [FLAGS] [OPTIONS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// /// OPTIONS: /// -o, --long-option-flag /// Some really long help and complex /// help that makes more sense to be /// on a line after the option /// ``` /// [`AppSettings::NextLineHelp`]: ./enum.AppSettings.html#variant.NextLineHelp pub fn next_line_help(mut self, nlh: bool) -> Self { if nlh { self.setb(ArgSettings::NextLineHelp); } else { self.unsetb(ArgSettings::NextLineHelp); } self } /// Allows custom ordering of args within the help message. Args with a lower value will be /// displayed first in the help message. This is helpful when one would like to emphasise /// frequently used args, or prioritize those towards the top of the list. Duplicate values /// **are** allowed. Args with duplicate display orders will be displayed in alphabetical /// order. /// /// **NOTE:** The default is 999 for all arguments. /// /// **NOTE:** This setting is ignored for [positional arguments] which are always displayed in /// [index] order. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("a") // Typically args are grouped alphabetically by name. /// // Args without a display_order have a value of 999 and are /// // displayed alphabetically with all other 999 valued args. /// .long("long-option") /// .short("o") /// .takes_value(true) /// .help("Some help and text")) /// .arg(Arg::with_name("b") /// .long("other-option") /// .short("O") /// .takes_value(true) /// .display_order(1) // In order to force this arg to appear *first* /// // all we have to do is give it a value lower than 999. /// // Any other args with a value of 1 will be displayed /// // alphabetically with this one...then 2 values, then 3, etc. /// .help("I should be first!")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays the following help message /// /// ```notrust /// cust-ord /// /// USAGE: /// cust-ord [FLAGS] [OPTIONS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// /// OPTIONS: /// -O, --other-option I should be first! /// -o, --long-option Some help and text /// ``` /// [positional arguments]: ./struct.Arg.html#method.index /// [index]: ./struct.Arg.html#method.index pub fn display_order(mut self, ord: usize) -> Self { self.s.disp_ord = ord; self } /// Indicates that all parameters passed after this should not be parsed /// individually, but rather passed in their entirety. It is worth noting /// that setting this requires all values to come after a `--` to indicate they /// should all be captured. 
For example: /// /// ```notrust /// --foo something -- -v -v -v -b -b -b --baz -q -u -x /// ``` /// Will result in everything after `--` to be considered one raw argument. This behavior /// may not be exactly what you are expecting and using [`AppSettings::TrailingVarArg`] /// may be more appropriate. /// /// **NOTE:** Implicitly sets [`Arg::multiple(true)`], [`Arg::allow_hyphen_values(true)`], and /// [`Arg::last(true)`] when set to `true` /// /// [`Arg::multiple(true)`]: ./struct.Arg.html#method.multiple /// [`Arg::allow_hyphen_values(true)`]: ./struct.Arg.html#method.allow_hyphen_values /// [`Arg::last(true)`]: ./struct.Arg.html#method.last /// [`AppSettings::TrailingVarArg`]: ./enum.AppSettings.html#variant.TrailingVarArg pub fn raw(self, raw: bool) -> Self { self.multiple(raw).allow_hyphen_values(raw).last(raw) } /// Hides an argument from short help message output. /// /// **NOTE:** This does **not** hide the argument from usage strings on error /// /// **NOTE:** Setting this option will cause next-line-help output style to be used /// when long help (`--help`) is called. /// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .hidden_short_help(true) /// # ; /// ``` /// Setting `hidden_short_help(true)` will hide the argument when displaying short help text /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .hidden_short_help(true) /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "-h" /// ]); /// ``` /// /// The above example displays /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// ``` /// /// However, when --help is called /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .hidden_short_help(true) /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// Then the following would be displayed /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// --config Some help text describing the --config arg /// -h, --help Prints help information /// -V, --version Prints version information /// ``` pub fn hidden_short_help(self, hide: bool) -> Self { if hide { self.set(ArgSettings::HiddenShortHelp) } else { self.unset(ArgSettings::HiddenShortHelp) } } /// Hides an argument from long help message output. /// /// **NOTE:** This does **not** hide the argument from usage strings on error /// /// **NOTE:** Setting this option will cause next-line-help output style to be used /// when long help (`--help`) is called. 
/// /// # Examples /// /// ```rust /// # use clap::{App, Arg}; /// Arg::with_name("debug") /// .hidden_long_help(true) /// # ; /// ``` /// Setting `hidden_long_help(true)` will hide the argument when displaying long help text /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .hidden_long_help(true) /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "--help" /// ]); /// ``` /// /// The above example displays /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// -h, --help Prints help information /// -V, --version Prints version information /// ``` /// /// However, when -h is called /// /// ```rust /// # use clap::{App, Arg}; /// let m = App::new("prog") /// .arg(Arg::with_name("cfg") /// .long("config") /// .hidden_long_help(true) /// .help("Some help text describing the --config arg")) /// .get_matches_from(vec![ /// "prog", "-h" /// ]); /// ``` /// /// Then the following would be displayed /// /// ```notrust /// helptest /// /// USAGE: /// helptest [FLAGS] /// /// FLAGS: /// --config Some help text describing the --config arg /// -h, --help Prints help information /// -V, --version Prints version information /// ``` pub fn hidden_long_help(self, hide: bool) -> Self { if hide { self.set(ArgSettings::HiddenLongHelp) } else { self.unset(ArgSettings::HiddenLongHelp) } } /// Checks if one of the [`ArgSettings`] settings is set for the argument. /// /// [`ArgSettings`]: ./enum.ArgSettings.html pub fn is_set(&self, s: ArgSettings) -> bool { self.b.is_set(s) } /// Sets one of the [`ArgSettings`] settings for the argument. /// /// [`ArgSettings`]: ./enum.ArgSettings.html pub fn set(mut self, s: ArgSettings) -> Self { self.setb(s); self } /// Unsets one of the [`ArgSettings`] settings for the argument. 
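/// /// A minimal sketch of toggling a setting on and then back off through the builder (the /// `Multiple` setting here is only an illustration): /// /// ```rust /// # use clap::{Arg, ArgSettings}; /// let arg = Arg::with_name("verbose") /// .set(ArgSettings::Multiple) /// .unset(ArgSettings::Multiple); /// assert!(!arg.is_set(ArgSettings::Multiple)); /// ```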
/// /// [`ArgSettings`]: ./enum.ArgSettings.html pub fn unset(mut self, s: ArgSettings) -> Self { self.unsetb(s); self } #[doc(hidden)] pub fn setb(&mut self, s: ArgSettings) { self.b.set(s); } #[doc(hidden)] pub fn unsetb(&mut self, s: ArgSettings) { self.b.unset(s); } } impl<'a, 'b, 'z> From<&'z Arg<'a, 'b>> for Arg<'a, 'b> { fn from(a: &'z Arg<'a, 'b>) -> Self { Arg { b: a.b.clone(), v: a.v.clone(), s: a.s.clone(), index: a.index, r_ifs: a.r_ifs.clone(), } } } impl<'n, 'e> PartialEq for Arg<'n, 'e> { fn eq(&self, other: &Arg<'n, 'e>) -> bool { self.b == other.b } } vendor/clap/src/args/any_arg.rs0000664000175000017500000001033114172417313017300 0ustar mwhudsonmwhudson// Std use std::{ ffi::{OsStr, OsString}, fmt as std_fmt, rc::Rc, }; // Internal use crate::{ args::settings::ArgSettings, map::{self, VecMap}, INTERNAL_ERROR_MSG, }; #[doc(hidden)] pub trait AnyArg<'n, 'e>: std_fmt::Display { fn name(&self) -> &'n str; fn overrides(&self) -> Option<&[&'e str]>; fn aliases(&self) -> Option<Vec<&'e str>>; fn requires(&self) -> Option<&[(Option<&'e str>, &'n str)]>; fn blacklist(&self) -> Option<&[&'e str]>; fn required_unless(&self) -> Option<&[&'e str]>; fn is_set(&self, setting: ArgSettings) -> bool; fn set(&mut self, setting: ArgSettings); fn has_switch(&self) -> bool; fn max_vals(&self) -> Option<u64>; fn min_vals(&self) -> Option<u64>; fn num_vals(&self) -> Option<u64>; fn possible_vals(&self) -> Option<&[&'e str]>; #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator(&self) -> Option<&Rc<Fn(String) -> Result<(), String>>>; #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator_os(&self) -> Option<&Rc<Fn(&OsStr) -> Result<(), OsString>>>; fn short(&self) -> Option<char>; fn long(&self) -> Option<&'e str>; fn val_delim(&self) -> Option<char>; fn takes_value(&self) -> bool; fn val_names(&self) -> Option<&VecMap<&'e str>>; fn help(&self) -> Option<&'e str>; fn long_help(&self) -> Option<&'e str>; fn default_val(&self) -> Option<&'e OsStr>; fn default_vals_ifs(&self) -> Option<map::Values<(&'n str, Option<&'e OsStr>, &'e OsStr)>>; fn env<'s>(&'s self) -> Option<(&'n OsStr, Option<&'s OsString>)>; fn longest_filter(&self) -> bool; fn val_terminator(&self) -> Option<&'e str>; } pub trait DispOrder { fn disp_ord(&self) -> usize; } impl<'n, 'e, 'z, T: ?Sized> AnyArg<'n, 'e> for &'z T where T: AnyArg<'n, 'e> + 'z, { fn name(&self) -> &'n str { (*self).name() } fn overrides(&self) -> Option<&[&'e str]> { (*self).overrides() } fn aliases(&self) -> Option<Vec<&'e str>> { (*self).aliases() } fn requires(&self) -> Option<&[(Option<&'e str>, &'n str)]> { (*self).requires() } fn blacklist(&self) -> Option<&[&'e str]> { (*self).blacklist() } fn required_unless(&self) -> Option<&[&'e str]> { (*self).required_unless() } fn is_set(&self, a: ArgSettings) -> bool { (*self).is_set(a) } fn set(&mut self, _: ArgSettings) { panic!("{}", INTERNAL_ERROR_MSG) } fn has_switch(&self) -> bool { (*self).has_switch() } fn max_vals(&self) -> Option<u64> { (*self).max_vals() } fn min_vals(&self) -> Option<u64> { (*self).min_vals() } fn num_vals(&self) -> Option<u64> { (*self).num_vals() } fn possible_vals(&self) -> Option<&[&'e str]> { (*self).possible_vals() } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator(&self) -> Option<&Rc<Fn(String) -> Result<(), String>>> { (*self).validator() } #[cfg_attr(feature = "cargo-clippy", allow(clippy::type_complexity))] fn validator_os(&self) -> Option<&Rc<Fn(&OsStr) -> Result<(), OsString>>> { (*self).validator_os() } fn short(&self) -> Option<char> { (*self).short() } fn long(&self) -> Option<&'e str> { (*self).long() } fn val_delim(&self) -> Option<char> {
(*self).val_delim() } fn takes_value(&self) -> bool { (*self).takes_value() } fn val_names(&self) -> Option<&VecMap<&'e str>> { (*self).val_names() } fn help(&self) -> Option<&'e str> { (*self).help() } fn long_help(&self) -> Option<&'e str> { (*self).long_help() } fn default_val(&self) -> Option<&'e OsStr> { (*self).default_val() } fn default_vals_ifs(&self) -> Option, &'e OsStr)>> { (*self).default_vals_ifs() } fn env<'s>(&'s self) -> Option<(&'n OsStr, Option<&'s OsString>)> { (*self).env() } fn longest_filter(&self) -> bool { (*self).longest_filter() } fn val_terminator(&self) -> Option<&'e str> { (*self).val_terminator() } } vendor/clap/src/lib.rs0000664000175000017500000005724714172417313015513 0ustar mwhudsonmwhudson// Copyright â“’ 2015-2016 Kevin B. Knapp and [`clap-rs` contributors](https://github.com/clap-rs/clap/blob/v2.33.1/CONTRIBUTORS.md). // Licensed under the MIT license // (see LICENSE or ) All files in the project carrying such // notice may not be copied, modified, or distributed except according to those terms. //! `clap` is a simple-to-use, efficient, and full-featured library for parsing command line //! arguments and subcommands when writing console/terminal applications. //! //! ## About //! //! `clap` is used to parse *and validate* the string of command line arguments provided by the user //! at runtime. You provide the list of valid possibilities, and `clap` handles the rest. This means //! you focus on your *applications* functionality, and less on the parsing and validating of //! arguments. //! //! `clap` also provides the traditional version and help switches (or flags) 'for free' meaning //! automatically with no configuration. It does this by checking the list of valid possibilities you //! supplied and adding only the ones you haven't already defined. If you are using subcommands, //! `clap` will also auto-generate a `help` subcommand for you in addition to the traditional flags. //! //! Once `clap` parses the user provided string of arguments, it returns the matches along with any //! applicable values. If the user made an error or typo, `clap` informs them of the mistake and //! exits gracefully (or returns a `Result` type and allows you to perform any clean up prior to //! exit). Because of this, you can make reasonable assumptions in your code about the validity of //! the arguments. //! //! //! ## Quick Example //! //! The following examples show a quick example of some of the very basic functionality of `clap`. //! For more advanced usage, such as requirements, conflicts, groups, multiple values and //! occurrences see the [documentation](https://docs.rs/clap/), [examples/] directory of //! this repository or the [video tutorials]. //! //! **NOTE:** All of these examples are functionally the same, but show different styles in which to //! use `clap` //! //! The first example shows a method that allows more advanced configuration options (not shown in //! this small example), or even dynamically generating arguments when desired. The downside is it's //! more verbose. //! //! ```no_run //! // (Full example with detailed comments in examples/01b_quick_example.rs) //! // //! // This example demonstrates clap's full 'builder pattern' style of creating arguments which is //! // more verbose, but allows easier editing, and at times more advanced options, or the possibility //! // to generate arguments dynamically. //! extern crate clap; //! use clap::{Arg, App, SubCommand}; //! //! fn main() { //! let matches = App::new("My Super Program") //! 
.version("1.0") //! .author("Kevin K. ") //! .about("Does awesome things") //! .arg(Arg::with_name("config") //! .short("c") //! .long("config") //! .value_name("FILE") //! .help("Sets a custom config file") //! .takes_value(true)) //! .arg(Arg::with_name("INPUT") //! .help("Sets the input file to use") //! .required(true) //! .index(1)) //! .arg(Arg::with_name("v") //! .short("v") //! .multiple(true) //! .help("Sets the level of verbosity")) //! .subcommand(SubCommand::with_name("test") //! .about("controls testing features") //! .version("1.3") //! .author("Someone E. ") //! .arg(Arg::with_name("debug") //! .short("d") //! .help("print debug information verbosely"))) //! .get_matches(); //! //! // Gets a value for config if supplied by user, or defaults to "default.conf" //! let config = matches.value_of("config").unwrap_or("default.conf"); //! println!("Value for config: {}", config); //! //! // Calling .unwrap() is safe here because "INPUT" is required (if "INPUT" wasn't //! // required we could have used an 'if let' to conditionally get the value) //! println!("Using input file: {}", matches.value_of("INPUT").unwrap()); //! //! // Vary the output based on how many times the user used the "verbose" flag //! // (i.e. 'myprog -v -v -v' or 'myprog -vvv' vs 'myprog -v' //! match matches.occurrences_of("v") { //! 0 => println!("No verbose info"), //! 1 => println!("Some verbose info"), //! 2 => println!("Tons of verbose info"), //! 3 | _ => println!("Don't be crazy"), //! } //! //! // You can handle information about subcommands by requesting their matches by name //! // (as below), requesting just the name used, or both at the same time //! if let Some(matches) = matches.subcommand_matches("test") { //! if matches.is_present("debug") { //! println!("Printing debug info..."); //! } else { //! println!("Printing normally..."); //! } //! } //! //! // more program logic goes here... //! } //! ``` //! //! The next example shows a far less verbose method, but sacrifices some of the advanced //! configuration options (not shown in this small example). This method also takes a *very* minor //! runtime penalty. //! //! ```no_run //! // (Full example with detailed comments in examples/01a_quick_example.rs) //! // //! // This example demonstrates clap's "usage strings" method of creating arguments //! // which is less verbose //! extern crate clap; //! use clap::{Arg, App, SubCommand}; //! //! fn main() { //! let matches = App::new("myapp") //! .version("1.0") //! .author("Kevin K. ") //! .about("Does awesome things") //! .args_from_usage( //! "-c, --config=[FILE] 'Sets a custom config file' //! 'Sets the input file to use' //! -v... 'Sets the level of verbosity'") //! .subcommand(SubCommand::with_name("test") //! .about("controls testing features") //! .version("1.3") //! .author("Someone E. ") //! .arg_from_usage("-d, --debug 'Print debug information'")) //! .get_matches(); //! //! // Same as previous example... //! } //! ``` //! //! This third method shows how you can use a YAML file to build your CLI and keep your Rust source //! tidy or support multiple localized translations by having different YAML files for each //! localization. //! //! First, create the `cli.yml` file to hold your CLI options, but it could be called anything we //! like: //! //! ```yaml //! name: myapp //! version: "1.0" //! author: Kevin K. //! about: Does awesome things //! args: //! - config: //! short: c //! long: config //! value_name: FILE //! help: Sets a custom config file //! takes_value: true //! - INPUT: //! 
help: Sets the input file to use //! required: true //! index: 1 //! - verbose: //! short: v //! multiple: true //! help: Sets the level of verbosity //! subcommands: //! - test: //! about: controls testing features //! version: "1.3" //! author: Someone E. //! args: //! - debug: //! short: d //! help: print debug information //! ``` //! //! Since this feature requires additional dependencies that not everyone may want, it is *not* //! compiled in by default and we need to enable a feature flag in Cargo.toml: //! //! Simply change your `clap = "~2.27.0"` to `clap = {version = "~2.27.0", features = ["yaml"]}`. //! //! At last we create our `main.rs` file just like we would have with the previous two examples: //! //! ```ignore //! // (Full example with detailed comments in examples/17_yaml.rs) //! // //! // This example demonstrates clap's building from YAML style of creating arguments which is far //! // more clean, but takes a very small performance hit compared to the other two methods. //! #[macro_use] //! extern crate clap; //! use clap::App; //! //! fn main() { //! // The YAML file is found relative to the current file, similar to how modules are found //! let yaml = load_yaml!("cli.yml"); //! let matches = App::from_yaml(yaml).get_matches(); //! //! // Same as previous examples... //! } //! ``` //! //! Finally there is a macro version, which is like a hybrid approach offering the speed of the //! builder pattern (the first example), but without all the verbosity. //! //! ```no_run //! #[macro_use] //! extern crate clap; //! //! fn main() { //! let matches = clap_app!(myapp => //! (version: "1.0") //! (author: "Kevin K. ") //! (about: "Does awesome things") //! (@arg CONFIG: -c --config +takes_value "Sets a custom config file") //! (@arg INPUT: +required "Sets the input file to use") //! (@arg debug: -d ... "Sets the level of debugging information") //! (@subcommand test => //! (about: "controls testing features") //! (version: "1.3") //! (author: "Someone E. ") //! (@arg verbose: -v --verbose "Print test information verbosely") //! ) //! ).get_matches(); //! //! // Same as before... //! } //! ``` //! //! If you were to compile any of the above programs and run them with the flag `--help` or `-h` (or //! `help` subcommand, since we defined `test` as a subcommand) the following would be output //! //! ```text //! $ myprog --help //! My Super Program 1.0 //! Kevin K. //! Does awesome things //! //! USAGE: //! MyApp [FLAGS] [OPTIONS] [SUBCOMMAND] //! //! FLAGS: //! -h, --help Prints this message //! -v Sets the level of verbosity //! -V, --version Prints version information //! //! OPTIONS: //! -c, --config Sets a custom config file //! //! ARGS: //! INPUT The input file to use //! //! SUBCOMMANDS: //! help Prints this message //! test Controls testing features //! ``` //! //! **NOTE:** You could also run `myapp test --help` to see similar output and options for the //! `test` subcommand. //! //! ## Try it! //! //! ### Pre-Built Test //! //! To try out the pre-built example, use the following steps: //! //! * Clone the repository `$ git clone https://github.com/clap-rs/clap && cd clap-rs/tests` //! * Compile the example `$ cargo build --release` //! * Run the help info `$ ./target/release/claptests --help` //! * Play with the arguments! //! //! ### BYOB (Build Your Own Binary) //! //! To test out `clap`'s default auto-generated help/version follow these steps: //! //! * Create a new cargo project `$ cargo new fake --bin && cd fake` //! * Add `clap` to your `Cargo.toml` //! //! 
```toml //! [dependencies] //! clap = "2" //! ``` //! //! * Add the following to your `src/main.rs` //! //! ```no_run //! extern crate clap; //! use clap::App; //! //! fn main() { //! App::new("fake").version("v1.0-beta").get_matches(); //! } //! ``` //! //! * Build your program `$ cargo build --release` //! * Run with help or version `$ ./target/release/fake --help` or `$ ./target/release/fake //! --version` //! //! ## Usage //! //! For full usage, add `clap` as a dependency in your `Cargo.toml` (it is **highly** recommended to //! use the `~major.minor.patch` style versions in your `Cargo.toml`, for more information see //! [Compatibility Policy](#compatibility-policy)) to use from crates.io: //! //! ```toml //! [dependencies] //! clap = "~2.27.0" //! ``` //! //! Or get the latest changes from the master branch at github: //! //! ```toml //! [dependencies.clap] //! git = "https://github.com/clap-rs/clap.git" //! ``` //! //! Add `extern crate clap;` to your crate root. //! //! Define a list of valid arguments for your program (see the //! [documentation](https://docs.rs/clap/) or [examples/] directory of this repo) //! //! Then run `cargo build` or `cargo update && cargo build` for your project. //! //! ### Optional Dependencies / Features //! //! #### Features enabled by default //! //! * `suggestions`: Turns on the `Did you mean '--myoption'?` feature for when users make typos. (builds dependency `strsim`) //! * `color`: Turns on colored error messages. This feature only works on non-Windows OSs. (builds dependency `ansi-term` and `atty`) //! * `wrap_help`: Wraps the help at the actual terminal width when //! available, instead of 120 characters. (builds dependency `textwrap` //! with feature `term_size`) //! //! To disable these, add this to your `Cargo.toml`: //! //! ```toml //! [dependencies.clap] //! version = "~2.27.0" //! default-features = false //! ``` //! //! You can also selectively enable only the features you'd like to include, by adding: //! //! ```toml //! [dependencies.clap] //! version = "~2.27.0" //! default-features = false //! //! # Cherry-pick the features you'd like to use //! features = [ "suggestions", "color" ] //! ``` //! //! #### Opt-in features //! //! * **"yaml"**: Enables building CLIs from YAML documents. (builds dependency `yaml-rust`) //! * **"unstable"**: Enables unstable `clap` features that may change from release to release //! //! ### Dependencies Tree //! //! The following graphic depicts `clap`s dependency graph (generated using //! [cargo-graph](https://github.com/kbknapp/cargo-graph)). //! //! * **Dashed** Line: Optional dependency //! * **Red** Color: **NOT** included by default (must use cargo `features` to enable) //! * **Blue** Color: Dev dependency, only used while developing. //! //! ![clap dependencies](https://github.com/clap-rs/clap/blob/v2.34.0/clap_dep_graph.png) //! //! ### More Information //! //! You can find complete documentation on the [docs.rs](https://docs.rs/clap/) for this project. //! //! You can also find usage examples in the [examples/] directory of this repo. //! //! #### Video Tutorials //! //! There's also the video tutorial series [Argument Parsing with Rust v2][video tutorials]. //! //! These videos slowly trickle out as I finish them and currently a work in progress. //! //! ## How to Contribute //! //! Contributions are always welcome! And there is a multitude of ways in which you can help //! depending on what you like to do, or are good at. Anything from documentation, code cleanup, //! 
issue completion, new features, you name it, even filing issues is contributing and greatly //! appreciated! //! //! Another really great way to help is if you find an interesting, or helpful way in which to use //! `clap`. You can either add it to the [examples/] directory, or file an issue and tell //! me. I'm all about giving credit where credit is due :) //! //! Please read [CONTRIBUTING.md](https://github.com/clap-rs/clap/blob/v2.34.0/.github/CONTRIBUTING.md) before you start contributing. //! //! //! ### Testing Code //! //! To test with all features both enabled and disabled, you can run theese commands: //! //! ```text //! $ cargo test --no-default-features //! $ cargo test --features "yaml unstable" //! ``` //! //! Alternatively, if you have [`just`](https://github.com/casey/just) installed you can run the //! prebuilt recipes. *Not* using `just` is perfectly fine as well, it simply bundles commands //! automatically. //! //! For example, to test the code, as above simply run: //! //! ```text //! $ just run-tests //! ``` //! //! From here on, I will list the appropriate `cargo` command as well as the `just` command. //! //! Sometimes it's helpful to only run a subset of the tests, which can be done via: //! //! ```text //! $ cargo test --test //! //! # Or //! //! $ just run-test //! ``` //! //! ### Linting Code //! //! During the CI process `clap` runs against many different lints using //! [`clippy`](https://github.com/Manishearth/rust-clippy). In order to check if these lints pass on //! your own computer prior to submitting a PR you'll need a nightly compiler. //! //! In order to check the code for lints run either: //! //! ```text //! $ rustup override add nightly //! $ cargo build --features lints //! $ rustup override remove //! //! # Or //! //! $ just lint //! ``` //! //! ### Debugging Code //! //! Another helpful technique is to see the `clap` debug output while developing features. In order //! to see the debug output while running the full test suite or individual tests, run: //! //! ```text //! $ cargo test --features debug //! //! # Or for individual tests //! $ cargo test --test --features debug //! //! # The corresponding just command for individual debugging tests is: //! $ just debug //! ``` //! //! ### Goals //! //! There are a few goals of `clap` that I'd like to maintain throughout contributions. If your //! proposed changes break, or go against any of these goals we'll discuss the changes further //! before merging (but will *not* be ignored, all contributes are welcome!). These are by no means //! hard-and-fast rules, as I'm no expert and break them myself from time to time (even if by //! mistake or ignorance). //! //! * Remain backwards compatible when possible //! - If backwards compatibility *must* be broken, use deprecation warnings if at all possible before //! removing legacy code - This does not apply for security concerns //! * Parse arguments quickly //! - Parsing of arguments shouldn't slow down usage of the main program - This is also true of //! generating help and usage information (although *slightly* less stringent, as the program is about //! to exit) //! * Try to be cognizant of memory usage //! - Once parsing is complete, the memory footprint of `clap` should be low since the main program //! is the star of the show //! * `panic!` on *developer* error, exit gracefully on *end-user* error //! //! ### Compatibility Policy //! //! Because `clap` takes `SemVer` and compatibility seriously, this is the official policy regarding //! 
breaking changes and previous versions of Rust. //! //! `clap` will pin the minimum required version of Rust to the CI builds. Bumping the minimum //! version of Rust is considered a minor breaking change, meaning *at a minimum* the minor version //! of `clap` will be bumped. //! //! In order to keep from being surprised by breaking changes, it is **highly** recommended to use //! the `~major.minor.patch` style in your `Cargo.toml`: //! //! ```toml //! [dependencies] clap = "~2.27.0" //! ``` //! //! This will cause *only* the patch version to be updated upon a `cargo update` call, and therefore //! cannot break due to new features, or bumped minimum versions of Rust. //! //! #### Minimum Version of Rust //! //! `clap` will officially support current stable Rust, minus two releases, but may work with prior //! releases as well. For example, current stable Rust at the time of this writing is 1.21.0, //! meaning `clap` is guaranteed to compile with 1.19.0 and beyond. At the 1.22.0 release, `clap` //! will be guaranteed to compile with 1.20.0 and beyond, etc. //! //! Upon bumping the minimum version of Rust (assuming it's within the stable-2 range), it *must* be //! clearly annotated in the `CHANGELOG.md` //! //! ## License //! //! `clap` is licensed under the MIT license. Please read the [LICENSE-MIT][license] file in //! this repository for more information. //! //! [examples/]: https://github.com/clap-rs/clap/tree/v2.34.0/examples //! [video tutorials]: https://www.youtube.com/playlist?list=PLza5oFLQGTl2Z5T8g1pRkIynR3E0_pc7U //! [license]: https://github.com/clap-rs/clap/blob/v2.34.0/LICENSE-MIT #![crate_type = "lib"] #![doc(html_root_url = "https://docs.rs/clap/2.34.0")] #![deny( missing_docs, missing_debug_implementations, missing_copy_implementations, trivial_casts, unused_import_braces, unused_allocation )] // Lints we'd like to deny but are currently failing for upstream crates // unused_qualifications (bitflags, clippy) // trivial_numeric_casts (bitflags) #![cfg_attr( not(any(feature = "cargo-clippy", feature = "nightly")), forbid(unstable_features) )] //#![cfg_attr(feature = "lints", feature(plugin))] //#![cfg_attr(feature = "lints", plugin(clippy))] // Need to disable deny(warnings) while deprecations are active //#![cfg_attr(feature = "cargo-clippy", deny(warnings))] // Due to our "MSRV for 2.x will remain unchanged" policy, we can't fix these warnings #![allow(bare_trait_objects, deprecated)] #[cfg(all(feature = "color", not(target_os = "windows")))] extern crate ansi_term; #[cfg(feature = "color")] extern crate atty; #[macro_use] extern crate bitflags; #[cfg(feature = "suggestions")] extern crate strsim; #[cfg(feature = "wrap_help")] extern crate term_size; extern crate textwrap; extern crate unicode_width; #[cfg(feature = "vec_map")] extern crate vec_map; #[cfg(feature = "yaml")] extern crate yaml_rust; pub use app::{App, AppSettings}; pub use args::{Arg, ArgGroup, ArgMatches, ArgSettings, OsValues, SubCommand, Values}; pub use completions::Shell; pub use errors::{Error, ErrorKind, Result}; pub use fmt::Format; #[cfg(feature = "yaml")] pub use yaml_rust::YamlLoader; #[macro_use] mod macros; mod app; mod args; mod completions; mod errors; mod fmt; mod map; mod osstringext; mod strext; mod suggestions; mod usage_parser; const INTERNAL_ERROR_MSG: &str = "Fatal internal error. 
Please consider filing a bug \ report at https://github.com/clap-rs/clap/issues"; const INVALID_UTF8: &str = "unexpected invalid UTF-8 code point"; #[cfg(unstable)] pub use derive::{ArgEnum, ClapApp, FromArgMatches, IntoApp}; #[cfg(unstable)] mod derive { /// @TODO @release @docs pub trait ClapApp: IntoApp + FromArgMatches + Sized { /// @TODO @release @docs fn parse() -> Self { Self::from_argmatches(Self::into_app().get_matches()) } /// @TODO @release @docs fn parse_from(argv: I) -> Self where I: IntoIterator, T: Into + Clone, { Self::from_argmatches(Self::into_app().get_matches_from(argv)) } /// @TODO @release @docs fn try_parse() -> Result { Self::try_from_argmatches(Self::into_app().get_matches_safe()?) } /// @TODO @release @docs fn try_parse_from(argv: I) -> Result where I: IntoIterator, T: Into + Clone, { Self::try_from_argmatches(Self::into_app().get_matches_from_safe(argv)?) } } /// @TODO @release @docs pub trait IntoApp { /// @TODO @release @docs fn into_app<'a, 'b>() -> clap::App<'a, 'b>; } /// @TODO @release @docs pub trait FromArgMatches: Sized { /// @TODO @release @docs fn from_argmatches<'a>(matches: clap::ArgMatches<'a>) -> Self; /// @TODO @release @docs fn try_from_argmatches<'a>(matches: clap::ArgMatches<'a>) -> Result; } /// @TODO @release @docs pub trait ArgEnum {} } vendor/clap/src/completions/0000775000175000017500000000000014172417313016714 5ustar mwhudsonmwhudsonvendor/clap/src/completions/elvish.rs0000664000175000017500000000702414172417313020557 0ustar mwhudsonmwhudson// Std use std::io::Write; // Internal use crate::{app::parser::Parser, INTERNAL_ERROR_MSG}; pub struct ElvishGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> ElvishGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { ElvishGen { p } } pub fn generate_to(&self, buf: &mut W) { let bin_name = self.p.meta.bin_name.as_ref().unwrap(); let mut names = vec![]; let subcommands_cases = generate_inner(self.p, "", &mut names); let result = format!( r#" edit:completion:arg-completer[{bin_name}] = [@words]{{ fn spaces [n]{{ repeat $n ' ' | joins '' }} fn cand [text desc]{{ edit:complex-candidate $text &display-suffix=' '(spaces (- 14 (wcswidth $text)))$desc }} command = '{bin_name}' for word $words[1:-1] {{ if (has-prefix $word '-') {{ break }} command = $command';'$word }} completions = [{subcommands_cases} ] $completions[$command] }} "#, bin_name = bin_name, subcommands_cases = subcommands_cases ); w!(buf, result.as_bytes()); } } // Escape string inside single quotes fn escape_string(string: &str) -> String { string.replace("'", "''") } fn get_tooltip(help: Option<&str>, data: T) -> String { match help { Some(help) => escape_string(help), _ => data.to_string(), } } fn generate_inner<'a, 'b, 'p>( p: &'p Parser<'a, 'b>, previous_command_name: &str, names: &mut Vec<&'p str>, ) -> String { debugln!("ElvishGen::generate_inner;"); let command_name = if previous_command_name.is_empty() { p.meta.bin_name.as_ref().expect(INTERNAL_ERROR_MSG).clone() } else { format!("{};{}", previous_command_name, &p.meta.name) }; let mut completions = String::new(); let preamble = String::from("\n cand "); for option in p.opts() { if let Some(data) = option.s.short { let tooltip = get_tooltip(option.b.help, data); completions.push_str(&preamble); completions.push_str(format!("-{} '{}'", data, tooltip).as_str()); } if let Some(data) = option.s.long { let tooltip = get_tooltip(option.b.help, data); completions.push_str(&preamble); completions.push_str(format!("--{} '{}'", data, tooltip).as_str()); } } for flag in 
p.flags() { if let Some(data) = flag.s.short { let tooltip = get_tooltip(flag.b.help, data); completions.push_str(&preamble); completions.push_str(format!("-{} '{}'", data, tooltip).as_str()); } if let Some(data) = flag.s.long { let tooltip = get_tooltip(flag.b.help, data); completions.push_str(&preamble); completions.push_str(format!("--{} '{}'", data, tooltip).as_str()); } } for subcommand in &p.subcommands { let data = &subcommand.p.meta.name; let tooltip = get_tooltip(subcommand.p.meta.about, data); completions.push_str(&preamble); completions.push_str(format!("{} '{}'", data, tooltip).as_str()); } let mut subcommands_cases = format!( r" &'{}'= {{{} }}", &command_name, completions ); for subcommand in &p.subcommands { let subcommand_subcommands_cases = generate_inner(&subcommand.p, &command_name, names); subcommands_cases.push_str(&subcommand_subcommands_cases); } subcommands_cases } vendor/clap/src/completions/powershell.rs0000664000175000017500000001142014172417313021444 0ustar mwhudsonmwhudson// Std use std::io::Write; // Internal use crate::{app::parser::Parser, INTERNAL_ERROR_MSG}; pub struct PowerShellGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> PowerShellGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { PowerShellGen { p } } pub fn generate_to(&self, buf: &mut W) { let bin_name = self.p.meta.bin_name.as_ref().unwrap(); let mut names = vec![]; let subcommands_cases = generate_inner(self.p, "", &mut names); let result = format!( r#" using namespace System.Management.Automation using namespace System.Management.Automation.Language Register-ArgumentCompleter -Native -CommandName '{bin_name}' -ScriptBlock {{ param($wordToComplete, $commandAst, $cursorPosition) $commandElements = $commandAst.CommandElements $command = @( '{bin_name}' for ($i = 1; $i -lt $commandElements.Count; $i++) {{ $element = $commandElements[$i] if ($element -isnot [StringConstantExpressionAst] -or $element.StringConstantType -ne [StringConstantType]::BareWord -or $element.Value.StartsWith('-')) {{ break }} $element.Value }}) -join ';' $completions = @(switch ($command) {{{subcommands_cases} }}) $completions.Where{{ $_.CompletionText -like "$wordToComplete*" }} | Sort-Object -Property ListItemText }} "#, bin_name = bin_name, subcommands_cases = subcommands_cases ); w!(buf, result.as_bytes()); } } // Escape string inside single quotes fn escape_string(string: &str) -> String { string.replace("'", "''") } fn get_tooltip(help: Option<&str>, data: T) -> String { match help { Some(help) => escape_string(help), _ => data.to_string(), } } fn generate_inner<'a, 'b, 'p>( p: &'p Parser<'a, 'b>, previous_command_name: &str, names: &mut Vec<&'p str>, ) -> String { debugln!("PowerShellGen::generate_inner;"); let command_name = if previous_command_name.is_empty() { p.meta.bin_name.as_ref().expect(INTERNAL_ERROR_MSG).clone() } else { format!("{};{}", previous_command_name, &p.meta.name) }; let mut completions = String::new(); let preamble = String::from("\n [CompletionResult]::new("); for option in p.opts() { if let Some(data) = option.s.short { let tooltip = get_tooltip(option.b.help, data); completions.push_str(&preamble); completions.push_str( format!( "'-{}', '{}', {}, '{}')", data, data, "[CompletionResultType]::ParameterName", tooltip ) .as_str(), ); } if let Some(data) = option.s.long { let tooltip = get_tooltip(option.b.help, data); completions.push_str(&preamble); completions.push_str( format!( "'--{}', '{}', {}, '{}')", data, data, "[CompletionResultType]::ParameterName", tooltip ) 
.as_str(), ); } } for flag in p.flags() { if let Some(data) = flag.s.short { let tooltip = get_tooltip(flag.b.help, data); completions.push_str(&preamble); completions.push_str( format!( "'-{}', '{}', {}, '{}')", data, data, "[CompletionResultType]::ParameterName", tooltip ) .as_str(), ); } if let Some(data) = flag.s.long { let tooltip = get_tooltip(flag.b.help, data); completions.push_str(&preamble); completions.push_str( format!( "'--{}', '{}', {}, '{}')", data, data, "[CompletionResultType]::ParameterName", tooltip ) .as_str(), ); } } for subcommand in &p.subcommands { let data = &subcommand.p.meta.name; let tooltip = get_tooltip(subcommand.p.meta.about, data); completions.push_str(&preamble); completions.push_str( format!( "'{}', '{}', {}, '{}')", data, data, "[CompletionResultType]::ParameterValue", tooltip ) .as_str(), ); } let mut subcommands_cases = format!( r" '{}' {{{} break }}", &command_name, completions ); for subcommand in &p.subcommands { let subcommand_subcommands_cases = generate_inner(&subcommand.p, &command_name, names); subcommands_cases.push_str(&subcommand_subcommands_cases); } subcommands_cases } vendor/clap/src/completions/mod.rs0000664000175000017500000001323214172417313020042 0ustar mwhudsonmwhudson#[macro_use] mod macros; mod bash; mod elvish; mod fish; mod powershell; mod shell; mod zsh; // Std use std::io::Write; // Internal pub use crate::completions::shell::Shell; use crate::{ app::parser::Parser, completions::{ bash::BashGen, elvish::ElvishGen, fish::FishGen, powershell::PowerShellGen, zsh::ZshGen, }, }; pub struct ComplGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> ComplGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { ComplGen { p } } pub fn generate(&self, for_shell: Shell, buf: &mut W) { match for_shell { Shell::Bash => BashGen::new(self.p).generate_to(buf), Shell::Fish => FishGen::new(self.p).generate_to(buf), Shell::Zsh => ZshGen::new(self.p).generate_to(buf), Shell::PowerShell => PowerShellGen::new(self.p).generate_to(buf), Shell::Elvish => ElvishGen::new(self.p).generate_to(buf), } } } // Gets all subcommands including child subcommands in the form of 'name' where the name // is a single word (i.e. "install") of the path to said subcommand (i.e. // "rustup toolchain install") // // Also note, aliases are treated as their own subcommands but duplicates of whatever they're // aliasing. pub fn all_subcommand_names(p: &Parser) -> Vec { debugln!("all_subcommand_names;"); let mut subcmds: Vec<_> = subcommands_of(p) .iter() .map(|&(ref n, _)| n.clone()) .collect(); for sc_v in p.subcommands.iter().map(|s| all_subcommand_names(&s.p)) { subcmds.extend(sc_v); } subcmds.sort(); subcmds.dedup(); subcmds } // Gets all subcommands including child subcommands in the form of ('name', 'bin_name') where the name // is a single word (i.e. "install") of the path and full bin_name of said subcommand (i.e. // "rustup toolchain install") // // Also note, aliases are treated as their own subcommands but duplicates of whatever they're // aliasing. pub fn all_subcommands(p: &Parser) -> Vec<(String, String)> { debugln!("all_subcommands;"); let mut subcmds: Vec<_> = subcommands_of(p); for sc_v in p.subcommands.iter().map(|s| all_subcommands(&s.p)) { subcmds.extend(sc_v); } subcmds } // Gets all subcommands excluding child subcommands in the form of (name, bin_name) where the name // is a single word (i.e. "install") and the bin_name is a space delineated list of the path to said // subcommand (i.e. 
"rustup toolchain install") // // Also note, aliases are treated as their own subcommands but duplicates of whatever they're // aliasing. pub fn subcommands_of(p: &Parser) -> Vec<(String, String)> { debugln!( "subcommands_of: name={}, bin_name={}", p.meta.name, p.meta.bin_name.as_ref().unwrap() ); let mut subcmds = vec![]; debugln!( "subcommands_of: Has subcommands...{:?}", p.has_subcommands() ); if !p.has_subcommands() { let mut ret = vec![]; debugln!("subcommands_of: Looking for aliases..."); if let Some(ref aliases) = p.meta.aliases { for &(n, _) in aliases { debugln!("subcommands_of:iter:iter: Found alias...{}", n); let mut als_bin_name: Vec<_> = p.meta.bin_name.as_ref().unwrap().split(' ').collect(); als_bin_name.push(n); let old = als_bin_name.len() - 2; als_bin_name.swap_remove(old); ret.push((n.to_owned(), als_bin_name.join(" "))); } } return ret; } for sc in &p.subcommands { debugln!( "subcommands_of:iter: name={}, bin_name={}", sc.p.meta.name, sc.p.meta.bin_name.as_ref().unwrap() ); debugln!("subcommands_of:iter: Looking for aliases..."); if let Some(ref aliases) = sc.p.meta.aliases { for &(n, _) in aliases { debugln!("subcommands_of:iter:iter: Found alias...{}", n); let mut als_bin_name: Vec<_> = p.meta.bin_name.as_ref().unwrap().split(' ').collect(); als_bin_name.push(n); let old = als_bin_name.len() - 2; als_bin_name.swap_remove(old); subcmds.push((n.to_owned(), als_bin_name.join(" "))); } } subcmds.push(( sc.p.meta.name.clone(), sc.p.meta.bin_name.as_ref().unwrap().clone(), )); } subcmds } pub fn get_all_subcommand_paths(p: &Parser, first: bool) -> Vec { debugln!("get_all_subcommand_paths;"); let mut subcmds = vec![]; if !p.has_subcommands() { if !first { let name = &*p.meta.name; let path = p.meta.bin_name.as_ref().unwrap().clone().replace(" ", "__"); let mut ret = vec![path.clone()]; if let Some(ref aliases) = p.meta.aliases { for &(n, _) in aliases { ret.push(path.replace(name, n)); } } return ret; } return vec![]; } for sc in &p.subcommands { let name = &*sc.p.meta.name; let path = sc.p.meta .bin_name .as_ref() .unwrap() .clone() .replace(" ", "__"); subcmds.push(path.clone()); if let Some(ref aliases) = sc.p.meta.aliases { for &(n, _) in aliases { subcmds.push(path.replace(name, n)); } } } for sc_v in p .subcommands .iter() .map(|s| get_all_subcommand_paths(&s.p, false)) { subcmds.extend(sc_v); } subcmds } vendor/clap/src/completions/fish.rs0000664000175000017500000000655514172417313020226 0ustar mwhudsonmwhudson// Std use std::io::Write; // Internal use crate::app::parser::Parser; pub struct FishGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> FishGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { FishGen { p } } pub fn generate_to(&self, buf: &mut W) { let command = self.p.meta.bin_name.as_ref().unwrap(); let mut buffer = String::new(); gen_fish_inner(command, self, command, &mut buffer); w!(buf, buffer.as_bytes()); } } // Escape string inside single quotes fn escape_string(string: &str) -> String { string.replace("\\", "\\\\").replace("'", "\\'") } fn gen_fish_inner(root_command: &str, comp_gen: &FishGen, subcommand: &str, buffer: &mut String) { debugln!("FishGen::gen_fish_inner;"); // example : // // complete // -c {command} // -d "{description}" // -s {short} // -l {long} // -a "{possible_arguments}" // -r # if require parameter // -f # don't use file completion // -n "__fish_use_subcommand" # complete for command "myprog" // -n "__fish_seen_subcommand_from subcmd1" # complete for command "myprog subcmd1" let mut basic_template = 
format!("complete -c {} -n ", root_command); if root_command == subcommand { basic_template.push_str("\"__fish_use_subcommand\""); } else { basic_template.push_str(format!("\"__fish_seen_subcommand_from {}\"", subcommand).as_str()); } for option in comp_gen.p.opts() { let mut template = basic_template.clone(); if let Some(data) = option.s.short { template.push_str(format!(" -s {}", data).as_str()); } if let Some(data) = option.s.long { template.push_str(format!(" -l {}", data).as_str()); } if let Some(data) = option.b.help { template.push_str(format!(" -d '{}'", escape_string(data)).as_str()); } if let Some(ref data) = option.v.possible_vals { template.push_str(format!(" -r -f -a \"{}\"", data.join(" ")).as_str()); } buffer.push_str(template.as_str()); buffer.push('\n'); } for flag in comp_gen.p.flags() { let mut template = basic_template.clone(); if let Some(data) = flag.s.short { template.push_str(format!(" -s {}", data).as_str()); } if let Some(data) = flag.s.long { template.push_str(format!(" -l {}", data).as_str()); } if let Some(data) = flag.b.help { template.push_str(format!(" -d '{}'", escape_string(data)).as_str()); } buffer.push_str(template.as_str()); buffer.push('\n'); } for subcommand in &comp_gen.p.subcommands { let mut template = basic_template.clone(); template.push_str(" -f"); template.push_str(format!(" -a \"{}\"", &subcommand.p.meta.name).as_str()); if let Some(data) = subcommand.p.meta.about { template.push_str(format!(" -d '{}'", escape_string(data)).as_str()) } buffer.push_str(template.as_str()); buffer.push('\n'); } // generate options of subcommands for subcommand in &comp_gen.p.subcommands { let sub_comp_gen = FishGen::new(&subcommand.p); gen_fish_inner(root_command, &sub_comp_gen, &subcommand.to_string(), buffer); } } vendor/clap/src/completions/bash.rs0000664000175000017500000001424514172417313020205 0ustar mwhudsonmwhudson// Std use std::io::Write; // Internal use crate::{ app::parser::Parser, args::{AnyArg, OptBuilder}, completions, }; pub struct BashGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> BashGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { BashGen { p } } pub fn generate_to(&self, buf: &mut W) { w!( buf, format!( r#"_{name}() {{ local i cur prev opts cmds COMPREPLY=() cur="${{COMP_WORDS[COMP_CWORD]}}" prev="${{COMP_WORDS[COMP_CWORD-1]}}" cmd="" opts="" for i in ${{COMP_WORDS[@]}} do case "${{i}}" in {name}) cmd="{name}" ;; {subcmds} *) ;; esac done case "${{cmd}}" in {name}) opts="{name_opts}" if [[ ${{cur}} == -* || ${{COMP_CWORD}} -eq 1 ]] ; then COMPREPLY=( $(compgen -W "${{opts}}" -- "${{cur}}") ) return 0 fi case "${{prev}}" in {name_opts_details} *) COMPREPLY=() ;; esac COMPREPLY=( $(compgen -W "${{opts}}" -- "${{cur}}") ) return 0 ;; {subcmd_details} esac }} complete -F _{name} -o bashdefault -o default {name} "#, name = self.p.meta.bin_name.as_ref().unwrap(), name_opts = self.all_options_for_path(self.p.meta.bin_name.as_ref().unwrap()), name_opts_details = self.option_details_for_path(self.p.meta.bin_name.as_ref().unwrap()), subcmds = self.all_subcommands(), subcmd_details = self.subcommand_details() ) .as_bytes() ); } fn all_subcommands(&self) -> String { debugln!("BashGen::all_subcommands;"); let mut subcmds = String::new(); let scs = completions::all_subcommand_names(self.p); for sc in &scs { subcmds = format!( r#"{} {name}) cmd+="__{fn_name}" ;;"#, subcmds, name = sc, fn_name = sc.replace("-", "__") ); } subcmds } fn subcommand_details(&self) -> String { debugln!("BashGen::subcommand_details;"); let mut 
subcmd_dets = String::new(); let mut scs = completions::get_all_subcommand_paths(self.p, true); scs.sort(); scs.dedup(); for sc in &scs { subcmd_dets = format!( r#"{} {subcmd}) opts="{sc_opts}" if [[ ${{cur}} == -* || ${{COMP_CWORD}} -eq {level} ]] ; then COMPREPLY=( $(compgen -W "${{opts}}" -- "${{cur}}") ) return 0 fi case "${{prev}}" in {opts_details} *) COMPREPLY=() ;; esac COMPREPLY=( $(compgen -W "${{opts}}" -- "${{cur}}") ) return 0 ;;"#, subcmd_dets, subcmd = sc.replace("-", "__"), sc_opts = self.all_options_for_path(&*sc), level = sc.split("__").count(), opts_details = self.option_details_for_path(&*sc) ); } subcmd_dets } fn option_details_for_path(&self, path: &str) -> String { debugln!("BashGen::option_details_for_path: path={}", path); let mut p = self.p; for sc in path.split("__").skip(1) { debugln!("BashGen::option_details_for_path:iter: sc={}", sc); p = &find_subcmd!(p, sc).unwrap().p; } let mut opts = String::new(); for o in p.opts() { if let Some(l) = o.s.long { opts = format!( "{} --{}) COMPREPLY=({}) return 0 ;;", opts, l, self.vals_for(o) ); } if let Some(s) = o.s.short { opts = format!( "{} -{}) COMPREPLY=({}) return 0 ;;", opts, s, self.vals_for(o) ); } } opts } fn vals_for(&self, o: &OptBuilder) -> String { debugln!("BashGen::vals_for: o={}", o.b.name); if let Some(vals) = o.possible_vals() { format!(r#"$(compgen -W "{}" -- "${{cur}}")"#, vals.join(" ")) } else { String::from(r#"$(compgen -f "${cur}")"#) } } fn all_options_for_path(&self, path: &str) -> String { debugln!("BashGen::all_options_for_path: path={}", path); let mut p = self.p; for sc in path.split("__").skip(1) { debugln!("BashGen::all_options_for_path:iter: sc={}", sc); p = &find_subcmd!(p, sc).unwrap().p; } let mut opts = shorts!(p).fold(String::new(), |acc, s| format!("{} -{}", acc, s)); opts = format!( "{} {}", opts, longs!(p).fold(String::new(), |acc, l| format!("{} --{}", acc, l)) ); opts = format!( "{} {}", opts, p.positionals .values() .fold(String::new(), |acc, p| format!("{} {}", acc, p)) ); opts = format!( "{} {}", opts, p.subcommands .iter() .fold(String::new(), |acc, s| format!("{} {}", acc, s.p.meta.name)) ); for sc in &p.subcommands { if let Some(ref aliases) = sc.p.meta.aliases { opts = format!( "{} {}", opts, aliases .iter() .map(|&(n, _)| n) .fold(String::new(), |acc, a| format!("{} {}", acc, a)) ); } } opts } } vendor/clap/src/completions/zsh.rs0000664000175000017500000003361314172417313020074 0ustar mwhudsonmwhudson// Std #[allow(deprecated, unused_imports)] use std::{ascii::AsciiExt, io::Write}; // Internal use crate::{ app::{parser::Parser, App}, args::{AnyArg, ArgSettings}, completions, INTERNAL_ERROR_MSG, }; pub struct ZshGen<'a, 'b> where 'a: 'b, { p: &'b Parser<'a, 'b>, } impl<'a, 'b> ZshGen<'a, 'b> { pub fn new(p: &'b Parser<'a, 'b>) -> Self { debugln!("ZshGen::new;"); ZshGen { p } } pub fn generate_to(&self, buf: &mut W) { debugln!("ZshGen::generate_to;"); w!( buf, format!( "\ #compdef {name} autoload -U is-at-least _{name}() {{ typeset -A opt_args typeset -a _arguments_options local ret=1 if is-at-least 5.2; then _arguments_options=(-s -S -C) else _arguments_options=(-s -C) fi local context curcontext=\"$curcontext\" state line {initial_args} {subcommands} }} {subcommand_details} _{name} \"$@\"", name = self.p.meta.bin_name.as_ref().unwrap(), initial_args = get_args_of(self.p), subcommands = get_subcommands_of(self.p), subcommand_details = subcommand_details(self.p) ) .as_bytes() ); } } // Displays the commands of a subcommand // (( $+functions[_[bin_name_underscore]_commands] 
)) || // _[bin_name_underscore]_commands() { // local commands; commands=( // '[arg_name]:[arg_help]' // ) // _describe -t commands '[bin_name] commands' commands "$@" // // Where the following variables are present: // [bin_name_underscore]: The full space delineated bin_name, where spaces have been replaced by // underscore characters // [arg_name]: The name of the subcommand // [arg_help]: The help message of the subcommand // [bin_name]: The full space delineated bin_name // // Here's a snippet from rustup: // // (( $+functions[_rustup_commands] )) || // _rustup_commands() { // local commands; commands=( // 'show:Show the active and installed toolchains' // 'update:Update Rust toolchains' // # ... snip for brevity // 'help:Prints this message or the help of the given subcommand(s)' // ) // _describe -t commands 'rustup commands' commands "$@" // fn subcommand_details(p: &Parser) -> String { debugln!("ZshGen::subcommand_details;"); // First we do ourself let mut ret = vec![format!( "\ (( $+functions[_{bin_name_underscore}_commands] )) || _{bin_name_underscore}_commands() {{ local commands; commands=( {subcommands_and_args} ) _describe -t commands '{bin_name} commands' commands \"$@\" }}", bin_name_underscore = p.meta.bin_name.as_ref().unwrap().replace(" ", "__"), bin_name = p.meta.bin_name.as_ref().unwrap(), subcommands_and_args = subcommands_of(p) )]; // Next we start looping through all the children, grandchildren, etc. let mut all_subcommands = completions::all_subcommands(p); all_subcommands.sort(); all_subcommands.dedup(); for &(_, ref bin_name) in &all_subcommands { debugln!("ZshGen::subcommand_details:iter: bin_name={}", bin_name); ret.push(format!( "\ (( $+functions[_{bin_name_underscore}_commands] )) || _{bin_name_underscore}_commands() {{ local commands; commands=( {subcommands_and_args} ) _describe -t commands '{bin_name} commands' commands \"$@\" }}", bin_name_underscore = bin_name.replace(" ", "__"), bin_name = bin_name, subcommands_and_args = subcommands_of(parser_of(p, bin_name)) )); } ret.join("\n") } // Generates subcommand completions in form of // // '[arg_name]:[arg_help]' // // Where: // [arg_name]: the subcommand's name // [arg_help]: the help message of the subcommand // // A snippet from rustup: // 'show:Show the active and installed toolchains' // 'update:Update Rust toolchains' fn subcommands_of(p: &Parser) -> String { debugln!("ZshGen::subcommands_of;"); let mut ret = vec![]; fn add_sc(sc: &App, n: &str, ret: &mut Vec) { debugln!("ZshGen::add_sc;"); let s = format!( "\"{name}:{help}\" \\", name = n, help = sc.p.meta .about .unwrap_or("") .replace("[", "\\[") .replace("]", "\\]") ); if !s.is_empty() { ret.push(s); } } // The subcommands for sc in p.subcommands() { debugln!("ZshGen::subcommands_of:iter: subcommand={}", sc.p.meta.name); add_sc(sc, &sc.p.meta.name, &mut ret); if let Some(ref v) = sc.p.meta.aliases { for alias in v.iter().filter(|&&(_, vis)| vis).map(|&(n, _)| n) { add_sc(sc, alias, &mut ret); } } } ret.join("\n") } // Get's the subcommand section of a completion file // This looks roughly like: // // case $state in // ([bin_name]_args) // curcontext=\"${curcontext%:*:*}:[name_hyphen]-command-$words[1]:\" // case $line[1] in // // ([name]) // _arguments -C -s -S \ // [subcommand_args] // && ret=0 // // [RECURSIVE_CALLS] // // ;;", // // [repeat] // // esac // ;; // esac", // // Where the following variables are present: // [name] = The subcommand name in the form of "install" for "rustup toolchain install" // [bin_name] = The full space delineated 
bin_name such as "rustup toolchain install" // [name_hyphen] = The full space delineated bin_name, but replace spaces with hyphens // [repeat] = From the same recursive calls, but for all subcommands // [subcommand_args] = The same as zsh::get_args_of fn get_subcommands_of(p: &Parser) -> String { debugln!("get_subcommands_of;"); debugln!( "get_subcommands_of: Has subcommands...{:?}", p.has_subcommands() ); if !p.has_subcommands() { return String::new(); } let sc_names = completions::subcommands_of(p); let mut subcmds = vec![]; for &(ref name, ref bin_name) in &sc_names { let mut v = vec![format!("({})", name)]; let subcommand_args = get_args_of(parser_of(p, &*bin_name)); if !subcommand_args.is_empty() { v.push(subcommand_args); } let subcommands = get_subcommands_of(parser_of(p, &*bin_name)); if !subcommands.is_empty() { v.push(subcommands); } v.push(String::from(";;")); subcmds.push(v.join("\n")); } format!( "case $state in ({name}) words=($line[{pos}] \"${{words[@]}}\") (( CURRENT += 1 )) curcontext=\"${{curcontext%:*:*}}:{name_hyphen}-command-$line[{pos}]:\" case $line[{pos}] in {subcommands} esac ;; esac", name = p.meta.name, name_hyphen = p.meta.bin_name.as_ref().unwrap().replace(" ", "-"), subcommands = subcmds.join("\n"), pos = p.positionals().len() + 1 ) } fn parser_of<'a, 'b>(p: &'b Parser<'a, 'b>, sc: &str) -> &'b Parser<'a, 'b> { debugln!("parser_of: sc={}", sc); if sc == p.meta.bin_name.as_ref().unwrap_or(&String::new()) { return p; } &p.find_subcommand(sc).expect(INTERNAL_ERROR_MSG).p } // Writes out the args section, which ends up being the flags, opts and postionals, and a jump to // another ZSH function if there are subcommands. // The structer works like this: // ([conflicting_args]) [multiple] arg [takes_value] [[help]] [: :(possible_values)] // ^-- list '-v -h' ^--'*' ^--'+' ^-- list 'one two three' // // An example from the rustup command: // // _arguments -C -s -S \ // '(-h --help --verbose)-v[Enable verbose output]' \ // '(-V -v --version --verbose --help)-h[Prints help information]' \ // # ... snip for brevity // ':: :_rustup_commands' \ # <-- displays subcommands // '*::: :->rustup' \ # <-- displays subcommand args and child subcommands // && ret=0 // // The args used for _arguments are as follows: // -C: modify the $context internal variable // -s: Allow stacking of short args (i.e. 
-a -b -c => -abc) // -S: Do not complete anything after '--' and treat those as argument values fn get_args_of(p: &Parser) -> String { debugln!("get_args_of;"); let mut ret = vec![String::from("_arguments \"${_arguments_options[@]}\" \\")]; let opts = write_opts_of(p); let flags = write_flags_of(p); let positionals = write_positionals_of(p); let sc_or_a = if p.has_subcommands() { format!( "\":: :_{name}_commands\" \\", name = p.meta.bin_name.as_ref().unwrap().replace(" ", "__") ) } else { String::new() }; let sc = if p.has_subcommands() { format!("\"*::: :->{name}\" \\", name = p.meta.name) } else { String::new() }; if !opts.is_empty() { ret.push(opts); } if !flags.is_empty() { ret.push(flags); } if !positionals.is_empty() { ret.push(positionals); } if !sc_or_a.is_empty() { ret.push(sc_or_a); } if !sc.is_empty() { ret.push(sc); } ret.push(String::from("&& ret=0")); ret.join("\n") } // Escape help string inside single quotes and brackets fn escape_help(string: &str) -> String { string .replace("\\", "\\\\") .replace("'", "'\\''") .replace("[", "\\[") .replace("]", "\\]") } // Escape value string inside single quotes and parentheses fn escape_value(string: &str) -> String { string .replace("\\", "\\\\") .replace("'", "'\\''") .replace("(", "\\(") .replace(")", "\\)") .replace(" ", "\\ ") } fn write_opts_of(p: &Parser) -> String { debugln!("write_opts_of;"); let mut ret = vec![]; for o in p.opts() { debugln!("write_opts_of:iter: o={}", o.name()); let help = o.help().map_or(String::new(), escape_help); let mut conflicts = get_zsh_arg_conflicts!(p, o, INTERNAL_ERROR_MSG); conflicts = if conflicts.is_empty() { String::new() } else { format!("({})", conflicts) }; let multiple = if o.is_set(ArgSettings::Multiple) { "*" } else { "" }; let pv = if let Some(pv_vec) = o.possible_vals() { format!( ": :({})", pv_vec .iter() .map(|v| escape_value(*v)) .collect::>() .join(" ") ) } else { String::new() }; if let Some(short) = o.short() { let s = format!( "'{conflicts}{multiple}-{arg}+[{help}]{possible_values}' \\", conflicts = conflicts, multiple = multiple, arg = short, possible_values = pv, help = help ); debugln!("write_opts_of:iter: Wrote...{}", &*s); ret.push(s); } if let Some(long) = o.long() { let l = format!( "'{conflicts}{multiple}--{arg}=[{help}]{possible_values}' \\", conflicts = conflicts, multiple = multiple, arg = long, possible_values = pv, help = help ); debugln!("write_opts_of:iter: Wrote...{}", &*l); ret.push(l); } } ret.join("\n") } fn write_flags_of(p: &Parser) -> String { debugln!("write_flags_of;"); let mut ret = vec![]; for f in p.flags() { debugln!("write_flags_of:iter: f={}", f.name()); let help = f.help().map_or(String::new(), escape_help); let mut conflicts = get_zsh_arg_conflicts!(p, f, INTERNAL_ERROR_MSG); conflicts = if conflicts.is_empty() { String::new() } else { format!("({})", conflicts) }; let multiple = if f.is_set(ArgSettings::Multiple) { "*" } else { "" }; if let Some(short) = f.short() { let s = format!( "'{conflicts}{multiple}-{arg}[{help}]' \\", multiple = multiple, conflicts = conflicts, arg = short, help = help ); debugln!("write_flags_of:iter: Wrote...{}", &*s); ret.push(s); } if let Some(long) = f.long() { let l = format!( "'{conflicts}{multiple}--{arg}[{help}]' \\", conflicts = conflicts, multiple = multiple, arg = long, help = help ); debugln!("write_flags_of:iter: Wrote...{}", &*l); ret.push(l); } } ret.join("\n") } fn write_positionals_of(p: &Parser) -> String { debugln!("write_positionals_of;"); let mut ret = vec![]; for arg in p.positionals() { 
debugln!("write_positionals_of:iter: arg={}", arg.b.name); let a = format!( "'{optional}:{name}{help}:{action}' \\", optional = if !arg.b.is_set(ArgSettings::Required) { ":" } else { "" }, name = arg.b.name, help = arg .b .help .map_or("".to_owned(), |v| " -- ".to_owned() + v) .replace("[", "\\[") .replace("]", "\\]"), action = arg.possible_vals().map_or("_files".to_owned(), |values| { format!( "({})", values .iter() .map(|v| escape_value(*v)) .collect::>() .join(" ") ) }) ); debugln!("write_positionals_of:iter: Wrote...{}", a); ret.push(a); } ret.join("\n") } vendor/clap/src/completions/macros.rs0000664000175000017500000000143514160055207020546 0ustar mwhudsonmwhudsonmacro_rules! w { ($buf:expr, $to_w:expr) => { match $buf.write_all($to_w) { Ok(..) => (), Err(..) => panic!("Failed to write to completions file"), } }; } macro_rules! get_zsh_arg_conflicts { ($p:ident, $arg:ident, $msg:ident) => { if let Some(conf_vec) = $arg.blacklist() { let mut v = vec![]; for arg_name in conf_vec { let arg = $p.find_any_arg(arg_name).expect($msg); if let Some(s) = arg.short() { v.push(format!("-{}", s)); } if let Some(l) = arg.long() { v.push(format!("--{}", l)); } } v.join(" ") } else { String::new() } }; } vendor/clap/src/completions/shell.rs0000664000175000017500000000354114172417313020374 0ustar mwhudsonmwhudson#[allow(deprecated, unused_imports)] use std::ascii::AsciiExt; use std::fmt; use std::str::FromStr; /// Describes which shell to produce a completions file for #[derive(Debug, Copy, Clone)] pub enum Shell { /// Generates a .bash completion file for the Bourne Again SHell (BASH) Bash, /// Generates a .fish completion file for the Friendly Interactive SHell (fish) Fish, /// Generates a completion file for the Z SHell (ZSH) Zsh, /// Generates a completion file for PowerShell PowerShell, /// Generates a completion file for Elvish Elvish, } impl Shell { /// A list of possible variants in `&'static str` form pub fn variants() -> [&'static str; 5] { ["zsh", "bash", "fish", "powershell", "elvish"] } } impl FromStr for Shell { type Err = String; #[cfg_attr(feature = "cargo-clippy", allow(clippy::wildcard_in_or_patterns))] fn from_str(s: &str) -> Result { match s { "ZSH" | _ if s.eq_ignore_ascii_case("zsh") => Ok(Shell::Zsh), "FISH" | _ if s.eq_ignore_ascii_case("fish") => Ok(Shell::Fish), "BASH" | _ if s.eq_ignore_ascii_case("bash") => Ok(Shell::Bash), "POWERSHELL" | _ if s.eq_ignore_ascii_case("powershell") => Ok(Shell::PowerShell), "ELVISH" | _ if s.eq_ignore_ascii_case("elvish") => Ok(Shell::Elvish), _ => Err(String::from( "[valid values: bash, fish, zsh, powershell, elvish]", )), } } } impl fmt::Display for Shell { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { Shell::Bash => write!(f, "BASH"), Shell::Fish => write!(f, "FISH"), Shell::Zsh => write!(f, "ZSH"), Shell::PowerShell => write!(f, "POWERSHELL"), Shell::Elvish => write!(f, "ELVISH"), } } } vendor/clap/justfile0000664000175000017500000000220114160055207015331 0ustar mwhudsonmwhudson@update-contributors: echo 'Removing old CONTRIBUTORS.md' mv CONTRIBUTORS.md CONTRIBUTORS.md.bak echo 'Downloading a list of new contributors' echo "the following is a list of contributors:" > CONTRIBUTORS.md echo "" >> CONTRIBUTORS.md echo "" >> CONTRIBUTORS.md githubcontrib --owner clap-rs --repo clap --sha master --cols 6 --format md --showlogin true --sortBy contributions --sortOrder desc >> CONTRIBUTORS.md echo "" >> CONTRIBUTORS.md echo "" >> CONTRIBUTORS.md echo "This list was generated by 
[mgechev/github-contributors-list](https://github.com/mgechev/github-contributors-list)" >> CONTRIBUTORS.md rm CONTRIBUTORS.md.bak run-test TEST: cargo test --test {{TEST}} debug TEST: cargo test --test {{TEST}} --features debug run-tests: cargo test --features "yaml unstable" @bench: nightly cargo bench && just remove-nightly nightly: rustup override add nightly remove-nightly: rustup override remove @lint: nightly cargo build --features lints && just remove-nightly clean: cargo clean find . -type f -name "*.orig" -exec rm {} \; find . -type f -name "*.bk" -exec rm {} \; find . -type f -name ".*~" -exec rm {} \; vendor/clap/LICENSE-MIT0000664000175000017500000000207614160055207015227 0ustar mwhudsonmwhudsonThe MIT License (MIT) Copyright (c) 2015-2016 Kevin B. Knapp Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
vendor/clap/CONTRIBUTORS.md0000664000175000017500000005364614160055207016063 0ustar mwhudsonmwhudsonthe following is a list of contributors: [kbknapp](https://github.com/kbknapp) |[homu](https://github.com/homu) |[Vinatorul](https://github.com/Vinatorul) |[tormol](https://github.com/tormol) |[willmurphyscode](https://github.com/willmurphyscode) |[little-dude](https://github.com/little-dude) | :---: |:---: |:---: |:---: |:---: |:---: | [kbknapp](https://github.com/kbknapp) |[homu](https://github.com/homu) |[Vinatorul](https://github.com/Vinatorul) |[tormol](https://github.com/tormol) |[willmurphyscode](https://github.com/willmurphyscode) |[little-dude](https://github.com/little-dude) | [sru](https://github.com/sru) |[mgeisler](https://github.com/mgeisler) |[nabijaczleweli](https://github.com/nabijaczleweli) |[Byron](https://github.com/Byron) |[hgrecco](https://github.com/hgrecco) |[bluejekyll](https://github.com/bluejekyll) | :---: |:---: |:---: |:---: |:---: |:---: | [sru](https://github.com/sru) |[mgeisler](https://github.com/mgeisler) |[nabijaczleweli](https://github.com/nabijaczleweli) |[Byron](https://github.com/Byron) |[hgrecco](https://github.com/hgrecco) |[bluejekyll](https://github.com/bluejekyll) | [segevfiner](https://github.com/segevfiner) |[ignatenkobrain](https://github.com/ignatenkobrain) |[james-darkfox](https://github.com/james-darkfox) |[H2CO3](https://github.com/H2CO3) |[nateozem](https://github.com/nateozem) |[glowing-chemist](https://github.com/glowing-chemist) | :---: |:---: |:---: |:---: |:---: |:---: | [segevfiner](https://github.com/segevfiner) |[ignatenkobrain](https://github.com/ignatenkobrain) |[james-darkfox](https://github.com/james-darkfox) |[H2CO3](https://github.com/H2CO3) |[nateozem](https://github.com/nateozem) |[glowing-chemist](https://github.com/glowing-chemist) | [discosultan](https://github.com/discosultan) |[rtaycher](https://github.com/rtaycher) |[Arnavion](https://github.com/Arnavion) |[japaric](https://github.com/japaric) |[untitaker](https://github.com/untitaker) |[afiune](https://github.com/afiune) | :---: |:---: |:---: |:---: |:---: |:---: | [discosultan](https://github.com/discosultan) |[rtaycher](https://github.com/rtaycher) |[Arnavion](https://github.com/Arnavion) |[japaric](https://github.com/japaric) |[untitaker](https://github.com/untitaker) |[afiune](https://github.com/afiune) | [crazymerlyn](https://github.com/crazymerlyn) |[SuperFluffy](https://github.com/SuperFluffy) |[matthiasbeyer](https://github.com/matthiasbeyer) |[malbarbo](https://github.com/malbarbo) |[tshepang](https://github.com/tshepang) |[golem131](https://github.com/golem131) | :---: |:---: |:---: |:---: |:---: |:---: | [crazymerlyn](https://github.com/crazymerlyn) |[SuperFluffy](https://github.com/SuperFluffy) |[matthiasbeyer](https://github.com/matthiasbeyer) |[malbarbo](https://github.com/malbarbo) |[tshepang](https://github.com/tshepang) |[golem131](https://github.com/golem131) | [jimmycuadra](https://github.com/jimmycuadra) |[Nemo157](https://github.com/Nemo157) |[severen](https://github.com/severen) |[Eijebong](https://github.com/Eijebong) |[cstorey](https://github.com/cstorey) |[wdv4758h](https://github.com/wdv4758h) | :---: |:---: |:---: |:---: |:---: |:---: | [jimmycuadra](https://github.com/jimmycuadra) |[Nemo157](https://github.com/Nemo157) |[severen](https://github.com/severen) |[Eijebong](https://github.com/Eijebong) |[cstorey](https://github.com/cstorey) |[wdv4758h](https://github.com/wdv4758h) | [frewsxcv](https://github.com/frewsxcv) 
|[hoodie](https://github.com/hoodie) |[huonw](https://github.com/huonw) |[GrappigPanda](https://github.com/GrappigPanda) |[shepmaster](https://github.com/shepmaster) |[starkat99](https://github.com/starkat99) | :---: |:---: |:---: |:---: |:---: |:---: | [frewsxcv](https://github.com/frewsxcv) |[hoodie](https://github.com/hoodie) |[huonw](https://github.com/huonw) |[GrappigPanda](https://github.com/GrappigPanda) |[shepmaster](https://github.com/shepmaster) |[starkat99](https://github.com/starkat99) | [porglezomp](https://github.com/porglezomp) |[kraai](https://github.com/kraai) |[musoke](https://github.com/musoke) |[nelsonjchen](https://github.com/nelsonjchen) |[pkgw](https://github.com/pkgw) |[Deedasmi](https://github.com/Deedasmi) | :---: |:---: |:---: |:---: |:---: |:---: | [porglezomp](https://github.com/porglezomp) |[kraai](https://github.com/kraai) |[musoke](https://github.com/musoke) |[nelsonjchen](https://github.com/nelsonjchen) |[pkgw](https://github.com/pkgw) |[Deedasmi](https://github.com/Deedasmi) | [vmchale](https://github.com/vmchale) |[etopiei](https://github.com/etopiei) |[messense](https://github.com/messense) |[Keats](https://github.com/Keats) |[kieraneglin](https://github.com/kieraneglin) |[durka](https://github.com/durka) | :---: |:---: |:---: |:---: |:---: |:---: | [vmchale](https://github.com/vmchale) |[etopiei](https://github.com/etopiei) |[messense](https://github.com/messense) |[Keats](https://github.com/Keats) |[kieraneglin](https://github.com/kieraneglin) |[durka](https://github.com/durka) | [alex-gulyas](https://github.com/alex-gulyas) |[cite-reader](https://github.com/cite-reader) |[alexbool](https://github.com/alexbool) |[AluisioASG](https://github.com/AluisioASG) |[BurntSushi](https://github.com/BurntSushi) |[AndrewGaspar](https://github.com/AndrewGaspar) | :---: |:---: |:---: |:---: |:---: |:---: | [alex-gulyas](https://github.com/alex-gulyas) |[cite-reader](https://github.com/cite-reader) |[alexbool](https://github.com/alexbool) |[AluisioASG](https://github.com/AluisioASG) |[BurntSushi](https://github.com/BurntSushi) |[AndrewGaspar](https://github.com/AndrewGaspar) | [nox](https://github.com/nox) |[mitsuhiko](https://github.com/mitsuhiko) |[pixelistik](https://github.com/pixelistik) |[ogham](https://github.com/ogham) |[Bilalh](https://github.com/Bilalh) |[dotdash](https://github.com/dotdash) | :---: |:---: |:---: |:---: |:---: |:---: | [nox](https://github.com/nox) |[mitsuhiko](https://github.com/mitsuhiko) |[pixelistik](https://github.com/pixelistik) |[ogham](https://github.com/ogham) |[Bilalh](https://github.com/Bilalh) |[dotdash](https://github.com/dotdash) | [bradurani](https://github.com/bradurani) |[Seeker14491](https://github.com/Seeker14491) |[brianp](https://github.com/brianp) |[cldershem](https://github.com/cldershem) |[casey](https://github.com/casey) |[volks73](https://github.com/volks73) | :---: |:---: |:---: |:---: |:---: |:---: | [bradurani](https://github.com/bradurani) |[Seeker14491](https://github.com/Seeker14491) |[brianp](https://github.com/brianp) |[cldershem](https://github.com/cldershem) |[casey](https://github.com/casey) |[volks73](https://github.com/volks73) | [daboross](https://github.com/daboross) |[da-x](https://github.com/da-x) |[mernen](https://github.com/mernen) |[dguo](https://github.com/dguo) |[davidszotten](https://github.com/davidszotten) |[drusellers](https://github.com/drusellers) | :---: |:---: |:---: |:---: |:---: |:---: | [daboross](https://github.com/daboross) |[da-x](https://github.com/da-x) 
|[mernen](https://github.com/mernen) |[dguo](https://github.com/dguo) |[davidszotten](https://github.com/davidszotten) |[drusellers](https://github.com/drusellers) | [eddyb](https://github.com/eddyb) |[Enet4](https://github.com/Enet4) |[Fraser999](https://github.com/Fraser999) |[birkenfeld](https://github.com/birkenfeld) |[guanqun](https://github.com/guanqun) |[tanakh](https://github.com/tanakh) | :---: |:---: |:---: |:---: |:---: |:---: | [eddyb](https://github.com/eddyb) |[Enet4](https://github.com/Enet4) |[Fraser999](https://github.com/Fraser999) |[birkenfeld](https://github.com/birkenfeld) |[guanqun](https://github.com/guanqun) |[tanakh](https://github.com/tanakh) | [SirVer](https://github.com/SirVer) |[idmit](https://github.com/idmit) |[archer884](https://github.com/archer884) |[jacobmischka](https://github.com/jacobmischka) |[jespino](https://github.com/jespino) |[jfrankenau](https://github.com/jfrankenau) | :---: |:---: |:---: |:---: |:---: |:---: | [SirVer](https://github.com/SirVer) |[idmit](https://github.com/idmit) |[archer884](https://github.com/archer884) |[jacobmischka](https://github.com/jacobmischka) |[jespino](https://github.com/jespino) |[jfrankenau](https://github.com/jfrankenau) | [jtdowney](https://github.com/jtdowney) |[andete](https://github.com/andete) |[joshtriplett](https://github.com/joshtriplett) |[Kalwyn](https://github.com/Kalwyn) |[manuel-rhdt](https://github.com/manuel-rhdt) |[Marwes](https://github.com/Marwes) | :---: |:---: |:---: |:---: |:---: |:---: | [jtdowney](https://github.com/jtdowney) |[andete](https://github.com/andete) |[joshtriplett](https://github.com/joshtriplett) |[Kalwyn](https://github.com/Kalwyn) |[manuel-rhdt](https://github.com/manuel-rhdt) |[Marwes](https://github.com/Marwes) | [mdaffin](https://github.com/mdaffin) |[iliekturtles](https://github.com/iliekturtles) |[nicompte](https://github.com/nicompte) |[NickeZ](https://github.com/NickeZ) |[nvzqz](https://github.com/nvzqz) |[nuew](https://github.com/nuew) | :---: |:---: |:---: |:---: |:---: |:---: | [mdaffin](https://github.com/mdaffin) |[iliekturtles](https://github.com/iliekturtles) |[nicompte](https://github.com/nicompte) |[NickeZ](https://github.com/NickeZ) |[nvzqz](https://github.com/nvzqz) |[nuew](https://github.com/nuew) | [Geogi](https://github.com/Geogi) |[focusaurus](https://github.com/focusaurus) |[flying-sheep](https://github.com/flying-sheep) |[Phlosioneer](https://github.com/Phlosioneer) |[peppsac](https://github.com/peppsac) |[golddranks](https://github.com/golddranks) | :---: |:---: |:---: |:---: |:---: |:---: | [Geogi](https://github.com/Geogi) |[focusaurus](https://github.com/focusaurus) |[flying-sheep](https://github.com/flying-sheep) |[Phlosioneer](https://github.com/Phlosioneer) |[peppsac](https://github.com/peppsac) |[golddranks](https://github.com/golddranks) | [hexjelly](https://github.com/hexjelly) |[rom1v](https://github.com/rom1v) |[rnelson](https://github.com/rnelson) |[swatteau](https://github.com/swatteau) |[tchajed](https://github.com/tchajed) |[tspiteri](https://github.com/tspiteri) | :---: |:---: |:---: |:---: |:---: |:---: | [hexjelly](https://github.com/hexjelly) |[rom1v](https://github.com/rom1v) |[rnelson](https://github.com/rnelson) |[swatteau](https://github.com/swatteau) |[tchajed](https://github.com/tchajed) |[tspiteri](https://github.com/tspiteri) | [siiptuo](https://github.com/siiptuo) |[vks](https://github.com/vks) |[vsupalov](https://github.com/vsupalov) |[mineo](https://github.com/mineo) |[wabain](https://github.com/wabain) 
|[grossws](https://github.com/grossws) | :---: |:---: |:---: |:---: |:---: |:---: | [siiptuo](https://github.com/siiptuo) |[vks](https://github.com/vks) |[vsupalov](https://github.com/vsupalov) |[mineo](https://github.com/mineo) |[wabain](https://github.com/wabain) |[grossws](https://github.com/grossws) | [kennytm](https://github.com/kennytm) |[king6cong](https://github.com/king6cong) |[mvaude](https://github.com/mvaude) |[panicbit](https://github.com/panicbit) |[brennie](https://github.com/brennie) | :---: |:---: |:---: |:---: |:---: | [kennytm](https://github.com/kennytm) |[king6cong](https://github.com/king6cong) |[mvaude](https://github.com/mvaude) |[panicbit](https://github.com/panicbit) |[brennie](https://github.com/brennie) | This list was generated by [mgechev/github-contributors-list](https://github.com/mgechev/github-contributors-list) vendor/clap/README.md0000664000175000017500000007613014172417313015057 0ustar mwhudsonmwhudsonclap ==== [![Crates.io](https://img.shields.io/crates/v/clap.svg)](https://crates.io/crates/clap) [![Crates.io](https://img.shields.io/crates/d/clap.svg)](https://crates.io/crates/clap) [![license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/clap-rs/clap/blob/master/LICENSE-MIT) [![Coverage Status](https://coveralls.io/repos/kbknapp/clap-rs/badge.svg?branch=master&service=github)](https://coveralls.io/github/kbknapp/clap-rs?branch=master) [![Join the chat at https://gitter.im/kbknapp/clap-rs](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/kbknapp/clap-rs?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) Linux: [![Build Status](https://travis-ci.org/clap-rs/clap.svg?branch=master)](https://travis-ci.org/clap-rs/clap) Windows: [![Build status](https://ci.appveyor.com/api/projects/status/ejg8c33dn31nhv36/branch/master?svg=true)](https://ci.appveyor.com/project/kbknapp/clap-rs/branch/master) Command Line Argument Parser for Rust It is a simple-to-use, efficient, and full-featured library for parsing command line arguments and subcommands when writing console/terminal applications. * [documentation](https://docs.rs/clap/) * [website](https://clap.rs/) * [video tutorials](https://www.youtube.com/playlist?list=PLza5oFLQGTl2Z5T8g1pRkIynR3E0_pc7U) Table of Contents ================= * [About](#about) * [FAQ](#faq) * [Features](#features) * [Quick Example](#quick-example) * [Try it!](#try-it) * [Pre-Built Test](#pre-built-test) * [BYOB (Build Your Own Binary)](#byob-build-your-own-binary) * [Usage](#usage) * [Optional Dependencies / Features](#optional-dependencies--features) * [Dependencies Tree](#dependencies-tree) * [More Information](#more-information) * [Video Tutorials](#video-tutorials) * [How to Contribute](#how-to-contribute) * [Compatibility Policy](#compatibility-policy) * [Minimum Version of Rust](#minimum-version-of-rust) * [Related Crates](#related-crates) * [License](#license) * [Recent Breaking Changes](#recent-breaking-changes) * [Deprecations](#deprecations) Created by [gh-md-toc](https://github.com/ekalinin/github-markdown-toc) ## About `clap` is used to parse *and validate* the string of command line arguments provided by a user at runtime. You provide the list of valid possibilities, and `clap` handles the rest. This means you focus on your *applications* functionality, and less on the parsing and validating of arguments. `clap` provides many things 'for free' (with no configuration) including the traditional version and help switches (or flags) along with associated messages. 
If you are using subcommands, `clap` will also auto-generate a `help` subcommand and separate associated help messages. Once `clap` parses the user-provided string of arguments, it returns the matches along with any applicable values. If the user made an error or typo, `clap` informs them with a friendly message and exits gracefully (or returns a `Result` type and allows you to perform any cleanup prior to exit). Because of this, you can make reasonable assumptions in your code about the validity of the arguments prior to your application's main execution. ## FAQ For a full FAQ and more in-depth details, see [the wiki page](https://github.com/clap-rs/clap/wiki/FAQ) ### Comparisons First, let me say that these comparisons are highly subjective, and not meant in a critical or harsh manner. All the argument parsing libraries out there (including `clap`) have their own strengths and weaknesses. Sometimes it just comes down to personal taste when all other factors are equal. When in doubt, try them all and pick one that you enjoy :) There's plenty of room in the Rust community for multiple implementations! #### How does `clap` compare to [getopts](https://github.com/rust-lang-nursery/getopts)? `getopts` is a very basic, fairly minimalist argument parsing library. This isn't a bad thing; sometimes you don't need tons of features, you just want to parse some simple arguments, and have some help text generated for you based on valid arguments you specify. The downside to this approach is that you must manually implement most of the common features (such as checking to display help messages, usage strings, etc.). If you want a highly custom argument parser, and don't mind writing the majority of the functionality yourself, `getopts` is an excellent base. `getopts` also doesn't allocate much, or at all. This gives it a very small performance boost. However, as you start implementing additional features, that boost quickly disappears. Personally, I find many, many uses of `getopts` are manually implementing features that `clap` provides by default. Using `clap` simplifies your codebase, allowing you to focus on your application, and not argument parsing. #### How does `clap` compare to [docopt.rs](https://github.com/docopt/docopt.rs)? I first want to say I'm a big fan of the work of BurntSushi, the creator of `Docopt.rs`. I aspire to produce the quality of libraries that this man does! When it comes to comparing these two libraries, they are very different. `docopt` tasks you with writing a help message, and then it parses that message for you to determine all valid arguments and their use. Some people LOVE this approach, others do not. If you're willing to write a detailed help message, it's nice that you can stick that in your program and have `docopt` do the rest. On the downside, it's far less flexible. `docopt` is also excellent at translating arguments into Rust types automatically. There is even a syntax extension which will do all this for you, if you're willing to use a nightly compiler (use of a stable compiler requires you to somewhat manually translate from arguments to Rust types). To use BurntSushi's words, `docopt` is also a sort of black box. You get what you get, and it's hard to tweak the implementation or customize the experience for your use case. Because `docopt` is doing a ton of work to parse your help messages and determine what you were trying to communicate as valid arguments, it's also one of the more heavyweight parsers performance-wise. 
For most applications this isn't a concern, and this isn't to say `docopt` is slow; in fact, far from it. This is just something to keep in mind while comparing. #### All else being equal, what are some reasons to use `clap`? (The Pitch) `clap` is as fast and as lightweight as possible while still giving all the features you'd expect from a modern argument parser. In fact, for the amount and type of features `clap` offers, it remains about as fast as `getopts`. If you use `clap` when you just need some simple arguments parsed, you'll find it's a walk in the park. `clap` also makes it possible to represent extremely complex and advanced requirements without too much thought. `clap` aims to be intuitive, easy to use, and fully capable for a wide variety of use cases and needs. #### All else being equal, what are some reasons *not* to use `clap`? (The Anti Pitch) Depending on the style in which you choose to define the valid arguments, `clap` can be very verbose. `clap` also offers so many fine-tuning knobs and dials that learning everything can seem overwhelming. I strive to keep the simple cases simple, but when turning all those custom dials it can get complex. `clap` is also opinionated about parsing. Even though so much can be tweaked and tuned with `clap` (and I'm adding more all the time), there are still certain features which `clap` implements in specific ways which may be contrary to some users' use cases. Finally, `clap` is "stringly typed" when referring to arguments, which can cause typos in code. This particular paper-cut is being actively worked on, and should be gone in v3.x. ## Features Below are a few of the features which `clap` supports; full descriptions and usage can be found in the [documentation](https://docs.rs/clap/) and [examples/](examples) directory * **Auto-generated Help, Version, and Usage information** - Can optionally be fully, or partially overridden if you want custom help, version, or usage statements * **Auto-generated completion scripts at compile time (Bash, Zsh, Fish, and PowerShell)** - Even works through multiple levels of subcommands - Works with options which only accept certain values - Works with subcommand aliases * **Flags / Switches** (i.e. bool fields) - Both short and long versions supported (i.e. `-f` and `--flag` respectively) - Supports combining short versions (i.e. `-fBgoZ` is the same as `-f -B -g -o -Z`) - Supports multiple occurrences (i.e. `-vvv` or `-v -v -v`) * **Positional Arguments** (i.e. those which are based off an index from the program name) - Supports multiple values (i.e. `myprog <file>...` such as `myprog file1.txt file2.txt` being two values for the same "file" argument) - Supports Specific Value Sets (See below) - Can set value parameters (such as the minimum number of values, the maximum number of values, or the exact number of values) - Can set custom validations on values to extend the argument parsing capability to truly custom domains * **Option Arguments** (i.e. those that take values) - Both short and long versions supported (i.e. `-o value`, `-ovalue`, `-o=value` and `--option value` or `--option=value` respectively) - Supports multiple values (i.e. `-o <value> -o <other value>` or `-o <value> <other value>`) - Supports delimited values (i.e. `-o=val1,val2,val3`, can also change the delimiter) - Supports Specific Value Sets (See below) - Supports named values so that the usage/help info appears as `-o <FILE1> <FILE2>` etc. 
for when you require specific multiple values - Can set value parameters (such as the minimum number of values, the maximum number of values, or the exact number of values) - Can set custom validations on values to extend the argument parsing capability to truly custom domains * **Sub-Commands** (i.e. `git add <file>` where `add` is a sub-command of `git`) - Support their own sub-arguments, and sub-sub-commands independent of the parent - Get their own auto-generated Help, Version, and Usage independent of parent * **Support for building CLIs from YAML** - This keeps your Rust source nice and tidy and makes supporting localized translation very simple! * **Requirement Rules**: Arguments can define the following types of requirement rules - Can be required by default - Can be required only if certain arguments are present - Can require other arguments to be present - Can be required only if certain values of other arguments are used * **Confliction Rules**: Arguments can optionally define the following types of exclusion rules - Can be disallowed when certain arguments are present - Can disallow use of other arguments when present * **Groups**: Arguments can be made part of a group - Fully compatible with other relational rules (requirements, conflicts, and overrides) which allows things like requiring the use of any arg in a group, or denying the use of an entire group conditionally * **Specific Value Sets**: Positional or Option Arguments can define a specific set of allowed values (i.e. imagine a `--mode` option which may *only* have one of two values `fast` or `slow` such as `--mode fast` or `--mode slow`) * **Default Values** - Also supports conditional default values (i.e. a default which only applies if specific arguments are used, or specific values of those arguments) * **Automatic Version from Cargo.toml**: `clap` is fully compatible with Rust's `env!()` macro for automatically setting the version of your application to the version in your Cargo.toml. See [09_auto_version example](examples/09_auto_version.rs) for how to do this (Thanks to [jhelwig](https://github.com/jhelwig) for pointing this out) * **Typed Values**: You can use several convenience macros provided by `clap` to get typed values (i.e. `i32`, `u8`, etc.) from positional or option arguments so long as the type you request implements `std::str::FromStr`. See the [12_typed_values example](examples/12_typed_values.rs). You can also use `clap`'s `arg_enum!` macro to create an enum with variants that automatically implement `std::str::FromStr`. See [13a_enum_values_automatic example](examples/13a_enum_values_automatic.rs) for details * **Suggestions**: Suggests corrections when the user enters a typo. For example, if you defined a `--myoption` argument, and the user mistakenly typed `--moyption` (notice `y` and `o` transposed), they would receive a `Did you mean '--myoption'?` error and exit gracefully. This also works for subcommands and flags. (Thanks to [Byron](https://github.com/Byron) for the implementation) (This feature can optionally be disabled, see 'Optional Dependencies / Features') * **Colorized Errors (Non-Windows OS only)**: Error messages are printed in colored text (this feature can optionally be disabled, see 'Optional Dependencies / Features'). * **Global Arguments**: Arguments can optionally be defined once, and be available to all child subcommands. Their values will also be propagated up/down throughout all subcommands. 
* **Custom Validations**: You can define a function to use as a validator of argument values. Imagine defining a function to validate IP addresses, or fail parsing upon error. This means your application logic can be solely focused on *using* values. * **POSIX Compatible Conflicts/Overrides** - In POSIX args can be conflicting, but not fail parsing because whichever arg comes *last* "wins" so to speak. This allows things such as aliases (i.e. `alias ls='ls -l'` but then using `ls -C` in your terminal which ends up passing `ls -l -C` as the final arguments. Since `-l` and `-C` aren't compatible, this effectively runs `ls -C` in `clap` if you choose...`clap` also supports hard conflicts that fail parsing). (Thanks to [Vinatorul](https://github.com/Vinatorul)!) * Supports the Unix `--` meaning, only positional arguments follow ## Quick Example The following examples show a quick example of some of the very basic functionality of `clap`. For more advanced usage, such as requirements, conflicts, groups, multiple values and occurrences see the [documentation](https://docs.rs/clap/), [examples/](examples) directory of this repository or the [video tutorials](https://www.youtube.com/playlist?list=PLza5oFLQGTl2Z5T8g1pRkIynR3E0_pc7U). **NOTE:** All of these examples are functionally the same, but show different styles in which to use `clap`. These different styles are purely a matter of personal preference. The first example shows a method using the 'Builder Pattern' which allows more advanced configuration options (not shown in this small example), or even dynamically generating arguments when desired. ```rust // (Full example with detailed comments in examples/01b_quick_example.rs) // // This example demonstrates clap's full 'builder pattern' style of creating arguments which is // more verbose, but allows easier editing, and at times more advanced options, or the possibility // to generate arguments dynamically. extern crate clap; use clap::{Arg, App, SubCommand}; fn main() { let matches = App::new("My Super Program") .version("1.0") .author("Kevin K. ") .about("Does awesome things") .arg(Arg::with_name("config") .short("c") .long("config") .value_name("FILE") .help("Sets a custom config file") .takes_value(true)) .arg(Arg::with_name("INPUT") .help("Sets the input file to use") .required(true) .index(1)) .arg(Arg::with_name("v") .short("v") .multiple(true) .help("Sets the level of verbosity")) .subcommand(SubCommand::with_name("test") .about("controls testing features") .version("1.3") .author("Someone E. ") .arg(Arg::with_name("debug") .short("d") .help("print debug information verbosely"))) .get_matches(); // Gets a value for config if supplied by user, or defaults to "default.conf" let config = matches.value_of("config").unwrap_or("default.conf"); println!("Value for config: {}", config); // Calling .unwrap() is safe here because "INPUT" is required (if "INPUT" wasn't // required we could have used an 'if let' to conditionally get the value) println!("Using input file: {}", matches.value_of("INPUT").unwrap()); // Vary the output based on how many times the user used the "verbose" flag // (i.e. 
'myprog -v -v -v' or 'myprog -vvv' vs 'myprog -v' match matches.occurrences_of("v") { 0 => println!("No verbose info"), 1 => println!("Some verbose info"), 2 => println!("Tons of verbose info"), 3 | _ => println!("Don't be crazy"), } // You can handle information about subcommands by requesting their matches by name // (as below), requesting just the name used, or both at the same time if let Some(matches) = matches.subcommand_matches("test") { if matches.is_present("debug") { println!("Printing debug info..."); } else { println!("Printing normally..."); } } // more program logic goes here... } ``` One could also optionally declare their CLI in YAML format and keep your Rust source tidy or support multiple localized translations by having different YAML files for each localization. First, create the `cli.yml` file to hold your CLI options, but it could be called anything we like: ```yaml name: myapp version: "1.0" author: Kevin K. about: Does awesome things args: - config: short: c long: config value_name: FILE help: Sets a custom config file takes_value: true - INPUT: help: Sets the input file to use required: true index: 1 - verbose: short: v multiple: true help: Sets the level of verbosity subcommands: - test: about: controls testing features version: "1.3" author: Someone E. args: - debug: short: d help: print debug information ``` Since this feature requires additional dependencies that not everyone may want, it is *not* compiled in by default and we need to enable a feature flag in Cargo.toml: Simply change your `clap = "2.34"` to `clap = {version = "2.34", features = ["yaml"]}`. Finally we create our `main.rs` file just like we would have with the previous two examples: ```rust // (Full example with detailed comments in examples/17_yaml.rs) // // This example demonstrates clap's building from YAML style of creating arguments which is far // more clean, but takes a very small performance hit compared to the other two methods. #[macro_use] extern crate clap; use clap::App; fn main() { // The YAML file is found relative to the current file, similar to how modules are found let yaml = load_yaml!("cli.yml"); let matches = App::from_yaml(yaml).get_matches(); // Same as previous examples... } ``` If you were to compile any of the above programs and run them with the flag `--help` or `-h` (or `help` subcommand, since we defined `test` as a subcommand) the following would be output ```sh $ myprog --help My Super Program 1.0 Kevin K. Does awesome things USAGE: MyApp [FLAGS] [OPTIONS] [SUBCOMMAND] FLAGS: -h, --help Prints help information -v Sets the level of verbosity -V, --version Prints version information OPTIONS: -c, --config Sets a custom config file ARGS: INPUT The input file to use SUBCOMMANDS: help Prints this message or the help of the given subcommand(s) test Controls testing features ``` **NOTE:** You could also run `myapp test --help` or `myapp help test` to see the help message for the `test` subcommand. There are also two other methods to create CLIs. Which style you choose is largely a matter of personal preference. The two other methods are: * Using [usage strings (examples/01a_quick_example.rs)](examples/01a_quick_example.rs) similar to (but not exact) docopt style usage statements. This is far less verbose than the above methods, but incurs a slight runtime penalty. * Using [a macro (examples/01c_quick_example.rs)](examples/01c_quick_example.rs) which is like a hybrid of the builder and usage string style. 
It's less verbose, but doesn't incur the runtime penalty of the usage string style. The downside is that it's harder to debug, and more opaque. Examples of each method can be found in the [examples/](examples) directory of this repository. ## Try it! ### Pre-Built Test To try out the pre-built examples, use the following steps: * Clone the repository `$ git clone https://github.com/clap-rs/clap && cd clap-rs/` * Compile the example `$ cargo build --example ` * Run the help info `$ ./target/debug/examples/ --help` * Play with the arguments! * You can also do a onetime run via `$ cargo run --example -- [args to example]` ### BYOB (Build Your Own Binary) To test out `clap`'s default auto-generated help/version follow these steps: * Create a new cargo project `$ cargo new fake --bin && cd fake` * Add `clap` to your `Cargo.toml` ```toml [dependencies] clap = "2" ``` * Add the following to your `src/main.rs` ```rust extern crate clap; use clap::App; fn main() { App::new("fake").version("v1.0-beta").get_matches(); } ``` * Build your program `$ cargo build --release` * Run with help or version `$ ./target/release/fake --help` or `$ ./target/release/fake --version` ## Usage For full usage, add `clap` as a dependency in your `Cargo.toml` () to use from crates.io: ```toml [dependencies] clap = "~2.34" ``` (**note**: If you are concerned with supporting a minimum version of Rust that is *older* than the current stable Rust minus 2 stable releases, it's recommended to use the `~major.minor.patch` style versions in your `Cargo.toml` which will only update the patch version automatically. For more information see the [Compatibility Policy](#compatibility-policy)) Then add `extern crate clap;` to your crate root. Define a list of valid arguments for your program (see the [documentation](https://docs.rs/clap/) or [examples/](examples) directory of this repo) Then run `cargo build` or `cargo update && cargo build` for your project. ### Optional Dependencies / Features #### Features enabled by default * **"suggestions"**: Turns on the `Did you mean '--myoption'?` feature for when users make typos. (builds dependency `strsim`) * **"color"**: Turns on colored error messages. This feature only works on non-Windows OSs. (builds dependency `ansi-term` only on non-Windows targets) * **"vec_map"**: Use [`VecMap`](https://crates.io/crates/vec_map) internally instead of a [`BTreeMap`](https://doc.rust-lang.org/stable/std/collections/struct.BTreeMap.html). This feature provides a _slight_ performance improvement. (builds dependency `vec_map`) To disable these, add this to your `Cargo.toml`: ```toml [dependencies.clap] version = "2.34" default-features = false ``` You can also selectively enable only the features you'd like to include, by adding: ```toml [dependencies.clap] version = "2.34" default-features = false # Cherry-pick the features you'd like to use features = [ "suggestions", "color" ] ``` #### Opt-in features * **"yaml"**: Enables building CLIs from YAML documents. (builds dependency `yaml-rust`) * **"unstable"**: Enables unstable `clap` features that may change from release to release * **"wrap_help"**: Turns on the help text wrapping feature, based on the terminal size. (builds dependency `term-size`) ### Dependencies Tree The following graphic depicts `clap`s dependency graph (generated using [cargo-graph](https://github.com/kbknapp/cargo-graph)). 
* **Dashed** Line: Optional dependency * **Red** Color: **NOT** included by default (must use cargo `features` to enable) * **Blue** Color: Dev dependency, only used while developing. ![clap dependencies](clap_dep_graph.png) ### More Information You can find complete documentation on [docs.rs](https://docs.rs/clap/) for this project. You can also find usage examples in the [examples/](examples) directory of this repo. #### Video Tutorials There's also the video tutorial series [Argument Parsing with Rust v2](https://www.youtube.com/playlist?list=PLza5oFLQGTl2Z5T8g1pRkIynR3E0_pc7U). These videos slowly trickle out as I finish them and are currently a work in progress. ## How to Contribute Details on how to contribute can be found in the [CONTRIBUTING.md](.github/CONTRIBUTING.md) file. ### Compatibility Policy Because `clap` takes SemVer and compatibility seriously, this is the official policy regarding breaking changes and minimum required versions of Rust. `clap` pins the minimum required version of Rust in its CI builds. Bumping the minimum version of Rust is considered a minor breaking change, meaning *at a minimum* the minor version of `clap` will be bumped. In order to keep from being surprised by breaking changes, it is **highly** recommended to use the `~major.minor.patch` style in your `Cargo.toml` only if you wish to target a version of Rust that is *older* than current stable minus two releases: ```toml [dependencies] clap = "~2.34" ``` This will cause *only* the patch version to be updated upon a `cargo update` call, and therefore cannot break due to new features, or bumped minimum versions of Rust. #### Warning about '~' Dependencies Using `~` can cause issues in certain circumstances. From @alexcrichton: Right now Cargo's version resolution is pretty naive; it's just a brute-force search of the solution space, returning the first resolvable graph. This also means that it currently won't terminate until it proves there is no possible resolvable graph. This leads to situations where workspaces with multiple binaries, for example, have two different dependencies such as: ```toml,no_sync # In one Cargo.toml [dependencies] clap = "~2.34.0" # In another Cargo.toml [dependencies] clap = "2.34.0" ``` This is inherently an unresolvable crate graph in Cargo right now. Cargo requires there's only one major version of a crate, and being in the same workspace these two crates must share a version. This is impossible in this location, though, as these version constraints cannot be met. #### Minimum Version of Rust `clap` will officially support current stable Rust, minus two releases, but may work with prior releases as well. For example, current stable Rust at the time of this writing is 1.41.0, meaning `clap` is guaranteed to compile with 1.39.0 and beyond. At the 1.42.0 stable release, `clap` will be guaranteed to compile with 1.40.0 and beyond, etc. Upon bumping the minimum version of Rust (assuming it's within the stable-2 range), it *must* be clearly annotated in the `CHANGELOG.md`. #### Breaking Changes `clap` takes a similar policy to Rust and will bump the major version number upon breaking changes with only the following exceptions: * The breaking change is to fix a security concern * The breaking change is to fix a bug (i.e. 
relying on a bug as a feature) * The breaking change is to a feature that isn't used in the wild, or all users of said feature have given approval *prior* to the change #### Compatibility with Wasm A best effort is made to ensure that `clap` will work on projects targeting `wasm32-unknown-unknown`. However, there is no dedicated CI build covering this specific target. ## License `clap` is licensed under the MIT license. Please read the [LICENSE-MIT](LICENSE-MIT) file in this repository for more information. ## Related Crates There are several excellent crates which can be used with `clap`; I recommend checking them all out! If you've got a crate that would be a good fit to be used with `clap`, open an issue and let me know, I'd love to add it! * [`structopt`](https://github.com/TeXitoi/structopt) - This crate allows you to define a struct, and build a CLI from it! No more "stringly typed", and it uses `clap` behind the scenes! (*Note*: There is work underway to pull this crate into mainline `clap`). * [`assert_cli`](https://github.com/assert-rs/assert_cli) - This crate allows you to test your CLIs in a very intuitive and functional way! ## Recent Breaking Changes `clap` follows semantic versioning, so breaking changes should only happen upon major version bumps. The only exception to this rule is breaking changes that happen due to an implementation that was deemed to be a bug, due to security concerns, or when the change can reasonably be proven to affect no code. For the full details, see [CHANGELOG.md](./CHANGELOG.md). As of 2.27.0: * Argument values now take precedence over subcommand names. This only arises by using unrestrained multiple values and subcommands together where the subcommand name can coincide with one of the multiple values. Such as `$ prog <values>... <subcommand>`. The fix is to place restraints on the number of values, or disallow the use of the `$ prog <values>... <subcommand>` structure. As of 2.0.0 (From 1.x) * **Fewer lifetimes! Yay!** * `App<'a, 'b, 'c, 'd, 'e, 'f>` => `App<'a, 'b>` * `Arg<'a, 'b, 'c, 'd, 'e, 'f>` => `Arg<'a, 'b>` * `ArgMatches<'a, 'b>` => `ArgMatches<'a>` * **Simply Renamed** * `App::arg_group` => `App::group` * `App::arg_groups` => `App::groups` * `ArgGroup::add` => `ArgGroup::arg` * `ArgGroup::add_all` => `ArgGroup::args` * `ClapError` => `Error` * struct field `ClapError::error_type` => `Error::kind` * `ClapResult` => `Result` * `ClapErrorType` => `ErrorKind` * **Removed Deprecated Functions and Methods** * `App::subcommands_negate_reqs` * `App::subcommand_required` * `App::arg_required_else_help` * `App::global_version(bool)` * `App::versionless_subcommands` * `App::unified_help_messages` * `App::wait_on_error` * `App::subcommand_required_else_help` * `SubCommand::new` * `App::error_on_no_subcommand` * `Arg::new` * `Arg::mutually_excludes` * `Arg::mutually_excludes_all` * `Arg::mutually_overrides_with` * `simple_enum!` * **Renamed Error Variants** * `InvalidUnicode` => `InvalidUtf8` * `InvalidArgument` => `UnknownArgument` * **Usage Parser** * Value names can now be specified inline, i.e. `-o, --option <FILE> <FILE2> 'some option which takes two files'` * **There is now a priority of order to determine the name** - This is perhaps the biggest breaking change. See the documentation for full details. Prior to this change, the value name took precedence. **Ensure your args are using the proper names (i.e. 
typically the long or short and NOT the value name) throughout the code** * `ArgMatches::values_of` returns a `Values` now which implements `Iterator` (should not break any code) * `crate_version!` returns `&'static str` instead of `String` ### Deprecations Old method names will be left around for several minor version bumps, or one major version bump. As of 2.27.0: * **AppSettings::PropagateGlobalValuesDown:** this setting is deprecated and is no longer required to propagate values down or up vendor/clap/clap-test.rs0000664000175000017500000000764114160055207016040 0ustar mwhudsonmwhudson#[allow(unused_imports, dead_code)] mod test { use std::str; use std::io::{Cursor, Write}; use regex::Regex; use clap::{App, Arg, SubCommand, ArgGroup}; fn compare<S, S2>(l: S, r: S2) -> bool where S: AsRef<str>, S2: AsRef<str> { let re = Regex::new("\x1b[^m]*m").unwrap(); // Strip out any mismatching \r character on windows that might sneak in on either side let ls = l.as_ref().trim().replace("\r", ""); let rs = r.as_ref().trim().replace("\r", ""); let left = re.replace_all(&*ls, ""); let right = re.replace_all(&*rs, ""); let b = left == right; if !b { println!(); println!("--> left"); println!("{}", left); println!("--> right"); println!("{}", right); println!("--") } b } pub fn compare_output(l: App, args: &str, right: &str, stderr: bool) -> bool { let mut buf = Cursor::new(Vec::with_capacity(50)); let res = l.get_matches_from_safe(args.split(' ').collect::<Vec<_>>()); let err = res.unwrap_err(); err.write_to(&mut buf).unwrap(); let content = buf.into_inner(); let left = String::from_utf8(content).unwrap(); assert_eq!(stderr, err.use_stderr()); compare(left, right) } pub fn compare_output2(l: App, args: &str, right1: &str, right2: &str, stderr: bool) -> bool { let mut buf = Cursor::new(Vec::with_capacity(50)); let res = l.get_matches_from_safe(args.split(' ').collect::<Vec<_>>()); let err = res.unwrap_err(); err.write_to(&mut buf).unwrap(); let content = buf.into_inner(); let left = String::from_utf8(content).unwrap(); assert_eq!(stderr, err.use_stderr()); compare(&*left, right1) || compare(&*left, right2) } // Legacy tests from the Python script days pub fn complex_app() -> App<'static, 'static> { let args = "-o --option=[opt]... 'tests options' [positional] 'tests positionals'"; let opt3_vals = ["fast", "slow"]; let pos3_vals = ["vi", "emacs"]; App::new("clap-test") .version("v1.4.8") .about("tests clap library") .author("Kevin K. ") .args_from_usage(args) .arg(Arg::from_usage("-f --flag... 'tests flags'") .global(true)) .args(&[ Arg::from_usage("[flag2] -F 'tests flags with exclusions'").conflicts_with("flag").requires("long-option-2"), Arg::from_usage("--long-option-2 [option2] 'tests long options with exclusions'").conflicts_with("option").requires("positional2"), Arg::from_usage("[positional2] 'tests positionals with exclusions'"), Arg::from_usage("-O --Option [option3] 'specific vals'").possible_values(&opt3_vals), Arg::from_usage("[positional3]... 'tests specific values'").possible_values(&pos3_vals), Arg::from_usage("--multvals [one] [two] 'Tests multiple values, not mult occs'"), Arg::from_usage("--multvalsmo... [one] [two] 'Tests multiple values, and mult occs'"), Arg::from_usage("--minvals2 [minvals]... 'Tests 2 min vals'").min_values(2), Arg::from_usage("--maxvals3 [maxvals]... 'Tests 3 max vals'").max_values(3) ]) .subcommand(SubCommand::with_name("subcmd") .about("tests subcommands") .version("0.1") .author("Kevin K. ") .arg_from_usage("-o --option [scoption]... 
'tests options'") .arg_from_usage("-s --subcmdarg [subcmdarg] 'tests other args'") .arg_from_usage("[scpositional] 'tests positionals'")) } } vendor/bitmaps/0000775000175000017500000000000014160055207014306 5ustar mwhudsonmwhudsonvendor/bitmaps/.cargo-checksum.json0000664000175000017500000000013114160055207020145 0ustar mwhudsonmwhudson{"files":{},"package":"031043d04099746d8db04daf1fa424b2bc8bd69d92b25962dcde24da39ab64a2"}vendor/bitmaps/CODE_OF_CONDUCT.md0000664000175000017500000000623214160055207017110 0ustar mwhudsonmwhudson# Contributor Covenant Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at admin@immutable.rs. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 
## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html [homepage]: https://www.contributor-covenant.org vendor/bitmaps/Cargo.toml0000664000175000017500000000206314160055207016237 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "bitmaps" version = "2.1.0" authors = ["Bodil Stokke "] exclude = ["release.toml", "proptest-regressions/**"] description = "Fixed size boolean arrays" documentation = "http://docs.rs/bitmaps" readme = "./README.md" categories = ["data-structures"] license = "MPL-2.0+" repository = "https://github.com/bodil/bitmaps" [dependencies.typenum] version = "1.10.0" [dev-dependencies.proptest] version = "0.9.1" [dev-dependencies.proptest-derive] version = "0.1.0" [features] default = ["std"] std = [] vendor/bitmaps/LICENCE.md0000664000175000017500000003627614160055207015710 0ustar mwhudsonmwhudsonMozilla Public License Version 2.0 ================================== ### 1. Definitions **1.1. “Contributorâ€** means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. **1.2. “Contributor Versionâ€** means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. **1.3. “Contributionâ€** means Covered Software of a particular Contributor. **1.4. “Covered Softwareâ€** means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. **1.5. “Incompatible With Secondary Licensesâ€** means * **(a)** that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or * **(b)** that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. **1.6. “Executable Formâ€** means any form of the work other than Source Code Form. **1.7. “Larger Workâ€** means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. **1.8. “Licenseâ€** means this document. **1.9. “Licensableâ€** means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. **1.10. “Modificationsâ€** means any of the following: * **(a)** any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or * **(b)** any new file in Source Code Form that contains any Covered Software. **1.11. 
“Patent Claims” of a Contributor** means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. **1.12. “Secondary License”** means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. **1.13. “Source Code Form”** means the form of the work preferred for making modifications. **1.14. “You” (or “Your”)** means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means **(a)** the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or **(b)** ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. ### 2. License Grants and Conditions #### 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: * **(a)** under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and * **(b)** under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. #### 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. #### 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: * **(a)** for any code that a Contributor has removed from Covered Software; or * **(b)** for infringements caused by: **(i)** Your and any other third party's modifications of Covered Software, or **(ii)** the combination of its Contributions with other software (except as part of its Contributor Version); or * **(c)** under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). #### 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). #### 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. #### 2.6. 
Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. #### 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. ### 3. Responsibilities #### 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. #### 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: * **(a)** such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and * **(b)** You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. #### 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). #### 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. #### 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. ### 4. 
Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: **(a)** comply with the terms of this License to the maximum extent possible; and **(b)** describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. ### 5. Termination **5.1.** The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated **(a)** provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and **(b)** on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. **5.2.** If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. **5.3.** In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. ### 6. Disclaimer of Warranty > Covered Software is provided under this License on an “as is†> basis, without warranty of any kind, either expressed, implied, or > statutory, including, without limitation, warranties that the > Covered Software is free of defects, merchantable, fit for a > particular purpose or non-infringing. The entire risk as to the > quality and performance of the Covered Software is with You. > Should any Covered Software prove defective in any respect, You > (not any Contributor) assume the cost of any necessary servicing, > repair, or correction. This disclaimer of warranty constitutes an > essential part of this License. No use of any Covered Software is > authorized under this License except under this disclaimer. ### 7. 
Limitation of Liability > Under no circumstances and under no legal theory, whether tort > (including negligence), contract, or otherwise, shall any > Contributor, or anyone who distributes Covered Software as > permitted above, be liable to You for any direct, indirect, > special, incidental, or consequential damages of any character > including, without limitation, damages for lost profits, loss of > goodwill, work stoppage, computer failure or malfunction, or any > and all other commercial damages or losses, even if such party > shall have been informed of the possibility of such damages. This > limitation of liability shall not apply to liability for death or > personal injury resulting from such party's negligence to the > extent applicable law prohibits such limitation. Some > jurisdictions do not allow the exclusion or limitation of > incidental or consequential damages, so this exclusion and > limitation may not apply to You. ### 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. ### 9. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. ### 10. Versions of the License #### 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. #### 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. #### 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). #### 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. ## Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. 
## Exhibit B - “Incompatible With Secondary Licenses” Notice This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0. vendor/bitmaps/CHANGELOG.md0000664000175000017500000000242314160055207016120 0ustar mwhudsonmwhudson# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). ## [2.1.0] - 2020-03-26 ### ADDED - There is now a `std` feature flag, on by default, which you can disable to get a `no_std` crate. ## [2.0.0] - 2019-09-09 ### CHANGED - `Bits` now does a lot less work, which is now being done instead by the `BitOps` trait on its storage type. This turns out to improve compilation time quite considerably. If you were using methods on `Bits` directly, they will have moved to `BitOps`. - `Debug` now prints a single hex value for the entire bitmap, rather than deferring to the storage type. - `Iter` now takes a reference instead of a copy, which is more sensible for larger bitmaps. ### ADDED - `Bitmap` now implements `BitAnd`, `BitOr`, `BitXor`, their equivalent assignation traits, and `Not`, meaning you can now use bitwise operators on them, even the very big array-of-u128 ones. - A `Bitmap::mask()` constructor has been added, to construct bitmasks more efficiently, now that there are bitwise operators to use them with. ## [1.0.0] - 2019-09-06 Initial release. vendor/bitmaps/src/0000775000175000017500000000000014160055207015075 5ustar mwhudsonmwhudsonvendor/bitmaps/src/bitmap.rs0000664000175000017500000003176414160055207016730 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use core::ops::*; use typenum::*; use crate::types::{BitOps, Bits}; #[cfg(feature = "std")] use std::fmt::{Debug, Error, Formatter}; /// A compact array of bits. /// /// The type used to store the bitmap will be the minimum unsigned integer type /// required to fit the number of bits, from `u8` to `u128`. If the size is 1, /// `bool` is used. If the size exceeds 128, an array of `u128` will be used, /// sized appropriately. The maximum supported size is currently 1024, /// represented by an array `[u128; 8]`. pub struct Bitmap<Size: Bits> { pub(crate) data: Size::Store, } impl<Size: Bits> Clone for Bitmap<Size> { fn clone(&self) -> Self { Bitmap { data: self.data } } } impl<Size: Bits> Copy for Bitmap<Size> {} impl<Size: Bits> Default for Bitmap<Size> { fn default() -> Self { Bitmap { data: Size::Store::default(), } } } impl<Size: Bits> PartialEq for Bitmap<Size> { fn eq(&self, other: &Self) -> bool { self.data == other.data } } #[cfg(feature = "std")] impl<Size: Bits> Debug for Bitmap<Size> { fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> { write!(f, "{}", Size::Store::to_hex(&self.data)) } } impl<Size: Bits> Bitmap<Size> { /// Construct a bitmap with every bit set to `false`. #[inline] pub fn new() -> Self { Self::default() } /// Construct a bitmap where every bit with index less than `bits` is /// `true`, and every other bit is `false`. #[inline] pub fn mask(bits: usize) -> Self { debug_assert!(bits < Size::USIZE); Self { data: Size::Store::make_mask(bits), } } /// Construct a bitmap from a value of the same type as its backing store. #[inline] pub fn from_value(data: Size::Store) -> Self { Self { data } } /// Convert this bitmap into a value of the type of its backing store. 
#[inline] pub fn into_value(self) -> Size::Store { self.data } /// Count the number of `true` bits in the bitmap. #[inline] pub fn len(self) -> usize { Size::Store::len(&self.data) } /// Test if the bitmap contains only `false` bits. #[inline] pub fn is_empty(self) -> bool { self.first_index().is_none() } /// Get the value of the bit at a given index. #[inline] pub fn get(self, index: usize) -> bool { debug_assert!(index < Size::USIZE); Size::Store::get(&self.data, index) } /// Set the value of the bit at a given index. /// /// Returns the previous value of the bit. #[inline] pub fn set(&mut self, index: usize, value: bool) -> bool { debug_assert!(index < Size::USIZE); Size::Store::set(&mut self.data, index, value) } /// Find the index of the first `true` bit in the bitmap. #[inline] pub fn first_index(self) -> Option { Size::Store::first_index(&self.data) } /// Invert all the bits in the bitmap. #[inline] pub fn invert(&mut self) { Size::Store::invert(&mut self.data); } } impl<'a, Size: Bits> IntoIterator for &'a Bitmap { type Item = usize; type IntoIter = Iter<'a, Size>; fn into_iter(self) -> Self::IntoIter { Iter { index: 0, data: self, } } } impl BitAnd for Bitmap { type Output = Self; fn bitand(mut self, rhs: Self) -> Self::Output { Size::Store::bit_and(&mut self.data, &rhs.data); self } } impl BitOr for Bitmap { type Output = Self; fn bitor(mut self, rhs: Self) -> Self::Output { Size::Store::bit_or(&mut self.data, &rhs.data); self } } impl BitXor for Bitmap { type Output = Self; fn bitxor(mut self, rhs: Self) -> Self::Output { Size::Store::bit_xor(&mut self.data, &rhs.data); self } } impl Not for Bitmap { type Output = Self; fn not(mut self) -> Self::Output { Size::Store::invert(&mut self.data); self } } impl BitAndAssign for Bitmap { fn bitand_assign(&mut self, rhs: Self) { Size::Store::bit_and(&mut self.data, &rhs.data); } } impl BitOrAssign for Bitmap { fn bitor_assign(&mut self, rhs: Self) { Size::Store::bit_or(&mut self.data, &rhs.data); } } impl BitXorAssign for Bitmap { fn bitxor_assign(&mut self, rhs: Self) { Size::Store::bit_xor(&mut self.data, &rhs.data); } } impl From<[u128; 2]> for Bitmap { fn from(data: [u128; 2]) -> Self { Bitmap { data } } } impl From<[u128; 3]> for Bitmap { fn from(data: [u128; 3]) -> Self { Bitmap { data } } } impl From<[u128; 4]> for Bitmap { fn from(data: [u128; 4]) -> Self { Bitmap { data } } } impl From<[u128; 5]> for Bitmap { fn from(data: [u128; 5]) -> Self { Bitmap { data } } } impl From<[u128; 6]> for Bitmap { fn from(data: [u128; 6]) -> Self { Bitmap { data } } } impl From<[u128; 7]> for Bitmap { fn from(data: [u128; 7]) -> Self { Bitmap { data } } } impl From<[u128; 8]> for Bitmap { fn from(data: [u128; 8]) -> Self { Bitmap { data } } } impl Into<[u128; 2]> for Bitmap { fn into(self) -> [u128; 2] { self.data } } impl Into<[u128; 3]> for Bitmap { fn into(self) -> [u128; 3] { self.data } } impl Into<[u128; 4]> for Bitmap { fn into(self) -> [u128; 4] { self.data } } impl Into<[u128; 5]> for Bitmap { fn into(self) -> [u128; 5] { self.data } } impl Into<[u128; 6]> for Bitmap { fn into(self) -> [u128; 6] { self.data } } impl Into<[u128; 7]> for Bitmap { fn into(self) -> [u128; 7] { self.data } } impl Into<[u128; 8]> for Bitmap { fn into(self) -> [u128; 8] { self.data } } /// An iterator over the indices in a bitmap which are `true`. /// /// This yields a sequence of `usize` indices, not their contents (which are /// always `true` anyway, by definition). 
/// /// # Examples /// /// ```rust /// # use bitmaps::Bitmap; /// # use typenum::U10; /// let mut bitmap: Bitmap = Bitmap::new(); /// bitmap.set(3, true); /// bitmap.set(5, true); /// bitmap.set(8, true); /// let true_indices: Vec = bitmap.into_iter().collect(); /// assert_eq!(vec![3, 5, 8], true_indices); /// ``` pub struct Iter<'a, Size: Bits> { index: usize, data: &'a Bitmap, } impl<'a, Size: Bits> Iterator for Iter<'a, Size> { type Item = usize; fn next(&mut self) -> Option { if self.index >= Size::USIZE { return None; } if self.data.get(self.index) { self.index += 1; Some(self.index - 1) } else { self.index += 1; self.next() } } } #[cfg(any(target_arch = "x86", target_arch = "x86_64"))] #[allow(clippy::cast_ptr_alignment)] mod x86_arch { use super::*; #[cfg(target_arch = "x86")] use core::arch::x86::*; #[cfg(target_arch = "x86_64")] use core::arch::x86_64::*; impl Bitmap { #[target_feature(enable = "sse2")] pub unsafe fn load_m128i(&self) -> __m128i { _mm_loadu_si128(&self.data as *const _ as *const __m128i) } } impl Bitmap { #[target_feature(enable = "sse2")] pub unsafe fn load_m128i(&self) -> [__m128i; 2] { let ptr = &self.data as *const _ as *const __m128i; [_mm_loadu_si128(ptr), _mm_loadu_si128(ptr.add(1))] } #[target_feature(enable = "avx")] pub unsafe fn load_m256i(&self) -> __m256i { _mm256_loadu_si256(&self.data as *const _ as *const __m256i) } } impl Bitmap { #[target_feature(enable = "sse2")] pub unsafe fn load_m128i(&self) -> [__m128i; 4] { let ptr = &self.data as *const _ as *const __m128i; [ _mm_loadu_si128(ptr), _mm_loadu_si128(ptr.add(1)), _mm_loadu_si128(ptr.add(2)), _mm_loadu_si128(ptr.add(3)), ] } #[target_feature(enable = "avx")] pub unsafe fn load_m256i(&self) -> [__m256i; 2] { let ptr = &self.data as *const _ as *const __m256i; [_mm256_loadu_si256(ptr), _mm256_loadu_si256(ptr.add(1))] } } impl Bitmap { #[target_feature(enable = "sse2")] pub unsafe fn load_m128i(&self) -> [__m128i; 6] { let ptr = &self.data as *const _ as *const __m128i; [ _mm_loadu_si128(ptr), _mm_loadu_si128(ptr.add(1)), _mm_loadu_si128(ptr.add(2)), _mm_loadu_si128(ptr.add(3)), _mm_loadu_si128(ptr.add(4)), _mm_loadu_si128(ptr.add(5)), ] } #[target_feature(enable = "avx")] pub unsafe fn load_m256i(&self) -> [__m256i; 3] { let ptr = &self.data as *const _ as *const __m256i; [ _mm256_loadu_si256(ptr), _mm256_loadu_si256(ptr.add(1)), _mm256_loadu_si256(ptr.add(2)), ] } } impl Bitmap { #[target_feature(enable = "sse2")] pub unsafe fn load_m128i(&self) -> [__m128i; 8] { let ptr = &self.data as *const _ as *const __m128i; [ _mm_loadu_si128(ptr), _mm_loadu_si128(ptr.add(1)), _mm_loadu_si128(ptr.add(2)), _mm_loadu_si128(ptr.add(3)), _mm_loadu_si128(ptr.add(4)), _mm_loadu_si128(ptr.add(5)), _mm_loadu_si128(ptr.add(6)), _mm_loadu_si128(ptr.add(7)), ] } #[target_feature(enable = "avx")] pub unsafe fn load_m256i(&self) -> [__m256i; 4] { let ptr = &self.data as *const _ as *const __m256i; [ _mm256_loadu_si256(ptr), _mm256_loadu_si256(ptr.add(1)), _mm256_loadu_si256(ptr.add(2)), _mm256_loadu_si256(ptr.add(3)), ] } } impl From<__m128i> for Bitmap { fn from(data: __m128i) -> Self { Self { data: unsafe { core::mem::transmute(data) }, } } } impl From<__m256i> for Bitmap { fn from(data: __m256i) -> Self { Self { data: unsafe { core::mem::transmute(data) }, } } } impl Into<__m128i> for Bitmap { fn into(self) -> __m128i { unsafe { self.load_m128i() } } } impl Into<__m256i> for Bitmap { fn into(self) -> __m256i { unsafe { self.load_m256i() } } } #[cfg(test)] mod test { use super::*; struct AlignmentTester where B: 
Bits, { _byte: u8, bits: Bitmap, } #[test] fn load_128() { let mut t: AlignmentTester = AlignmentTester { _byte: 0, bits: Bitmap::new(), }; t.bits.set(5, true); let m = unsafe { t.bits.load_m128i() }; let mut bits: Bitmap = m.into(); assert!(bits.set(5, false)); assert!(bits.is_empty()); } #[test] fn load_256() { let mut t: AlignmentTester = AlignmentTester { _byte: 0, bits: Bitmap::new(), }; t.bits.set(5, true); let m = unsafe { t.bits.load_m256i() }; let mut bits: Bitmap = m.into(); assert!(bits.set(5, false)); assert!(bits.is_empty()); } } } #[cfg(test)] mod test { use super::*; use proptest::collection::btree_set; use proptest::proptest; use typenum::{U1024, U64}; proptest! { #[test] fn get_set_and_iter_64(bits in btree_set(0..64usize, 0..64)) { let mut bitmap = Bitmap::::new(); for i in &bits { bitmap.set(*i, true); } for i in 0..64 { assert_eq!(bitmap.get(i), bits.contains(&i)); } assert!(bitmap.into_iter().eq(bits.into_iter())); } #[test] fn get_set_and_iter_1024(bits in btree_set(0..1024usize, 0..1024)) { let mut bitmap = Bitmap::::new(); for i in &bits { bitmap.set(*i, true); } for i in 0..1024 { assert_eq!(bitmap.get(i), bits.contains(&i)); } assert!(bitmap.into_iter().eq(bits.into_iter())); } } } vendor/bitmaps/src/types.rs0000664000175000017500000007703514160055207016623 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use core::fmt::Debug; use typenum::*; /// A trait that defines generalised operations on a `Bits::Store` type. pub trait BitOps { fn get(bits: &Self, index: usize) -> bool; fn set(bits: &mut Self, index: usize, value: bool) -> bool; fn len(bits: &Self) -> usize; fn first_index(bits: &Self) -> Option; fn bit_and(bits: &mut Self, other_bits: &Self); fn bit_or(bits: &mut Self, other_bits: &Self); fn bit_xor(bits: &mut Self, other_bits: &Self); fn invert(bits: &mut Self); fn make_mask(shift: usize) -> Self; #[cfg(feature = "std")] fn to_hex(bits: &Self) -> String; } impl BitOps for bool { #[inline] fn get(bits: &Self, index: usize) -> bool { debug_assert!(index == 0); *bits } #[inline] fn set(bits: &mut Self, index: usize, value: bool) -> bool { debug_assert!(index == 0); core::mem::replace(bits, value) } #[inline] fn len(bits: &Self) -> usize { if *bits { 1 } else { 0 } } #[inline] fn first_index(bits: &Self) -> Option { if *bits { Some(0) } else { None } } #[inline] fn bit_and(bits: &mut Self, other_bits: &Self) { *bits &= *other_bits; } #[inline] fn bit_or(bits: &mut Self, other_bits: &Self) { *bits |= *other_bits; } #[inline] fn bit_xor(bits: &mut Self, other_bits: &Self) { *bits ^= *other_bits; } #[inline] fn invert(bits: &mut Self) { *bits = !*bits; } #[inline] fn make_mask(shift: usize) -> Self { shift > 0 } #[cfg(feature = "std")] fn to_hex(bits: &Self) -> String { if *bits { "1".to_owned() } else { "0".to_owned() } } } macro_rules! 
bitops_for { ($target:ty) => { impl BitOps for $target { #[inline] fn get(bits: &Self, index: usize) -> bool { bits & (1 << index) != 0 } #[inline] fn set(bits: &mut Self, index: usize, value: bool) -> bool { let mask = 1 << index; let prev = *bits & mask; if value { *bits |= mask; } else { *bits &= !mask; } prev != 0 } #[inline] fn len(bits: &Self) -> usize { bits.count_ones() as usize } #[inline] fn first_index(bits: &Self) -> Option { if *bits == 0 { None } else { Some(bits.trailing_zeros() as usize) } } #[inline] fn bit_and(bits: &mut Self, other_bits: &Self) { *bits &= *other_bits; } #[inline] fn bit_or(bits: &mut Self, other_bits: &Self) { *bits |= *other_bits; } #[inline] fn bit_xor(bits: &mut Self, other_bits: &Self) { *bits ^= *other_bits; } #[inline] fn invert(bits: &mut Self) { *bits = !*bits; } #[inline] fn make_mask(shift: usize) -> Self { (1 << shift) - 1 } #[cfg(feature = "std")] fn to_hex(bits: &Self) -> String { format!("{:x}", bits) } } }; } macro_rules! bitops_for_big { ($words:expr) => { impl BitOps for [u128; $words] { #[inline] fn get(bits: &Self, index: usize) -> bool { let word_index = index / 128; let index = index & 127; bits[word_index] & (1 << index) != 0 } #[inline] fn set(bits: &mut Self, index: usize, value: bool) -> bool { let word_index = index / 128; let index = index & 127; let mask = 1 << (index & 127); let bits = &mut bits[word_index]; let prev = *bits & mask; if value { *bits |= mask; } else { *bits &= !mask; } prev != 0 } fn make_mask(shift: usize) -> Self { let word_index = shift / 128; let index = shift & 127; let mut out = [0; $words]; for (chunk_index, chunk) in out.iter_mut().enumerate() { if chunk_index < word_index { *chunk = !0u128; } else if chunk_index == word_index { *chunk = (1 << index) - 1; } else { return out; } } out } #[inline] fn len(bits: &Self) -> usize { bits.iter().fold(0, |acc, next| acc + next.count_ones()) as usize } #[inline] fn first_index(bits: &Self) -> Option { for (index, part) in bits.iter().enumerate() { if *part != 0u128 { return Some(part.trailing_zeros() as usize + (128 * index)); } } None } #[inline] fn bit_and(bits: &mut Self, other_bits: &Self) { for (left, right) in bits.iter_mut().zip(other_bits.iter()) { *left &= *right; } } #[inline] fn bit_or(bits: &mut Self, other_bits: &Self) { for (left, right) in bits.iter_mut().zip(other_bits.iter()) { *left |= *right; } } #[inline] fn bit_xor(bits: &mut Self, other_bits: &Self) { for (left, right) in bits.iter_mut().zip(other_bits.iter()) { *left ^= *right; } } #[inline] fn invert(bits: &mut Self) { for chunk in bits.iter_mut() { *chunk = !*chunk; } } #[cfg(feature = "std")] fn to_hex(bits: &Self) -> String { let mut out = String::new(); for chunk in bits { out += &format!("{:x}", chunk); } out } } }; } bitops_for!(u8); bitops_for!(u16); bitops_for!(u32); bitops_for!(u64); bitops_for!(u128); bitops_for_big!(2); bitops_for_big!(3); bitops_for_big!(4); bitops_for_big!(5); bitops_for_big!(6); bitops_for_big!(7); bitops_for_big!(8); /// A type level number signifying the number of bits in a bitmap. /// /// This trait is implemented for type level numbers from `U1` to `U1024`. /// /// # Examples /// /// ```rust /// # #[macro_use] extern crate bitmaps; /// # use bitmaps::Bits; /// # use typenum::U10; /// assert_eq!( /// std::mem::size_of::<::Store>(), /// std::mem::size_of::() /// ); /// ``` pub trait Bits: Unsigned { /// A primitive integer type suitable for storing this many bits. 
type Store: BitOps + Default + Copy + PartialEq + Debug; } impl Bits for U1 { type Store = bool; } macro_rules! bits_for { ($num:ty, $result:ty) => { impl Bits for $num { type Store = $result; } }; } macro_rules! bits_for_big { ($num:ty, $words:expr) => { impl Bits for $num { type Store = [u128; $words]; } }; } bits_for!(U2, u8); bits_for!(U3, u8); bits_for!(U4, u8); bits_for!(U5, u8); bits_for!(U6, u8); bits_for!(U7, u8); bits_for!(U8, u8); bits_for!(U9, u16); bits_for!(U10, u16); bits_for!(U11, u16); bits_for!(U12, u16); bits_for!(U13, u16); bits_for!(U14, u16); bits_for!(U15, u16); bits_for!(U16, u16); bits_for!(U17, u32); bits_for!(U18, u32); bits_for!(U19, u32); bits_for!(U20, u32); bits_for!(U21, u32); bits_for!(U22, u32); bits_for!(U23, u32); bits_for!(U24, u32); bits_for!(U25, u32); bits_for!(U26, u32); bits_for!(U27, u32); bits_for!(U28, u32); bits_for!(U29, u32); bits_for!(U30, u32); bits_for!(U31, u32); bits_for!(U32, u32); bits_for!(U33, u64); bits_for!(U34, u64); bits_for!(U35, u64); bits_for!(U36, u64); bits_for!(U37, u64); bits_for!(U38, u64); bits_for!(U39, u64); bits_for!(U40, u64); bits_for!(U41, u64); bits_for!(U42, u64); bits_for!(U43, u64); bits_for!(U44, u64); bits_for!(U45, u64); bits_for!(U46, u64); bits_for!(U47, u64); bits_for!(U48, u64); bits_for!(U49, u64); bits_for!(U50, u64); bits_for!(U51, u64); bits_for!(U52, u64); bits_for!(U53, u64); bits_for!(U54, u64); bits_for!(U55, u64); bits_for!(U56, u64); bits_for!(U57, u64); bits_for!(U58, u64); bits_for!(U59, u64); bits_for!(U60, u64); bits_for!(U61, u64); bits_for!(U62, u64); bits_for!(U63, u64); bits_for!(U64, u64); bits_for!(U65, u128); bits_for!(U66, u128); bits_for!(U67, u128); bits_for!(U68, u128); bits_for!(U69, u128); bits_for!(U70, u128); bits_for!(U71, u128); bits_for!(U72, u128); bits_for!(U73, u128); bits_for!(U74, u128); bits_for!(U75, u128); bits_for!(U76, u128); bits_for!(U77, u128); bits_for!(U78, u128); bits_for!(U79, u128); bits_for!(U80, u128); bits_for!(U81, u128); bits_for!(U82, u128); bits_for!(U83, u128); bits_for!(U84, u128); bits_for!(U85, u128); bits_for!(U86, u128); bits_for!(U87, u128); bits_for!(U88, u128); bits_for!(U89, u128); bits_for!(U90, u128); bits_for!(U91, u128); bits_for!(U92, u128); bits_for!(U93, u128); bits_for!(U94, u128); bits_for!(U95, u128); bits_for!(U96, u128); bits_for!(U97, u128); bits_for!(U98, u128); bits_for!(U99, u128); bits_for!(U100, u128); bits_for!(U101, u128); bits_for!(U102, u128); bits_for!(U103, u128); bits_for!(U104, u128); bits_for!(U105, u128); bits_for!(U106, u128); bits_for!(U107, u128); bits_for!(U108, u128); bits_for!(U109, u128); bits_for!(U110, u128); bits_for!(U111, u128); bits_for!(U112, u128); bits_for!(U113, u128); bits_for!(U114, u128); bits_for!(U115, u128); bits_for!(U116, u128); bits_for!(U117, u128); bits_for!(U118, u128); bits_for!(U119, u128); bits_for!(U120, u128); bits_for!(U121, u128); bits_for!(U122, u128); bits_for!(U123, u128); bits_for!(U124, u128); bits_for!(U125, u128); bits_for!(U126, u128); bits_for!(U127, u128); bits_for!(U128, u128); bits_for_big!(U129, 2); bits_for_big!(U130, 2); bits_for_big!(U131, 2); bits_for_big!(U132, 2); bits_for_big!(U133, 2); bits_for_big!(U134, 2); bits_for_big!(U135, 2); bits_for_big!(U136, 2); bits_for_big!(U137, 2); bits_for_big!(U138, 2); bits_for_big!(U139, 2); bits_for_big!(U140, 2); bits_for_big!(U141, 2); bits_for_big!(U142, 2); bits_for_big!(U143, 2); bits_for_big!(U144, 2); bits_for_big!(U145, 2); bits_for_big!(U146, 2); bits_for_big!(U147, 2); bits_for_big!(U148, 2); 
bits_for_big!(U149, 2); bits_for_big!(U150, 2); bits_for_big!(U151, 2); bits_for_big!(U152, 2); bits_for_big!(U153, 2); bits_for_big!(U154, 2); bits_for_big!(U155, 2); bits_for_big!(U156, 2); bits_for_big!(U157, 2); bits_for_big!(U158, 2); bits_for_big!(U159, 2); bits_for_big!(U160, 2); bits_for_big!(U161, 2); bits_for_big!(U162, 2); bits_for_big!(U163, 2); bits_for_big!(U164, 2); bits_for_big!(U165, 2); bits_for_big!(U166, 2); bits_for_big!(U167, 2); bits_for_big!(U168, 2); bits_for_big!(U169, 2); bits_for_big!(U170, 2); bits_for_big!(U171, 2); bits_for_big!(U172, 2); bits_for_big!(U173, 2); bits_for_big!(U174, 2); bits_for_big!(U175, 2); bits_for_big!(U176, 2); bits_for_big!(U177, 2); bits_for_big!(U178, 2); bits_for_big!(U179, 2); bits_for_big!(U180, 2); bits_for_big!(U181, 2); bits_for_big!(U182, 2); bits_for_big!(U183, 2); bits_for_big!(U184, 2); bits_for_big!(U185, 2); bits_for_big!(U186, 2); bits_for_big!(U187, 2); bits_for_big!(U188, 2); bits_for_big!(U189, 2); bits_for_big!(U190, 2); bits_for_big!(U191, 2); bits_for_big!(U192, 2); bits_for_big!(U193, 2); bits_for_big!(U194, 2); bits_for_big!(U195, 2); bits_for_big!(U196, 2); bits_for_big!(U197, 2); bits_for_big!(U198, 2); bits_for_big!(U199, 2); bits_for_big!(U200, 2); bits_for_big!(U201, 2); bits_for_big!(U202, 2); bits_for_big!(U203, 2); bits_for_big!(U204, 2); bits_for_big!(U205, 2); bits_for_big!(U206, 2); bits_for_big!(U207, 2); bits_for_big!(U208, 2); bits_for_big!(U209, 2); bits_for_big!(U210, 2); bits_for_big!(U211, 2); bits_for_big!(U212, 2); bits_for_big!(U213, 2); bits_for_big!(U214, 2); bits_for_big!(U215, 2); bits_for_big!(U216, 2); bits_for_big!(U217, 2); bits_for_big!(U218, 2); bits_for_big!(U219, 2); bits_for_big!(U220, 2); bits_for_big!(U221, 2); bits_for_big!(U222, 2); bits_for_big!(U223, 2); bits_for_big!(U224, 2); bits_for_big!(U225, 2); bits_for_big!(U226, 2); bits_for_big!(U227, 2); bits_for_big!(U228, 2); bits_for_big!(U229, 2); bits_for_big!(U230, 2); bits_for_big!(U231, 2); bits_for_big!(U232, 2); bits_for_big!(U233, 2); bits_for_big!(U234, 2); bits_for_big!(U235, 2); bits_for_big!(U236, 2); bits_for_big!(U237, 2); bits_for_big!(U238, 2); bits_for_big!(U239, 2); bits_for_big!(U240, 2); bits_for_big!(U241, 2); bits_for_big!(U242, 2); bits_for_big!(U243, 2); bits_for_big!(U244, 2); bits_for_big!(U245, 2); bits_for_big!(U246, 2); bits_for_big!(U247, 2); bits_for_big!(U248, 2); bits_for_big!(U249, 2); bits_for_big!(U250, 2); bits_for_big!(U251, 2); bits_for_big!(U252, 2); bits_for_big!(U253, 2); bits_for_big!(U254, 2); bits_for_big!(U255, 2); bits_for_big!(U256, 2); bits_for_big!(U257, 3); bits_for_big!(U258, 3); bits_for_big!(U259, 3); bits_for_big!(U260, 3); bits_for_big!(U261, 3); bits_for_big!(U262, 3); bits_for_big!(U263, 3); bits_for_big!(U264, 3); bits_for_big!(U265, 3); bits_for_big!(U266, 3); bits_for_big!(U267, 3); bits_for_big!(U268, 3); bits_for_big!(U269, 3); bits_for_big!(U270, 3); bits_for_big!(U271, 3); bits_for_big!(U272, 3); bits_for_big!(U273, 3); bits_for_big!(U274, 3); bits_for_big!(U275, 3); bits_for_big!(U276, 3); bits_for_big!(U277, 3); bits_for_big!(U278, 3); bits_for_big!(U279, 3); bits_for_big!(U280, 3); bits_for_big!(U281, 3); bits_for_big!(U282, 3); bits_for_big!(U283, 3); bits_for_big!(U284, 3); bits_for_big!(U285, 3); bits_for_big!(U286, 3); bits_for_big!(U287, 3); bits_for_big!(U288, 3); bits_for_big!(U289, 3); bits_for_big!(U290, 3); bits_for_big!(U291, 3); bits_for_big!(U292, 3); bits_for_big!(U293, 3); bits_for_big!(U294, 3); bits_for_big!(U295, 3); bits_for_big!(U296, 3); 
bits_for_big!(U297, 3); bits_for_big!(U298, 3); bits_for_big!(U299, 3); bits_for_big!(U300, 3); bits_for_big!(U301, 3); bits_for_big!(U302, 3); bits_for_big!(U303, 3); bits_for_big!(U304, 3); bits_for_big!(U305, 3); bits_for_big!(U306, 3); bits_for_big!(U307, 3); bits_for_big!(U308, 3); bits_for_big!(U309, 3); bits_for_big!(U310, 3); bits_for_big!(U311, 3); bits_for_big!(U312, 3); bits_for_big!(U313, 3); bits_for_big!(U314, 3); bits_for_big!(U315, 3); bits_for_big!(U316, 3); bits_for_big!(U317, 3); bits_for_big!(U318, 3); bits_for_big!(U319, 3); bits_for_big!(U320, 3); bits_for_big!(U321, 3); bits_for_big!(U322, 3); bits_for_big!(U323, 3); bits_for_big!(U324, 3); bits_for_big!(U325, 3); bits_for_big!(U326, 3); bits_for_big!(U327, 3); bits_for_big!(U328, 3); bits_for_big!(U329, 3); bits_for_big!(U330, 3); bits_for_big!(U331, 3); bits_for_big!(U332, 3); bits_for_big!(U333, 3); bits_for_big!(U334, 3); bits_for_big!(U335, 3); bits_for_big!(U336, 3); bits_for_big!(U337, 3); bits_for_big!(U338, 3); bits_for_big!(U339, 3); bits_for_big!(U340, 3); bits_for_big!(U341, 3); bits_for_big!(U342, 3); bits_for_big!(U343, 3); bits_for_big!(U344, 3); bits_for_big!(U345, 3); bits_for_big!(U346, 3); bits_for_big!(U347, 3); bits_for_big!(U348, 3); bits_for_big!(U349, 3); bits_for_big!(U350, 3); bits_for_big!(U351, 3); bits_for_big!(U352, 3); bits_for_big!(U353, 3); bits_for_big!(U354, 3); bits_for_big!(U355, 3); bits_for_big!(U356, 3); bits_for_big!(U357, 3); bits_for_big!(U358, 3); bits_for_big!(U359, 3); bits_for_big!(U360, 3); bits_for_big!(U361, 3); bits_for_big!(U362, 3); bits_for_big!(U363, 3); bits_for_big!(U364, 3); bits_for_big!(U365, 3); bits_for_big!(U366, 3); bits_for_big!(U367, 3); bits_for_big!(U368, 3); bits_for_big!(U369, 3); bits_for_big!(U370, 3); bits_for_big!(U371, 3); bits_for_big!(U372, 3); bits_for_big!(U373, 3); bits_for_big!(U374, 3); bits_for_big!(U375, 3); bits_for_big!(U376, 3); bits_for_big!(U377, 3); bits_for_big!(U378, 3); bits_for_big!(U379, 3); bits_for_big!(U380, 3); bits_for_big!(U381, 3); bits_for_big!(U382, 3); bits_for_big!(U383, 3); bits_for_big!(U384, 3); bits_for_big!(U385, 4); bits_for_big!(U386, 4); bits_for_big!(U387, 4); bits_for_big!(U388, 4); bits_for_big!(U389, 4); bits_for_big!(U390, 4); bits_for_big!(U391, 4); bits_for_big!(U392, 4); bits_for_big!(U393, 4); bits_for_big!(U394, 4); bits_for_big!(U395, 4); bits_for_big!(U396, 4); bits_for_big!(U397, 4); bits_for_big!(U398, 4); bits_for_big!(U399, 4); bits_for_big!(U400, 4); bits_for_big!(U401, 4); bits_for_big!(U402, 4); bits_for_big!(U403, 4); bits_for_big!(U404, 4); bits_for_big!(U405, 4); bits_for_big!(U406, 4); bits_for_big!(U407, 4); bits_for_big!(U408, 4); bits_for_big!(U409, 4); bits_for_big!(U410, 4); bits_for_big!(U411, 4); bits_for_big!(U412, 4); bits_for_big!(U413, 4); bits_for_big!(U414, 4); bits_for_big!(U415, 4); bits_for_big!(U416, 4); bits_for_big!(U417, 4); bits_for_big!(U418, 4); bits_for_big!(U419, 4); bits_for_big!(U420, 4); bits_for_big!(U421, 4); bits_for_big!(U422, 4); bits_for_big!(U423, 4); bits_for_big!(U424, 4); bits_for_big!(U425, 4); bits_for_big!(U426, 4); bits_for_big!(U427, 4); bits_for_big!(U428, 4); bits_for_big!(U429, 4); bits_for_big!(U430, 4); bits_for_big!(U431, 4); bits_for_big!(U432, 4); bits_for_big!(U433, 4); bits_for_big!(U434, 4); bits_for_big!(U435, 4); bits_for_big!(U436, 4); bits_for_big!(U437, 4); bits_for_big!(U438, 4); bits_for_big!(U439, 4); bits_for_big!(U440, 4); bits_for_big!(U441, 4); bits_for_big!(U442, 4); bits_for_big!(U443, 4); bits_for_big!(U444, 4); 
bits_for_big!(U445, 4); bits_for_big!(U446, 4); bits_for_big!(U447, 4); bits_for_big!(U448, 4); bits_for_big!(U449, 4); bits_for_big!(U450, 4); bits_for_big!(U451, 4); bits_for_big!(U452, 4); bits_for_big!(U453, 4); bits_for_big!(U454, 4); bits_for_big!(U455, 4); bits_for_big!(U456, 4); bits_for_big!(U457, 4); bits_for_big!(U458, 4); bits_for_big!(U459, 4); bits_for_big!(U460, 4); bits_for_big!(U461, 4); bits_for_big!(U462, 4); bits_for_big!(U463, 4); bits_for_big!(U464, 4); bits_for_big!(U465, 4); bits_for_big!(U466, 4); bits_for_big!(U467, 4); bits_for_big!(U468, 4); bits_for_big!(U469, 4); bits_for_big!(U470, 4); bits_for_big!(U471, 4); bits_for_big!(U472, 4); bits_for_big!(U473, 4); bits_for_big!(U474, 4); bits_for_big!(U475, 4); bits_for_big!(U476, 4); bits_for_big!(U477, 4); bits_for_big!(U478, 4); bits_for_big!(U479, 4); bits_for_big!(U480, 4); bits_for_big!(U481, 4); bits_for_big!(U482, 4); bits_for_big!(U483, 4); bits_for_big!(U484, 4); bits_for_big!(U485, 4); bits_for_big!(U486, 4); bits_for_big!(U487, 4); bits_for_big!(U488, 4); bits_for_big!(U489, 4); bits_for_big!(U490, 4); bits_for_big!(U491, 4); bits_for_big!(U492, 4); bits_for_big!(U493, 4); bits_for_big!(U494, 4); bits_for_big!(U495, 4); bits_for_big!(U496, 4); bits_for_big!(U497, 4); bits_for_big!(U498, 4); bits_for_big!(U499, 4); bits_for_big!(U500, 4); bits_for_big!(U501, 4); bits_for_big!(U502, 4); bits_for_big!(U503, 4); bits_for_big!(U504, 4); bits_for_big!(U505, 4); bits_for_big!(U506, 4); bits_for_big!(U507, 4); bits_for_big!(U508, 4); bits_for_big!(U509, 4); bits_for_big!(U510, 4); bits_for_big!(U511, 4); bits_for_big!(U512, 4); bits_for_big!(U513, 5); bits_for_big!(U514, 5); bits_for_big!(U515, 5); bits_for_big!(U516, 5); bits_for_big!(U517, 5); bits_for_big!(U518, 5); bits_for_big!(U519, 5); bits_for_big!(U520, 5); bits_for_big!(U521, 5); bits_for_big!(U522, 5); bits_for_big!(U523, 5); bits_for_big!(U524, 5); bits_for_big!(U525, 5); bits_for_big!(U526, 5); bits_for_big!(U527, 5); bits_for_big!(U528, 5); bits_for_big!(U529, 5); bits_for_big!(U530, 5); bits_for_big!(U531, 5); bits_for_big!(U532, 5); bits_for_big!(U533, 5); bits_for_big!(U534, 5); bits_for_big!(U535, 5); bits_for_big!(U536, 5); bits_for_big!(U537, 5); bits_for_big!(U538, 5); bits_for_big!(U539, 5); bits_for_big!(U540, 5); bits_for_big!(U541, 5); bits_for_big!(U542, 5); bits_for_big!(U543, 5); bits_for_big!(U544, 5); bits_for_big!(U545, 5); bits_for_big!(U546, 5); bits_for_big!(U547, 5); bits_for_big!(U548, 5); bits_for_big!(U549, 5); bits_for_big!(U550, 5); bits_for_big!(U551, 5); bits_for_big!(U552, 5); bits_for_big!(U553, 5); bits_for_big!(U554, 5); bits_for_big!(U555, 5); bits_for_big!(U556, 5); bits_for_big!(U557, 5); bits_for_big!(U558, 5); bits_for_big!(U559, 5); bits_for_big!(U560, 5); bits_for_big!(U561, 5); bits_for_big!(U562, 5); bits_for_big!(U563, 5); bits_for_big!(U564, 5); bits_for_big!(U565, 5); bits_for_big!(U566, 5); bits_for_big!(U567, 5); bits_for_big!(U568, 5); bits_for_big!(U569, 5); bits_for_big!(U570, 5); bits_for_big!(U571, 5); bits_for_big!(U572, 5); bits_for_big!(U573, 5); bits_for_big!(U574, 5); bits_for_big!(U575, 5); bits_for_big!(U576, 5); bits_for_big!(U577, 5); bits_for_big!(U578, 5); bits_for_big!(U579, 5); bits_for_big!(U580, 5); bits_for_big!(U581, 5); bits_for_big!(U582, 5); bits_for_big!(U583, 5); bits_for_big!(U584, 5); bits_for_big!(U585, 5); bits_for_big!(U586, 5); bits_for_big!(U587, 5); bits_for_big!(U588, 5); bits_for_big!(U589, 5); bits_for_big!(U590, 5); bits_for_big!(U591, 5); bits_for_big!(U592, 5); 
bits_for_big!(U593, 5); bits_for_big!(U594, 5); bits_for_big!(U595, 5); bits_for_big!(U596, 5); bits_for_big!(U597, 5); bits_for_big!(U598, 5); bits_for_big!(U599, 5); bits_for_big!(U600, 5); bits_for_big!(U601, 5); bits_for_big!(U602, 5); bits_for_big!(U603, 5); bits_for_big!(U604, 5); bits_for_big!(U605, 5); bits_for_big!(U606, 5); bits_for_big!(U607, 5); bits_for_big!(U608, 5); bits_for_big!(U609, 5); bits_for_big!(U610, 5); bits_for_big!(U611, 5); bits_for_big!(U612, 5); bits_for_big!(U613, 5); bits_for_big!(U614, 5); bits_for_big!(U615, 5); bits_for_big!(U616, 5); bits_for_big!(U617, 5); bits_for_big!(U618, 5); bits_for_big!(U619, 5); bits_for_big!(U620, 5); bits_for_big!(U621, 5); bits_for_big!(U622, 5); bits_for_big!(U623, 5); bits_for_big!(U624, 5); bits_for_big!(U625, 5); bits_for_big!(U626, 5); bits_for_big!(U627, 5); bits_for_big!(U628, 5); bits_for_big!(U629, 5); bits_for_big!(U630, 5); bits_for_big!(U631, 5); bits_for_big!(U632, 5); bits_for_big!(U633, 5); bits_for_big!(U634, 5); bits_for_big!(U635, 5); bits_for_big!(U636, 5); bits_for_big!(U637, 5); bits_for_big!(U638, 5); bits_for_big!(U639, 5); bits_for_big!(U640, 5); bits_for_big!(U641, 6); bits_for_big!(U642, 6); bits_for_big!(U643, 6); bits_for_big!(U644, 6); bits_for_big!(U645, 6); bits_for_big!(U646, 6); bits_for_big!(U647, 6); bits_for_big!(U648, 6); bits_for_big!(U649, 6); bits_for_big!(U650, 6); bits_for_big!(U651, 6); bits_for_big!(U652, 6); bits_for_big!(U653, 6); bits_for_big!(U654, 6); bits_for_big!(U655, 6); bits_for_big!(U656, 6); bits_for_big!(U657, 6); bits_for_big!(U658, 6); bits_for_big!(U659, 6); bits_for_big!(U660, 6); bits_for_big!(U661, 6); bits_for_big!(U662, 6); bits_for_big!(U663, 6); bits_for_big!(U664, 6); bits_for_big!(U665, 6); bits_for_big!(U666, 6); bits_for_big!(U667, 6); bits_for_big!(U668, 6); bits_for_big!(U669, 6); bits_for_big!(U670, 6); bits_for_big!(U671, 6); bits_for_big!(U672, 6); bits_for_big!(U673, 6); bits_for_big!(U674, 6); bits_for_big!(U675, 6); bits_for_big!(U676, 6); bits_for_big!(U677, 6); bits_for_big!(U678, 6); bits_for_big!(U679, 6); bits_for_big!(U680, 6); bits_for_big!(U681, 6); bits_for_big!(U682, 6); bits_for_big!(U683, 6); bits_for_big!(U684, 6); bits_for_big!(U685, 6); bits_for_big!(U686, 6); bits_for_big!(U687, 6); bits_for_big!(U688, 6); bits_for_big!(U689, 6); bits_for_big!(U690, 6); bits_for_big!(U691, 6); bits_for_big!(U692, 6); bits_for_big!(U693, 6); bits_for_big!(U694, 6); bits_for_big!(U695, 6); bits_for_big!(U696, 6); bits_for_big!(U697, 6); bits_for_big!(U698, 6); bits_for_big!(U699, 6); bits_for_big!(U700, 6); bits_for_big!(U701, 6); bits_for_big!(U702, 6); bits_for_big!(U703, 6); bits_for_big!(U704, 6); bits_for_big!(U705, 6); bits_for_big!(U706, 6); bits_for_big!(U707, 6); bits_for_big!(U708, 6); bits_for_big!(U709, 6); bits_for_big!(U710, 6); bits_for_big!(U711, 6); bits_for_big!(U712, 6); bits_for_big!(U713, 6); bits_for_big!(U714, 6); bits_for_big!(U715, 6); bits_for_big!(U716, 6); bits_for_big!(U717, 6); bits_for_big!(U718, 6); bits_for_big!(U719, 6); bits_for_big!(U720, 6); bits_for_big!(U721, 6); bits_for_big!(U722, 6); bits_for_big!(U723, 6); bits_for_big!(U724, 6); bits_for_big!(U725, 6); bits_for_big!(U726, 6); bits_for_big!(U727, 6); bits_for_big!(U728, 6); bits_for_big!(U729, 6); bits_for_big!(U730, 6); bits_for_big!(U731, 6); bits_for_big!(U732, 6); bits_for_big!(U733, 6); bits_for_big!(U734, 6); bits_for_big!(U735, 6); bits_for_big!(U736, 6); bits_for_big!(U737, 6); bits_for_big!(U738, 6); bits_for_big!(U739, 6); bits_for_big!(U740, 6); 
bits_for_big!(U741, 6); bits_for_big!(U742, 6); bits_for_big!(U743, 6); bits_for_big!(U744, 6); bits_for_big!(U745, 6); bits_for_big!(U746, 6); bits_for_big!(U747, 6); bits_for_big!(U748, 6); bits_for_big!(U749, 6); bits_for_big!(U750, 6); bits_for_big!(U751, 6); bits_for_big!(U752, 6); bits_for_big!(U753, 6); bits_for_big!(U754, 6); bits_for_big!(U755, 6); bits_for_big!(U756, 6); bits_for_big!(U757, 6); bits_for_big!(U758, 6); bits_for_big!(U759, 6); bits_for_big!(U760, 6); bits_for_big!(U761, 6); bits_for_big!(U762, 6); bits_for_big!(U763, 6); bits_for_big!(U764, 6); bits_for_big!(U765, 6); bits_for_big!(U766, 6); bits_for_big!(U767, 6); bits_for_big!(U768, 6); bits_for_big!(U769, 7); bits_for_big!(U770, 7); bits_for_big!(U771, 7); bits_for_big!(U772, 7); bits_for_big!(U773, 7); bits_for_big!(U774, 7); bits_for_big!(U775, 7); bits_for_big!(U776, 7); bits_for_big!(U777, 7); bits_for_big!(U778, 7); bits_for_big!(U779, 7); bits_for_big!(U780, 7); bits_for_big!(U781, 7); bits_for_big!(U782, 7); bits_for_big!(U783, 7); bits_for_big!(U784, 7); bits_for_big!(U785, 7); bits_for_big!(U786, 7); bits_for_big!(U787, 7); bits_for_big!(U788, 7); bits_for_big!(U789, 7); bits_for_big!(U790, 7); bits_for_big!(U791, 7); bits_for_big!(U792, 7); bits_for_big!(U793, 7); bits_for_big!(U794, 7); bits_for_big!(U795, 7); bits_for_big!(U796, 7); bits_for_big!(U797, 7); bits_for_big!(U798, 7); bits_for_big!(U799, 7); bits_for_big!(U800, 7); bits_for_big!(U801, 7); bits_for_big!(U802, 7); bits_for_big!(U803, 7); bits_for_big!(U804, 7); bits_for_big!(U805, 7); bits_for_big!(U806, 7); bits_for_big!(U807, 7); bits_for_big!(U808, 7); bits_for_big!(U809, 7); bits_for_big!(U810, 7); bits_for_big!(U811, 7); bits_for_big!(U812, 7); bits_for_big!(U813, 7); bits_for_big!(U814, 7); bits_for_big!(U815, 7); bits_for_big!(U816, 7); bits_for_big!(U817, 7); bits_for_big!(U818, 7); bits_for_big!(U819, 7); bits_for_big!(U820, 7); bits_for_big!(U821, 7); bits_for_big!(U822, 7); bits_for_big!(U823, 7); bits_for_big!(U824, 7); bits_for_big!(U825, 7); bits_for_big!(U826, 7); bits_for_big!(U827, 7); bits_for_big!(U828, 7); bits_for_big!(U829, 7); bits_for_big!(U830, 7); bits_for_big!(U831, 7); bits_for_big!(U832, 7); bits_for_big!(U833, 7); bits_for_big!(U834, 7); bits_for_big!(U835, 7); bits_for_big!(U836, 7); bits_for_big!(U837, 7); bits_for_big!(U838, 7); bits_for_big!(U839, 7); bits_for_big!(U840, 7); bits_for_big!(U841, 7); bits_for_big!(U842, 7); bits_for_big!(U843, 7); bits_for_big!(U844, 7); bits_for_big!(U845, 7); bits_for_big!(U846, 7); bits_for_big!(U847, 7); bits_for_big!(U848, 7); bits_for_big!(U849, 7); bits_for_big!(U850, 7); bits_for_big!(U851, 7); bits_for_big!(U852, 7); bits_for_big!(U853, 7); bits_for_big!(U854, 7); bits_for_big!(U855, 7); bits_for_big!(U856, 7); bits_for_big!(U857, 7); bits_for_big!(U858, 7); bits_for_big!(U859, 7); bits_for_big!(U860, 7); bits_for_big!(U861, 7); bits_for_big!(U862, 7); bits_for_big!(U863, 7); bits_for_big!(U864, 7); bits_for_big!(U865, 7); bits_for_big!(U866, 7); bits_for_big!(U867, 7); bits_for_big!(U868, 7); bits_for_big!(U869, 7); bits_for_big!(U870, 7); bits_for_big!(U871, 7); bits_for_big!(U872, 7); bits_for_big!(U873, 7); bits_for_big!(U874, 7); bits_for_big!(U875, 7); bits_for_big!(U876, 7); bits_for_big!(U877, 7); bits_for_big!(U878, 7); bits_for_big!(U879, 7); bits_for_big!(U880, 7); bits_for_big!(U881, 7); bits_for_big!(U882, 7); bits_for_big!(U883, 7); bits_for_big!(U884, 7); bits_for_big!(U885, 7); bits_for_big!(U886, 7); bits_for_big!(U887, 7); bits_for_big!(U888, 7); 
bits_for_big!(U889, 7); bits_for_big!(U890, 7); bits_for_big!(U891, 7); bits_for_big!(U892, 7); bits_for_big!(U893, 7); bits_for_big!(U894, 7); bits_for_big!(U895, 7); bits_for_big!(U896, 7); bits_for_big!(U897, 8); bits_for_big!(U898, 8); bits_for_big!(U899, 8); bits_for_big!(U900, 8); bits_for_big!(U901, 8); bits_for_big!(U902, 8); bits_for_big!(U903, 8); bits_for_big!(U904, 8); bits_for_big!(U905, 8); bits_for_big!(U906, 8); bits_for_big!(U907, 8); bits_for_big!(U908, 8); bits_for_big!(U909, 8); bits_for_big!(U910, 8); bits_for_big!(U911, 8); bits_for_big!(U912, 8); bits_for_big!(U913, 8); bits_for_big!(U914, 8); bits_for_big!(U915, 8); bits_for_big!(U916, 8); bits_for_big!(U917, 8); bits_for_big!(U918, 8); bits_for_big!(U919, 8); bits_for_big!(U920, 8); bits_for_big!(U921, 8); bits_for_big!(U922, 8); bits_for_big!(U923, 8); bits_for_big!(U924, 8); bits_for_big!(U925, 8); bits_for_big!(U926, 8); bits_for_big!(U927, 8); bits_for_big!(U928, 8); bits_for_big!(U929, 8); bits_for_big!(U930, 8); bits_for_big!(U931, 8); bits_for_big!(U932, 8); bits_for_big!(U933, 8); bits_for_big!(U934, 8); bits_for_big!(U935, 8); bits_for_big!(U936, 8); bits_for_big!(U937, 8); bits_for_big!(U938, 8); bits_for_big!(U939, 8); bits_for_big!(U940, 8); bits_for_big!(U941, 8); bits_for_big!(U942, 8); bits_for_big!(U943, 8); bits_for_big!(U944, 8); bits_for_big!(U945, 8); bits_for_big!(U946, 8); bits_for_big!(U947, 8); bits_for_big!(U948, 8); bits_for_big!(U949, 8); bits_for_big!(U950, 8); bits_for_big!(U951, 8); bits_for_big!(U952, 8); bits_for_big!(U953, 8); bits_for_big!(U954, 8); bits_for_big!(U955, 8); bits_for_big!(U956, 8); bits_for_big!(U957, 8); bits_for_big!(U958, 8); bits_for_big!(U959, 8); bits_for_big!(U960, 8); bits_for_big!(U961, 8); bits_for_big!(U962, 8); bits_for_big!(U963, 8); bits_for_big!(U964, 8); bits_for_big!(U965, 8); bits_for_big!(U966, 8); bits_for_big!(U967, 8); bits_for_big!(U968, 8); bits_for_big!(U969, 8); bits_for_big!(U970, 8); bits_for_big!(U971, 8); bits_for_big!(U972, 8); bits_for_big!(U973, 8); bits_for_big!(U974, 8); bits_for_big!(U975, 8); bits_for_big!(U976, 8); bits_for_big!(U977, 8); bits_for_big!(U978, 8); bits_for_big!(U979, 8); bits_for_big!(U980, 8); bits_for_big!(U981, 8); bits_for_big!(U982, 8); bits_for_big!(U983, 8); bits_for_big!(U984, 8); bits_for_big!(U985, 8); bits_for_big!(U986, 8); bits_for_big!(U987, 8); bits_for_big!(U988, 8); bits_for_big!(U989, 8); bits_for_big!(U990, 8); bits_for_big!(U991, 8); bits_for_big!(U992, 8); bits_for_big!(U993, 8); bits_for_big!(U994, 8); bits_for_big!(U995, 8); bits_for_big!(U996, 8); bits_for_big!(U997, 8); bits_for_big!(U998, 8); bits_for_big!(U999, 8); bits_for_big!(U1000, 8); bits_for_big!(U1001, 8); bits_for_big!(U1002, 8); bits_for_big!(U1003, 8); bits_for_big!(U1004, 8); bits_for_big!(U1005, 8); bits_for_big!(U1006, 8); bits_for_big!(U1007, 8); bits_for_big!(U1008, 8); bits_for_big!(U1009, 8); bits_for_big!(U1010, 8); bits_for_big!(U1011, 8); bits_for_big!(U1012, 8); bits_for_big!(U1013, 8); bits_for_big!(U1014, 8); bits_for_big!(U1015, 8); bits_for_big!(U1016, 8); bits_for_big!(U1017, 8); bits_for_big!(U1018, 8); bits_for_big!(U1019, 8); bits_for_big!(U1020, 8); bits_for_big!(U1021, 8); bits_for_big!(U1022, 8); bits_for_big!(U1023, 8); bits_for_big!(U1024, 8); vendor/bitmaps/src/lib.rs0000664000175000017500000000461514160055207016217 0ustar mwhudsonmwhudson// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. 
If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. #![forbid(rust_2018_idioms)] #![deny(nonstandard_style)] #![warn(unreachable_pub)] #![allow(clippy::missing_safety_doc)] #![cfg_attr(not(feature = "std"), no_std)] //! This crate provides the [`Bitmap`][Bitmap] type as a convenient and //! efficient way of declaring and working with fixed size bitmaps in Rust. //! //! # Examples //! //! ```rust //! # #[macro_use] extern crate bitmaps; //! # use bitmaps::Bitmap; //! # use typenum::U10; //! let mut bitmap: Bitmap<U10> = Bitmap::new(); //! assert_eq!(bitmap.set(5, true), false); //! assert_eq!(bitmap.set(5, true), true); //! assert_eq!(bitmap.get(5), true); //! assert_eq!(bitmap.get(6), false); //! assert_eq!(bitmap.len(), 1); //! assert_eq!(bitmap.set(3, true), false); //! assert_eq!(bitmap.len(), 2); //! assert_eq!(bitmap.first_index(), Some(3)); //! ``` //! //! # X86 Arch Support //! //! On `x86` and `x86_64` architectures, [`Bitmap`][Bitmap]s of size 256, 512, //! 768 and 1024 gain the [`load_m256i()`][load_m256i] method, which reads the //! bitmap into an [`__m256i`][m256i] or an array of [`__m256i`][m256i] using //! [`_mm256_loadu_si256()`][loadu_si256]. [`Bitmap`][Bitmap]s of size 128 as //! well as the previous gain the [`load_m128i()`][load_m128i] method, which //! does the same for [`__m128i`][m128i]. //! //! In addition, [`Bitmap<U128>`][Bitmap] and [`Bitmap<U256>`][Bitmap] will have //! `From` and `Into` implementations for [`__m128i`][m128i] and //! [`__m256i`][m256i] respectively. //! //! Note that alignment is unaffected - your bitmaps will be aligned //! appropriately for `u128`, not [`__m128i`][m128i] or [`__m256i`][m256i], //! unless you arrange for it to be otherwise. This may affect the performance //! of SIMD instructions. //! //! [Bitmap]: struct.Bitmap.html //! [load_m128i]: struct.Bitmap.html#method.load_m128i //! [load_m256i]: struct.Bitmap.html#method.load_m256i //! [m128i]: https://doc.rust-lang.org/core/arch/x86_64/struct.__m128i.html //! [m256i]: https://doc.rust-lang.org/core/arch/x86_64/struct.__m256i.html //! [loadu_si256]: https://doc.rust-lang.org/core/arch/x86_64/fn._mm256_loadu_si256.html mod bitmap; mod types; #[doc(inline)] pub use crate::bitmap::{Bitmap, Iter}; #[doc(inline)] pub use crate::types::{BitOps, Bits}; vendor/bitmaps/README.md0000664000175000017500000000260114160055207015564 0ustar mwhudsonmwhudson# bitmaps A fixed size compact boolean array in Rust. ## Overview This crate provides a convenient and efficient way of declaring and working with fixed size bitmaps in Rust. It was originally split out from the [sized-chunks] crate and its primary purpose is to support it, but the `Bitmap` type has proven to be generally useful enough that it was split off into a separate crate. ## Example ```rust use bitmaps::Bitmap; use typenum::U10; fn main() { let mut bitmap = Bitmap::<U10>::new(); assert_eq!(bitmap.set(5, true), false); assert_eq!(bitmap.set(5, true), true); assert_eq!(bitmap.get(5), true); assert_eq!(bitmap.get(6), false); assert_eq!(bitmap.len(), 1); assert_eq!(bitmap.set(3, true), false); assert_eq!(bitmap.len(), 2); assert_eq!(bitmap.first_index(), Some(3)); } ``` ## Documentation * [API docs](https://docs.rs/bitmaps) ## Licence Copyright 2019 Bodil Stokke This software is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
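## Further example

Building on the example above, this minimal sketch shows the bitwise operators, the `Bitmap::mask()` constructor and the index iterator described in the crate documentation and changelog. It assumes the same `bitmaps` and `typenum` dependencies; the `U10` size is chosen purely for illustration.

```rust
use bitmaps::Bitmap;
use typenum::U10;

fn main() {
    // The mask constructor sets every bit below the given index.
    let low: Bitmap<U10> = Bitmap::mask(4);

    // Set one further bit by hand.
    let mut high: Bitmap<U10> = Bitmap::new();
    high.set(8, true);

    // Bitwise operators combine bitmaps of the same size.
    let both = low | high;
    assert_eq!(both.len(), 5);
    assert_eq!(both.first_index(), Some(0));

    // Iterating over a reference yields the indices of the `true` bits.
    let indices: Vec<usize> = both.into_iter().collect();
    assert_eq!(indices, vec![0, 1, 2, 3, 8]);
}
```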
## Code of Conduct Please note that this project is released with a [Contributor Code of Conduct][coc]. By participating in this project you agree to abide by its terms. [sized-chunks]: https://github.com/bodil/sized-chunks [coc]: https://github.com/bodil/bitmaps/blob/master/CODE_OF_CONDUCT.md vendor/autocfg/0000775000175000017500000000000014160055207014277 5ustar mwhudsonmwhudsonvendor/autocfg/.cargo-checksum.json0000664000175000017500000000013114160055207020136 0ustar mwhudsonmwhudson{"files":{},"package":"cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"}vendor/autocfg/LICENSE-APACHE0000664000175000017500000002513714160055207016233 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/autocfg/Cargo.toml0000664000175000017500000000162714160055207016235 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "autocfg" version = "1.0.1" authors = ["Josh Stone "] exclude = ["/.github/**", "/bors.toml"] description = "Automatic cfg for Rust compiler features" readme = "README.md" keywords = ["rustc", "build", "autoconf"] categories = ["development-tools::build-utils"] license = "Apache-2.0 OR MIT" repository = "https://github.com/cuviper/autocfg" [dependencies] vendor/autocfg/src/0000775000175000017500000000000014160055207015066 5ustar mwhudsonmwhudsonvendor/autocfg/src/error.rs0000664000175000017500000000260614160055207016571 0ustar mwhudsonmwhudsonuse std::error; use std::fmt; use std::io; use std::num; use std::str; /// A common error type for the `autocfg` crate. 
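// A minimal sketch of how the probing API exercised in the autocfg test suite
// further below is typically driven from a Cargo build script; the `has_i128`
// and `has_alloc` cfg names are illustrative assumptions rather than anything
// defined by the crate itself:
//
//     // build.rs
//     fn main() {
//         // Probe the current rustc; only AutoCfg::new(), probe_type() and
//         // probe_sysroot_crate(), as seen in the tests, are relied on here.
//         let ac = autocfg::AutoCfg::new().expect("could not probe rustc");
//         if ac.probe_type("i128") {
//             println!("cargo:rustc-cfg=has_i128");
//         }
//         if ac.probe_sysroot_crate("alloc") {
//             println!("cargo:rustc-cfg=has_alloc");
//         }
//     }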
#[derive(Debug)] pub struct Error { kind: ErrorKind, } impl error::Error for Error { fn description(&self) -> &str { "AutoCfg error" } fn cause(&self) -> Option<&error::Error> { match self.kind { ErrorKind::Io(ref e) => Some(e), ErrorKind::Num(ref e) => Some(e), ErrorKind::Utf8(ref e) => Some(e), ErrorKind::Other(_) => None, } } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> { match self.kind { ErrorKind::Io(ref e) => e.fmt(f), ErrorKind::Num(ref e) => e.fmt(f), ErrorKind::Utf8(ref e) => e.fmt(f), ErrorKind::Other(s) => s.fmt(f), } } } #[derive(Debug)] enum ErrorKind { Io(io::Error), Num(num::ParseIntError), Utf8(str::Utf8Error), Other(&'static str), } pub fn from_io(e: io::Error) -> Error { Error { kind: ErrorKind::Io(e), } } pub fn from_num(e: num::ParseIntError) -> Error { Error { kind: ErrorKind::Num(e), } } pub fn from_utf8(e: str::Utf8Error) -> Error { Error { kind: ErrorKind::Utf8(e), } } pub fn from_str(s: &'static str) -> Error { Error { kind: ErrorKind::Other(s), } } vendor/autocfg/src/tests.rs0000664000175000017500000001154114160055207016600 0ustar mwhudsonmwhudsonuse super::AutoCfg; use std::env; impl AutoCfg { fn core_std(&self, path: &str) -> String { let krate = if self.no_std { "core" } else { "std" }; format!("{}::{}", krate, path) } fn assert_std(&self, probe_result: bool) { assert_eq!(!self.no_std, probe_result); } fn assert_min(&self, major: usize, minor: usize, probe_result: bool) { assert_eq!(self.probe_rustc_version(major, minor), probe_result); } fn for_test() -> Result<Self, super::Error> { match env::var_os("TESTS_TARGET_DIR") { Some(d) => Self::with_dir(d), None => Self::with_dir("target"), } } } #[test] fn autocfg_version() { let ac = AutoCfg::for_test().unwrap(); println!("version: {:?}", ac.rustc_version); assert!(ac.probe_rustc_version(1, 0)); } #[test] fn version_cmp() { use super::version::Version; let v123 = Version::new(1, 2, 3); assert!(Version::new(1, 0, 0) < v123); assert!(Version::new(1, 2, 2) < v123); assert!(Version::new(1, 2, 3) == v123); assert!(Version::new(1, 2, 4) > v123); assert!(Version::new(1, 10, 0) > v123); assert!(Version::new(2, 0, 0) > v123); } #[test] fn probe_add() { let ac = AutoCfg::for_test().unwrap(); let add = ac.core_std("ops::Add"); let add_rhs = add.clone() + "<i32>"; let add_rhs_output = add.clone() + "<i32, Output = i32>"; let dyn_add_rhs_output = "dyn ".to_string() + &*add_rhs_output; assert!(ac.probe_path(&add)); assert!(ac.probe_trait(&add)); assert!(ac.probe_trait(&add_rhs)); assert!(ac.probe_trait(&add_rhs_output)); ac.assert_min(1, 27, ac.probe_type(&dyn_add_rhs_output)); } #[test] fn probe_as_ref() { let ac = AutoCfg::for_test().unwrap(); let as_ref = ac.core_std("convert::AsRef"); let as_ref_str = as_ref.clone() + "<str>"; let dyn_as_ref_str = "dyn ".to_string() + &*as_ref_str; assert!(ac.probe_path(&as_ref)); assert!(ac.probe_trait(&as_ref_str)); assert!(ac.probe_type(&as_ref_str)); ac.assert_min(1, 27, ac.probe_type(&dyn_as_ref_str)); } #[test] fn probe_i128() { let ac = AutoCfg::for_test().unwrap(); let i128_path = ac.core_std("i128"); ac.assert_min(1, 26, ac.probe_path(&i128_path)); ac.assert_min(1, 26, ac.probe_type("i128")); } #[test] fn probe_sum() { let ac = AutoCfg::for_test().unwrap(); let sum = ac.core_std("iter::Sum"); let sum_i32 = sum.clone() + "<i32>"; let dyn_sum_i32 = "dyn ".to_string() + &*sum_i32; ac.assert_min(1, 12, ac.probe_path(&sum)); ac.assert_min(1, 12, ac.probe_trait(&sum)); ac.assert_min(1, 12, ac.probe_trait(&sum_i32)); ac.assert_min(1, 12, ac.probe_type(&sum_i32)); ac.assert_min(1, 27,
ac.probe_type(&dyn_sum_i32)); } #[test] fn probe_std() { let ac = AutoCfg::for_test().unwrap(); ac.assert_std(ac.probe_sysroot_crate("std")); } #[test] fn probe_alloc() { let ac = AutoCfg::for_test().unwrap(); ac.assert_min(1, 36, ac.probe_sysroot_crate("alloc")); } #[test] fn probe_bad_sysroot_crate() { let ac = AutoCfg::for_test().unwrap(); assert!(!ac.probe_sysroot_crate("doesnt_exist")); } #[test] fn probe_no_std() { let ac = AutoCfg::for_test().unwrap(); assert!(ac.probe_type("i32")); assert!(ac.probe_type("[i32]")); ac.assert_std(ac.probe_type("Vec")); } #[test] fn probe_expression() { let ac = AutoCfg::for_test().unwrap(); assert!(ac.probe_expression(r#""test".trim_left()"#)); ac.assert_min(1, 30, ac.probe_expression(r#""test".trim_start()"#)); ac.assert_std(ac.probe_expression("[1, 2, 3].to_vec()")); } #[test] fn probe_constant() { let ac = AutoCfg::for_test().unwrap(); assert!(ac.probe_constant("1 + 2 + 3")); ac.assert_min(1, 33, ac.probe_constant("{ let x = 1 + 2 + 3; x * x }")); ac.assert_min(1, 39, ac.probe_constant(r#""test".len()"#)); } #[test] fn dir_does_not_contain_target() { assert!(!super::dir_contains_target( &Some("x86_64-unknown-linux-gnu".into()), &"/project/target/debug/build/project-ea75983148559682/out".into(), None, )); } #[test] fn dir_does_contain_target() { assert!(super::dir_contains_target( &Some("x86_64-unknown-linux-gnu".into()), &"/project/target/x86_64-unknown-linux-gnu/debug/build/project-0147aca016480b9d/out".into(), None, )); } #[test] fn dir_does_not_contain_target_with_custom_target_dir() { assert!(!super::dir_contains_target( &Some("x86_64-unknown-linux-gnu".into()), &"/project/custom/debug/build/project-ea75983148559682/out".into(), Some("custom".into()), )); } #[test] fn dir_does_contain_target_with_custom_target_dir() { assert!(super::dir_contains_target( &Some("x86_64-unknown-linux-gnu".into()), &"/project/custom/x86_64-unknown-linux-gnu/debug/build/project-0147aca016480b9d/out".into(), Some("custom".into()), )); } vendor/autocfg/src/version.rs0000664000175000017500000000405414160055207017124 0ustar mwhudsonmwhudsonuse std::path::Path; use std::process::Command; use std::str; use super::{error, Error}; /// A version structure for making relative comparisons. #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)] pub struct Version { major: usize, minor: usize, patch: usize, } impl Version { /// Creates a `Version` instance for a specific `major.minor.patch` version. pub fn new(major: usize, minor: usize, patch: usize) -> Self { Version { major: major, minor: minor, patch: patch, } } pub fn from_rustc(rustc: &Path) -> Result { // Get rustc's verbose version let output = try!(Command::new(rustc) .args(&["--version", "--verbose"]) .output() .map_err(error::from_io)); if !output.status.success() { return Err(error::from_str("could not execute rustc")); } let output = try!(str::from_utf8(&output.stdout).map_err(error::from_utf8)); // Find the release line in the verbose version output. let release = match output.lines().find(|line| line.starts_with("release: ")) { Some(line) => &line["release: ".len()..], None => return Err(error::from_str("could not find rustc release")), }; // Strip off any extra channel info, e.g. "-beta.N", "-nightly" let version = match release.find('-') { Some(i) => &release[..i], None => release, }; // Split the version into semver components. 
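// (`splitn(3, '.')` yields at most three pieces, so a release string such as
// "1.56.1" parses as major = 1, minor = 56, patch = 1.)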
let mut iter = version.splitn(3, '.'); let major = try!(iter.next().ok_or(error::from_str("missing major version"))); let minor = try!(iter.next().ok_or(error::from_str("missing minor version"))); let patch = try!(iter.next().ok_or(error::from_str("missing patch version"))); Ok(Version::new( try!(major.parse().map_err(error::from_num)), try!(minor.parse().map_err(error::from_num)), try!(patch.parse().map_err(error::from_num)), )) } } vendor/autocfg/src/lib.rs0000664000175000017500000003356614160055207016217 0ustar mwhudsonmwhudson//! A Rust library for build scripts to automatically configure code based on //! compiler support. Code snippets are dynamically tested to see if the `rustc` //! will accept them, rather than hard-coding specific version support. //! //! //! ## Usage //! //! Add this to your `Cargo.toml`: //! //! ```toml //! [build-dependencies] //! autocfg = "1" //! ``` //! //! Then use it in your `build.rs` script to detect compiler features. For //! example, to test for 128-bit integer support, it might look like: //! //! ```rust //! extern crate autocfg; //! //! fn main() { //! # // Normally, cargo will set `OUT_DIR` for build scripts. //! # std::env::set_var("OUT_DIR", "target"); //! let ac = autocfg::new(); //! ac.emit_has_type("i128"); //! //! // (optional) We don't need to rerun for anything external. //! autocfg::rerun_path("build.rs"); //! } //! ``` //! //! If the type test succeeds, this will write a `cargo:rustc-cfg=has_i128` line //! for Cargo, which translates to Rust arguments `--cfg has_i128`. Then in the //! rest of your Rust code, you can add `#[cfg(has_i128)]` conditions on code that //! should only be used when the compiler supports it. //! //! ## Caution //! //! Many of the probing methods of `AutoCfg` document the particular template they //! use, **subject to change**. The inputs are not validated to make sure they are //! semantically correct for their expected use, so it's _possible_ to escape and //! inject something unintended. However, such abuse is unsupported and will not //! be considered when making changes to the templates. #![deny(missing_debug_implementations)] #![deny(missing_docs)] // allow future warnings that can't be fixed while keeping 1.0 compatibility #![allow(unknown_lints)] #![allow(bare_trait_objects)] #![allow(ellipsis_inclusive_range_patterns)] /// Local macro to avoid `std::try!`, deprecated in Rust 1.39. macro_rules! try { ($result:expr) => { match $result { Ok(value) => value, Err(error) => return Err(error), } }; } use std::env; use std::ffi::OsString; use std::fs; use std::io::{stderr, Write}; use std::path::PathBuf; use std::process::{Command, Stdio}; #[allow(deprecated)] use std::sync::atomic::ATOMIC_USIZE_INIT; use std::sync::atomic::{AtomicUsize, Ordering}; mod error; pub use error::Error; mod version; use version::Version; #[cfg(test)] mod tests; /// Helper to detect compiler features for `cfg` output in build scripts. #[derive(Clone, Debug)] pub struct AutoCfg { out_dir: PathBuf, rustc: PathBuf, rustc_version: Version, target: Option, no_std: bool, rustflags: Option>, } /// Writes a config flag for rustc on standard out. /// /// This looks like: `cargo:rustc-cfg=CFG` /// /// Cargo will use this in arguments to rustc, like `--cfg CFG`. pub fn emit(cfg: &str) { println!("cargo:rustc-cfg={}", cfg); } /// Writes a line telling Cargo to rerun the build script if `path` changes. /// /// This looks like: `cargo:rerun-if-changed=PATH` /// /// This requires at least cargo 0.7.0, corresponding to rustc 1.6.0. 
Earlier /// versions of cargo will simply ignore the directive. pub fn rerun_path(path: &str) { println!("cargo:rerun-if-changed={}", path); } /// Writes a line telling Cargo to rerun the build script if the environment /// variable `var` changes. /// /// This looks like: `cargo:rerun-if-env-changed=VAR` /// /// This requires at least cargo 0.21.0, corresponding to rustc 1.20.0. Earlier /// versions of cargo will simply ignore the directive. pub fn rerun_env(var: &str) { println!("cargo:rerun-if-env-changed={}", var); } /// Create a new `AutoCfg` instance. /// /// # Panics /// /// Panics if `AutoCfg::new()` returns an error. pub fn new() -> AutoCfg { AutoCfg::new().unwrap() } impl AutoCfg { /// Create a new `AutoCfg` instance. /// /// # Common errors /// /// - `rustc` can't be executed, from `RUSTC` or in the `PATH`. /// - The version output from `rustc` can't be parsed. /// - `OUT_DIR` is not set in the environment, or is not a writable directory. /// pub fn new() -> Result<Self, Error> { match env::var_os("OUT_DIR") { Some(d) => Self::with_dir(d), None => Err(error::from_str("no OUT_DIR specified!")), } } /// Create a new `AutoCfg` instance with the specified output directory. /// /// # Common errors /// /// - `rustc` can't be executed, from `RUSTC` or in the `PATH`. /// - The version output from `rustc` can't be parsed. /// - `dir` is not a writable directory. /// pub fn with_dir<T: Into<PathBuf>>(dir: T) -> Result<Self, Error> { let rustc = env::var_os("RUSTC").unwrap_or_else(|| "rustc".into()); let rustc: PathBuf = rustc.into(); let rustc_version = try!(Version::from_rustc(&rustc)); let target = env::var_os("TARGET"); // Sanity check the output directory let dir = dir.into(); let meta = try!(fs::metadata(&dir).map_err(error::from_io)); if !meta.is_dir() || meta.permissions().readonly() { return Err(error::from_str("output path is not a writable directory")); } // Cargo only applies RUSTFLAGS for building TARGET artifact in // cross-compilation environment. Sadly, we don't have a way to detect // when we're building HOST artifact in a cross-compilation environment, // so for now we only apply RUSTFLAGS when cross-compiling an artifact. // // See https://github.com/cuviper/autocfg/pull/10#issuecomment-527575030. let rustflags = if target != env::var_os("HOST") || dir_contains_target(&target, &dir, env::var_os("CARGO_TARGET_DIR")) { env::var("RUSTFLAGS").ok().map(|rustflags| { // This is meant to match how cargo handles the RUSTFLAGS environment // variable. // See https://github.com/rust-lang/cargo/blob/69aea5b6f69add7c51cca939a79644080c0b0ba0/src/cargo/core/compiler/build_context/target_info.rs#L434-L441 rustflags .split(' ') .map(str::trim) .filter(|s| !s.is_empty()) .map(str::to_string) .collect::<Vec<_>>() }) } else { None }; let mut ac = AutoCfg { out_dir: dir, rustc: rustc, rustc_version: rustc_version, target: target, no_std: false, rustflags: rustflags, }; // Sanity check with and without `std`. if !ac.probe("").unwrap_or(false) { ac.no_std = true; if !ac.probe("").unwrap_or(false) { // Neither worked, so assume nothing... ac.no_std = false; let warning = b"warning: autocfg could not probe for `std`\n"; stderr().write_all(warning).ok(); } } Ok(ac) } /// Test whether the current `rustc` reports a version greater than /// or equal to "`major`.`minor`". pub fn probe_rustc_version(&self, major: usize, minor: usize) -> bool { self.rustc_version >= Version::new(major, minor, 0) } /// Sets a `cfg` value of the form `rustc_major_minor`, like `rustc_1_29`, /// if the current `rustc` is at least that version.
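    ///
    /// A typical build-script call looks like this (illustrative sketch, not
    /// taken from the upstream docs):
    ///
    /// ```ignore
    /// let ac = autocfg::new();
    /// // Emits `cargo:rustc-cfg=rustc_1_36` when the compiler is 1.36 or newer.
    /// ac.emit_rustc_version(1, 36);
    /// ```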
pub fn emit_rustc_version(&self, major: usize, minor: usize) { if self.probe_rustc_version(major, minor) { emit(&format!("rustc_{}_{}", major, minor)); } } fn probe>(&self, code: T) -> Result { #[allow(deprecated)] static ID: AtomicUsize = ATOMIC_USIZE_INIT; let id = ID.fetch_add(1, Ordering::Relaxed); let mut command = Command::new(&self.rustc); command .arg("--crate-name") .arg(format!("probe{}", id)) .arg("--crate-type=lib") .arg("--out-dir") .arg(&self.out_dir) .arg("--emit=llvm-ir"); if let &Some(ref rustflags) = &self.rustflags { command.args(rustflags); } if let Some(target) = self.target.as_ref() { command.arg("--target").arg(target); } command.arg("-").stdin(Stdio::piped()); let mut child = try!(command.spawn().map_err(error::from_io)); let mut stdin = child.stdin.take().expect("rustc stdin"); if self.no_std { try!(stdin.write_all(b"#![no_std]\n").map_err(error::from_io)); } try!(stdin.write_all(code.as_ref()).map_err(error::from_io)); drop(stdin); let status = try!(child.wait().map_err(error::from_io)); Ok(status.success()) } /// Tests whether the given sysroot crate can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// extern crate CRATE as probe; /// ``` pub fn probe_sysroot_crate(&self, name: &str) -> bool { self.probe(format!("extern crate {} as probe;", name)) // `as _` wasn't stabilized until Rust 1.33 .unwrap_or(false) } /// Emits a config value `has_CRATE` if `probe_sysroot_crate` returns true. pub fn emit_sysroot_crate(&self, name: &str) { if self.probe_sysroot_crate(name) { emit(&format!("has_{}", mangle(name))); } } /// Tests whether the given path can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// pub use PATH; /// ``` pub fn probe_path(&self, path: &str) -> bool { self.probe(format!("pub use {};", path)).unwrap_or(false) } /// Emits a config value `has_PATH` if `probe_path` returns true. /// /// Any non-identifier characters in the `path` will be replaced with /// `_` in the generated config value. pub fn emit_has_path(&self, path: &str) { if self.probe_path(path) { emit(&format!("has_{}", mangle(path))); } } /// Emits the given `cfg` value if `probe_path` returns true. pub fn emit_path_cfg(&self, path: &str, cfg: &str) { if self.probe_path(path) { emit(cfg); } } /// Tests whether the given trait can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// pub trait Probe: TRAIT + Sized {} /// ``` pub fn probe_trait(&self, name: &str) -> bool { self.probe(format!("pub trait Probe: {} + Sized {{}}", name)) .unwrap_or(false) } /// Emits a config value `has_TRAIT` if `probe_trait` returns true. /// /// Any non-identifier characters in the trait `name` will be replaced with /// `_` in the generated config value. pub fn emit_has_trait(&self, name: &str) { if self.probe_trait(name) { emit(&format!("has_{}", mangle(name))); } } /// Emits the given `cfg` value if `probe_trait` returns true. pub fn emit_trait_cfg(&self, name: &str, cfg: &str) { if self.probe_trait(name) { emit(cfg); } } /// Tests whether the given type can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// pub type Probe = TYPE; /// ``` pub fn probe_type(&self, name: &str) -> bool { self.probe(format!("pub type Probe = {};", name)) .unwrap_or(false) } /// Emits a config value `has_TYPE` if `probe_type` returns true. 
/// /// Any non-identifier characters in the type `name` will be replaced with /// `_` in the generated config value. pub fn emit_has_type(&self, name: &str) { if self.probe_type(name) { emit(&format!("has_{}", mangle(name))); } } /// Emits the given `cfg` value if `probe_type` returns true. pub fn emit_type_cfg(&self, name: &str, cfg: &str) { if self.probe_type(name) { emit(cfg); } } /// Tests whether the given expression can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// pub fn probe() { let _ = EXPR; } /// ``` pub fn probe_expression(&self, expr: &str) -> bool { self.probe(format!("pub fn probe() {{ let _ = {}; }}", expr)) .unwrap_or(false) } /// Emits the given `cfg` value if `probe_expression` returns true. pub fn emit_expression_cfg(&self, expr: &str, cfg: &str) { if self.probe_expression(expr) { emit(cfg); } } /// Tests whether the given constant expression can be used. /// /// The test code is subject to change, but currently looks like: /// /// ```ignore /// pub const PROBE: () = ((), EXPR).0; /// ``` pub fn probe_constant(&self, expr: &str) -> bool { self.probe(format!("pub const PROBE: () = ((), {}).0;", expr)) .unwrap_or(false) } /// Emits the given `cfg` value if `probe_constant` returns true. pub fn emit_constant_cfg(&self, expr: &str, cfg: &str) { if self.probe_constant(expr) { emit(cfg); } } } fn mangle(s: &str) -> String { s.chars() .map(|c| match c { 'A'...'Z' | 'a'...'z' | '0'...'9' => c, _ => '_', }) .collect() } fn dir_contains_target( target: &Option, dir: &PathBuf, cargo_target_dir: Option, ) -> bool { target .as_ref() .and_then(|target| { dir.to_str().and_then(|dir| { let mut cargo_target_dir = cargo_target_dir .map(PathBuf::from) .unwrap_or_else(|| PathBuf::from("target")); cargo_target_dir.push(target); cargo_target_dir .to_str() .map(|cargo_target_dir| dir.contains(&cargo_target_dir)) }) }) .unwrap_or(false) } vendor/autocfg/tests/0000775000175000017500000000000014160055207015441 5ustar mwhudsonmwhudsonvendor/autocfg/tests/rustflags.rs0000664000175000017500000000106614160055207020024 0ustar mwhudsonmwhudsonextern crate autocfg; use std::env; /// Tests that autocfg uses the RUSTFLAGS environment variable when running /// rustc. #[test] fn test_with_sysroot() { // Use the same path as this test binary. let dir = env::current_exe().unwrap().parent().unwrap().to_path_buf(); env::set_var("RUSTFLAGS", &format!("-L {}", dir.display())); env::set_var("OUT_DIR", &format!("{}", dir.display())); // Ensure HOST != TARGET. env::set_var("HOST", "lol"); let ac = autocfg::AutoCfg::new().unwrap(); assert!(ac.probe_sysroot_crate("autocfg")); } vendor/autocfg/examples/0000775000175000017500000000000014160055207016115 5ustar mwhudsonmwhudsonvendor/autocfg/examples/integers.rs0000664000175000017500000000035314160055207020304 0ustar mwhudsonmwhudsonextern crate autocfg; fn main() { // Normally, cargo will set `OUT_DIR` for build scripts. let ac = autocfg::AutoCfg::with_dir("target").unwrap(); for i in 3..8 { ac.emit_has_type(&format!("i{}", 1 << i)); } } vendor/autocfg/examples/versions.rs0000664000175000017500000000033714160055207020336 0ustar mwhudsonmwhudsonextern crate autocfg; fn main() { // Normally, cargo will set `OUT_DIR` for build scripts. 
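    // This example hard-codes "target" instead; it then emits a cfg flag
    // (rustc_1_0, rustc_1_1, ...) for every minor version the running rustc supports.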
let ac = autocfg::AutoCfg::with_dir("target").unwrap(); for i in 0..100 { ac.emit_rustc_version(1, i); } } vendor/autocfg/examples/paths.rs0000664000175000017500000000124014160055207017577 0ustar mwhudsonmwhudsonextern crate autocfg; fn main() { // Normally, cargo will set `OUT_DIR` for build scripts. let ac = autocfg::AutoCfg::with_dir("target").unwrap(); // since ancient times... ac.emit_has_path("std::vec::Vec"); ac.emit_path_cfg("std::vec::Vec", "has_vec"); // rustc 1.10.0 ac.emit_has_path("std::panic::PanicInfo"); ac.emit_path_cfg("std::panic::PanicInfo", "has_panic_info"); // rustc 1.20.0 ac.emit_has_path("std::mem::ManuallyDrop"); ac.emit_path_cfg("std::mem::ManuallyDrop", "has_manually_drop"); // rustc 1.25.0 ac.emit_has_path("std::ptr::NonNull"); ac.emit_path_cfg("std::ptr::NonNull", "has_non_null"); } vendor/autocfg/examples/traits.rs0000664000175000017500000000147214160055207017775 0ustar mwhudsonmwhudsonextern crate autocfg; fn main() { // Normally, cargo will set `OUT_DIR` for build scripts. let ac = autocfg::AutoCfg::with_dir("target").unwrap(); // since ancient times... ac.emit_has_trait("std::ops::Add"); ac.emit_trait_cfg("std::ops::Add", "has_ops"); // trait parameters have to be provided ac.emit_has_trait("std::borrow::Borrow"); ac.emit_trait_cfg("std::borrow::Borrow", "has_borrow"); // rustc 1.8.0 ac.emit_has_trait("std::ops::AddAssign"); ac.emit_trait_cfg("std::ops::AddAssign", "has_assign_ops"); // rustc 1.12.0 ac.emit_has_trait("std::iter::Sum"); ac.emit_trait_cfg("std::iter::Sum", "has_sum"); // rustc 1.28.0 ac.emit_has_trait("std::alloc::GlobalAlloc"); ac.emit_trait_cfg("std::alloc::GlobalAlloc", "has_global_alloc"); } vendor/autocfg/LICENSE-MIT0000664000175000017500000000203614160055207015734 0ustar mwhudsonmwhudsonCopyright (c) 2018 Josh Stone Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/autocfg/README.md0000664000175000017500000000547214160055207015566 0ustar mwhudsonmwhudsonautocfg ======= [![autocfg crate](https://img.shields.io/crates/v/autocfg.svg)](https://crates.io/crates/autocfg) [![autocfg documentation](https://docs.rs/autocfg/badge.svg)](https://docs.rs/autocfg) ![minimum rustc 1.0](https://img.shields.io/badge/rustc-1.0+-red.svg) ![build status](https://github.com/cuviper/autocfg/workflows/master/badge.svg) A Rust library for build scripts to automatically configure code based on compiler support. Code snippets are dynamically tested to see if the `rustc` will accept them, rather than hard-coding specific version support. 
## Usage Add this to your `Cargo.toml`: ```toml [build-dependencies] autocfg = "1" ``` Then use it in your `build.rs` script to detect compiler features. For example, to test for 128-bit integer support, it might look like: ```rust extern crate autocfg; fn main() { let ac = autocfg::new(); ac.emit_has_type("i128"); // (optional) We don't need to rerun for anything external. autocfg::rerun_path("build.rs"); } ``` If the type test succeeds, this will write a `cargo:rustc-cfg=has_i128` line for Cargo, which translates to Rust arguments `--cfg has_i128`. Then in the rest of your Rust code, you can add `#[cfg(has_i128)]` conditions on code that should only be used when the compiler supports it. ## Release Notes - 1.0.1 (2020-08-20) - Apply `RUSTFLAGS` for more `--target` scenarios, by @adamreichold. - 1.0.0 (2020-01-08) - 🎉 Release 1.0! 🎉 (no breaking changes) - Add `probe_expression` and `emit_expression_cfg` to test arbitrary expressions. - Add `probe_constant` and `emit_constant_cfg` to test arbitrary constant expressions. - 0.1.7 (2019-10-20) - Apply `RUSTFLAGS` when probing `$TARGET != $HOST`, mainly for sysroot, by @roblabla. - 0.1.6 (2019-08-19) - Add `probe`/`emit_sysroot_crate`, by @leo60228. - 0.1.5 (2019-07-16) - Mask some warnings from newer rustc. - 0.1.4 (2019-05-22) - Relax `std`/`no_std` probing to a warning instead of an error. - Improve `rustc` bootstrap compatibility. - 0.1.3 (2019-05-21) - Auto-detects if `#![no_std]` is needed for the `$TARGET`. - 0.1.2 (2019-01-16) - Add `rerun_env(ENV)` to print `cargo:rerun-if-env-changed=ENV`. - Add `rerun_path(PATH)` to print `cargo:rerun-if-changed=PATH`. ## Minimum Rust version policy This crate's minimum supported `rustc` version is `1.0.0`. Compatibility is its entire reason for existence, so this crate will be extremely conservative about raising this requirement. If this is ever deemed necessary, it will be treated as a major breaking change for semver purposes. ## License This project is licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. vendor/autocfg/Cargo.lock0000664000175000017500000000021314160055207016200 0ustar mwhudsonmwhudson# This file is automatically @generated by Cargo. # It is not intended for manual editing. [[package]] name = "autocfg" version = "1.0.1" vendor/bytesize/0000775000175000017500000000000014160055207014505 5ustar mwhudsonmwhudsonvendor/bytesize/.cargo-checksum.json0000664000175000017500000000013114160055207020344 0ustar mwhudsonmwhudson{"files":{},"package":"6c58ec36aac5066d5ca17df51b3e70279f5670a72102f5752cb7e7c856adfc70"}vendor/bytesize/LICENSE0000664000175000017500000002613614160055207015522 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/bytesize/Cargo.toml0000664000175000017500000000200414160055207016431 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "bytesize" version = "1.1.0" authors = ["Hyunsik Choi "] description = "a utility for human-readable byte representations" homepage = "https://github.com/hyunsik/bytesize/" documentation = "https://docs.rs/bytesize/" readme = "README.md" keywords = ["byte", "byte-size", "utility", "human-readable", "format"] license = "Apache-2.0" repository = "https://github.com/hyunsik/bytesize/" [dependencies.serde] version = "1.0" features = ["derive"] optional = true vendor/bytesize/src/0000775000175000017500000000000014160055207015274 5ustar mwhudsonmwhudsonvendor/bytesize/src/parse.rs0000664000175000017500000001342414160055207016760 0ustar mwhudsonmwhudsonuse super::ByteSize; impl std::str::FromStr for ByteSize { type Err = String; fn from_str(value: &str) -> Result<Self, Self::Err> { if let Ok(v) = value.parse::<u64>() { return Ok(Self(v)); } let number: String = value .chars() .take_while(|c| c.is_digit(10) || c == &'.') .collect(); match number.parse::<f64>() { Ok(v) => { let suffix: String = value .chars() .skip_while(|c| c.is_whitespace() || c.is_digit(10) || c == &'.') .collect(); match suffix.parse::<Unit>() { Ok(u) => Ok(Self((v * u) as u64)), Err(error) => Err(format!( "couldn't parse {:?} into a known SI unit, {}", suffix, error )), } } Err(error) => Err(format!( "couldn't parse {:?} into a ByteSize, {}", value, error )), } } } enum Unit { Byte, // power of tens KiloByte, MegaByte, GigaByte, TeraByte, PetaByte, // power of twos KibiByte, MebiByte, GibiByte, TebiByte, PebiByte, } impl Unit { fn factor(&self) -> u64 { match self { Self::Byte => super::B, // power of tens Self::KiloByte => super::KB, Self::MegaByte => super::MB, Self::GigaByte => super::GB, Self::TeraByte => super::TB, Self::PetaByte => super::PB, // power of twos Self::KibiByte => super::KIB, Self::MebiByte => super::MIB, Self::GibiByte => super::GIB, Self::TebiByte => super::TIB, Self::PebiByte => super::PIB, } } } mod impl_ops { use super::Unit; use std::ops; impl ops::Add<u64> for Unit { type Output = u64; fn add(self, other: u64) -> Self::Output { self.factor() + other } } impl ops::Add<Unit> for u64 { type Output = u64; fn add(self, other: Unit) -> Self::Output { self + other.factor() } } impl ops::Mul<u64> for Unit { type Output = u64; fn mul(self, other: u64) -> Self::Output { self.factor() * other } } impl ops::Mul<Unit> for u64 { type Output = u64; fn mul(self, other: Unit) -> Self::Output { self * other.factor() } } impl ops::Add<f64> for Unit { type Output = f64; fn add(self, other: f64) -> Self::Output { self.factor() as f64 + other } } impl ops::Add<Unit> for f64 { type Output = f64; fn add(self, other: Unit) -> Self::Output { other.factor() as f64 + self } } impl ops::Mul<f64> for Unit { type Output = f64; fn mul(self, other: f64) -> Self::Output { self.factor() as f64 * other } } impl ops::Mul<Unit> for f64 { type Output = f64; fn mul(self, other: Unit) -> Self::Output { other.factor() as f64 * self } } } impl std::str::FromStr for Unit { type Err = String; fn from_str(unit: &str) -> Result<Self, Self::Err> { match unit.to_lowercase().as_str() { "b" => Ok(Self::Byte), // power of tens "k" | "kb" => Ok(Self::KiloByte), "m" | "mb" => Ok(Self::MegaByte), "g" | "gb" => Ok(Self::GigaByte), "t" | "tb" => Ok(Self::TeraByte), "p" | "pb" => Ok(Self::PetaByte), // power of twos "ki" | "kib" => Ok(Self::KibiByte), "mi" | "mib" => Ok(Self::MebiByte), "gi" | "gib" => Ok(Self::GibiByte), "ti" | "tib" => Ok(Self::TebiByte), "pi" | "pib" => Ok(Self::PebiByte), _ =>
Err(format!("couldn't parse unit of {:?}", unit)), } } } #[cfg(test)] mod tests { use super::*; #[test] fn when_ok() { // shortcut for writing test cases fn parse(s: &str) -> u64 { s.parse::().unwrap().0 } assert_eq!("0".parse::().unwrap().0, 0); assert_eq!(parse("0"), 0); assert_eq!(parse("500"), 500); assert_eq!(parse("1K"), Unit::KiloByte * 1); assert_eq!(parse("1Ki"), Unit::KibiByte * 1); assert_eq!(parse("1.5Ki"), (1.5 * Unit::KibiByte) as u64); assert_eq!(parse("1KiB"), 1 * Unit::KibiByte); assert_eq!(parse("1.5KiB"), (1.5 * Unit::KibiByte) as u64); assert_eq!(parse("3 MB"), Unit::MegaByte * 3); assert_eq!(parse("4 MiB"), Unit::MebiByte * 4); assert_eq!(parse("6 GB"), 6 * Unit::GigaByte); assert_eq!(parse("4 GiB"), 4 * Unit::GibiByte); assert_eq!(parse("88TB"), 88 * Unit::TeraByte); assert_eq!(parse("521TiB"), 521 * Unit::TebiByte); assert_eq!(parse("8 PB"), 8 * Unit::PetaByte); assert_eq!(parse("8P"), 8 * Unit::PetaByte); assert_eq!(parse("12 PiB"), 12 * Unit::PebiByte); } #[test] fn when_err() { // shortcut for writing test cases fn parse(s: &str) -> Result { s.parse::() } assert!(parse("").is_err()); assert!(parse("a124GB").is_err()); } #[test] fn to_and_from_str() { // shortcut for writing test cases fn parse(s: &str) -> u64 { s.parse::().unwrap().0 } assert_eq!(parse(&format!("{}", parse("128GB"))), 128 * Unit::GigaByte); assert_eq!( parse(&crate::to_string(parse("128.000 GiB"), true)), 128 * Unit::GibiByte ); } } vendor/bytesize/src/lib.rs0000664000175000017500000002445214160055207016417 0ustar mwhudsonmwhudson//! ByteSize is an utility that easily makes bytes size representation //! and helps its arithmetic operations. //! //! ## Example //! //! ```ignore //! extern crate bytesize; //! //! use bytesize::ByteSize; //! //! fn byte_arithmetic_operator() { //! let x = ByteSize::mb(1); //! let y = ByteSize::kb(100); //! //! let plus = x + y; //! print!("{} bytes", plus.as_u64()); //! //! let minus = ByteSize::tb(100) - ByteSize::gb(4); //! print!("{} bytes", minus.as_u64()); //! } //! ``` //! //! It also provides its human readable string as follows: //! //! ```ignore= //! assert_eq!("482 GiB".to_string(), ByteSize::gb(518).to_string(true)); //! assert_eq!("518 GB".to_string(), ByteSize::gb(518).to_string(false)); //! 
``` mod parse; #[cfg(feature = "serde")] #[macro_use] extern crate serde; use std::fmt::{Debug, Display, Formatter, Result}; use std::ops::{Add, AddAssign, Mul, MulAssign}; /// byte size for 1 byte pub const B: u64 = 1; /// bytes size for 1 kilobyte pub const KB: u64 = 1_000; /// bytes size for 1 megabyte pub const MB: u64 = 1_000_000; /// bytes size for 1 gigabyte pub const GB: u64 = 1_000_000_000; /// bytes size for 1 terabyte pub const TB: u64 = 1_000_000_000_000; /// bytes size for 1 petabyte pub const PB: u64 = 1_000_000_000_000_000; /// bytes size for 1 kibibyte pub const KIB: u64 = 1_024; /// bytes size for 1 mebibyte pub const MIB: u64 = 1_048_576; /// bytes size for 1 gibibyte pub const GIB: u64 = 1_073_741_824; /// bytes size for 1 tebibyte pub const TIB: u64 = 1_099_511_627_776; /// bytes size for 1 pebibyte pub const PIB: u64 = 1_125_899_906_842_624; static UNITS: &str = "KMGTPE"; static UNITS_SI: &str = "kMGTPE"; static LN_KB: f64 = 6.931471806; // ln 1024 static LN_KIB: f64 = 6.907755279; // ln 1000 pub fn kb<V: Into<u64>>(size: V) -> u64 { size.into() * KB } pub fn kib<V: Into<u64>>(size: V) -> u64 { size.into() * KIB } pub fn mb<V: Into<u64>>(size: V) -> u64 { size.into() * MB } pub fn mib<V: Into<u64>>(size: V) -> u64 { size.into() * MIB } pub fn gb<V: Into<u64>>(size: V) -> u64 { size.into() * GB } pub fn gib<V: Into<u64>>(size: V) -> u64 { size.into() * GIB } pub fn tb<V: Into<u64>>(size: V) -> u64 { size.into() * TB } pub fn tib<V: Into<u64>>(size: V) -> u64 { size.into() * TIB } pub fn pb<V: Into<u64>>(size: V) -> u64 { size.into() * PB } pub fn pib<V: Into<u64>>(size: V) -> u64 { size.into() * PIB } /// Byte size representation #[derive(Copy, Clone, PartialEq, PartialOrd, Eq, Ord, Hash, Default)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] pub struct ByteSize(pub u64); impl ByteSize { #[inline(always)] pub const fn b(size: u64) -> ByteSize { ByteSize(size) } #[inline(always)] pub const fn kb(size: u64) -> ByteSize { ByteSize(size * KB) } #[inline(always)] pub const fn kib(size: u64) -> ByteSize { ByteSize(size * KIB) } #[inline(always)] pub const fn mb(size: u64) -> ByteSize { ByteSize(size * MB) } #[inline(always)] pub const fn mib(size: u64) -> ByteSize { ByteSize(size * MIB) } #[inline(always)] pub const fn gb(size: u64) -> ByteSize { ByteSize(size * GB) } #[inline(always)] pub const fn gib(size: u64) -> ByteSize { ByteSize(size * GIB) } #[inline(always)] pub const fn tb(size: u64) -> ByteSize { ByteSize(size * TB) } #[inline(always)] pub const fn tib(size: u64) -> ByteSize { ByteSize(size * TIB) } #[inline(always)] pub const fn pb(size: u64) -> ByteSize { ByteSize(size * PB) } #[inline(always)] pub const fn pib(size: u64) -> ByteSize { ByteSize(size * PIB) } #[inline(always)] pub fn as_u64(&self) -> u64 { self.0 } #[inline(always)] pub fn to_string_as(&self, si_unit: bool) -> String { to_string(self.0, si_unit) } } pub fn to_string(bytes: u64, si_prefix: bool) -> String { let unit = if si_prefix { KIB } else { KB }; let unit_base = if si_prefix { LN_KIB } else { LN_KB }; let unit_prefix = if si_prefix { UNITS_SI.as_bytes() } else { UNITS.as_bytes() }; let unit_suffix = if si_prefix { "iB" } else { "B" }; if bytes < unit { format!("{} B", bytes) } else { let size = bytes as f64; let exp = match (size.ln() / unit_base) as usize { e if e == 0 => 1, e => e, }; format!( "{:.1} {}{}", (size / unit.pow(exp as u32) as f64), unit_prefix[exp - 1] as char, unit_suffix ) } } impl Display for ByteSize { fn fmt(&self, f: &mut Formatter) -> Result { f.pad(&to_string(self.0, false)) } } impl Debug for ByteSize { fn fmt(&self, f: &mut Formatter) -> Result { write!(f, "{}", self) } }
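// Quick usage sketch for the formatting helpers above (values taken from the
// unit tests below; not part of the upstream source):
//
//     let size = ByteSize::gb(518);
//     assert_eq!(size.to_string_as(true), "482.4 GiB");  // binary (1024-based) units
//     assert_eq!(size.to_string_as(false), "518.0 GB");  // SI (1000-based) units
//     assert_eq!(format!("{}", size), "518.0 GB");       // Display uses SI units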
macro_rules! commutative_op { ($t:ty) => { impl Add<ByteSize> for $t { type Output = ByteSize; #[inline(always)] fn add(self, rhs: ByteSize) -> ByteSize { ByteSize(rhs.0 + (self as u64)) } } impl Mul<ByteSize> for $t { type Output = ByteSize; #[inline(always)] fn mul(self, rhs: ByteSize) -> ByteSize { ByteSize(rhs.0 * (self as u64)) } } }; } commutative_op!(u64); commutative_op!(u32); commutative_op!(u16); commutative_op!(u8); impl Add for ByteSize { type Output = ByteSize; #[inline(always)] fn add(self, rhs: ByteSize) -> ByteSize { ByteSize(self.0 + rhs.0) } } impl AddAssign for ByteSize { #[inline(always)] fn add_assign(&mut self, rhs: ByteSize) { self.0 += rhs.0 } } impl<T> Add<T> for ByteSize where T: Into<u64> { type Output = ByteSize; #[inline(always)] fn add(self, rhs: T) -> ByteSize { ByteSize(self.0 + (rhs.into() as u64)) } } impl<T> AddAssign<T> for ByteSize where T: Into<u64> { #[inline(always)] fn add_assign(&mut self, rhs: T) { self.0 += rhs.into() as u64; } } impl<T> Mul<T> for ByteSize where T: Into<u64> { type Output = ByteSize; #[inline(always)] fn mul(self, rhs: T) -> ByteSize { ByteSize(self.0 * (rhs.into() as u64)) } } impl<T> MulAssign<T> for ByteSize where T: Into<u64> { #[inline(always)] fn mul_assign(&mut self, rhs: T) { self.0 *= rhs.into() as u64; } } #[cfg(test)] mod tests { use super::*; #[test] fn test_arithmetic_op() { let mut x = ByteSize::mb(1); let y = ByteSize::kb(100); assert_eq!((x + y).as_u64(), 1_100_000u64); assert_eq!((x + (100 * 1000) as u64).as_u64(), 1_100_000); assert_eq!((x * 2u64).as_u64(), 2_000_000); x += y; assert_eq!(x.as_u64(), 1_100_000); x *= 2u64; assert_eq!(x.as_u64(), 2_200_000); } #[test] fn test_arithmetic_primitives() { let mut x = ByteSize::mb(1); assert_eq!((x + MB as u64).as_u64(), 2_000_000); assert_eq!((x + MB as u32).as_u64(), 2_000_000); assert_eq!((x + KB as u16).as_u64(), 1_001_000); assert_eq!((x + B as u8).as_u64(), 1_000_001); x += MB as u64; x += MB as u32; x += 10 as u16; x += 1 as u8; assert_eq!(x.as_u64(), 3_000_011); } #[test] fn test_comparison() { assert!(ByteSize::mb(1) == ByteSize::kb(1000)); assert!(ByteSize::mib(1) == ByteSize::kib(1024)); assert!(ByteSize::mb(1) != ByteSize::kib(1024)); assert!(ByteSize::mb(1) < ByteSize::kib(1024)); assert!(ByteSize::b(0) < ByteSize::tib(1)); } fn assert_display(expected: &str, b: ByteSize) { assert_eq!(expected, format!("{}", b)); } #[test] fn test_display() { assert_display("215 B", ByteSize::b(215)); assert_display("1.0 KB", ByteSize::kb(1)); assert_display("301.0 KB", ByteSize::kb(301)); assert_display("419.0 MB", ByteSize::mb(419)); assert_display("518.0 GB", ByteSize::gb(518)); assert_display("815.0 TB", ByteSize::tb(815)); assert_display("609.0 PB", ByteSize::pb(609)); } #[test] fn test_display_alignment() { assert_eq!("|357 B |", format!("|{:10}|", ByteSize(357))); assert_eq!("| 357 B|", format!("|{:>10}|", ByteSize(357))); assert_eq!("|357 B |", format!("|{:<10}|", ByteSize(357))); assert_eq!("| 357 B |", format!("|{:^10}|", ByteSize(357))); assert_eq!("|-----357 B|", format!("|{:->10}|", ByteSize(357))); assert_eq!("|357 B-----|", format!("|{:-<10}|", ByteSize(357))); assert_eq!("|--357 B---|", format!("|{:-^10}|", ByteSize(357))); } fn assert_to_string(expected: &str, b: ByteSize, si: bool) { assert_eq!(expected.to_string(), b.to_string_as(si)); } #[test] fn test_to_string_as() { assert_to_string("215 B", ByteSize::b(215), true); assert_to_string("215 B", ByteSize::b(215), false); assert_to_string("1.0 kiB", ByteSize::kib(1), true); assert_to_string("1.0 KB", ByteSize::kib(1), false); assert_to_string("293.9 kiB",
ByteSize::kb(301), true); assert_to_string("301.0 KB", ByteSize::kb(301), false); assert_to_string("1.0 MiB", ByteSize::mib(1), true); assert_to_string("1048.6 KB", ByteSize::mib(1), false); // a bug case: https://github.com/flang-project/bytesize/issues/8 assert_to_string("1.9 GiB", ByteSize::mib(1907), true); assert_to_string("2.0 GB", ByteSize::mib(1908), false); assert_to_string("399.6 MiB", ByteSize::mb(419), true); assert_to_string("419.0 MB", ByteSize::mb(419), false); assert_to_string("482.4 GiB", ByteSize::gb(518), true); assert_to_string("518.0 GB", ByteSize::gb(518), false); assert_to_string("741.2 TiB", ByteSize::tb(815), true); assert_to_string("815.0 TB", ByteSize::tb(815), false); assert_to_string("540.9 PiB", ByteSize::pb(609), true); assert_to_string("609.0 PB", ByteSize::pb(609), false); } #[test] fn test_default() { assert_eq!(ByteSize::b(0), ByteSize::default()); } #[test] fn test_to_string() { assert_to_string("609.0 PB", ByteSize::pb(609), false); } } vendor/bytesize/README.md0000664000175000017500000000775614160055207016003 0ustar mwhudsonmwhudson## ByteSize [![Build Status](https://travis-ci.org/hyunsik/bytesize.svg?branch=master)](https://travis-ci.org/hyunsik/bytesize) [![Crates.io Version](https://img.shields.io/crates/v/bytesize.svg)](https://crates.io/crates/bytesize) ByteSize is an utility for human-readable byte count representation. Features: * Pre-defined constants for various size units (e.g., B, Kb, kib, Mb, Mib, Gb, Gib, ... PB) * `ByteSize` type which presents size units convertible to different size units. * Artimetic operations for `ByteSize` * FromStr impl for `ByteSize`, allowing to parse from string size representations like 1.5KiB and 521TiB. [API Documentation](https://docs.rs/bytesize/) ## Usage Add this to your Cargo.toml: ```toml [dependencies] bytesize = {version = "1.1.0", features = ["serde"]} ``` and this to your crate root: ```rust extern crate bytesize; ``` ## Example ### Human readable representations (SI unit and Binary unit) ```rust #[allow(dead_code)] fn assert_display(expected: &str, b: ByteSize) { assert_eq!(expected, format!("{}", b)); } #[test] fn test_display() { assert_display("215 B", ByteSize(215)); assert_display("215 B", ByteSize::b(215)); assert_display("1.0 KB", ByteSize::kb(1)); assert_display("301.0 KB", ByteSize::kb(301)); assert_display("419.0 MB", ByteSize::mb(419)); assert_display("518.0 GB", ByteSize::gb(518)); assert_display("815.0 TB", ByteSize::tb(815)); assert_display("609.0 PB", ByteSize::pb(609)); } fn assert_to_string(expected: &str, b: ByteSize, si: bool) { assert_eq!(expected.to_string(), b.to_string_as(si)); } #[test] fn test_to_string() { assert_to_string("215 B", ByteSize(215), true); assert_to_string("215 B", ByteSize(215), false); assert_to_string("215 B", ByteSize::b(215), true); assert_to_string("215 B", ByteSize::b(215), false); assert_to_string("1.0 kiB", ByteSize::kib(1), true); assert_to_string("1.0 KB", ByteSize::kib(1), false); assert_to_string("293.9 kiB", ByteSize::kb(301), true); assert_to_string("301.0 KB", ByteSize::kb(301), false); assert_to_string("1.0 MiB", ByteSize::mib(1), true); assert_to_string("1048.6 KB", ByteSize::mib(1), false); assert_to_string("399.6 MiB", ByteSize::mb(419), true); assert_to_string("419.0 MB", ByteSize::mb(419), false); assert_to_string("482.4 GiB", ByteSize::gb(518), true); assert_to_string("518.0 GB", ByteSize::gb(518), false); assert_to_string("741.2 TiB", ByteSize::tb(815), true); assert_to_string("815.0 TB", ByteSize::tb(815), false); 
assert_to_string("540.9 PiB", ByteSize::pb(609), true); assert_to_string("609.0 PB", ByteSize::pb(609), false); } #[test] fn test_parsing_from_str() { // shortcut for writing test cases fn parse(s: &str) -> u64 { s.parse::().unwrap().0 } assert_eq!("0".parse::().unwrap().0, 0); assert_eq!(parse("0"), 0); assert_eq!(parse("500"), 500); assert_eq!(parse("1K"), Unit::KiloByte * 1); assert_eq!(parse("1Ki"), Unit::KibiByte * 1); assert_eq!(parse("1.5Ki"), (1.5 * Unit::KibiByte) as u64); assert_eq!(parse("1KiB"), 1 * Unit::KibiByte); assert_eq!(parse("1.5KiB"), (1.5 * Unit::KibiByte) as u64); assert_eq!(parse("3 MB"), Unit::MegaByte * 3); assert_eq!(parse("4 MiB"), Unit::MebiByte * 4); assert_eq!(parse("6 GB"), 6 * Unit::GigaByte); assert_eq!(parse("4 GiB"), 4 * Unit::GibiByte); assert_eq!(parse("88TB"), 88 * Unit::TeraByte); assert_eq!(parse("521TiB"), 521 * Unit::TebiByte); assert_eq!(parse("8 PB"), 8 * Unit::PetaByte); assert_eq!(parse("8P"), 8 * Unit::PetaByte); assert_eq!(parse("12 PiB"), 12 * Unit::PebiByte); } ``` ### Arithmetic operations ```rust extern crate bytesize; use bytesize::ByteSize; fn byte_arithmetic_operator() { let x = ByteSize::mb(1); let y = ByteSize::kb(100); let plus = x + y; print!("{}", plus); let minus = ByteSize::tb(100) + ByteSize::gb(4); print!("{}", minus); } ``` vendor/tinyvec/0000775000175000017500000000000014172417313014333 5ustar mwhudsonmwhudsonvendor/tinyvec/.cargo-checksum.json0000664000175000017500000000013114172417313020172 0ustar mwhudsonmwhudson{"files":{},"package":"2c1c1d5a42b6245520c249549ec267180beaffcc0615401ac8e31853d4b6d8d2"}vendor/tinyvec/Cargo.toml0000664000175000017500000000361314172417313016266 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" name = "tinyvec" version = "1.5.1" authors = ["Lokathor "] description = "`tinyvec` provides 100% safe vec-like data structures." 
keywords = ["vec", "no_std", "no-std"] categories = ["data-structures", "no-std"] license = "Zlib OR Apache-2.0 OR MIT" repository = "https://github.com/Lokathor/tinyvec" [package.metadata.docs.rs] features = ["alloc", "std", "grab_spare_slice", "rustc_1_40", "rustc_1_55", "serde"] rustdoc-args = ["--cfg", "docs_rs"] [package.metadata.playground] features = ["alloc", "std", "grab_spare_slice", "rustc_1_40", "rustc_1_55", "serde"] [profile.bench] debug = 2 [profile.test] opt-level = 3 [[test]] name = "tinyvec" required-features = ["alloc", "std"] [[bench]] name = "macros" harness = false required-features = ["alloc"] [[bench]] name = "smallvec" harness = false required-features = ["alloc", "real_blackbox"] [dependencies.arbitrary] version = "1" optional = true [dependencies.serde] version = "1.0" optional = true default-features = false [dependencies.tinyvec_macros] version = "0.1" optional = true [dev-dependencies.criterion] version = "0.3.0" [dev-dependencies.serde_test] version = "1.0" [dev-dependencies.smallvec] version = "1" [features] alloc = ["tinyvec_macros"] default = [] experimental_write_impl = [] grab_spare_slice = [] nightly_slice_partition_dedup = [] real_blackbox = ["criterion/real_blackbox"] rustc_1_40 = [] rustc_1_55 = ["rustc_1_40"] std = [] vendor/tinyvec/gen-array-impls.sh0000664000175000017500000000165314160055207017700 0ustar mwhudsonmwhudson#!/usr/bin/env bash gen_impl() { local len=$1 cat <<-END impl Array for [T; $len] { type Item = T; const CAPACITY: usize = $len; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ $(for ((i = 0; i < $len; i += 6)) do echo -n ' ' for ((j = 0; j < 6 && j + i < $len; j++)) do echo -n ' T::default(),' done echo done) ] } } END } cat <<-END // Generated file, to regenerate run // ./gen-array-impls.sh > src/array/generated_impl.rs // from the repo root use super::Array; $(for ((i = 0; i <= 33; i++)); do gen_impl $i; done) $(for ((i = 64; i <= 4096; i *= 2)); do gen_impl $i; done) END # vim: noet vendor/tinyvec/CHANGELOG.md0000664000175000017500000000504314172417313016146 0ustar mwhudsonmwhudson# Changelog ## 1.5.1 * [madsmtm](https://github.com/madsmtm) fixed an error with the `alloc` feature on very old rustc versions. [pr 154](https://github.com/Lokathor/tinyvec/pull/154) ## 1.5.0 * [eeeebbbbrrrr](https://github.com/eeeebbbbrrrr) added an impl for [std::io::Write](https://doc.rust-lang.org/std/io/trait.Write.html) to `TinyVec` when the element type is `u8`. This is gated behind the new `std` feature. [pr 152](https://github.com/Lokathor/tinyvec/pull/152) ## 1.4.0 * [saethlin](https://github.com/saethlin) stabilized the usage of const generics and array map with the `rustc_1_55` feature. [pr 149](https://github.com/Lokathor/tinyvec/pull/149) ## 1.3.1 * Improved the performance of the `clone_from` method [pr 144](https://github.com/Lokathor/tinyvec/pull/144) ## 1.3.0 * [jeffa5](https://github.com/jeffa5) added arbitrary implementations for `TinyVec` and `ArrayVec` [pr 146](https://github.com/Lokathor/tinyvec/pull/146). * [elomatreb](https://github.com/elomatreb) implemented `DoubleEndedIterator` for `TinyVecIterator` [pr 145](https://github.com/Lokathor/tinyvec/pull/145). 
## 1.2.0 * [Cryptjar](https://github.com/Cryptjar) removed the `A:Array` bound on the struct of `ArrayVec`, and added the `from_array_empty` method, which is a `const fn` constructor [pr 141](https://github.com/Lokathor/tinyvec/pull/141). ## 1.1.1 * [saethlin](https://github.com/saethlin) contributed many PRs ( [127](https://github.com/Lokathor/tinyvec/pull/127), [128](https://github.com/Lokathor/tinyvec/pull/128), [129](https://github.com/Lokathor/tinyvec/pull/129), [131](https://github.com/Lokathor/tinyvec/pull/131), [132](https://github.com/Lokathor/tinyvec/pull/132) ) to help in several benchmarks. ## 1.1.0 * [slightlyoutofphase](https://github.com/slightlyoutofphase) added "array splat" style syntax to the `array_vec!` and `tiny_vec!` macros. You can now write `array_vec![true; 5]` and get a length 5 array vec full of `true`, just like normal array initialization allows. Same goes for `tiny_vec!`. ([pr 118](https://github.com/Lokathor/tinyvec/pull/118)) * [not-a-seagull](https://github.com/not-a-seagull) added `ArrayVec::into_inner` so that you can get the array out of an `ArrayVec`. ([pr 124](https://github.com/Lokathor/tinyvec/pull/124)) ## 1.0.2 * Added license files for the MIT and Apache-2.0 license options. ## 1.0.1 * Display additional features in the [docs.rs/tinyvec](https://docs.rs/tinyvec) documentation. ## 1.0.0 Initial Stable Release. vendor/tinyvec/benches/0000775000175000017500000000000014172417313015742 5ustar mwhudsonmwhudsonvendor/tinyvec/benches/smallvec.rs0000664000175000017500000003106314172417313020121 0ustar mwhudsonmwhudson//! Benchmarks that compare TinyVec to SmallVec //! //! All the following commentary is based on the latest nightly at the time: //! rustc 1.55.0 (c8dfcfe04 2021-09-06). //! //! Some of these benchmarks are just a few instructions, so we put our own for loop inside //! the criterion::Bencher::iter call. This seems to improve the stability of measurements, and it //! has the wonderful side effect of making the emitted assembly easier to follow. Some of these //! benchmarks are totally inlined so that there are no calls at all in the hot path, so finding //! this for loop is an easy way to find your way around the emitted assembly. //! //! The clear method is cheaper to call for arrays of elements without a Drop impl, so wherever //! possible we reuse a single object in the benchmark loop, with a clear + black_box on each //! iteration in an attempt to not make that visible to the optimizer. //! //! We always call black_box(&v), instead of v = black_box(v) because the latter does a move of the //! inline array, which is linear in the size of the array and thus varies based on the array type //! being benchmarked, and this move can be more expensive than the function we're trying to //! benchmark. //! //! We also black_box the input to each method call. This has a significant effect on the assembly //! emitted, for example if we do not black_box the range we iterate over in the ::push benchmarks, //! the loop is unrolled. It's not entirely clear if it's better to black_box the iterator that //! yields the items being pushed, or to black_box at a deeper level: v.push(black_box(i)) for //! example. Anecdotally, it seems like the latter approach produces unreasonably bad assembly. //! use criterion::{black_box, criterion_group, criterion_main, Criterion}; use smallvec::SmallVec; use std::iter::FromIterator; use tinyvec::TinyVec; const ITERS: usize = 10_000; macro_rules! 
tinyvec_benches { ($c:expr, $type:ty ; $len:expr) => {{ let mut g = $c.benchmark_group(concat!( "TinyVec_", stringify!($type), "_", stringify!($len) )); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::default" ), |b| { b.iter(|| { for _ in 0..ITERS { let v: TinyVec<[$type; $len]> = TinyVec::default(); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::clone" ), |b| { b.iter(|| { let outer: TinyVec<[$type; $len]> = black_box(TinyVec::from_iter(0..=($len as usize - 1) as _)); for _ in 0..ITERS { let v = outer.clone(); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::clear" ), |b| { b.iter(|| { let mut v: TinyVec<[$type; $len]> = TinyVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::push" ), |b| { b.iter(|| { let mut v: TinyVec<[$type; $len]> = TinyVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); for i in black_box(0..=($len as usize - 1) as _) { v.push(i); } black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::from_iter" ), |b| { b.iter(|| { for _ in 0..ITERS { let v: TinyVec<[$type; $len]> = TinyVec::from_iter(black_box(0..=($len as usize - 1) as _)); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::from_slice" ), |b| { b.iter(|| { let data: &[$type] = &[0, 1, 2, 3, 4, 5, 6, 7]; for _ in 0..ITERS { let v: TinyVec<[$type; $len]> = TinyVec::from(black_box(data)); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::extend" ), |b| { b.iter(|| { let mut v: TinyVec<[$type; $len]> = black_box(TinyVec::default()); for _ in 0..ITERS { v.clear(); black_box(&v); v.extend(black_box(0..=($len as usize - 1) as _)); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::extend_from_slice" ), |b| { b.iter(|| { let data: &[$type] = black_box(&[0, 1, 2, 3, 4, 5, 6, 7]); let mut v: TinyVec<[$type; $len]> = black_box(TinyVec::default()); for _ in 0..ITERS { v.clear(); black_box(&v); v.extend_from_slice(data); black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::insert" ), |b| { b.iter(|| { let mut v: TinyVec<[$type; $len]> = TinyVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); for i in black_box(0..=($len as usize - 1) as _) { v.insert(i as usize, i); } black_box(&v); } }); }, ); g.bench_function( concat!( "TinyVec<[", stringify!($type), "; ", stringify!($len), "]>::remove" ), |b| { b.iter(|| { let outer: TinyVec<[$type; $len]> = black_box(TinyVec::from_iter(0..=($len as usize - 1) as _)); for _ in 0..ITERS { let mut v = outer.clone(); for i in black_box((0..=($len as usize - 1) as _).rev()) { v.remove(i); } black_box(&v); } }); }, ); }}; } fn tinyvec_benches(c: &mut Criterion) { tinyvec_benches!(c, u8; 8); tinyvec_benches!(c, u8; 16); tinyvec_benches!(c, u8; 32); tinyvec_benches!(c, u8; 64); tinyvec_benches!(c, u8; 128); tinyvec_benches!(c, u8; 256); tinyvec_benches!(c, u64; 2); tinyvec_benches!(c, u64; 4); tinyvec_benches!(c, u64; 8); tinyvec_benches!(c, u64; 16); tinyvec_benches!(c, u64; 32); } macro_rules! 
smallvec_benches { ($c:expr, $type:ty ; $len:expr) => {{ let mut g = $c.benchmark_group(concat!( "SmallVec_", stringify!($type), "_", stringify!($len) )); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::default" ), |b| { b.iter(|| { for _ in 0..ITERS { let v: SmallVec<[$type; $len]> = SmallVec::default(); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::clone" ), |b| { b.iter(|| { let outer: SmallVec<[$type; $len]> = black_box(SmallVec::from_iter(0..=($len as usize - 1) as _)); for _ in 0..ITERS { let v = outer.clone(); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::clear" ), |b| { b.iter(|| { let mut v: SmallVec<[$type; $len]> = SmallVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::push" ), |b| { b.iter(|| { let mut v: SmallVec<[$type; $len]> = SmallVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); for i in black_box(0..=($len as usize - 1) as _) { v.push(i); } black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::from_iter" ), |b| { b.iter(|| { for _ in 0..ITERS { let v: SmallVec<[$type; $len]> = SmallVec::from_iter(black_box(0..=($len as usize - 1) as _)); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::from_slice" ), |b| { b.iter(|| { let data: &[$type] = &[0, 1, 2, 3, 4, 5, 6, 7]; for _ in 0..ITERS { let v: SmallVec<[$type; $len]> = SmallVec::from(black_box(data)); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::extend" ), |b| { b.iter(|| { let mut v: SmallVec<[$type; $len]> = black_box(SmallVec::default()); for _ in 0..ITERS { v.clear(); black_box(&v); v.extend(black_box(0..=($len as usize - 1) as _)); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::extend_from_slice" ), |b| { b.iter(|| { let data: &[$type] = black_box(&[0, 1, 2, 3, 4, 5, 6, 7]); let mut v: SmallVec<[$type; $len]> = black_box(SmallVec::default()); for _ in 0..ITERS { v.clear(); black_box(&v); v.extend_from_slice(data); black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::insert" ), |b| { b.iter(|| { let mut v: SmallVec<[$type; $len]> = SmallVec::default(); for _ in 0..ITERS { v.clear(); black_box(&v); for i in black_box(0..=($len as usize - 1) as _) { v.insert(i as usize, i); } black_box(&v); } }); }, ); g.bench_function( concat!( "SmallVec<[", stringify!($type), "; ", stringify!($len), "]>::remove" ), |b| { b.iter(|| { let outer: SmallVec<[$type; $len]> = black_box(SmallVec::from_iter(0..=($len as usize - 1) as _)); for _ in 0..ITERS { let mut v = outer.clone(); for i in black_box((0..=($len as usize - 1) as _).rev()) { v.remove(i); } black_box(&v); } }); }, ); }}; } fn smallvec_benches(c: &mut Criterion) { smallvec_benches!(c, u8; 8); smallvec_benches!(c, u8; 16); smallvec_benches!(c, u8; 32); smallvec_benches!(c, u8; 64); smallvec_benches!(c, u8; 128); smallvec_benches!(c, u8; 256); smallvec_benches!(c, u64; 2); smallvec_benches!(c, u64; 4); smallvec_benches!(c, u64; 8); smallvec_benches!(c, u64; 16); smallvec_benches!(c, u64; 32); } criterion_group!(benches, tinyvec_benches, smallvec_benches); 
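// A minimal sketch (illustrative only; not registered in the criterion_group
// above, and the function name is arbitrary) of the measurement pattern the
// module docs describe, written out without the macro plumbing: reuse one vec,
// call clear() + black_box(&v) on every pass, and keep our own inner loop
// inside Bencher::iter so the hot path is easy to find in the emitted assembly.
//
//   fn sketch_push_bench(c: &mut Criterion) {
//     c.bench_function("sketch_TinyVec<[u8; 8]>::push", |b| {
//       b.iter(|| {
//         let mut v: TinyVec<[u8; 8]> = TinyVec::default();
//         for _ in 0..ITERS {
//           v.clear();
//           black_box(&v);
//           for i in black_box(0..8u8) {
//             v.push(i);
//           }
//           black_box(&v);
//         }
//       });
//     });
//   }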
criterion_main!(benches); vendor/tinyvec/benches/macros.rs0000664000175000017500000000262014160055207017571 0ustar mwhudsonmwhudsonuse criterion::{criterion_group, criterion_main, Criterion}; use tinyvec::tiny_vec; fn bench_tinyvec_macro(c: &mut Criterion) { let mut g = c.benchmark_group("tinyvec_macro"); g.bench_function("0 of 32", |b| { b.iter(|| tiny_vec!([u8; 32])); }); g.bench_function("16 of 32", |b| { b.iter(|| { tiny_vec!([u8; 32]=> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, ) }); }); g.bench_function("32 of 32", |b| { b.iter(|| { tiny_vec!([u8; 32]=> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, ) }); }); g.bench_function("33 of 32", |b| { b.iter(|| { tiny_vec!([u8; 32]=> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, ) }); }); g.bench_function("64 of 32", |b| { b.iter(|| { tiny_vec!([u8; 32]=> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, ) }); }); } criterion_group!(benches, bench_tinyvec_macro); criterion_main!(benches); vendor/tinyvec/src-backup/0000775000175000017500000000000014160055207016362 5ustar mwhudsonmwhudsonvendor/tinyvec/src-backup/arrayset.rs0000664000175000017500000001606214160055207020567 0ustar mwhudsonmwhudson#![cfg(feature = "experimental_array_set")] // This was contributed by user `dhardy`! Big thanks. use super::{take, Array}; use core::{ borrow::Borrow, fmt, mem::swap, ops::{AddAssign, SubAssign}, }; /// Error resulting from attempting to insert into a full array #[derive(Copy, Clone, Debug, PartialEq, Eq)] pub struct InsertError; // TODO(when std): impl std::error::Error for InsertError {} impl fmt::Display for InsertError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "ArraySet: insertion failed") } } /// An array-backed set /// /// This set supports `O(n)` operations and has a fixed size, thus may fail to /// insert items. The potential advantage is a *really* small size. /// /// The set is backed by an array of type `A` and indexed by type `L`. /// The item type must support `Default`. /// Due to restrictions, `L` may be only `u8` or `u16`. #[derive(Clone, Debug, Default)] pub struct ArraySet { arr: A, len: L, } impl> ArraySet { /// Constructs a new, empty, set #[inline] pub fn new() -> Self { ArraySet { arr: Default::default(), len: 0.into() } } } impl> ArraySet { /// Constructs a new set from given inputs /// /// Panics if `len> arr.len()`. 
#[inline] pub fn from(arr: A, len: L) -> Self { if len.into() > A::CAPACITY { panic!("ArraySet::from(array, len): len > array.len()"); } ArraySet { arr, len } } } impl ArraySet where L: Copy + PartialEq + From + Into, { /// Returns the fixed capacity of the set #[inline] pub fn capacity(&self) -> usize { A::CAPACITY } /// Returns the number of elements in the set #[inline] pub fn len(&self) -> usize { self.len.into() } /// Returns true when the set contains no elements #[inline] pub fn is_empty(&self) -> bool { self.len == 0.into() } /// Removes all elements #[inline] pub fn clear(&mut self) { self.len = 0.into(); } /// Iterate over all contents #[inline] pub fn iter(&self) -> Iter { Iter { a: self.arr.as_slice(), i: 0 } } } impl ArraySet where L: Copy + PartialOrd + AddAssign + SubAssign + From + Into, { /// Check whether the set contains `elt` #[inline] pub fn contains(&self, elt: &Q) -> bool where A::Item: Borrow, { self.get(elt).is_some() } /// Get a reference to a contained item matching `elt` pub fn get(&self, elt: &Q) -> Option<&A::Item> where A::Item: Borrow, { let len: usize = self.len.into(); let arr = self.arr.as_slice(); for i in 0..len { if arr[i].borrow() == elt { return Some(&arr[i]); } } None } /// Remove an item matching `elt`, if any pub fn remove(&mut self, elt: &Q) -> Option where A::Item: Borrow, { let len: usize = self.len.into(); let arr = self.arr.as_slice_mut(); for i in 0..len { if arr[i].borrow() == elt { let l1 = len - 1; if i < l1 { arr.swap(i, l1); } self.len -= L::from(1); return Some(take(&mut arr[l1])); } } None } /// Remove any items for which `f(item) == false` pub fn retain(&mut self, mut f: F) where F: FnMut(&A::Item) -> bool, { let mut len = self.len; let arr = self.arr.as_slice_mut(); let mut i = 0; while i < len.into() { if !f(&arr[i]) { len -= L::from(1); if i < len.into() { arr.swap(i, len.into()); } } else { i += 1; } } self.len = len; } } impl ArraySet where A::Item: Eq, L: Copy + PartialOrd + AddAssign + SubAssign + From + Into, { /// Insert an item /// /// Due to the fixed size of the backing array, insertion may fail. #[inline] pub fn insert(&mut self, elt: A::Item) -> Result { if self.contains(&elt) { return Ok(false); } let len = self.len.into(); let arr = self.arr.as_slice_mut(); if len >= arr.len() { return Err(InsertError); } arr[len] = elt; self.len += L::from(1); Ok(true) } /* Hits borrow checker pub fn get_or_insert(&mut self, elt: A::Item) -> Result<&A::Item, InsertError> { if let Some(r) = self.get(&elt) { return Ok(r); } self.insert(elt)?; let len: usize = self.len.into(); Ok(&self.arr.as_slice()[len - 1]) } */ /// Replace an item matching `elt` with `elt`, or insert `elt` /// /// Returns the replaced item, if any. Fails when there is no matching item /// and the backing array is full, preventing insertion. 
pub fn replace( &mut self, mut elt: A::Item, ) -> Result, InsertError> { let len: usize = self.len.into(); let arr = self.arr.as_slice_mut(); for i in 0..len { if arr[i] == elt { swap(&mut arr[i], &mut elt); return Ok(Some(elt)); } } if len >= arr.len() { return Err(InsertError); } arr[len] = elt; self.len += L::from(1); Ok(None) } } /// Type returned by [`ArraySet::iter`] pub struct Iter<'a, T> { a: &'a [T], i: usize, } impl<'a, T> ExactSizeIterator for Iter<'a, T> { #[inline] fn len(&self) -> usize { self.a.len() - self.i } } impl<'a, T> Iterator for Iter<'a, T> { type Item = &'a T; #[inline] fn next(&mut self) -> Option { if self.i < self.a.len() { let i = self.i; self.i += 1; Some(&self.a[i]) } else { None } } #[inline] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } #[cfg(test)] mod test { use super::*; use core::mem::size_of; #[test] fn test_size() { assert_eq!(size_of::>(), 8); } #[test] fn test() { let mut set: ArraySet<[i8; 7], u8> = ArraySet::new(); assert_eq!(set.capacity(), 7); assert_eq!(set.insert(1), Ok(true)); assert_eq!(set.insert(5), Ok(true)); assert_eq!(set.insert(6), Ok(true)); assert_eq!(set.len(), 3); assert_eq!(set.insert(5), Ok(false)); assert_eq!(set.len(), 3); assert_eq!(set.replace(1), Ok(Some(1))); assert_eq!(set.replace(2), Ok(None)); assert_eq!(set.len(), 4); assert_eq!(set.insert(3), Ok(true)); assert_eq!(set.insert(4), Ok(true)); assert_eq!(set.insert(7), Ok(true)); assert_eq!(set.insert(8), Err(InsertError)); assert_eq!(set.len(), 7); assert_eq!(set.replace(9), Err(InsertError)); assert_eq!(set.remove(&3), Some(3)); assert_eq!(set.len(), 6); set.retain(|x| *x == 3 || *x == 6); assert_eq!(set.len(), 1); assert!(!set.contains(&3)); assert!(set.contains(&6)); } } vendor/tinyvec/LICENSE-ZLIB.md0000664000175000017500000000153614160055207016477 0ustar mwhudsonmwhudsonCopyright (c) 2019 Daniel "Lokathor" Gee. This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. vendor/tinyvec/src/0000775000175000017500000000000014172417313015122 5ustar mwhudsonmwhudsonvendor/tinyvec/src/tinyvec.rs0000664000175000017500000012274414172417313017163 0ustar mwhudsonmwhudson#![cfg(feature = "alloc")] use super::*; use alloc::vec::{self, Vec}; use core::convert::TryFrom; use tinyvec_macros::impl_mirrored; #[cfg(feature = "serde")] use core::marker::PhantomData; #[cfg(feature = "serde")] use serde::de::{Deserialize, Deserializer, SeqAccess, Visitor}; #[cfg(feature = "serde")] use serde::ser::{Serialize, SerializeSeq, Serializer}; /// Helper to make a `TinyVec`. /// /// You specify the backing array type, and optionally give all the elements you /// want to initially place into the array. 
/// /// ```rust /// use tinyvec::*; /// /// // The backing array type can be specified in the macro call /// let empty_tv = tiny_vec!([u8; 16]); /// let some_ints = tiny_vec!([i32; 4] => 1, 2, 3); /// let many_ints = tiny_vec!([i32; 4] => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10); /// /// // Or left to inference /// let empty_tv: TinyVec<[u8; 16]> = tiny_vec!(); /// let some_ints: TinyVec<[i32; 4]> = tiny_vec!(1, 2, 3); /// let many_ints: TinyVec<[i32; 4]> = tiny_vec!(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); /// ``` #[macro_export] #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] macro_rules! tiny_vec { ($array_type:ty => $($elem:expr),* $(,)?) => { { // https://github.com/rust-lang/lang-team/issues/28 const INVOKED_ELEM_COUNT: usize = 0 $( + { let _ = stringify!($elem); 1 })*; // If we have more `$elem` than the `CAPACITY` we will simply go directly // to constructing on the heap. match $crate::TinyVec::constructor_for_capacity(INVOKED_ELEM_COUNT) { $crate::TinyVecConstructor::Inline(f) => { f($crate::array_vec!($array_type => $($elem),*)) } $crate::TinyVecConstructor::Heap(f) => { f(vec!($($elem),*)) } } } }; ($array_type:ty) => { $crate::TinyVec::<$array_type>::default() }; ($($elem:expr),*) => { $crate::tiny_vec!(_ => $($elem),*) }; ($elem:expr; $n:expr) => { $crate::TinyVec::from([$elem; $n]) }; () => { $crate::tiny_vec!(_) }; } #[doc(hidden)] // Internal implementation details of `tiny_vec!` pub enum TinyVecConstructor { Inline(fn(ArrayVec) -> TinyVec), Heap(fn(Vec) -> TinyVec), } /// A vector that starts inline, but can automatically move to the heap. /// /// * Requires the `alloc` feature /// /// A `TinyVec` is either an Inline([`ArrayVec`](crate::ArrayVec::)) or /// Heap([`Vec`](https://doc.rust-lang.org/alloc/vec/struct.Vec.html)). The /// interface for the type as a whole is a bunch of methods that just match on /// the enum variant and then call the same method on the inner vec. /// /// ## Construction /// /// Because it's an enum, you can construct a `TinyVec` simply by making an /// `ArrayVec` or `Vec` and then putting it into the enum. /// /// There is also a macro /// /// ```rust /// # use tinyvec::*; /// let empty_tv = tiny_vec!([u8; 16]); /// let some_ints = tiny_vec!([i32; 4] => 1, 2, 3); /// ``` #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] pub enum TinyVec { #[allow(missing_docs)] Inline(ArrayVec), #[allow(missing_docs)] Heap(Vec), } impl Clone for TinyVec where A: Array + Clone, A::Item: Clone, { #[inline] fn clone(&self) -> Self { match self { TinyVec::Heap(v) => TinyVec::Heap(v.clone()), TinyVec::Inline(v) => TinyVec::Inline(v.clone()), } } #[inline] fn clone_from(&mut self, o: &Self) { if o.len() > self.len() { self.reserve(o.len() - self.len()); } else { self.truncate(o.len()); } let (start, end) = o.split_at(self.len()); for (dst, src) in self.iter_mut().zip(start) { dst.clone_from(src); } self.extend_from_slice(end); } } impl Default for TinyVec { #[inline] #[must_use] fn default() -> Self { TinyVec::Inline(ArrayVec::default()) } } impl Deref for TinyVec { type Target = [A::Item]; impl_mirrored! { type Mirror = TinyVec; #[inline(always)] #[must_use] fn deref(self: &Self) -> &Self::Target; } } impl DerefMut for TinyVec { impl_mirrored! 
{ type Mirror = TinyVec; #[inline(always)] #[must_use] fn deref_mut(self: &mut Self) -> &mut Self::Target; } } impl> Index for TinyVec { type Output = >::Output; #[inline(always)] #[must_use] fn index(&self, index: I) -> &Self::Output { &self.deref()[index] } } impl> IndexMut for TinyVec { #[inline(always)] #[must_use] fn index_mut(&mut self, index: I) -> &mut Self::Output { &mut self.deref_mut()[index] } } #[cfg(feature = "std")] #[cfg_attr(docs_rs, doc(cfg(feature = "std")))] impl> std::io::Write for TinyVec { #[inline(always)] fn write(&mut self, buf: &[u8]) -> std::io::Result { self.extend_from_slice(buf); Ok(buf.len()) } #[inline(always)] fn flush(&mut self) -> std::io::Result<()> { Ok(()) } } #[cfg(feature = "serde")] #[cfg_attr(docs_rs, doc(cfg(feature = "serde")))] impl Serialize for TinyVec where A::Item: Serialize, { #[must_use] fn serialize(&self, serializer: S) -> Result where S: Serializer, { let mut seq = serializer.serialize_seq(Some(self.len()))?; for element in self.iter() { seq.serialize_element(element)?; } seq.end() } } #[cfg(feature = "serde")] #[cfg_attr(docs_rs, doc(cfg(feature = "serde")))] impl<'de, A: Array> Deserialize<'de> for TinyVec where A::Item: Deserialize<'de>, { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { deserializer.deserialize_seq(TinyVecVisitor(PhantomData)) } } #[cfg(feature = "arbitrary")] #[cfg_attr(docs_rs, doc(cfg(feature = "arbitrary")))] impl<'a, A> arbitrary::Arbitrary<'a> for TinyVec where A: Array, A::Item: arbitrary::Arbitrary<'a>, { fn arbitrary(u: &mut arbitrary::Unstructured<'a>) -> arbitrary::Result { let v = Vec::arbitrary(u)?; let mut tv = TinyVec::Heap(v); tv.shrink_to_fit(); Ok(tv) } } impl TinyVec { /// Returns whether elements are on heap #[inline(always)] #[must_use] pub fn is_heap(&self) -> bool { match self { TinyVec::Heap(_) => true, TinyVec::Inline(_) => false, } } /// Returns whether elements are on stack #[inline(always)] #[must_use] pub fn is_inline(&self) -> bool { !self.is_heap() } /// Shrinks the capacity of the vector as much as possible.\ /// It is inlined if length is less than `A::CAPACITY`. /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 2] => 1, 2, 3); /// assert!(tv.is_heap()); /// let _ = tv.pop(); /// assert!(tv.is_heap()); /// tv.shrink_to_fit(); /// assert!(tv.is_inline()); /// ``` pub fn shrink_to_fit(&mut self) { let vec = match self { TinyVec::Inline(_) => return, TinyVec::Heap(h) => h, }; if vec.len() > A::CAPACITY { return vec.shrink_to_fit(); } let moved_vec = core::mem::replace(vec, Vec::new()); let mut av = ArrayVec::default(); let mut rest = av.fill(moved_vec); debug_assert!(rest.next().is_none()); *self = TinyVec::Inline(av); } /// Moves the content of the TinyVec to the heap, if it's inline. /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// assert!(tv.is_inline()); /// tv.move_to_the_heap(); /// assert!(tv.is_heap()); /// ``` #[allow(clippy::missing_inline_in_public_items)] pub fn move_to_the_heap(&mut self) { let arr = match self { TinyVec::Heap(_) => return, TinyVec::Inline(a) => a, }; let v = arr.drain_to_vec(); *self = TinyVec::Heap(v); } /// If TinyVec is inline, moves the content of it to the heap. /// Also reserves additional space. 
/// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// assert!(tv.is_inline()); /// tv.move_to_the_heap_and_reserve(32); /// assert!(tv.is_heap()); /// assert!(tv.capacity() >= 35); /// ``` pub fn move_to_the_heap_and_reserve(&mut self, n: usize) { let arr = match self { TinyVec::Heap(h) => return h.reserve(n), TinyVec::Inline(a) => a, }; let v = arr.drain_to_vec_and_reserve(n); *self = TinyVec::Heap(v); } /// Reserves additional space. /// Moves to the heap if array can't hold `n` more items /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3, 4); /// assert!(tv.is_inline()); /// tv.reserve(1); /// assert!(tv.is_heap()); /// assert!(tv.capacity() >= 5); /// ``` pub fn reserve(&mut self, n: usize) { let arr = match self { TinyVec::Heap(h) => return h.reserve(n), TinyVec::Inline(a) => a, }; if n > arr.capacity() - arr.len() { let v = arr.drain_to_vec_and_reserve(n); *self = TinyVec::Heap(v); } /* In this place array has enough place, so no work is needed more */ return; } /// Reserves additional space. /// Moves to the heap if array can't hold `n` more items /// /// From [Vec::reserve_exact](https://doc.rust-lang.org/std/vec/struct.Vec.html#method.reserve_exact) /// ```text /// Note that the allocator may give the collection more space than it requests. /// Therefore, capacity can not be relied upon to be precisely minimal. /// Prefer `reserve` if future insertions are expected. /// ``` /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3, 4); /// assert!(tv.is_inline()); /// tv.reserve_exact(1); /// assert!(tv.is_heap()); /// assert!(tv.capacity() >= 5); /// ``` pub fn reserve_exact(&mut self, n: usize) { let arr = match self { TinyVec::Heap(h) => return h.reserve_exact(n), TinyVec::Inline(a) => a, }; if n > arr.capacity() - arr.len() { let v = arr.drain_to_vec_and_reserve(n); *self = TinyVec::Heap(v); } /* In this place array has enough place, so no work is needed more */ return; } /// Makes a new TinyVec with _at least_ the given capacity. /// /// If the requested capacity is less than or equal to the array capacity you /// get an inline vec. If it's greater than you get a heap vec. /// ``` /// # use tinyvec::*; /// let t = TinyVec::<[u8; 10]>::with_capacity(5); /// assert!(t.is_inline()); /// assert!(t.capacity() >= 5); /// /// let t = TinyVec::<[u8; 10]>::with_capacity(20); /// assert!(t.is_heap()); /// assert!(t.capacity() >= 20); /// ``` #[inline] #[must_use] pub fn with_capacity(cap: usize) -> Self { if cap <= A::CAPACITY { TinyVec::Inline(ArrayVec::default()) } else { TinyVec::Heap(Vec::with_capacity(cap)) } } } impl TinyVec { /// Move all values from `other` into this vec. #[cfg(feature = "rustc_1_40")] #[inline] pub fn append(&mut self, other: &mut Self) { self.reserve(other.len()); /* Doing append should be faster, because it is effectively a memcpy */ match (self, other) { (TinyVec::Heap(sh), TinyVec::Heap(oh)) => sh.append(oh), (TinyVec::Inline(a), TinyVec::Heap(h)) => a.extend(h.drain(..)), (ref mut this, TinyVec::Inline(arr)) => this.extend(arr.drain(..)), } } /// Move all values from `other` into this vec. #[cfg(not(feature = "rustc_1_40"))] #[inline] pub fn append(&mut self, other: &mut Self) { match other { TinyVec::Inline(a) => self.extend(a.drain(..)), TinyVec::Heap(h) => self.extend(h.drain(..)), } } impl_mirrored! { type Mirror = TinyVec; /// Remove an element, swapping the end of the vec into its place. /// /// ## Panics /// * If the index is out of bounds. 
/// /// ## Example /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([&str; 4] => "foo", "bar", "quack", "zap"); /// /// assert_eq!(tv.swap_remove(1), "bar"); /// assert_eq!(tv.as_slice(), &["foo", "zap", "quack"][..]); /// /// assert_eq!(tv.swap_remove(0), "foo"); /// assert_eq!(tv.as_slice(), &["quack", "zap"][..]); /// ``` #[inline] pub fn swap_remove(self: &mut Self, index: usize) -> A::Item; /// Remove and return the last element of the vec, if there is one. /// /// ## Failure /// * If the vec is empty you get `None`. #[inline] pub fn pop(self: &mut Self) -> Option; /// Removes the item at `index`, shifting all others down by one index. /// /// Returns the removed element. /// /// ## Panics /// /// If the index is out of bounds. /// /// ## Example /// /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// assert_eq!(tv.remove(1), 2); /// assert_eq!(tv.as_slice(), &[1, 3][..]); /// ``` #[inline] pub fn remove(self: &mut Self, index: usize) -> A::Item; /// The length of the vec (in elements). #[inline(always)] #[must_use] pub fn len(self: &Self) -> usize; /// The capacity of the `TinyVec`. /// /// When not heap allocated this is fixed based on the array type. /// Otherwise its the result of the underlying Vec::capacity. #[inline(always)] #[must_use] pub fn capacity(self: &Self) -> usize; /// Reduces the vec's length to the given value. /// /// If the vec is already shorter than the input, nothing happens. #[inline] pub fn truncate(self: &mut Self, new_len: usize); /// A mutable pointer to the backing array. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing array/buffer. #[inline(always)] #[must_use] pub fn as_mut_ptr(self: &mut Self) -> *mut A::Item; /// A const pointer to the backing array. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing array/buffer. #[inline(always)] #[must_use] pub fn as_ptr(self: &Self) -> *const A::Item; } /// Walk the vec and keep only the elements that pass the predicate given. /// /// ## Example /// /// ```rust /// use tinyvec::*; /// /// let mut tv = tiny_vec!([i32; 10] => 1, 2, 3, 4); /// tv.retain(|&x| x % 2 == 0); /// assert_eq!(tv.as_slice(), &[2, 4][..]); /// ``` #[inline] pub fn retain bool>(self: &mut Self, acceptable: F) { match self { TinyVec::Inline(i) => i.retain(acceptable), TinyVec::Heap(h) => h.retain(acceptable), } } /// Helper for getting the mut slice. #[inline(always)] #[must_use] pub fn as_mut_slice(self: &mut Self) -> &mut [A::Item] { self.deref_mut() } /// Helper for getting the shared slice. #[inline(always)] #[must_use] pub fn as_slice(self: &Self) -> &[A::Item] { self.deref() } /// Removes all elements from the vec. #[inline(always)] pub fn clear(&mut self) { self.truncate(0) } /// De-duplicates the vec. #[cfg(feature = "nightly_slice_partition_dedup")] #[inline(always)] pub fn dedup(&mut self) where A::Item: PartialEq, { self.dedup_by(|a, b| a == b) } /// De-duplicates the vec according to the predicate given. #[cfg(feature = "nightly_slice_partition_dedup")] #[inline(always)] pub fn dedup_by(&mut self, same_bucket: F) where F: FnMut(&mut A::Item, &mut A::Item) -> bool, { let len = { let (dedup, _) = self.as_mut_slice().partition_dedup_by(same_bucket); dedup.len() }; self.truncate(len); } /// De-duplicates the vec according to the key selector given. 
#[cfg(feature = "nightly_slice_partition_dedup")] #[inline(always)] pub fn dedup_by_key(&mut self, mut key: F) where F: FnMut(&mut A::Item) -> K, K: PartialEq, { self.dedup_by(|a, b| key(a) == key(b)) } /// Creates a draining iterator that removes the specified range in the vector /// and yields the removed items. /// /// **Note: This method has significant performance issues compared to /// matching on the TinyVec and then calling drain on the Inline or Heap value /// inside. The draining iterator has to branch on every single access. It is /// provided for simplicity and compatability only.** /// /// ## Panics /// * If the start is greater than the end /// * If the end is past the edge of the vec. /// /// ## Example /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// let tv2: TinyVec<[i32; 4]> = tv.drain(1..).collect(); /// assert_eq!(tv.as_slice(), &[1][..]); /// assert_eq!(tv2.as_slice(), &[2, 3][..]); /// /// tv.drain(..); /// assert_eq!(tv.as_slice(), &[]); /// ``` #[inline] pub fn drain>( &mut self, range: R, ) -> TinyVecDrain<'_, A> { match self { TinyVec::Inline(i) => TinyVecDrain::Inline(i.drain(range)), TinyVec::Heap(h) => TinyVecDrain::Heap(h.drain(range)), } } /// Clone each element of the slice into this vec. /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2); /// tv.extend_from_slice(&[3, 4]); /// assert_eq!(tv.as_slice(), [1, 2, 3, 4]); /// ``` #[inline] pub fn extend_from_slice(&mut self, sli: &[A::Item]) where A::Item: Clone, { self.reserve(sli.len()); match self { TinyVec::Inline(a) => a.extend_from_slice(sli), TinyVec::Heap(h) => h.extend_from_slice(sli), } } /// Wraps up an array and uses the given length as the initial length. /// /// Note that the `From` impl for arrays assumes the full length is used. /// /// ## Panics /// /// The length must be less than or equal to the capacity of the array. #[inline] #[must_use] #[allow(clippy::match_wild_err_arm)] pub fn from_array_len(data: A, len: usize) -> Self { match Self::try_from_array_len(data, len) { Ok(out) => out, Err(_) => { panic!("TinyVec: length {} exceeds capacity {}!", len, A::CAPACITY) } } } /// This is an internal implementation detail of the `tiny_vec!` macro, and /// using it other than from that macro is not supported by this crate's /// SemVer guarantee. #[inline(always)] #[doc(hidden)] pub fn constructor_for_capacity(cap: usize) -> TinyVecConstructor { if cap <= A::CAPACITY { TinyVecConstructor::Inline(TinyVec::Inline) } else { TinyVecConstructor::Heap(TinyVec::Heap) } } /// Inserts an item at the position given, moving all following elements +1 /// index. /// /// ## Panics /// * If `index` > `len` /// /// ## Example /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 10] => 1, 2, 3); /// tv.insert(1, 4); /// assert_eq!(tv.as_slice(), &[1, 4, 2, 3]); /// tv.insert(4, 5); /// assert_eq!(tv.as_slice(), &[1, 4, 2, 3, 5]); /// ``` #[inline] pub fn insert(&mut self, index: usize, item: A::Item) { assert!( index <= self.len(), "insertion index (is {}) should be <= len (is {})", index, self.len() ); let arr = match self { TinyVec::Heap(v) => return v.insert(index, item), TinyVec::Inline(a) => a, }; if let Some(x) = arr.try_insert(index, item) { let mut v = Vec::with_capacity(arr.len() * 2); let mut it = arr.iter_mut().map(|r| core::mem::replace(r, Default::default())); v.extend(it.by_ref().take(index)); v.push(x); v.extend(it); *self = TinyVec::Heap(v); } } /// If the vec is empty. 
#[inline(always)] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Makes a new, empty vec. #[inline(always)] #[must_use] pub fn new() -> Self { Self::default() } /// Place an element onto the end of the vec. /// ## Panics /// * If the length of the vec would overflow the capacity. /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 10] => 1, 2, 3); /// tv.push(4); /// assert_eq!(tv.as_slice(), &[1, 2, 3, 4]); /// ``` #[inline] pub fn push(&mut self, val: A::Item) { // The code path for moving the inline contents to the heap produces a lot // of instructions, but we have a strong guarantee that this is a cold // path. LLVM doesn't know this, inlines it, and this tends to cause a // cascade of other bad inlining decisions because the body of push looks // huge even though nearly every call executes the same few instructions. // // Moving the logic out of line with #[cold] causes the hot code to be // inlined together, and we take the extra cost of a function call only // in rare cases. #[cold] fn drain_to_heap_and_push( arr: &mut ArrayVec, val: A::Item, ) -> TinyVec { /* Make the Vec twice the size to amortize the cost of draining */ let mut v = arr.drain_to_vec_and_reserve(arr.len()); v.push(val); TinyVec::Heap(v) } match self { TinyVec::Heap(v) => v.push(val), TinyVec::Inline(arr) => { if let Some(x) = arr.try_push(val) { *self = drain_to_heap_and_push(arr, x); } } } } /// Resize the vec to the new length. /// /// If it needs to be longer, it's filled with clones of the provided value. /// If it needs to be shorter, it's truncated. /// /// ## Example /// /// ```rust /// use tinyvec::*; /// /// let mut tv = tiny_vec!([&str; 10] => "hello"); /// tv.resize(3, "world"); /// assert_eq!(tv.as_slice(), &["hello", "world", "world"][..]); /// /// let mut tv = tiny_vec!([i32; 10] => 1, 2, 3, 4); /// tv.resize(2, 0); /// assert_eq!(tv.as_slice(), &[1, 2][..]); /// ``` #[inline] pub fn resize(&mut self, new_len: usize, new_val: A::Item) where A::Item: Clone, { self.resize_with(new_len, || new_val.clone()); } /// Resize the vec to the new length. /// /// If it needs to be longer, it's filled with repeated calls to the provided /// function. If it needs to be shorter, it's truncated. /// /// ## Example /// /// ```rust /// use tinyvec::*; /// /// let mut tv = tiny_vec!([i32; 3] => 1, 2, 3); /// tv.resize_with(5, Default::default); /// assert_eq!(tv.as_slice(), &[1, 2, 3, 0, 0][..]); /// /// let mut tv = tiny_vec!([i32; 2]); /// let mut p = 1; /// tv.resize_with(4, || { /// p *= 2; /// p /// }); /// assert_eq!(tv.as_slice(), &[2, 4, 8, 16][..]); /// ``` #[inline] pub fn resize_with A::Item>(&mut self, new_len: usize, f: F) { match new_len.checked_sub(self.len()) { None => return self.truncate(new_len), Some(n) => self.reserve(n), } match self { TinyVec::Inline(a) => a.resize_with(new_len, f), TinyVec::Heap(v) => v.resize_with(new_len, f), } } /// Splits the collection at the point given. /// /// * `[0, at)` stays in this vec /// * `[at, len)` ends up in the new vec. 
/// /// ## Panics /// * if at > len /// /// ## Example /// /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// let tv2 = tv.split_off(1); /// assert_eq!(tv.as_slice(), &[1][..]); /// assert_eq!(tv2.as_slice(), &[2, 3][..]); /// ``` #[inline] pub fn split_off(&mut self, at: usize) -> Self { match self { TinyVec::Inline(a) => TinyVec::Inline(a.split_off(at)), TinyVec::Heap(v) => TinyVec::Heap(v.split_off(at)), } } /// Creates a splicing iterator that removes the specified range in the /// vector, yields the removed items, and replaces them with elements from /// the provided iterator. /// /// `splice` fuses the provided iterator, so elements after the first `None` /// are ignored. /// /// ## Panics /// * If the start is greater than the end. /// * If the end is past the edge of the vec. /// * If the provided iterator panics. /// /// ## Example /// ```rust /// use tinyvec::*; /// let mut tv = tiny_vec!([i32; 4] => 1, 2, 3); /// let tv2: TinyVec<[i32; 4]> = tv.splice(1.., 4..=6).collect(); /// assert_eq!(tv.as_slice(), &[1, 4, 5, 6][..]); /// assert_eq!(tv2.as_slice(), &[2, 3][..]); /// /// tv.splice(.., None); /// assert_eq!(tv.as_slice(), &[]); /// ``` #[inline] pub fn splice( &mut self, range: R, replacement: I, ) -> TinyVecSplice<'_, A, core::iter::Fuse> where R: RangeBounds, I: IntoIterator, { use core::ops::Bound; let start = match range.start_bound() { Bound::Included(x) => *x, Bound::Excluded(x) => x.saturating_add(1), Bound::Unbounded => 0, }; let end = match range.end_bound() { Bound::Included(x) => x.saturating_add(1), Bound::Excluded(x) => *x, Bound::Unbounded => self.len(), }; assert!( start <= end, "TinyVec::splice> Illegal range, {} to {}", start, end ); assert!( end <= self.len(), "TinyVec::splice> Range ends at {} but length is only {}!", end, self.len() ); TinyVecSplice { removal_start: start, removal_end: end, parent: self, replacement: replacement.into_iter().fuse(), } } /// Wraps an array, using the given length as the starting length. /// /// If you want to use the whole length of the array, you can just use the /// `From` impl. /// /// ## Failure /// /// If the given length is greater than the capacity of the array this will /// error, and you'll get the array back in the `Err`. #[inline] pub fn try_from_array_len(data: A, len: usize) -> Result { let arr = ArrayVec::try_from_array_len(data, len)?; Ok(TinyVec::Inline(arr)) } } /// Draining iterator for `TinyVecDrain` /// /// See [`TinyVecDrain::drain`](TinyVecDrain::::drain) #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] pub enum TinyVecDrain<'p, A: Array> { #[allow(missing_docs)] Inline(ArrayVecDrain<'p, A::Item>), #[allow(missing_docs)] Heap(vec::Drain<'p, A::Item>), } impl<'p, A: Array> Iterator for TinyVecDrain<'p, A> { type Item = A::Item; impl_mirrored! { type Mirror = TinyVecDrain; #[inline] fn next(self: &mut Self) -> Option; #[inline] fn nth(self: &mut Self, n: usize) -> Option; #[inline] fn size_hint(self: &Self) -> (usize, Option); #[inline] fn last(self: Self) -> Option; #[inline] fn count(self: Self) -> usize; } #[inline] fn for_each(self, f: F) { match self { TinyVecDrain::Inline(i) => i.for_each(f), TinyVecDrain::Heap(h) => h.for_each(f), } } } impl<'p, A: Array> DoubleEndedIterator for TinyVecDrain<'p, A> { impl_mirrored! 
{ type Mirror = TinyVecDrain; #[inline] fn next_back(self: &mut Self) -> Option; #[cfg(feature = "rustc_1_40")] #[inline] fn nth_back(self: &mut Self, n: usize) -> Option; } } /// Splicing iterator for `TinyVec` /// See [`TinyVec::splice`](TinyVec::::splice) #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] pub struct TinyVecSplice<'p, A: Array, I: Iterator> { parent: &'p mut TinyVec, removal_start: usize, removal_end: usize, replacement: I, } impl<'p, A, I> Iterator for TinyVecSplice<'p, A, I> where A: Array, I: Iterator, { type Item = A::Item; #[inline] fn next(&mut self) -> Option { if self.removal_start < self.removal_end { match self.replacement.next() { Some(replacement) => { let removed = core::mem::replace( &mut self.parent[self.removal_start], replacement, ); self.removal_start += 1; Some(removed) } None => { let removed = self.parent.remove(self.removal_start); self.removal_end -= 1; Some(removed) } } } else { None } } #[inline] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'p, A, I> ExactSizeIterator for TinyVecSplice<'p, A, I> where A: Array, I: Iterator, { #[inline] fn len(&self) -> usize { self.removal_end - self.removal_start } } impl<'p, A, I> FusedIterator for TinyVecSplice<'p, A, I> where A: Array, I: Iterator, { } impl<'p, A, I> DoubleEndedIterator for TinyVecSplice<'p, A, I> where A: Array, I: Iterator + DoubleEndedIterator, { #[inline] fn next_back(&mut self) -> Option { if self.removal_start < self.removal_end { match self.replacement.next_back() { Some(replacement) => { let removed = core::mem::replace( &mut self.parent[self.removal_end - 1], replacement, ); self.removal_end -= 1; Some(removed) } None => { let removed = self.parent.remove(self.removal_end - 1); self.removal_end -= 1; Some(removed) } } } else { None } } } impl<'p, A: Array, I: Iterator> Drop for TinyVecSplice<'p, A, I> { fn drop(&mut self) { for _ in self.by_ref() {} let (lower_bound, _) = self.replacement.size_hint(); self.parent.reserve(lower_bound); for replacement in self.replacement.by_ref() { self.parent.insert(self.removal_end, replacement); self.removal_end += 1; } } } impl AsMut<[A::Item]> for TinyVec { #[inline(always)] #[must_use] fn as_mut(&mut self) -> &mut [A::Item] { &mut *self } } impl AsRef<[A::Item]> for TinyVec { #[inline(always)] #[must_use] fn as_ref(&self) -> &[A::Item] { &*self } } impl Borrow<[A::Item]> for TinyVec { #[inline(always)] #[must_use] fn borrow(&self) -> &[A::Item] { &*self } } impl BorrowMut<[A::Item]> for TinyVec { #[inline(always)] #[must_use] fn borrow_mut(&mut self) -> &mut [A::Item] { &mut *self } } impl Extend for TinyVec { #[inline] fn extend>(&mut self, iter: T) { let iter = iter.into_iter(); let (lower_bound, _) = iter.size_hint(); self.reserve(lower_bound); let a = match self { TinyVec::Heap(h) => return h.extend(iter), TinyVec::Inline(a) => a, }; let mut iter = a.fill(iter); let maybe = iter.next(); let surely = match maybe { Some(x) => x, None => return, }; let mut v = a.drain_to_vec_and_reserve(a.len()); v.push(surely); v.extend(iter); *self = TinyVec::Heap(v); } } impl From> for TinyVec { #[inline(always)] #[must_use] fn from(arr: ArrayVec) -> Self { TinyVec::Inline(arr) } } impl From for TinyVec { fn from(array: A) -> Self { TinyVec::Inline(ArrayVec::from(array)) } } impl From<&'_ [T]> for TinyVec where T: Clone + Default, A: Array, { #[inline] #[must_use] fn from(slice: &[T]) -> Self { if let Ok(arr) = ArrayVec::try_from(slice) { TinyVec::Inline(arr) } else { TinyVec::Heap(slice.into()) } } } impl From<&'_ 
mut [T]> for TinyVec where T: Clone + Default, A: Array, { #[inline] #[must_use] fn from(slice: &mut [T]) -> Self { Self::from(&*slice) } } impl FromIterator for TinyVec { #[inline] #[must_use] fn from_iter>(iter: T) -> Self { let mut av = Self::default(); av.extend(iter); av } } /// Iterator for consuming an `TinyVec` and returning owned elements. #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] pub enum TinyVecIterator { #[allow(missing_docs)] Inline(ArrayVecIterator), #[allow(missing_docs)] Heap(alloc::vec::IntoIter), } impl TinyVecIterator { impl_mirrored! { type Mirror = TinyVecIterator; /// Returns the remaining items of this iterator as a slice. #[inline] #[must_use] pub fn as_slice(self: &Self) -> &[A::Item]; } } impl FusedIterator for TinyVecIterator {} impl Iterator for TinyVecIterator { type Item = A::Item; impl_mirrored! { type Mirror = TinyVecIterator; #[inline] fn next(self: &mut Self) -> Option; #[inline(always)] #[must_use] fn size_hint(self: &Self) -> (usize, Option); #[inline(always)] fn count(self: Self) -> usize; #[inline] fn last(self: Self) -> Option; #[inline] fn nth(self: &mut Self, n: usize) -> Option; } } impl DoubleEndedIterator for TinyVecIterator { impl_mirrored! { type Mirror = TinyVecIterator; #[inline] fn next_back(self: &mut Self) -> Option; #[cfg(feature = "rustc_1_40")] #[inline] fn nth_back(self: &mut Self, n: usize) -> Option; } } impl Debug for TinyVecIterator where A::Item: Debug, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { f.debug_tuple("TinyVecIterator").field(&self.as_slice()).finish() } } impl IntoIterator for TinyVec { type Item = A::Item; type IntoIter = TinyVecIterator; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { match self { TinyVec::Inline(a) => TinyVecIterator::Inline(a.into_iter()), TinyVec::Heap(v) => TinyVecIterator::Heap(v.into_iter()), } } } impl<'a, A: Array> IntoIterator for &'a mut TinyVec { type Item = &'a mut A::Item; type IntoIter = core::slice::IterMut<'a, A::Item>; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl<'a, A: Array> IntoIterator for &'a TinyVec { type Item = &'a A::Item; type IntoIter = core::slice::Iter<'a, A::Item>; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl PartialEq for TinyVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.as_slice().eq(other.as_slice()) } } impl Eq for TinyVec where A::Item: Eq {} impl PartialOrd for TinyVec where A::Item: PartialOrd, { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.as_slice().partial_cmp(other.as_slice()) } } impl Ord for TinyVec where A::Item: Ord, { #[inline] #[must_use] fn cmp(&self, other: &Self) -> core::cmp::Ordering { self.as_slice().cmp(other.as_slice()) } } impl PartialEq<&A> for TinyVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &&A) -> bool { self.as_slice().eq(other.as_slice()) } } impl PartialEq<&[A::Item]> for TinyVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &&[A::Item]) -> bool { self.as_slice().eq(*other) } } impl Hash for TinyVec where A::Item: Hash, { #[inline] fn hash(&self, state: &mut H) { self.as_slice().hash(state) } } // // // // // // // // // Formatting impls // // // // // // // // impl Binary for TinyVec where A::Item: Binary, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, 
"[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Binary::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Debug for TinyVec where A::Item: Debug, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Debug::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Display for TinyVec where A::Item: Display, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Display::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl LowerExp for TinyVec where A::Item: LowerExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl LowerHex for TinyVec where A::Item: LowerHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Octal for TinyVec where A::Item: Octal, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Octal::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Pointer for TinyVec where A::Item: Pointer, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Pointer::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl UpperExp for TinyVec where A::Item: UpperExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } UpperExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl UpperHex for TinyVec where A::Item: UpperHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } UpperHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") 
} } #[cfg(feature = "serde")] #[cfg_attr(docs_rs, doc(cfg(feature = "alloc")))] struct TinyVecVisitor(PhantomData); #[cfg(feature = "serde")] impl<'de, A: Array> Visitor<'de> for TinyVecVisitor where A::Item: Deserialize<'de>, { type Value = TinyVec; fn expecting( &self, formatter: &mut core::fmt::Formatter, ) -> core::fmt::Result { formatter.write_str("a sequence") } fn visit_seq(self, mut seq: S) -> Result where S: SeqAccess<'de>, { let mut new_tinyvec = match seq.size_hint() { Some(expected_size) => TinyVec::with_capacity(expected_size), None => Default::default(), }; while let Some(value) = seq.next_element()? { new_tinyvec.push(value); } Ok(new_tinyvec) } } vendor/tinyvec/src/array/0000775000175000017500000000000014160055207016235 5ustar mwhudsonmwhudsonvendor/tinyvec/src/array/generated_impl.rs0000664000175000017500000060030714160055207021570 0ustar mwhudsonmwhudson// Generated file, to regenerate run // ./gen-array-impls.sh > src/array/generated_impl.rs // from the repo root use super::Array; impl Array for [T; 0] { type Item = T; const CAPACITY: usize = 0; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [] } } impl Array for [T; 1] { type Item = T; const CAPACITY: usize = 1; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [T::default()] } } impl Array for [T; 2] { type Item = T; const CAPACITY: usize = 2; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [T::default(), T::default()] } } impl Array for [T; 3] { type Item = T; const CAPACITY: usize = 3; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [T::default(), T::default(), T::default()] } } impl Array for [T; 4] { type Item = T; const CAPACITY: usize = 4; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [T::default(), T::default(), T::default(), T::default()] } } impl Array for [T; 5] { type Item = T; const CAPACITY: usize = 5; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [T::default(), T::default(), T::default(), T::default(), T::default()] } } impl Array for [T; 6] { type Item = T; const CAPACITY: usize = 6; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 7] { type Item = T; const CAPACITY: usize = 7; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 8] { 
type Item = T; const CAPACITY: usize = 8; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 9] { type Item = T; const CAPACITY: usize = 9; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 10] { type Item = T; const CAPACITY: usize = 10; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 11] { type Item = T; const CAPACITY: usize = 11; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 12] { type Item = T; const CAPACITY: usize = 12; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 13] { type Item = T; const CAPACITY: usize = 13; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 14] { type Item = T; const CAPACITY: usize = 14; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 15] { type Item = T; const CAPACITY: usize = 15; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 16] { type Item = T; const CAPACITY: usize = 16; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } 
#[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 17] { type Item = T; const CAPACITY: usize = 17; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 18] { type Item = T; const CAPACITY: usize = 18; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 19] { type Item = T; const CAPACITY: usize = 19; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 20] { type Item = T; const CAPACITY: usize = 20; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 21] { type Item = T; const CAPACITY: usize = 21; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 22] { type Item = T; const CAPACITY: usize = 22; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 23] { type Item = T; const CAPACITY: usize = 23; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 24] { type Item = T; const CAPACITY: usize = 24; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 25] { type Item = T; const CAPACITY: usize = 25; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 26] { type Item = T; const CAPACITY: usize = 26; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 27] { type Item = T; const CAPACITY: usize = 27; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 28] { type Item = T; const CAPACITY: usize = 28; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 29] { type Item = T; const CAPACITY: usize = 29; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 30] { type Item = T; const CAPACITY: usize = 30; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 31] { type Item = T; const CAPACITY: usize = 31; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 32] { type Item = T; const CAPACITY: usize = 32; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 33] { type Item = T; const CAPACITY: usize = 33; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 64] { type Item = T; const CAPACITY: usize = 64; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 128] { type Item = T; const CAPACITY: usize = 128; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 256] { type Item = T; const CAPACITY: usize = 256; 
#[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 512] { type Item = T; const CAPACITY: usize = 512; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 1024] { type Item = T; const CAPACITY: usize = 1024; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 2048] { type Item = T; const CAPACITY: usize = 2048; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } impl Array for [T; 4096] { type Item = T; const CAPACITY: usize = 4096; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [ T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), 
T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), T::default(), ] } } vendor/tinyvec/src/array/const_generic_impl.rs0000664000175000017500000000063214160055207022447 0ustar mwhudsonmwhudsonuse super::Array; impl Array for [T; N] { type Item = T; const CAPACITY: usize = N; #[inline(always)] #[must_use] fn as_slice(&self) -> &[T] { &*self } #[inline(always)] #[must_use] fn as_slice_mut(&mut self) -> &mut [T] { &mut *self } #[inline(always)] fn default() -> Self { [(); N].map(|_| Default::default()) } } vendor/tinyvec/src/array.rs0000664000175000017500000000331414160055207016604 0ustar mwhudsonmwhudson/// A trait for types that are an array. /// /// An "array", for our purposes, has the following properties: /// * Owns some number of elements. /// * The element type can be generic, but must implement [`Default`]. /// * The capacity is fixed at compile time, based on the implementing type. /// * You can get a shared or mutable slice to the elements. /// /// You are generally **not** expected to need to implement this yourself. 
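// Illustrative sketch (editor addition, not from the vendored source):
// the const-generic impl above builds a default `[T; N]` by mapping over a
// zero-sized `[(); N]`, which works for any `T: Default` and does not
// require `T: Copy`. `default_array` is a made-up name for this demo.
fn default_array<T: Default, const N: usize>() -> [T; N] {
  [(); N].map(|_| T::default())
}

fn main() {
  // `String` is `Default` but not `Copy`, so `[Default::default(); 3]`
  // would not compile, while the map-based construction does.
  let strings: [String; 3] = default_array();
  assert!(strings.iter().all(String::is_empty));
}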
It is /// already implemented for all the major array lengths (`0..=32` and the powers /// of 2 up to 4,096), or for all array lengths with the feature `rustc_1_55`. /// /// **Additional lengths can easily be added upon request.** /// /// ## Safety Reminder /// /// Just a reminder: this trait is 100% safe, which means that `unsafe` code /// **must not** rely on an instance of this trait being correct. pub trait Array { /// The type of the items in the thing. type Item: Default; /// The number of slots in the thing. const CAPACITY: usize; /// Gives a shared slice over the whole thing. /// /// A correct implementation will return a slice with a length equal to the /// `CAPACITY` value. fn as_slice(&self) -> &[Self::Item]; /// Gives a unique slice over the whole thing. /// /// A correct implementation will return a slice with a length equal to the /// `CAPACITY` value. fn as_slice_mut(&mut self) -> &mut [Self::Item]; /// Create a default-initialized instance of ourself, similar to the /// [`Default`] trait, but implemented for the same range of sizes as /// [`Array`]. fn default() -> Self; } #[cfg(feature = "rustc_1_55")] mod const_generic_impl; #[cfg(not(feature = "rustc_1_55"))] mod generated_impl; vendor/tinyvec/src/arrayvec_drain.rs0000664000175000017500000000451114160055207020457 0ustar mwhudsonmwhudsonuse super::*; use core::{ ops::{Bound, RangeBounds}, slice, }; /// Draining iterator for [`ArrayVec`] /// /// See [`ArrayVec::drain`](ArrayVec::drain) pub struct ArrayVecDrain<'a, T: 'a + Default> { iter: slice::IterMut<'a, T>, } impl<'a, T: 'a + Default> ArrayVecDrain<'a, T> { pub(crate) fn new(arr: &'a mut ArrayVec, range: R) -> Self where A: Array, R: RangeBounds, { let start = match range.start_bound() { Bound::Unbounded => 0, Bound::Included(&n) => n, Bound::Excluded(&n) => n.saturating_add(1), }; let end = match range.end_bound() { Bound::Unbounded => arr.len(), Bound::Included(&n) => n.saturating_add(1), Bound::Excluded(&n) => n, }; assert!( start <= end, "ArrayVec::drain> Illegal range, {} to {}", start, end ); assert!( end <= arr.len(), "ArrayVec::drain> Range ends at {}, but length is only {}", end, arr.len() ); let len = end - start; let to_rotate = &mut arr[start..]; to_rotate.rotate_left(len); let oldlen = arr.len(); let newlen = oldlen - len; arr.set_len(newlen); let slice = &mut arr.data.as_slice_mut()[newlen..oldlen]; let iter = slice.iter_mut(); Self { iter } } } impl<'a, T: 'a + Default> DoubleEndedIterator for ArrayVecDrain<'a, T> { fn next_back(&mut self) -> Option { self.iter.next_back().map(take) } #[cfg(feature = "rustc_1_40")] fn nth_back(&mut self, n: usize) -> Option { self.iter.nth_back(n).map(take) } } impl<'a, T: 'a + Default> Iterator for ArrayVecDrain<'a, T> { type Item = T; fn next(&mut self) -> Option { self.iter.next().map(take) } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } fn nth(&mut self, n: usize) -> Option { self.iter.nth(n).map(take) } fn last(self) -> Option { self.iter.last().map(take) } fn for_each(self, f: F) where F: FnMut(Self::Item), { self.iter.map(take).for_each(f) } } impl<'a, T: 'a + Default> FusedIterator for ArrayVecDrain<'a, T> {} impl<'a, T: 'a + Default> ExactSizeIterator for ArrayVecDrain<'a, T> {} /* No need to impl Drop! */ vendor/tinyvec/src/slicevec.rs0000664000175000017500000006523414160055207017274 0ustar mwhudsonmwhudson#![allow(unused_variables)] #![allow(missing_docs)] use super::*; /// A slice-backed vector-like data structure. 
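// Illustrative sketch (editor addition, not from the vendored source):
// a standalone example of writing code that is generic over the `Array`
// trait defined above. `summarize` is a made-up name for this demo.
use tinyvec::Array;

fn summarize<A: Array<Item = u32>>(backing: &A) -> (usize, u32) {
  // `CAPACITY` is a compile-time constant, and `as_slice` always covers the
  // whole backing array, not just the "live" part of a vec built on it.
  (A::CAPACITY, backing.as_slice().iter().sum())
}

fn main() {
  let arr = [1_u32, 2, 3, 4];
  assert_eq!(summarize(&arr), (4, 10));
}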
/// /// This is a very similar concept to `ArrayVec`, but instead /// of the backing memory being an owned array, the backing /// memory is a unique-borrowed slice. You can thus create /// one of these structures "around" some slice that you're /// working with to make it easier to manipulate. /// /// * Has a fixed capacity (the initial slice size). /// * Has a variable length. pub struct SliceVec<'s, T> { data: &'s mut [T], len: usize, } impl<'s, T> Default for SliceVec<'s, T> { #[inline(always)] #[must_use] fn default() -> Self { Self { data: &mut [], len: 0 } } } impl<'s, T> Deref for SliceVec<'s, T> { type Target = [T]; #[inline(always)] #[must_use] fn deref(&self) -> &Self::Target { &self.data[..self.len] } } impl<'s, T> DerefMut for SliceVec<'s, T> { #[inline(always)] #[must_use] fn deref_mut(&mut self) -> &mut Self::Target { &mut self.data[..self.len] } } impl<'s, T, I> Index for SliceVec<'s, T> where I: SliceIndex<[T]>, { type Output = >::Output; #[inline(always)] #[must_use] fn index(&self, index: I) -> &Self::Output { &self.deref()[index] } } impl<'s, T, I> IndexMut for SliceVec<'s, T> where I: SliceIndex<[T]>, { #[inline(always)] #[must_use] fn index_mut(&mut self, index: I) -> &mut Self::Output { &mut self.deref_mut()[index] } } impl<'s, T> SliceVec<'s, T> { #[inline] pub fn append(&mut self, other: &mut Self) where T: Default, { for item in other.drain(..) { self.push(item) } } /// A `*mut` pointer to the backing slice. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing slice. #[inline(always)] #[must_use] pub fn as_mut_ptr(&mut self) -> *mut T { self.data.as_mut_ptr() } /// Performs a `deref_mut`, into unique slice form. #[inline(always)] #[must_use] pub fn as_mut_slice(&mut self) -> &mut [T] { self.deref_mut() } /// A `*const` pointer to the backing slice. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing slice. #[inline(always)] #[must_use] pub fn as_ptr(&self) -> *const T { self.data.as_ptr() } /// Performs a `deref`, into shared slice form. #[inline(always)] #[must_use] pub fn as_slice(&self) -> &[T] { self.deref() } /// The capacity of the `SliceVec`. /// /// This the length of the initial backing slice. #[inline(always)] #[must_use] pub fn capacity(&self) -> usize { self.data.len() } /// Truncates the `SliceVec` down to length 0. #[inline(always)] pub fn clear(&mut self) where T: Default, { self.truncate(0) } /// Creates a draining iterator that removes the specified range in the vector /// and yields the removed items. /// /// ## Panics /// * If the start is greater than the end /// * If the end is past the edge of the vec. 
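// Illustrative sketch (editor addition, not from the vendored source):
// `SliceVec::append` (defined above) drains `other` into `self`, so the
// source vec ends up empty while its backing slice is left holding default
// values. Variable names are made up for this demo.
use tinyvec::SliceVec;

fn main() {
  let (mut a, mut b) = ([1, 2, 0, 0], [3, 4]);
  let mut dst = SliceVec::from_slice_len(&mut a, 2); // len 2, capacity 4
  let mut src = SliceVec::from(&mut b);              // full slice as length
  dst.append(&mut src);
  assert_eq!(dst.as_slice(), &[1, 2, 3, 4]);
  assert!(src.is_empty());
}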
/// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [6, 7, 8]; /// let mut sv = SliceVec::from(&mut arr); /// let drained_values: ArrayVec<[i32; 4]> = sv.drain(1..).collect(); /// assert_eq!(sv.as_slice(), &[6][..]); /// assert_eq!(drained_values.as_slice(), &[7, 8][..]); /// /// sv.drain(..); /// assert_eq!(sv.as_slice(), &[]); /// ``` #[inline] pub fn drain<'p, R: RangeBounds>( &'p mut self, range: R, ) -> SliceVecDrain<'p, 's, T> where T: Default, { use core::ops::Bound; let start = match range.start_bound() { Bound::Included(x) => *x, Bound::Excluded(x) => x.saturating_add(1), Bound::Unbounded => 0, }; let end = match range.end_bound() { Bound::Included(x) => x.saturating_add(1), Bound::Excluded(x) => *x, Bound::Unbounded => self.len, }; assert!( start <= end, "SliceVec::drain> Illegal range, {} to {}", start, end ); assert!( end <= self.len, "SliceVec::drain> Range ends at {} but length is only {}!", end, self.len ); SliceVecDrain { parent: self, target_start: start, target_index: start, target_end: end, } } #[inline] pub fn extend_from_slice(&mut self, sli: &[T]) where T: Clone, { if sli.is_empty() { return; } let new_len = self.len + sli.len(); if new_len > self.capacity() { panic!( "SliceVec::extend_from_slice> total length {} exceeds capacity {}", new_len, self.capacity() ) } let target = &mut self.data[self.len..new_len]; target.clone_from_slice(sli); self.set_len(new_len); } /// Fill the vector until its capacity has been reached. /// /// Successively fills unused space in the spare slice of the vector with /// elements from the iterator. It then returns the remaining iterator /// without exhausting it. This also allows appending the head of an /// infinite iterator. /// /// This is an alternative to `Extend::extend` method for cases where the /// length of the iterator can not be checked. Since this vector can not /// reallocate to increase its capacity, it is unclear what to do with /// remaining elements in the iterator and the iterator itself. The /// interface also provides no way to communicate this to the caller. /// /// ## Panics /// * If the `next` method of the provided iterator panics. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut arr = [7, 7, 7, 7]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 0); /// let mut to_inf = sv.fill(0..); /// assert_eq!(&sv[..], [0, 1, 2, 3]); /// assert_eq!(to_inf.next(), Some(4)); /// ``` #[inline] pub fn fill>(&mut self, iter: I) -> I::IntoIter { let mut iter = iter.into_iter(); for element in iter.by_ref().take(self.capacity() - self.len()) { self.push(element); } iter } /// Wraps up a slice and uses the given length as the initial length. /// /// If you want to simply use the full slice, use `from` instead. /// /// ## Panics /// /// * The length specified must be less than or equal to the capacity of the /// slice. #[inline] #[must_use] #[allow(clippy::match_wild_err_arm)] pub fn from_slice_len(data: &'s mut [T], len: usize) -> Self { assert!(len <= data.len()); Self { data, len } } /// Inserts an item at the position given, moving all following elements +1 /// index. 
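// Illustrative sketch (editor addition, not from the vendored source):
// because a `SliceVec` borrows its storage, it is handy for filling a
// caller-provided buffer without allocating. `buf` is a made-up name.
use tinyvec::SliceVec;

fn main() {
  let mut buf = [0_u8; 8];                            // caller-owned storage
  let mut sv = SliceVec::from_slice_len(&mut buf, 0); // length 0, capacity 8
  sv.extend_from_slice(b"abc");
  sv.push(b'!');
  assert_eq!(sv.as_slice(), b"abc!");
  assert_eq!(sv.capacity(), 8);
  drop(sv);                                           // release the borrow
  assert_eq!(&buf[..4], b"abc!");
}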
/// /// ## Panics /// * If `index` > `len` /// * If the capacity is exhausted /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [1, 2, 3, 0, 0]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 3); /// sv.insert(1, 4); /// assert_eq!(sv.as_slice(), &[1, 4, 2, 3]); /// sv.insert(4, 5); /// assert_eq!(sv.as_slice(), &[1, 4, 2, 3, 5]); /// ``` #[inline] pub fn insert(&mut self, index: usize, item: T) { if index > self.len { panic!("SliceVec::insert> index {} is out of bounds {}", index, self.len); } // Try to push the element. self.push(item); // And move it into its place. self.as_mut_slice()[index..].rotate_right(1); } /// Checks if the length is 0. #[inline(always)] #[must_use] pub fn is_empty(&self) -> bool { self.len == 0 } /// The length of the `SliceVec` (in elements). #[inline(always)] #[must_use] pub fn len(&self) -> usize { self.len } /// Remove and return the last element of the vec, if there is one. /// /// ## Failure /// * If the vec is empty you get `None`. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [1, 2]; /// let mut sv = SliceVec::from(&mut arr); /// assert_eq!(sv.pop(), Some(2)); /// assert_eq!(sv.pop(), Some(1)); /// assert_eq!(sv.pop(), None); /// ``` #[inline] pub fn pop(&mut self) -> Option where T: Default, { if self.len > 0 { self.len -= 1; let out = take(&mut self.data[self.len]); Some(out) } else { None } } /// Place an element onto the end of the vec. /// /// ## Panics /// * If the length of the vec would overflow the capacity. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [0, 0]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 0); /// assert_eq!(&sv[..], []); /// sv.push(1); /// assert_eq!(&sv[..], [1]); /// sv.push(2); /// assert_eq!(&sv[..], [1, 2]); /// // sv.push(3); this would overflow the ArrayVec and panic! /// ``` #[inline(always)] pub fn push(&mut self, val: T) { if self.len < self.capacity() { self.data[self.len] = val; self.len += 1; } else { panic!("SliceVec::push> capacity overflow") } } /// Removes the item at `index`, shifting all others down by one index. /// /// Returns the removed element. /// /// ## Panics /// /// * If the index is out of bounds. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut arr = [1, 2, 3]; /// let mut sv = SliceVec::from(&mut arr); /// assert_eq!(sv.remove(1), 2); /// assert_eq!(&sv[..], [1, 3]); /// ``` #[inline] pub fn remove(&mut self, index: usize) -> T where T: Default, { let targets: &mut [T] = &mut self.deref_mut()[index..]; let item = take(&mut targets[0]); targets.rotate_left(1); self.len -= 1; item } /// As [`resize_with`](SliceVec::resize_with) /// and it clones the value as the closure. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// // bigger /// let mut arr = ["hello", "", "", "", ""]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 1); /// sv.resize(3, "world"); /// assert_eq!(&sv[..], ["hello", "world", "world"]); /// /// // smaller /// let mut arr = ['a', 'b', 'c', 'd']; /// let mut sv = SliceVec::from(&mut arr); /// sv.resize(2, 'z'); /// assert_eq!(&sv[..], ['a', 'b']); /// ``` #[inline] pub fn resize(&mut self, new_len: usize, new_val: T) where T: Clone, { self.resize_with(new_len, || new_val.clone()) } /// Resize the vec to the new length. /// /// * If it needs to be longer, it's filled with repeated calls to the /// provided function. /// * If it needs to be shorter, it's truncated. /// * If the type needs to drop the truncated slots are filled with calls to /// the provided function. 
/// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut arr = [1, 2, 3, 7, 7, 7, 7]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 3); /// sv.resize_with(5, Default::default); /// assert_eq!(&sv[..], [1, 2, 3, 0, 0]); /// /// let mut arr = [0, 0, 0, 0]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 0); /// let mut p = 1; /// sv.resize_with(4, || { /// p *= 2; /// p /// }); /// assert_eq!(&sv[..], [2, 4, 8, 16]); /// ``` #[inline] pub fn resize_with T>(&mut self, new_len: usize, mut f: F) { match new_len.checked_sub(self.len) { None => { if needs_drop::() { while self.len() > new_len { self.len -= 1; self.data[self.len] = f(); } } else { self.len = new_len; } } Some(new_elements) => { for _ in 0..new_elements { self.push(f()); } } } } /// Walk the vec and keep only the elements that pass the predicate given. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// /// let mut arr = [1, 1, 2, 3, 3, 4]; /// let mut sv = SliceVec::from(&mut arr); /// sv.retain(|&x| x % 2 == 0); /// assert_eq!(&sv[..], [2, 4]); /// ``` #[inline] pub fn retain bool>(&mut self, mut acceptable: F) where T: Default, { // Drop guard to contain exactly the remaining elements when the test // panics. struct JoinOnDrop<'vec, Item> { items: &'vec mut [Item], done_end: usize, // Start of tail relative to `done_end`. tail_start: usize, } impl Drop for JoinOnDrop<'_, Item> { fn drop(&mut self) { self.items[self.done_end..].rotate_left(self.tail_start); } } let mut rest = JoinOnDrop { items: self.data, done_end: 0, tail_start: 0 }; for idx in 0..self.len { // Loop start invariant: idx = rest.done_end + rest.tail_start if !acceptable(&rest.items[idx]) { let _ = take(&mut rest.items[idx]); self.len -= 1; rest.tail_start += 1; } else { rest.items.swap(rest.done_end, idx); rest.done_end += 1; } } } /// Forces the length of the vector to `new_len`. /// /// ## Panics /// * If `new_len` is greater than the vec's capacity. /// /// ## Safety /// * This is a fully safe operation! The inactive memory already counts as /// "initialized" by Rust's rules. /// * Other than "the memory is initialized" there are no other guarantees /// regarding what you find in the inactive portion of the vec. #[inline(always)] pub fn set_len(&mut self, new_len: usize) { if new_len > self.capacity() { // Note(Lokathor): Technically we don't have to panic here, and we could // just let some other call later on trigger a panic on accident when the // length is wrong. However, it's a lot easier to catch bugs when things // are more "fail-fast". panic!( "SliceVec::set_len> new length {} exceeds capacity {}", new_len, self.capacity() ) } else { self.len = new_len; } } /// Splits the collection at the point given. /// /// * `[0, at)` stays in this vec (and this vec is now full). /// * `[at, len)` ends up in the new vec (with any spare capacity). /// /// ## Panics /// * if `at` > `self.len()` /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut arr = [1, 2, 3]; /// let mut sv = SliceVec::from(&mut arr); /// let sv2 = sv.split_off(1); /// assert_eq!(&sv[..], [1]); /// assert_eq!(&sv2[..], [2, 3]); /// ``` #[inline] pub fn split_off<'a>(&'a mut self, at: usize) -> SliceVec<'s, T> { let mut new = Self::default(); let backing: &'s mut [T] = replace(&mut self.data, &mut []); let (me, other) = backing.split_at_mut(at); new.len = self.len - at; new.data = other; self.len = me.len(); self.data = me; new } /// Remove an element, swapping the end of the vec into its place. /// /// ## Panics /// * If the index is out of bounds. 
/// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = ["foo", "bar", "quack", "zap"]; /// let mut sv = SliceVec::from(&mut arr); /// /// assert_eq!(sv.swap_remove(1), "bar"); /// assert_eq!(&sv[..], ["foo", "zap", "quack"]); /// /// assert_eq!(sv.swap_remove(0), "foo"); /// assert_eq!(&sv[..], ["quack", "zap"]); /// ``` #[inline] pub fn swap_remove(&mut self, index: usize) -> T where T: Default, { assert!( index < self.len, "SliceVec::swap_remove> index {} is out of bounds {}", index, self.len ); if index == self.len - 1 { self.pop().unwrap() } else { let i = self.pop().unwrap(); replace(&mut self[index], i) } } /// Reduces the vec's length to the given value. /// /// If the vec is already shorter than the input, nothing happens. #[inline] pub fn truncate(&mut self, new_len: usize) where T: Default, { if needs_drop::() { while self.len > new_len { self.pop(); } } else { self.len = self.len.min(new_len); } } /// Wraps a slice, using the given length as the starting length. /// /// If you want to use the whole length of the slice, you can just use the /// `From` impl. /// /// ## Failure /// /// If the given length is greater than the length of the slice you get /// `None`. #[inline] pub fn try_from_slice_len(data: &'s mut [T], len: usize) -> Option { if len <= data.len() { Some(Self { data, len }) } else { None } } } #[cfg(feature = "grab_spare_slice")] impl<'s, T> SliceVec<'s, T> { /// Obtain the shared slice of the array _after_ the active memory. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [0; 4]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 0); /// assert_eq!(sv.grab_spare_slice().len(), 4); /// sv.push(10); /// sv.push(11); /// sv.push(12); /// sv.push(13); /// assert_eq!(sv.grab_spare_slice().len(), 0); /// ``` #[inline(always)] pub fn grab_spare_slice(&self) -> &[T] { &self.data[self.len..] } /// Obtain the mutable slice of the array _after_ the active memory. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [0; 4]; /// let mut sv = SliceVec::from_slice_len(&mut arr, 0); /// assert_eq!(sv.grab_spare_slice_mut().len(), 4); /// sv.push(10); /// sv.push(11); /// assert_eq!(sv.grab_spare_slice_mut().len(), 2); /// ``` #[inline(always)] pub fn grab_spare_slice_mut(&mut self) -> &mut [T] { &mut self.data[self.len..] } } impl<'s, T> From<&'s mut [T]> for SliceVec<'s, T> { /// Uses the full slice as the initial length. /// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [0_i32; 2]; /// let mut sv = SliceVec::from(&mut arr[..]); /// ``` fn from(data: &'s mut [T]) -> Self { let len = data.len(); Self { data, len } } } impl<'s, T, A> From<&'s mut A> for SliceVec<'s, T> where A: AsMut<[T]>, { /// Calls `AsRef::as_mut` then uses the full slice as the initial length. 
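// Illustrative sketch (editor addition, not from the vendored source):
// `try_from_slice_len` is the non-panicking sibling of `from_slice_len`;
// a length longer than the backing slice yields `None`.
use tinyvec::SliceVec;

fn main() {
  let mut storage = [0_i32; 3];
  assert!(SliceVec::try_from_slice_len(&mut storage, 5).is_none()); // 5 > 3
  let sv = SliceVec::try_from_slice_len(&mut storage, 2).unwrap();
  assert_eq!(sv.len(), 2);
}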
/// ## Example /// ```rust /// # use tinyvec::*; /// let mut arr = [0, 0]; /// let mut sv = SliceVec::from(&mut arr); /// ``` fn from(a: &'s mut A) -> Self { let data = a.as_mut(); let len = data.len(); Self { data, len } } } /// Draining iterator for [`SliceVec`] /// /// See [`SliceVec::drain`](SliceVec::drain) pub struct SliceVecDrain<'p, 's, T: Default> { parent: &'p mut SliceVec<'s, T>, target_start: usize, target_index: usize, target_end: usize, } impl<'p, 's, T: Default> Iterator for SliceVecDrain<'p, 's, T> { type Item = T; #[inline] fn next(&mut self) -> Option { if self.target_index != self.target_end { let out = take(&mut self.parent[self.target_index]); self.target_index += 1; Some(out) } else { None } } } impl<'p, 's, T: Default> FusedIterator for SliceVecDrain<'p, 's, T> {} impl<'p, 's, T: Default> Drop for SliceVecDrain<'p, 's, T> { #[inline] fn drop(&mut self) { // Changed because it was moving `self`, it's also more clear and the std // does the same self.for_each(drop); // Implementation very similar to [`SliceVec::remove`](SliceVec::remove) let count = self.target_end - self.target_start; let targets: &mut [T] = &mut self.parent.deref_mut()[self.target_start..]; targets.rotate_left(count); self.parent.len -= count; } } impl<'s, T> AsMut<[T]> for SliceVec<'s, T> { #[inline(always)] #[must_use] fn as_mut(&mut self) -> &mut [T] { &mut *self } } impl<'s, T> AsRef<[T]> for SliceVec<'s, T> { #[inline(always)] #[must_use] fn as_ref(&self) -> &[T] { &*self } } impl<'s, T> Borrow<[T]> for SliceVec<'s, T> { #[inline(always)] #[must_use] fn borrow(&self) -> &[T] { &*self } } impl<'s, T> BorrowMut<[T]> for SliceVec<'s, T> { #[inline(always)] #[must_use] fn borrow_mut(&mut self) -> &mut [T] { &mut *self } } impl<'s, T> Extend for SliceVec<'s, T> { #[inline] fn extend>(&mut self, iter: I) { for t in iter { self.push(t) } } } impl<'s, T> IntoIterator for SliceVec<'s, T> { type Item = &'s mut T; type IntoIter = core::slice::IterMut<'s, T>; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { self.data.iter_mut() } } impl<'s, T> PartialEq for SliceVec<'s, T> where T: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.as_slice().eq(other.as_slice()) } } impl<'s, T> Eq for SliceVec<'s, T> where T: Eq {} impl<'s, T> PartialOrd for SliceVec<'s, T> where T: PartialOrd, { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.as_slice().partial_cmp(other.as_slice()) } } impl<'s, T> Ord for SliceVec<'s, T> where T: Ord, { #[inline] #[must_use] fn cmp(&self, other: &Self) -> core::cmp::Ordering { self.as_slice().cmp(other.as_slice()) } } impl<'s, T> PartialEq<&[T]> for SliceVec<'s, T> where T: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &&[T]) -> bool { self.as_slice().eq(*other) } } impl<'s, T> Hash for SliceVec<'s, T> where T: Hash, { #[inline] fn hash(&self, state: &mut H) { self.as_slice().hash(state) } } #[cfg(feature = "experimental_write_impl")] impl<'s> core::fmt::Write for SliceVec<'s, u8> { fn write_str(&mut self, s: &str) -> core::fmt::Result { let my_len = self.len(); let str_len = s.as_bytes().len(); if my_len + str_len <= self.capacity() { let remainder = &mut self.data[my_len..]; let target = &mut remainder[..str_len]; target.copy_from_slice(s.as_bytes()); Ok(()) } else { Err(core::fmt::Error) } } } // // // // // // // // // Formatting impls // // // // // // // // impl<'s, T> Binary for SliceVec<'s, T> where T: Binary, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> 
core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Binary::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> Debug for SliceVec<'s, T> where T: Debug, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Debug::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> Display for SliceVec<'s, T> where T: Display, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Display::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> LowerExp for SliceVec<'s, T> where T: LowerExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> LowerHex for SliceVec<'s, T> where T: LowerHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> Octal for SliceVec<'s, T> where T: Octal, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Octal::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> Pointer for SliceVec<'s, T> where T: Pointer, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Pointer::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> UpperExp for SliceVec<'s, T> where T: UpperExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } UpperExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl<'s, T> UpperHex for SliceVec<'s, T> where T: UpperHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " 
} else { " " })?; } UpperHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } vendor/tinyvec/src/lib.rs0000664000175000017500000001013514160055207016233 0ustar mwhudsonmwhudson#![cfg_attr(not(feature = "std"), no_std)] #![forbid(unsafe_code)] #![cfg_attr( feature = "nightly_slice_partition_dedup", feature(slice_partition_dedup) )] #![cfg_attr(docs_rs, feature(doc_cfg))] #![warn(clippy::missing_inline_in_public_items)] #![warn(clippy::must_use_candidate)] #![warn(missing_docs)] //! `tinyvec` provides 100% safe vec-like data structures. //! //! ## Provided Types //! With no features enabled, this crate provides the [`ArrayVec`] type, which //! is an array-backed storage. You can push values into the array and pop them //! out of the array and so on. If the array is made to overflow it will panic. //! //! Similarly, there is also a [`SliceVec`] type available, which is a vec-like //! that's backed by a slice you provide. You can add and remove elements, but //! if you overflow the slice it will panic. //! //! With the `alloc` feature enabled, the crate also has a [`TinyVec`] type. //! This is an enum type which is either an `Inline(ArrayVec)` or a `Heap(Vec)`. //! If a `TinyVec` is `Inline` and would overflow it automatically transitions //! itself into being `Heap` mode instead of a panic. //! //! All of this is done with no `unsafe` code within the crate. Technically the //! `Vec` type from the standard library uses `unsafe` internally, but *this //! crate* introduces no new `unsafe` code into your project. //! //! The limitation is that the element type of a vec from this crate must //! support the [`Default`] trait. This means that this crate isn't suitable for //! all situations, but a very surprising number of types do support `Default`. //! //! ## Other Features //! * `grab_spare_slice` lets you get access to the "inactive" portions of an //! ArrayVec. //! * `rustc_1_40` makes the crate assume a minimum rust version of `1.40.0`, //! which allows some better internal optimizations. //! * `serde` provides a `Serialize` and `Deserialize` implementation for //! [`TinyVec`] and [`ArrayVec`] types, provided the inner item also has an //! implementation. //! //! ## API //! The general goal of the crate is that, as much as possible, the vecs here //! should be a "drop in" replacement for the standard library `Vec` type. We //! strive to provide all of the `Vec` methods with the same names and //! signatures. The exception is that the element type of some methods will have //! a `Default` bound that's not part of the normal `Vec` type. //! //! The vecs here also have a few additional methods that aren't on the `Vec` //! type. In this case, the names tend to be fairly long so that they are //! unlikely to clash with any future methods added to `Vec`. //! //! ## Stability //! * The `1.0` series of the crate works with Rustc `1.34.0` or later, though //! you still need to have Rustc `1.36.0` to use the `alloc` feature. //! * The `2.0` version of the crate is planned for some time after the //! `min_const_generics` stuff becomes stable. This would greatly raise the //! minimum rust version and also allow us to totally eliminate the need for //! the `Array` trait. The actual usage of the crate is not expected to break //! significantly in this transition. 
#[allow(unused_imports)] use core::{ borrow::{Borrow, BorrowMut}, cmp::PartialEq, convert::AsMut, default::Default, fmt::{ Binary, Debug, Display, Formatter, LowerExp, LowerHex, Octal, Pointer, UpperExp, UpperHex, }, hash::{Hash, Hasher}, iter::{Extend, FromIterator, FusedIterator, IntoIterator, Iterator}, mem::{needs_drop, replace}, ops::{Deref, DerefMut, Index, IndexMut, RangeBounds}, slice::SliceIndex, }; #[cfg(feature = "alloc")] #[doc(hidden)] // re-export for macros pub extern crate alloc; mod array; pub use array::*; mod arrayvec; pub use arrayvec::*; mod arrayvec_drain; pub use arrayvec_drain::*; mod slicevec; pub use slicevec::*; #[cfg(feature = "alloc")] mod tinyvec; #[cfg(feature = "alloc")] pub use crate::tinyvec::*; // TODO MSRV(1.40.0): Just call the normal `core::mem::take` #[inline(always)] fn take(from: &mut T) -> T { replace(from, T::default()) } vendor/tinyvec/src/arrayvec.rs0000664000175000017500000014004314160055207017303 0ustar mwhudsonmwhudsonuse super::*; use core::convert::{TryFrom, TryInto}; #[cfg(feature = "serde")] use core::marker::PhantomData; #[cfg(feature = "serde")] use serde::de::{ Deserialize, Deserializer, Error as DeserializeError, SeqAccess, Visitor, }; #[cfg(feature = "serde")] use serde::ser::{Serialize, SerializeSeq, Serializer}; /// Helper to make an `ArrayVec`. /// /// You specify the backing array type, and optionally give all the elements you /// want to initially place into the array. /// /// ```rust /// use tinyvec::*; /// /// // The backing array type can be specified in the macro call /// let empty_av = array_vec!([u8; 16]); /// let some_ints = array_vec!([i32; 4] => 1, 2, 3); /// /// // Or left to inference /// let empty_av: ArrayVec<[u8; 10]> = array_vec!(); /// let some_ints: ArrayVec<[u8; 10]> = array_vec!(5, 6, 7, 8); /// ``` #[macro_export] macro_rules! array_vec { ($array_type:ty => $($elem:expr),* $(,)?) => { { let mut av: $crate::ArrayVec<$array_type> = Default::default(); $( av.push($elem); )* av } }; ($array_type:ty) => { $crate::ArrayVec::<$array_type>::default() }; ($($elem:expr),*) => { $crate::array_vec!(_ => $($elem),*) }; ($elem:expr; $n:expr) => { $crate::ArrayVec::from([$elem; $n]) }; () => { $crate::array_vec!(_) }; } /// An array-backed, vector-like data structure. /// /// * `ArrayVec` has a fixed capacity, equal to the array size. /// * `ArrayVec` has a variable length, as you add and remove elements. Attempts /// to fill the vec beyond its capacity will cause a panic. /// * All of the vec's array slots are always initialized in terms of Rust's /// memory model. When you remove a element from a location, the old value at /// that location is replaced with the type's default value. /// /// The overall API of this type is intended to, as much as possible, emulate /// the API of the [`Vec`](https://doc.rust-lang.org/alloc/vec/struct.Vec.html) /// type. /// /// ## Construction /// /// You can use the `array_vec!` macro similarly to how you might use the `vec!` /// macro. Specify the array type, then optionally give all the initial values /// you want to have. /// ```rust /// # use tinyvec::*; /// let some_ints = array_vec!([i32; 4] => 1, 2, 3); /// assert_eq!(some_ints.len(), 3); /// ``` /// /// The [`default`](ArrayVec::new) for an `ArrayVec` is to have a default /// array with length 0. 
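// Illustrative sketch (editor addition, not from the vendored source):
// besides the forms shown in its docs, the `array_vec!` macro above also
// has a `vec!`-style repeat arm that forwards to `ArrayVec::from`, so the
// resulting vec starts out full.
use tinyvec::array_vec;

fn main() {
  let av = array_vec!(0_u8; 4);
  assert_eq!(av.len(), 4);
  assert_eq!(av.as_slice(), &[0, 0, 0, 0]);
}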
The [`new`](ArrayVec::new) method is the same as /// calling `default` /// ```rust /// # use tinyvec::*; /// let some_ints = ArrayVec::<[i32; 7]>::default(); /// assert_eq!(some_ints.len(), 0); /// /// let more_ints = ArrayVec::<[i32; 7]>::new(); /// assert_eq!(some_ints, more_ints); /// ``` /// /// If you have an array and want the _whole thing_ so count as being "in" the /// new `ArrayVec` you can use one of the `from` implementations. If you want /// _part of_ the array then you can use /// [`from_array_len`](ArrayVec::from_array_len): /// ```rust /// # use tinyvec::*; /// let some_ints = ArrayVec::from([5, 6, 7, 8]); /// assert_eq!(some_ints.len(), 4); /// /// let more_ints = ArrayVec::from_array_len([5, 6, 7, 8], 2); /// assert_eq!(more_ints.len(), 2); /// /// let no_ints: ArrayVec<[u8; 5]> = ArrayVec::from_array_empty([1, 2, 3, 4, 5]); /// assert_eq!(no_ints.len(), 0); /// ``` #[repr(C)] pub struct ArrayVec { len: u16, pub(crate) data: A, } impl Clone for ArrayVec where A: Array + Clone, A::Item: Clone, { #[inline] fn clone(&self) -> Self { Self { data: self.data.clone(), len: self.len } } #[inline] fn clone_from(&mut self, o: &Self) { let iter = self .data .as_slice_mut() .iter_mut() .zip(o.data.as_slice()) .take(self.len.max(o.len) as usize); for (dst, src) in iter { dst.clone_from(src) } if let Some(to_drop) = self.data.as_slice_mut().get_mut((o.len as usize)..(self.len as usize)) { to_drop.iter_mut().for_each(|x| drop(take(x))); } self.len = o.len; } } impl Copy for ArrayVec where A: Array + Copy, A::Item: Copy, { } impl Default for ArrayVec { fn default() -> Self { Self { len: 0, data: A::default() } } } impl Deref for ArrayVec { type Target = [A::Item]; #[inline(always)] #[must_use] fn deref(&self) -> &Self::Target { &self.data.as_slice()[..self.len as usize] } } impl DerefMut for ArrayVec { #[inline(always)] #[must_use] fn deref_mut(&mut self) -> &mut Self::Target { &mut self.data.as_slice_mut()[..self.len as usize] } } impl> Index for ArrayVec { type Output = >::Output; #[inline(always)] #[must_use] fn index(&self, index: I) -> &Self::Output { &self.deref()[index] } } impl> IndexMut for ArrayVec { #[inline(always)] #[must_use] fn index_mut(&mut self, index: I) -> &mut Self::Output { &mut self.deref_mut()[index] } } #[cfg(feature = "serde")] #[cfg_attr(docs_rs, doc(cfg(feature = "serde")))] impl Serialize for ArrayVec where A::Item: Serialize, { #[must_use] fn serialize(&self, serializer: S) -> Result where S: Serializer, { let mut seq = serializer.serialize_seq(Some(self.len()))?; for element in self.iter() { seq.serialize_element(element)?; } seq.end() } } #[cfg(feature = "serde")] #[cfg_attr(docs_rs, doc(cfg(feature = "serde")))] impl<'de, A: Array> Deserialize<'de> for ArrayVec where A::Item: Deserialize<'de>, { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { deserializer.deserialize_seq(ArrayVecVisitor(PhantomData)) } } #[cfg(all(feature = "arbitrary", feature = "nightly_const_generics"))] #[cfg_attr( docs_rs, doc(cfg(all(feature = "arbitrary", feature = "nightly_const_generics"))) )] impl<'a, T, const N: usize> arbitrary::Arbitrary<'a> for ArrayVec<[T; N]> where T: arbitrary::Arbitrary<'a> + Default, { fn arbitrary(u: &mut arbitrary::Unstructured<'a>) -> arbitrary::Result { let v = <[T; N]>::arbitrary(u)?; let av = ArrayVec::from(v); Ok(av) } } impl ArrayVec { /// Move all values from `other` into this vec. 
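// Illustrative sketch (editor addition, not from the vendored source):
// with the `serde` feature, the impls above serialize an `ArrayVec` as a
// plain sequence of its live elements. `serde_json` is assumed here only as
// a convenient test format, not as a dependency of this crate.
use tinyvec::{array_vec, ArrayVec};

fn main() {
  let av = array_vec!([u8; 4] => 1, 2, 3);
  let json = serde_json::to_string(&av).unwrap();
  assert_eq!(json, "[1,2,3]");
  let back: ArrayVec<[u8; 4]> = serde_json::from_str(&json).unwrap();
  assert_eq!(av, back);
}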
/// /// ## Panics /// * If the vec overflows its capacity /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 10] => 1, 2, 3); /// let mut av2 = array_vec!([i32; 10] => 4, 5, 6); /// av.append(&mut av2); /// assert_eq!(av, &[1, 2, 3, 4, 5, 6][..]); /// assert_eq!(av2, &[][..]); /// ``` #[inline] pub fn append(&mut self, other: &mut Self) { assert!( self.try_append(other).is_none(), "ArrayVec::append> total length {} exceeds capacity {}!", self.len() + other.len(), A::CAPACITY ); } /// Move all values from `other` into this vec. /// If appending would overflow the capacity, Some(other) is returned. /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 7] => 1, 2, 3); /// let mut av2 = array_vec!([i32; 7] => 4, 5, 6); /// av.append(&mut av2); /// assert_eq!(av, &[1, 2, 3, 4, 5, 6][..]); /// assert_eq!(av2, &[][..]); /// /// let mut av3 = array_vec!([i32; 7] => 7, 8, 9); /// assert!(av.try_append(&mut av3).is_some()); /// assert_eq!(av, &[1, 2, 3, 4, 5, 6][..]); /// assert_eq!(av3, &[7, 8, 9][..]); /// ``` #[inline] pub fn try_append<'other>( &mut self, other: &'other mut Self, ) -> Option<&'other mut Self> { let new_len = self.len() + other.len(); if new_len > A::CAPACITY { return Some(other); } let iter = other.iter_mut().map(take); for item in iter { self.push(item); } other.set_len(0); return None; } /// A `*mut` pointer to the backing array. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing array. #[inline(always)] #[must_use] pub fn as_mut_ptr(&mut self) -> *mut A::Item { self.data.as_slice_mut().as_mut_ptr() } /// Performs a `deref_mut`, into unique slice form. #[inline(always)] #[must_use] pub fn as_mut_slice(&mut self) -> &mut [A::Item] { self.deref_mut() } /// A `*const` pointer to the backing array. /// /// ## Safety /// /// This pointer has provenance over the _entire_ backing array. #[inline(always)] #[must_use] pub fn as_ptr(&self) -> *const A::Item { self.data.as_slice().as_ptr() } /// Performs a `deref`, into shared slice form. #[inline(always)] #[must_use] pub fn as_slice(&self) -> &[A::Item] { self.deref() } /// The capacity of the `ArrayVec`. /// /// This is fixed based on the array type, but can't yet be made a `const fn` /// on Stable Rust. #[inline(always)] #[must_use] pub fn capacity(&self) -> usize { // Note: This shouldn't use A::CAPACITY, because unsafe code can't rely on // any Array invariants. This ensures that at the very least, the returned // value is a valid length for a subslice of the backing array. self.data.as_slice().len() } /// Truncates the `ArrayVec` down to length 0. #[inline(always)] pub fn clear(&mut self) { self.truncate(0) } /// Creates a draining iterator that removes the specified range in the vector /// and yields the removed items. /// /// ## Panics /// * If the start is greater than the end /// * If the end is past the edge of the vec. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4] => 1, 2, 3); /// let av2: ArrayVec<[i32; 4]> = av.drain(1..).collect(); /// assert_eq!(av.as_slice(), &[1][..]); /// assert_eq!(av2.as_slice(), &[2, 3][..]); /// /// av.drain(..); /// assert_eq!(av.as_slice(), &[]); /// ``` #[inline] pub fn drain(&mut self, range: R) -> ArrayVecDrain<'_, A::Item> where R: RangeBounds, { ArrayVecDrain::new(self, range) } /// Returns the inner array of the `ArrayVec`. /// /// This returns the full array, even if the `ArrayVec` length is currently /// less than that. 
/// /// ## Example /// /// ```rust /// # use tinyvec::{array_vec, ArrayVec}; /// let mut favorite_numbers = array_vec!([i32; 5] => 87, 48, 33, 9, 26); /// assert_eq!(favorite_numbers.clone().into_inner(), [87, 48, 33, 9, 26]); /// /// favorite_numbers.pop(); /// assert_eq!(favorite_numbers.into_inner(), [87, 48, 33, 9, 0]); /// ``` /// /// A use for this function is to build an array from an iterator by first /// collecting it into an `ArrayVec`. /// /// ```rust /// # use tinyvec::ArrayVec; /// let arr_vec: ArrayVec<[i32; 10]> = (1..=3).cycle().take(10).collect(); /// let inner = arr_vec.into_inner(); /// assert_eq!(inner, [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]); /// ``` #[inline] pub fn into_inner(self) -> A { self.data } /// Clone each element of the slice into this `ArrayVec`. /// /// ## Panics /// * If the `ArrayVec` would overflow, this will panic. #[inline] pub fn extend_from_slice(&mut self, sli: &[A::Item]) where A::Item: Clone, { if sli.is_empty() { return; } let new_len = self.len as usize + sli.len(); assert!( new_len <= A::CAPACITY, "ArrayVec::extend_from_slice> total length {} exceeds capacity {}!", new_len, A::CAPACITY ); let target = &mut self.data.as_slice_mut()[self.len as usize..new_len]; target.clone_from_slice(sli); self.set_len(new_len); } /// Fill the vector until its capacity has been reached. /// /// Successively fills unused space in the spare slice of the vector with /// elements from the iterator. It then returns the remaining iterator /// without exhausting it. This also allows appending the head of an /// infinite iterator. /// /// This is an alternative to `Extend::extend` method for cases where the /// length of the iterator can not be checked. Since this vector can not /// reallocate to increase its capacity, it is unclear what to do with /// remaining elements in the iterator and the iterator itself. The /// interface also provides no way to communicate this to the caller. /// /// ## Panics /// * If the `next` method of the provided iterator panics. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4]); /// let mut to_inf = av.fill(0..); /// assert_eq!(&av[..], [0, 1, 2, 3]); /// assert_eq!(to_inf.next(), Some(4)); /// ``` #[inline] pub fn fill>( &mut self, iter: I, ) -> I::IntoIter { // If this is written as a call to push for each element in iter, the // compiler emits code that updates the length for every element. The // additional complexity from that length update is worth nearly 2x in // the runtime of this function. let mut iter = iter.into_iter(); let mut pushed = 0; let to_take = self.capacity() - self.len(); let target = &mut self.data.as_slice_mut()[self.len as usize..]; for element in iter.by_ref().take(to_take) { target[pushed] = element; pushed += 1; } self.len += pushed as u16; iter } /// Wraps up an array and uses the given length as the initial length. /// /// If you want to simply use the full array, use `from` instead. /// /// ## Panics /// /// * The length specified must be less than or equal to the capacity of the /// array. #[inline] #[must_use] #[allow(clippy::match_wild_err_arm)] pub fn from_array_len(data: A, len: usize) -> Self { match Self::try_from_array_len(data, len) { Ok(out) => out, Err(_) => panic!( "ArrayVec::from_array_len> length {} exceeds capacity {}!", len, A::CAPACITY ), } } /// Inserts an item at the position given, moving all following elements +1 /// index. 
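// Illustrative sketch (editor addition, not from the vendored source):
// `from_array_len` keeps the whole array as backing storage but treats only
// the first `len` elements as live; the rest stay where they are until
// overwritten, as `into_inner` makes visible.
use tinyvec::ArrayVec;

fn main() {
  let mut av = ArrayVec::from_array_len([10, 20, 30, 40], 2);
  assert_eq!(av.as_slice(), &[10, 20]);
  av.extend_from_slice(&[50]);               // clones into the spare slots
  assert_eq!(av.as_slice(), &[10, 20, 50]);
  assert_eq!(av.into_inner(), [10, 20, 50, 40]);
}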
/// /// ## Panics /// * If `index` > `len` /// * If the capacity is exhausted /// /// ## Example /// ```rust /// use tinyvec::*; /// let mut av = array_vec!([i32; 10] => 1, 2, 3); /// av.insert(1, 4); /// assert_eq!(av.as_slice(), &[1, 4, 2, 3]); /// av.insert(4, 5); /// assert_eq!(av.as_slice(), &[1, 4, 2, 3, 5]); /// ``` #[inline] pub fn insert(&mut self, index: usize, item: A::Item) { let x = self.try_insert(index, item); assert!(x.is_none(), "ArrayVec::insert> capacity overflow!"); } /// Tries to insert an item at the position given, moving all following /// elements +1 index. /// Returns back the element if the capacity is exhausted, /// otherwise returns None. /// /// ## Panics /// * If `index` > `len` /// /// ## Example /// ```rust /// use tinyvec::*; /// let mut av = array_vec!([&'static str; 4] => "one", "two", "three"); /// av.insert(1, "four"); /// assert_eq!(av.as_slice(), &["one", "four", "two", "three"]); /// assert_eq!(av.try_insert(4, "five"), Some("five")); /// ``` #[inline] pub fn try_insert( &mut self, index: usize, mut item: A::Item, ) -> Option { assert!( index <= self.len as usize, "ArrayVec::try_insert> index {} is out of bounds {}", index, self.len ); // A previous implementation used self.try_push and slice::rotate_right // rotate_right and rotate_left generate a huge amount of code and fail to // inline; calling them here incurs the cost of all the cases they // handle even though we're rotating a usually-small array by a constant // 1 offset. This swap-based implementation benchmarks much better for // small array lengths in particular. if (self.len as usize) < A::CAPACITY { self.len += 1; } else { return Some(item); } let target = &mut self.as_mut_slice()[index..]; for i in 0..target.len() { core::mem::swap(&mut item, &mut target[i]); } return None; } /// Checks if the length is 0. #[inline(always)] #[must_use] pub fn is_empty(&self) -> bool { self.len == 0 } /// The length of the `ArrayVec` (in elements). #[inline(always)] #[must_use] pub fn len(&self) -> usize { self.len as usize } /// Makes a new, empty `ArrayVec`. #[inline(always)] #[must_use] pub fn new() -> Self { Self::default() } /// Remove and return the last element of the vec, if there is one. /// /// ## Failure /// * If the vec is empty you get `None`. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 10] => 1, 2); /// assert_eq!(av.pop(), Some(2)); /// assert_eq!(av.pop(), Some(1)); /// assert_eq!(av.pop(), None); /// ``` #[inline] pub fn pop(&mut self) -> Option { if self.len > 0 { self.len -= 1; let out = take(&mut self.data.as_slice_mut()[self.len as usize]); Some(out) } else { None } } /// Place an element onto the end of the vec. /// /// ## Panics /// * If the length of the vec would overflow the capacity. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 2]); /// assert_eq!(&av[..], []); /// av.push(1); /// assert_eq!(&av[..], [1]); /// av.push(2); /// assert_eq!(&av[..], [1, 2]); /// // av.push(3); this would overflow the ArrayVec and panic! /// ``` #[inline(always)] pub fn push(&mut self, val: A::Item) { let x = self.try_push(val); assert!(x.is_none(), "ArrayVec::push> capacity overflow!"); } /// Tries to place an element onto the end of the vec.\ /// Returns back the element if the capacity is exhausted, /// otherwise returns None. 
/// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 2]); /// assert_eq!(av.as_slice(), []); /// assert_eq!(av.try_push(1), None); /// assert_eq!(&av[..], [1]); /// assert_eq!(av.try_push(2), None); /// assert_eq!(&av[..], [1, 2]); /// assert_eq!(av.try_push(3), Some(3)); /// ``` #[inline(always)] pub fn try_push(&mut self, val: A::Item) -> Option { debug_assert!(self.len as usize <= A::CAPACITY); let itemref = match self.data.as_slice_mut().get_mut(self.len as usize) { None => return Some(val), Some(x) => x, }; *itemref = val; self.len += 1; return None; } /// Removes the item at `index`, shifting all others down by one index. /// /// Returns the removed element. /// /// ## Panics /// /// * If the index is out of bounds. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4] => 1, 2, 3); /// assert_eq!(av.remove(1), 2); /// assert_eq!(&av[..], [1, 3]); /// ``` #[inline] pub fn remove(&mut self, index: usize) -> A::Item { let targets: &mut [A::Item] = &mut self.deref_mut()[index..]; let item = take(&mut targets[0]); // A previous implementation used rotate_left // rotate_right and rotate_left generate a huge amount of code and fail to // inline; calling them here incurs the cost of all the cases they // handle even though we're rotating a usually-small array by a constant // 1 offset. This swap-based implementation benchmarks much better for // small array lengths in particular. for i in 0..targets.len() - 1 { targets.swap(i, i + 1); } self.len -= 1; item } /// As [`resize_with`](ArrayVec::resize_with) /// and it clones the value as the closure. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// /// let mut av = array_vec!([&str; 10] => "hello"); /// av.resize(3, "world"); /// assert_eq!(&av[..], ["hello", "world", "world"]); /// /// let mut av = array_vec!([i32; 10] => 1, 2, 3, 4); /// av.resize(2, 0); /// assert_eq!(&av[..], [1, 2]); /// ``` #[inline] pub fn resize(&mut self, new_len: usize, new_val: A::Item) where A::Item: Clone, { self.resize_with(new_len, || new_val.clone()) } /// Resize the vec to the new length. /// /// If it needs to be longer, it's filled with repeated calls to the provided /// function. If it needs to be shorter, it's truncated. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// /// let mut av = array_vec!([i32; 10] => 1, 2, 3); /// av.resize_with(5, Default::default); /// assert_eq!(&av[..], [1, 2, 3, 0, 0]); /// /// let mut av = array_vec!([i32; 10]); /// let mut p = 1; /// av.resize_with(4, || { /// p *= 2; /// p /// }); /// assert_eq!(&av[..], [2, 4, 8, 16]); /// ``` #[inline] pub fn resize_with A::Item>( &mut self, new_len: usize, mut f: F, ) { match new_len.checked_sub(self.len as usize) { None => self.truncate(new_len), Some(new_elements) => { for _ in 0..new_elements { self.push(f()); } } } } /// Walk the vec and keep only the elements that pass the predicate given. /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// /// let mut av = array_vec!([i32; 10] => 1, 1, 2, 3, 3, 4); /// av.retain(|&x| x % 2 == 0); /// assert_eq!(&av[..], [2, 4]); /// ``` #[inline] pub fn retain bool>(&mut self, mut acceptable: F) { // Drop guard to contain exactly the remaining elements when the test // panics. struct JoinOnDrop<'vec, Item> { items: &'vec mut [Item], done_end: usize, // Start of tail relative to `done_end`. 
tail_start: usize, } impl Drop for JoinOnDrop<'_, Item> { fn drop(&mut self) { self.items[self.done_end..].rotate_left(self.tail_start); } } let mut rest = JoinOnDrop { items: &mut self.data.as_slice_mut()[..self.len as usize], done_end: 0, tail_start: 0, }; let len = self.len as usize; for idx in 0..len { // Loop start invariant: idx = rest.done_end + rest.tail_start if !acceptable(&rest.items[idx]) { let _ = take(&mut rest.items[idx]); self.len -= 1; rest.tail_start += 1; } else { rest.items.swap(rest.done_end, idx); rest.done_end += 1; } } } /// Forces the length of the vector to `new_len`. /// /// ## Panics /// * If `new_len` is greater than the vec's capacity. /// /// ## Safety /// * This is a fully safe operation! The inactive memory already counts as /// "initialized" by Rust's rules. /// * Other than "the memory is initialized" there are no other guarantees /// regarding what you find in the inactive portion of the vec. #[inline(always)] pub fn set_len(&mut self, new_len: usize) { if new_len > A::CAPACITY { // Note(Lokathor): Technically we don't have to panic here, and we could // just let some other call later on trigger a panic on accident when the // length is wrong. However, it's a lot easier to catch bugs when things // are more "fail-fast". panic!( "ArrayVec::set_len> new length {} exceeds capacity {}", new_len, A::CAPACITY ) } let new_len: u16 = new_len .try_into() .expect("ArrayVec::set_len> new length is not in range 0..=u16::MAX"); self.len = new_len; } /// Splits the collection at the point given. /// /// * `[0, at)` stays in this vec /// * `[at, len)` ends up in the new vec. /// /// ## Panics /// * if at > len /// /// ## Example /// /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4] => 1, 2, 3); /// let av2 = av.split_off(1); /// assert_eq!(&av[..], [1]); /// assert_eq!(&av2[..], [2, 3]); /// ``` #[inline] pub fn split_off(&mut self, at: usize) -> Self { // FIXME: should this just use drain into the output? if at > self.len() { panic!( "ArrayVec::split_off> at value {} exceeds length of {}", at, self.len ); } let mut new = Self::default(); let moves = &mut self.as_mut_slice()[at..]; let split_len = moves.len(); let targets = &mut new.data.as_slice_mut()[..split_len]; moves.swap_with_slice(targets); /* moves.len() <= u16::MAX, so these are surely in u16 range */ new.len = split_len as u16; self.len = at as u16; new } /// Creates a splicing iterator that removes the specified range in the /// vector, yields the removed items, and replaces them with elements from /// the provided iterator. /// /// `splice` fuses the provided iterator, so elements after the first `None` /// are ignored. /// /// ## Panics /// * If the start is greater than the end. /// * If the end is past the edge of the vec. /// * If the provided iterator panics. /// * If the new length would overflow the capacity of the array. Because /// `ArrayVecSplice` adds elements to this vec in its destructor when /// necessary, this panic would occur when it is dropped. 
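///
/// Note that any removal and insertion still pending when the returned
/// [`ArrayVecSplice`] is dropped are finished in its destructor, so calling
/// `splice` purely for its side effect and discarding the iterator also
/// works. A minimal sketch of that usage (the values here are illustrative):
///
/// ```rust
/// use tinyvec::*;
/// let mut av = array_vec!([i32; 4] => 1, 2, 3);
/// av.splice(1..2, Some(9));
/// assert_eq!(av.as_slice(), &[1, 9, 3][..]);
/// ```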
/// /// ## Example /// ```rust /// use tinyvec::*; /// let mut av = array_vec!([i32; 4] => 1, 2, 3); /// let av2: ArrayVec<[i32; 4]> = av.splice(1.., 4..=6).collect(); /// assert_eq!(av.as_slice(), &[1, 4, 5, 6][..]); /// assert_eq!(av2.as_slice(), &[2, 3][..]); /// /// av.splice(.., None); /// assert_eq!(av.as_slice(), &[]); /// ``` #[inline] pub fn splice( &mut self, range: R, replacement: I, ) -> ArrayVecSplice<'_, A, core::iter::Fuse> where R: RangeBounds, I: IntoIterator, { use core::ops::Bound; let start = match range.start_bound() { Bound::Included(x) => *x, Bound::Excluded(x) => x.saturating_add(1), Bound::Unbounded => 0, }; let end = match range.end_bound() { Bound::Included(x) => x.saturating_add(1), Bound::Excluded(x) => *x, Bound::Unbounded => self.len(), }; assert!( start <= end, "ArrayVec::splice> Illegal range, {} to {}", start, end ); assert!( end <= self.len(), "ArrayVec::splice> Range ends at {} but length is only {}!", end, self.len() ); ArrayVecSplice { removal_start: start, removal_end: end, parent: self, replacement: replacement.into_iter().fuse(), } } /// Remove an element, swapping the end of the vec into its place. /// /// ## Panics /// * If the index is out of bounds. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([&str; 4] => "foo", "bar", "quack", "zap"); /// /// assert_eq!(av.swap_remove(1), "bar"); /// assert_eq!(&av[..], ["foo", "zap", "quack"]); /// /// assert_eq!(av.swap_remove(0), "foo"); /// assert_eq!(&av[..], ["quack", "zap"]); /// ``` #[inline] pub fn swap_remove(&mut self, index: usize) -> A::Item { assert!( index < self.len(), "ArrayVec::swap_remove> index {} is out of bounds {}", index, self.len ); if index == self.len() - 1 { self.pop().unwrap() } else { let i = self.pop().unwrap(); replace(&mut self[index], i) } } /// Reduces the vec's length to the given value. /// /// If the vec is already shorter than the input, nothing happens. #[inline] pub fn truncate(&mut self, new_len: usize) { if new_len >= self.len as usize { return; } if needs_drop::() { let len = self.len as usize; self.data.as_slice_mut()[new_len..len] .iter_mut() .map(take) .for_each(drop); } /* new_len is less than self.len */ self.len = new_len as u16; } /// Wraps an array, using the given length as the starting length. /// /// If you want to use the whole length of the array, you can just use the /// `From` impl. /// /// ## Failure /// /// If the given length is greater than the capacity of the array this will /// error, and you'll get the array back in the `Err`. #[inline] pub fn try_from_array_len(data: A, len: usize) -> Result { /* Note(Soveu): Should we allow A::CAPACITY > u16::MAX for now? */ if len <= A::CAPACITY { Ok(Self { data, len: len as u16 }) } else { Err(data) } } } impl ArrayVec { /// Wraps up an array as a new empty `ArrayVec`. /// /// If you want to simply use the full array, use `from` instead. 
/// /// ## Examples /// /// This method in particular allows to create values for statics: /// /// ```rust /// # use tinyvec::ArrayVec; /// static DATA: ArrayVec<[u8; 5]> = ArrayVec::from_array_empty([0; 5]); /// assert_eq!(DATA.len(), 0); /// ``` /// /// But of course it is just an normal empty `ArrayVec`: /// /// ```rust /// # use tinyvec::ArrayVec; /// let mut data = ArrayVec::from_array_empty([1, 2, 3, 4]); /// assert_eq!(&data[..], &[]); /// data.push(42); /// assert_eq!(&data[..], &[42]); /// ``` #[inline] #[must_use] pub const fn from_array_empty(data: A) -> Self { Self { data, len: 0 } } } #[cfg(feature = "grab_spare_slice")] impl ArrayVec { /// Obtain the shared slice of the array _after_ the active memory. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4]); /// assert_eq!(av.grab_spare_slice().len(), 4); /// av.push(10); /// av.push(11); /// av.push(12); /// av.push(13); /// assert_eq!(av.grab_spare_slice().len(), 0); /// ``` #[inline(always)] pub fn grab_spare_slice(&self) -> &[A::Item] { &self.data.as_slice()[self.len as usize..] } /// Obtain the mutable slice of the array _after_ the active memory. /// /// ## Example /// ```rust /// # use tinyvec::*; /// let mut av = array_vec!([i32; 4]); /// assert_eq!(av.grab_spare_slice_mut().len(), 4); /// av.push(10); /// av.push(11); /// assert_eq!(av.grab_spare_slice_mut().len(), 2); /// ``` #[inline(always)] pub fn grab_spare_slice_mut(&mut self) -> &mut [A::Item] { &mut self.data.as_slice_mut()[self.len as usize..] } } #[cfg(feature = "nightly_slice_partition_dedup")] impl ArrayVec { /// De-duplicates the vec contents. #[inline(always)] pub fn dedup(&mut self) where A::Item: PartialEq, { self.dedup_by(|a, b| a == b) } /// De-duplicates the vec according to the predicate given. #[inline(always)] pub fn dedup_by(&mut self, same_bucket: F) where F: FnMut(&mut A::Item, &mut A::Item) -> bool, { let len = { let (dedup, _) = self.as_mut_slice().partition_dedup_by(same_bucket); dedup.len() }; self.truncate(len); } /// De-duplicates the vec according to the key selector given. 
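///
/// A minimal example (assuming the `nightly_slice_partition_dedup` feature
/// is enabled, since this method only exists behind it): consecutive
/// elements whose keys compare equal collapse to the first of the run.
///
/// ```rust
/// # use tinyvec::*;
/// let mut av = array_vec!([i32; 8] => 1, 11, 12, 20, 21, 30);
/// // The key is the tens digit, so 11/12 and 20/21 each collapse to one item.
/// av.dedup_by_key(|x| *x / 10);
/// assert_eq!(av.as_slice(), &[1, 11, 20, 30][..]);
/// ```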
#[inline(always)] pub fn dedup_by_key(&mut self, mut key: F) where F: FnMut(&mut A::Item) -> K, K: PartialEq, { self.dedup_by(|a, b| key(a) == key(b)) } } /// Splicing iterator for `ArrayVec` /// See [`ArrayVec::splice`](ArrayVec::::splice) pub struct ArrayVecSplice<'p, A: Array, I: Iterator> { parent: &'p mut ArrayVec, removal_start: usize, removal_end: usize, replacement: I, } impl<'p, A: Array, I: Iterator> Iterator for ArrayVecSplice<'p, A, I> { type Item = A::Item; #[inline] fn next(&mut self) -> Option { if self.removal_start < self.removal_end { match self.replacement.next() { Some(replacement) => { let removed = core::mem::replace( &mut self.parent[self.removal_start], replacement, ); self.removal_start += 1; Some(removed) } None => { let removed = self.parent.remove(self.removal_start); self.removal_end -= 1; Some(removed) } } } else { None } } #[inline] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'p, A, I> ExactSizeIterator for ArrayVecSplice<'p, A, I> where A: Array, I: Iterator, { #[inline] fn len(&self) -> usize { self.removal_end - self.removal_start } } impl<'p, A, I> FusedIterator for ArrayVecSplice<'p, A, I> where A: Array, I: Iterator, { } impl<'p, A, I> DoubleEndedIterator for ArrayVecSplice<'p, A, I> where A: Array, I: Iterator + DoubleEndedIterator, { #[inline] fn next_back(&mut self) -> Option { if self.removal_start < self.removal_end { match self.replacement.next_back() { Some(replacement) => { let removed = core::mem::replace( &mut self.parent[self.removal_end - 1], replacement, ); self.removal_end -= 1; Some(removed) } None => { let removed = self.parent.remove(self.removal_end - 1); self.removal_end -= 1; Some(removed) } } } else { None } } } impl<'p, A: Array, I: Iterator> Drop for ArrayVecSplice<'p, A, I> { fn drop(&mut self) { for _ in self.by_ref() {} // FIXME: reserve lower bound of size_hint for replacement in self.replacement.by_ref() { self.parent.insert(self.removal_end, replacement); self.removal_end += 1; } } } impl AsMut<[A::Item]> for ArrayVec { #[inline(always)] #[must_use] fn as_mut(&mut self) -> &mut [A::Item] { &mut *self } } impl AsRef<[A::Item]> for ArrayVec { #[inline(always)] #[must_use] fn as_ref(&self) -> &[A::Item] { &*self } } impl Borrow<[A::Item]> for ArrayVec { #[inline(always)] #[must_use] fn borrow(&self) -> &[A::Item] { &*self } } impl BorrowMut<[A::Item]> for ArrayVec { #[inline(always)] #[must_use] fn borrow_mut(&mut self) -> &mut [A::Item] { &mut *self } } impl Extend for ArrayVec { #[inline] fn extend>(&mut self, iter: T) { for t in iter { self.push(t) } } } impl From for ArrayVec { #[inline(always)] #[must_use] /// The output has a length equal to the full array. /// /// If you want to select a length, use /// [`from_array_len`](ArrayVec::from_array_len) fn from(data: A) -> Self { let len: u16 = data .as_slice() .len() .try_into() .expect("ArrayVec::from> lenght must be in range 0..=u16::MAX"); Self { len, data } } } /// The error type returned when a conversion from a slice to an [`ArrayVec`] /// fails. #[derive(Debug, Copy, Clone)] pub struct TryFromSliceError(()); impl TryFrom<&'_ [T]> for ArrayVec where T: Clone + Default, A: Array, { type Error = TryFromSliceError; #[inline] #[must_use] /// The output has a length equal to that of the slice, with the same capacity /// as `A`. 
fn try_from(slice: &[T]) -> Result { if slice.len() > A::CAPACITY { Err(TryFromSliceError(())) } else { let mut arr = ArrayVec::new(); // We do not use ArrayVec::extend_from_slice, because it looks like LLVM // fails to deduplicate all the length-checking logic between the // above if and the contents of that method, thus producing much // slower code. Unlike many of the other optimizations in this // crate, this one is worth keeping an eye on. I see no reason, for // any element type, that these should produce different code. But // they do. (rustc 1.51.0) arr.set_len(slice.len()); arr.as_mut_slice().clone_from_slice(slice); Ok(arr) } } } impl FromIterator for ArrayVec { #[inline] #[must_use] fn from_iter>(iter: T) -> Self { let mut av = Self::default(); for i in iter { av.push(i) } av } } /// Iterator for consuming an `ArrayVec` and returning owned elements. pub struct ArrayVecIterator { base: u16, tail: u16, data: A, } impl ArrayVecIterator { /// Returns the remaining items of this iterator as a slice. #[inline] #[must_use] pub fn as_slice(&self) -> &[A::Item] { &self.data.as_slice()[self.base as usize..self.tail as usize] } } impl FusedIterator for ArrayVecIterator {} impl Iterator for ArrayVecIterator { type Item = A::Item; #[inline] fn next(&mut self) -> Option { let slice = &mut self.data.as_slice_mut()[self.base as usize..self.tail as usize]; let itemref = slice.first_mut()?; self.base += 1; return Some(take(itemref)); } #[inline(always)] #[must_use] fn size_hint(&self) -> (usize, Option) { let s = self.tail - self.base; let s = s as usize; (s, Some(s)) } #[inline(always)] fn count(self) -> usize { self.size_hint().0 } #[inline] fn last(mut self) -> Option { self.next_back() } #[inline] fn nth(&mut self, n: usize) -> Option { let slice = &mut self.data.as_slice_mut(); let slice = &mut slice[self.base as usize..self.tail as usize]; if let Some(x) = slice.get_mut(n) { /* n is in range [0 .. 
self.tail - self.base) so in u16 range */ self.base += n as u16 + 1; return Some(take(x)); } self.base = self.tail; return None; } } impl DoubleEndedIterator for ArrayVecIterator { #[inline] fn next_back(&mut self) -> Option { let slice = &mut self.data.as_slice_mut()[self.base as usize..self.tail as usize]; let item = slice.last_mut()?; self.tail -= 1; return Some(take(item)); } #[cfg(feature = "rustc_1_40")] #[inline] fn nth_back(&mut self, n: usize) -> Option { let base = self.base as usize; let tail = self.tail as usize; let slice = &mut self.data.as_slice_mut()[base..tail]; let n = n.saturating_add(1); if let Some(n) = slice.len().checked_sub(n) { let item = &mut slice[n]; /* n is in [0..self.tail - self.base] range, so in u16 range */ self.tail = self.base + n as u16; return Some(take(item)); } self.tail = self.base; return None; } } impl Debug for ArrayVecIterator where A::Item: Debug, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { f.debug_tuple("ArrayVecIterator").field(&self.as_slice()).finish() } } impl IntoIterator for ArrayVec { type Item = A::Item; type IntoIter = ArrayVecIterator; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { ArrayVecIterator { base: 0, tail: self.len, data: self.data } } } impl<'a, A: Array> IntoIterator for &'a mut ArrayVec { type Item = &'a mut A::Item; type IntoIter = core::slice::IterMut<'a, A::Item>; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl<'a, A: Array> IntoIterator for &'a ArrayVec { type Item = &'a A::Item; type IntoIter = core::slice::Iter<'a, A::Item>; #[inline(always)] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl PartialEq for ArrayVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.as_slice().eq(other.as_slice()) } } impl Eq for ArrayVec where A::Item: Eq {} impl PartialOrd for ArrayVec where A::Item: PartialOrd, { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.as_slice().partial_cmp(other.as_slice()) } } impl Ord for ArrayVec where A::Item: Ord, { #[inline] #[must_use] fn cmp(&self, other: &Self) -> core::cmp::Ordering { self.as_slice().cmp(other.as_slice()) } } impl PartialEq<&A> for ArrayVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &&A) -> bool { self.as_slice().eq(other.as_slice()) } } impl PartialEq<&[A::Item]> for ArrayVec where A::Item: PartialEq, { #[inline] #[must_use] fn eq(&self, other: &&[A::Item]) -> bool { self.as_slice().eq(*other) } } impl Hash for ArrayVec where A::Item: Hash, { #[inline] fn hash(&self, state: &mut H) { self.as_slice().hash(state) } } #[cfg(feature = "experimental_write_impl")] impl> core::fmt::Write for ArrayVec { fn write_str(&mut self, s: &str) -> core::fmt::Result { let my_len = self.len(); let str_len = s.as_bytes().len(); if my_len + str_len <= A::CAPACITY { let remainder = &mut self.data.as_slice_mut()[my_len..]; let target = &mut remainder[..str_len]; target.copy_from_slice(s.as_bytes()); Ok(()) } else { Err(core::fmt::Error) } } } // // // // // // // // // Formatting impls // // // // // // // // impl Binary for ArrayVec where A::Item: Binary, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Binary::fmt(elem, f)?; 
} if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Debug for ArrayVec where A::Item: Debug, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Debug::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Display for ArrayVec where A::Item: Display, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Display::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl LowerExp for ArrayVec where A::Item: LowerExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl LowerHex for ArrayVec where A::Item: LowerHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } LowerHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Octal for ArrayVec where A::Item: Octal, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Octal::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl Pointer for ArrayVec where A::Item: Pointer, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } Pointer::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl UpperExp for ArrayVec where A::Item: UpperExp, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } UpperExp::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } impl UpperHex for ArrayVec where A::Item: UpperHex, { #[allow(clippy::missing_inline_in_public_items)] fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { write!(f, "[")?; if f.alternate() { write!(f, "\n ")?; } for (i, elem) in self.iter().enumerate() { if i > 0 { write!(f, ",{}", if f.alternate() { "\n " } else { " " })?; } UpperHex::fmt(elem, f)?; } if f.alternate() { write!(f, ",\n")?; } write!(f, "]") } } #[cfg(feature = "alloc")] use alloc::vec::Vec; #[cfg(feature = "alloc")] impl ArrayVec { /// Drains all elements to a Vec, but reserves additional space /// ``` /// # use 
tinyvec::*; /// let mut av = array_vec!([i32; 7] => 1, 2, 3); /// let v = av.drain_to_vec_and_reserve(10); /// assert_eq!(v, &[1, 2, 3]); /// assert_eq!(v.capacity(), 13); /// ``` pub fn drain_to_vec_and_reserve(&mut self, n: usize) -> Vec { let cap = n + self.len(); let mut v = Vec::with_capacity(cap); let iter = self.iter_mut().map(take); v.extend(iter); self.set_len(0); return v; } /// Drains all elements to a Vec /// ``` /// # use tinyvec::*; /// let mut av = array_vec!([i32; 7] => 1, 2, 3); /// let v = av.drain_to_vec(); /// assert_eq!(v, &[1, 2, 3]); /// assert_eq!(v.capacity(), 3); /// ``` pub fn drain_to_vec(&mut self) -> Vec { self.drain_to_vec_and_reserve(0) } } #[cfg(feature = "serde")] struct ArrayVecVisitor(PhantomData); #[cfg(feature = "serde")] impl<'de, A: Array> Visitor<'de> for ArrayVecVisitor where A::Item: Deserialize<'de>, { type Value = ArrayVec; fn expecting( &self, formatter: &mut core::fmt::Formatter, ) -> core::fmt::Result { formatter.write_str("a sequence") } fn visit_seq(self, mut seq: S) -> Result where S: SeqAccess<'de>, { let mut new_arrayvec: ArrayVec = Default::default(); let mut idx = 0usize; while let Some(value) = seq.next_element()? { if new_arrayvec.len() >= new_arrayvec.capacity() { return Err(DeserializeError::invalid_length(idx, &self)); } new_arrayvec.push(value); idx = idx + 1; } Ok(new_arrayvec) } } vendor/tinyvec/LICENSE-APACHE.md0000664000175000017500000002645014160055207016662 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
vendor/tinyvec/tests/0000775000175000017500000000000014160055207015472 5ustar mwhudsonmwhudsonvendor/tinyvec/tests/tinyvec.rs0000664000175000017500000002612114160055207017523 0ustar mwhudsonmwhudson#![cfg(feature = "alloc")] #![allow(bad_style)] #![allow(clippy::redundant_clone)] #[cfg(feature = "serde")] use serde_test::{assert_tokens, Token}; use std::iter::FromIterator; use tinyvec::*; #[test] fn TinyVec_swap_remove() { let mut tv: TinyVec<[i32; 10]> = Default::default(); tv.push(1); tv.push(2); tv.push(3); tv.push(4); assert_eq!(tv.swap_remove(3), 4); assert_eq!(&tv[..], &[1, 2, 3][..]); assert_eq!(tv.swap_remove(0), 1); assert_eq!(&tv[..], &[3, 2][..]); assert_eq!(tv.swap_remove(0), 3); assert_eq!(&tv[..], &[2][..]); assert_eq!(tv.swap_remove(0), 2); assert_eq!(&tv[..], &[][..]); } #[test] fn TinyVec_capacity() { let mut tv: TinyVec<[i32; 1]> = Default::default(); assert_eq!(tv.capacity(), 1); tv.move_to_the_heap(); tv.extend_from_slice(&[1, 2, 3, 4]); assert_eq!(tv.capacity(), 4); } #[test] fn TinyVec_drain() { let mut tv: TinyVec<[i32; 10]> = Default::default(); tv.push(1); tv.push(2); tv.push(3); assert_eq!(Vec::from_iter(tv.clone().drain(..)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(..2)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().drain(..3)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(..=1)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().drain(..=2)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(0..)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(1..)), vec![2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(0..2)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().drain(0..3)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(1..2)), vec![2]); assert_eq!(Vec::from_iter(tv.clone().drain(1..3)), vec![2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(0..=1)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().drain(0..=2)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().drain(1..=1)), vec![2]); assert_eq!(Vec::from_iter(tv.clone().drain(1..=2)), vec![2, 3]); } #[test] fn TinyVec_splice() { let mut tv: TinyVec<[i32; 10]> = Default::default(); tv.push(1); tv.push(2); tv.push(3); // splice returns the same things as drain assert_eq!(Vec::from_iter(tv.clone().splice(.., None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(..2, None)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().splice(..3, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(..=1, None)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().splice(..=2, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(0.., None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(1.., None)), vec![2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(0..2, None)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().splice(0..3, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(1..2, None)), vec![2]); assert_eq!(Vec::from_iter(tv.clone().splice(1..3, None)), vec![2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(0..=1, None)), vec![1, 2]); assert_eq!(Vec::from_iter(tv.clone().splice(0..=2, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(tv.clone().splice(1..=1, None)), vec![2]); assert_eq!(Vec::from_iter(tv.clone().splice(1..=2, None)), vec![2, 3]); // splice removes the same things as drain let mut tv2 = tv.clone(); tv2.splice(.., None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(..2, None); assert_eq!(tv2, tiny_vec![3]); let mut tv2 = 
tv.clone(); tv2.splice(..3, None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(..=1, None); assert_eq!(tv2, tiny_vec![3]); let mut tv2 = tv.clone(); tv2.splice(..=2, None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(0.., None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(1.., None); assert_eq!(tv2, tiny_vec![1]); let mut tv2 = tv.clone(); tv2.splice(0..2, None); assert_eq!(tv2, tiny_vec![3]); let mut tv2 = tv.clone(); tv2.splice(0..3, None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(1..2, None); assert_eq!(tv2, tiny_vec![1, 3]); let mut tv2 = tv.clone(); tv2.splice(1..3, None); assert_eq!(tv2, tiny_vec![1]); let mut tv2 = tv.clone(); tv2.splice(0..=1, None); assert_eq!(tv2, tiny_vec![3]); let mut tv2 = tv.clone(); tv2.splice(0..=2, None); assert_eq!(tv2, tiny_vec![]); let mut tv2 = tv.clone(); tv2.splice(1..=1, None); assert_eq!(tv2, tiny_vec![1, 3]); let mut tv2 = tv.clone(); tv2.splice(1..=2, None); assert_eq!(tv2, tiny_vec![1]); // splice adds the elements correctly let mut tv2 = tv.clone(); tv2.splice(.., 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(..2, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(..3, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(..=1, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(..=2, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(0.., 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(1.., 4..=6); assert_eq!(tv2, tiny_vec![1, 4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(0..2, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(0..3, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(1..2, 4..=6); assert_eq!(tv2, tiny_vec![1, 4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(1..3, 4..=6); assert_eq!(tv2, tiny_vec![1, 4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(0..=1, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(0..=2, 4..=6); assert_eq!(tv2, tiny_vec![4, 5, 6]); let mut tv2 = tv.clone(); tv2.splice(1..=1, 4..=6); assert_eq!(tv2, tiny_vec![1, 4, 5, 6, 3]); let mut tv2 = tv.clone(); tv2.splice(1..=2, 4..=6); assert_eq!(tv2, tiny_vec![1, 4, 5, 6]); // splice adds the elements correctly when the replacement is smaller let mut tv2 = tv.clone(); tv2.splice(.., Some(4)); assert_eq!(tv2, tiny_vec![4]); let mut tv2 = tv.clone(); tv2.splice(..2, Some(4)); assert_eq!(tv2, tiny_vec![4, 3]); let mut tv2 = tv.clone(); tv2.splice(1.., Some(4)); assert_eq!(tv2, tiny_vec![1, 4]); let mut tv2 = tv.clone(); tv2.splice(1..=1, Some(4)); assert_eq!(tv2, tiny_vec![1, 4, 3]); } #[test] fn TinyVec_resize() { let mut tv: TinyVec<[i32; 10]> = Default::default(); tv.resize(20, 5); assert_eq!(&tv[..], &[5; 20]); } #[test] fn TinyVec_from_slice_impl() { let bigger_slice: [u8; 11] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; let tinyvec: TinyVec<[u8; 10]> = TinyVec::Heap((&bigger_slice[..]).into()); assert_eq!(TinyVec::from(&bigger_slice[..]), tinyvec); let smaller_slice: [u8; 5] = [0, 1, 2, 3, 4]; let tinyvec: TinyVec<[u8; 10]> = TinyVec::Inline(ArrayVec::from_array_len( [0, 1, 2, 3, 4, 0, 0, 0, 0, 0], 5, )); assert_eq!(TinyVec::from(&smaller_slice[..]), tinyvec); let same_size: [u8; 4] = [0, 1, 2, 3]; let tinyvec: TinyVec<[u8; 4]> = TinyVec::Inline(ArrayVec::from_array_len(same_size, 
4)); assert_eq!(TinyVec::from(&same_size[..]), tinyvec); } #[test] fn TinyVec_from_array() { let array = [9, 8, 7, 6, 5, 4, 3, 2, 1]; let tv = TinyVec::from(array); assert_eq!(&array, &tv[..]); } #[test] fn TinyVec_macro() { let mut expected: TinyVec<[i32; 4]> = Default::default(); expected.push(1); expected.push(2); expected.push(3); let actual = tiny_vec!(1, 2, 3); assert_eq!(expected, actual); assert_eq!(tiny_vec![0u8; 4], tiny_vec!(0u8, 0u8, 0u8, 0u8)); assert_eq!(tiny_vec![0u8; 4], tiny_vec!([u8; 4] => 0, 0, 0, 0)); assert_eq!(tiny_vec![0; 4], tiny_vec!(0, 0, 0, 0)); assert_eq!(tiny_vec![0; 4], tiny_vec!([u8; 4] => 0, 0, 0, 0)); let expected2 = tiny_vec![1.1; 3]; let actual2 = tiny_vec!([f32; 3] => 1.1, 1.1, 1.1); assert_eq!(expected2, actual2); } #[test] fn TinyVec_macro_non_copy() { // must use a variable here to avoid macro shenanigans let s = String::new(); let _: TinyVec<[String; 10]> = tiny_vec!([String; 10] => s); } #[test] fn TinyVec_reserve() { let mut tv: TinyVec<[i32; 4]> = Default::default(); assert_eq!(tv.capacity(), 4); tv.extend_from_slice(&[1, 2]); assert_eq!(tv.capacity(), 4); tv.reserve(2); assert_eq!(tv.capacity(), 4); tv.reserve(4); assert!(tv.capacity() >= 6); tv.extend_from_slice(&[3, 4, 5, 6]); tv.reserve(4); assert!(tv.capacity() >= 10); } #[test] fn TinyVec_reserve_exact() { let mut tv: TinyVec<[i32; 4]> = Default::default(); assert_eq!(tv.capacity(), 4); tv.extend_from_slice(&[1, 2]); assert_eq!(tv.capacity(), 4); tv.reserve_exact(2); assert_eq!(tv.capacity(), 4); tv.reserve_exact(4); assert!(tv.capacity() >= 6); tv.extend_from_slice(&[3, 4, 5, 6]); tv.reserve_exact(4); assert!(tv.capacity() >= 10); } #[test] fn TinyVec_move_to_heap_and_shrink() { let mut tv: TinyVec<[i32; 4]> = Default::default(); assert!(tv.is_inline()); tv.move_to_the_heap(); assert!(tv.is_heap()); assert_eq!(tv.capacity(), 0); tv.push(1); tv.shrink_to_fit(); assert!(tv.is_inline()); assert_eq!(tv.capacity(), 4); tv.move_to_the_heap_and_reserve(3); assert!(tv.is_heap()); assert_eq!(tv.capacity(), 4); tv.extend(2..=4); assert_eq!(tv.capacity(), 4); assert_eq!(tv.as_slice(), [1, 2, 3, 4]); } #[cfg(feature = "serde")] #[test] fn TinyVec_ser_de_empty() { let tv: TinyVec<[i32; 0]> = tiny_vec![]; assert_tokens(&tv, &[Token::Seq { len: Some(0) }, Token::SeqEnd]); } #[cfg(feature = "serde")] #[test] fn TinyVec_ser_de() { let tv: TinyVec<[i32; 4]> = tiny_vec![1, 2, 3, 4]; assert_tokens( &tv, &[ Token::Seq { len: Some(4) }, Token::I32(1), Token::I32(2), Token::I32(3), Token::I32(4), Token::SeqEnd, ], ); } #[cfg(feature = "serde")] #[test] fn TinyVec_ser_de_heap() { let mut tv: TinyVec<[i32; 4]> = tiny_vec![1, 2, 3, 4]; tv.move_to_the_heap(); assert_tokens( &tv, &[ Token::Seq { len: Some(4) }, Token::I32(1), Token::I32(2), Token::I32(3), Token::I32(4), Token::SeqEnd, ], ); } #[test] fn TinyVec_pretty_debug() { let tv: TinyVec<[i32; 6]> = tiny_vec![1, 2, 3]; let s = format!("{:#?}", tv); let expected = format!("{:#?}", tv.as_slice()); assert_eq!(s, expected); } #[cfg(feature = "std")] #[test] fn TinyVec_std_io_write() { use std::io::Write; let mut tv: TinyVec<[u8; 3]> = TinyVec::new(); tv.write_all(b"foo").ok(); assert!(tv.is_inline()); assert_eq!(tv, tiny_vec![b'f', b'o', b'o']); tv.write_all(b"bar").ok(); assert!(tv.is_heap()); assert_eq!(tv, tiny_vec![b'f', b'o', b'o', b'b', b'a', b'r']); } vendor/tinyvec/tests/arrayvec.rs0000664000175000017500000003133614160055207017662 0ustar mwhudsonmwhudson#![allow(bad_style)] #[cfg(feature = "serde")] use serde_test::{assert_tokens, Token}; use 
std::iter::FromIterator; use tinyvec::*; #[test] fn test_a_vec() { let mut expected: ArrayVec<[i32; 4]> = Default::default(); expected.push(1); expected.push(2); expected.push(3); let actual = array_vec!(1, 2, 3); assert_eq!(expected, actual); assert_eq!(array_vec![0u8; 4], array_vec!(0u8, 0u8, 0u8, 0u8)); assert_eq!(array_vec![0u8; 4], array_vec!([u8; 4] => 0, 0, 0, 0)); assert_eq!(array_vec![0; 4], array_vec!(0, 0, 0, 0)); assert_eq!(array_vec![0; 4], array_vec!([u8; 4] => 0, 0, 0, 0)); let expected2 = array_vec![1.1; 3]; let actual2 = array_vec!([f32; 3] => 1.1, 1.1, 1.1); assert_eq!(expected2, actual2); } #[test] fn ArrayVec_push_pop() { let mut av: ArrayVec<[i32; 4]> = Default::default(); assert_eq!(av.len(), 0); assert_eq!(av.pop(), None); av.push(10_i32); assert_eq!(av.len(), 1); assert_eq!(av[0], 10); assert_eq!(av.pop(), Some(10)); assert_eq!(av.len(), 0); assert_eq!(av.pop(), None); av.push(10); av.push(11); av.push(12); av.push(13); assert_eq!(av[0], 10); assert_eq!(av[1], 11); assert_eq!(av[2], 12); assert_eq!(av[3], 13); assert_eq!(av.len(), 4); assert_eq!(av.pop(), Some(13)); assert_eq!(av.len(), 3); assert_eq!(av.pop(), Some(12)); assert_eq!(av.len(), 2); assert_eq!(av.pop(), Some(11)); assert_eq!(av.len(), 1); assert_eq!(av.pop(), Some(10)); assert_eq!(av.len(), 0); assert_eq!(av.pop(), None); } #[test] #[should_panic] fn ArrayVec_push_overflow() { let mut av: ArrayVec<[i32; 0]> = Default::default(); av.push(7); } #[test] fn ArrayVec_formatting() { // check that we get the comma placement correct let mut av: ArrayVec<[i32; 4]> = Default::default(); assert_eq!(format!("{:?}", av), "[]"); av.push(10); assert_eq!(format!("{:?}", av), "[10]"); av.push(11); assert_eq!(format!("{:?}", av), "[10, 11]"); av.push(12); assert_eq!(format!("{:?}", av), "[10, 11, 12]"); // below here just asserts that the impls exist. 
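// (i32 covers Binary, Octal, LowerHex, UpperHex and Display; f32 covers
// LowerExp and UpperExp; &str, a reference type, covers Pointer.)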
// let av: ArrayVec<[i32; 4]> = Default::default(); assert_eq!(format!("{:b}", av), "[]"); assert_eq!(format!("{:o}", av), "[]"); assert_eq!(format!("{:x}", av), "[]"); assert_eq!(format!("{:X}", av), "[]"); assert_eq!(format!("{}", av), "[]"); // let av: ArrayVec<[f32; 4]> = Default::default(); assert_eq!(format!("{:e}", av), "[]"); assert_eq!(format!("{:E}", av), "[]"); // let av: ArrayVec<[&'static str; 4]> = Default::default(); assert_eq!(format!("{:p}", av), "[]"); } #[test] fn ArrayVec_iteration() { let av = array_vec!([i32; 4] => 10, 11, 12, 13); let mut i = av.into_iter(); assert_eq!(i.next(), Some(10)); assert_eq!(i.next(), Some(11)); assert_eq!(i.next(), Some(12)); assert_eq!(i.next(), Some(13)); assert_eq!(i.next(), None); let av = array_vec!([i32; 4] => 10, 11, 12, 13); let mut av2: ArrayVec<[i32; 4]> = av.clone().into_iter().collect(); assert_eq!(av, av2); // IntoIterator for &mut ArrayVec for x in &mut av2 { *x = -*x; } // IntoIterator for &ArrayVec assert!(av.iter().zip(&av2).all(|(&a, &b)| a == -b)); } #[test] fn ArrayVec_append() { let mut av = array_vec!([i32; 8] => 1, 2, 3); let mut av2 = array_vec!([i32; 8] => 4, 5, 6); // av.append(&mut av2); assert_eq!(av.as_slice(), &[1_i32, 2, 3, 4, 5, 6]); assert_eq!(av2.as_slice(), &[]); } #[test] fn ArrayVec_remove() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); assert_eq!(av.remove(1), 2); assert_eq!(&av[..], &[1, 3][..]); } #[test] #[should_panic] fn ArrayVec_remove_invalid() { let mut av: ArrayVec<[i32; 1]> = Default::default(); av.push(1); av.remove(1); } #[test] fn ArrayVec_swap_remove() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); av.push(4); assert_eq!(av.swap_remove(3), 4); assert_eq!(&av[..], &[1, 2, 3][..]); assert_eq!(av.swap_remove(0), 1); assert_eq!(&av[..], &[3, 2][..]); assert_eq!(av.swap_remove(0), 3); assert_eq!(&av[..], &[2][..]); assert_eq!(av.swap_remove(0), 2); assert_eq!(&av[..], &[][..]); } #[test] fn ArrayVec_drain() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); assert_eq!(Vec::from_iter(av.clone().drain(..)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(..2)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().drain(..3)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(..=1)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().drain(..=2)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(0..)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(1..)), vec![2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(0..2)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().drain(0..3)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(1..2)), vec![2]); assert_eq!(Vec::from_iter(av.clone().drain(1..3)), vec![2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(0..=1)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().drain(0..=2)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().drain(1..=1)), vec![2]); assert_eq!(Vec::from_iter(av.clone().drain(1..=2)), vec![2, 3]); } #[test] fn ArrayVec_splice() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); // splice returns the same things as drain assert_eq!(Vec::from_iter(av.clone().splice(.., None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(..2, None)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().splice(..3, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(..=1, None)), vec![1, 2]); 
assert_eq!(Vec::from_iter(av.clone().splice(..=2, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(0.., None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(1.., None)), vec![2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(0..2, None)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().splice(0..3, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(1..2, None)), vec![2]); assert_eq!(Vec::from_iter(av.clone().splice(1..3, None)), vec![2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(0..=1, None)), vec![1, 2]); assert_eq!(Vec::from_iter(av.clone().splice(0..=2, None)), vec![1, 2, 3]); assert_eq!(Vec::from_iter(av.clone().splice(1..=1, None)), vec![2]); assert_eq!(Vec::from_iter(av.clone().splice(1..=2, None)), vec![2, 3]); // splice removes the same things as drain let mut av2 = av.clone(); av2.splice(.., None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(..2, None); assert_eq!(av2, array_vec![3]); let mut av2 = av.clone(); av2.splice(..3, None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(..=1, None); assert_eq!(av2, array_vec![3]); let mut av2 = av.clone(); av2.splice(..=2, None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(0.., None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(1.., None); assert_eq!(av2, array_vec![1]); let mut av2 = av.clone(); av2.splice(0..2, None); assert_eq!(av2, array_vec![3]); let mut av2 = av.clone(); av2.splice(0..3, None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(1..2, None); assert_eq!(av2, array_vec![1, 3]); let mut av2 = av.clone(); av2.splice(1..3, None); assert_eq!(av2, array_vec![1]); let mut av2 = av.clone(); av2.splice(0..=1, None); assert_eq!(av2, array_vec![3]); let mut av2 = av.clone(); av2.splice(0..=2, None); assert_eq!(av2, array_vec![]); let mut av2 = av.clone(); av2.splice(1..=1, None); assert_eq!(av2, array_vec![1, 3]); let mut av2 = av.clone(); av2.splice(1..=2, None); assert_eq!(av2, array_vec![1]); // splice adds the elements correctly let mut av2 = av.clone(); av2.splice(.., 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(..2, 4..=6); assert_eq!(av2, array_vec![4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(..3, 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(..=1, 4..=6); assert_eq!(av2, array_vec![4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(..=2, 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(0.., 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(1.., 4..=6); assert_eq!(av2, array_vec![1, 4, 5, 6]); let mut av2 = av.clone(); av2.splice(0..2, 4..=6); assert_eq!(av2, array_vec![4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(0..3, 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(1..2, 4..=6); assert_eq!(av2, array_vec![1, 4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(1..3, 4..=6); assert_eq!(av2, array_vec![1, 4, 5, 6]); let mut av2 = av.clone(); av2.splice(0..=1, 4..=6); assert_eq!(av2, array_vec![4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(0..=2, 4..=6); assert_eq!(av2, array_vec![4, 5, 6]); let mut av2 = av.clone(); av2.splice(1..=1, 4..=6); assert_eq!(av2, array_vec![1, 4, 5, 6, 3]); let mut av2 = av.clone(); av2.splice(1..=2, 4..=6); assert_eq!(av2, array_vec![1, 4, 5, 6]); // splice adds the elements correctly when the replacement is smaller let mut av2 = 
av.clone(); av2.splice(.., Some(4)); assert_eq!(av2, array_vec![4]); let mut av2 = av.clone(); av2.splice(..2, Some(4)); assert_eq!(av2, array_vec![4, 3]); let mut av2 = av.clone(); av2.splice(1.., Some(4)); assert_eq!(av2, array_vec![1, 4]); let mut av2 = av.clone(); av2.splice(1..=1, Some(4)); assert_eq!(av2, array_vec![1, 4, 3]); } #[test] fn iter_last_nth() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); av.push(4); assert_eq!(av.len(), 4); let mut iter = av.into_iter(); assert_eq!(iter.next(), Some(1)); assert_eq!(iter.next(), Some(2)); assert_eq!(iter.next(), Some(3)); assert_eq!(iter.next(), Some(4)); assert_eq!(iter.next(), None); assert_eq!(iter.last(), None); let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); assert_eq!(av.into_iter().nth(0), Some(1)); } #[test] #[cfg(feature = "rustc_1_40")] fn reviter() { let mut av: ArrayVec<[i32; 10]> = Default::default(); av.push(1); av.push(2); av.push(3); av.push(4); let mut iter = av.into_iter(); assert_eq!(iter.next(), Some(1)); assert_eq!(iter.next_back(), Some(4)); assert_eq!(iter.next(), Some(2)); assert_eq!(iter.next_back(), Some(3)); assert_eq!(iter.next(), None); assert_eq!(iter.next_back(), None); let mut av: ArrayVec<[i32; 32]> = Default::default(); av.extend(0..32); let mut iter = av.into_iter(); assert_eq!(iter.nth_back(0), Some(31)); assert_eq!(iter.nth_back(2), Some(28)); assert_eq!(iter.nth_back(0), Some(27)); assert_eq!(iter.nth_back(99), None); assert_eq!(iter.nth_back(99), None); } #[cfg(feature = "serde")] #[test] fn ArrayVec_ser_de_empty() { let tv: ArrayVec<[i32; 0]> = Default::default(); assert_tokens(&tv, &[Token::Seq { len: Some(0) }, Token::SeqEnd]); } #[cfg(feature = "serde")] #[test] fn ArrayVec_ser_de() { let mut tv: ArrayVec<[i32; 4]> = Default::default(); tv.push(1); tv.push(2); tv.push(3); tv.push(4); assert_tokens( &tv, &[ Token::Seq { len: Some(4) }, Token::I32(1), Token::I32(2), Token::I32(3), Token::I32(4), Token::SeqEnd, ], ); } #[test] fn ArrayVec_try_from_slice() { use std::convert::TryFrom; let nums = [1, 2, 3, 4]; let empty: Result, _> = ArrayVec::try_from(&nums[..0]); assert!(empty.is_ok()); assert_eq!(empty.unwrap().as_slice(), &[]); let fits: Result, _> = ArrayVec::try_from(&nums[..2]); assert!(fits.is_ok()); assert_eq!(fits.unwrap().as_slice(), &[1, 2]); let doesnt_fit: Result, _> = ArrayVec::try_from(&nums[..4]); assert!(doesnt_fit.is_err()); } #[test] fn ArrayVec_pretty_debug() { let arr: [i32; 3] = [1, 2, 3]; let expect = format!("{:#?}", arr); let arr: ArrayVec<[i32; 3]> = array_vec![1, 2, 3]; let got = format!("{:#?}", arr); assert_eq!(got, expect); } vendor/tinyvec/compare_benchmarks.py0000664000175000017500000000166414172417313020537 0ustar mwhudsonmwhudsonimport os import os.path import json comparisons = [] for (root, _dirs, files) in os.walk('target/criterion'): for file in files: if file == 'estimates.json' and root.endswith( 'new') and 'TinyVec' in root: path = os.path.join(root, file) bench_name = path.split('/')[3] tinyvec_time = json.load(open(path))['mean']['point_estimate'] path = path.replace('TinyVec', 'SmallVec') smallvec_time = json.load(open(path))['mean']['point_estimate'] comparisons.append((bench_name, tinyvec_time / smallvec_time)) comparisons.sort(key=lambda x: x[1]) longest_name = max(len(c[0]) for c in comparisons) for (name, ratio) in comparisons: # Undo the criterion name mangling name = name.replace('_[', '<[') name = name.replace(']___', ']>::') name = name.ljust(longest_name) 
print(f"{name} {ratio:.2f}") vendor/tinyvec/rustfmt.toml0000664000175000017500000000043314160055207016731 0ustar mwhudsonmwhudson # Stable edition = "2018" fn_args_layout = "Compressed" max_width = 80 tab_spaces = 2 use_field_init_shorthand = true use_try_shorthand = true use_small_heuristics = "Max" # Unstable format_code_in_doc_comments = true wrap_comments = true imports_granularity="Crate" vendor/tinyvec/README.md0000664000175000017500000000214514160055207015611 0ustar mwhudsonmwhudson[![License:Zlib](https://img.shields.io/badge/License-Zlib-brightgreen.svg)](https://opensource.org/licenses/Zlib) ![Minimum Rust Version](https://img.shields.io/badge/Min%20Rust-1.34-green.svg) [![crates.io](https://img.shields.io/crates/v/tinyvec.svg)](https://crates.io/crates/tinyvec) [![docs.rs](https://docs.rs/tinyvec/badge.svg)](https://docs.rs/tinyvec/) ![Unsafe-Zero-Percent](https://img.shields.io/badge/Unsafety-0%25-brightgreen.svg) # tinyvec A 100% safe crate of vec-like types. `#![forbid(unsafe_code)]` Main types are as follows: * `ArrayVec` is an array-backed vec-like data structure. It panics on overflow. * `SliceVec` is the same deal, but using a `&mut [T]`. * `TinyVec` (`alloc` feature) is an enum that's either an `Inline(ArrayVec)` or a `Heap(Vec)`. If a `TinyVec` is `Inline` and would overflow it automatically transitions to `Heap` and continues whatever it was doing. To attain this "100% safe code" status there is one compromise: the element type of the vecs must implement `Default`. For more details, please see [the docs.rs documentation](https://docs.rs/tinyvec/) vendor/tinyvec/LICENSE-MIT.md0000664000175000017500000000200414160055207016357 0ustar mwhudsonmwhudsonPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/rustfix/0000775000175000017500000000000014160055207014353 5ustar mwhudsonmwhudsonvendor/rustfix/.cargo-checksum.json0000664000175000017500000000013114160055207020212 0ustar mwhudsonmwhudson{"files":{},"package":"6f0be05fc0675ef4f47119dc39cfc46636bb77d4fc4ef1bd851b9c3f7697f32a"}vendor/rustfix/LICENSE-APACHE0000664000175000017500000002613614160055207016307 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
vendor/rustfix/proptest-regressions/0000775000175000017500000000000014160055207020574 5ustar mwhudsonmwhudsonvendor/rustfix/proptest-regressions/replace.txt0000664000175000017500000000073014160055207022750 0ustar mwhudsonmwhudson# Seeds for failure cases proptest has generated in the past. It is # automatically read and these particular cases re-run before any # novel cases are generated. # # It is recommended to check this file in to source control so that # everyone who runs the test benefits from these saved cases. xs 358148376 3634975642 2528447681 3675516813 # shrinks to ref s = "" xs 3127423015 3362740891 2605681441 2390162043 # shrinks to ref data = "", ref replacements = [(0..0, [])] vendor/rustfix/Cargo.toml0000664000175000017500000000260314160055207016304 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] edition = "2018" name = "rustfix" version = "0.6.0" authors = ["Pascal Hertleif ", "Oliver Schneider "] exclude = ["etc/*", "examples/*", "tests/*"] description = "Automatically apply the suggestions made by rustc" documentation = "https://docs.rs/rustfix" readme = "Readme.md" license = "Apache-2.0/MIT" repository = "https://github.com/rust-lang-nursery/rustfix" [dependencies.anyhow] version = "1.0.0" [dependencies.log] version = "0.4.1" [dependencies.serde] version = "1.0" features = ["derive"] [dependencies.serde_json] version = "1.0" [dev-dependencies.duct] version = "0.9" [dev-dependencies.env_logger] version = "0.5.0-rc.1" [dev-dependencies.log] version = "0.4.1" [dev-dependencies.proptest] version = "0.7.0" [dev-dependencies.similar] version = "0.4.0" [dev-dependencies.tempdir] version = "0.3.5" vendor/rustfix/src/0000775000175000017500000000000014160055207015142 5ustar mwhudsonmwhudsonvendor/rustfix/src/diagnostics.rs0000664000175000017500000000566714160055207020035 0ustar mwhudsonmwhudson//! Rustc Diagnostic JSON Output //! //! The following data types are copied from [rust-lang/rust](https://github.com/rust-lang/rust/blob/de78655bca47cac8e783dbb563e7e5c25c1fae40/src/libsyntax/json.rs) use serde::Deserialize; #[derive(Clone, Deserialize, Debug, Hash, Eq, PartialEq)] pub struct Diagnostic { /// The primary error message. pub message: String, pub code: Option, /// "error: internal compiler error", "error", "warning", "note", "help". level: String, pub spans: Vec, /// Associated diagnostic messages. pub children: Vec, /// The message as rustc would render it. Currently this is only /// `Some` for "suggestions", but eventually it will include all /// snippets. pub rendered: Option, } #[derive(Clone, Deserialize, Debug, Hash, Eq, PartialEq)] pub struct DiagnosticSpan { pub file_name: String, pub byte_start: u32, pub byte_end: u32, /// 1-based. pub line_start: usize, pub line_end: usize, /// 1-based, character offset. pub column_start: usize, pub column_end: usize, /// Is this a "primary" span -- meaning the point, or one of the points, /// where the error occurred? is_primary: bool, /// Source text from the start of line_start to the end of line_end. 
pub text: Vec, /// Label that should be placed at this location (if any) label: Option, /// If we are suggesting a replacement, this will contain text /// that should be sliced in atop this span. You may prefer to /// load the fully rendered version from the parent `Diagnostic`, /// however. pub suggested_replacement: Option, pub suggestion_applicability: Option, /// Macro invocations that created the code at this span, if any. expansion: Option>, } #[derive(Copy, Clone, Debug, PartialEq, Deserialize, Hash, Eq)] pub enum Applicability { MachineApplicable, HasPlaceholders, MaybeIncorrect, Unspecified, } #[derive(Clone, Deserialize, Debug, Eq, PartialEq, Hash)] pub struct DiagnosticSpanLine { pub text: String, /// 1-based, character offset in self.text. pub highlight_start: usize, pub highlight_end: usize, } #[derive(Clone, Deserialize, Debug, Eq, PartialEq, Hash)] struct DiagnosticSpanMacroExpansion { /// span where macro was applied to generate this code; note that /// this may itself derive from a macro (if /// `span.expansion.is_some()`) span: DiagnosticSpan, /// name of macro that was applied (e.g., "foo!" or "#[derive(Eq)]") macro_decl_name: String, /// span where macro was defined (if known) def_site_span: Option, } #[derive(Clone, Deserialize, Debug, Eq, PartialEq, Hash)] pub struct DiagnosticCode { /// The code itself. pub code: String, /// An explanation for the code. explanation: Option, } vendor/rustfix/src/replace.rs0000664000175000017500000002372014160055207017127 0ustar mwhudsonmwhudson//! A small module giving you a simple container that allows easy and cheap //! replacement of parts of its content, with the ability to prevent changing //! the same parts multiple times. use anyhow::{anyhow, ensure, Error}; use std::rc::Rc; #[derive(Debug, Clone, PartialEq, Eq)] enum State { Initial, Replaced(Rc<[u8]>), Inserted(Rc<[u8]>), } impl State { fn is_inserted(&self) -> bool { matches!(*self, State::Inserted(..)) } } #[derive(Debug, Clone, PartialEq, Eq)] struct Span { /// Start of this span in parent data start: usize, /// up to end including end: usize, data: State, } /// A container that allows easily replacing chunks of its data #[derive(Debug, Clone, Default)] pub struct Data { original: Vec, parts: Vec, } impl Data { /// Create a new data container from a slice of bytes pub fn new(data: &[u8]) -> Self { Data { original: data.into(), parts: vec![Span { data: State::Initial, start: 0, end: data.len().saturating_sub(1), }], } } /// Render this data as a vector of bytes pub fn to_vec(&self) -> Vec { if self.original.is_empty() { return Vec::new(); } self.parts.iter().fold(Vec::new(), |mut acc, d| { match d.data { State::Initial => acc.extend_from_slice(&self.original[d.start..=d.end]), State::Replaced(ref d) | State::Inserted(ref d) => acc.extend_from_slice(&d), }; acc }) } /// Replace a chunk of data with the given slice, erroring when this part /// was already changed previously. pub fn replace_range( &mut self, from: usize, up_to_and_including: usize, data: &[u8], ) -> Result<(), Error> { let exclusive_end = up_to_and_including + 1; ensure!( from <= exclusive_end, "Invalid range {}...{}, start is larger than end", from, up_to_and_including ); ensure!( up_to_and_including <= self.original.len(), "Invalid range {}...{} given, original data is only {} byte long", from, up_to_and_including, self.original.len() ); let insert_only = from == exclusive_end; // Since we error out when replacing an already replaced chunk of data, // we can take some shortcuts here. 
For example, there can be no // overlapping replacements -- we _always_ split a chunk of 'initial' // data into three[^empty] parts, and there can't ever be two 'initial' // parts touching. // // [^empty]: Leading and trailing ones might be empty if we replace // the whole chunk. As an optimization and without loss of generality we // don't add empty parts. let new_parts = { let index_of_part_to_split = self .parts .iter() .position(|p| { !p.data.is_inserted() && p.start <= from && p.end >= up_to_and_including }) .ok_or_else(|| { use log::Level::Debug; if log_enabled!(Debug) { let slices = self .parts .iter() .map(|p| { ( p.start, p.end, match p.data { State::Initial => "initial", State::Replaced(..) => "replaced", State::Inserted(..) => "inserted", }, ) }) .collect::>(); debug!( "no single slice covering {}...{}, current slices: {:?}", from, up_to_and_including, slices, ); } anyhow!( "Could not replace range {}...{} in file \ -- maybe parts of it were already replaced?", from, up_to_and_including ) })?; let part_to_split = &self.parts[index_of_part_to_split]; // If this replacement matches exactly the part that we would // otherwise split then we ignore this for now. This means that you // can replace the exact same range with the exact same content // multiple times and we'll process and allow it. // // This is currently done to alleviate issues like // rust-lang/rust#51211 although this clause likely wants to be // removed if that's fixed deeper in the compiler. if part_to_split.start == from && part_to_split.end == up_to_and_including { if let State::Replaced(ref replacement) = part_to_split.data { if &**replacement == data { return Ok(()); } } } ensure!( part_to_split.data == State::Initial, "Cannot replace slice of data that was already replaced" ); let mut new_parts = Vec::with_capacity(self.parts.len() + 2); // Previous parts if let Some(ps) = self.parts.get(..index_of_part_to_split) { new_parts.extend_from_slice(&ps); } // Keep initial data on left side of part if from > part_to_split.start { new_parts.push(Span { start: part_to_split.start, end: from.saturating_sub(1), data: State::Initial, }); } // New part new_parts.push(Span { start: from, end: up_to_and_including, data: if insert_only { State::Inserted(data.into()) } else { State::Replaced(data.into()) }, }); // Keep initial data on right side of part if up_to_and_including < part_to_split.end { new_parts.push(Span { start: up_to_and_including + 1, end: part_to_split.end, data: State::Initial, }); } // Following parts if let Some(ps) = self.parts.get(index_of_part_to_split + 1..) 
{ new_parts.extend_from_slice(&ps); } new_parts }; self.parts = new_parts; Ok(()) } } #[cfg(test)] mod tests { use super::*; use proptest::prelude::*; fn str(i: &[u8]) -> &str { ::std::str::from_utf8(i).unwrap() } #[test] fn replace_some_stuff() { let mut d = Data::new(b"foo bar baz"); d.replace_range(4, 6, b"lol").unwrap(); assert_eq!("foo lol baz", str(&d.to_vec())); } #[test] fn replace_a_single_char() { let mut d = Data::new(b"let y = true;"); d.replace_range(4, 4, b"mut y").unwrap(); assert_eq!("let mut y = true;", str(&d.to_vec())); } #[test] fn replace_multiple_lines() { let mut d = Data::new(b"lorem\nipsum\ndolor"); d.replace_range(6, 10, b"lol").unwrap(); assert_eq!("lorem\nlol\ndolor", str(&d.to_vec())); d.replace_range(12, 16, b"lol").unwrap(); assert_eq!("lorem\nlol\nlol", str(&d.to_vec())); } #[test] fn replace_multiple_lines_with_insert_only() { let mut d = Data::new(b"foo!"); d.replace_range(3, 2, b"bar").unwrap(); assert_eq!("foobar!", str(&d.to_vec())); d.replace_range(0, 2, b"baz").unwrap(); assert_eq!("bazbar!", str(&d.to_vec())); d.replace_range(3, 3, b"?").unwrap(); assert_eq!("bazbar?", str(&d.to_vec())); } #[test] fn replace_invalid_range() { let mut d = Data::new(b"foo!"); assert!(d.replace_range(2, 0, b"bar").is_err()); assert!(d.replace_range(0, 2, b"bar").is_ok()); } #[test] fn empty_to_vec_roundtrip() { let s = ""; assert_eq!(s.as_bytes(), Data::new(s.as_bytes()).to_vec().as_slice()); } #[test] #[should_panic(expected = "Cannot replace slice of data that was already replaced")] fn replace_overlapping_stuff_errs() { let mut d = Data::new(b"foo bar baz"); d.replace_range(4, 6, b"lol").unwrap(); assert_eq!("foo lol baz", str(&d.to_vec())); d.replace_range(4, 6, b"lol2").unwrap(); } #[test] #[should_panic(expected = "original data is only 3 byte long")] fn broken_replacements() { let mut d = Data::new(b"foo"); d.replace_range(4, 7, b"lol").unwrap(); } #[test] fn replace_same_twice() { let mut d = Data::new(b"foo"); d.replace_range(0, 0, b"b").unwrap(); d.replace_range(0, 0, b"b").unwrap(); assert_eq!("boo", str(&d.to_vec())); } proptest! 
{ #[test] #[ignore] fn new_to_vec_roundtrip(ref s in "\\PC*") { assert_eq!(s.as_bytes(), Data::new(s.as_bytes()).to_vec().as_slice()); } #[test] #[ignore] fn replace_random_chunks( ref data in "\\PC*", ref replacements in prop::collection::vec( (any::<::std::ops::Range>(), any::>()), 1..1337, ) ) { let mut d = Data::new(data.as_bytes()); for &(ref range, ref bytes) in replacements { let _ = d.replace_range(range.start, range.end, bytes); } } } } vendor/rustfix/src/lib.rs0000664000175000017500000001712014160055207016257 0ustar mwhudsonmwhudson#![warn(rust_2018_idioms)] #[macro_use] extern crate log; #[cfg(test)] #[macro_use] extern crate proptest; use std::collections::HashSet; use std::ops::Range; use anyhow::Error; pub mod diagnostics; use crate::diagnostics::{Diagnostic, DiagnosticSpan}; mod replace; #[derive(Debug, Clone, Copy)] pub enum Filter { MachineApplicableOnly, Everything, } pub fn get_suggestions_from_json( input: &str, only: &HashSet, filter: Filter, ) -> serde_json::error::Result> { let mut result = Vec::new(); for cargo_msg in serde_json::Deserializer::from_str(input).into_iter::() { // One diagnostic line might have multiple suggestions result.extend(collect_suggestions(&cargo_msg?, only, filter)); } Ok(result) } #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub struct LinePosition { pub line: usize, pub column: usize, } impl std::fmt::Display for LinePosition { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "{}:{}", self.line, self.column) } } #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub struct LineRange { pub start: LinePosition, pub end: LinePosition, } impl std::fmt::Display for LineRange { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "{}-{}", self.start, self.end) } } #[derive(Debug, Clone, Hash, PartialEq, Eq)] /// An error/warning and possible solutions for fixing it pub struct Suggestion { pub message: String, pub snippets: Vec, pub solutions: Vec, } #[derive(Debug, Clone, Hash, PartialEq, Eq)] pub struct Solution { pub message: String, pub replacements: Vec, } #[derive(Debug, Clone, Hash, PartialEq, Eq)] pub struct Snippet { pub file_name: String, pub line_range: LineRange, pub range: Range, /// leading surrounding text, text to replace, trailing surrounding text /// /// This split is useful for higlighting the part that gets replaced pub text: (String, String, String), } #[derive(Debug, Clone, Hash, PartialEq, Eq)] pub struct Replacement { pub snippet: Snippet, pub replacement: String, } fn parse_snippet(span: &DiagnosticSpan) -> Option { // unindent the snippet let indent = span .text .iter() .map(|line| { let indent = line .text .chars() .take_while(|&c| char::is_whitespace(c)) .count(); std::cmp::min(indent, line.highlight_start) }) .min()?; let text_slice = span.text[0].text.chars().collect::>(); // We subtract `1` because these highlights are 1-based // Check the `min` so that it doesn't attempt to index out-of-bounds when // the span points to the "end" of the line. For example, a line of // "foo\n" with a highlight_start of 5 is intended to highlight *after* // the line. This needs to compensate since the newline has been removed // from the text slice. 
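    // Worked example of the clamping below (illustration only): for a source
    // line "foo" (the newline is already stripped, so text_slice.len() == 3)
    // with a 1-based highlight_start of 5, (5 - 1).min(3) yields 3 -- an index
    // just past the end of the slice instead of an out-of-bounds access.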
let start = (span.text[0].highlight_start - 1).min(text_slice.len()); let end = (span.text[0].highlight_end - 1).min(text_slice.len()); let lead = text_slice[indent..start].iter().collect(); let mut body: String = text_slice[start..end].iter().collect(); for line in span.text.iter().take(span.text.len() - 1).skip(1) { body.push('\n'); body.push_str(&line.text[indent..]); } let mut tail = String::new(); let last = &span.text[span.text.len() - 1]; // If we get a DiagnosticSpanLine where highlight_end > text.len(), we prevent an 'out of // bounds' access by making sure the index is within the array bounds. // `saturating_sub` is used in case of an empty file let last_tail_index = last.highlight_end.min(last.text.len()).saturating_sub(1); let last_slice = last.text.chars().collect::>(); if span.text.len() > 1 { body.push('\n'); body.push_str( &last_slice[indent..last_tail_index] .iter() .collect::(), ); } tail.push_str(&last_slice[last_tail_index..].iter().collect::()); Some(Snippet { file_name: span.file_name.clone(), line_range: LineRange { start: LinePosition { line: span.line_start, column: span.column_start, }, end: LinePosition { line: span.line_end, column: span.column_end, }, }, range: (span.byte_start as usize)..(span.byte_end as usize), text: (lead, body, tail), }) } fn collect_span(span: &DiagnosticSpan) -> Option { let snippet = parse_snippet(span)?; let replacement = span.suggested_replacement.clone()?; Some(Replacement { snippet, replacement, }) } pub fn collect_suggestions( diagnostic: &Diagnostic, only: &HashSet, filter: Filter, ) -> Option { if !only.is_empty() { if let Some(ref code) = diagnostic.code { if !only.contains(&code.code) { // This is not the code we are looking for return None; } } else { // No code, probably a weird builtin warning/error return None; } } let snippets = diagnostic .spans .iter() .filter_map(|span| parse_snippet(span)) .collect(); let solutions: Vec<_> = diagnostic .children .iter() .filter_map(|child| { let replacements: Vec<_> = child .spans .iter() .filter(|span| { use crate::diagnostics::Applicability::*; use crate::Filter::*; match (filter, &span.suggestion_applicability) { (MachineApplicableOnly, Some(MachineApplicable)) => true, (MachineApplicableOnly, _) => false, (Everything, _) => true, } }) .filter_map(collect_span) .collect(); if replacements.len() >= 1 { Some(Solution { message: child.message.clone(), replacements, }) } else { None } }) .collect(); if solutions.is_empty() { None } else { Some(Suggestion { message: diagnostic.message.clone(), snippets, solutions, }) } } pub struct CodeFix { data: replace::Data, } impl CodeFix { pub fn new(s: &str) -> CodeFix { CodeFix { data: replace::Data::new(s.as_bytes()), } } pub fn apply(&mut self, suggestion: &Suggestion) -> Result<(), Error> { for sol in &suggestion.solutions { for r in &sol.replacements { self.data.replace_range( r.snippet.range.start, r.snippet.range.end.saturating_sub(1), r.replacement.as_bytes(), )?; } } Ok(()) } pub fn finish(&self) -> Result { Ok(String::from_utf8(self.data.to_vec())?) } } pub fn apply_suggestions(code: &str, suggestions: &[Suggestion]) -> Result { let mut fix = CodeFix::new(code); for suggestion in suggestions.iter().rev() { fix.apply(suggestion)?; } fix.finish() } vendor/rustfix/Readme.md0000664000175000017500000000275514160055207016103 0ustar mwhudsonmwhudson# rustfix The goal of this tool is to read and apply the suggestions made by rustc. 
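A minimal sketch of how the `rustfix` library can be used for that, based on the `get_suggestions_from_json` and `apply_suggestions` functions in `src/lib.rs` (it assumes you have already captured rustc's `--error-format=json` output as a string, and that your own crate also depends on `anyhow` for the error type):

```rust
use std::collections::HashSet;

use anyhow::Error;
use rustfix::{apply_suggestions, get_suggestions_from_json, Filter};

/// Parse rustc's JSON diagnostics and apply the machine-applicable
/// suggestions to `code`, returning the fixed source text.
fn fix_source(code: &str, rustc_json: &str) -> Result<String, Error> {
    // An empty set means "do not filter by error code".
    let only = HashSet::new();
    // Keep only suggestions that rustc marks as machine applicable.
    let suggestions =
        get_suggestions_from_json(rustc_json, &only, Filter::MachineApplicableOnly)?;
    // Apply every collected suggestion to the original source.
    Ok(apply_suggestions(code, &suggestions)?)
}
```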
## Current status Currently, rustfix is split into two crates: - `rustfix`, a library for consuming and applying suggestions in the format that `rustc` outputs - and `cargo-fix`, a binary that works as cargo subcommand and that end users will use to fix their code. The magic of rustfix is entirely dependent on the diagnostics implemented in the Rust compiler (and external lints, like [clippy]). [clippy]: https://github.com/rust-lang-nursery/rust-clippy ## Installation To use the rustfix library, add it to your `Cargo.toml`. To get the tool to automatically fix warnings in, run `cargo install cargo-fix`. This will give you `cargo fix`. ## Using `cargo fix` to transition to Rust 2018 Instructions on how to use this tool to transition a crate to Rust 2018 can be found [in the Rust Edition Guide.](https://rust-lang-nursery.github.io/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html) ## License Licensed under either of - Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or ) - MIT license ([LICENSE-MIT](LICENSE-MIT) or ) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. vendor/rustfix/LICENSE-MIT0000664000175000017500000000207214160055207016010 0ustar mwhudsonmwhudsonThe MIT License (MIT) Copyright (c) 2016 Pascal Hertleif Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/rustfix/Changelog.md0000664000175000017500000000357214160055207016573 0ustar mwhudsonmwhudson# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [Unreleased] ## [0.4.6] - 2019-07-16 ### Changed Internal changes: - Change example to automatically determine filename - Migrate to Rust 2018 - use `derive` feature over `serde_derive` crate ## [0.4.5] - 2019-03-26 ### Added - Implement common traits for Diagnostic and related types ### Fixed - Fix out of bounds access in parse_snippet ## [0.4.4] - 2018-12-13 ### Added - Make Diagnostic::rendered public. 
### Changed - Revert faulty "Allow multiple solutions in a suggestion" ## [0.4.3] - 2018-12-09 - *yanked!* ### Added - Allow multiple solutions in a suggestion ### Changed - use `RUSTC` environment var if present ## [0.4.2] - 2018-07-31 ### Added - Expose an interface to apply fixes on-by-one ### Changed - Handle invalid snippets instead of panicking ## [0.4.1] - 2018-07-26 ### Changed - Ignore duplicate replacements ## [0.4.0] - 2018-05-23 ### Changed - Filter by machine applicability by default [Unreleased]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.6...HEAD [0.4.6]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.5...rustfix-0.4.6 [0.4.5]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.4...rustfix-0.4.5 [0.4.4]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.3...rustfix-0.4.4 [0.4.3]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.2...rustfix-0.4.3 [0.4.2]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.1...rustfix-0.4.2 [0.4.1]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.0...rustfix-0.4.1 [0.4.0]: https://github.com/rust-lang-nursery/rustfix/compare/rustfix-0.4.0 vendor/socket2/0000775000175000017500000000000014172417313014224 5ustar mwhudsonmwhudsonvendor/socket2/.cargo-checksum.json0000664000175000017500000000013114172417313020063 0ustar mwhudsonmwhudson{"files":{},"package":"0f82496b90c36d70af5fcd482edaa2e0bd16fade569de1330405fecbbdac736b"}vendor/socket2/LICENSE-APACHE0000664000175000017500000002513714160055207016155 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. vendor/socket2/Cargo.toml0000664000175000017500000000263614172417313016163 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] edition = "2018" name = "socket2" version = "0.4.3" authors = ["Alex Crichton ", "Thomas de Zeeuw "] include = ["Cargo.toml", "LICENSE-APACHE", "LICENSE-MIT", "README.md", "src/**/*.rs"] description = "Utilities for handling networking sockets with a maximal amount of configuration\npossible intended.\n" homepage = "https://github.com/rust-lang/socket2" documentation = "https://docs.rs/socket2" readme = "README.md" keywords = ["io", "socket", "network"] categories = ["api-bindings", "network-programming"] license = "MIT/Apache-2.0" repository = "https://github.com/rust-lang/socket2" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [package.metadata.playground] features = ["all"] [features] all = [] [target."cfg(unix)".dependencies.libc] version = "0.2.113" [target."cfg(windows)".dependencies.winapi] version = "0.3.9" features = ["handleapi", "ws2ipdef", "ws2tcpip"] vendor/socket2/src/0000775000175000017500000000000014172417313015013 5ustar mwhudsonmwhudsonvendor/socket2/src/sockaddr.rs0000664000175000017500000003020614172417313017154 0ustar mwhudsonmwhudsonuse std::mem::{self, size_of, MaybeUninit}; use std::net::{SocketAddr, SocketAddrV4, SocketAddrV6}; use std::{fmt, io}; use crate::sys::{ sa_family_t, sockaddr, sockaddr_in, sockaddr_in6, sockaddr_storage, socklen_t, AF_INET, AF_INET6, }; #[cfg(windows)] use winapi::shared::ws2ipdef::SOCKADDR_IN6_LH_u; /// The address of a socket. /// /// `SockAddr`s may be constructed directly to and from the standard library /// [`SocketAddr`], [`SocketAddrV4`], and [`SocketAddrV6`] types. pub struct SockAddr { storage: sockaddr_storage, len: socklen_t, } #[allow(clippy::len_without_is_empty)] impl SockAddr { /// Create a `SockAddr` from the underlying storage and its length. /// /// # Safety /// /// Caller must ensure that the address family and length match the type of /// storage address. For example if `storage.ss_family` is set to `AF_INET` /// the `storage` must be initialised as `sockaddr_in`, setting the content /// and length appropriately. /// /// # Examples /// /// ``` /// # fn main() -> std::io::Result<()> { /// # #[cfg(unix)] { /// use std::io; /// use std::mem; /// use std::os::unix::io::AsRawFd; /// /// use socket2::{SockAddr, Socket, Domain, Type}; /// /// let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?; /// /// // Initialise a `SocketAddr` byte calling `getsockname(2)`. /// let mut addr_storage: libc::sockaddr_storage = unsafe { mem::zeroed() }; /// let mut len = mem::size_of_val(&addr_storage) as libc::socklen_t; /// /// // The `getsockname(2)` system call will intiliase `storage` for /// // us, setting `len` to the correct length. /// let res = unsafe { /// libc::getsockname( /// socket.as_raw_fd(), /// (&mut addr_storage as *mut libc::sockaddr_storage).cast(), /// &mut len, /// ) /// }; /// if res == -1 { /// return Err(io::Error::last_os_error()); /// } /// /// let address = unsafe { SockAddr::new(addr_storage, len) }; /// # drop(address); /// # } /// # Ok(()) /// # } /// ``` pub const unsafe fn new(storage: sockaddr_storage, len: socklen_t) -> SockAddr { SockAddr { storage, len } } /// Initialise a `SockAddr` by calling the function `init`. /// /// The type of the address storage and length passed to the function `init` /// is OS/architecture specific. /// /// The address is zeroed before `init` is called and is thus valid to /// dereference and read from. The length initialised to the maximum length /// of the storage. 
/// /// # Safety /// /// Caller must ensure that the address family and length match the type of /// storage address. For example if `storage.ss_family` is set to `AF_INET` /// the `storage` must be initialised as `sockaddr_in`, setting the content /// and length appropriately. /// /// # Examples /// /// ``` /// # fn main() -> std::io::Result<()> { /// # #[cfg(unix)] { /// use std::io; /// use std::os::unix::io::AsRawFd; /// /// use socket2::{SockAddr, Socket, Domain, Type}; /// /// let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?; /// /// // Initialise a `SocketAddr` byte calling `getsockname(2)`. /// let (_, address) = unsafe { /// SockAddr::init(|addr_storage, len| { /// // The `getsockname(2)` system call will intiliase `storage` for /// // us, setting `len` to the correct length. /// if libc::getsockname(socket.as_raw_fd(), addr_storage.cast(), len) == -1 { /// Err(io::Error::last_os_error()) /// } else { /// Ok(()) /// } /// }) /// }?; /// # drop(address); /// # } /// # Ok(()) /// # } /// ``` pub unsafe fn init(init: F) -> io::Result<(T, SockAddr)> where F: FnOnce(*mut sockaddr_storage, *mut socklen_t) -> io::Result, { const STORAGE_SIZE: socklen_t = size_of::() as socklen_t; // NOTE: `SockAddr::unix` depends on the storage being zeroed before // calling `init`. // NOTE: calling `recvfrom` with an empty buffer also depends on the // storage being zeroed before calling `init` as the OS might not // initialise it. let mut storage = MaybeUninit::::zeroed(); let mut len = STORAGE_SIZE; init(storage.as_mut_ptr(), &mut len).map(|res| { debug_assert!(len <= STORAGE_SIZE, "overflown address storage"); let addr = SockAddr { // Safety: zeroed-out `sockaddr_storage` is valid, caller must // ensure at least `len` bytes are valid. storage: storage.assume_init(), len, }; (res, addr) }) } /// Returns this address's family. pub const fn family(&self) -> sa_family_t { self.storage.ss_family } /// Returns the size of this address in bytes. pub const fn len(&self) -> socklen_t { self.len } /// Returns a raw pointer to the address. pub const fn as_ptr(&self) -> *const sockaddr { &self.storage as *const _ as *const _ } /// Returns a raw pointer to the address storage. #[cfg(all(unix, not(target_os = "redox")))] pub(crate) const fn as_storage_ptr(&self) -> *const sockaddr_storage { &self.storage } /// Returns this address as a `SocketAddr` if it is in the `AF_INET` (IPv4) /// or `AF_INET6` (IPv6) family, otherwise returns `None`. pub fn as_socket(&self) -> Option { if self.storage.ss_family == AF_INET as sa_family_t { // Safety: if the ss_family field is AF_INET then storage must be a sockaddr_in. let addr = unsafe { &*(&self.storage as *const _ as *const sockaddr_in) }; let ip = crate::sys::from_in_addr(addr.sin_addr); let port = u16::from_be(addr.sin_port); Some(SocketAddr::V4(SocketAddrV4::new(ip, port))) } else if self.storage.ss_family == AF_INET6 as sa_family_t { // Safety: if the ss_family field is AF_INET6 then storage must be a sockaddr_in6. let addr = unsafe { &*(&self.storage as *const _ as *const sockaddr_in6) }; let ip = crate::sys::from_in6_addr(addr.sin6_addr); let port = u16::from_be(addr.sin6_port); Some(SocketAddr::V6(SocketAddrV6::new( ip, port, addr.sin6_flowinfo, #[cfg(unix)] addr.sin6_scope_id, #[cfg(windows)] unsafe { *addr.u.sin6_scope_id() }, ))) } else { None } } /// Returns this address as a [`SocketAddrV4`] if it is in the `AF_INET` /// family. 
pub fn as_socket_ipv4(&self) -> Option { match self.as_socket() { Some(SocketAddr::V4(addr)) => Some(addr), _ => None, } } /// Returns this address as a [`SocketAddrV6`] if it is in the `AF_INET6` /// family. pub fn as_socket_ipv6(&self) -> Option { match self.as_socket() { Some(SocketAddr::V6(addr)) => Some(addr), _ => None, } } } impl From for SockAddr { fn from(addr: SocketAddr) -> SockAddr { match addr { SocketAddr::V4(addr) => addr.into(), SocketAddr::V6(addr) => addr.into(), } } } impl From for SockAddr { fn from(addr: SocketAddrV4) -> SockAddr { let sockaddr_in = sockaddr_in { sin_family: AF_INET as sa_family_t, sin_port: addr.port().to_be(), sin_addr: crate::sys::to_in_addr(addr.ip()), sin_zero: Default::default(), #[cfg(any( target_os = "dragonfly", target_os = "freebsd", target_os = "haiku", target_os = "ios", target_os = "macos", target_os = "netbsd", target_os = "openbsd" ))] sin_len: 0, }; let mut storage = MaybeUninit::::zeroed(); // Safety: A `sockaddr_in` is memory compatible with a `sockaddr_storage` unsafe { (storage.as_mut_ptr() as *mut sockaddr_in).write(sockaddr_in) }; SockAddr { storage: unsafe { storage.assume_init() }, len: mem::size_of::() as socklen_t, } } } impl From for SockAddr { fn from(addr: SocketAddrV6) -> SockAddr { #[cfg(windows)] let u = unsafe { let mut u = mem::zeroed::(); *u.sin6_scope_id_mut() = addr.scope_id(); u }; let sockaddr_in6 = sockaddr_in6 { sin6_family: AF_INET6 as sa_family_t, sin6_port: addr.port().to_be(), sin6_addr: crate::sys::to_in6_addr(addr.ip()), sin6_flowinfo: addr.flowinfo(), #[cfg(unix)] sin6_scope_id: addr.scope_id(), #[cfg(windows)] u, #[cfg(any( target_os = "dragonfly", target_os = "freebsd", target_os = "haiku", target_os = "ios", target_os = "macos", target_os = "netbsd", target_os = "openbsd" ))] sin6_len: 0, #[cfg(any(target_os = "solaris", target_os = "illumos"))] __sin6_src_id: 0, }; let mut storage = MaybeUninit::::zeroed(); // Safety: A `sockaddr_in6` is memory compatible with a `sockaddr_storage` unsafe { (storage.as_mut_ptr() as *mut sockaddr_in6).write(sockaddr_in6) }; SockAddr { storage: unsafe { storage.assume_init() }, len: mem::size_of::() as socklen_t, } } } impl fmt::Debug for SockAddr { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { let mut f = fmt.debug_struct("SockAddr"); #[cfg(any( target_os = "dragonfly", target_os = "freebsd", target_os = "haiku", target_os = "hermit", target_os = "ios", target_os = "macos", target_os = "netbsd", target_os = "openbsd", target_os = "vxworks", ))] f.field("ss_len", &self.storage.ss_len); f.field("ss_family", &self.storage.ss_family) .field("len", &self.len) .finish() } } #[test] fn ipv4() { use std::net::Ipv4Addr; let std = SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 9876); let addr = SockAddr::from(std); assert_eq!(addr.family(), AF_INET as sa_family_t); assert_eq!(addr.len(), size_of::() as socklen_t); assert_eq!(addr.as_socket(), Some(SocketAddr::V4(std))); assert_eq!(addr.as_socket_ipv4(), Some(std)); assert!(addr.as_socket_ipv6().is_none()); let addr = SockAddr::from(SocketAddr::from(std)); assert_eq!(addr.family(), AF_INET as sa_family_t); assert_eq!(addr.len(), size_of::() as socklen_t); assert_eq!(addr.as_socket(), Some(SocketAddr::V4(std))); assert_eq!(addr.as_socket_ipv4(), Some(std)); assert!(addr.as_socket_ipv6().is_none()); } #[test] fn ipv6() { use std::net::Ipv6Addr; let std = SocketAddrV6::new(Ipv6Addr::new(1, 2, 3, 4, 5, 6, 7, 8), 9876, 11, 12); let addr = SockAddr::from(std); assert_eq!(addr.family(), AF_INET6 as sa_family_t); 
assert_eq!(addr.len(), size_of::() as socklen_t); assert_eq!(addr.as_socket(), Some(SocketAddr::V6(std))); assert!(addr.as_socket_ipv4().is_none()); assert_eq!(addr.as_socket_ipv6(), Some(std)); let addr = SockAddr::from(SocketAddr::from(std)); assert_eq!(addr.family(), AF_INET6 as sa_family_t); assert_eq!(addr.len(), size_of::() as socklen_t); assert_eq!(addr.as_socket(), Some(SocketAddr::V6(std))); assert!(addr.as_socket_ipv4().is_none()); assert_eq!(addr.as_socket_ipv6(), Some(std)); } vendor/socket2/src/sockref.rs0000664000175000017500000001167114160055207017020 0ustar mwhudsonmwhudsonuse std::fmt; use std::marker::PhantomData; use std::mem::ManuallyDrop; use std::ops::Deref; #[cfg(unix)] use std::os::unix::io::{AsRawFd, FromRawFd}; #[cfg(windows)] use std::os::windows::io::{AsRawSocket, FromRawSocket}; use crate::Socket; /// A reference to a [`Socket`] that can be used to configure socket types other /// than the `Socket` type itself. /// /// This allows for example a [`TcpStream`], found in the standard library, to /// be configured using all the additional methods found in the [`Socket`] API. /// /// `SockRef` can be created from any socket type that implements [`AsRawFd`] /// (Unix) or [`AsRawSocket`] (Windows) using the [`From`] implementation, but /// the caller must ensure the file descriptor/socket is a valid. /// /// [`TcpStream`]: std::net::TcpStream // Don't use intra-doc links because they won't build on every platform. /// [`AsRawFd`]: https://doc.rust-lang.org/stable/std/os/unix/io/trait.AsRawFd.html /// [`AsRawSocket`]: https://doc.rust-lang.org/stable/std/os/windows/io/trait.AsRawSocket.html /// /// # Examples /// /// Below is an example of converting a [`TcpStream`] into a [`SockRef`]. /// /// ``` /// use std::net::{TcpStream, SocketAddr}; /// /// use socket2::SockRef; /// /// # fn main() -> Result<(), Box> { /// // Create `TcpStream` from the standard library. /// let address: SocketAddr = "127.0.0.1:1234".parse()?; /// # let b1 = std::sync::Arc::new(std::sync::Barrier::new(2)); /// # let b2 = b1.clone(); /// # let handle = std::thread::spawn(move || { /// # let listener = std::net::TcpListener::bind(address).unwrap(); /// # b2.wait(); /// # let (stream, _) = listener.accept().unwrap(); /// # std::thread::sleep(std::time::Duration::from_millis(10)); /// # drop(stream); /// # }); /// # b1.wait(); /// let stream = TcpStream::connect(address)?; /// /// // Create a `SockRef`erence to the stream. /// let socket_ref = SockRef::from(&stream); /// // Use `Socket::set_nodelay` on the stream. /// socket_ref.set_nodelay(true)?; /// drop(socket_ref); /// /// assert_eq!(stream.nodelay()?, true); /// # handle.join().unwrap(); /// # Ok(()) /// # } /// ``` /// /// Below is an example of **incorrect usage** of `SockRef::from`, which is /// currently possible (but not intended and will be fixed in future versions). /// /// ```compile_fail /// use socket2::SockRef; /// /// # fn main() -> Result<(), Box> { /// /// THIS USAGE IS NOT VALID! /// let socket_ref = SockRef::from(&123); /// // The above line is overseen possibility when using `SockRef::from`, it /// // uses the `RawFd` (on Unix), which is a type alias for `c_int`/`i32`, /// // which implements `AsRawFd`. However it may be clear that this usage is /// // invalid as it doesn't guarantee that `123` is a valid file descriptor. /// /// // Using `Socket::set_nodelay` now will call it on a file descriptor we /// // don't own! We don't even not if the file descriptor is valid or a socket. 
/// socket_ref.set_nodelay(true)?; /// drop(socket_ref); /// # Ok(()) /// # } /// # DO_NOT_COMPILE /// ``` pub struct SockRef<'s> { /// Because this is a reference we don't own the `Socket`, however `Socket` /// closes itself when dropped, so we use `ManuallyDrop` to prevent it from /// closing itself. socket: ManuallyDrop, /// Because we don't own the socket we need to ensure the socket remains /// open while we have a "reference" to it, the lifetime `'s` ensures this. _lifetime: PhantomData<&'s Socket>, } impl<'s> Deref for SockRef<'s> { type Target = Socket; fn deref(&self) -> &Self::Target { &self.socket } } /// On Windows, a corresponding `From<&impl AsRawSocket>` implementation exists. #[cfg(unix)] #[cfg_attr(docsrs, doc(cfg(unix)))] impl<'s, S> From<&'s S> for SockRef<'s> where S: AsRawFd, { /// The caller must ensure `S` is actually a socket. fn from(socket: &'s S) -> Self { let fd = socket.as_raw_fd(); assert!(fd >= 0); SockRef { socket: ManuallyDrop::new(unsafe { Socket::from_raw_fd(fd) }), _lifetime: PhantomData, } } } /// On Unix, a corresponding `From<&impl AsRawFd>` implementation exists. #[cfg(windows)] #[cfg_attr(docsrs, doc(cfg(windows)))] impl<'s, S> From<&'s S> for SockRef<'s> where S: AsRawSocket, { /// See the `From<&impl AsRawFd>` implementation. fn from(socket: &'s S) -> Self { let socket = socket.as_raw_socket(); assert!(socket != winapi::um::winsock2::INVALID_SOCKET as _); SockRef { socket: ManuallyDrop::new(unsafe { Socket::from_raw_socket(socket) }), _lifetime: PhantomData, } } } impl fmt::Debug for SockRef<'_> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("SockRef") .field("raw", &self.socket.as_raw()) .field("local_addr", &self.socket.local_addr().ok()) .field("peer_addr", &self.socket.peer_addr().ok()) .finish() } } vendor/socket2/src/sys/0000775000175000017500000000000014172417313015631 5ustar mwhudsonmwhudsonvendor/socket2/src/sys/windows.rs0000664000175000017500000006302414172417313017676 0ustar mwhudsonmwhudson// Copyright 2015 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::cmp::min; use std::io::{self, IoSlice}; use std::marker::PhantomData; use std::mem::{self, size_of, MaybeUninit}; use std::net::{self, Ipv4Addr, Ipv6Addr, Shutdown}; use std::os::windows::prelude::*; use std::sync::Once; use std::time::{Duration, Instant}; use std::{ptr, slice}; use winapi::ctypes::c_long; use winapi::shared::in6addr::*; use winapi::shared::inaddr::*; use winapi::shared::minwindef::DWORD; use winapi::shared::minwindef::ULONG; use winapi::shared::mstcpip::{tcp_keepalive, SIO_KEEPALIVE_VALS}; use winapi::shared::ntdef::HANDLE; use winapi::shared::ws2def; use winapi::shared::ws2def::WSABUF; use winapi::um::handleapi::SetHandleInformation; use winapi::um::processthreadsapi::GetCurrentProcessId; use winapi::um::winbase::{self, INFINITE}; use winapi::um::winsock2::{ self as sock, u_long, POLLERR, POLLHUP, POLLRDNORM, POLLWRNORM, SD_BOTH, SD_RECEIVE, SD_SEND, WSAPOLLFD, }; use crate::{RecvFlags, SockAddr, TcpKeepalive, Type}; pub(crate) use winapi::ctypes::c_int; /// Fake MSG_TRUNC flag for the [`RecvFlags`] struct. /// /// The flag is enabled when a `WSARecv[From]` call returns `WSAEMSGSIZE`. The /// value of the flag is defined by us. pub(crate) const MSG_TRUNC: c_int = 0x01; // Used in `Domain`. 
pub(crate) use winapi::shared::ws2def::{AF_INET, AF_INET6}; // Used in `Type`. pub(crate) use winapi::shared::ws2def::{SOCK_DGRAM, SOCK_STREAM}; #[cfg(feature = "all")] pub(crate) use winapi::shared::ws2def::{SOCK_RAW, SOCK_SEQPACKET}; // Used in `Protocol`. pub(crate) const IPPROTO_ICMP: c_int = winapi::shared::ws2def::IPPROTO_ICMP as c_int; pub(crate) const IPPROTO_ICMPV6: c_int = winapi::shared::ws2def::IPPROTO_ICMPV6 as c_int; pub(crate) const IPPROTO_TCP: c_int = winapi::shared::ws2def::IPPROTO_TCP as c_int; pub(crate) const IPPROTO_UDP: c_int = winapi::shared::ws2def::IPPROTO_UDP as c_int; // Used in `SockAddr`. pub(crate) use winapi::shared::ws2def::{ ADDRESS_FAMILY as sa_family_t, SOCKADDR as sockaddr, SOCKADDR_IN as sockaddr_in, SOCKADDR_STORAGE as sockaddr_storage, }; pub(crate) use winapi::shared::ws2ipdef::SOCKADDR_IN6_LH as sockaddr_in6; pub(crate) use winapi::um::ws2tcpip::socklen_t; // Used in `Socket`. pub(crate) use winapi::shared::ws2def::{ IPPROTO_IP, SOL_SOCKET, SO_BROADCAST, SO_ERROR, SO_KEEPALIVE, SO_LINGER, SO_OOBINLINE, SO_RCVBUF, SO_RCVTIMEO, SO_REUSEADDR, SO_SNDBUF, SO_SNDTIMEO, SO_TYPE, TCP_NODELAY, }; #[cfg(feature = "all")] pub(crate) use winapi::shared::ws2ipdef::IP_HDRINCL; pub(crate) use winapi::shared::ws2ipdef::{ IPV6_ADD_MEMBERSHIP, IPV6_DROP_MEMBERSHIP, IPV6_MREQ as Ipv6Mreq, IPV6_MULTICAST_HOPS, IPV6_MULTICAST_IF, IPV6_MULTICAST_LOOP, IPV6_UNICAST_HOPS, IPV6_V6ONLY, IP_ADD_MEMBERSHIP, IP_DROP_MEMBERSHIP, IP_MREQ as IpMreq, IP_MULTICAST_IF, IP_MULTICAST_LOOP, IP_MULTICAST_TTL, IP_TOS, IP_TTL, }; pub(crate) use winapi::um::winsock2::{linger, MSG_OOB, MSG_PEEK}; pub(crate) const IPPROTO_IPV6: c_int = winapi::shared::ws2def::IPPROTO_IPV6 as c_int; /// Type used in set/getsockopt to retrieve the `TCP_NODELAY` option. /// /// NOTE: /// documents that options such as `TCP_NODELAY` and `SO_KEEPALIVE` expect a /// `BOOL` (alias for `c_int`, 4 bytes), however in practice this turns out to /// be false (or misleading) as a `BOOLEAN` (`c_uchar`, 1 byte) is returned by /// `getsockopt`. pub(crate) type Bool = winapi::shared::ntdef::BOOLEAN; /// Maximum size of a buffer passed to system call like `recv` and `send`. const MAX_BUF_LEN: usize = ::max_value() as usize; /// Helper macro to execute a system call that returns an `io::Result`. macro_rules! syscall { ($fn: ident ( $($arg: expr),* $(,)* ), $err_test: path, $err_value: expr) => {{ #[allow(unused_unsafe)] let res = unsafe { sock::$fn($($arg, )*) }; if $err_test(&res, &$err_value) { Err(io::Error::last_os_error()) } else { Ok(res) } }}; } impl_debug!( crate::Domain, ws2def::AF_INET, ws2def::AF_INET6, ws2def::AF_UNIX, ws2def::AF_UNSPEC, // = 0. ); /// Windows only API. impl Type { /// Our custom flag to set `WSA_FLAG_NO_HANDLE_INHERIT` on socket creation. /// Trying to mimic `Type::cloexec` on windows. const NO_INHERIT: c_int = 1 << ((size_of::() * 8) - 1); // Last bit. /// Set `WSA_FLAG_NO_HANDLE_INHERIT` on the socket. 
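    ///
    /// # Examples
    ///
    /// A minimal sketch (Windows only, requires the `all` feature); the flag
    /// only takes effect when the returned `Type` is used to create a socket:
    ///
    /// ```ignore
    /// use socket2::{Domain, Socket, Type};
    ///
    /// # fn main() -> std::io::Result<()> {
    /// // Sockets created with this type are not inherited by child processes.
    /// let socket = Socket::new(Domain::IPV4, Type::STREAM.no_inherit(), None)?;
    /// # Ok(()) }
    /// ```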
#[cfg(feature = "all")] #[cfg_attr(docsrs, doc(cfg(all(windows, feature = "all"))))] pub const fn no_inherit(self) -> Type { self._no_inherit() } pub(crate) const fn _no_inherit(self) -> Type { Type(self.0 | Type::NO_INHERIT) } } impl_debug!( crate::Type, ws2def::SOCK_STREAM, ws2def::SOCK_DGRAM, ws2def::SOCK_RAW, ws2def::SOCK_RDM, ws2def::SOCK_SEQPACKET, ); impl_debug!( crate::Protocol, self::IPPROTO_ICMP, self::IPPROTO_ICMPV6, self::IPPROTO_TCP, self::IPPROTO_UDP, ); impl std::fmt::Debug for RecvFlags { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("RecvFlags") .field("is_truncated", &self.is_truncated()) .finish() } } #[repr(transparent)] pub struct MaybeUninitSlice<'a> { vec: WSABUF, _lifetime: PhantomData<&'a mut [MaybeUninit]>, } unsafe impl<'a> Send for MaybeUninitSlice<'a> {} unsafe impl<'a> Sync for MaybeUninitSlice<'a> {} impl<'a> MaybeUninitSlice<'a> { pub fn new(buf: &'a mut [MaybeUninit]) -> MaybeUninitSlice<'a> { assert!(buf.len() <= ULONG::MAX as usize); MaybeUninitSlice { vec: WSABUF { len: buf.len() as ULONG, buf: buf.as_mut_ptr().cast(), }, _lifetime: PhantomData, } } pub fn as_slice(&self) -> &[MaybeUninit] { unsafe { slice::from_raw_parts(self.vec.buf.cast(), self.vec.len as usize) } } pub fn as_mut_slice(&mut self) -> &mut [MaybeUninit] { unsafe { slice::from_raw_parts_mut(self.vec.buf.cast(), self.vec.len as usize) } } } fn init() { static INIT: Once = Once::new(); INIT.call_once(|| { // Initialize winsock through the standard library by just creating a // dummy socket. Whether this is successful or not we drop the result as // libstd will be sure to have initialized winsock. let _ = net::UdpSocket::bind("127.0.0.1:34254"); }); } pub(crate) type Socket = sock::SOCKET; pub(crate) unsafe fn socket_from_raw(socket: Socket) -> crate::socket::Inner { crate::socket::Inner::from_raw_socket(socket as RawSocket) } pub(crate) fn socket_as_raw(socket: &crate::socket::Inner) -> Socket { socket.as_raw_socket() as Socket } pub(crate) fn socket_into_raw(socket: crate::socket::Inner) -> Socket { socket.into_raw_socket() as Socket } pub(crate) fn socket(family: c_int, mut ty: c_int, protocol: c_int) -> io::Result { init(); // Check if we set our custom flag. let flags = if ty & Type::NO_INHERIT != 0 { ty = ty & !Type::NO_INHERIT; sock::WSA_FLAG_NO_HANDLE_INHERIT } else { 0 }; syscall!( WSASocketW( family, ty, protocol, ptr::null_mut(), 0, sock::WSA_FLAG_OVERLAPPED | flags, ), PartialEq::eq, sock::INVALID_SOCKET ) } pub(crate) fn bind(socket: Socket, addr: &SockAddr) -> io::Result<()> { syscall!(bind(socket, addr.as_ptr(), addr.len()), PartialEq::ne, 0).map(|_| ()) } pub(crate) fn connect(socket: Socket, addr: &SockAddr) -> io::Result<()> { syscall!(connect(socket, addr.as_ptr(), addr.len()), PartialEq::ne, 0).map(|_| ()) } pub(crate) fn poll_connect(socket: &crate::Socket, timeout: Duration) -> io::Result<()> { let start = Instant::now(); let mut fd_array = WSAPOLLFD { fd: socket.as_raw(), events: POLLRDNORM | POLLWRNORM, revents: 0, }; loop { let elapsed = start.elapsed(); if elapsed >= timeout { return Err(io::ErrorKind::TimedOut.into()); } let timeout = (timeout - elapsed).as_millis(); let timeout = clamp(timeout, 1, c_int::max_value() as u128) as c_int; match syscall!( WSAPoll(&mut fd_array, 1, timeout), PartialEq::eq, sock::SOCKET_ERROR ) { Ok(0) => return Err(io::ErrorKind::TimedOut.into()), Ok(_) => { // Error or hang up indicates an error (or failure to connect). 
if (fd_array.revents & POLLERR) != 0 || (fd_array.revents & POLLHUP) != 0 { match socket.take_error() { Ok(Some(err)) => return Err(err), Ok(None) => { return Err(io::Error::new( io::ErrorKind::Other, "no error set after POLLHUP", )) } Err(err) => return Err(err), } } return Ok(()); } // Got interrupted, try again. Err(ref err) if err.kind() == io::ErrorKind::Interrupted => continue, Err(err) => return Err(err), } } } // TODO: use clamp from std lib, stable since 1.50. fn clamp(value: T, min: T, max: T) -> T where T: Ord, { if value <= min { min } else if value >= max { max } else { value } } pub(crate) fn listen(socket: Socket, backlog: c_int) -> io::Result<()> { syscall!(listen(socket, backlog), PartialEq::ne, 0).map(|_| ()) } pub(crate) fn accept(socket: Socket) -> io::Result<(Socket, SockAddr)> { // Safety: `accept` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| { syscall!( accept(socket, storage.cast(), len), PartialEq::eq, sock::INVALID_SOCKET ) }) } } pub(crate) fn getsockname(socket: Socket) -> io::Result { // Safety: `getsockname` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| { syscall!( getsockname(socket, storage.cast(), len), PartialEq::eq, sock::SOCKET_ERROR ) }) } .map(|(_, addr)| addr) } pub(crate) fn getpeername(socket: Socket) -> io::Result { // Safety: `getpeername` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| { syscall!( getpeername(socket, storage.cast(), len), PartialEq::eq, sock::SOCKET_ERROR ) }) } .map(|(_, addr)| addr) } pub(crate) fn try_clone(socket: Socket) -> io::Result { let mut info: MaybeUninit = MaybeUninit::uninit(); syscall!( WSADuplicateSocketW(socket, GetCurrentProcessId(), info.as_mut_ptr()), PartialEq::eq, sock::SOCKET_ERROR )?; // Safety: `WSADuplicateSocketW` intialised `info` for us. 
let mut info = unsafe { info.assume_init() }; syscall!( WSASocketW( info.iAddressFamily, info.iSocketType, info.iProtocol, &mut info, 0, sock::WSA_FLAG_OVERLAPPED | sock::WSA_FLAG_NO_HANDLE_INHERIT, ), PartialEq::eq, sock::INVALID_SOCKET ) } pub(crate) fn set_nonblocking(socket: Socket, nonblocking: bool) -> io::Result<()> { let mut nonblocking = nonblocking as u_long; ioctlsocket(socket, sock::FIONBIO, &mut nonblocking) } pub(crate) fn shutdown(socket: Socket, how: Shutdown) -> io::Result<()> { let how = match how { Shutdown::Write => SD_SEND, Shutdown::Read => SD_RECEIVE, Shutdown::Both => SD_BOTH, }; syscall!(shutdown(socket, how), PartialEq::eq, sock::SOCKET_ERROR).map(|_| ()) } pub(crate) fn recv(socket: Socket, buf: &mut [MaybeUninit], flags: c_int) -> io::Result { let res = syscall!( recv( socket, buf.as_mut_ptr().cast(), min(buf.len(), MAX_BUF_LEN) as c_int, flags, ), PartialEq::eq, sock::SOCKET_ERROR ); match res { Ok(n) => Ok(n as usize), Err(ref err) if err.raw_os_error() == Some(sock::WSAESHUTDOWN as i32) => Ok(0), Err(err) => Err(err), } } pub(crate) fn recv_vectored( socket: Socket, bufs: &mut [crate::MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags)> { let mut nread = 0; let mut flags = flags as DWORD; let res = syscall!( WSARecv( socket, bufs.as_mut_ptr().cast(), min(bufs.len(), DWORD::max_value() as usize) as DWORD, &mut nread, &mut flags, ptr::null_mut(), None, ), PartialEq::eq, sock::SOCKET_ERROR ); match res { Ok(_) => Ok((nread as usize, RecvFlags(0))), Err(ref err) if err.raw_os_error() == Some(sock::WSAESHUTDOWN as i32) => { Ok((0, RecvFlags(0))) } Err(ref err) if err.raw_os_error() == Some(sock::WSAEMSGSIZE as i32) => { Ok((nread as usize, RecvFlags(MSG_TRUNC))) } Err(err) => Err(err), } } pub(crate) fn recv_from( socket: Socket, buf: &mut [MaybeUninit], flags: c_int, ) -> io::Result<(usize, SockAddr)> { // Safety: `recvfrom` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, addrlen| { let res = syscall!( recvfrom( socket, buf.as_mut_ptr().cast(), min(buf.len(), MAX_BUF_LEN) as c_int, flags, storage.cast(), addrlen, ), PartialEq::eq, sock::SOCKET_ERROR ); match res { Ok(n) => Ok(n as usize), Err(ref err) if err.raw_os_error() == Some(sock::WSAESHUTDOWN as i32) => Ok(0), Err(err) => Err(err), } }) } } pub(crate) fn recv_from_vectored( socket: Socket, bufs: &mut [crate::MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags, SockAddr)> { // Safety: `recvfrom` initialises the `SockAddr` for us. 
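    // (The actual call below is `WSARecvFrom`, which fills the address
    // storage and writes the address length through `addrlen`.)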
unsafe { SockAddr::init(|storage, addrlen| { let mut nread = 0; let mut flags = flags as DWORD; let res = syscall!( WSARecvFrom( socket, bufs.as_mut_ptr().cast(), min(bufs.len(), DWORD::max_value() as usize) as DWORD, &mut nread, &mut flags, storage.cast(), addrlen, ptr::null_mut(), None, ), PartialEq::eq, sock::SOCKET_ERROR ); match res { Ok(_) => Ok((nread as usize, RecvFlags(0))), Err(ref err) if err.raw_os_error() == Some(sock::WSAESHUTDOWN as i32) => { Ok((nread as usize, RecvFlags(0))) } Err(ref err) if err.raw_os_error() == Some(sock::WSAEMSGSIZE as i32) => { Ok((nread as usize, RecvFlags(MSG_TRUNC))) } Err(err) => Err(err), } }) } .map(|((n, recv_flags), addr)| (n, recv_flags, addr)) } pub(crate) fn send(socket: Socket, buf: &[u8], flags: c_int) -> io::Result { syscall!( send( socket, buf.as_ptr().cast(), min(buf.len(), MAX_BUF_LEN) as c_int, flags, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|n| n as usize) } pub(crate) fn send_vectored( socket: Socket, bufs: &[IoSlice<'_>], flags: c_int, ) -> io::Result { let mut nsent = 0; syscall!( WSASend( socket, // FIXME: From the `WSASend` docs [1]: // > For a Winsock application, once the WSASend function is called, // > the system owns these buffers and the application may not // > access them. // // So what we're doing is actually UB as `bufs` needs to be `&mut // [IoSlice<'_>]`. // // Tracking issue: https://github.com/rust-lang/socket2-rs/issues/129. // // NOTE: `send_to_vectored` has the same problem. // // [1] https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsasend bufs.as_ptr() as *mut _, min(bufs.len(), DWORD::max_value() as usize) as DWORD, &mut nsent, flags as DWORD, std::ptr::null_mut(), None, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| nsent as usize) } pub(crate) fn send_to( socket: Socket, buf: &[u8], addr: &SockAddr, flags: c_int, ) -> io::Result { syscall!( sendto( socket, buf.as_ptr().cast(), min(buf.len(), MAX_BUF_LEN) as c_int, flags, addr.as_ptr(), addr.len(), ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|n| n as usize) } pub(crate) fn send_to_vectored( socket: Socket, bufs: &[IoSlice<'_>], addr: &SockAddr, flags: c_int, ) -> io::Result { let mut nsent = 0; syscall!( WSASendTo( socket, // FIXME: Same problem as in `send_vectored`. bufs.as_ptr() as *mut _, bufs.len().min(DWORD::MAX as usize) as DWORD, &mut nsent, flags as DWORD, addr.as_ptr(), addr.len(), ptr::null_mut(), None, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| nsent as usize) } /// Wrapper around `getsockopt` to deal with platform specific timeouts. pub(crate) fn timeout_opt(fd: Socket, lvl: c_int, name: c_int) -> io::Result> { unsafe { getsockopt(fd, lvl, name).map(from_ms) } } fn from_ms(duration: DWORD) -> Option { if duration == 0 { None } else { let secs = duration / 1000; let nsec = (duration % 1000) * 1000000; Some(Duration::new(secs as u64, nsec as u32)) } } /// Wrapper around `setsockopt` to deal with platform specific timeouts. pub(crate) fn set_timeout_opt( fd: Socket, level: c_int, optname: c_int, duration: Option, ) -> io::Result<()> { let duration = into_ms(duration); unsafe { setsockopt(fd, level, optname, duration) } } fn into_ms(duration: Option) -> DWORD { // Note that a duration is a (u64, u32) (seconds, nanoseconds) pair, and the // timeouts in windows APIs are typically u32 milliseconds. To translate, we // have two pieces to take care of: // // * Nanosecond precision is rounded up // * Greater than u32::MAX milliseconds (50 days) is rounded up to // INFINITE (never time out). 
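    // A `None` duration becomes 0, which for `SO_RCVTIMEO`/`SO_SNDTIMEO`
    // means "no timeout" (block indefinitely).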
duration .map(|duration| min(duration.as_millis(), INFINITE as u128) as DWORD) .unwrap_or(0) } pub(crate) fn set_tcp_keepalive(socket: Socket, keepalive: &TcpKeepalive) -> io::Result<()> { let mut keepalive = tcp_keepalive { onoff: 1, keepalivetime: into_ms(keepalive.time), keepaliveinterval: into_ms(keepalive.interval), }; let mut out = 0; syscall!( WSAIoctl( socket, SIO_KEEPALIVE_VALS, &mut keepalive as *mut _ as *mut _, size_of::() as _, ptr::null_mut(), 0, &mut out, ptr::null_mut(), None, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| ()) } /// Caller must ensure `T` is the correct type for `level` and `optname`. pub(crate) unsafe fn getsockopt(socket: Socket, level: c_int, optname: c_int) -> io::Result { let mut optval: MaybeUninit = MaybeUninit::uninit(); let mut optlen = mem::size_of::() as c_int; syscall!( getsockopt( socket, level, optname, optval.as_mut_ptr().cast(), &mut optlen, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| { debug_assert_eq!(optlen as usize, mem::size_of::()); // Safety: `getsockopt` initialised `optval` for us. optval.assume_init() }) } /// Caller must ensure `T` is the correct type for `level` and `optname`. pub(crate) unsafe fn setsockopt( socket: Socket, level: c_int, optname: c_int, optval: T, ) -> io::Result<()> { syscall!( setsockopt( socket, level, optname, (&optval as *const T).cast(), mem::size_of::() as c_int, ), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| ()) } fn ioctlsocket(socket: Socket, cmd: c_long, payload: &mut u_long) -> io::Result<()> { syscall!( ioctlsocket(socket, cmd, payload), PartialEq::eq, sock::SOCKET_ERROR ) .map(|_| ()) } pub(crate) fn to_in_addr(addr: &Ipv4Addr) -> IN_ADDR { let mut s_un: in_addr_S_un = unsafe { mem::zeroed() }; // `S_un` is stored as BE on all machines, and the array is in BE order. So // the native endian conversion method is used so that it's never swapped. unsafe { *(s_un.S_addr_mut()) = u32::from_ne_bytes(addr.octets()) }; IN_ADDR { S_un: s_un } } pub(crate) fn from_in_addr(in_addr: IN_ADDR) -> Ipv4Addr { Ipv4Addr::from(unsafe { *in_addr.S_un.S_addr() }.to_ne_bytes()) } pub(crate) fn to_in6_addr(addr: &Ipv6Addr) -> in6_addr { let mut ret_addr: in6_addr_u = unsafe { mem::zeroed() }; unsafe { *(ret_addr.Byte_mut()) = addr.octets() }; let mut ret: in6_addr = unsafe { mem::zeroed() }; ret.u = ret_addr; ret } pub(crate) fn from_in6_addr(addr: in6_addr) -> Ipv6Addr { Ipv6Addr::from(*unsafe { addr.u.Byte() }) } pub(crate) fn to_mreqn( multiaddr: &Ipv4Addr, interface: &crate::socket::InterfaceIndexOrAddress, ) -> IpMreq { IpMreq { imr_multiaddr: to_in_addr(multiaddr), // Per https://docs.microsoft.com/en-us/windows/win32/api/ws2ipdef/ns-ws2ipdef-ip_mreq#members: // // imr_interface // // The local IPv4 address of the interface or the interface index on // which the multicast group should be joined or dropped. This value is // in network byte order. If this member specifies an IPv4 address of // 0.0.0.0, the default IPv4 multicast interface is used. // // To use an interface index of 1 would be the same as an IP address of // 0.0.0.1. imr_interface: match interface { crate::socket::InterfaceIndexOrAddress::Index(interface) => { to_in_addr(&(*interface).into()) } crate::socket::InterfaceIndexOrAddress::Address(interface) => to_in_addr(interface), }, } } /// Windows only API. impl crate::Socket { /// Sets `HANDLE_FLAG_INHERIT` using `SetHandleInformation`. 
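    ///
    /// When `no_inherit` is `true` the `HANDLE_FLAG_INHERIT` flag is cleared,
    /// so the socket handle is not inherited by child processes; when it is
    /// `false` the flag is set again.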
#[cfg(feature = "all")] #[cfg_attr(docsrs, doc(cfg(all(windows, feature = "all"))))] pub fn set_no_inherit(&self, no_inherit: bool) -> io::Result<()> { self._set_no_inherit(no_inherit) } pub(crate) fn _set_no_inherit(&self, no_inherit: bool) -> io::Result<()> { // NOTE: can't use `syscall!` because it expects the function in the // `sock::` path. let res = unsafe { SetHandleInformation( self.as_raw() as HANDLE, winbase::HANDLE_FLAG_INHERIT, !no_inherit as _, ) }; if res == 0 { // Zero means error. Err(io::Error::last_os_error()) } else { Ok(()) } } } impl AsRawSocket for crate::Socket { fn as_raw_socket(&self) -> RawSocket { self.as_raw() as RawSocket } } impl IntoRawSocket for crate::Socket { fn into_raw_socket(self) -> RawSocket { self.into_raw() as RawSocket } } impl FromRawSocket for crate::Socket { unsafe fn from_raw_socket(socket: RawSocket) -> crate::Socket { crate::Socket::from_raw(socket as Socket) } } #[test] fn in_addr_convertion() { let ip = Ipv4Addr::new(127, 0, 0, 1); let raw = to_in_addr(&ip); assert_eq!(unsafe { *raw.S_un.S_addr() }, 127 << 0 | 1 << 24); assert_eq!(from_in_addr(raw), ip); let ip = Ipv4Addr::new(127, 34, 4, 12); let raw = to_in_addr(&ip); assert_eq!( unsafe { *raw.S_un.S_addr() }, 127 << 0 | 34 << 8 | 4 << 16 | 12 << 24 ); assert_eq!(from_in_addr(raw), ip); } #[test] fn in6_addr_convertion() { let ip = Ipv6Addr::new(0x2000, 1, 2, 3, 4, 5, 6, 7); let raw = to_in6_addr(&ip); let want = [ 0x2000u16.to_be(), 1u16.to_be(), 2u16.to_be(), 3u16.to_be(), 4u16.to_be(), 5u16.to_be(), 6u16.to_be(), 7u16.to_be(), ]; assert_eq!(unsafe { *raw.u.Word() }, want); assert_eq!(from_in6_addr(raw), ip); } vendor/socket2/src/sys/unix.rs0000664000175000017500000020071014172417313017162 0ustar mwhudsonmwhudson// Copyright 2015 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::cmp::min; #[cfg(not(target_os = "redox"))] use std::io::IoSlice; use std::marker::PhantomData; use std::mem::{self, size_of, MaybeUninit}; use std::net::Shutdown; use std::net::{Ipv4Addr, Ipv6Addr}; #[cfg(all(feature = "all", target_vendor = "apple"))] use std::num::NonZeroU32; #[cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "linux", target_vendor = "apple", ) ))] use std::num::NonZeroUsize; #[cfg(feature = "all")] use std::os::unix::ffi::OsStrExt; #[cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "linux", target_vendor = "apple", ) ))] use std::os::unix::io::RawFd; use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd}; #[cfg(feature = "all")] use std::os::unix::net::{UnixDatagram, UnixListener, UnixStream}; #[cfg(feature = "all")] use std::path::Path; #[cfg(not(all(target_os = "redox", not(feature = "all"))))] use std::ptr; use std::time::{Duration, Instant}; use std::{io, slice}; #[cfg(not(target_vendor = "apple"))] use libc::ssize_t; use libc::{c_void, in6_addr, in_addr}; #[cfg(not(target_os = "redox"))] use crate::RecvFlags; use crate::{Domain, Protocol, SockAddr, TcpKeepalive, Type}; pub(crate) use libc::c_int; // Used in `Domain`. pub(crate) use libc::{AF_INET, AF_INET6}; // Used in `Type`. #[cfg(all(feature = "all", not(target_os = "redox")))] pub(crate) use libc::SOCK_RAW; #[cfg(feature = "all")] pub(crate) use libc::SOCK_SEQPACKET; pub(crate) use libc::{SOCK_DGRAM, SOCK_STREAM}; // Used in `Protocol`. 
pub(crate) use libc::{IPPROTO_ICMP, IPPROTO_ICMPV6, IPPROTO_TCP, IPPROTO_UDP}; // Used in `SockAddr`. pub(crate) use libc::{ sa_family_t, sockaddr, sockaddr_in, sockaddr_in6, sockaddr_storage, socklen_t, }; // Used in `RecvFlags`. #[cfg(not(target_os = "redox"))] pub(crate) use libc::{MSG_TRUNC, SO_OOBINLINE}; // Used in `Socket`. #[cfg(all(feature = "all", not(target_os = "redox")))] pub(crate) use libc::IP_HDRINCL; #[cfg(not(any( target_os = "fuschia", target_os = "redox", target_os = "solaris", target_os = "illumos", )))] pub(crate) use libc::IP_TOS; #[cfg(not(target_vendor = "apple"))] pub(crate) use libc::SO_LINGER; #[cfg(target_vendor = "apple")] pub(crate) use libc::SO_LINGER_SEC as SO_LINGER; pub(crate) use libc::{ ip_mreq as IpMreq, ipv6_mreq as Ipv6Mreq, linger, IPPROTO_IP, IPPROTO_IPV6, IPV6_MULTICAST_HOPS, IPV6_MULTICAST_IF, IPV6_MULTICAST_LOOP, IPV6_UNICAST_HOPS, IPV6_V6ONLY, IP_ADD_MEMBERSHIP, IP_DROP_MEMBERSHIP, IP_MULTICAST_IF, IP_MULTICAST_LOOP, IP_MULTICAST_TTL, IP_TTL, MSG_OOB, MSG_PEEK, SOL_SOCKET, SO_BROADCAST, SO_ERROR, SO_KEEPALIVE, SO_RCVBUF, SO_RCVTIMEO, SO_REUSEADDR, SO_SNDBUF, SO_SNDTIMEO, SO_TYPE, TCP_NODELAY, }; #[cfg(not(any( target_os = "dragonfly", target_os = "freebsd", target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "openbsd", target_os = "solaris", target_vendor = "apple" )))] pub(crate) use libc::{IPV6_ADD_MEMBERSHIP, IPV6_DROP_MEMBERSHIP}; #[cfg(any( target_os = "dragonfly", target_os = "freebsd", target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "openbsd", target_os = "solaris", target_vendor = "apple", ))] pub(crate) use libc::{ IPV6_JOIN_GROUP as IPV6_ADD_MEMBERSHIP, IPV6_LEAVE_GROUP as IPV6_DROP_MEMBERSHIP, }; #[cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))] pub(crate) use libc::{TCP_KEEPCNT, TCP_KEEPINTVL}; // See this type in the Windows file. pub(crate) type Bool = c_int; #[cfg(target_vendor = "apple")] use libc::TCP_KEEPALIVE as KEEPALIVE_TIME; #[cfg(not(any(target_vendor = "apple", target_os = "haiku", target_os = "openbsd")))] use libc::TCP_KEEPIDLE as KEEPALIVE_TIME; /// Helper macro to execute a system call that returns an `io::Result`. macro_rules! syscall { ($fn: ident ( $($arg: expr),* $(,)* ) ) => {{ #[allow(unused_unsafe)] let res = unsafe { libc::$fn($($arg, )*) }; if res == -1 { Err(std::io::Error::last_os_error()) } else { Ok(res) } }}; } /// Maximum size of a buffer passed to system call like `recv` and `send`. #[cfg(not(target_vendor = "apple"))] const MAX_BUF_LEN: usize = ::max_value() as usize; // The maximum read limit on most posix-like systems is `SSIZE_MAX`, with the // man page quoting that if the count of bytes to read is greater than // `SSIZE_MAX` the result is "unspecified". // // On macOS, however, apparently the 64-bit libc is either buggy or // intentionally showing odd behavior by rejecting any read with a size larger // than or equal to INT_MAX. To handle both of these the read size is capped on // both platforms. 
#[cfg(target_vendor = "apple")] const MAX_BUF_LEN: usize = ::max_value() as usize - 1; #[cfg(any( all( target_os = "linux", any( target_env = "gnu", all(target_env = "uclibc", target_pointer_width = "64") ) ), target_os = "android", ))] type IovLen = usize; #[cfg(any( all( target_os = "linux", any( target_env = "musl", all(target_env = "uclibc", target_pointer_width = "32") ) ), target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "openbsd", target_os = "solaris", target_vendor = "apple", ))] type IovLen = c_int; /// Unix only API. impl Domain { /// Domain for Unix socket communication, corresponding to `AF_UNIX`. #[cfg_attr(docsrs, doc(cfg(unix)))] pub const UNIX: Domain = Domain(libc::AF_UNIX); /// Domain for low-level packet interface, corresponding to `AF_PACKET`. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub const PACKET: Domain = Domain(libc::AF_PACKET); /// Domain for low-level VSOCK interface, corresponding to `AF_VSOCK`. #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] #[cfg_attr( docsrs, doc(cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))) )] pub const VSOCK: Domain = Domain(libc::AF_VSOCK); } impl_debug!( Domain, libc::AF_INET, libc::AF_INET6, libc::AF_UNIX, #[cfg(any(target_os = "android", target_os = "fuchsia", target_os = "linux"))] #[cfg_attr( docsrs, doc(cfg(any(target_os = "android", target_os = "fuchsia", target_os = "linux"))) )] libc::AF_PACKET, #[cfg(any(target_os = "android", target_os = "linux"))] #[cfg_attr(docsrs, doc(cfg(any(target_os = "android", target_os = "linux"))))] libc::AF_VSOCK, libc::AF_UNSPEC, // = 0. ); /// Unix only API. impl Type { /// Set `SOCK_NONBLOCK` on the `Type`. #[cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))) )] pub const fn nonblocking(self) -> Type { Type(self.0 | libc::SOCK_NONBLOCK) } /// Set `SOCK_CLOEXEC` on the `Type`. 
#[cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))) )] pub const fn cloexec(self) -> Type { self._cloexec() } #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ))] pub(crate) const fn _cloexec(self) -> Type { Type(self.0 | libc::SOCK_CLOEXEC) } } impl_debug!( Type, libc::SOCK_STREAM, libc::SOCK_DGRAM, #[cfg(not(target_os = "redox"))] libc::SOCK_RAW, #[cfg(not(any(target_os = "redox", target_os = "haiku")))] libc::SOCK_RDM, libc::SOCK_SEQPACKET, /* TODO: add these optional bit OR-ed flags: #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ))] libc::SOCK_NONBLOCK, #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ))] libc::SOCK_CLOEXEC, */ ); impl_debug!( Protocol, libc::IPPROTO_ICMP, libc::IPPROTO_ICMPV6, libc::IPPROTO_TCP, libc::IPPROTO_UDP, ); /// Unix-only API. #[cfg(not(target_os = "redox"))] impl RecvFlags { /// Check if the message terminates a record. /// /// Not all socket types support the notion of records. /// For socket types that do support it (such as [`SEQPACKET`][Type::SEQPACKET]), /// a record is terminated by sending a message with the end-of-record flag set. /// /// On Unix this corresponds to the MSG_EOR flag. pub const fn is_end_of_record(self) -> bool { self.0 & libc::MSG_EOR != 0 } /// Check if the message contains out-of-band data. /// /// This is useful for protocols where you receive out-of-band data /// mixed in with the normal data stream. /// /// On Unix this corresponds to the MSG_OOB flag. pub const fn is_out_of_band(self) -> bool { self.0 & libc::MSG_OOB != 0 } } #[cfg(not(target_os = "redox"))] impl std::fmt::Debug for RecvFlags { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("RecvFlags") .field("is_end_of_record", &self.is_end_of_record()) .field("is_out_of_band", &self.is_out_of_band()) .field("is_truncated", &self.is_truncated()) .finish() } } #[repr(transparent)] pub struct MaybeUninitSlice<'a> { vec: libc::iovec, _lifetime: PhantomData<&'a mut [MaybeUninit]>, } unsafe impl<'a> Send for MaybeUninitSlice<'a> {} unsafe impl<'a> Sync for MaybeUninitSlice<'a> {} impl<'a> MaybeUninitSlice<'a> { pub(crate) fn new(buf: &'a mut [MaybeUninit]) -> MaybeUninitSlice<'a> { MaybeUninitSlice { vec: libc::iovec { iov_base: buf.as_mut_ptr().cast(), iov_len: buf.len(), }, _lifetime: PhantomData, } } pub(crate) fn as_slice(&self) -> &[MaybeUninit] { unsafe { slice::from_raw_parts(self.vec.iov_base.cast(), self.vec.iov_len) } } pub(crate) fn as_mut_slice(&mut self) -> &mut [MaybeUninit] { unsafe { slice::from_raw_parts_mut(self.vec.iov_base.cast(), self.vec.iov_len) } } } /// Unix only API. impl SockAddr { /// Constructs a `SockAddr` with the family `AF_UNIX` and the provided path. 
/// /// # Failure /// /// Returns an error if the path is longer than `SUN_LEN`. #[cfg(feature = "all")] #[cfg_attr(docsrs, doc(cfg(all(unix, feature = "all"))))] #[allow(unused_unsafe)] // TODO: replace with `unsafe_op_in_unsafe_fn` once stable. pub fn unix<P>
(path: P) -> io::Result where P: AsRef, { unsafe { SockAddr::init(|storage, len| { // Safety: `SockAddr::init` zeros the address, which is a valid // representation. let storage: &mut libc::sockaddr_un = unsafe { &mut *storage.cast() }; let len: &mut socklen_t = unsafe { &mut *len }; let bytes = path.as_ref().as_os_str().as_bytes(); let too_long = match bytes.first() { None => false, // linux abstract namespaces aren't null-terminated Some(&0) => bytes.len() > storage.sun_path.len(), Some(_) => bytes.len() >= storage.sun_path.len(), }; if too_long { return Err(io::Error::new( io::ErrorKind::InvalidInput, "path must be shorter than SUN_LEN", )); } storage.sun_family = libc::AF_UNIX as sa_family_t; // Safety: `bytes` and `addr.sun_path` are not overlapping and // both point to valid memory. // `SockAddr::init` zeroes the memory, so the path is already // null terminated. unsafe { ptr::copy_nonoverlapping( bytes.as_ptr(), storage.sun_path.as_mut_ptr() as *mut u8, bytes.len(), ) }; let base = storage as *const _ as usize; let path = &storage.sun_path as *const _ as usize; let sun_path_offset = path - base; let length = sun_path_offset + bytes.len() + match bytes.first() { Some(&0) | None => 0, Some(_) => 1, }; *len = length as socklen_t; Ok(()) }) } .map(|(_, addr)| addr) } } impl SockAddr { /// Constructs a `SockAddr` with the family `AF_VSOCK` and the provided CID/port. /// /// # Errors /// /// This function can never fail. In a future version of this library it will be made /// infallible. #[allow(unused_unsafe)] // TODO: replace with `unsafe_op_in_unsafe_fn` once stable. #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] #[cfg_attr( docsrs, doc(cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))) )] pub fn vsock(cid: u32, port: u32) -> io::Result { unsafe { SockAddr::init(|storage, len| { // Safety: `SockAddr::init` zeros the address, which is a valid // representation. let storage: &mut libc::sockaddr_vm = unsafe { &mut *storage.cast() }; let len: &mut socklen_t = unsafe { &mut *len }; storage.svm_family = libc::AF_VSOCK as sa_family_t; storage.svm_cid = cid; storage.svm_port = port; *len = mem::size_of::() as socklen_t; Ok(()) }) } .map(|(_, addr)| addr) } /// Returns this address VSOCK CID/port if it is in the `AF_VSOCK` family, /// otherwise return `None`. #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] #[cfg_attr( docsrs, doc(cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))) )] pub fn vsock_address(&self) -> Option<(u32, u32)> { if self.family() == libc::AF_VSOCK as sa_family_t { // Safety: if the ss_family field is AF_VSOCK then storage must be a sockaddr_vm. 
let addr = unsafe { &*(self.as_ptr() as *const libc::sockaddr_vm) }; Some((addr.svm_cid, addr.svm_port)) } else { None } } } pub(crate) type Socket = c_int; pub(crate) unsafe fn socket_from_raw(socket: Socket) -> crate::socket::Inner { crate::socket::Inner::from_raw_fd(socket) } pub(crate) fn socket_as_raw(socket: &crate::socket::Inner) -> Socket { socket.as_raw_fd() } pub(crate) fn socket_into_raw(socket: crate::socket::Inner) -> Socket { socket.into_raw_fd() } pub(crate) fn socket(family: c_int, ty: c_int, protocol: c_int) -> io::Result { syscall!(socket(family, ty, protocol)) } #[cfg(feature = "all")] pub(crate) fn socketpair(family: c_int, ty: c_int, protocol: c_int) -> io::Result<[Socket; 2]> { let mut fds = [0, 0]; syscall!(socketpair(family, ty, protocol, fds.as_mut_ptr())).map(|_| fds) } pub(crate) fn bind(fd: Socket, addr: &SockAddr) -> io::Result<()> { syscall!(bind(fd, addr.as_ptr(), addr.len() as _)).map(|_| ()) } pub(crate) fn connect(fd: Socket, addr: &SockAddr) -> io::Result<()> { syscall!(connect(fd, addr.as_ptr(), addr.len())).map(|_| ()) } pub(crate) fn poll_connect(socket: &crate::Socket, timeout: Duration) -> io::Result<()> { let start = Instant::now(); let mut pollfd = libc::pollfd { fd: socket.as_raw(), events: libc::POLLIN | libc::POLLOUT, revents: 0, }; loop { let elapsed = start.elapsed(); if elapsed >= timeout { return Err(io::ErrorKind::TimedOut.into()); } let timeout = (timeout - elapsed).as_millis(); let timeout = clamp(timeout, 1, c_int::max_value() as u128) as c_int; match syscall!(poll(&mut pollfd, 1, timeout)) { Ok(0) => return Err(io::ErrorKind::TimedOut.into()), Ok(_) => { // Error or hang up indicates an error (or failure to connect). if (pollfd.revents & libc::POLLHUP) != 0 || (pollfd.revents & libc::POLLERR) != 0 { match socket.take_error() { Ok(Some(err)) => return Err(err), Ok(None) => { return Err(io::Error::new( io::ErrorKind::Other, "no error set after POLLHUP", )) } Err(err) => return Err(err), } } return Ok(()); } // Got interrupted, try again. Err(ref err) if err.kind() == io::ErrorKind::Interrupted => continue, Err(err) => return Err(err), } } } // TODO: use clamp from std lib, stable since 1.50. fn clamp(value: T, min: T, max: T) -> T where T: Ord, { if value <= min { min } else if value >= max { max } else { value } } pub(crate) fn listen(fd: Socket, backlog: c_int) -> io::Result<()> { syscall!(listen(fd, backlog)).map(|_| ()) } pub(crate) fn accept(fd: Socket) -> io::Result<(Socket, SockAddr)> { // Safety: `accept` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| syscall!(accept(fd, storage.cast(), len))) } } pub(crate) fn getsockname(fd: Socket) -> io::Result { // Safety: `accept` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| syscall!(getsockname(fd, storage.cast(), len))) } .map(|(_, addr)| addr) } pub(crate) fn getpeername(fd: Socket) -> io::Result { // Safety: `accept` initialises the `SockAddr` for us. 
unsafe { SockAddr::init(|storage, len| syscall!(getpeername(fd, storage.cast(), len))) } .map(|(_, addr)| addr) } pub(crate) fn try_clone(fd: Socket) -> io::Result { syscall!(fcntl(fd, libc::F_DUPFD_CLOEXEC, 0)) } pub(crate) fn set_nonblocking(fd: Socket, nonblocking: bool) -> io::Result<()> { if nonblocking { fcntl_add(fd, libc::F_GETFL, libc::F_SETFL, libc::O_NONBLOCK) } else { fcntl_remove(fd, libc::F_GETFL, libc::F_SETFL, libc::O_NONBLOCK) } } pub(crate) fn shutdown(fd: Socket, how: Shutdown) -> io::Result<()> { let how = match how { Shutdown::Write => libc::SHUT_WR, Shutdown::Read => libc::SHUT_RD, Shutdown::Both => libc::SHUT_RDWR, }; syscall!(shutdown(fd, how)).map(|_| ()) } pub(crate) fn recv(fd: Socket, buf: &mut [MaybeUninit], flags: c_int) -> io::Result { syscall!(recv( fd, buf.as_mut_ptr().cast(), min(buf.len(), MAX_BUF_LEN), flags, )) .map(|n| n as usize) } pub(crate) fn recv_from( fd: Socket, buf: &mut [MaybeUninit], flags: c_int, ) -> io::Result<(usize, SockAddr)> { // Safety: `recvfrom` initialises the `SockAddr` for us. unsafe { SockAddr::init(|addr, addrlen| { syscall!(recvfrom( fd, buf.as_mut_ptr().cast(), min(buf.len(), MAX_BUF_LEN), flags, addr.cast(), addrlen )) .map(|n| n as usize) }) } } #[cfg(not(target_os = "redox"))] pub(crate) fn recv_vectored( fd: Socket, bufs: &mut [crate::MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags)> { recvmsg(fd, ptr::null_mut(), bufs, flags).map(|(n, _, recv_flags)| (n, recv_flags)) } #[cfg(not(target_os = "redox"))] pub(crate) fn recv_from_vectored( fd: Socket, bufs: &mut [crate::MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags, SockAddr)> { // Safety: `recvmsg` initialises the address storage and we set the length // manually. unsafe { SockAddr::init(|storage, len| { recvmsg(fd, storage, bufs, flags).map(|(n, addrlen, recv_flags)| { // Set the correct address length. *len = addrlen; (n, recv_flags) }) }) } .map(|((n, recv_flags), addr)| (n, recv_flags, addr)) } /// Returns the (bytes received, sending address len, `RecvFlags`). #[cfg(not(target_os = "redox"))] fn recvmsg( fd: Socket, msg_name: *mut sockaddr_storage, bufs: &mut [crate::MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, libc::socklen_t, RecvFlags)> { let msg_namelen = if msg_name.is_null() { 0 } else { size_of::() as libc::socklen_t }; // libc::msghdr contains unexported padding fields on Fuchsia. 
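    // Zero-initialising the whole struct (instead of using a struct literal,
    // which cannot name those private fields) keeps this portable.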
let mut msg: libc::msghdr = unsafe { mem::zeroed() }; msg.msg_name = msg_name.cast(); msg.msg_namelen = msg_namelen; msg.msg_iov = bufs.as_mut_ptr().cast(); msg.msg_iovlen = min(bufs.len(), IovLen::MAX as usize) as IovLen; syscall!(recvmsg(fd, &mut msg, flags)) .map(|n| (n as usize, msg.msg_namelen, RecvFlags(msg.msg_flags))) } pub(crate) fn send(fd: Socket, buf: &[u8], flags: c_int) -> io::Result { syscall!(send( fd, buf.as_ptr().cast(), min(buf.len(), MAX_BUF_LEN), flags, )) .map(|n| n as usize) } #[cfg(not(target_os = "redox"))] pub(crate) fn send_vectored(fd: Socket, bufs: &[IoSlice<'_>], flags: c_int) -> io::Result { sendmsg(fd, ptr::null(), 0, bufs, flags) } pub(crate) fn send_to(fd: Socket, buf: &[u8], addr: &SockAddr, flags: c_int) -> io::Result { syscall!(sendto( fd, buf.as_ptr().cast(), min(buf.len(), MAX_BUF_LEN), flags, addr.as_ptr(), addr.len(), )) .map(|n| n as usize) } #[cfg(not(target_os = "redox"))] pub(crate) fn send_to_vectored( fd: Socket, bufs: &[IoSlice<'_>], addr: &SockAddr, flags: c_int, ) -> io::Result { sendmsg(fd, addr.as_storage_ptr(), addr.len(), bufs, flags) } /// Returns the (bytes received, sending address len, `RecvFlags`). #[cfg(not(target_os = "redox"))] fn sendmsg( fd: Socket, msg_name: *const sockaddr_storage, msg_namelen: socklen_t, bufs: &[IoSlice<'_>], flags: c_int, ) -> io::Result { // libc::msghdr contains unexported padding fields on Fuchsia. let mut msg: libc::msghdr = unsafe { mem::zeroed() }; // Safety: we're creating a `*mut` pointer from a reference, which is UB // once actually used. However the OS should not write to it in the // `sendmsg` system call. msg.msg_name = (msg_name as *mut sockaddr_storage).cast(); msg.msg_namelen = msg_namelen; // Safety: Same as above about `*const` -> `*mut`. msg.msg_iov = bufs.as_ptr() as *mut _; msg.msg_iovlen = min(bufs.len(), IovLen::MAX as usize) as IovLen; syscall!(sendmsg(fd, &msg, flags)).map(|n| n as usize) } /// Wrapper around `getsockopt` to deal with platform specific timeouts. pub(crate) fn timeout_opt(fd: Socket, opt: c_int, val: c_int) -> io::Result> { unsafe { getsockopt(fd, opt, val).map(from_timeval) } } fn from_timeval(duration: libc::timeval) -> Option { if duration.tv_sec == 0 && duration.tv_usec == 0 { None } else { let sec = duration.tv_sec as u64; let nsec = (duration.tv_usec as u32) * 1000; Some(Duration::new(sec, nsec)) } } /// Wrapper around `setsockopt` to deal with platform specific timeouts. 
pub(crate) fn set_timeout_opt( fd: Socket, opt: c_int, val: c_int, duration: Option, ) -> io::Result<()> { let duration = into_timeval(duration); unsafe { setsockopt(fd, opt, val, duration) } } fn into_timeval(duration: Option) -> libc::timeval { match duration { // https://github.com/rust-lang/libc/issues/1848 #[cfg_attr(target_env = "musl", allow(deprecated))] Some(duration) => libc::timeval { tv_sec: min(duration.as_secs(), libc::time_t::max_value() as u64) as libc::time_t, tv_usec: duration.subsec_micros() as libc::suseconds_t, }, None => libc::timeval { tv_sec: 0, tv_usec: 0, }, } } #[cfg(feature = "all")] #[cfg(not(any(target_os = "haiku", target_os = "openbsd")))] pub(crate) fn keepalive_time(fd: Socket) -> io::Result { unsafe { getsockopt::(fd, IPPROTO_TCP, KEEPALIVE_TIME) .map(|secs| Duration::from_secs(secs as u64)) } } #[allow(unused_variables)] pub(crate) fn set_tcp_keepalive(fd: Socket, keepalive: &TcpKeepalive) -> io::Result<()> { #[cfg(not(any(target_os = "haiku", target_os = "openbsd")))] if let Some(time) = keepalive.time { let secs = into_secs(time); unsafe { setsockopt(fd, libc::IPPROTO_TCP, KEEPALIVE_TIME, secs)? } } #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ))] { if let Some(interval) = keepalive.interval { let secs = into_secs(interval); unsafe { setsockopt(fd, libc::IPPROTO_TCP, libc::TCP_KEEPINTVL, secs)? } } if let Some(retries) = keepalive.retries { unsafe { setsockopt(fd, libc::IPPROTO_TCP, libc::TCP_KEEPCNT, retries as c_int)? } } } Ok(()) } #[cfg(not(any(target_os = "haiku", target_os = "openbsd")))] fn into_secs(duration: Duration) -> c_int { min(duration.as_secs(), c_int::max_value() as u64) as c_int } /// Add `flag` to the current set flags of `F_GETFD`. fn fcntl_add(fd: Socket, get_cmd: c_int, set_cmd: c_int, flag: c_int) -> io::Result<()> { let previous = syscall!(fcntl(fd, get_cmd))?; let new = previous | flag; if new != previous { syscall!(fcntl(fd, set_cmd, new)).map(|_| ()) } else { // Flag was already set. Ok(()) } } /// Remove `flag` to the current set flags of `F_GETFD`. fn fcntl_remove(fd: Socket, get_cmd: c_int, set_cmd: c_int, flag: c_int) -> io::Result<()> { let previous = syscall!(fcntl(fd, get_cmd))?; let new = previous & !flag; if new != previous { syscall!(fcntl(fd, set_cmd, new)).map(|_| ()) } else { // Flag was already set. Ok(()) } } /// Caller must ensure `T` is the correct type for `opt` and `val`. pub(crate) unsafe fn getsockopt(fd: Socket, opt: c_int, val: c_int) -> io::Result { let mut payload: MaybeUninit = MaybeUninit::uninit(); let mut len = size_of::() as libc::socklen_t; syscall!(getsockopt( fd, opt, val, payload.as_mut_ptr().cast(), &mut len, )) .map(|_| { debug_assert_eq!(len as usize, size_of::()); // Safety: `getsockopt` initialised `payload` for us. payload.assume_init() }) } /// Caller must ensure `T` is the correct type for `opt` and `val`. pub(crate) unsafe fn setsockopt( fd: Socket, opt: c_int, val: c_int, payload: T, ) -> io::Result<()> { let payload = &payload as *const T as *const c_void; syscall!(setsockopt( fd, opt, val, payload, mem::size_of::() as libc::socklen_t, )) .map(|_| ()) } pub(crate) fn to_in_addr(addr: &Ipv4Addr) -> in_addr { // `s_addr` is stored as BE on all machines, and the array is in BE order. // So the native endian conversion method is used so that it's never // swapped. 
in_addr { s_addr: u32::from_ne_bytes(addr.octets()), } } pub(crate) fn from_in_addr(in_addr: in_addr) -> Ipv4Addr { Ipv4Addr::from(in_addr.s_addr.to_ne_bytes()) } pub(crate) fn to_in6_addr(addr: &Ipv6Addr) -> in6_addr { in6_addr { s6_addr: addr.octets(), } } pub(crate) fn from_in6_addr(addr: in6_addr) -> Ipv6Addr { Ipv6Addr::from(addr.s6_addr) } #[cfg(not(any( target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "redox", target_os = "solaris", )))] pub(crate) fn to_mreqn( multiaddr: &Ipv4Addr, interface: &crate::socket::InterfaceIndexOrAddress, ) -> libc::ip_mreqn { match interface { crate::socket::InterfaceIndexOrAddress::Index(interface) => libc::ip_mreqn { imr_multiaddr: to_in_addr(multiaddr), imr_address: to_in_addr(&Ipv4Addr::UNSPECIFIED), imr_ifindex: *interface as _, }, crate::socket::InterfaceIndexOrAddress::Address(interface) => libc::ip_mreqn { imr_multiaddr: to_in_addr(multiaddr), imr_address: to_in_addr(interface), imr_ifindex: 0, }, } } /// Unix only API. impl crate::Socket { /// Accept a new incoming connection from this listener. /// /// This function directly corresponds to the `accept4(2)` function. /// /// This function will block the calling thread until a new connection is /// established. When established, the corresponding `Socket` and the remote /// peer's address will be returned. #[cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ) ))) )] pub fn accept4(&self, flags: c_int) -> io::Result<(crate::Socket, SockAddr)> { self._accept4(flags) } #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd" ))] pub(crate) fn _accept4(&self, flags: c_int) -> io::Result<(crate::Socket, SockAddr)> { // Safety: `accept4` initialises the `SockAddr` for us. unsafe { SockAddr::init(|storage, len| { syscall!(accept4(self.as_raw(), storage.cast(), len, flags)) .map(crate::Socket::from_raw) }) } } /// Sets `CLOEXEC` on the socket. /// /// # Notes /// /// On supported platforms you can use [`Type::cloexec`]. #[cfg(feature = "all")] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", unix))))] pub fn set_cloexec(&self, close_on_exec: bool) -> io::Result<()> { self._set_cloexec(close_on_exec) } pub(crate) fn _set_cloexec(&self, close_on_exec: bool) -> io::Result<()> { if close_on_exec { fcntl_add( self.as_raw(), libc::F_GETFD, libc::F_SETFD, libc::FD_CLOEXEC, ) } else { fcntl_remove( self.as_raw(), libc::F_GETFD, libc::F_SETFD, libc::FD_CLOEXEC, ) } } /// Sets `SO_NOSIGPIPE` on the socket. #[cfg(all(feature = "all", any(doc, target_vendor = "apple")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_vendor = "apple"))))] pub fn set_nosigpipe(&self, nosigpipe: bool) -> io::Result<()> { self._set_nosigpipe(nosigpipe) } #[cfg(target_vendor = "apple")] pub(crate) fn _set_nosigpipe(&self, nosigpipe: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_NOSIGPIPE, nosigpipe as c_int, ) } } /// Gets the value of the `TCP_MAXSEG` option on this socket. 
/// /// For more information about this option, see [`set_mss`]. /// /// [`set_mss`]: crate::Socket::set_mss #[cfg(all(feature = "all", not(target_os = "redox")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", unix, not(target_os = "redox")))))] pub fn mss(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::IPPROTO_TCP, libc::TCP_MAXSEG) .map(|mss| mss as u32) } } /// Sets the value of the `TCP_MAXSEG` option on this socket. /// /// The `TCP_MAXSEG` option denotes the TCP Maximum Segment Size and is only /// available on TCP sockets. #[cfg(all(feature = "all", not(target_os = "redox")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", unix, not(target_os = "redox")))))] pub fn set_mss(&self, mss: u32) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_MAXSEG, mss as c_int, ) } } /// Returns `true` if `listen(2)` was called on this socket by checking the /// `SO_ACCEPTCONN` option on this socket. #[cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))) )] pub fn is_listener(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_ACCEPTCONN) .map(|v| v != 0) } } /// Returns the [`Domain`] of this socket by checking the `SO_DOMAIN` option /// on this socket. #[cfg(all( feature = "all", any( target_os = "android", // TODO: add FreeBSD. // target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))] #[cfg_attr(docsrs, doc(cfg(all( feature = "all", any( target_os = "android", // TODO: add FreeBSD. // target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))))] pub fn domain(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_DOMAIN).map(Domain) } } /// Returns the [`Protocol`] of this socket by checking the `SO_PROTOCOL` /// option on this socket. #[cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", ) ))) )] pub fn protocol(&self) -> io::Result> { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_PROTOCOL).map(|v| match v { 0 => None, p => Some(Protocol(p)), }) } } /// Gets the value for the `SO_MARK` option on this socket. /// /// This value gets the socket mark field for each packet sent through /// this socket. /// /// On Linux this function requires the `CAP_NET_ADMIN` capability. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn mark(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_MARK) .map(|mark| mark as u32) } } /// Sets the value for the `SO_MARK` option on this socket. /// /// This value sets the socket mark field for each packet sent through /// this socket. Changing the mark can be used for mark-based routing /// without netfilter or for packet filtering. /// /// On Linux this function requires the `CAP_NET_ADMIN` capability. 
#[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_mark(&self, mark: u32) -> io::Result<()> { unsafe { setsockopt::( self.as_raw(), libc::SOL_SOCKET, libc::SO_MARK, mark as c_int, ) } } /// Get the value of the `TCP_CORK` option on this socket. /// /// For more information about this option, see [`set_cork`]. /// /// [`set_cork`]: Socket::set_cork #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn cork(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::IPPROTO_TCP, libc::TCP_CORK) .map(|cork| cork != 0) } } /// Set the value of the `TCP_CORK` option on this socket. /// /// If set, don't send out partial frames. All queued partial frames are /// sent when the option is cleared again. There is a 200 millisecond ceiling on /// the time for which output is corked by `TCP_CORK`. If this ceiling is reached, /// then queued data is automatically transmitted. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_cork(&self, cork: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_CORK, cork as c_int, ) } } /// Get the value of the `TCP_QUICKACK` option on this socket. /// /// For more information about this option, see [`set_quickack`]. /// /// [`set_quickack`]: Socket::set_quickack #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn quickack(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::IPPROTO_TCP, libc::TCP_QUICKACK) .map(|quickack| quickack != 0) } } /// Set the value of the `TCP_QUICKACK` option on this socket. /// /// If set, acks are sent immediately, rather than delayed if needed in accordance to normal /// TCP operation. This flag is not permanent, it only enables a switch to or from quickack mode. /// Subsequent operation of the TCP protocol will once again enter/leave quickack mode depending on /// internal protocol processing and factors such as delayed ack timeouts occurring and data transfer. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_quickack(&self, quickack: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_QUICKACK, quickack as c_int, ) } } /// Get the value of the `TCP_THIN_LINEAR_TIMEOUTS` option on this socket. /// /// For more information about this option, see [`set_thin_linear_timeouts`]. 
/// /// [`set_thin_linear_timeouts`]: Socket::set_thin_linear_timeouts #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn thin_linear_timeouts(&self) -> io::Result { unsafe { getsockopt::( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_THIN_LINEAR_TIMEOUTS, ) .map(|timeouts| timeouts != 0) } } /// Set the value of the `TCP_THIN_LINEAR_TIMEOUTS` option on this socket. /// /// If set, the kernel will dynamically detect a thin-stream connection if there are less than four packets in flight. /// With less than four packets in flight the normal TCP fast retransmission will not be effective. /// The kernel will modify the retransmission to avoid the very high latencies that thin stream suffer because of exponential backoff. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_thin_linear_timeouts(&self, timeouts: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_THIN_LINEAR_TIMEOUTS, timeouts as c_int, ) } } /// Gets the value for the `SO_BINDTODEVICE` option on this socket. /// /// This value gets the socket binded device's interface name. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn device(&self) -> io::Result>> { // TODO: replace with `MaybeUninit::uninit_array` once stable. let mut buf: [MaybeUninit; libc::IFNAMSIZ] = unsafe { MaybeUninit::uninit().assume_init() }; let mut len = buf.len() as libc::socklen_t; unsafe { syscall!(getsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_BINDTODEVICE, buf.as_mut_ptr().cast(), &mut len, ))?; } if len == 0 { Ok(None) } else { let buf = &buf[..len as usize - 1]; // TODO: use `MaybeUninit::slice_assume_init_ref` once stable. Ok(Some(unsafe { &*(buf as *const [_] as *const [u8]) }.into())) } } /// Sets the value for the `SO_BINDTODEVICE` option on this socket. /// /// If a socket is bound to an interface, only packets received from that /// particular interface are processed by the socket. Note that this only /// works for some socket types, particularly `AF_INET` sockets. /// /// If `interface` is `None` or an empty string it removes the binding. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn bind_device(&self, interface: Option<&[u8]>) -> io::Result<()> { let (value, len) = if let Some(interface) = interface { (interface.as_ptr(), interface.len()) } else { (ptr::null(), 0) }; syscall!(setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_BINDTODEVICE, value.cast(), len as libc::socklen_t, )) .map(|_| ()) } /// Sets the value for the `SO_SETFIB` option on this socket. /// /// Bind socket to the specified forwarding table (VRF) on a FreeBSD. 
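// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `bind_device`/`device` documented above. Binding to a
// device is Android/Fuchsia/Linux-only, needs the `all` feature and typically
// elevated privileges; the interface name "eth0" is purely illustrative.
fn pin_to_interface(socket: &socket2::Socket) -> std::io::Result<()> {
    socket.bind_device(Some(b"eth0"))?;    // only traffic from eth0 reaches us
    if let Some(name) = socket.device()? { // read the binding back
        println!("bound to {}", String::from_utf8_lossy(&name));
    }
    socket.bind_device(None)?;             // remove the binding again
    Ok(())
}
// -----------------------------------------------------------------------------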
#[cfg(all(feature = "all", any(target_os = "freebsd")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", any(target_os = "freebsd")))))] pub fn set_fib(&self, fib: u32) -> io::Result<()> { syscall!(setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_SETFIB, (&fib as *const u32).cast(), mem::size_of::() as libc::socklen_t, )) .map(|_| ()) } /// Sets the value for `IP_BOUND_IF` option on this socket. /// /// If a socket is bound to an interface, only packets received from that /// particular interface are processed by the socket. /// /// If `interface` is `None`, the binding is removed. If the `interface` /// index is not valid, an error is returned. /// /// One can use `libc::if_nametoindex` to convert an interface alias to an /// index. #[cfg(all(feature = "all", target_vendor = "apple"))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_vendor = "apple"))))] pub fn bind_device_by_index(&self, interface: Option) -> io::Result<()> { let index = interface.map(NonZeroU32::get).unwrap_or(0); unsafe { setsockopt(self.as_raw(), IPPROTO_IP, libc::IP_BOUND_IF, index) } } /// Gets the value for `IP_BOUND_IF` option on this socket, i.e. the index /// for the interface to which the socket is bound. /// /// Returns `None` if the socket is not bound to any interface, otherwise /// returns an interface index. #[cfg(all(feature = "all", target_vendor = "apple"))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_vendor = "apple"))))] pub fn device_index(&self) -> io::Result> { let index = unsafe { getsockopt::(self.as_raw(), IPPROTO_IP, libc::IP_BOUND_IF)? }; Ok(NonZeroU32::new(index)) } /// Get the value of the `SO_INCOMING_CPU` option on this socket. /// /// For more information about this option, see [`set_cpu_affinity`]. /// /// [`set_cpu_affinity`]: crate::Socket::set_cpu_affinity #[cfg(all(feature = "all", target_os = "linux"))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_os = "linux"))))] pub fn cpu_affinity(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_INCOMING_CPU) .map(|cpu| cpu as usize) } } /// Set value for the `SO_INCOMING_CPU` option on this socket. /// /// Sets the CPU affinity of the socket. #[cfg(all(feature = "all", target_os = "linux"))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_os = "linux"))))] pub fn set_cpu_affinity(&self, cpu: usize) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_INCOMING_CPU, cpu as c_int, ) } } /// Get the value of the `SO_REUSEPORT` option on this socket. /// /// For more information about this option, see [`set_reuse_port`]. /// /// [`set_reuse_port`]: crate::Socket::set_reuse_port #[cfg(all( feature = "all", not(any(target_os = "solaris", target_os = "illumos")) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", unix, not(any(target_os = "solaris", target_os = "illumos")) ))) )] pub fn reuse_port(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_SOCKET, libc::SO_REUSEPORT) .map(|reuse| reuse != 0) } } /// Set value for the `SO_REUSEPORT` option on this socket. /// /// This indicates that further calls to `bind` may allow reuse of local /// addresses. For IPv4 sockets this means that a socket may bind even when /// there's a socket already listening on this port. 
#[cfg(all( feature = "all", not(any(target_os = "solaris", target_os = "illumos")) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", unix, not(any(target_os = "solaris", target_os = "illumos")) ))) )] pub fn set_reuse_port(&self, reuse: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_REUSEPORT, reuse as c_int, ) } } /// Get the value of the `IP_FREEBIND` option on this socket. /// /// For more information about this option, see [`set_freebind`]. /// /// [`set_freebind`]: crate::Socket::set_freebind #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn freebind(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_IP, libc::IP_FREEBIND) .map(|freebind| freebind != 0) } } /// Set value for the `IP_FREEBIND` option on this socket. /// /// If enabled, this boolean option allows binding to an IP address that is /// nonlocal or does not (yet) exist. This permits listening on a socket, /// without requiring the underlying network interface or the specified /// dynamic IP address to be up at the time that the application is trying /// to bind to it. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_freebind(&self, freebind: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::SOL_IP, libc::IP_FREEBIND, freebind as c_int, ) } } /// Get the value of the `IPV6_FREEBIND` option on this socket. /// /// This is an IPv6 counterpart of `IP_FREEBIND` socket option on /// Android/Linux. For more information about this option, see /// [`set_freebind`]. /// /// [`set_freebind`]: crate::Socket::set_freebind #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] #[cfg_attr( docsrs, doc(cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))) )] pub fn freebind_ipv6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), libc::SOL_IPV6, libc::IPV6_FREEBIND) .map(|freebind| freebind != 0) } } /// Set value for the `IPV6_FREEBIND` option on this socket. /// /// This is an IPv6 counterpart of `IP_FREEBIND` socket option on /// Android/Linux. For more information about this option, see /// [`set_freebind`]. /// /// [`set_freebind`]: crate::Socket::set_freebind /// /// # Examples /// /// On Linux: /// /// ``` /// use socket2::{Domain, Socket, Type}; /// use std::io::{self, Error, ErrorKind}; /// /// fn enable_freebind(socket: &Socket) -> io::Result<()> { /// match socket.domain()? 
{ /// Domain::IPV4 => socket.set_freebind(true)?, /// Domain::IPV6 => socket.set_freebind_ipv6(true)?, /// _ => return Err(Error::new(ErrorKind::Other, "unsupported domain")), /// }; /// Ok(()) /// } /// /// # fn main() -> io::Result<()> { /// # let socket = Socket::new(Domain::IPV6, Type::STREAM, None)?; /// # enable_freebind(&socket) /// # } /// ``` #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] #[cfg_attr( docsrs, doc(cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))) )] pub fn set_freebind_ipv6(&self, freebind: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), libc::SOL_IPV6, libc::IPV6_FREEBIND, freebind as c_int, ) } } /// Copies data between a `file` and this socket using the `sendfile(2)` /// system call. Because this copying is done within the kernel, /// `sendfile()` is more efficient than the combination of `read(2)` and /// `write(2)`, which would require transferring data to and from user /// space. /// /// Different OSs support different kinds of `file`s, see the OS /// documentation for what kind of files are supported. Generally *regular* /// files are supported by all OSs. /// /// The `offset` is the absolute offset into the `file` to use as starting /// point. /// /// Depending on the OS this function *may* change the offset of `file`. For /// the best results reset the offset of the file before using it again. /// /// The `length` determines how many bytes to send, where a length of `None` /// means it will try to send all bytes. #[cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "linux", target_vendor = "apple", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "freebsd", target_os = "linux", target_vendor = "apple", ) ))) )] pub fn sendfile( &self, file: &F, offset: usize, length: Option, ) -> io::Result where F: AsRawFd, { self._sendfile(file.as_raw_fd(), offset as _, length) } #[cfg(all(feature = "all", target_vendor = "apple"))] fn _sendfile( &self, file: RawFd, offset: libc::off_t, length: Option, ) -> io::Result { // On macOS `length` is value-result parameter. It determines the number // of bytes to write and returns the number of bytes written. let mut length = match length { Some(n) => n.get() as libc::off_t, // A value of `0` means send all bytes. None => 0, }; syscall!(sendfile( file, self.as_raw(), offset, &mut length, ptr::null_mut(), 0, )) .map(|_| length as usize) } #[cfg(all(feature = "all", any(target_os = "android", target_os = "linux")))] fn _sendfile( &self, file: RawFd, offset: libc::off_t, length: Option, ) -> io::Result { let count = match length { Some(n) => n.get() as libc::size_t, // The maximum the Linux kernel will write in a single call. None => 0x7ffff000, // 2,147,479,552 bytes. }; let mut offset = offset; syscall!(sendfile(self.as_raw(), file, &mut offset, count)).map(|n| n as usize) } #[cfg(all(feature = "all", target_os = "freebsd"))] fn _sendfile( &self, file: RawFd, offset: libc::off_t, length: Option, ) -> io::Result { let nbytes = match length { Some(n) => n.get() as libc::size_t, // A value of `0` means send all bytes. None => 0, }; let mut sbytes: libc::off_t = 0; syscall!(sendfile( file, self.as_raw(), offset, nbytes, ptr::null_mut(), &mut sbytes, 0, )) .map(|_| sbytes as usize) } /// Set the value of the `TCP_USER_TIMEOUT` option on this socket. 
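// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `sendfile` as documented above: stream a regular file
// over an already connected TCP socket. Assumes the `all` feature on one of
// the supported platforms; the path is illustrative and short writes are
// handled by advancing the offset ourselves.
fn send_whole_file(socket: &socket2::Socket, path: &str) -> std::io::Result<()> {
    let file = std::fs::File::open(path)?;
    let len = file.metadata()?.len() as usize;
    let mut offset = 0;
    while offset < len {
        // `None` asks the OS to send as much as it can in a single call.
        let sent = socket.sendfile(&file, offset, None)?;
        if sent == 0 {
            break; // nothing left to send (or the peer went away)
        }
        offset += sent;
    }
    Ok(())
}
// -----------------------------------------------------------------------------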
/// /// If set, this specifies the maximum amount of time that transmitted data may remain /// unacknowledged or buffered data may remain untransmitted before TCP will forcibly close the /// corresponding connection. /// /// Setting `timeout` to `None` or a zero duration causes the system default timeouts to /// be used. If `timeout` in milliseconds is larger than `c_uint::MAX`, the timeout is clamped /// to `c_uint::MAX`. For example, when `c_uint` is a 32-bit value, this limits the timeout to /// approximately 49.71 days. #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn set_tcp_user_timeout(&self, timeout: Option) -> io::Result<()> { let timeout = timeout .map(|to| min(to.as_millis(), libc::c_uint::MAX as u128) as libc::c_uint) .unwrap_or(0); unsafe { setsockopt( self.as_raw(), libc::IPPROTO_TCP, libc::TCP_USER_TIMEOUT, timeout, ) } } /// Get the value of the `TCP_USER_TIMEOUT` option on this socket. /// /// For more information about this option, see [`set_tcp_user_timeout`]. /// /// [`set_tcp_user_timeout`]: Socket::set_tcp_user_timeout #[cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any(target_os = "android", target_os = "fuchsia", target_os = "linux") ))) )] pub fn tcp_user_timeout(&self) -> io::Result> { unsafe { getsockopt::(self.as_raw(), libc::IPPROTO_TCP, libc::TCP_USER_TIMEOUT) .map(|millis| { if millis == 0 { None } else { Some(Duration::from_millis(millis as u64)) } }) } } /// Attach Berkeley Packet Filter(BPF) on this socket. /// /// BPF allows a user-space program to attach a filter onto any socket /// and allow or disallow certain types of data to come through the socket. /// /// For more information about this option, see [filter](https://www.kernel.org/doc/html/v5.12/networking/filter.html) #[cfg(all(feature = "all", any(target_os = "linux", target_os = "android")))] pub fn attach_filter(&self, filters: &[libc::sock_filter]) -> io::Result<()> { let prog = libc::sock_fprog { len: filters.len() as u16, filter: filters.as_ptr() as *mut _, }; unsafe { setsockopt( self.as_raw(), libc::SOL_SOCKET, libc::SO_ATTACH_FILTER, prog, ) } } /// Detach Berkeley Packet Filter(BPF) from this socket. 
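// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `TCP_USER_TIMEOUT` and `attach_filter`, both
// documented above. Linux-only, `all` feature; the 30 second timeout is
// arbitrary and the classic cBPF "accept everything" program is shown only to
// illustrate how a `libc::sock_filter` array is built.
fn harden_connection(socket: &socket2::Socket) -> std::io::Result<()> {
    use std::time::Duration;
    // Give up on unacknowledged data after roughly 30 seconds.
    socket.set_tcp_user_timeout(Some(Duration::from_secs(30)))?;
    let _ = socket.tcp_user_timeout()?; // read it back (stored in milliseconds)
    // cBPF: `BPF_RET | BPF_K` with k = u32::MAX accepts every packet.
    let allow_all = [libc::sock_filter {
        code: 0x06, // BPF_RET | BPF_K
        jt: 0,
        jf: 0,
        k: u32::MAX,
    }];
    socket.attach_filter(&allow_all)?;
    Ok(())
}
// -----------------------------------------------------------------------------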
/// /// For more information about this option, see [`attach_filter`] #[cfg(all(feature = "all", any(target_os = "linux", target_os = "android")))] pub fn detach_filter(&self) -> io::Result<()> { unsafe { setsockopt(self.as_raw(), libc::SOL_SOCKET, libc::SO_DETACH_FILTER, 0) } } } #[cfg_attr(docsrs, doc(cfg(unix)))] impl AsRawFd for crate::Socket { fn as_raw_fd(&self) -> c_int { self.as_raw() } } #[cfg_attr(docsrs, doc(cfg(unix)))] impl IntoRawFd for crate::Socket { fn into_raw_fd(self) -> c_int { self.into_raw() } } #[cfg_attr(docsrs, doc(cfg(unix)))] impl FromRawFd for crate::Socket { unsafe fn from_raw_fd(fd: c_int) -> crate::Socket { crate::Socket::from_raw(fd) } } #[cfg(feature = "all")] from!(UnixStream, crate::Socket); #[cfg(feature = "all")] from!(UnixListener, crate::Socket); #[cfg(feature = "all")] from!(UnixDatagram, crate::Socket); #[cfg(feature = "all")] from!(crate::Socket, UnixStream); #[cfg(feature = "all")] from!(crate::Socket, UnixListener); #[cfg(feature = "all")] from!(crate::Socket, UnixDatagram); #[test] fn in_addr_convertion() { let ip = Ipv4Addr::new(127, 0, 0, 1); let raw = to_in_addr(&ip); // NOTE: `in_addr` is packed on NetBSD and it's unsafe to borrow. let a = raw.s_addr; assert_eq!(a, u32::from_ne_bytes([127, 0, 0, 1])); assert_eq!(from_in_addr(raw), ip); let ip = Ipv4Addr::new(127, 34, 4, 12); let raw = to_in_addr(&ip); let a = raw.s_addr; assert_eq!(a, u32::from_ne_bytes([127, 34, 4, 12])); assert_eq!(from_in_addr(raw), ip); } #[test] fn in6_addr_convertion() { let ip = Ipv6Addr::new(0x2000, 1, 2, 3, 4, 5, 6, 7); let raw = to_in6_addr(&ip); let want = [32, 0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7]; assert_eq!(raw.s6_addr, want); assert_eq!(from_in6_addr(raw), ip); } vendor/socket2/src/socket.rs0000664000175000017500000017652614172417313016672 0ustar mwhudsonmwhudson// Copyright 2015 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::fmt; use std::io::{self, Read, Write}; #[cfg(not(target_os = "redox"))] use std::io::{IoSlice, IoSliceMut}; use std::mem::MaybeUninit; use std::net::{self, Ipv4Addr, Ipv6Addr, Shutdown}; #[cfg(unix)] use std::os::unix::io::{FromRawFd, IntoRawFd}; #[cfg(windows)] use std::os::windows::io::{FromRawSocket, IntoRawSocket}; use std::time::Duration; use crate::sys::{self, c_int, getsockopt, setsockopt, Bool}; use crate::{Domain, Protocol, SockAddr, TcpKeepalive, Type}; #[cfg(not(target_os = "redox"))] use crate::{MaybeUninitSlice, RecvFlags}; /// Owned wrapper around a system socket. /// /// This type simply wraps an instance of a file descriptor (`c_int`) on Unix /// and an instance of `SOCKET` on Windows. This is the main type exported by /// this crate and is intended to mirror the raw semantics of sockets on /// platforms as closely as possible. Almost all methods correspond to /// precisely one libc or OS API call which is essentially just a "Rustic /// translation" of what's below. /// /// ## Converting to and from other types /// /// This type can be freely converted into the network primitives provided by /// the standard library, such as [`TcpStream`] or [`UdpSocket`], using the /// [`From`] trait, see the example below. 
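// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch complementing the crate-level example that follows: a
// `socket2::Socket` is configured first, its raw fd can be inspected through
// the `AsRawFd` impl above, and it is then handed off to the matching std
// type via `From`. Assumes a Unix target; the address is illustrative.
fn configure_then_hand_off() -> std::io::Result<std::net::TcpListener> {
    use socket2::{Domain, Socket, Type};
    use std::os::unix::io::AsRawFd;
    let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?;
    println!("raw fd: {}", socket.as_raw_fd());
    let addr: std::net::SocketAddr = "127.0.0.1:0".parse().unwrap();
    socket.bind(&addr.into())?;
    socket.listen(16)?;
    Ok(socket.into()) // From<Socket> for std::net::TcpListener
}
// -----------------------------------------------------------------------------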
/// /// [`TcpStream`]: std::net::TcpStream /// [`UdpSocket`]: std::net::UdpSocket /// /// # Notes /// /// Some methods that set options on `Socket` require two system calls to set /// there options without overwriting previously set options. We do this by /// first getting the current settings, applying the desired changes and than /// updating the settings. This means that the operation is **not** atomic. This /// can lead to a data race when two threads are changing options in parallel. /// /// # Examples /// ```no_run /// # fn main() -> std::io::Result<()> { /// use std::net::{SocketAddr, TcpListener}; /// use socket2::{Socket, Domain, Type}; /// /// // create a TCP listener bound to two addresses /// let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?; /// /// let address: SocketAddr = "[::1]:12345".parse().unwrap(); /// let address = address.into(); /// socket.bind(&address)?; /// socket.bind(&address)?; /// socket.listen(128)?; /// /// let listener: TcpListener = socket.into(); /// // ... /// # drop(listener); /// # Ok(()) } /// ``` pub struct Socket { inner: Inner, } /// Store a `TcpStream` internally to take advantage of its niche optimizations on Unix platforms. pub(crate) type Inner = std::net::TcpStream; impl Socket { /// # Safety /// /// The caller must ensure `raw` is a valid file descriptor/socket. NOTE: /// this should really be marked `unsafe`, but this being an internal /// function, often passed as mapping function, it's makes it very /// inconvenient to mark it as `unsafe`. pub(crate) fn from_raw(raw: sys::Socket) -> Socket { Socket { inner: unsafe { // SAFETY: the caller must ensure that `raw` is a valid file // descriptor, but when it isn't it could return I/O errors, or // potentially close a fd it doesn't own. All of that isn't // memory unsafe, so it's not desired but never memory unsafe or // causes UB. // // However there is one exception. We use `TcpStream` to // represent the `Socket` internally (see `Inner` type), // `TcpStream` has a layout optimisation that doesn't allow for // negative file descriptors (as those are always invalid). // Violating this assumption (fd never negative) causes UB, // something we don't want. So check for that we have this // `assert!`. #[cfg(unix)] assert!(raw >= 0, "tried to create a `Socket` with an invalid fd"); sys::socket_from_raw(raw) }, } } pub(crate) fn as_raw(&self) -> sys::Socket { sys::socket_as_raw(&self.inner) } pub(crate) fn into_raw(self) -> sys::Socket { sys::socket_into_raw(self.inner) } /// Creates a new socket and sets common flags. /// /// This function corresponds to `socket(2)` on Unix and `WSASocketW` on /// Windows. /// /// On Unix-like systems, the close-on-exec flag is set on the new socket. /// Additionally, on Apple platforms `SOCK_NOSIGPIPE` is set. On Windows, /// the socket is made non-inheritable. /// /// [`Socket::new_raw`] can be used if you don't want these flags to be set. pub fn new(domain: Domain, ty: Type, protocol: Option) -> io::Result { let ty = set_common_type(ty); Socket::new_raw(domain, ty, protocol).and_then(set_common_flags) } /// Creates a new socket ready to be configured. /// /// This function corresponds to `socket(2)` on Unix and `WSASocketW` on /// Windows and simply creates a new socket, no other configuration is done. pub fn new_raw(domain: Domain, ty: Type, protocol: Option) -> io::Result { let protocol = protocol.map(|p| p.0).unwrap_or(0); sys::socket(domain.0, ty.0, protocol).map(Socket::from_raw) } /// Creates a pair of sockets which are connected to each other. 
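// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `Socket::new` for a datagram socket: create it, tweak
// options that std does not expose at construction time, then convert it into
// a `std::net::UdpSocket`. The address is illustrative.
fn udp_socket() -> std::io::Result<std::net::UdpSocket> {
    use socket2::{Domain, Socket, Type};
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, None)?;
    socket.set_nonblocking(true)?;
    socket.set_reuse_address(true)?;
    let addr: std::net::SocketAddr = "127.0.0.1:0".parse().unwrap();
    socket.bind(&addr.into())?;
    Ok(socket.into())
}
// -----------------------------------------------------------------------------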
/// /// This function corresponds to `socketpair(2)`. /// /// This function sets the same flags as in done for [`Socket::new`], /// [`Socket::pair_raw`] can be used if you don't want to set those flags. #[cfg(any(doc, all(feature = "all", unix)))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", unix))))] pub fn pair( domain: Domain, ty: Type, protocol: Option, ) -> io::Result<(Socket, Socket)> { let ty = set_common_type(ty); let (a, b) = Socket::pair_raw(domain, ty, protocol)?; let a = set_common_flags(a)?; let b = set_common_flags(b)?; Ok((a, b)) } /// Creates a pair of sockets which are connected to each other. /// /// This function corresponds to `socketpair(2)`. #[cfg(any(doc, all(feature = "all", unix)))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", unix))))] pub fn pair_raw( domain: Domain, ty: Type, protocol: Option, ) -> io::Result<(Socket, Socket)> { let protocol = protocol.map(|p| p.0).unwrap_or(0); sys::socketpair(domain.0, ty.0, protocol) .map(|[a, b]| (Socket::from_raw(a), Socket::from_raw(b))) } /// Binds this socket to the specified address. /// /// This function directly corresponds to the `bind(2)` function on Windows /// and Unix. pub fn bind(&self, address: &SockAddr) -> io::Result<()> { sys::bind(self.as_raw(), address) } /// Initiate a connection on this socket to the specified address. /// /// This function directly corresponds to the `connect(2)` function on /// Windows and Unix. /// /// An error will be returned if `listen` or `connect` has already been /// called on this builder. /// /// # Notes /// /// When using a non-blocking connect (by setting the socket into /// non-blocking mode before calling this function), socket option can't be /// set *while connecting*. This will cause errors on Windows. Socket /// options can be safely set before and after connecting the socket. pub fn connect(&self, address: &SockAddr) -> io::Result<()> { sys::connect(self.as_raw(), address) } /// Initiate a connection on this socket to the specified address, only /// only waiting for a certain period of time for the connection to be /// established. /// /// Unlike many other methods on `Socket`, this does *not* correspond to a /// single C function. It sets the socket to nonblocking mode, connects via /// connect(2), and then waits for the connection to complete with poll(2) /// on Unix and select on Windows. When the connection is complete, the /// socket is set back to blocking mode. On Unix, this will loop over /// `EINTR` errors. /// /// # Warnings /// /// The non-blocking state of the socket is overridden by this function - /// it will be returned in blocking mode on success, and in an indeterminate /// state on failure. /// /// If the connection request times out, it may still be processing in the /// background - a second call to `connect` or `connect_timeout` may fail. pub fn connect_timeout(&self, addr: &SockAddr, timeout: Duration) -> io::Result<()> { self.set_nonblocking(true)?; let res = self.connect(addr); self.set_nonblocking(false)?; match res { Ok(()) => return Ok(()), Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {} #[cfg(unix)] Err(ref e) if e.raw_os_error() == Some(libc::EINPROGRESS) => {} Err(e) => return Err(e), } sys::poll_connect(self, timeout) } /// Mark a socket as ready to accept incoming connection requests using /// [`Socket::accept()`]. /// /// This function directly corresponds to the `listen(2)` function on /// Windows and Unix. /// /// An error will be returned if `listen` or `connect` has already been /// called on this builder. 
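// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `connect_timeout` as documented above: attempt the
// connection for at most a few seconds instead of blocking indefinitely. The
// timeout value is illustrative.
fn connect_with_deadline(addr: std::net::SocketAddr) -> std::io::Result<socket2::Socket> {
    use socket2::{Domain, Socket, Type};
    use std::time::Duration;
    let domain = if addr.is_ipv4() { Domain::IPV4 } else { Domain::IPV6 };
    let socket = Socket::new(domain, Type::STREAM, None)?;
    socket.connect_timeout(&addr.into(), Duration::from_secs(3))?;
    Ok(socket) // back in blocking mode on success (see the warning above)
}
// -----------------------------------------------------------------------------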
pub fn listen(&self, backlog: c_int) -> io::Result<()> { sys::listen(self.as_raw(), backlog) } /// Accept a new incoming connection from this listener. /// /// This function uses `accept4(2)` on platforms that support it and /// `accept(2)` platforms that do not. /// /// This function sets the same flags as in done for [`Socket::new`], /// [`Socket::accept_raw`] can be used if you don't want to set those flags. pub fn accept(&self) -> io::Result<(Socket, SockAddr)> { // Use `accept4` on platforms that support it. #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd", ))] return self._accept4(libc::SOCK_CLOEXEC); // Fall back to `accept` on platforms that do not support `accept4`. #[cfg(not(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd", )))] { let (socket, addr) = self.accept_raw()?; let socket = set_common_flags(socket)?; // `set_common_flags` does not disable inheritance on Windows because `Socket::new` // unlike `accept` is able to create the socket with inheritance disabled. #[cfg(windows)] socket._set_no_inherit(true)?; Ok((socket, addr)) } } /// Accept a new incoming connection from this listener. /// /// This function directly corresponds to the `accept(2)` function on /// Windows and Unix. pub fn accept_raw(&self) -> io::Result<(Socket, SockAddr)> { sys::accept(self.as_raw()).map(|(inner, addr)| (Socket::from_raw(inner), addr)) } /// Returns the socket address of the local half of this socket. /// /// # Notes /// /// Depending on the OS this may return an error if the socket is not /// [bound]. /// /// [bound]: Socket::bind pub fn local_addr(&self) -> io::Result { sys::getsockname(self.as_raw()) } /// Returns the socket address of the remote peer of this socket. /// /// # Notes /// /// This returns an error if the socket is not [`connect`ed]. /// /// [`connect`ed]: Socket::connect pub fn peer_addr(&self) -> io::Result { sys::getpeername(self.as_raw()) } /// Returns the [`Type`] of this socket by checking the `SO_TYPE` option on /// this socket. pub fn r#type(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_TYPE).map(Type) } } /// Creates a new independently owned handle to the underlying socket. /// /// # Notes /// /// On Unix this uses `F_DUPFD_CLOEXEC` and thus sets the `FD_CLOEXEC` on /// the returned socket. /// /// On Windows this uses `WSA_FLAG_NO_HANDLE_INHERIT` setting inheriting to /// false. /// /// On Windows this can **not** be used function cannot be used on a /// QOS-enabled socket, see /// . pub fn try_clone(&self) -> io::Result { sys::try_clone(self.as_raw()).map(Socket::from_raw) } /// Moves this TCP stream into or out of nonblocking mode. /// /// # Notes /// /// On Unix this corresponds to calling `fcntl` (un)setting `O_NONBLOCK`. /// /// On Windows this corresponds to calling `ioctlsocket` (un)setting /// `FIONBIO`. pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> { sys::set_nonblocking(self.as_raw(), nonblocking) } /// Shuts down the read, write, or both halves of this connection. /// /// This function will cause all pending and future I/O on the specified /// portions to return immediately with an appropriate value. 
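// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch tying together `bind`, `listen` and `accept` from above:
// accept a single connection and report where it came from. The port is
// arbitrary, and `SockAddr::as_socket` is assumed from the crate's public API.
fn accept_one() -> std::io::Result<()> {
    use socket2::{Domain, Socket, Type};
    let listener = Socket::new(Domain::IPV4, Type::STREAM, None)?;
    let addr: std::net::SocketAddr = "127.0.0.1:9001".parse().unwrap();
    listener.bind(&addr.into())?;
    listener.listen(1)?;
    let (conn, peer) = listener.accept()?; // close-on-exec is set, see above
    println!("connection from {:?}, type {:?}", peer.as_socket(), conn.r#type()?);
    Ok(())
}
// -----------------------------------------------------------------------------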
pub fn shutdown(&self, how: Shutdown) -> io::Result<()> { sys::shutdown(self.as_raw(), how) } /// Receives data on the socket from the remote address to which it is /// connected. /// /// The [`connect`] method will connect this socket to a remote address. /// This method might fail if the socket is not connected. /// /// [`connect`]: Socket::connect /// /// # Safety /// /// Normally casting a `&mut [u8]` to `&mut [MaybeUninit]` would be /// unsound, as that allows us to write uninitialised bytes to the buffer. /// However this implementation promises to not write uninitialised bytes to /// the `buf`fer and passes it directly to `recv(2)` system call. This /// promise ensures that this function can be called using a `buf`fer of /// type `&mut [u8]`. /// /// Note that the [`io::Read::read`] implementation calls this function with /// a `buf`fer of type `&mut [u8]`, allowing initialised buffers to be used /// without using `unsafe`. pub fn recv(&self, buf: &mut [MaybeUninit]) -> io::Result { self.recv_with_flags(buf, 0) } /// Receives out-of-band (OOB) data on the socket from the remote address to /// which it is connected by setting the `MSG_OOB` flag for this call. /// /// For more information, see [`recv`], [`out_of_band_inline`]. /// /// [`recv`]: Socket::recv /// [`out_of_band_inline`]: Socket::out_of_band_inline pub fn recv_out_of_band(&self, buf: &mut [MaybeUninit]) -> io::Result { self.recv_with_flags(buf, sys::MSG_OOB) } /// Identical to [`recv`] but allows for specification of arbitrary flags to /// the underlying `recv` call. /// /// [`recv`]: Socket::recv pub fn recv_with_flags( &self, buf: &mut [MaybeUninit], flags: sys::c_int, ) -> io::Result { sys::recv(self.as_raw(), buf, flags) } /// Receives data on the socket from the remote address to which it is /// connected. Unlike [`recv`] this allows passing multiple buffers. /// /// The [`connect`] method will connect this socket to a remote address. /// This method might fail if the socket is not connected. /// /// In addition to the number of bytes read, this function returns the flags /// for the received message. See [`RecvFlags`] for more information about /// the returned flags. /// /// [`recv`]: Socket::recv /// [`connect`]: Socket::connect /// /// # Safety /// /// Normally casting a `IoSliceMut` to `MaybeUninitSlice` would be unsound, /// as that allows us to write uninitialised bytes to the buffer. However /// this implementation promises to not write uninitialised bytes to the /// `bufs` and passes it directly to `recvmsg(2)` system call. This promise /// ensures that this function can be called using `bufs` of type `&mut /// [IoSliceMut]`. /// /// Note that the [`io::Read::read_vectored`] implementation calls this /// function with `buf`s of type `&mut [IoSliceMut]`, allowing initialised /// buffers to be used without using `unsafe`. #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn recv_vectored( &self, bufs: &mut [MaybeUninitSlice<'_>], ) -> io::Result<(usize, RecvFlags)> { self.recv_vectored_with_flags(bufs, 0) } /// Identical to [`recv_vectored`] but allows for specification of arbitrary /// flags to the underlying `recvmsg`/`WSARecv` call. /// /// [`recv_vectored`]: Socket::recv_vectored /// /// # Safety /// /// `recv_from_vectored` makes the same safety guarantees regarding `bufs` /// as [`recv_vectored`]. 
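// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of calling `recv` with the `MaybeUninit` buffer it asks
// for, and of viewing only the initialised prefix afterwards. Per the safety
// notes above, `recv` does not write uninitialised bytes into the first `n`
// elements, which is what makes the final cast sound.
fn recv_some(socket: &socket2::Socket) -> std::io::Result<Vec<u8>> {
    use std::mem::MaybeUninit;
    let mut buf = [MaybeUninit::<u8>::uninit(); 1024];
    let n = socket.recv(&mut buf)?;
    // SAFETY: `recv` initialised the first `n` bytes of `buf`.
    let bytes = unsafe { std::slice::from_raw_parts(buf.as_ptr().cast::<u8>(), n) };
    Ok(bytes.to_vec())
}
// -----------------------------------------------------------------------------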
/// /// [`recv_vectored`]: Socket::recv_vectored #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn recv_vectored_with_flags( &self, bufs: &mut [MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags)> { sys::recv_vectored(self.as_raw(), bufs, flags) } /// Receives data on the socket from the remote adress to which it is /// connected, without removing that data from the queue. On success, /// returns the number of bytes peeked. /// /// Successive calls return the same data. This is accomplished by passing /// `MSG_PEEK` as a flag to the underlying `recv` system call. /// /// # Safety /// /// `peek` makes the same safety guarantees regarding the `buf`fer as /// [`recv`]. /// /// [`recv`]: Socket::recv pub fn peek(&self, buf: &mut [MaybeUninit]) -> io::Result { self.recv_with_flags(buf, sys::MSG_PEEK) } /// Receives data from the socket. On success, returns the number of bytes /// read and the address from whence the data came. /// /// # Safety /// /// `recv_from` makes the same safety guarantees regarding the `buf`fer as /// [`recv`]. /// /// [`recv`]: Socket::recv pub fn recv_from(&self, buf: &mut [MaybeUninit]) -> io::Result<(usize, SockAddr)> { self.recv_from_with_flags(buf, 0) } /// Identical to [`recv_from`] but allows for specification of arbitrary /// flags to the underlying `recvfrom` call. /// /// [`recv_from`]: Socket::recv_from pub fn recv_from_with_flags( &self, buf: &mut [MaybeUninit], flags: c_int, ) -> io::Result<(usize, SockAddr)> { sys::recv_from(self.as_raw(), buf, flags) } /// Receives data from the socket. Returns the amount of bytes read, the /// [`RecvFlags`] and the remote address from the data is coming. Unlike /// [`recv_from`] this allows passing multiple buffers. /// /// [`recv_from`]: Socket::recv_from /// /// # Safety /// /// `recv_from_vectored` makes the same safety guarantees regarding `bufs` /// as [`recv_vectored`]. /// /// [`recv_vectored`]: Socket::recv_vectored #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn recv_from_vectored( &self, bufs: &mut [MaybeUninitSlice<'_>], ) -> io::Result<(usize, RecvFlags, SockAddr)> { self.recv_from_vectored_with_flags(bufs, 0) } /// Identical to [`recv_from_vectored`] but allows for specification of /// arbitrary flags to the underlying `recvmsg`/`WSARecvFrom` call. /// /// [`recv_from_vectored`]: Socket::recv_from_vectored /// /// # Safety /// /// `recv_from_vectored` makes the same safety guarantees regarding `bufs` /// as [`recv_vectored`]. /// /// [`recv_vectored`]: Socket::recv_vectored #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn recv_from_vectored_with_flags( &self, bufs: &mut [MaybeUninitSlice<'_>], flags: c_int, ) -> io::Result<(usize, RecvFlags, SockAddr)> { sys::recv_from_vectored(self.as_raw(), bufs, flags) } /// Receives data from the socket, without removing it from the queue. /// /// Successive calls return the same data. This is accomplished by passing /// `MSG_PEEK` as a flag to the underlying `recvfrom` system call. /// /// On success, returns the number of bytes peeked and the address from /// whence the data came. /// /// # Safety /// /// `peek_from` makes the same safety guarantees regarding the `buf`fer as /// [`recv`]. /// /// [`recv`]: Socket::recv pub fn peek_from(&self, buf: &mut [MaybeUninit]) -> io::Result<(usize, SockAddr)> { self.recv_from_with_flags(buf, sys::MSG_PEEK) } /// Sends data on the socket to a connected peer. 
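// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `recv_from` on a datagram socket, following the same
// `MaybeUninit` pattern as the `recv` sketch earlier. Assumes the socket is
// already bound; the buffer size is arbitrary.
fn recv_one_datagram(socket: &socket2::Socket) -> std::io::Result<(Vec<u8>, socket2::SockAddr)> {
    use std::mem::MaybeUninit;
    let mut buf = [MaybeUninit::<u8>::uninit(); 2048];
    let (n, peer) = socket.recv_from(&mut buf)?;
    // SAFETY: `recv_from` initialised the first `n` bytes of `buf`.
    let bytes = unsafe { std::slice::from_raw_parts(buf.as_ptr().cast::<u8>(), n) }.to_vec();
    Ok((bytes, peer))
}
// -----------------------------------------------------------------------------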
/// /// This is typically used on TCP sockets or datagram sockets which have /// been connected. /// /// On success returns the number of bytes that were sent. pub fn send(&self, buf: &[u8]) -> io::Result { self.send_with_flags(buf, 0) } /// Identical to [`send`] but allows for specification of arbitrary flags to the underlying /// `send` call. /// /// [`send`]: #method.send pub fn send_with_flags(&self, buf: &[u8], flags: c_int) -> io::Result { sys::send(self.as_raw(), buf, flags) } /// Send data to the connected peer. Returns the amount of bytes written. #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn send_vectored(&self, bufs: &[IoSlice<'_>]) -> io::Result { self.send_vectored_with_flags(bufs, 0) } /// Identical to [`send_vectored`] but allows for specification of arbitrary /// flags to the underlying `sendmsg`/`WSASend` call. /// /// [`send_vectored`]: Socket::send_vectored #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn send_vectored_with_flags( &self, bufs: &[IoSlice<'_>], flags: c_int, ) -> io::Result { sys::send_vectored(self.as_raw(), bufs, flags) } /// Sends out-of-band (OOB) data on the socket to connected peer /// by setting the `MSG_OOB` flag for this call. /// /// For more information, see [`send`], [`out_of_band_inline`]. /// /// [`send`]: #method.send /// [`out_of_band_inline`]: #method.out_of_band_inline pub fn send_out_of_band(&self, buf: &[u8]) -> io::Result { self.send_with_flags(buf, sys::MSG_OOB) } /// Sends data on the socket to the given address. On success, returns the /// number of bytes written. /// /// This is typically used on UDP or datagram-oriented sockets. pub fn send_to(&self, buf: &[u8], addr: &SockAddr) -> io::Result { self.send_to_with_flags(buf, addr, 0) } /// Identical to [`send_to`] but allows for specification of arbitrary flags /// to the underlying `sendto` call. /// /// [`send_to`]: Socket::send_to pub fn send_to_with_flags( &self, buf: &[u8], addr: &SockAddr, flags: c_int, ) -> io::Result { sys::send_to(self.as_raw(), buf, addr, flags) } /// Send data to a peer listening on `addr`. Returns the amount of bytes /// written. #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn send_to_vectored(&self, bufs: &[IoSlice<'_>], addr: &SockAddr) -> io::Result { self.send_to_vectored_with_flags(bufs, addr, 0) } /// Identical to [`send_to_vectored`] but allows for specification of /// arbitrary flags to the underlying `sendmsg`/`WSASendTo` call. /// /// [`send_to_vectored`]: Socket::send_to_vectored #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn send_to_vectored_with_flags( &self, bufs: &[IoSlice<'_>], addr: &SockAddr, flags: c_int, ) -> io::Result { sys::send_to_vectored(self.as_raw(), bufs, addr, flags) } } /// Set `SOCK_CLOEXEC` and `NO_HANDLE_INHERIT` on the `ty`pe on platforms that /// support it. #[inline(always)] fn set_common_type(ty: Type) -> Type { // On platforms that support it set `SOCK_CLOEXEC`. #[cfg(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd", ))] let ty = ty._cloexec(); // On windows set `NO_HANDLE_INHERIT`. #[cfg(windows)] let ty = ty._no_inherit(); ty } /// Set `FD_CLOEXEC` and `NOSIGPIPE` on the `socket` for platforms that need it. 
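// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `send_vectored` from above: write a length header and
// a body in one call without first copying them into a single buffer. Assumes
// a connected stream socket on a non-Redox target; the framing is illustrative.
fn send_header_and_body(socket: &socket2::Socket, body: &[u8]) -> std::io::Result<usize> {
    use std::io::IoSlice;
    let header = (body.len() as u32).to_be_bytes();
    let bufs = [IoSlice::new(&header), IoSlice::new(body)];
    socket.send_vectored(&bufs) // may still write fewer bytes than requested
}
// -----------------------------------------------------------------------------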
#[inline(always)] #[allow(clippy::unnecessary_wraps)] fn set_common_flags(socket: Socket) -> io::Result { // On platforms that don't have `SOCK_CLOEXEC` use `FD_CLOEXEC`. #[cfg(all( unix, not(any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_os = "openbsd", )) ))] socket._set_cloexec(true)?; // On Apple platforms set `NOSIGPIPE`. #[cfg(target_vendor = "apple")] socket._set_nosigpipe(true)?; Ok(socket) } /// A local interface specified by its index or an address assigned to it. /// /// `Index(0)` and `Address(Ipv4Addr::UNSPECIFIED)` are equivalent and indicate /// that an appropriate interface should be selected by the system. #[cfg(not(any( target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "redox", target_os = "solaris", )))] #[derive(Debug)] pub enum InterfaceIndexOrAddress { /// An interface index. Index(u32), /// An address assigned to an interface. Address(Ipv4Addr), } /// Socket options get/set using `SOL_SOCKET`. /// /// Additional documentation can be found in documentation of the OS. /// * Linux: /// * Windows: impl Socket { /// Get the value of the `SO_BROADCAST` option for this socket. /// /// For more information about this option, see [`set_broadcast`]. /// /// [`set_broadcast`]: Socket::set_broadcast pub fn broadcast(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_BROADCAST) .map(|broadcast| broadcast != 0) } } /// Set the value of the `SO_BROADCAST` option for this socket. /// /// When enabled, this socket is allowed to send packets to a broadcast /// address. pub fn set_broadcast(&self, broadcast: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_BROADCAST, broadcast as c_int, ) } } /// Get the value of the `SO_ERROR` option on this socket. /// /// This will retrieve the stored error in the underlying socket, clearing /// the field in the process. This can be useful for checking errors between /// calls. pub fn take_error(&self) -> io::Result> { match unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_ERROR) } { Ok(0) => Ok(None), Ok(errno) => Ok(Some(io::Error::from_raw_os_error(errno))), Err(err) => Err(err), } } /// Get the value of the `SO_KEEPALIVE` option on this socket. /// /// For more information about this option, see [`set_keepalive`]. /// /// [`set_keepalive`]: Socket::set_keepalive pub fn keepalive(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_KEEPALIVE) .map(|keepalive| keepalive != 0) } } /// Set value for the `SO_KEEPALIVE` option on this socket. /// /// Enable sending of keep-alive messages on connection-oriented sockets. pub fn set_keepalive(&self, keepalive: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_KEEPALIVE, keepalive as c_int, ) } } /// Get the value of the `SO_LINGER` option on this socket. /// /// For more information about this option, see [`set_linger`]. /// /// [`set_linger`]: Socket::set_linger pub fn linger(&self) -> io::Result> { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_LINGER) .map(from_linger) } } /// Set value for the `SO_LINGER` option on this socket. /// /// If `linger` is not `None`, a close(2) or shutdown(2) will not return /// until all queued messages for the socket have been successfully sent or /// the linger timeout has been reached. 
Otherwise, the call returns /// immediately and the closing is done in the background. When the socket /// is closed as part of exit(2), it always lingers in the background. /// /// # Notes /// /// On most OSs the duration only has a precision of seconds and will be /// silently truncated. /// /// On Apple platforms (e.g. macOS, iOS, etc) this uses `SO_LINGER_SEC`. pub fn set_linger(&self, linger: Option) -> io::Result<()> { let linger = into_linger(linger); unsafe { setsockopt(self.as_raw(), sys::SOL_SOCKET, sys::SO_LINGER, linger) } } /// Get value for the `SO_OOBINLINE` option on this socket. /// /// For more information about this option, see [`set_out_of_band_inline`]. /// /// [`set_out_of_band_inline`]: Socket::set_out_of_band_inline #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn out_of_band_inline(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_OOBINLINE) .map(|oob_inline| oob_inline != 0) } } /// Set value for the `SO_OOBINLINE` option on this socket. /// /// If this option is enabled, out-of-band data is directly placed into the /// receive data stream. Otherwise, out-of-band data is passed only when the /// `MSG_OOB` flag is set during receiving. As per RFC6093, TCP sockets /// using the Urgent mechanism are encouraged to set this flag. #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] pub fn set_out_of_band_inline(&self, oob_inline: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_OOBINLINE, oob_inline as c_int, ) } } /// Get value for the `SO_RCVBUF` option on this socket. /// /// For more information about this option, see [`set_recv_buffer_size`]. /// /// [`set_recv_buffer_size`]: Socket::set_recv_buffer_size pub fn recv_buffer_size(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_RCVBUF) .map(|size| size as usize) } } /// Set value for the `SO_RCVBUF` option on this socket. /// /// Changes the size of the operating system's receive buffer associated /// with the socket. pub fn set_recv_buffer_size(&self, size: usize) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_RCVBUF, size as c_int, ) } } /// Get value for the `SO_RCVTIMEO` option on this socket. /// /// If the returned timeout is `None`, then `read` and `recv` calls will /// block indefinitely. pub fn read_timeout(&self) -> io::Result> { sys::timeout_opt(self.as_raw(), sys::SOL_SOCKET, sys::SO_RCVTIMEO) } /// Set value for the `SO_RCVTIMEO` option on this socket. /// /// If `timeout` is `None`, then `read` and `recv` calls will block /// indefinitely. pub fn set_read_timeout(&self, duration: Option) -> io::Result<()> { sys::set_timeout_opt(self.as_raw(), sys::SOL_SOCKET, sys::SO_RCVTIMEO, duration) } /// Get the value of the `SO_REUSEADDR` option on this socket. /// /// For more information about this option, see [`set_reuse_address`]. /// /// [`set_reuse_address`]: Socket::set_reuse_address pub fn reuse_address(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_REUSEADDR) .map(|reuse| reuse != 0) } } /// Set value for the `SO_REUSEADDR` option on this socket. /// /// This indicates that futher calls to `bind` may allow reuse of local /// addresses. For IPv4 sockets this means that a socket may bind even when /// there's a socket already listening on this port. 
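// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of the `SOL_SOCKET` options documented above (and the
// matching send-side options just below): grow the receive buffer, add
// read/write timeouts, and enable a short linger on close. The sizes and
// durations are arbitrary, and the OS may round or clamp them (Linux, for
// example, typically doubles `SO_RCVBUF`, and linger has second precision).
fn tune_socket(socket: &socket2::Socket) -> std::io::Result<()> {
    use std::time::Duration;
    socket.set_recv_buffer_size(256 * 1024)?;
    println!("kernel granted {} bytes", socket.recv_buffer_size()?);
    socket.set_read_timeout(Some(Duration::from_secs(5)))?;
    socket.set_write_timeout(Some(Duration::from_secs(5)))?;
    socket.set_linger(Some(Duration::from_secs(1)))?;
    Ok(())
}
// -----------------------------------------------------------------------------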
pub fn set_reuse_address(&self, reuse: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_REUSEADDR, reuse as c_int, ) } } /// Get the value of the `SO_SNDBUF` option on this socket. /// /// For more information about this option, see [`set_send_buffer_size`]. /// /// [`set_send_buffer_size`]: Socket::set_send_buffer_size pub fn send_buffer_size(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::SOL_SOCKET, sys::SO_SNDBUF) .map(|size| size as usize) } } /// Set value for the `SO_SNDBUF` option on this socket. /// /// Changes the size of the operating system's send buffer associated with /// the socket. pub fn set_send_buffer_size(&self, size: usize) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::SOL_SOCKET, sys::SO_SNDBUF, size as c_int, ) } } /// Get value for the `SO_SNDTIMEO` option on this socket. /// /// If the returned timeout is `None`, then `write` and `send` calls will /// block indefinitely. pub fn write_timeout(&self) -> io::Result> { sys::timeout_opt(self.as_raw(), sys::SOL_SOCKET, sys::SO_SNDTIMEO) } /// Set value for the `SO_SNDTIMEO` option on this socket. /// /// If `timeout` is `None`, then `write` and `send` calls will block /// indefinitely. pub fn set_write_timeout(&self, duration: Option) -> io::Result<()> { sys::set_timeout_opt(self.as_raw(), sys::SOL_SOCKET, sys::SO_SNDTIMEO, duration) } } fn from_linger(linger: sys::linger) -> Option { if linger.l_onoff == 0 { None } else { Some(Duration::from_secs(linger.l_linger as u64)) } } fn into_linger(duration: Option) -> sys::linger { match duration { Some(duration) => sys::linger { l_onoff: 1, l_linger: duration.as_secs() as _, }, None => sys::linger { l_onoff: 0, l_linger: 0, }, } } /// Socket options for IPv4 sockets, get/set using `IPPROTO_IP`. /// /// Additional documentation can be found in documentation of the OS. /// * Linux: /// * Windows: impl Socket { /// Get the value of the `IP_HDRINCL` option on this socket. /// /// For more information about this option, see [`set_header_included`]. /// /// [`set_header_included`]: Socket::set_header_included #[cfg(all(feature = "all", not(target_os = "redox")))] #[cfg_attr(docsrs, doc(all(feature = "all", not(target_os = "redox"))))] pub fn header_included(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, sys::IP_HDRINCL) .map(|included| included != 0) } } /// Set the value of the `IP_HDRINCL` option on this socket. /// /// If enabled, the user supplies an IP header in front of the user data. /// Valid only for [`SOCK_RAW`] sockets; see [raw(7)] for more information. /// When this flag is enabled, the values set by `IP_OPTIONS`, [`IP_TTL`], /// and [`IP_TOS`] are ignored. /// /// [`SOCK_RAW`]: Type::RAW /// [raw(7)]: https://man7.org/linux/man-pages/man7/raw.7.html /// [`IP_TTL`]: Socket::set_ttl /// [`IP_TOS`]: Socket::set_tos #[cfg(all(feature = "all", not(target_os = "redox")))] #[cfg_attr(docsrs, doc(all(feature = "all", not(target_os = "redox"))))] pub fn set_header_included(&self, included: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_HDRINCL, included as c_int, ) } } /// Get the value of the `IP_TRANSPARENT` option on this socket. /// /// For more information about this option, see [`set_ip_transparent`]. 
/// /// [`set_ip_transparent`]: Socket::set_ip_transparent #[cfg(any(doc, all(feature = "all", target_os = "linux")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_os = "linux"))))] pub fn ip_transparent(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, libc::IP_TRANSPARENT) .map(|transparent| transparent != 0) } } /// Set the value of the `IP_TRANSPARENT` option on this socket. /// /// Setting this boolean option enables transparent proxying /// on this socket. This socket option allows the calling /// application to bind to a nonlocal IP address and operate /// both as a client and a server with the foreign address as /// the local endpoint. NOTE: this requires that routing be /// set up in a way that packets going to the foreign address /// are routed through the TProxy box (i.e., the system /// hosting the application that employs the IP_TRANSPARENT /// socket option). Enabling this socket option requires /// superuser privileges (the `CAP_NET_ADMIN` capability). /// /// TProxy redirection with the iptables TPROXY target also /// requires that this option be set on the redirected socket. #[cfg(any(doc, all(feature = "all", target_os = "linux")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", target_os = "linux"))))] pub fn set_ip_transparent(&self, transparent: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, libc::IP_TRANSPARENT, transparent as c_int, ) } } /// Join a multicast group using `IP_ADD_MEMBERSHIP` option on this socket. /// /// This function specifies a new multicast group for this socket to join. /// The address must be a valid multicast address, and `interface` is the /// address of the local interface with which the system should join the /// multicast group. If it's [`Ipv4Addr::UNSPECIFIED`] (`INADDR_ANY`) then /// an appropriate interface is chosen by the system. pub fn join_multicast_v4(&self, multiaddr: &Ipv4Addr, interface: &Ipv4Addr) -> io::Result<()> { let mreq = sys::IpMreq { imr_multiaddr: sys::to_in_addr(multiaddr), imr_interface: sys::to_in_addr(interface), }; unsafe { setsockopt(self.as_raw(), sys::IPPROTO_IP, sys::IP_ADD_MEMBERSHIP, mreq) } } /// Leave a multicast group using `IP_DROP_MEMBERSHIP` option on this socket. /// /// For more information about this option, see [`join_multicast_v4`]. /// /// [`join_multicast_v4`]: Socket::join_multicast_v4 pub fn leave_multicast_v4(&self, multiaddr: &Ipv4Addr, interface: &Ipv4Addr) -> io::Result<()> { let mreq = sys::IpMreq { imr_multiaddr: sys::to_in_addr(multiaddr), imr_interface: sys::to_in_addr(interface), }; unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_DROP_MEMBERSHIP, mreq, ) } } /// Join a multicast group using `IP_ADD_MEMBERSHIP` option on this socket. /// /// This function specifies a new multicast group for this socket to join. /// The address must be a valid multicast address, and `interface` specifies /// the local interface with which the system should join the multicast /// group. See [`InterfaceIndexOrAddress`]. #[cfg(not(any( target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "redox", target_os = "solaris", )))] pub fn join_multicast_v4_n( &self, multiaddr: &Ipv4Addr, interface: &InterfaceIndexOrAddress, ) -> io::Result<()> { let mreqn = sys::to_mreqn(multiaddr, interface); unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_ADD_MEMBERSHIP, mreqn, ) } } /// Leave a multicast group using `IP_DROP_MEMBERSHIP` option on this socket. 
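// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `join_multicast_v4` from above: bind a UDP socket and
// join a group on whatever interface the system picks (`INADDR_ANY`). The
// group address 239.0.0.1 and the port are illustrative.
fn join_group() -> std::io::Result<socket2::Socket> {
    use socket2::{Domain, Socket, Type};
    use std::net::Ipv4Addr;
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, None)?;
    let addr: std::net::SocketAddr = "0.0.0.0:5007".parse().unwrap();
    socket.bind(&addr.into())?;
    socket.join_multicast_v4(&Ipv4Addr::new(239, 0, 0, 1), &Ipv4Addr::UNSPECIFIED)?;
    Ok(socket)
}
// -----------------------------------------------------------------------------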
/// /// For more information about this option, see [`join_multicast_v4_n`]. /// /// [`join_multicast_v4_n`]: Socket::join_multicast_v4_n #[cfg(not(any( target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "redox", target_os = "solaris", )))] pub fn leave_multicast_v4_n( &self, multiaddr: &Ipv4Addr, interface: &InterfaceIndexOrAddress, ) -> io::Result<()> { let mreqn = sys::to_mreqn(multiaddr, interface); unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_DROP_MEMBERSHIP, mreqn, ) } } /// Get the value of the `IP_MULTICAST_IF` option for this socket. /// /// For more information about this option, see [`set_multicast_if_v4`]. /// /// [`set_multicast_if_v4`]: Socket::set_multicast_if_v4 pub fn multicast_if_v4(&self) -> io::Result { unsafe { getsockopt(self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_IF).map(sys::from_in_addr) } } /// Set the value of the `IP_MULTICAST_IF` option for this socket. /// /// Specifies the interface to use for routing multicast packets. pub fn set_multicast_if_v4(&self, interface: &Ipv4Addr) -> io::Result<()> { let interface = sys::to_in_addr(interface); unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_IF, interface, ) } } /// Get the value of the `IP_MULTICAST_LOOP` option for this socket. /// /// For more information about this option, see [`set_multicast_loop_v4`]. /// /// [`set_multicast_loop_v4`]: Socket::set_multicast_loop_v4 pub fn multicast_loop_v4(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_LOOP) .map(|loop_v4| loop_v4 != 0) } } /// Set the value of the `IP_MULTICAST_LOOP` option for this socket. /// /// If enabled, multicast packets will be looped back to the local socket. /// Note that this may not have any affect on IPv6 sockets. pub fn set_multicast_loop_v4(&self, loop_v4: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_LOOP, loop_v4 as c_int, ) } } /// Get the value of the `IP_MULTICAST_TTL` option for this socket. /// /// For more information about this option, see [`set_multicast_ttl_v4`]. /// /// [`set_multicast_ttl_v4`]: Socket::set_multicast_ttl_v4 pub fn multicast_ttl_v4(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_TTL) .map(|ttl| ttl as u32) } } /// Set the value of the `IP_MULTICAST_TTL` option for this socket. /// /// Indicates the time-to-live value of outgoing multicast packets for /// this socket. The default value is 1 which means that multicast packets /// don't leave the local network unless explicitly requested. /// /// Note that this may not have any affect on IPv6 sockets. pub fn set_multicast_ttl_v4(&self, ttl: u32) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IP, sys::IP_MULTICAST_TTL, ttl as c_int, ) } } /// Get the value of the `IP_TTL` option for this socket. /// /// For more information about this option, see [`set_ttl`]. /// /// [`set_ttl`]: Socket::set_ttl pub fn ttl(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, sys::IP_TTL).map(|ttl| ttl as u32) } } /// Set the value of the `IP_TTL` option for this socket. /// /// This value sets the time-to-live field that is used in every packet sent /// from this socket. pub fn set_ttl(&self, ttl: u32) -> io::Result<()> { unsafe { setsockopt(self.as_raw(), sys::IPPROTO_IP, sys::IP_TTL, ttl as c_int) } } /// Set the value of the `IP_TOS` option for this socket. 
/// /// This value sets the type-of-service field that is used in every packet /// sent from this socket. /// /// NOTE: /// documents that not all versions of windows support `IP_TOS`. #[cfg(not(any( target_os = "fuschia", target_os = "redox", target_os = "solaris", target_os = "illumos", )))] pub fn set_tos(&self, tos: u32) -> io::Result<()> { unsafe { setsockopt(self.as_raw(), sys::IPPROTO_IP, sys::IP_TOS, tos as c_int) } } /// Get the value of the `IP_TOS` option for this socket. /// /// For more information about this option, see [`set_tos`]. /// /// NOTE: /// documents that not all versions of windows support `IP_TOS`. /// /// [`set_tos`]: Socket::set_tos #[cfg(not(any( target_os = "fuschia", target_os = "redox", target_os = "solaris", target_os = "illumos", )))] pub fn tos(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IP, sys::IP_TOS).map(|tos| tos as u32) } } } /// Socket options for IPv6 sockets, get/set using `IPPROTO_IPV6`. /// /// Additional documentation can be found in documentation of the OS. /// * Linux: /// * Windows: impl Socket { /// Join a multicast group using `IPV6_ADD_MEMBERSHIP` option on this socket. /// /// Some OSs use `IPV6_JOIN_GROUP` for this option. /// /// This function specifies a new multicast group for this socket to join. /// The address must be a valid multicast address, and `interface` is the /// index of the interface to join/leave (or 0 to indicate any interface). pub fn join_multicast_v6(&self, multiaddr: &Ipv6Addr, interface: u32) -> io::Result<()> { let mreq = sys::Ipv6Mreq { ipv6mr_multiaddr: sys::to_in6_addr(multiaddr), // NOTE: some OSs use `c_int`, others use `c_uint`. ipv6mr_interface: interface as _, }; unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_ADD_MEMBERSHIP, mreq, ) } } /// Leave a multicast group using `IPV6_DROP_MEMBERSHIP` option on this socket. /// /// Some OSs use `IPV6_LEAVE_GROUP` for this option. /// /// For more information about this option, see [`join_multicast_v6`]. /// /// [`join_multicast_v6`]: Socket::join_multicast_v6 pub fn leave_multicast_v6(&self, multiaddr: &Ipv6Addr, interface: u32) -> io::Result<()> { let mreq = sys::Ipv6Mreq { ipv6mr_multiaddr: sys::to_in6_addr(multiaddr), // NOTE: some OSs use `c_int`, others use `c_uint`. ipv6mr_interface: interface as _, }; unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_DROP_MEMBERSHIP, mreq, ) } } /// Get the value of the `IPV6_MULTICAST_HOPS` option for this socket /// /// For more information about this option, see [`set_multicast_hops_v6`]. /// /// [`set_multicast_hops_v6`]: Socket::set_multicast_hops_v6 pub fn multicast_hops_v6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_HOPS) .map(|hops| hops as u32) } } /// Set the value of the `IPV6_MULTICAST_HOPS` option for this socket /// /// Indicates the number of "routers" multicast packets will transit for /// this socket. The default value is 1 which means that multicast packets /// don't leave the local network unless explicitly requested. pub fn set_multicast_hops_v6(&self, hops: u32) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_HOPS, hops as c_int, ) } } /// Get the value of the `IPV6_MULTICAST_IF` option for this socket. /// /// For more information about this option, see [`set_multicast_if_v6`]. 
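// --- Editorial sketch (not from the upstream socket2 sources) ---------------
// A user-side sketch of `set_tos` from above: mark outgoing packets by writing
// the IP_TOS byte. Shifting DSCP EF (46) into the upper six bits is shown only
// as an example; which values are honoured varies by platform.
fn mark_expedited(socket: &socket2::Socket) -> std::io::Result<()> {
    let tos = 46u32 << 2; // DSCP EF, ECN bits left at zero
    socket.set_tos(tos)?;
    println!("IP_TOS is now {:#04x}", socket.tos()?);
    Ok(())
}
// -----------------------------------------------------------------------------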
/// /// [`set_multicast_if_v6`]: Socket::set_multicast_if_v6 pub fn multicast_if_v6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_IF) .map(|interface| interface as u32) } } /// Set the value of the `IPV6_MULTICAST_IF` option for this socket. /// /// Specifies the interface to use for routing multicast packets. Unlike /// ipv4, this is generally required in ipv6 contexts where network routing /// prefixes may overlap. pub fn set_multicast_if_v6(&self, interface: u32) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_IF, interface as c_int, ) } } /// Get the value of the `IPV6_MULTICAST_LOOP` option for this socket. /// /// For more information about this option, see [`set_multicast_loop_v6`]. /// /// [`set_multicast_loop_v6`]: Socket::set_multicast_loop_v6 pub fn multicast_loop_v6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_LOOP) .map(|loop_v6| loop_v6 != 0) } } /// Set the value of the `IPV6_MULTICAST_LOOP` option for this socket. /// /// Controls whether this socket sees the multicast packets it sends itself. /// Note that this may not have any affect on IPv4 sockets. pub fn set_multicast_loop_v6(&self, loop_v6: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_MULTICAST_LOOP, loop_v6 as c_int, ) } } /// Get the value of the `IPV6_UNICAST_HOPS` option for this socket. /// /// Specifies the hop limit for ipv6 unicast packets pub fn unicast_hops_v6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_UNICAST_HOPS) .map(|hops| hops as u32) } } /// Set the value for the `IPV6_UNICAST_HOPS` option on this socket. /// /// Specifies the hop limit for ipv6 unicast packets pub fn set_unicast_hops_v6(&self, hops: u32) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_UNICAST_HOPS, hops as c_int, ) } } /// Get the value of the `IPV6_V6ONLY` option for this socket. /// /// For more information about this option, see [`set_only_v6`]. /// /// [`set_only_v6`]: Socket::set_only_v6 pub fn only_v6(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_V6ONLY) .map(|only_v6| only_v6 != 0) } } /// Set the value for the `IPV6_V6ONLY` option on this socket. /// /// If this is set to `true` then the socket is restricted to sending and /// receiving IPv6 packets only. In this case two IPv4 and IPv6 applications /// can bind the same port at the same time. /// /// If this is set to `false` then the socket can be used to send and /// receive packets from an IPv4-mapped IPv6 address. pub fn set_only_v6(&self, only_v6: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_IPV6, sys::IPV6_V6ONLY, only_v6 as c_int, ) } } } /// Socket options for TCP sockets, get/set using `IPPROTO_TCP`. /// /// Additional documentation can be found in documentation of the OS. /// * Linux: /// * Windows: impl Socket { /// Get the value of the `TCP_KEEPIDLE` option on this socket. /// /// This returns the value of `TCP_KEEPALIVE` on macOS and iOS and `TCP_KEEPIDLE` on all other /// supported Unix operating systems. 
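A sketch of the remaining IPv6-level options shown above; the interface index and hop limit are illustrative values, and `IPV6_V6ONLY` is normally set before the socket is bound:

```rust
use socket2::{Domain, Protocol, Socket, Type};

fn main() -> std::io::Result<()> {
    let socket = Socket::new(Domain::IPV6, Type::DGRAM, Some(Protocol::UDP))?;

    // Route multicast through interface index 0 (let the OS choose).
    socket.set_multicast_if_v6(0)?;
    // Hop limit for ordinary (unicast) packets.
    socket.set_unicast_hops_v6(64)?;
    // Restrict the socket to IPv6 traffic only; do this before `bind`.
    socket.set_only_v6(true)?;
    Ok(())
}
```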
#[cfg(any( doc, all( feature = "all", not(any(windows, target_os = "haiku", target_os = "openbsd")) ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", not(any(windows, target_os = "haiku", target_os = "openbsd")) ))) )] pub fn keepalive_time(&self) -> io::Result { sys::keepalive_time(self.as_raw()) } /// Get the value of the `TCP_KEEPINTVL` option on this socket. /// /// For more information about this option, see [`set_tcp_keepalive`]. /// /// [`set_tcp_keepalive`]: Socket::set_tcp_keepalive #[cfg(all( feature = "all", any( doc, target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))) )] pub fn keepalive_interval(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_TCP, sys::TCP_KEEPINTVL) .map(|secs| Duration::from_secs(secs as u64)) } } /// Get the value of the `TCP_KEEPCNT` option on this socket. /// /// For more information about this option, see [`set_tcp_keepalive`]. /// /// [`set_tcp_keepalive`]: Socket::set_tcp_keepalive #[cfg(all( feature = "all", any( doc, target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "android", target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "illumos", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))) )] pub fn keepalive_retries(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_TCP, sys::TCP_KEEPCNT) .map(|retries| retries as u32) } } /// Set parameters configuring TCP keepalive probes for this socket. /// /// The supported parameters depend on the operating system, and are /// configured using the [`TcpKeepalive`] struct. At a minimum, all systems /// support configuring the [keepalive time]: the time after which the OS /// will start sending keepalive messages on an idle connection. /// /// [keepalive time]: TcpKeepalive::with_time /// /// # Notes /// /// * This will enable `SO_KEEPALIVE` on this socket, if it is not already /// enabled. /// * On some platforms, such as Windows, any keepalive parameters *not* /// configured by the `TcpKeepalive` struct passed to this function may be /// overwritten with their default values. Therefore, this function should /// either only be called once per socket, or the same parameters should /// be passed every time it is called. /// /// # Examples /// /// ``` /// use std::time::Duration; /// /// use socket2::{Socket, TcpKeepalive, Domain, Type}; /// /// # fn main() -> std::io::Result<()> { /// let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?; /// let keepalive = TcpKeepalive::new() /// .with_time(Duration::from_secs(4)); /// // Depending on the target operating system, we may also be able to /// // configure the keepalive probe interval and/or the number of /// // retries here as well. 
/// /// socket.set_tcp_keepalive(&keepalive)?; /// # Ok(()) } /// ``` /// pub fn set_tcp_keepalive(&self, params: &TcpKeepalive) -> io::Result<()> { self.set_keepalive(true)?; sys::set_tcp_keepalive(self.as_raw(), params) } /// Get the value of the `TCP_NODELAY` option on this socket. /// /// For more information about this option, see [`set_nodelay`]. /// /// [`set_nodelay`]: Socket::set_nodelay pub fn nodelay(&self) -> io::Result { unsafe { getsockopt::(self.as_raw(), sys::IPPROTO_TCP, sys::TCP_NODELAY) .map(|nodelay| nodelay != 0) } } /// Set the value of the `TCP_NODELAY` option on this socket. /// /// If set, this option disables the Nagle algorithm. This means that /// segments are always sent as soon as possible, even if there is only a /// small amount of data. When not set, data is buffered until there is a /// sufficient amount to send out, thereby avoiding the frequent sending of /// small packets. pub fn set_nodelay(&self, nodelay: bool) -> io::Result<()> { unsafe { setsockopt( self.as_raw(), sys::IPPROTO_TCP, sys::TCP_NODELAY, nodelay as c_int, ) } } } impl Read for Socket { fn read(&mut self, buf: &mut [u8]) -> io::Result { // Safety: the `recv` implementation promises not to write uninitialised // bytes to the `buf`fer, so this casting is safe. let buf = unsafe { &mut *(buf as *mut [u8] as *mut [MaybeUninit]) }; self.recv(buf) } #[cfg(not(target_os = "redox"))] fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result { // Safety: both `IoSliceMut` and `MaybeUninitSlice` promise to have the // same layout, that of `iovec`/`WSABUF`. Furthermore `recv_vectored` // promises to not write unitialised bytes to the `bufs` and pass it // directly to the `recvmsg` system call, so this is safe. let bufs = unsafe { &mut *(bufs as *mut [IoSliceMut<'_>] as *mut [MaybeUninitSlice<'_>]) }; self.recv_vectored(bufs).map(|(n, _)| n) } } impl<'a> Read for &'a Socket { fn read(&mut self, buf: &mut [u8]) -> io::Result { // Safety: see other `Read::read` impl. let buf = unsafe { &mut *(buf as *mut [u8] as *mut [MaybeUninit]) }; self.recv(buf) } #[cfg(not(target_os = "redox"))] fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result { // Safety: see other `Read::read` impl. let bufs = unsafe { &mut *(bufs as *mut [IoSliceMut<'_>] as *mut [MaybeUninitSlice<'_>]) }; self.recv_vectored(bufs).map(|(n, _)| n) } } impl Write for Socket { fn write(&mut self, buf: &[u8]) -> io::Result { self.send(buf) } #[cfg(not(target_os = "redox"))] fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> io::Result { self.send_vectored(bufs) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } impl<'a> Write for &'a Socket { fn write(&mut self, buf: &[u8]) -> io::Result { self.send(buf) } #[cfg(not(target_os = "redox"))] fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> io::Result { self.send_vectored(bufs) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } impl fmt::Debug for Socket { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Socket") .field("raw", &self.as_raw()) .field("local_addr", &self.local_addr().ok()) .field("peer_addr", &self.peer_addr().ok()) .finish() } } from!(net::TcpStream, Socket); from!(net::TcpListener, Socket); from!(net::UdpSocket, Socket); from!(Socket, net::TcpStream); from!(Socket, net::TcpListener); from!(Socket, net::UdpSocket); vendor/socket2/src/lib.rs0000664000175000017500000003216514172417313016136 0ustar mwhudsonmwhudson// Copyright 2015 The Rust Project Developers. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Utilities for creating and using sockets. //! //! The goal of this crate is to create and use a socket using advanced //! configuration options (those that are not available in the types in the //! standard library) without using any unsafe code. //! //! This crate provides as direct as possible access to the system's //! functionality for sockets, this means little effort to provide //! cross-platform utilities. It is up to the user to know how to use sockets //! when using this crate. *If you don't know how to create a socket using //! libc/system calls then this crate is not for you*. Most, if not all, //! functions directly relate to the equivalent system call with no error //! handling applied, so no handling errors such as [`EINTR`]. As a result using //! this crate can be a little wordy, but it should give you maximal flexibility //! over configuration of sockets. //! //! [`EINTR`]: std::io::ErrorKind::Interrupted //! //! # Examples //! //! ```no_run //! # fn main() -> std::io::Result<()> { //! use std::net::{SocketAddr, TcpListener}; //! use socket2::{Socket, Domain, Type}; //! //! // Create a TCP listener bound to two addresses. //! let socket = Socket::new(Domain::IPV6, Type::STREAM, None)?; //! //! socket.set_only_v6(false)?; //! let address: SocketAddr = "[::1]:12345".parse().unwrap(); //! socket.bind(&address.into())?; //! socket.listen(128)?; //! //! let listener: TcpListener = socket.into(); //! // ... //! # drop(listener); //! # Ok(()) } //! ``` //! //! ## Features //! //! This crate has a single feature `all`, which enables all functions even ones //! that are not available on all OSs. #![doc(html_root_url = "https://docs.rs/socket2/0.3")] #![deny(missing_docs, missing_debug_implementations, rust_2018_idioms)] // Show required OS/features on docs.rs. #![cfg_attr(docsrs, feature(doc_cfg))] // Disallow warnings when running tests. #![cfg_attr(test, deny(warnings))] // Disallow warnings in examples. #![doc(test(attr(deny(warnings))))] use std::fmt; use std::mem::MaybeUninit; use std::net::SocketAddr; use std::ops::{Deref, DerefMut}; use std::time::Duration; /// Macro to implement `fmt::Debug` for a type, printing the constant names /// rather than a number. /// /// Note this is used in the `sys` module and thus must be defined before /// defining the modules. macro_rules! impl_debug { ( // Type name for which to implement `fmt::Debug`. $type: path, $( $(#[$target: meta])* // The flag(s) to check. // Need to specific the libc crate because Windows doesn't use // `libc` but `winapi`. $libc: ident :: $flag: ident ),+ $(,)* ) => { impl std::fmt::Debug for $type { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { let string = match self.0 { $( $(#[$target])* $libc :: $flag => stringify!($flag), )+ n => return write!(f, "{}", n), }; f.write_str(string) } } }; } /// Macro to convert from one network type to another. macro_rules! 
from { ($from: ty, $for: ty) => { impl From<$from> for $for { fn from(socket: $from) -> $for { #[cfg(unix)] unsafe { <$for>::from_raw_fd(socket.into_raw_fd()) } #[cfg(windows)] unsafe { <$for>::from_raw_socket(socket.into_raw_socket()) } } } }; } mod sockaddr; mod socket; mod sockref; #[cfg_attr(unix, path = "sys/unix.rs")] #[cfg_attr(windows, path = "sys/windows.rs")] mod sys; #[cfg(not(any(windows, unix)))] compile_error!("Socket2 doesn't support the compile target"); use sys::c_int; pub use sockaddr::SockAddr; pub use socket::Socket; pub use sockref::SockRef; #[cfg(not(any( target_os = "haiku", target_os = "illumos", target_os = "netbsd", target_os = "redox", target_os = "solaris", )))] pub use socket::InterfaceIndexOrAddress; /// Specification of the communication domain for a socket. /// /// This is a newtype wrapper around an integer which provides a nicer API in /// addition to an injection point for documentation. Convenience constants such /// as [`Domain::IPV4`], [`Domain::IPV6`], etc, are provided to avoid reaching /// into libc for various constants. /// /// This type is freely interconvertible with C's `int` type, however, if a raw /// value needs to be provided. #[derive(Copy, Clone, Eq, PartialEq)] pub struct Domain(c_int); impl Domain { /// Domain for IPv4 communication, corresponding to `AF_INET`. pub const IPV4: Domain = Domain(sys::AF_INET); /// Domain for IPv6 communication, corresponding to `AF_INET6`. pub const IPV6: Domain = Domain(sys::AF_INET6); /// Returns the correct domain for `address`. pub const fn for_address(address: SocketAddr) -> Domain { match address { SocketAddr::V4(_) => Domain::IPV4, SocketAddr::V6(_) => Domain::IPV6, } } } impl From for Domain { fn from(d: c_int) -> Domain { Domain(d) } } impl From for c_int { fn from(d: Domain) -> c_int { d.0 } } /// Specification of communication semantics on a socket. /// /// This is a newtype wrapper around an integer which provides a nicer API in /// addition to an injection point for documentation. Convenience constants such /// as [`Type::STREAM`], [`Type::DGRAM`], etc, are provided to avoid reaching /// into libc for various constants. /// /// This type is freely interconvertible with C's `int` type, however, if a raw /// value needs to be provided. #[derive(Copy, Clone, Eq, PartialEq)] pub struct Type(c_int); impl Type { /// Type corresponding to `SOCK_STREAM`. /// /// Used for protocols such as TCP. pub const STREAM: Type = Type(sys::SOCK_STREAM); /// Type corresponding to `SOCK_DGRAM`. /// /// Used for protocols such as UDP. pub const DGRAM: Type = Type(sys::SOCK_DGRAM); /// Type corresponding to `SOCK_SEQPACKET`. #[cfg(feature = "all")] #[cfg_attr(docsrs, doc(cfg(feature = "all")))] pub const SEQPACKET: Type = Type(sys::SOCK_SEQPACKET); /// Type corresponding to `SOCK_RAW`. #[cfg(all(feature = "all", not(target_os = "redox")))] #[cfg_attr(docsrs, doc(cfg(all(feature = "all", not(target_os = "redox")))))] pub const RAW: Type = Type(sys::SOCK_RAW); } impl From for Type { fn from(t: c_int) -> Type { Type(t) } } impl From for c_int { fn from(t: Type) -> c_int { t.0 } } /// Protocol specification used for creating sockets via `Socket::new`. /// /// This is a newtype wrapper around an integer which provides a nicer API in /// addition to an injection point for documentation. /// /// This type is freely interconvertible with C's `int` type, however, if a raw /// value needs to be provided. #[derive(Copy, Clone, Eq, PartialEq)] pub struct Protocol(c_int); impl Protocol { /// Protocol corresponding to `ICMPv4`. 
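As a small usage sketch of the newtypes defined in this module (not from the crate's docs), `Domain::for_address` picks the address family to match a peer address before the socket is created, and the `From` conversions generated by the `from!` macro hand the configured socket to std types. The address is illustrative, and `connect` will of course fail unless something is listening there:

```rust
use std::net::{SocketAddr, TcpStream};

use socket2::{Domain, Protocol, Socket, Type};

fn main() -> std::io::Result<()> {
    let addr: SocketAddr = "[::1]:9000".parse().unwrap();

    // `Domain::IPV6` here, because the address is an IPv6 socket address.
    let domain = Domain::for_address(addr);
    let socket = Socket::new(domain, Type::STREAM, Some(Protocol::TCP))?;
    socket.connect(&addr.into())?;

    // `From<Socket> for TcpStream` lets the configured socket be used
    // wherever the standard library types are expected.
    let _stream: TcpStream = socket.into();
    Ok(())
}
```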
pub const ICMPV4: Protocol = Protocol(sys::IPPROTO_ICMP); /// Protocol corresponding to `ICMPv6`. pub const ICMPV6: Protocol = Protocol(sys::IPPROTO_ICMPV6); /// Protocol corresponding to `TCP`. pub const TCP: Protocol = Protocol(sys::IPPROTO_TCP); /// Protocol corresponding to `UDP`. pub const UDP: Protocol = Protocol(sys::IPPROTO_UDP); } impl From for Protocol { fn from(p: c_int) -> Protocol { Protocol(p) } } impl From for c_int { fn from(p: Protocol) -> c_int { p.0 } } /// Flags for incoming messages. /// /// Flags provide additional information about incoming messages. #[cfg(not(target_os = "redox"))] #[cfg_attr(docsrs, doc(cfg(not(target_os = "redox"))))] #[derive(Copy, Clone, Eq, PartialEq)] pub struct RecvFlags(c_int); #[cfg(not(target_os = "redox"))] impl RecvFlags { /// Check if the message contains a truncated datagram. /// /// This flag is only used for datagram-based sockets, /// not for stream sockets. /// /// On Unix this corresponds to the `MSG_TRUNC` flag. /// On Windows this corresponds to the `WSAEMSGSIZE` error code. pub const fn is_truncated(self) -> bool { self.0 & sys::MSG_TRUNC != 0 } } /// A version of [`IoSliceMut`] that allows the buffer to be uninitialised. /// /// [`IoSliceMut`]: std::io::IoSliceMut #[repr(transparent)] pub struct MaybeUninitSlice<'a>(sys::MaybeUninitSlice<'a>); impl<'a> fmt::Debug for MaybeUninitSlice<'a> { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Debug::fmt(self.0.as_slice(), fmt) } } impl<'a> MaybeUninitSlice<'a> { /// Creates a new `MaybeUninitSlice` wrapping a byte slice. /// /// # Panics /// /// Panics on Windows if the slice is larger than 4GB. pub fn new(buf: &'a mut [MaybeUninit]) -> MaybeUninitSlice<'a> { MaybeUninitSlice(sys::MaybeUninitSlice::new(buf)) } } impl<'a> Deref for MaybeUninitSlice<'a> { type Target = [MaybeUninit]; fn deref(&self) -> &[MaybeUninit] { self.0.as_slice() } } impl<'a> DerefMut for MaybeUninitSlice<'a> { fn deref_mut(&mut self) -> &mut [MaybeUninit] { self.0.as_mut_slice() } } /// Configures a socket's TCP keepalive parameters. /// /// See [`Socket::set_tcp_keepalive`]. #[derive(Debug, Clone)] pub struct TcpKeepalive { time: Option, #[cfg(not(any(target_os = "redox", target_os = "solaris")))] interval: Option, #[cfg(not(any(target_os = "redox", target_os = "solaris", target_os = "windows")))] retries: Option, } impl TcpKeepalive { /// Returns a new, empty set of TCP keepalive parameters. pub const fn new() -> TcpKeepalive { TcpKeepalive { time: None, #[cfg(not(any(target_os = "redox", target_os = "solaris")))] interval: None, #[cfg(not(any(target_os = "redox", target_os = "solaris", target_os = "windows")))] retries: None, } } /// Set the amount of time after which TCP keepalive probes will be sent on /// idle connections. /// /// This will set `TCP_KEEPALIVE` on macOS and iOS, and /// `TCP_KEEPIDLE` on all other Unix operating systems, except /// OpenBSD and Haiku which don't support any way to set this /// option. On Windows, this sets the value of the `tcp_keepalive` /// struct's `keepalivetime` field. /// /// Some platforms specify this value in seconds, so sub-second /// specifications may be omitted. pub const fn with_time(self, time: Duration) -> Self { Self { time: Some(time), ..self } } /// Set the value of the `TCP_KEEPINTVL` option. On Windows, this sets the /// value of the `tcp_keepalive` struct's `keepaliveinterval` field. /// /// Sets the time interval between TCP keepalive probes. 
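A sketch of a fuller keepalive configuration using this builder. Note that `with_interval` and `with_retries` (defined just below) are only available behind the `all` feature and on a subset of targets, so this assumes something like Linux with `features = ["all"]`; the durations and retry count are arbitrary illustrative values:

```rust
use std::time::Duration;

use socket2::{Domain, Socket, TcpKeepalive, Type};

fn main() -> std::io::Result<()> {
    let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?;

    let keepalive = TcpKeepalive::new()
        .with_time(Duration::from_secs(60)) // idle time before the first probe
        .with_interval(Duration::from_secs(10)) // gap between unanswered probes
        .with_retries(5); // probes sent before the connection is dropped

    // Also enables `SO_KEEPALIVE` on the socket.
    socket.set_tcp_keepalive(&keepalive)?;
    Ok(())
}
```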
/// /// Some platforms specify this value in seconds, so sub-second /// specifications may be omitted. #[cfg(all( feature = "all", any( target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_vendor = "apple", windows, ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_vendor = "apple", windows, ) ))) )] pub const fn with_interval(self, interval: Duration) -> Self { Self { interval: Some(interval), ..self } } /// Set the value of the `TCP_KEEPCNT` option. /// /// Set the maximum number of TCP keepalive probes that will be sent before /// dropping a connection, if TCP keepalive is enabled on this socket. #[cfg(all( feature = "all", any( doc, target_os = "dragonfly", target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))] #[cfg_attr( docsrs, doc(cfg(all( feature = "all", any( target_os = "freebsd", target_os = "fuchsia", target_os = "linux", target_os = "netbsd", target_vendor = "apple", ) ))) )] pub const fn with_retries(self, retries: u32) -> Self { Self { retries: Some(retries), ..self } } } vendor/socket2/LICENSE-MIT0000664000175000017500000000204114160055207015652 0ustar mwhudsonmwhudsonCopyright (c) 2014 Alex Crichton Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/socket2/README.md0000664000175000017500000000505214160055207015502 0ustar mwhudsonmwhudson# Socket2 Socket2 is a crate that provides utilities for creating and using sockets. The goal of this crate is to create and use a socket using advanced configuration options (those that are not available in the types in the standard library) without using any unsafe code. This crate provides as direct as possible access to the system's functionality for sockets, this means little effort to provide cross-platform utilities. It is up to the user to know how to use sockets when using this crate. *If you don't know how to create a socket using libc/system calls then this crate is not for you*. Most, if not all, functions directly relate to the equivalent system call with no error handling applied, so no handling errors such as `EINTR`. As a result using this crate can be a little wordy, but it should give you maximal flexibility over configuration of sockets. See the [API documentation] for more. [API documentation]: https://docs.rs/socket2 # Two branches Currently Socket2 supports two versions: v0.4 and v0.3. 
Version 0.4 is developed in the master branch, version 0.3 in the [v0.3.x branch]. [v0.3.x branch]: https://github.com/rust-lang/socket2/tree/v0.3.x # OS support Socket2 attempts to support the same OS/architectures as Rust does, see https://doc.rust-lang.org/nightly/rustc/platform-support.html. However this is not always possible, below is current list of support OSs. *If your favorite OS is not on the list consider contributing it! See [issue #78].* [issue #78]: https://github.com/rust-lang/socket2/issues/78 ### Tier 1 These OSs are tested with each commit in the CI and must always pass the tests. All functions/types/etc., excluding ones behind the `all` feature, must work on these OSs. * Linux * macOS * Windows ### Tier 2 These OSs are currently build in the CI, but not tested. Not all functions/types/etc. may work on these OSs, even ones **not** behind the `all` feature flag. * Android * FreeBSD * Fuchsia * iOS * illumos * NetBSD * Redox * Solaris # Minimum Supported Rust Version (MSRV) Socket2 uses 1.46.0 as MSRV. # License This project is licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. vendor/textwrap/0000775000175000017500000000000014160055207014525 5ustar mwhudsonmwhudsonvendor/textwrap/.cargo-checksum.json0000664000175000017500000000013114160055207020364 0ustar mwhudsonmwhudson{"files":{},"package":"d326610f408c7a4eb6f51c37c330e496b08506c9457c9d34287ecc38809fb060"}vendor/textwrap/LICENSE0000664000175000017500000000205714160055207015536 0ustar mwhudsonmwhudsonMIT License Copyright (c) 2016 Martin Geisler Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. vendor/textwrap/Cargo.toml0000664000175000017500000000336114160055207016460 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g. crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "textwrap" version = "0.11.0" authors = ["Martin Geisler "] exclude = [".dir-locals.el"] description = "Textwrap is a small library for word wrapping, indenting, and\ndedenting strings.\n\nYou can use it to format strings (such as help and error messages) for\ndisplay in commandline applications. It is designed to be efficient\nand handle Unicode characters correctly.\n" documentation = "https://docs.rs/textwrap/" readme = "README.md" keywords = ["text", "formatting", "wrap", "typesetting", "hyphenation"] categories = ["text-processing", "command-line-interface"] license = "MIT" repository = "https://github.com/mgeisler/textwrap" [package.metadata.docs.rs] all-features = true [dependencies.hyphenation] version = "0.7.1" features = ["embed_all"] optional = true [dependencies.term_size] version = "0.3.0" optional = true [dependencies.unicode-width] version = "0.1.3" [dev-dependencies.lipsum] version = "0.6" [dev-dependencies.rand] version = "0.6" [dev-dependencies.rand_xorshift] version = "0.1" [dev-dependencies.version-sync] version = "0.6" [badges.appveyor] repository = "mgeisler/textwrap" [badges.codecov] repository = "mgeisler/textwrap" [badges.travis-ci] repository = "mgeisler/textwrap" vendor/textwrap/benches/0000775000175000017500000000000014160055207016134 5ustar mwhudsonmwhudsonvendor/textwrap/benches/linear.rs0000664000175000017500000000643214160055207017761 0ustar mwhudsonmwhudson#![feature(test)] // The benchmarks here verify that the complexity grows as O(*n*) // where *n* is the number of characters in the text to be wrapped. #[cfg(feature = "hyphenation")] extern crate hyphenation; extern crate lipsum; extern crate rand; extern crate rand_xorshift; extern crate test; extern crate textwrap; #[cfg(feature = "hyphenation")] use hyphenation::{Language, Load, Standard}; use lipsum::MarkovChain; use rand::SeedableRng; use rand_xorshift::XorShiftRng; use test::Bencher; #[cfg(feature = "hyphenation")] use textwrap::Wrapper; const LINE_LENGTH: usize = 60; /// Generate a lorem ipsum text with the given number of characters. fn lorem_ipsum(length: usize) -> String { // The average word length in the lorem ipsum text is somewhere // between 6 and 7. So we conservatively divide by 5 to have a // long enough text that we can truncate below. 
let rng = XorShiftRng::seed_from_u64(0); let mut chain = MarkovChain::new_with_rng(rng); chain.learn(lipsum::LOREM_IPSUM); chain.learn(lipsum::LIBER_PRIMUS); let mut text = chain.generate_from(length / 5, ("Lorem", "ipsum")); text.truncate(length); text } #[bench] fn fill_100(b: &mut Bencher) { let text = &lorem_ipsum(100); b.iter(|| textwrap::fill(text, LINE_LENGTH)) } #[bench] fn fill_200(b: &mut Bencher) { let text = &lorem_ipsum(200); b.iter(|| textwrap::fill(text, LINE_LENGTH)) } #[bench] fn fill_400(b: &mut Bencher) { let text = &lorem_ipsum(400); b.iter(|| textwrap::fill(text, LINE_LENGTH)) } #[bench] fn fill_800(b: &mut Bencher) { let text = &lorem_ipsum(800); b.iter(|| textwrap::fill(text, LINE_LENGTH)) } #[bench] fn wrap_100(b: &mut Bencher) { let text = &lorem_ipsum(100); b.iter(|| textwrap::wrap(text, LINE_LENGTH)) } #[bench] fn wrap_200(b: &mut Bencher) { let text = &lorem_ipsum(200); b.iter(|| textwrap::wrap(text, LINE_LENGTH)) } #[bench] fn wrap_400(b: &mut Bencher) { let text = &lorem_ipsum(400); b.iter(|| textwrap::wrap(text, LINE_LENGTH)) } #[bench] fn wrap_800(b: &mut Bencher) { let text = &lorem_ipsum(800); b.iter(|| textwrap::wrap(text, LINE_LENGTH)) } #[bench] #[cfg(feature = "hyphenation")] fn hyphenation_fill_100(b: &mut Bencher) { let text = &lorem_ipsum(100); let dictionary = Standard::from_embedded(Language::Latin).unwrap(); let wrapper = Wrapper::with_splitter(LINE_LENGTH, dictionary); b.iter(|| wrapper.fill(text)) } #[bench] #[cfg(feature = "hyphenation")] fn hyphenation_fill_200(b: &mut Bencher) { let text = &lorem_ipsum(200); let dictionary = Standard::from_embedded(Language::Latin).unwrap(); let wrapper = Wrapper::with_splitter(LINE_LENGTH, dictionary); b.iter(|| wrapper.fill(text)) } #[bench] #[cfg(feature = "hyphenation")] fn hyphenation_fill_400(b: &mut Bencher) { let text = &lorem_ipsum(400); let dictionary = Standard::from_embedded(Language::Latin).unwrap(); let wrapper = Wrapper::with_splitter(LINE_LENGTH, dictionary); b.iter(|| wrapper.fill(text)) } #[bench] #[cfg(feature = "hyphenation")] fn hyphenation_fill_800(b: &mut Bencher) { let text = &lorem_ipsum(800); let dictionary = Standard::from_embedded(Language::Latin).unwrap(); let wrapper = Wrapper::with_splitter(LINE_LENGTH, dictionary); b.iter(|| wrapper.fill(text)) } vendor/textwrap/src/0000775000175000017500000000000014160055207015314 5ustar mwhudsonmwhudsonvendor/textwrap/src/splitting.rs0000664000175000017500000001176214160055207017706 0ustar mwhudsonmwhudson//! Word splitting functionality. //! //! To wrap text into lines, long words sometimes need to be split //! across lines. The [`WordSplitter`] trait defines this //! functionality. [`HyphenSplitter`] is the default implementation of //! this treat: it will simply split words on existing hyphens. #[cfg(feature = "hyphenation")] use hyphenation::{Hyphenator, Standard}; /// An interface for splitting words. /// /// When the [`wrap_iter`] method will try to fit text into a line, it /// will eventually find a word that it too large the current text /// width. It will then call the currently configured `WordSplitter` to /// have it attempt to split the word into smaller parts. This trait /// describes that functionality via the [`split`] method. /// /// If the `textwrap` crate has been compiled with the `hyphenation` /// feature enabled, you will find an implementation of `WordSplitter` /// by the `hyphenation::language::Corpus` struct. Use this struct for /// language-aware hyphenation. See the [`hyphenation` documentation] /// for details. 
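Since the `WordSplitter` trait described here (and defined just below) is the extension point for custom splitting policies, here is a sketch of a hand-rolled splitter. The `UnderscoreSplitter` name and behaviour are invented for illustration: it offers a break point after each underscore in long identifiers.

```rust
use textwrap::{WordSplitter, Wrapper};

#[derive(Clone, Debug)]
struct UnderscoreSplitter;

impl WordSplitter for UnderscoreSplitter {
    fn split<'w>(&self, word: &'w str) -> Vec<(&'w str, &'w str, &'w str)> {
        let mut splits = Vec::new();
        // Offer a break point right after every underscore, smallest head first.
        for (idx, _) in word.match_indices('_') {
            let (head, tail) = word.split_at(idx + 1);
            splits.push((head, "", tail));
        }
        // The final option is always "no split at all".
        splits.push((word, "", ""));
        splits
    }
}

fn main() {
    let wrapper = Wrapper::with_splitter(16, UnderscoreSplitter);
    println!("{}", wrapper.fill("call some_very_long_function_name here"));
}
```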
/// /// [`wrap_iter`]: ../struct.Wrapper.html#method.wrap_iter /// [`split`]: #tymethod.split /// [`hyphenation` documentation]: https://docs.rs/hyphenation/ pub trait WordSplitter { /// Return all possible splits of word. Each split is a triple /// with a head, a hyphen, and a tail where `head + &hyphen + /// &tail == word`. The hyphen can be empty if there is already a /// hyphen in the head. /// /// The splits should go from smallest to longest and should /// include no split at all. So the word "technology" could be /// split into /// /// ```no_run /// vec![("tech", "-", "nology"), /// ("technol", "-", "ogy"), /// ("technolo", "-", "gy"), /// ("technology", "", "")]; /// ``` fn split<'w>(&self, word: &'w str) -> Vec<(&'w str, &'w str, &'w str)>; } /// Use this as a [`Wrapper.splitter`] to avoid any kind of /// hyphenation: /// /// ``` /// use textwrap::{Wrapper, NoHyphenation}; /// /// let wrapper = Wrapper::with_splitter(8, NoHyphenation); /// assert_eq!(wrapper.wrap("foo bar-baz"), vec!["foo", "bar-baz"]); /// ``` /// /// [`Wrapper.splitter`]: ../struct.Wrapper.html#structfield.splitter #[derive(Clone, Debug)] pub struct NoHyphenation; /// `NoHyphenation` implements `WordSplitter` by not splitting the /// word at all. impl WordSplitter for NoHyphenation { fn split<'w>(&self, word: &'w str) -> Vec<(&'w str, &'w str, &'w str)> { vec![(word, "", "")] } } /// Simple and default way to split words: splitting on existing /// hyphens only. /// /// You probably don't need to use this type since it's already used /// by default by `Wrapper::new`. #[derive(Clone, Debug)] pub struct HyphenSplitter; /// `HyphenSplitter` is the default `WordSplitter` used by /// `Wrapper::new`. It will split words on any existing hyphens in the /// word. /// /// It will only use hyphens that are surrounded by alphanumeric /// characters, which prevents a word like "--foo-bar" from being /// split on the first or second hyphen. impl WordSplitter for HyphenSplitter { fn split<'w>(&self, word: &'w str) -> Vec<(&'w str, &'w str, &'w str)> { let mut triples = Vec::new(); // Split on hyphens, smallest split first. We only use hyphens // that are surrounded by alphanumeric characters. This is to // avoid splitting on repeated hyphens, such as those found in // --foo-bar. let mut char_indices = word.char_indices(); // Early return if the word is empty. let mut prev = match char_indices.next() { None => return vec![(word, "", "")], Some((_, ch)) => ch, }; // Find current word, or return early if the word only has a // single character. let (mut idx, mut cur) = match char_indices.next() { None => return vec![(word, "", "")], Some((idx, cur)) => (idx, cur), }; for (i, next) in char_indices { if prev.is_alphanumeric() && cur == '-' && next.is_alphanumeric() { let (head, tail) = word.split_at(idx + 1); triples.push((head, "", tail)); } prev = cur; idx = i; cur = next; } // Finally option is no split at all. triples.push((word, "", "")); triples } } /// A hyphenation dictionary can be used to do language-specific /// hyphenation using patterns from the hyphenation crate. #[cfg(feature = "hyphenation")] impl WordSplitter for Standard { fn split<'w>(&self, word: &'w str) -> Vec<(&'w str, &'w str, &'w str)> { // Find splits based on language dictionary. let mut triples = Vec::new(); for n in self.hyphenate(word).breaks { let (head, tail) = word.split_at(n); let hyphen = if head.ends_with('-') { "" } else { "-" }; triples.push((head, hyphen, tail)); } // Finally option is no split at all. 
triples.push((word, "", "")); triples } } vendor/textwrap/src/indentation.rs0000664000175000017500000001647614160055207020214 0ustar mwhudsonmwhudson//! Functions related to adding and removing indentation from lines of //! text. //! //! The functions here can be used to uniformly indent or dedent //! (unindent) word wrapped lines of text. /// Add prefix to each non-empty line. /// /// ``` /// use textwrap::indent; /// /// assert_eq!(indent(" /// Foo /// Bar /// ", " "), " /// Foo /// Bar /// "); /// ``` /// /// Empty lines (lines consisting only of whitespace) are not indented /// and the whitespace is replaced by a single newline (`\n`): /// /// ``` /// use textwrap::indent; /// /// assert_eq!(indent(" /// Foo /// /// Bar /// \t /// Baz /// ", "->"), " /// ->Foo /// /// ->Bar /// /// ->Baz /// "); /// ``` /// /// Leading and trailing whitespace on non-empty lines is kept /// unchanged: /// /// ``` /// use textwrap::indent; /// /// assert_eq!(indent(" \t Foo ", "->"), "-> \t Foo \n"); /// ``` pub fn indent(s: &str, prefix: &str) -> String { let mut result = String::new(); for line in s.lines() { if line.chars().any(|c| !c.is_whitespace()) { result.push_str(prefix); result.push_str(line); } result.push('\n'); } result } /// Removes common leading whitespace from each line. /// /// This function will look at each non-empty line and determine the /// maximum amount of whitespace that can be removed from all lines: /// /// ``` /// use textwrap::dedent; /// /// assert_eq!(dedent(" /// 1st line /// 2nd line /// 3rd line /// "), " /// 1st line /// 2nd line /// 3rd line /// "); /// ``` pub fn dedent(s: &str) -> String { let mut prefix = ""; let mut lines = s.lines(); // We first search for a non-empty line to find a prefix. for line in &mut lines { let mut whitespace_idx = line.len(); for (idx, ch) in line.char_indices() { if !ch.is_whitespace() { whitespace_idx = idx; break; } } // Check if the line had anything but whitespace if whitespace_idx < line.len() { prefix = &line[..whitespace_idx]; break; } } // We then continue looking through the remaining lines to // possibly shorten the prefix. for line in &mut lines { let mut whitespace_idx = line.len(); for ((idx, a), b) in line.char_indices().zip(prefix.chars()) { if a != b { whitespace_idx = idx; break; } } // Check if the line had anything but whitespace and if we // have found a shorter prefix if whitespace_idx < line.len() && whitespace_idx < prefix.len() { prefix = &line[..whitespace_idx]; } } // We now go over the lines a second time to build the result. let mut result = String::new(); for line in s.lines() { if line.starts_with(&prefix) && line.chars().any(|c| !c.is_whitespace()) { let (_, tail) = line.split_at(prefix.len()); result.push_str(tail); } result.push('\n'); } if result.ends_with('\n') && !s.ends_with('\n') { let new_len = result.len() - 1; result.truncate(new_len); } result } #[cfg(test)] mod tests { use super::*; /// Add newlines. Ensures that the final line in the vector also /// has a newline. 
fn add_nl(lines: &[&str]) -> String { lines.join("\n") + "\n" } #[test] fn indent_empty() { assert_eq!(indent("\n", " "), "\n"); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn indent_nonempty() { let x = vec![" foo", "bar", " baz"]; let y = vec!["// foo", "//bar", "// baz"]; assert_eq!(indent(&add_nl(&x), "//"), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn indent_empty_line() { let x = vec![" foo", "bar", "", " baz"]; let y = vec!["// foo", "//bar", "", "// baz"]; assert_eq!(indent(&add_nl(&x), "//"), add_nl(&y)); } #[test] fn dedent_empty() { assert_eq!(dedent(""), ""); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_multi_line() { let x = vec![" foo", " bar", " baz"]; let y = vec![" foo", "bar", " baz"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_empty_line() { let x = vec![" foo", " bar", " ", " baz"]; let y = vec![" foo", "bar", "", " baz"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_blank_line() { let x = vec![" foo", "", " bar", " foo", " bar", " baz"]; let y = vec!["foo", "", " bar", " foo", " bar", " baz"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_whitespace_line() { let x = vec![" foo", " ", " bar", " foo", " bar", " baz"]; let y = vec!["foo", "", " bar", " foo", " bar", " baz"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_mixed_whitespace() { let x = vec!["\tfoo", " bar"]; let y = vec!["\tfoo", " bar"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_tabbed_whitespace() { let x = vec!["\t\tfoo", "\t\t\tbar"]; let y = vec!["foo", "\tbar"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_mixed_tabbed_whitespace() { let x = vec!["\t \tfoo", "\t \t\tbar"]; let y = vec!["foo", "\tbar"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_mixed_tabbed_whitespace2() { let x = vec!["\t \tfoo", "\t \tbar"]; let y = vec!["\tfoo", " \tbar"]; assert_eq!(dedent(&add_nl(&x)), add_nl(&y)); } #[test] #[cfg_attr(rustfmt, rustfmt_skip)] fn dedent_preserve_no_terminating_newline() { let x = vec![" foo", " bar"].join("\n"); let y = vec!["foo", " bar"].join("\n"); assert_eq!(dedent(&x), y); } } vendor/textwrap/src/lib.rs0000664000175000017500000010066314160055207016436 0ustar mwhudsonmwhudson//! `textwrap` provides functions for word wrapping and filling text. //! //! Wrapping text can be very useful in commandline programs where you //! want to format dynamic output nicely so it looks good in a //! terminal. A quick example: //! //! ```no_run //! extern crate textwrap; //! use textwrap::fill; //! //! fn main() { //! let text = "textwrap: a small library for wrapping text."; //! println!("{}", fill(text, 18)); //! } //! ``` //! //! This will display the following output: //! //! ```text //! textwrap: a small //! library for //! wrapping text. //! ``` //! //! # Displayed Width vs Byte Size //! //! To word wrap text, one must know the width of each word so one can //! know when to break lines. This library measures the width of text //! using the [displayed width][unicode-width], not the size in bytes. //! //! This is important for non-ASCII text. ASCII characters such as `a` //! and `!` are simple and take up one column each. This means that //! the displayed width is equal to the string length in bytes. 
//! However, non-ASCII characters and symbols take up more than one //! byte when UTF-8 encoded: `é` is `0xc3 0xa9` (two bytes) and `âš™` is //! `0xe2 0x9a 0x99` (three bytes) in UTF-8, respectively. //! //! This is why we take care to use the displayed width instead of the //! byte count when computing line lengths. All functions in this //! library handle Unicode characters like this. //! //! [unicode-width]: https://docs.rs/unicode-width/ #![doc(html_root_url = "https://docs.rs/textwrap/0.11.0")] #![deny(missing_docs)] #![deny(missing_debug_implementations)] #[cfg(feature = "hyphenation")] extern crate hyphenation; #[cfg(feature = "term_size")] extern crate term_size; extern crate unicode_width; use std::borrow::Cow; use std::str::CharIndices; use unicode_width::UnicodeWidthChar; use unicode_width::UnicodeWidthStr; /// A non-breaking space. const NBSP: char = '\u{a0}'; mod indentation; pub use indentation::dedent; pub use indentation::indent; mod splitting; pub use splitting::{HyphenSplitter, NoHyphenation, WordSplitter}; /// A Wrapper holds settings for wrapping and filling text. Use it /// when the convenience [`wrap_iter`], [`wrap`] and [`fill`] functions /// are not flexible enough. /// /// [`wrap_iter`]: fn.wrap_iter.html /// [`wrap`]: fn.wrap.html /// [`fill`]: fn.fill.html /// /// The algorithm used by the `WrapIter` iterator (returned from the /// `wrap_iter` method) works by doing successive partial scans over /// words in the input string (where each single scan yields a single /// line) so that the overall time and memory complexity is O(*n*) where /// *n* is the length of the input string. #[derive(Clone, Debug)] pub struct Wrapper<'a, S: WordSplitter> { /// The width in columns at which the text will be wrapped. pub width: usize, /// Indentation used for the first line of output. pub initial_indent: &'a str, /// Indentation used for subsequent lines of output. pub subsequent_indent: &'a str, /// Allow long words to be broken if they cannot fit on a line. /// When set to `false`, some lines may be longer than /// `self.width`. pub break_words: bool, /// The method for splitting words. If the `hyphenation` feature /// is enabled, you can use a `hyphenation::Standard` dictionary /// here to get language-aware hyphenation. pub splitter: S, } impl<'a> Wrapper<'a, HyphenSplitter> { /// Create a new Wrapper for wrapping at the specified width. By /// default, we allow words longer than `width` to be broken. A /// [`HyphenSplitter`] will be used by default for splitting /// words. See the [`WordSplitter`] trait for other options. /// /// [`HyphenSplitter`]: struct.HyphenSplitter.html /// [`WordSplitter`]: trait.WordSplitter.html pub fn new(width: usize) -> Wrapper<'a, HyphenSplitter> { Wrapper::with_splitter(width, HyphenSplitter) } /// Create a new Wrapper for wrapping text at the current terminal /// width. If the terminal width cannot be determined (typically /// because the standard input and output is not connected to a /// terminal), a width of 80 characters will be used. Other /// settings use the same defaults as `Wrapper::new`. /// /// Equivalent to: /// /// ```no_run /// # #![allow(unused_variables)] /// use textwrap::{Wrapper, termwidth}; /// /// let wrapper = Wrapper::new(termwidth()); /// ``` #[cfg(feature = "term_size")] pub fn with_termwidth() -> Wrapper<'a, HyphenSplitter> { Wrapper::new(termwidth()) } } impl<'a, S: WordSplitter> Wrapper<'a, S> { /// Use the given [`WordSplitter`] to create a new Wrapper for /// wrapping at the specified width. 
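A tiny sketch of the displayed-width point made in the module docs above: accented characters are multi-byte in UTF-8 but single-column on screen, so they do not shorten the wrapped lines. The text and width are arbitrary.

```rust
use textwrap::fill;

fn main() {
    // "café" is five bytes but only four columns wide; break points are
    // chosen by displayed width, not byte count.
    println!("{}", fill("Le café est très chaud ce matin.", 12));
}
```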
By default, we allow words /// longer than `width` to be broken. /// /// [`WordSplitter`]: trait.WordSplitter.html pub fn with_splitter(width: usize, splitter: S) -> Wrapper<'a, S> { Wrapper { width: width, initial_indent: "", subsequent_indent: "", break_words: true, splitter: splitter, } } /// Change [`self.initial_indent`]. The initial indentation is /// used on the very first line of output. /// /// # Examples /// /// Classic paragraph indentation can be achieved by specifying an /// initial indentation and wrapping each paragraph by itself: /// /// ```no_run /// # #![allow(unused_variables)] /// use textwrap::Wrapper; /// /// let wrapper = Wrapper::new(15).initial_indent(" "); /// ``` /// /// [`self.initial_indent`]: #structfield.initial_indent pub fn initial_indent(self, indent: &'a str) -> Wrapper<'a, S> { Wrapper { initial_indent: indent, ..self } } /// Change [`self.subsequent_indent`]. The subsequent indentation /// is used on lines following the first line of output. /// /// # Examples /// /// Combining initial and subsequent indentation lets you format a /// single paragraph as a bullet list: /// /// ```no_run /// # #![allow(unused_variables)] /// use textwrap::Wrapper; /// /// let wrapper = Wrapper::new(15) /// .initial_indent("* ") /// .subsequent_indent(" "); /// ``` /// /// [`self.subsequent_indent`]: #structfield.subsequent_indent pub fn subsequent_indent(self, indent: &'a str) -> Wrapper<'a, S> { Wrapper { subsequent_indent: indent, ..self } } /// Change [`self.break_words`]. This controls if words longer /// than `self.width` can be broken, or if they will be left /// sticking out into the right margin. /// /// [`self.break_words`]: #structfield.break_words pub fn break_words(self, setting: bool) -> Wrapper<'a, S> { Wrapper { break_words: setting, ..self } } /// Fill a line of text at `self.width` characters. Strings are /// wrapped based on their displayed width, not their size in /// bytes. /// /// The result is a string with newlines between each line. Use /// the `wrap` method if you need access to the individual lines. /// /// # Complexities /// /// This method simply joins the lines produced by `wrap_iter`. As /// such, it inherits the O(*n*) time and memory complexity where /// *n* is the input string length. /// /// # Examples /// /// ``` /// use textwrap::Wrapper; /// /// let wrapper = Wrapper::new(15); /// assert_eq!(wrapper.fill("Memory safety without garbage collection."), /// "Memory safety\nwithout garbage\ncollection."); /// ``` pub fn fill(&self, s: &str) -> String { // This will avoid reallocation in simple cases (no // indentation, no hyphenation). let mut result = String::with_capacity(s.len()); for (i, line) in self.wrap_iter(s).enumerate() { if i > 0 { result.push('\n'); } result.push_str(&line); } result } /// Wrap a line of text at `self.width` characters. Strings are /// wrapped based on their displayed width, not their size in /// bytes. /// /// # Complexities /// /// This method simply collects the lines produced by `wrap_iter`. /// As such, it inherits the O(*n*) overall time and memory /// complexity where *n* is the input string length. 
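Putting the two indentation knobs together, here is a sketch of the bullet-list formatting mentioned above; the width and text are arbitrary:

```rust
use textwrap::Wrapper;

fn main() {
    let wrapper = Wrapper::new(24)
        .initial_indent("* ")
        .subsequent_indent("  ");
    println!("{}", wrapper.fill("Memory safety without garbage collection."));
    // Expected shape (exact break points depend on the chosen width):
    // * Memory safety without
    //   garbage collection.
}
```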
/// /// # Examples /// /// ``` /// use textwrap::Wrapper; /// /// let wrap15 = Wrapper::new(15); /// assert_eq!(wrap15.wrap("Concurrency without data races."), /// vec!["Concurrency", /// "without data", /// "races."]); /// /// let wrap20 = Wrapper::new(20); /// assert_eq!(wrap20.wrap("Concurrency without data races."), /// vec!["Concurrency without", /// "data races."]); /// ``` /// /// Notice that newlines in the input are preserved. This means /// that they force a line break, regardless of how long the /// current line is: /// /// ``` /// use textwrap::Wrapper; /// /// let wrapper = Wrapper::new(40); /// assert_eq!(wrapper.wrap("First line.\nSecond line."), /// vec!["First line.", "Second line."]); /// ``` /// pub fn wrap(&self, s: &'a str) -> Vec> { self.wrap_iter(s).collect::>() } /// Lazily wrap a line of text at `self.width` characters. Strings /// are wrapped based on their displayed width, not their size in /// bytes. /// /// The [`WordSplitter`] stored in [`self.splitter`] is used /// whenever when a word is too large to fit on the current line. /// By changing the field, different hyphenation strategies can be /// implemented. /// /// # Complexities /// /// This method returns a [`WrapIter`] iterator which borrows this /// `Wrapper`. The algorithm used has a linear complexity, so /// getting the next line from the iterator will take O(*w*) time, /// where *w* is the wrapping width. Fully processing the iterator /// will take O(*n*) time for an input string of length *n*. /// /// When no indentation is used, each line returned is a slice of /// the input string and the memory overhead is thus constant. /// Otherwise new memory is allocated for each line returned. /// /// # Examples /// /// ``` /// use std::borrow::Cow; /// use textwrap::Wrapper; /// /// let wrap20 = Wrapper::new(20); /// let mut wrap20_iter = wrap20.wrap_iter("Zero-cost abstractions."); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("Zero-cost"))); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("abstractions."))); /// assert_eq!(wrap20_iter.next(), None); /// /// let wrap25 = Wrapper::new(25); /// let mut wrap25_iter = wrap25.wrap_iter("Zero-cost abstractions."); /// assert_eq!(wrap25_iter.next(), Some(Cow::from("Zero-cost abstractions."))); /// assert_eq!(wrap25_iter.next(), None); /// ``` /// /// [`self.splitter`]: #structfield.splitter /// [`WordSplitter`]: trait.WordSplitter.html /// [`WrapIter`]: struct.WrapIter.html pub fn wrap_iter<'w>(&'w self, s: &'a str) -> WrapIter<'w, 'a, S> { WrapIter { wrapper: self, inner: WrapIterImpl::new(self, s), } } /// Lazily wrap a line of text at `self.width` characters. Strings /// are wrapped based on their displayed width, not their size in /// bytes. /// /// The [`WordSplitter`] stored in [`self.splitter`] is used /// whenever when a word is too large to fit on the current line. /// By changing the field, different hyphenation strategies can be /// implemented. /// /// # Complexities /// /// This method consumes the `Wrapper` and returns a /// [`IntoWrapIter`] iterator. Fully processing the iterator has /// the same O(*n*) time complexity as [`wrap_iter`], where *n* is /// the length of the input string. 
/// /// # Examples /// /// ``` /// use std::borrow::Cow; /// use textwrap::Wrapper; /// /// let wrap20 = Wrapper::new(20); /// let mut wrap20_iter = wrap20.into_wrap_iter("Zero-cost abstractions."); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("Zero-cost"))); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("abstractions."))); /// assert_eq!(wrap20_iter.next(), None); /// ``` /// /// [`self.splitter`]: #structfield.splitter /// [`WordSplitter`]: trait.WordSplitter.html /// [`IntoWrapIter`]: struct.IntoWrapIter.html /// [`wrap_iter`]: #method.wrap_iter pub fn into_wrap_iter(self, s: &'a str) -> IntoWrapIter<'a, S> { let inner = WrapIterImpl::new(&self, s); IntoWrapIter { wrapper: self, inner: inner, } } } /// An iterator over the lines of the input string which owns a /// `Wrapper`. An instance of `IntoWrapIter` is typically obtained /// through either [`wrap_iter`] or [`Wrapper::into_wrap_iter`]. /// /// Each call of `.next()` method yields a line wrapped in `Some` if the /// input hasn't been fully processed yet. Otherwise it returns `None`. /// /// [`wrap_iter`]: fn.wrap_iter.html /// [`Wrapper::into_wrap_iter`]: struct.Wrapper.html#method.into_wrap_iter #[derive(Debug)] pub struct IntoWrapIter<'a, S: WordSplitter> { wrapper: Wrapper<'a, S>, inner: WrapIterImpl<'a>, } impl<'a, S: WordSplitter> Iterator for IntoWrapIter<'a, S> { type Item = Cow<'a, str>; fn next(&mut self) -> Option> { self.inner.next(&self.wrapper) } } /// An iterator over the lines of the input string which borrows a /// `Wrapper`. An instance of `WrapIter` is typically obtained /// through the [`Wrapper::wrap_iter`] method. /// /// Each call of `.next()` method yields a line wrapped in `Some` if the /// input hasn't been fully processed yet. Otherwise it returns `None`. /// /// [`Wrapper::wrap_iter`]: struct.Wrapper.html#method.wrap_iter #[derive(Debug)] pub struct WrapIter<'w, 'a: 'w, S: WordSplitter + 'w> { wrapper: &'w Wrapper<'a, S>, inner: WrapIterImpl<'a>, } impl<'w, 'a: 'w, S: WordSplitter> Iterator for WrapIter<'w, 'a, S> { type Item = Cow<'a, str>; fn next(&mut self) -> Option> { self.inner.next(self.wrapper) } } /// Like `char::is_whitespace`, but non-breaking spaces don't count. #[inline] fn is_whitespace(ch: char) -> bool { ch.is_whitespace() && ch != NBSP } /// Common implementation details for `WrapIter` and `IntoWrapIter`. #[derive(Debug)] struct WrapIterImpl<'a> { // String to wrap. source: &'a str, // CharIndices iterator over self.source. char_indices: CharIndices<'a>, // Byte index where the current line starts. start: usize, // Byte index of the last place where the string can be split. split: usize, // Size in bytes of the character at self.source[self.split]. split_len: usize, // Width of self.source[self.start..idx]. line_width: usize, // Width of self.source[self.start..self.split]. line_width_at_split: usize, // Tracking runs of whitespace characters. in_whitespace: bool, // Has iterator finished producing elements? 
finished: bool, } impl<'a> WrapIterImpl<'a> { fn new(wrapper: &Wrapper<'a, S>, s: &'a str) -> WrapIterImpl<'a> { WrapIterImpl { source: s, char_indices: s.char_indices(), start: 0, split: 0, split_len: 0, line_width: wrapper.initial_indent.width(), line_width_at_split: wrapper.initial_indent.width(), in_whitespace: false, finished: false, } } fn create_result_line(&self, wrapper: &Wrapper<'a, S>) -> Cow<'a, str> { if self.start == 0 { Cow::from(wrapper.initial_indent) } else { Cow::from(wrapper.subsequent_indent) } } fn next(&mut self, wrapper: &Wrapper<'a, S>) -> Option> { if self.finished { return None; } while let Some((idx, ch)) = self.char_indices.next() { let char_width = ch.width().unwrap_or(0); let char_len = ch.len_utf8(); if ch == '\n' { self.split = idx; self.split_len = char_len; self.line_width_at_split = self.line_width; self.in_whitespace = false; // If this is not the final line, return the current line. Otherwise, // we will return the line with its line break after exiting the loop if self.split + self.split_len < self.source.len() { let mut line = self.create_result_line(wrapper); line += &self.source[self.start..self.split]; self.start = self.split + self.split_len; self.line_width = wrapper.subsequent_indent.width(); return Some(line); } } else if is_whitespace(ch) { // Extend the previous split or create a new one. if self.in_whitespace { self.split_len += char_len; } else { self.split = idx; self.split_len = char_len; } self.line_width_at_split = self.line_width + char_width; self.in_whitespace = true; } else if self.line_width + char_width > wrapper.width { // There is no room for this character on the current // line. Try to split the final word. self.in_whitespace = false; let remaining_text = &self.source[self.split + self.split_len..]; let final_word = match remaining_text.find(is_whitespace) { Some(i) => &remaining_text[..i], None => remaining_text, }; let mut hyphen = ""; let splits = wrapper.splitter.split(final_word); for &(head, hyp, _) in splits.iter().rev() { if self.line_width_at_split + head.width() + hyp.width() <= wrapper.width { // We can fit head into the current line. // Advance the split point by the width of the // whitespace and the head length. self.split += self.split_len + head.len(); self.split_len = 0; hyphen = hyp; break; } } if self.start >= self.split { // The word is too big to fit on a single line, so we // need to split it at the current index. if wrapper.break_words { // Break work at current index. self.split = idx; self.split_len = 0; self.line_width_at_split = self.line_width; } else { // Add smallest split. self.split = self.start + splits[0].0.len(); self.split_len = 0; self.line_width_at_split = self.line_width; } } if self.start < self.split { let mut line = self.create_result_line(wrapper); line += &self.source[self.start..self.split]; line += hyphen; self.start = self.split + self.split_len; self.line_width += wrapper.subsequent_indent.width(); self.line_width -= self.line_width_at_split; self.line_width += char_width; return Some(line); } } else { self.in_whitespace = false; } self.line_width += char_width; } self.finished = true; // Add final line. if self.start < self.source.len() { let mut line = self.create_result_line(wrapper); line += &self.source[self.start..]; return Some(line); } None } } /// Return the current terminal width. If the terminal width cannot be /// determined (typically because the standard output is not connected /// to a terminal), a default width of 80 characters will be used. 
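The `break_words` branch in the iterator above is what decides whether an over-long word is chopped at the line width or left overhanging. A small sketch of the observable difference (the word and width are arbitrary):

```rust
use textwrap::Wrapper;

fn main() {
    let word = "antidisestablishmentarianism"; // 28 columns

    // Default: the word is broken so no line exceeds the width (10 + 10 + 8).
    assert_eq!(Wrapper::new(10).wrap(word).len(), 3);

    // With break_words(false) the word is kept whole and overflows the width.
    assert_eq!(Wrapper::new(10).break_words(false).wrap(word), vec![word]);
}
```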
/// /// # Examples /// /// Create a `Wrapper` for the current terminal with a two column /// margin: /// /// ```no_run /// # #![allow(unused_variables)] /// use textwrap::{Wrapper, NoHyphenation, termwidth}; /// /// let width = termwidth() - 4; // Two columns on each side. /// let wrapper = Wrapper::with_splitter(width, NoHyphenation) /// .initial_indent(" ") /// .subsequent_indent(" "); /// ``` #[cfg(feature = "term_size")] pub fn termwidth() -> usize { term_size::dimensions_stdout().map_or(80, |(w, _)| w) } /// Fill a line of text at `width` characters. Strings are wrapped /// based on their displayed width, not their size in bytes. /// /// The result is a string with newlines between each line. Use /// [`wrap`] if you need access to the individual lines or /// [`wrap_iter`] for its iterator counterpart. /// /// ``` /// use textwrap::fill; /// /// assert_eq!(fill("Memory safety without garbage collection.", 15), /// "Memory safety\nwithout garbage\ncollection."); /// ``` /// /// This function creates a Wrapper on the fly with default settings. /// If you need to set a language corpus for automatic hyphenation, or /// need to fill many strings, then it is suggested to create a Wrapper /// and call its [`fill` method]. /// /// [`wrap`]: fn.wrap.html /// [`wrap_iter`]: fn.wrap_iter.html /// [`fill` method]: struct.Wrapper.html#method.fill pub fn fill(s: &str, width: usize) -> String { Wrapper::new(width).fill(s) } /// Wrap a line of text at `width` characters. Strings are wrapped /// based on their displayed width, not their size in bytes. /// /// This function creates a Wrapper on the fly with default settings. /// If you need to set a language corpus for automatic hyphenation, or /// need to wrap many strings, then it is suggested to create a Wrapper /// and call its [`wrap` method]. /// /// The result is a vector of strings. Use [`wrap_iter`] if you need an /// iterator version. /// /// # Examples /// /// ``` /// use textwrap::wrap; /// /// assert_eq!(wrap("Concurrency without data races.", 15), /// vec!["Concurrency", /// "without data", /// "races."]); /// /// assert_eq!(wrap("Concurrency without data races.", 20), /// vec!["Concurrency without", /// "data races."]); /// ``` /// /// [`wrap_iter`]: fn.wrap_iter.html /// [`wrap` method]: struct.Wrapper.html#method.wrap pub fn wrap(s: &str, width: usize) -> Vec> { Wrapper::new(width).wrap(s) } /// Lazily wrap a line of text at `width` characters. Strings are /// wrapped based on their displayed width, not their size in bytes. /// /// This function creates a Wrapper on the fly with default settings. /// It then calls the [`into_wrap_iter`] method. Hence, the return /// value is an [`IntoWrapIter`], not a [`WrapIter`] as the function /// name would otherwise suggest. /// /// If you need to set a language corpus for automatic hyphenation, or /// need to wrap many strings, then it is suggested to create a Wrapper /// and call its [`wrap_iter`] or [`into_wrap_iter`] methods. 
/// /// # Examples /// /// ``` /// use std::borrow::Cow; /// use textwrap::wrap_iter; /// /// let mut wrap20_iter = wrap_iter("Zero-cost abstractions.", 20); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("Zero-cost"))); /// assert_eq!(wrap20_iter.next(), Some(Cow::from("abstractions."))); /// assert_eq!(wrap20_iter.next(), None); /// /// let mut wrap25_iter = wrap_iter("Zero-cost abstractions.", 25); /// assert_eq!(wrap25_iter.next(), Some(Cow::from("Zero-cost abstractions."))); /// assert_eq!(wrap25_iter.next(), None); /// ``` /// /// [`wrap_iter`]: struct.Wrapper.html#method.wrap_iter /// [`into_wrap_iter`]: struct.Wrapper.html#method.into_wrap_iter /// [`IntoWrapIter`]: struct.IntoWrapIter.html /// [`WrapIter`]: struct.WrapIter.html pub fn wrap_iter(s: &str, width: usize) -> IntoWrapIter { Wrapper::new(width).into_wrap_iter(s) } #[cfg(test)] mod tests { #[cfg(feature = "hyphenation")] extern crate hyphenation; use super::*; #[cfg(feature = "hyphenation")] use hyphenation::{Language, Load, Standard}; #[test] fn no_wrap() { assert_eq!(wrap("foo", 10), vec!["foo"]); } #[test] fn simple() { assert_eq!(wrap("foo bar baz", 5), vec!["foo", "bar", "baz"]); } #[test] fn multi_word_on_line() { assert_eq!(wrap("foo bar baz", 10), vec!["foo bar", "baz"]); } #[test] fn long_word() { assert_eq!(wrap("foo", 0), vec!["f", "o", "o"]); } #[test] fn long_words() { assert_eq!(wrap("foo bar", 0), vec!["f", "o", "o", "b", "a", "r"]); } #[test] fn max_width() { assert_eq!(wrap("foo bar", usize::max_value()), vec!["foo bar"]); } #[test] fn leading_whitespace() { assert_eq!(wrap(" foo bar", 6), vec![" foo", "bar"]); } #[test] fn trailing_whitespace() { assert_eq!(wrap("foo bar ", 6), vec!["foo", "bar "]); } #[test] fn interior_whitespace() { assert_eq!(wrap("foo: bar baz", 10), vec!["foo: bar", "baz"]); } #[test] fn extra_whitespace_start_of_line() { // Whitespace is only significant inside a line. After a line // gets too long and is broken, the first word starts in // column zero and is not indented. The line before might end // up with trailing whitespace. assert_eq!(wrap("foo bar", 5), vec!["foo", "bar"]); } #[test] fn issue_99() { // We did not reset the in_whitespace flag correctly and did // not handle single-character words after a line break. assert_eq!( wrap("aaabbbccc x yyyzzzwww", 9), vec!["aaabbbccc", "x", "yyyzzzwww"] ); } #[test] fn issue_129() { // The dash is an em-dash which takes up four bytes. We used // to panic since we tried to index into the character. 
assert_eq!(wrap("x – x", 1), vec!["x", "–", "x"]); } #[test] fn wide_character_handling() { assert_eq!(wrap("Hello, World!", 15), vec!["Hello, World!"]); assert_eq!( wrap("Hellï½, ï¼·ï½ï½’ld!", 15), vec!["Hellï½,", "ï¼·ï½ï½’ld!"] ); } #[test] fn empty_input_not_indented() { let wrapper = Wrapper::new(10).initial_indent("!!!"); assert_eq!(wrapper.fill(""), ""); } #[test] fn indent_single_line() { let wrapper = Wrapper::new(10).initial_indent(">>>"); // No trailing space assert_eq!(wrapper.fill("foo"), ">>>foo"); } #[test] fn indent_multiple_lines() { let wrapper = Wrapper::new(6).initial_indent("* ").subsequent_indent(" "); assert_eq!(wrapper.wrap("foo bar baz"), vec!["* foo", " bar", " baz"]); } #[test] fn indent_break_words() { let wrapper = Wrapper::new(5).initial_indent("* ").subsequent_indent(" "); assert_eq!(wrapper.wrap("foobarbaz"), vec!["* foo", " bar", " baz"]); } #[test] fn hyphens() { assert_eq!(wrap("foo-bar", 5), vec!["foo-", "bar"]); } #[test] fn trailing_hyphen() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!(wrapper.wrap("foobar-"), vec!["foobar-"]); } #[test] fn multiple_hyphens() { assert_eq!(wrap("foo-bar-baz", 5), vec!["foo-", "bar-", "baz"]); } #[test] fn hyphens_flag() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!( wrapper.wrap("The --foo-bar flag."), vec!["The", "--foo-", "bar", "flag."] ); } #[test] fn repeated_hyphens() { let wrapper = Wrapper::new(4).break_words(false); assert_eq!(wrapper.wrap("foo--bar"), vec!["foo--bar"]); } #[test] fn hyphens_alphanumeric() { assert_eq!(wrap("Na2-CH4", 5), vec!["Na2-", "CH4"]); } #[test] fn hyphens_non_alphanumeric() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!(wrapper.wrap("foo(-)bar"), vec!["foo(-)bar"]); } #[test] fn multiple_splits() { assert_eq!(wrap("foo-bar-baz", 9), vec!["foo-bar-", "baz"]); } #[test] fn forced_split() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!(wrapper.wrap("foobar-baz"), vec!["foobar-", "baz"]); } #[test] fn no_hyphenation() { let wrapper = Wrapper::with_splitter(8, NoHyphenation); assert_eq!(wrapper.wrap("foo bar-baz"), vec!["foo", "bar-baz"]); } #[test] #[cfg(feature = "hyphenation")] fn auto_hyphenation() { let dictionary = Standard::from_embedded(Language::EnglishUS).unwrap(); let wrapper = Wrapper::new(10); assert_eq!( wrapper.wrap("Internationalization"), vec!["Internatio", "nalization"] ); let wrapper = Wrapper::with_splitter(10, dictionary); assert_eq!( wrapper.wrap("Internationalization"), vec!["Interna-", "tionaliza-", "tion"] ); } #[test] #[cfg(feature = "hyphenation")] fn split_len_hyphenation() { // Test that hyphenation takes the width of the wihtespace // into account. let dictionary = Standard::from_embedded(Language::EnglishUS).unwrap(); let wrapper = Wrapper::with_splitter(15, dictionary); assert_eq!( wrapper.wrap("garbage collection"), vec!["garbage col-", "lection"] ); } #[test] #[cfg(feature = "hyphenation")] fn borrowed_lines() { // Lines that end with an extra hyphen are owned, the final // line is borrowed. 
use std::borrow::Cow::{Borrowed, Owned}; let dictionary = Standard::from_embedded(Language::EnglishUS).unwrap(); let wrapper = Wrapper::with_splitter(10, dictionary); let lines = wrapper.wrap("Internationalization"); if let Borrowed(s) = lines[0] { assert!(false, "should not have been borrowed: {:?}", s); } if let Borrowed(s) = lines[1] { assert!(false, "should not have been borrowed: {:?}", s); } if let Owned(ref s) = lines[2] { assert!(false, "should not have been owned: {:?}", s); } } #[test] #[cfg(feature = "hyphenation")] fn auto_hyphenation_with_hyphen() { let dictionary = Standard::from_embedded(Language::EnglishUS).unwrap(); let wrapper = Wrapper::new(8).break_words(false); assert_eq!(wrapper.wrap("over-caffinated"), vec!["over-", "caffinated"]); let wrapper = Wrapper::with_splitter(8, dictionary).break_words(false); assert_eq!( wrapper.wrap("over-caffinated"), vec!["over-", "caffi-", "nated"] ); } #[test] fn break_words() { assert_eq!(wrap("foobarbaz", 3), vec!["foo", "bar", "baz"]); } #[test] fn break_words_wide_characters() { assert_eq!(wrap("Hellï½", 5), vec!["He", "ll", "ï½"]); } #[test] fn break_words_zero_width() { assert_eq!(wrap("foobar", 0), vec!["f", "o", "o", "b", "a", "r"]); } #[test] fn break_words_line_breaks() { assert_eq!(fill("ab\ncdefghijkl", 5), "ab\ncdefg\nhijkl"); assert_eq!(fill("abcdefgh\nijkl", 5), "abcde\nfgh\nijkl"); } #[test] fn preserve_line_breaks() { assert_eq!(fill("test\n", 11), "test\n"); assert_eq!(fill("test\n\na\n\n", 11), "test\n\na\n\n"); assert_eq!(fill("1 3 5 7\n1 3 5 7", 7), "1 3 5 7\n1 3 5 7"); } #[test] fn wrap_preserve_line_breaks() { assert_eq!(fill("1 3 5 7\n1 3 5 7", 5), "1 3 5\n7\n1 3 5\n7"); } #[test] fn non_breaking_space() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!(wrapper.fill("foo bar baz"), "foo bar baz"); } #[test] fn non_breaking_hyphen() { let wrapper = Wrapper::new(5).break_words(false); assert_eq!(wrapper.fill("foo‑bar‑baz"), "foo‑bar‑baz"); } #[test] fn fill_simple() { assert_eq!(fill("foo bar baz", 10), "foo bar\nbaz"); } } vendor/textwrap/tests/0000775000175000017500000000000014160055207015667 5ustar mwhudsonmwhudsonvendor/textwrap/tests/version-numbers.rs0000664000175000017500000000052514160055207021375 0ustar mwhudsonmwhudson#[macro_use] extern crate version_sync; #[test] fn test_readme_deps() { assert_markdown_deps_updated!("README.md"); } #[test] fn test_readme_changelog() { assert_contains_regex!("README.md", r"^### Version {version} — .* \d\d?.., 20\d\d$"); } #[test] fn test_html_root_url() { assert_html_root_url_updated!("src/lib.rs"); } vendor/textwrap/examples/0000775000175000017500000000000014160055207016343 5ustar mwhudsonmwhudsonvendor/textwrap/examples/layout.rs0000664000175000017500000000220514160055207020225 0ustar mwhudsonmwhudson#[cfg(feature = "hyphenation")] extern crate hyphenation; extern crate textwrap; #[cfg(feature = "hyphenation")] use hyphenation::{Language, Load}; use textwrap::Wrapper; #[cfg(not(feature = "hyphenation"))] fn new_wrapper<'a>() -> Wrapper<'a, textwrap::HyphenSplitter> { Wrapper::new(0) } #[cfg(feature = "hyphenation")] fn new_wrapper<'a>() -> Wrapper<'a, hyphenation::Standard> { let dictionary = hyphenation::Standard::from_embedded(Language::EnglishUS).unwrap(); Wrapper::with_splitter(0, dictionary) } fn main() { let example = "Memory safety without garbage collection. \ Concurrency without data races. 
\ Zero-cost abstractions."; let mut prev_lines = vec![]; let mut wrapper = new_wrapper(); for width in 15..60 { wrapper.width = width; let lines = wrapper.wrap(example); if lines != prev_lines { let title = format!(" Width: {} ", width); println!(".{:-^1$}.", title, width + 2); for line in &lines { println!("| {:1$} |", line, width); } prev_lines = lines; } } } vendor/textwrap/examples/termwidth.rs0000664000175000017500000000247014160055207020723 0ustar mwhudsonmwhudson#[cfg(feature = "hyphenation")] extern crate hyphenation; extern crate textwrap; #[cfg(feature = "hyphenation")] use hyphenation::{Language, Load, Standard}; #[cfg(feature = "term_size")] use textwrap::Wrapper; #[cfg(not(feature = "term_size"))] fn main() { println!("Please enable the term_size feature to run this example."); } #[cfg(feature = "term_size")] fn main() { #[cfg(not(feature = "hyphenation"))] fn new_wrapper<'a>() -> (&'static str, Wrapper<'a, textwrap::HyphenSplitter>) { ("without hyphenation", Wrapper::with_termwidth()) } #[cfg(feature = "hyphenation")] fn new_wrapper<'a>() -> (&'static str, Wrapper<'a, Standard>) { let dictionary = Standard::from_embedded(Language::EnglishUS).unwrap(); ( "with hyphenation", Wrapper::with_splitter(textwrap::termwidth(), dictionary), ) } let example = "Memory safety without garbage collection. \ Concurrency without data races. \ Zero-cost abstractions."; // Create a new Wrapper -- automatically set the width to the // current terminal width. let (msg, wrapper) = new_wrapper(); println!("Formatted {} in {} columns:", msg, wrapper.width); println!("----"); println!("{}", wrapper.fill(example)); println!("----"); } vendor/textwrap/README.md0000664000175000017500000002460414160055207016012 0ustar mwhudsonmwhudson# Textwrap [![](https://img.shields.io/crates/v/textwrap.svg)][crates-io] [![](https://docs.rs/textwrap/badge.svg)][api-docs] [![](https://travis-ci.org/mgeisler/textwrap.svg?branch=master)][travis-ci] [![](https://ci.appveyor.com/api/projects/status/github/mgeisler/textwrap?branch=master&svg=true)][appveyor] [![](https://codecov.io/gh/mgeisler/textwrap/branch/master/graph/badge.svg)][codecov] Textwrap is a small Rust crate for word wrapping text. You can use it to format strings for display in commandline applications. The crate name and interface is inspired by the [Python textwrap module][py-textwrap]. ## Usage Add this to your `Cargo.toml`: ```toml [dependencies] textwrap = "0.11" ``` and this to your crate root: ```rust extern crate textwrap; ``` If you would like to have automatic hyphenation, specify the dependency as: ```toml [dependencies] textwrap = { version = "0.11", features = ["hyphenation"] } ``` To conveniently wrap text at the current terminal width, enable the `term_size` feature: ```toml [dependencies] textwrap = { version = "0.11", features = ["term_size"] } ``` ## Documentation **[API documentation][api-docs]** ## Getting Started Word wrapping single strings is easy using the `fill` function: ```rust extern crate textwrap; use textwrap::fill; fn main() { let text = "textwrap: a small library for wrapping text."; println!("{}", fill(text, 18)); } ``` The output is ``` textwrap: a small library for wrapping text. ``` With the `hyphenation` feature, you can get automatic hyphenation for [about 70 languages][patterns]. 
Your program must load and configure the hyphenation patterns to use: ```rust extern crate hyphenation; extern crate textwrap; use hyphenation::{Language, Load, Standard}; use textwrap::Wrapper; fn main() { let hyphenator = Standard::from_embedded(Language::EnglishUS).unwrap(); let wrapper = Wrapper::with_splitter(18, hyphenator); let text = "textwrap: a small library for wrapping text."; println!("{}", wrapper.fill(text)) } ``` The output now looks like this: ``` textwrap: a small library for wrap- ping text. ``` The hyphenation uses high-quality TeX hyphenation patterns. ## Examples The library comes with some small example programs that shows various features. ### Layout Example The `layout` example shows how a fixed example string is wrapped at different widths. Run the example with: ```shell $ cargo run --features hyphenation --example layout ``` The program will use the following string: > Memory safety without garbage collection. Concurrency without data > races. Zero-cost abstractions. The string is wrapped at all widths between 15 and 60 columns. With narrow columns the output looks like this: ``` .--- Width: 15 ---. | Memory safety | | without garbage | | collection. | | Concurrency | | without data | | races. Zero- | | cost abstrac- | | tions. | .--- Width: 16 ----. | Memory safety | | without garbage | | collection. Con- | | currency without | | data races. Ze- | | ro-cost abstrac- | | tions. | ``` Later, longer lines are used and the output now looks like this: ``` .-------------------- Width: 49 --------------------. | Memory safety without garbage collection. Concur- | | rency without data races. Zero-cost abstractions. | .---------------------- Width: 53 ----------------------. | Memory safety without garbage collection. Concurrency | | without data races. Zero-cost abstractions. | .------------------------- Width: 59 -------------------------. | Memory safety without garbage collection. Concurrency with- | | out data races. Zero-cost abstractions. | ``` Notice how words are split at hyphens (such as "zero-cost") but also how words are hyphenated using automatic/machine hyphenation. ### Terminal Width Example The `termwidth` example simply shows how the width can be set automatically to the current terminal width. Run it with this command: ``` $ cargo run --example termwidth ``` If you run it in a narrow terminal, you'll see output like this: ``` Formatted in within 60 columns: ---- Memory safety without garbage collection. Concurrency without data races. Zero-cost abstractions. ---- ``` If `stdout` is not connected to the terminal, the program will use a default of 80 columns for the width: ``` $ cargo run --example termwidth | cat Formatted in within 80 columns: ---- Memory safety without garbage collection. Concurrency without data races. Zero- cost abstractions. ---- ``` ## Release History This section lists the largest changes per release. ### Version 0.11.0 — December 9th, 2018 Due to our dependencies bumping their minimum supported version of Rust, the minimum version of Rust we test against is now 1.22.0. * Merged [#141][issue-141]: Fix `dedent` handling of empty lines and trailing newlines. Thanks @bbqsrc! * Fixed [#151][issue-151]: Release of version with hyphenation 0.7. ### Version 0.10.0 — April 28th, 2018 Due to our dependencies bumping their minimum supported version of Rust, the minimum version of Rust we test against is now 1.17.0. * Fixed [#99][issue-99]: Word broken even though it would fit on line. 
* Fixed [#107][issue-107]: Automatic hyphenation is off by one. * Fixed [#122][issue-122]: Take newlines into account when wrapping. * Fixed [#129][issue-129]: Panic on string with em-dash. ### Version 0.9.0 — October 5th, 2017 The dependency on `term_size` is now optional, and by default this feature is not enabled. This is a *breaking change* for users of `Wrapper::with_termwidth`. Enable the `term_size` feature to restore the old functionality. Added a regression test for the case where `width` is set to `usize::MAX`, thanks @Fraser999! All public structs now implement `Debug`, thanks @hcpl! * Fixed [#101][issue-101]: Make `term_size` an optional dependency. ### Version 0.8.0 — September 4th, 2017 The `Wrapper` stuct is now generic over the type of word splitter being used. This means less boxing and a nicer API. The `Wrapper::word_splitter` method has been removed. This is a *breaking API change* if you used the method to change the word splitter. The `Wrapper` struct has two new methods that will wrap the input text lazily: `Wrapper::wrap_iter` and `Wrapper::into_wrap_iter`. Use those if you will be iterating over the wrapped lines one by one. * Fixed [#59][issue-59]: `wrap` could return an iterator. Thanks @hcpl! * Fixed [#81][issue-81]: Set `html_root_url`. ### Version 0.7.0 — July 20th, 2017 Version 0.7.0 changes the return type of `Wrapper::wrap` from `Vec` to `Vec>`. This means that the output lines borrow data from the input string. This is a *breaking API change* if you relied on the exact return type of `Wrapper::wrap`. Callers of the `textwrap::fill` convenience function will see no breakage. The above change and other optimizations makes version 0.7.0 roughly 15-30% faster than version 0.6.0. The `squeeze_whitespace` option has been removed since it was complicating the above optimization. Let us know if this option is important for you so we can provide a work around. * Fixed [#58][issue-58]: Add a "fast_wrap" function. * Fixed [#61][issue-61]: Documentation errors. ### Version 0.6.0 — May 22nd, 2017 Version 0.6.0 adds builder methods to `Wrapper` for easy one-line initialization and configuration: ```rust let wrapper = Wrapper::new(60).break_words(false); ``` It also add a new `NoHyphenation` word splitter that will never split words, not even at existing hyphens. * Fixed [#28][issue-28]: Support not squeezing whitespace. ### Version 0.5.0 — May 15th, 2017 Version 0.5.0 has *breaking API changes*. However, this only affects code using the hyphenation feature. The feature is now optional, so you will first need to enable the `hyphenation` feature as described above. Afterwards, please change your code from ```rust wrapper.corpus = Some(&corpus); ``` to ```rust wrapper.splitter = Box::new(corpus); ``` Other changes include optimizations, so version 0.5.0 is roughly 10-15% faster than version 0.4.0. * Fixed [#19][issue-19]: Add support for finding terminal size. * Fixed [#25][issue-25]: Handle words longer than `self.width`. * Fixed [#26][issue-26]: Support custom indentation. * Fixed [#36][issue-36]: Support building without `hyphenation`. * Fixed [#39][issue-39]: Respect non-breaking spaces. ### Version 0.4.0 — January 24th, 2017 Documented complexities and tested these via `cargo bench`. * Fixed [#13][issue-13]: Immediatedly add word if it fits. * Fixed [#14][issue-14]: Avoid splitting on initial hyphens. ### Version 0.3.0 — January 7th, 2017 Added support for automatic hyphenation. ### Version 0.2.0 — December 28th, 2016 Introduced `Wrapper` struct. 
Added support for wrapping on hyphens. ### Version 0.1.0 — December 17th, 2016 First public release with support for wrapping strings on whitespace. ## License Textwrap can be distributed according to the [MIT license][mit]. Contributions will be accepted under the same license. [crates-io]: https://crates.io/crates/textwrap [travis-ci]: https://travis-ci.org/mgeisler/textwrap [appveyor]: https://ci.appveyor.com/project/mgeisler/textwrap [codecov]: https://codecov.io/gh/mgeisler/textwrap [py-textwrap]: https://docs.python.org/library/textwrap [patterns]: https://github.com/tapeinosyne/hyphenation/tree/master/patterns-tex [api-docs]: https://docs.rs/textwrap/ [issue-13]: https://github.com/mgeisler/textwrap/issues/13 [issue-14]: https://github.com/mgeisler/textwrap/issues/14 [issue-19]: https://github.com/mgeisler/textwrap/issues/19 [issue-25]: https://github.com/mgeisler/textwrap/issues/25 [issue-26]: https://github.com/mgeisler/textwrap/issues/26 [issue-28]: https://github.com/mgeisler/textwrap/issues/28 [issue-36]: https://github.com/mgeisler/textwrap/issues/36 [issue-39]: https://github.com/mgeisler/textwrap/issues/39 [issue-58]: https://github.com/mgeisler/textwrap/issues/58 [issue-59]: https://github.com/mgeisler/textwrap/issues/59 [issue-61]: https://github.com/mgeisler/textwrap/issues/61 [issue-81]: https://github.com/mgeisler/textwrap/issues/81 [issue-99]: https://github.com/mgeisler/textwrap/issues/99 [issue-101]: https://github.com/mgeisler/textwrap/issues/101 [issue-107]: https://github.com/mgeisler/textwrap/issues/107 [issue-122]: https://github.com/mgeisler/textwrap/issues/122 [issue-129]: https://github.com/mgeisler/textwrap/issues/129 [issue-141]: https://github.com/mgeisler/textwrap/issues/141 [issue-151]: https://github.com/mgeisler/textwrap/issues/151 [mit]: LICENSE vendor/typenum/0000775000175000017500000000000014172417313014353 5ustar mwhudsonmwhudsonvendor/typenum/.cargo-checksum.json0000664000175000017500000000013114172417313020212 0ustar mwhudsonmwhudson{"files":{},"package":"dcf81ac59edc17cc8697ff311e8f5ef2d99fcbd9817b34cec66f90b6c3dfd987"}vendor/typenum/LICENSE-APACHE0000664000175000017500000002512314160055207016277 0ustar mwhudsonmwhudson Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2014 Paho Lurie-Gregg Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.vendor/typenum/LICENSE0000664000175000017500000000002114160055207015346 0ustar mwhudsonmwhudsonMIT OR Apache-2.0vendor/typenum/Cargo.toml0000664000175000017500000000242214172417313016303 0ustar mwhudsonmwhudson# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" name = "typenum" version = "1.15.0" authors = ["Paho Lurie-Gregg ", "Andre Bogus "] build = "build/main.rs" description = "Typenum is a Rust library for type-level numbers evaluated at\n compile time. It currently supports bits, unsigned integers, and signed\n integers. It also provides a type-level array of type-level numbers, but its\n implementation is incomplete." 
documentation = "https://docs.rs/typenum" readme = "README.md" categories = ["no-std"] license = "MIT OR Apache-2.0" repository = "https://github.com/paholg/typenum" [lib] name = "typenum" [dependencies.scale-info] version = "1.0" optional = true default-features = false [features] force_unix_path_separator = [] i128 = [] no_std = [] scale_info = ["scale-info/derive"] strict = [] vendor/typenum/CHANGELOG.md0000664000175000017500000001050014172417313016160 0ustar mwhudsonmwhudson# Changelog This project follows semantic versioning. The MSRV (Minimum Supported Rust Version) is 1.37.0, and typenum is tested against this Rust version. ### Unreleased ### 1.15.0 (2021-12-25) - [fixed] Cross-compilation issue due to doing math in build script. (PR #177) - [added] New feature `scale_info` for using inside [Substrate](https://github.com/paritytech/substrate.git)-based runtimes (PR #175) ### 1.14.0 (2021-09-01) - [changed] Sealed all marker traits. Documentation already stated that these should not be implemented outside the crate, so this is not considered a breaking change. ### 1.13.0 (2021-03-12) - [changed] MSRV from 1.22.0 to 1.37.0. - [fixed] `op` macro with 2018 edition import. - [changed] Allowed calling `assert_type_eq` and `assert_type` at top level. - [added] Marker trait `Zero` for `Z0`, `U0`, and `B0`. - [added] Implementation of `Pow` trait for f32 and f64 with negative exponent. - [added] Trait `ToInt`. ### 1.12.0 (2020-04-13) - [added] Feature `force_unix_path_separator` to support building without Cargo. - [added] Greatest common divisor operator `Gcd` with alias `Gcf`. - [added] `gcd` to the `op!` macro. - [changed] Added `Copy` bound to `Rhs` of `Mul` impl for ``. - [changed] Added `Copy` bound to `Rhs` of `Div` impl for ``. - [changed] Added `Copy` bound to `Rhs` of `PartialDiv` impl for ``. - [changed] Added `Copy` bound to `Rhs` of `Rem` impl for ``. - [fixed] Make all functions #[inline]. ### 1.11.2 (2019-08-26) - [fixed] Cross compilation from Linux to Windows. ### 1.11.1 (2019-08-25) - [fixed] Builds on earlier Rust builds again and added Rust 1.22.0 to Travis to prevent future breakage. ### 1.11.0 (2019-08-25) - [added] Integer `log2` to the `op!` macro. - [added] Integer binary logarithm operator `Logarithm2` with alias `Log2`. - [changed] Removed `feature(i128_type)` when running with the `i128` feature. Kept the feature flag. for typenum to maintain compatibility with old Rust versions. - [added] Integer `sqrt` to the `op!` macro. - [added] Integer square root operator `SquareRoot` with alias `Sqrt`. - [fixed] Bug with attempting to create U1024 type alias twice. ### 1.10.0 (2018-03-11) - [added] The `PowerOfTwo` marker trait. - [added] Associated constants for `Bit`, `Unsigned`, and `Integer`. ### 1.9.0 (2017-05-14) - [added] The `Abs` type operater and corresponding `AbsVal` alias. - [added] The feature `i128` that enables creating 128-bit integers from typenums. - [added] The `assert_type!` and `assert_type_eq!` macros. - [added] Operators to the `op!` macro, including those performed by `cmp!`. - [fixed] Bug in `op!` macro involving functions and convoluted expressions. - [deprecated] The `cmp!` macro. ### 1.8.0 (2017-04-12) - [added] The `op!` macro for conveniently performing type-level operations. - [added] The `cmp!` macro for conveniently performing type-level comparisons. - [added] Some comparison type-operators that are used by the `cmp!` macro. 
### 1.7.0 (2017-03-24) - [added] Type operators `Min` and `Max` with accompanying aliases `Minimum` and `Maximum` ### 1.6.0 (2017-02-24) - [fixed] Bug in `Array` division. - [fixed] Bug where `Rem` would sometimes exit early with the wrong answer. - [added] `PartialDiv` operator that performs division as a partial function -- it's defined only when there is no remainder. ### 1.5.2 (2017-02-04) - [fixed] Bug between `Div` implementation and type system. ### 1.5.1 (2016-11-08) - [fixed] Expanded implementation of `Pow` for primitives. ### 1.5.0 (2016-11-03) - [added] Functions to the `Pow` and `Len` traits. This is *technically* a breaking change, but it would only break someone's code if they have a custom impl for `Pow`. I would be very surprised if that is anyone other than me. ### 1.4.0 (2016-10-29) - [added] Type-level arrays of type-level integers. (PR #66) - [added] The types in this crate are now instantiable. (Issue #67, PR #68) ### 1.3.1 (2016-03-31) - [fixed] Bug with recent nightlies. ### 1.3.0 (2016-02-07) - [changed] Removed dependency on libstd. (Issue #53, PR #55) - [changed] Reorganized module structure. (PR #57) ### 1.2.0 (2016-01-03) - [added] This change log! - [added] Convenience type aliases for operators. (Issue #48, PR #50) - [added] Types in this crate now derive all possible traits. (Issue #42, PR #51) vendor/typenum/build/0000775000175000017500000000000014172417313015452 5ustar mwhudsonmwhudsonvendor/typenum/build/main.rs0000664000175000017500000001107414172417313016747 0ustar mwhudsonmwhudsonuse std::env; use std::fmt; use std::fs::File; use std::io::Write; use std::path::Path; mod op; mod tests; pub enum UIntCode { Term, Zero(Box), One(Box), } pub enum IntCode { Zero, Pos(Box), Neg(Box), } impl fmt::Display for UIntCode { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { UIntCode::Term => write!(f, "UTerm"), UIntCode::Zero(ref inner) => write!(f, "UInt<{}, B0>", inner), UIntCode::One(ref inner) => write!(f, "UInt<{}, B1>", inner), } } } impl fmt::Display for IntCode { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { IntCode::Zero => write!(f, "Z0"), IntCode::Pos(ref inner) => write!(f, "PInt<{}>", inner), IntCode::Neg(ref inner) => write!(f, "NInt<{}>", inner), } } } pub fn gen_uint(u: u64) -> UIntCode { let mut result = UIntCode::Term; let mut x = 1u64 << 63; while x > u { x >>= 1 } while x > 0 { result = if x & u > 0 { UIntCode::One(Box::new(result)) } else { UIntCode::Zero(Box::new(result)) }; x >>= 1; } result } pub fn gen_int(i: i64) -> IntCode { use std::cmp::Ordering::{Equal, Greater, Less}; match i.cmp(&0) { Greater => IntCode::Pos(Box::new(gen_uint(i as u64))), Less => IntCode::Neg(Box::new(gen_uint(i.abs() as u64))), Equal => IntCode::Zero, } } #[cfg_attr( feature = "no_std", deprecated( since = "1.3.0", note = "the `no_std` flag is no longer necessary and will be removed in the future" ) )] pub fn no_std() {} // fixme: get a warning when testing without this #[allow(dead_code)] fn main() { let highest: u64 = 1024; // Use hardcoded values to avoid issues with cross-compilation. 
// See https://github.com/paholg/typenum/issues/162 let first2: u32 = 11; // (highest as f64).log(2.0).round() as u32 + 1; let first10: u32 = 4; // (highest as f64).log(10.0) as u32 + 1; let uints = (0..(highest + 1)) .chain((first2..64).map(|i| 2u64.pow(i))) .chain((first10..20).map(|i| 10u64.pow(i))); let out_dir = env::var("OUT_DIR").unwrap(); let dest = Path::new(&out_dir).join("consts.rs"); println!("cargo:rustc-env=TYPENUM_BUILD_CONSTS={}", dest.display()); let mut f = File::create(&dest).unwrap(); no_std(); // Header stuff here! write!( f, " /** Type aliases for many constants. This file is generated by typenum's build script. For unsigned integers, the format is `U` followed by the number. We define aliases for - Numbers 0 through {highest} - Powers of 2 below `u64::MAX` - Powers of 10 below `u64::MAX` These alias definitions look like this: ```rust use typenum::{{B0, B1, UInt, UTerm}}; # #[allow(dead_code)] type U6 = UInt, B1>, B0>; ``` For positive signed integers, the format is `P` followed by the number and for negative signed integers it is `N` followed by the number. For the signed integer zero, we use `Z0`. We define aliases for - Numbers -{highest} through {highest} - Powers of 2 between `i64::MIN` and `i64::MAX` - Powers of 10 between `i64::MIN` and `i64::MAX` These alias definitions look like this: ```rust use typenum::{{B0, B1, UInt, UTerm, PInt, NInt}}; # #[allow(dead_code)] type P6 = PInt, B1>, B0>>; # #[allow(dead_code)] type N6 = NInt, B1>, B0>>; ``` # Example ```rust # #[allow(unused_imports)] use typenum::{{U0, U1, U2, U3, U4, U5, U6}}; # #[allow(unused_imports)] use typenum::{{N3, N2, N1, Z0, P1, P2, P3}}; # #[allow(unused_imports)] use typenum::{{U774, N17, N10000, P1024, P4096}}; ``` We also define the aliases `False` and `True` for `B0` and `B1`, respectively. */ #[allow(missing_docs)] pub mod consts {{ use crate::uint::{{UInt, UTerm}}; use crate::int::{{PInt, NInt}}; pub use crate::bit::{{B0, B1}}; pub use crate::int::Z0; pub type True = B1; pub type False = B0; ", highest = highest ) .unwrap(); for u in uints { writeln!(f, " pub type U{} = {};", u, gen_uint(u)).unwrap(); if u <= ::std::i64::MAX as u64 && u != 0 { let i = u as i64; writeln!( f, " pub type P{i} = PInt; pub type N{i} = NInt;", i = i ) .unwrap(); } } write!(f, "}}").unwrap(); tests::build_tests().unwrap(); op::write_op_macro().unwrap(); } vendor/typenum/build/tests.rs0000664000175000017500000002137514160055207017167 0ustar mwhudsonmwhudsonuse std::{env, fmt, fs, io, path}; use super::{gen_int, gen_uint}; /// Computes the greatest common divisor of two integers. 
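/// For example, `gcdi(12, -18)` is `6` and `gcdi(0, 7)` is `7`; the sign of
/// the inputs is discarded before running the Euclidean algorithm below.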
fn gcdi(mut a: i64, mut b: i64) -> i64 { a = a.abs(); b = b.abs(); while a != 0 { let tmp = b % a; b = a; a = tmp; } b } fn gcdu(mut a: u64, mut b: u64) -> u64 { while a != 0 { let tmp = b % a; b = a; a = tmp; } b } fn sign(i: i64) -> char { use std::cmp::Ordering::*; match i.cmp(&0) { Greater => 'P', Less => 'N', Equal => '_', } } struct UIntTest { a: u64, op: &'static str, b: Option, r: u64, } impl fmt::Display for UIntTest { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self.b { Some(b) => write!( f, " #[test] #[allow(non_snake_case)] fn test_{a}_{op}_{b}() {{ type A = {gen_a}; type B = {gen_b}; type U{r} = {result}; #[allow(non_camel_case_types)] type U{a}{op}U{b} = <>::Output as Same>::Output; assert_eq!(::to_u64(), ::to_u64()); }}", gen_a = gen_uint(self.a), gen_b = gen_uint(b), r = self.r, result = gen_uint(self.r), a = self.a, b = b, op = self.op ), None => write!( f, " #[test] #[allow(non_snake_case)] fn test_{a}_{op}() {{ type A = {gen_a}; type U{r} = {result}; #[allow(non_camel_case_types)] type {op}U{a} = <::Output as Same>::Output; assert_eq!(<{op}U{a} as Unsigned>::to_u64(), ::to_u64()); }}", gen_a = gen_uint(self.a), r = self.r, result = gen_uint(self.r), a = self.a, op = self.op ), } } } fn uint_binary_test(left: u64, operator: &'static str, right: u64, result: u64) -> UIntTest { UIntTest { a: left, op: operator, b: Option::Some(right), r: result, } } // fn uint_unary_test(op: &'static str, a: u64, result: u64) -> UIntTest { // UIntTest { a: a, op: op, b: Option::None, r: result } // } struct IntBinaryTest { a: i64, op: &'static str, b: i64, r: i64, } impl fmt::Display for IntBinaryTest { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, " #[test] #[allow(non_snake_case)] fn test_{sa}{a}_{op}_{sb}{b}() {{ type A = {gen_a}; type B = {gen_b}; type {sr}{r} = {result}; #[allow(non_camel_case_types)] type {sa}{a}{op}{sb}{b} = <>::Output as Same<{sr}{r}>>::Output; assert_eq!(<{sa}{a}{op}{sb}{b} as Integer>::to_i64(), <{sr}{r} as Integer>::to_i64()); }}", gen_a = gen_int(self.a), gen_b = gen_int(self.b), r = self.r.abs(), sr = sign(self.r), result = gen_int(self.r), a = self.a.abs(), b = self.b.abs(), sa = sign(self.a), sb = sign(self.b), op = self.op ) } } fn int_binary_test(left: i64, operator: &'static str, right: i64, result: i64) -> IntBinaryTest { IntBinaryTest { a: left, op: operator, b: right, r: result, } } struct IntUnaryTest { op: &'static str, a: i64, r: i64, } impl fmt::Display for IntUnaryTest { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, " #[test] #[allow(non_snake_case)] fn test_{sa}{a}_{op}() {{ type A = {gen_a}; type {sr}{r} = {result}; #[allow(non_camel_case_types)] type {op}{sa}{a} = <::Output as Same<{sr}{r}>>::Output; assert_eq!(<{op}{sa}{a} as Integer>::to_i64(), <{sr}{r} as Integer>::to_i64()); }}", gen_a = gen_int(self.a), r = self.r.abs(), sr = sign(self.r), result = gen_int(self.r), a = self.a.abs(), sa = sign(self.a), op = self.op ) } } fn int_unary_test(operator: &'static str, num: i64, result: i64) -> IntUnaryTest { IntUnaryTest { op: operator, a: num, r: result, } } fn uint_cmp_test(a: u64, b: u64) -> String { format!( " #[test] #[allow(non_snake_case)] fn test_{a}_Cmp_{b}() {{ type A = {gen_a}; type B = {gen_b}; #[allow(non_camel_case_types)] type U{a}CmpU{b} = >::Output; assert_eq!(::to_ordering(), Ordering::{result:?}); }}", a = a, b = b, gen_a = gen_uint(a), gen_b = gen_uint(b), result = a.cmp(&b) ) } fn int_cmp_test(a: i64, b: i64) -> String { format!( " #[test] #[allow(non_snake_case)] fn 
test_{sa}{a}_Cmp_{sb}{b}() {{ type A = {gen_a}; type B = {gen_b}; #[allow(non_camel_case_types)] type {sa}{a}Cmp{sb}{b} = >::Output; assert_eq!(<{sa}{a}Cmp{sb}{b} as Ord>::to_ordering(), Ordering::{result:?}); }}", a = a.abs(), b = b.abs(), sa = sign(a), sb = sign(b), gen_a = gen_int(a), gen_b = gen_int(b), result = a.cmp(&b) ) } // Allow for rustc 1.22 compatibility. #[allow(bare_trait_objects)] pub fn build_tests() -> Result<(), Box<::std::error::Error>> { // will test all permutations of number pairs up to this (and down to its opposite for ints) let high: i64 = 5; let uints = (0u64..high as u64 + 1).flat_map(|a| (a..a + 1).cycle().zip(0..high as u64 + 1)); let ints = (-high..high + 1).flat_map(|a| (a..a + 1).cycle().zip(-high..high + 1)); let out_dir = env::var("OUT_DIR")?; let dest = path::Path::new(&out_dir).join("tests.rs"); let f = fs::File::create(&dest)?; let mut writer = io::BufWriter::new(&f); use std::io::Write; writer.write_all( b" extern crate typenum; use std::ops::*; use std::cmp::Ordering; use typenum::*; ", )?; use std::cmp; // uint operators: for (a, b) in uints { write!(writer, "{}", uint_binary_test(a, "BitAnd", b, a & b))?; write!(writer, "{}", uint_binary_test(a, "BitOr", b, a | b))?; write!(writer, "{}", uint_binary_test(a, "BitXor", b, a ^ b))?; write!(writer, "{}", uint_binary_test(a, "Shl", b, a << b))?; write!(writer, "{}", uint_binary_test(a, "Shr", b, a >> b))?; write!(writer, "{}", uint_binary_test(a, "Add", b, a + b))?; write!(writer, "{}", uint_binary_test(a, "Min", b, cmp::min(a, b)))?; write!(writer, "{}", uint_binary_test(a, "Max", b, cmp::max(a, b)))?; write!(writer, "{}", uint_binary_test(a, "Gcd", b, gcdu(a, b)))?; if a >= b { write!(writer, "{}", uint_binary_test(a, "Sub", b, a - b))?; } write!(writer, "{}", uint_binary_test(a, "Mul", b, a * b))?; if b != 0 { write!(writer, "{}", uint_binary_test(a, "Div", b, a / b))?; write!(writer, "{}", uint_binary_test(a, "Rem", b, a % b))?; if a % b == 0 { write!(writer, "{}", uint_binary_test(a, "PartialDiv", b, a / b))?; } } write!(writer, "{}", uint_binary_test(a, "Pow", b, a.pow(b as u32)))?; write!(writer, "{}", uint_cmp_test(a, b))?; } // int operators: for (a, b) in ints { write!(writer, "{}", int_binary_test(a, "Add", b, a + b))?; write!(writer, "{}", int_binary_test(a, "Sub", b, a - b))?; write!(writer, "{}", int_binary_test(a, "Mul", b, a * b))?; write!(writer, "{}", int_binary_test(a, "Min", b, cmp::min(a, b)))?; write!(writer, "{}", int_binary_test(a, "Max", b, cmp::max(a, b)))?; write!(writer, "{}", int_binary_test(a, "Gcd", b, gcdi(a, b)))?; if b != 0 { write!(writer, "{}", int_binary_test(a, "Div", b, a / b))?; write!(writer, "{}", int_binary_test(a, "Rem", b, a % b))?; if a % b == 0 { write!(writer, "{}", int_binary_test(a, "PartialDiv", b, a / b))?; } } if b >= 0 || a.abs() == 1 { let result = if b < 0 { if a == 1 { a } else if a == -1 { a.pow((-b) as u32) } else { unreachable!() } } else { a.pow(b as u32) }; write!(writer, "{}", int_binary_test(a, "Pow", b, result))?; } write!(writer, "{}", int_cmp_test(a, b))?; } // int unary operators: for n in -high..high + 1 { write!(writer, "{}", int_unary_test("Neg", n, -n))?; write!(writer, "{}", int_unary_test("Abs", n, n.abs()))?; } writer.flush()?; Ok(()) } vendor/typenum/build/op.rs0000664000175000017500000003606014160055207016440 0ustar mwhudsonmwhudson#[derive(Debug, Copy, Clone, Eq, PartialEq)] enum OpType { Operator, Function, } use self::OpType::*; struct Op { token: &'static str, operator: &'static str, example: (&'static str, &'static str), 
precedence: u8, n_args: u8, op_type: OpType, } pub fn write_op_macro() -> ::std::io::Result<()> { let out_dir = ::std::env::var("OUT_DIR").unwrap(); let dest = ::std::path::Path::new(&out_dir).join("op.rs"); println!("cargo:rustc-env=TYPENUM_BUILD_OP={}", dest.display()); let mut f = ::std::fs::File::create(&dest).unwrap(); // Operator precedence is taken from // https://doc.rust-lang.org/reference.html#operator-precedence // // We choose 16 as the highest precedence (functions are set to 255 but it doesn't matter // for them). We also only use operators that are left associative so we don't have to worry // about that. let ops = &[ Op { token: "*", operator: "Prod", example: ("P2 * P3", "P6"), precedence: 16, n_args: 2, op_type: Operator, }, Op { token: "/", operator: "Quot", example: ("P6 / P2", "P3"), precedence: 16, n_args: 2, op_type: Operator, }, Op { token: "%", operator: "Mod", example: ("P5 % P3", "P2"), precedence: 16, n_args: 2, op_type: Operator, }, Op { token: "+", operator: "Sum", example: ("P2 + P3", "P5"), precedence: 15, n_args: 2, op_type: Operator, }, Op { token: "-", operator: "Diff", example: ("P2 - P3", "N1"), precedence: 15, n_args: 2, op_type: Operator, }, Op { token: "<<", operator: "Shleft", example: ("U1 << U5", "U32"), precedence: 14, n_args: 2, op_type: Operator, }, Op { token: ">>", operator: "Shright", example: ("U32 >> U5", "U1"), precedence: 14, n_args: 2, op_type: Operator, }, Op { token: "&", operator: "And", example: ("U5 & U3", "U1"), precedence: 13, n_args: 2, op_type: Operator, }, Op { token: "^", operator: "Xor", example: ("U5 ^ U3", "U6"), precedence: 12, n_args: 2, op_type: Operator, }, Op { token: "|", operator: "Or", example: ("U5 | U3", "U7"), precedence: 11, n_args: 2, op_type: Operator, }, Op { token: "==", operator: "Eq", example: ("P5 == P3 + P2", "True"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: "!=", operator: "NotEq", example: ("P5 != P3 + P2", "False"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: "<=", operator: "LeEq", example: ("P6 <= P3 + P2", "False"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: ">=", operator: "GrEq", example: ("P6 >= P3 + P2", "True"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: "<", operator: "Le", example: ("P4 < P3 + P2", "True"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: ">", operator: "Gr", example: ("P5 < P3 + P2", "False"), precedence: 10, n_args: 2, op_type: Operator, }, Op { token: "cmp", operator: "Compare", example: ("cmp(P2, P3)", "Less"), precedence: !0, n_args: 2, op_type: Function, }, Op { token: "sqr", operator: "Square", example: ("sqr(P2)", "P4"), precedence: !0, n_args: 1, op_type: Function, }, Op { token: "sqrt", operator: "Sqrt", example: ("sqrt(U9)", "U3"), precedence: !0, n_args: 1, op_type: Function, }, Op { token: "abs", operator: "AbsVal", example: ("abs(N2)", "P2"), precedence: !0, n_args: 1, op_type: Function, }, Op { token: "cube", operator: "Cube", example: ("cube(P2)", "P8"), precedence: !0, n_args: 1, op_type: Function, }, Op { token: "pow", operator: "Exp", example: ("pow(P2, P3)", "P8"), precedence: !0, n_args: 2, op_type: Function, }, Op { token: "min", operator: "Minimum", example: ("min(P2, P3)", "P2"), precedence: !0, n_args: 2, op_type: Function, }, Op { token: "max", operator: "Maximum", example: ("max(P2, P3)", "P3"), precedence: !0, n_args: 2, op_type: Function, }, Op { token: "log2", operator: "Log2", example: ("log2(U9)", "U3"), precedence: !0, n_args: 1, op_type: Function, }, Op { 
token: "gcd", operator: "Gcf", example: ("gcd(U9, U21)", "U3"), precedence: !0, n_args: 2, op_type: Function, }, ]; use std::io::Write; write!( f, " /** Convenient type operations. Any types representing values must be able to be expressed as `ident`s. That means they need to be in scope. For example, `P5` is okay, but `typenum::P5` is not. You may combine operators arbitrarily, although doing so excessively may require raising the recursion limit. # Example ```rust #![recursion_limit=\"128\"] #[macro_use] extern crate typenum; use typenum::consts::*; fn main() {{ assert_type!( op!(min((P1 - P2) * (N3 + N7), P5 * (P3 + P4)) == P10) ); }} ``` Operators are evaluated based on the operator precedence outlined [here](https://doc.rust-lang.org/reference.html#operator-precedence). The full list of supported operators and functions is as follows: {} They all expand to type aliases defined in the `operator_aliases` module. Here is an expanded list, including examples: ", ops.iter() .map(|op| format!("`{}`", op.token)) .collect::>() .join(", ") )?; //write!(f, "Token | Alias | Example\n ===|===|===\n")?; for op in ops.iter() { write!( f, "---\nOperator `{token}`. Expands to `{operator}`. ```rust # #[macro_use] extern crate typenum; # use typenum::*; # fn main() {{ assert_type_eq!(op!({ex0}), {ex1}); # }} ```\n ", token = op.token, operator = op.operator, ex0 = op.example.0, ex1 = op.example.1 )?; } write!( f, "*/ #[macro_export(local_inner_macros)] macro_rules! op {{ ($($tail:tt)*) => ( __op_internal__!($($tail)*) ); }} #[doc(hidden)] #[macro_export(local_inner_macros)] macro_rules! __op_internal__ {{ " )?; // We first us the shunting-yard algorithm to produce our tokens in Polish notation. // See: https://en.wikipedia.org/wiki/Shunting-yard_algorithm // Note: Due to macro asymmetry, "the top of the stack" refers to the first element, not the // last // ----------------------------------------------------------------------------------------- // Stage 1: There are tokens to be read: // ------- // Case 1: Token is a function => Push it onto the stack: for fun in ops.iter().filter(|f| f.op_type == Function) { write!( f, " (@stack[$($stack:ident,)*] @queue[$($queue:ident,)*] @tail: {f_token} $($tail:tt)*) => ( __op_internal__!(@stack[{f_op}, $($stack,)*] @queue[$($queue,)*] @tail: $($tail)*) );", f_token = fun.token, f_op = fun.operator )?; } // ------- // Case 2: Token is a comma => Until the top of the stack is a LParen, // Pop operators from stack to queue // Base case: Top of stack is LParen, ditch comma and continue write!( f, " (@stack[LParen, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: , $($tail:tt)*) => ( __op_internal__!(@stack[LParen, $($stack,)*] @queue[$($queue,)*] @tail: $($tail)*) );" )?; // Recursive case: Not LParen, pop from stack to queue write!( f, " (@stack[$stack_top:ident, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: , $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[$stack_top, $($queue,)*] @tail: , $($tail)*) );" )?; // ------- // Case 3: Token is an operator, o1: for o1 in ops.iter().filter(|op| op.op_type == Operator) { // If top of stack is operator o2 with o1.precedence <= o2.precedence, // Then pop o2 off stack onto queue: for o2 in ops .iter() .filter(|op| op.op_type == Operator) .filter(|o2| o1.precedence <= o2.precedence) { write!( f, " (@stack[{o2_op}, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: {o1_token} $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[{o2_op}, $($queue,)*] @tail: {o1_token} $($tail)*) );", o2_op = 
o2.operator, o1_token = o1.token )?; } // Base case: push o1 onto stack write!( f, " (@stack[$($stack:ident,)*] @queue[$($queue:ident,)*] @tail: {o1_token} $($tail:tt)*) => ( __op_internal__!(@stack[{o1_op}, $($stack,)*] @queue[$($queue,)*] @tail: $($tail)*) );", o1_op = o1.operator, o1_token = o1.token )?; } // ------- // Case 4: Token is "(": push it onto stack as "LParen". Also convert the ")" to "RParen" to // appease the macro gods: write!( f, " (@stack[$($stack:ident,)*] @queue[$($queue:ident,)*] @tail: ( $($stuff:tt)* ) $($tail:tt)* ) => ( __op_internal__!(@stack[LParen, $($stack,)*] @queue[$($queue,)*] @tail: $($stuff)* RParen $($tail)*) );" )?; // ------- // Case 5: Token is "RParen": // 1. Pop from stack to queue until we see an "LParen", // 2. Kill the "LParen", // 3. If the top of the stack is a function, pop it onto the queue // 2. Base case: write!( f, " (@stack[LParen, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: RParen $($tail:tt)*) => ( __op_internal__!(@rp3 @stack[$($stack,)*] @queue[$($queue,)*] @tail: $($tail)*) );" )?; // 1. Recursive case: write!( f, " (@stack[$stack_top:ident, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: RParen $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[$stack_top, $($queue,)*] @tail: RParen $($tail)*) );" )?; // 3. Check for function: for fun in ops.iter().filter(|f| f.op_type == Function) { write!( f, " (@rp3 @stack[{fun_op}, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail: $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[{fun_op}, $($queue,)*] @tail: $($tail)*) );", fun_op = fun.operator )?; } // 3. If no function found: write!( f, " (@rp3 @stack[$($stack:ident,)*] @queue[$($queue:ident,)*] @tail: $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[$($queue,)*] @tail: $($tail)*) );" )?; // ------- // Case 6: Token is a number: Push it onto the queue write!( f, " (@stack[$($stack:ident,)*] @queue[$($queue:ident,)*] @tail: $num:ident $($tail:tt)*) => ( __op_internal__!(@stack[$($stack,)*] @queue[$num, $($queue,)*] @tail: $($tail)*) );" )?; // ------- // Case 7: Out of tokens: // Base case: Stack empty: Start evaluating write!( f, " (@stack[] @queue[$($queue:ident,)*] @tail: ) => ( __op_internal__!(@reverse[] @input: $($queue,)*) );" )?; // Recursive case: Pop stack to queue write!( f, " (@stack[$stack_top:ident, $($stack:ident,)*] @queue[$($queue:ident,)*] @tail:) => ( __op_internal__!(@stack[$($stack,)*] @queue[$stack_top, $($queue,)*] @tail: ) );" )?; // ----------------------------------------------------------------------------------------- // Stage 2: Reverse so we have RPN write!( f, " (@reverse[$($revved:ident,)*] @input: $head:ident, $($tail:ident,)* ) => ( __op_internal__!(@reverse[$head, $($revved,)*] @input: $($tail,)*) );" )?; write!( f, " (@reverse[$($revved:ident,)*] @input: ) => ( __op_internal__!(@eval @stack[] @input[$($revved,)*]) );" )?; // ----------------------------------------------------------------------------------------- // Stage 3: Evaluate in Reverse Polish Notation // Operators / Operators with 2 args: for op in ops.iter().filter(|op| op.n_args == 2) { // Note: We have to switch $a and $b here, otherwise non-commutative functions are backwards write!( f, " (@eval @stack[$a:ty, $b:ty, $($stack:ty,)*] @input[{op}, $($tail:ident,)*]) => ( __op_internal__!(@eval @stack[$crate::{op}<$b, $a>, $($stack,)*] @input[$($tail,)*]) );", op = op.operator )?; } // Operators with 1 arg: for op in ops.iter().filter(|op| op.n_args == 1) { write!( f, " (@eval @stack[$a:ty, 
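The macro rules being written out here encode, case by case, exactly the two-pass scheme the build script's comments describe: a shunting-yard pass that reorders the tokens into reverse Polish notation, then a stack-based evaluation of that queue. For comparison only, here is a minimal runtime sketch of the same idea over plain integers with two left-associative operators; the token type and function names are invented for this sketch and are not part of typenum or its build script:

```rust
// Illustrative only: a runtime version of the two stages the generated macro
// performs at the type level.
#[derive(Clone, Copy)]
enum Tok {
    Num(i64),
    Op(char), // '+' and '*', with '*' binding tighter, mirroring the precedence table above
}

fn precedence(op: char) -> u8 {
    match op {
        '*' => 16,
        '+' => 15,
        _ => unreachable!(),
    }
}

// Stage 1: shunting-yard. Numbers go straight to the output queue; an operator first
// pops anything on the stack with greater-or-equal precedence, then is pushed itself.
fn to_rpn(tokens: &[Tok]) -> Vec<Tok> {
    let mut queue = Vec::new();
    let mut stack: Vec<char> = Vec::new();
    for &t in tokens {
        match t {
            Tok::Num(_) => queue.push(t),
            Tok::Op(o1) => {
                while let Some(&o2) = stack.last() {
                    if precedence(o1) <= precedence(o2) {
                        queue.push(Tok::Op(stack.pop().unwrap()));
                    } else {
                        break;
                    }
                }
                stack.push(o1);
            }
        }
    }
    while let Some(o) = stack.pop() {
        queue.push(Tok::Op(o));
    }
    queue
}

// Stage 3: evaluate the RPN queue with a value stack. (Stage 2 in the macro merely
// reverses the queue, because macro recursion builds it back-to-front.)
fn eval_rpn(rpn: &[Tok]) -> i64 {
    let mut stack = Vec::new();
    for &t in rpn {
        match t {
            Tok::Num(n) => stack.push(n),
            Tok::Op(o) => {
                let a = stack.pop().unwrap();
                let b = stack.pop().unwrap();
                stack.push(match o {
                    '+' => b + a,
                    '*' => b * a,
                    _ => unreachable!(),
                });
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // 2 + 3 * 4 becomes RPN 2 3 4 * + and evaluates to 14,
    // the same grouping `op!(U2 + U3 * U4)` produces at the type level.
    let expr = [Tok::Num(2), Tok::Op('+'), Tok::Num(3), Tok::Op('*'), Tok::Num(4)];
    assert_eq!(eval_rpn(&to_rpn(&expr)), 14);
}
```

The `precedence(o1) <= precedence(o2)` test is the same left-associative rule the generated rules implement by emitting one match arm for every operator pair whose precedences satisfy it.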
$($stack:ty,)*] @input[{op}, $($tail:ident,)*]) => ( __op_internal__!(@eval @stack[$crate::{op}<$a>, $($stack,)*] @input[$($tail,)*]) );", op = op.operator )?; } // Wasn't a function or operator, so must be a value => push onto stack write!( f, " (@eval @stack[$($stack:ty,)*] @input[$head:ident, $($tail:ident,)*]) => ( __op_internal__!(@eval @stack[$head, $($stack,)*] @input[$($tail,)*]) );" )?; // No input left: write!( f, " (@eval @stack[$stack:ty,] @input[]) => ( $stack );" )?; // ----------------------------------------------------------------------------------------- // Stage 0: Get it started write!( f, " ($($tail:tt)* ) => ( __op_internal__!(@stack[] @queue[] @tail: $($tail)*) );" )?; write!( f, " }}" )?; Ok(()) } vendor/typenum/clippy.toml0000664000175000017500000000004214160055207016541 0ustar mwhudsonmwhudsoncognitive-complexity-threshold=35 vendor/typenum/src/0000775000175000017500000000000014172417313015142 5ustar mwhudsonmwhudsonvendor/typenum/src/marker_traits.rs0000664000175000017500000001136514160055207020362 0ustar mwhudsonmwhudson//! All of the **marker traits** used in typenum. //! //! Note that the definition here for marker traits is slightly different than //! the conventional one -- we include traits with functions that convert a type //! to the corresponding value, as well as associated constants that do the //! same. //! //! For example, the `Integer` trait includes the function (among others) `fn //! to_i32() -> i32` and the associated constant `I32` so that one can do this: //! //! ``` //! use typenum::{Integer, N42}; //! //! assert_eq!(-42, N42::to_i32()); //! assert_eq!(-42, N42::I32); //! ``` use crate::sealed::Sealed; /// A **marker trait** to designate that a type is not zero. All number types in this /// crate implement `NonZero` except `B0`, `U0`, and `Z0`. pub trait NonZero: Sealed {} /// A **marker trait** to designate that a type is zero. Only `B0`, `U0`, and `Z0` /// implement this trait. pub trait Zero: Sealed {} /// A **Marker trait** for the types `Greater`, `Equal`, and `Less`. pub trait Ord: Sealed { #[allow(missing_docs)] fn to_ordering() -> ::core::cmp::Ordering; } /// The **marker trait** for compile time bits. pub trait Bit: Sealed + Copy + Default + 'static { #[allow(missing_docs)] const U8: u8; #[allow(missing_docs)] const BOOL: bool; /// Instantiates a singleton representing this bit. fn new() -> Self; #[allow(missing_docs)] fn to_u8() -> u8; #[allow(missing_docs)] fn to_bool() -> bool; } /// The **marker trait** for compile time unsigned integers. 
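The conversion functions and associated constants on these marker traits are what let downstream code turn a type-level number back into an ordinary runtime value. As a hedged illustration (the helper name below is invented for this sketch and is not part of typenum), a dependent crate typically writes something like:

```rust
use typenum::{Unsigned, U4, U1024};

// Hypothetical downstream helper: the buffer length lives in the type parameter
// and is read back out through the `Unsigned` associated constant.
fn zeroed_buffer<N: Unsigned>() -> Vec<u8> {
    vec![0u8; N::USIZE]
}

fn main() {
    assert_eq!(zeroed_buffer::<U4>().len(), 4);
    assert_eq!(zeroed_buffer::<U1024>().len(), 1024);
}
```

Because `N::USIZE` is an associated constant, the length is fixed at compile time even though it is consumed by ordinary runtime code.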
/// /// # Example /// ```rust /// use typenum::{Unsigned, U3}; /// /// assert_eq!(U3::to_u32(), 3); /// assert_eq!(U3::I32, 3); /// ``` pub trait Unsigned: Sealed + Copy + Default + 'static { #[allow(missing_docs)] const U8: u8; #[allow(missing_docs)] const U16: u16; #[allow(missing_docs)] const U32: u32; #[allow(missing_docs)] const U64: u64; #[cfg(feature = "i128")] #[allow(missing_docs)] const U128: u128; #[allow(missing_docs)] const USIZE: usize; #[allow(missing_docs)] const I8: i8; #[allow(missing_docs)] const I16: i16; #[allow(missing_docs)] const I32: i32; #[allow(missing_docs)] const I64: i64; #[cfg(feature = "i128")] #[allow(missing_docs)] const I128: i128; #[allow(missing_docs)] const ISIZE: isize; #[allow(missing_docs)] fn to_u8() -> u8; #[allow(missing_docs)] fn to_u16() -> u16; #[allow(missing_docs)] fn to_u32() -> u32; #[allow(missing_docs)] fn to_u64() -> u64; #[cfg(feature = "i128")] #[allow(missing_docs)] fn to_u128() -> u128; #[allow(missing_docs)] fn to_usize() -> usize; #[allow(missing_docs)] fn to_i8() -> i8; #[allow(missing_docs)] fn to_i16() -> i16; #[allow(missing_docs)] fn to_i32() -> i32; #[allow(missing_docs)] fn to_i64() -> i64; #[cfg(feature = "i128")] #[allow(missing_docs)] fn to_i128() -> i128; #[allow(missing_docs)] fn to_isize() -> isize; } /// The **marker trait** for compile time signed integers. /// /// # Example /// ```rust /// use typenum::{Integer, P3}; /// /// assert_eq!(P3::to_i32(), 3); /// assert_eq!(P3::I32, 3); /// ``` pub trait Integer: Sealed + Copy + Default + 'static { #[allow(missing_docs)] const I8: i8; #[allow(missing_docs)] const I16: i16; #[allow(missing_docs)] const I32: i32; #[allow(missing_docs)] const I64: i64; #[cfg(feature = "i128")] #[allow(missing_docs)] const I128: i128; #[allow(missing_docs)] const ISIZE: isize; #[allow(missing_docs)] fn to_i8() -> i8; #[allow(missing_docs)] fn to_i16() -> i16; #[allow(missing_docs)] fn to_i32() -> i32; #[allow(missing_docs)] fn to_i64() -> i64; #[cfg(feature = "i128")] #[allow(missing_docs)] fn to_i128() -> i128; #[allow(missing_docs)] fn to_isize() -> isize; } /// The **marker trait** for type-level arrays of type-level numbers. /// /// Someday, it may contain an associated constant to produce a runtime array, /// like the other marker traits here. However, that is blocked by [this /// issue](https://github.com/rust-lang/rust/issues/44168). pub trait TypeArray: Sealed {} /// The **marker trait** for type-level numbers which are a power of two. /// /// # Examples /// /// Here's a working example: /// /// ```rust /// use typenum::{PowerOfTwo, P4, P8}; /// /// fn only_p2<P: PowerOfTwo>() {} /// /// only_p2::<P4>(); /// only_p2::<P8>(); /// ``` /// /// Numbers which are not a power of two will fail to compile in this example: /// /// ```rust,compile_fail /// use typenum::{P9, P511, P1023, PowerOfTwo}; /// /// fn only_p2<P: PowerOfTwo>() { } /// /// only_p2::<P9>(); /// only_p2::<P511>(); /// only_p2::<P1023>(); /// ``` pub trait PowerOfTwo: Sealed {} vendor/typenum/src/uint.rs0000664000175000017500000017371014172417313016500 0ustar mwhudsonmwhudson//! Type-level unsigned integers. //! //! //! **Type operators** implemented: //! //! From `::core::ops`: `BitAnd`, `BitOr`, `BitXor`, `Shl`, `Shr`, `Add`, `Sub`, //! `Mul`, `Div`, and `Rem`. //! From `typenum`: `Same`, `Cmp`, and `Pow`. //! //! Rather than directly using the structs defined in this module, it is recommended that //! you import and use the relevant aliases from the [consts](../consts/index.html) module. //! //! # Example //! ```rust //!
use std::ops::{Add, BitAnd, BitOr, BitXor, Div, Mul, Rem, Shl, Shr, Sub}; //! use typenum::{Unsigned, U1, U2, U3, U4}; //! //! assert_eq!(<U3 as BitAnd<U2>>::Output::to_u32(), 2); //! assert_eq!(<U3 as BitOr<U4>>::Output::to_u32(), 7); //! assert_eq!(<U3 as BitXor<U2>>::Output::to_u32(), 1); //! assert_eq!(<U3 as Shl<U1>>::Output::to_u32(), 6); //! assert_eq!(<U3 as Shr<U1>>::Output::to_u32(), 1); //! assert_eq!(<U3 as Add<U2>>::Output::to_u32(), 5); //! assert_eq!(<U3 as Sub<U2>>::Output::to_u32(), 1); //! assert_eq!(<U3 as Mul<U2>>::Output::to_u32(), 6); //! assert_eq!(<U3 as Div<U2>>::Output::to_u32(), 1); //! assert_eq!(<U3 as Rem<U2>>::Output::to_u32(), 1); //! ``` use crate::{ bit::{Bit, B0, B1}, consts::{U0, U1}, private::{ BitDiff, BitDiffOut, Internal, InternalMarker, PrivateAnd, PrivateAndOut, PrivateCmp, PrivateCmpOut, PrivateLogarithm2, PrivatePow, PrivatePowOut, PrivateSquareRoot, PrivateSub, PrivateSubOut, PrivateXor, PrivateXorOut, Trim, TrimOut, }, Add1, Cmp, Double, Equal, Gcd, Gcf, GrEq, Greater, IsGreaterOrEqual, Len, Length, Less, Log2, Logarithm2, Maximum, Minimum, NonZero, Or, Ord, Pow, Prod, Shleft, Shright, Sqrt, Square, SquareRoot, Sub1, Sum, ToInt, Zero, }; use core::ops::{Add, BitAnd, BitOr, BitXor, Mul, Shl, Shr, Sub}; pub use crate::marker_traits::{PowerOfTwo, Unsigned}; /// The terminating type for `UInt`; it always comes after the most significant /// bit. `UTerm` by itself represents zero, which is aliased to `U0`. #[derive(Eq, PartialEq, Ord, PartialOrd, Clone, Copy, Hash, Debug, Default)] #[cfg_attr(feature = "scale_info", derive(scale_info::TypeInfo))] pub struct UTerm; impl UTerm { /// Instantiates a singleton representing this unsigned integer. #[inline] pub fn new() -> UTerm { UTerm } } impl Unsigned for UTerm { const U8: u8 = 0; const U16: u16 = 0; const U32: u32 = 0; const U64: u64 = 0; #[cfg(feature = "i128")] const U128: u128 = 0; const USIZE: usize = 0; const I8: i8 = 0; const I16: i16 = 0; const I32: i32 = 0; const I64: i64 = 0; #[cfg(feature = "i128")] const I128: i128 = 0; const ISIZE: isize = 0; #[inline] fn to_u8() -> u8 { 0 } #[inline] fn to_u16() -> u16 { 0 } #[inline] fn to_u32() -> u32 { 0 } #[inline] fn to_u64() -> u64 { 0 } #[cfg(feature = "i128")] #[inline] fn to_u128() -> u128 { 0 } #[inline] fn to_usize() -> usize { 0 } #[inline] fn to_i8() -> i8 { 0 } #[inline] fn to_i16() -> i16 { 0 } #[inline] fn to_i32() -> i32 { 0 } #[inline] fn to_i64() -> i64 { 0 } #[cfg(feature = "i128")] #[inline] fn to_i128() -> i128 { 0 } #[inline] fn to_isize() -> isize { 0 } } /// `UInt` is defined recursively, where `B` is the least significant bit and `U` is the rest /// of the number. Conceptually, `U` should be bound by the trait `Unsigned` and `B` should /// be bound by the trait `Bit`, but enforcing these bounds causes linear instead of /// logarithmic scaling in some places, so they are left off for now. They may be enforced in /// future. /// /// In order to keep numbers unique, leading zeros are not allowed, so `UInt<UTerm, B0>` is /// forbidden. /// /// # Example /// ```rust /// use typenum::{UInt, UTerm, B0, B1}; /// /// # #[allow(dead_code)] /// type U6 = UInt<UInt<UInt<UTerm, B1>, B1>, B0>; /// ``` #[derive(Eq, PartialEq, Ord, PartialOrd, Clone, Copy, Hash, Debug, Default)] #[cfg_attr(feature = "scale_info", derive(scale_info::TypeInfo))] pub struct UInt<U, B> { /// The more significant bits of `Self`. pub(crate) msb: U, /// The least significant bit of `Self`. pub(crate) lsb: B, } impl<U: Unsigned, B: Bit> UInt<U, B> { /// Instantiates a singleton representing this unsigned integer.
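The doc comment above spells out the recursive encoding: nesting another `UInt` appends one more (least significant) bit to the number. A minimal sketch, assuming `typenum` as a dependency, that checks the hand-built type from the example against the crate's `U6` alias both at the type level (via `Same`) and at runtime:

```rust
use typenum::{UInt, UTerm, Unsigned, B0, B1, U6};

// 6 = 0b110: reading the nesting outward gives bits 1, 1, 0, most significant first.
type HandBuilt = UInt<UInt<UInt<UTerm, B1>, B1>, B0>;

// `Same` is typenum's type-equality operator; requiring `Output = U6` makes the
// compiler verify that the hand-built type and the `U6` alias are the same number.
fn is_u6<T: typenum::Same<U6, Output = U6>>() {}

fn main() {
    is_u6::<HandBuilt>();
    assert_eq!(HandBuilt::to_u32(), 6);
    assert_eq!(U6::U8, 6);
}
```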
#[inline] pub fn new() -> UInt<U, B> { UInt::default() } } impl<U: Unsigned, B: Bit> Unsigned for UInt<U, B> { const U8: u8 = B::U8 | U::U8 << 1; const U16: u16 = B::U8 as u16 | U::U16 << 1; const U32: u32 = B::U8 as u32 | U::U32 << 1; const U64: u64 = B::U8 as u64 | U::U64 << 1; #[cfg(feature = "i128")] const U128: u128 = B::U8 as u128 | U::U128 << 1; const USIZE: usize = B::U8 as usize | U::USIZE << 1; const I8: i8 = B::U8 as i8 | U::I8 << 1; const I16: i16 = B::U8 as i16 | U::I16 << 1; const I32: i32 = B::U8 as i32 | U::I32 << 1; const I64: i64 = B::U8 as i64 | U::I64 << 1; #[cfg(feature = "i128")] const I128: i128 = B::U8 as i128 | U::I128 << 1; const ISIZE: isize = B::U8 as isize | U::ISIZE << 1; #[inline] fn to_u8() -> u8 { B::to_u8() | U::to_u8() << 1 } #[inline] fn to_u16() -> u16 { u16::from(B::to_u8()) | U::to_u16() << 1 } #[inline] fn to_u32() -> u32 { u32::from(B::to_u8()) | U::to_u32() << 1 } #[inline] fn to_u64() -> u64 { u64::from(B::to_u8()) | U::to_u64() << 1 } #[cfg(feature = "i128")] #[inline] fn to_u128() -> u128 { u128::from(B::to_u8()) | U::to_u128() << 1 } #[inline] fn to_usize() -> usize { usize::from(B::to_u8()) | U::to_usize() << 1 } #[inline] fn to_i8() -> i8 { B::to_u8() as i8 | U::to_i8() << 1 } #[inline] fn to_i16() -> i16 { i16::from(B::to_u8()) | U::to_i16() << 1 } #[inline] fn to_i32() -> i32 { i32::from(B::to_u8()) | U::to_i32() << 1 } #[inline] fn to_i64() -> i64 { i64::from(B::to_u8()) | U::to_i64() << 1 } #[cfg(feature = "i128")] #[inline] fn to_i128() -> i128 { i128::from(B::to_u8()) | U::to_i128() << 1 } #[inline] fn to_isize() -> isize { B::to_u8() as isize | U::to_isize() << 1 } } impl<U: Unsigned, B: Bit> NonZero for UInt<U, B> {} impl Zero for UTerm {} impl PowerOfTwo for UInt<UTerm, B1> {} impl<U: Unsigned + PowerOfTwo> PowerOfTwo for UInt<U, B0> {} // --------------------------------------------------------------------------------------- // Getting length of unsigned integers, which is defined as the number of bits before `UTerm` /// Length of `UTerm` by itself is 0 impl Len for UTerm { type Output = U0; #[inline] fn len(&self) -> Self::Output { UTerm } } /// Length of a bit is 1 impl<U: Unsigned, B: Bit> Len for UInt<U, B> where U: Len, Length<U>: Add<B1>, Add1<Length<U>>: Unsigned, { type Output = Add1<Length<U>>; #[inline] fn len(&self) -> Self::Output { self.msb.len() + B1 } } // --------------------------------------------------------------------------------------- // Adding bits to unsigned integers /// `UTerm + B0 = UTerm` impl Add<B0> for UTerm { type Output = UTerm; #[inline] fn add(self, _: B0) -> Self::Output { UTerm } } /// `U + B0 = U` impl<U: Unsigned, B: Bit> Add<B0> for UInt<U, B> { type Output = UInt<U, B>; #[inline] fn add(self, _: B0) -> Self::Output { UInt::new() } } /// `UTerm + B1 = UInt<UTerm, B1>` impl Add<B1> for UTerm { type Output = UInt<UTerm, B1>; #[inline] fn add(self, _: B1) -> Self::Output { UInt::new() } } /// `UInt<U, B0> + B1 = UInt<U, B1>` impl<U: Unsigned> Add<B1> for UInt<U, B0> { type Output = UInt<U, B1>; #[inline] fn add(self, _: B1) -> Self::Output { UInt::new() } } /// `UInt<U, B1> + B1 = UInt<Add1<U>, B0>` impl<U: Unsigned> Add<B1> for UInt<U, B1> where U: Add<B1>, Add1<U>: Unsigned, { type Output = UInt<Add1<U>, B0>; #[inline] fn add(self, _: B1) -> Self::Output { UInt::new() } } // --------------------------------------------------------------------------------------- // Adding unsigned integers /// `UTerm + U = U` impl<U: Unsigned> Add<U> for UTerm { type Output = U; #[inline] fn add(self, rhs: U) -> Self::Output { rhs } } /// `UInt<U, B> + UTerm = UInt<U, B>` impl<U: Unsigned, B: Bit> Add<UTerm> for UInt<U, B> { type Output = UInt<U, B>; #[inline] fn add(self, _: UTerm) -> Self::Output { UInt::new() } } /// `UInt + UInt = UInt