--- zip-2.5.0/.cargo_vcs_info.json ---

{
  "git": {
    "sha1": "77cde6a909feac44c75603811d78049a832a6fa2"
  },
  "path_in_vcs": ""
}

--- zip-2.5.0/.gitattributes ---

tests/data/** binary=true
tests/data/**/LICENSE.*.txt binary=false
fuzz/corpus/** binary=true

--- zip-2.5.0/.gitignore ---

Cargo.lock
target
.DS_Store
\.idea/
/fuzz_read/out/
/fuzz_write/out/

--- zip-2.5.0/.whitesource ---

{
  "scanSettings": {
    "baseBranches": []
  },
  "checkRunSettings": {
    "vulnerableCheckRunConclusionLevel": "failure",
    "displayMode": "diff",
    "useMendCheckNames": true
  },
  "issueSettings": {
    "minSeverityLevel": "LOW",
    "issueType": "DEPENDENCY"
  }
}

--- zip-2.5.0/CHANGELOG.md ---

# Changelog

## [2.5.0](https://github.com/zip-rs/zip2/compare/v2.4.2...v2.5.0) - 2025-03-23

### šŸš€ Features

- Add support for `time::PrimitiveDateTime` ([#322](https://github.com/zip-rs/zip2/pull/322))
- Add `jiff` integration ([#323](https://github.com/zip-rs/zip2/pull/323))

### šŸ› Bug Fixes

- improve error message for duplicated file ([#277](https://github.com/zip-rs/zip2/pull/277))

## [2.4.2](https://github.com/zip-rs/zip2/compare/v2.4.1...v2.4.2) - 2025-03-18

### šŸ› Bug Fixes

- `deep_copy_file` produced a mangled file header on big-endian platforms (#309)

## [2.4.1](https://github.com/zip-rs/zip2/compare/v2.4.0...v2.4.1) - 2025-03-17

### šŸ› Bug Fixes

- type issue in test
- double as_ref().canonicalize()?
- CI failures
- Create directory for extraction if necessary ([#314](https://github.com/zip-rs/zip2/pull/314))

## [2.4.0](https://github.com/zip-rs/zip2/compare/v2.3.0...v2.4.0) - 2025-03-17

### šŸš€ Features

- `ZipArchive::root_dir` and `ZipArchive::extract_unwrapped_root_dir` ([#304](https://github.com/zip-rs/zip2/pull/304))

### šŸ› Bug Fixes

- wasm build failure due to a missing use statement ([#313](https://github.com/zip-rs/zip2/pull/313))

## [2.3.0](https://github.com/zip-rs/zip2/compare/v2.2.3...v2.3.0) - 2025-03-16

### šŸš€ Features

- Add support for NTFS extra field ([#279](https://github.com/zip-rs/zip2/pull/279))

### šŸ› Bug Fixes

- *(test)* Conditionalize a zip64 doctest ([#308](https://github.com/zip-rs/zip2/pull/308))
- fix failing tests, remove symlink loop check
- Canonicalize output path to avoid false negatives
- Symlink handling in stream extraction
- Canonicalize output paths and symlink targets, and ensure they descend from the destination

### āš™ļø Miscellaneous Tasks

- Fix clippy and cargo fmt warnings ([#310](https://github.com/zip-rs/zip2/pull/310))

## [2.2.3](https://github.com/zip-rs/zip2/compare/v2.2.2...v2.2.3) - 2025-02-26

### 🚜 Refactor

- Change the inner structure of `DateTime` (#267)

### āš™ļø Miscellaneous Tasks

- cargo fix --edition

## [2.2.2](https://github.com/zip-rs/zip2/compare/v2.2.1...v2.2.2) - 2024-12-16

### šŸ› Bug Fixes

- rewrite the EOCD/EOCD64 detection to fix extreme performance regression (#247)

## [2.2.1](https://github.com/zip-rs/zip2/compare/v2.2.0...v2.2.1) - 2024-11-20

### šŸ› Bug Fixes

- remove executable bit ([#238](https://github.com/zip-rs/zip2/pull/238))
- *(lzma)* fixed panic in case of invalid lzma stream ([#259](https://github.com/zip-rs/zip2/pull/259))
- resolve new clippy warnings on nightly ([#262](https://github.com/zip-rs/zip2/pull/262))
- resolve clippy warning in nightly ([#252](https://github.com/zip-rs/zip2/pull/252))

### ⚔ Performance

- Faster cde rejection
  ([#255](https://github.com/zip-rs/zip2/pull/255))

## [2.2.0](https://github.com/zip-rs/zip2/compare/v2.1.6...v2.2.0) - 2024-08-11

### šŸš€ Features

- Expose `ZipArchive::central_directory_start` ([#232](https://github.com/zip-rs/zip2/pull/232))

## [2.1.6](https://github.com/zip-rs/zip2/compare/v2.1.5...v2.1.6) - 2024-07-29

### šŸ› Bug Fixes

- ([#33](https://github.com/zip-rs/zip2/pull/33)) Rare combination of settings could lead to writing a corrupt archive with overlength extra data, and data_start locations when reading the archive back were also wrong ([#221](https://github.com/zip-rs/zip2/pull/221))

### 🚜 Refactor

- Eliminate some magic numbers and unnecessary path prefixes ([#225](https://github.com/zip-rs/zip2/pull/225))

## [2.1.5](https://github.com/zip-rs/zip2/compare/v2.1.4...v2.1.5) - 2024-07-20

### 🚜 Refactor

- change invalid_state() return type to io::Result

## [2.1.4](https://github.com/zip-rs/zip2/compare/v2.1.3...v2.1.4) - 2024-07-18

### šŸ› Bug Fixes

- fix([#215](https://github.com/zip-rs/zip2/pull/215)): Upgrade to deflate64 0.1.9
- Panic when reading a file truncated in the middle of an XZ block header
- Some archives with over u16::MAX files were handled incorrectly or slowly ([#189](https://github.com/zip-rs/zip2/pull/189))
- Check number of files when deciding whether a CDE is the real one
- Could still select a fake CDE over a real one in some cases
- May have to consider multiple CDEs before filtering for validity
- We now keep searching for a real CDE header after reading an invalid one from the file comment
- Always search for data start when opening an archive for append, and reject the header if data appears to start after central directory
- `deep_copy_file` no longer allows overwriting an existing file, to match the behavior of `shallow_copy_file`
- File start position was wrong when extra data was present
- Abort file if central extra data is too large
- Overflow panic when central directory extra data is too large
- ZIP64 header was
  being written twice when copying a file
- ZIP64 header was being written to central header twice
- Start position was incorrect when file had no extra data
- Allow all reserved headers we can create
- Fix a bug where alignment padding interacts with other extra-data fields
- Fix bugs involving alignment padding and Unicode extra fields
- Incorrect header when adding AES-encrypted files
- Parse the extra field and reject it if invalid
- Incorrect behavior following a rare combination of `merge_archive`, `abort_file` and `deep_copy_file`. As well, we now return an error when a file is being copied to itself.
- path_to_string now properly handles the case of an empty path
- Implement `Debug` for `ZipWriter` even when it's not implemented for the inner writer's type
- Fix an issue where the central directory could be incorrectly detected
- `finish_into_readable()` would corrupt the archive if the central directory had moved

### 🚜 Refactor

- Verify with debug assertions that no FixedSizeBlock expects a multi-byte alignment ([#198](https://github.com/zip-rs/zip2/pull/198))
- Use new do_or_abort_file method

### ⚔ Performance

- Speed up CRC when encrypting small files
- Limit the number of extra fields
- Refactor extra-data validation
- Store extra data in plain vectors until after validation
- Only build one IndexMap after choosing among the possible valid headers
- Simplify validation of empty extra-data fields
- Validate automatic extra-data fields only once, even if several are present
- Remove redundant `validate_extra_data()` call
- Skip searching for the ZIP32 header if a valid ZIP64 header is present ([#189](https://github.com/zip-rs/zip2/pull/189))

### āš™ļø Miscellaneous Tasks

- Fix a bug introduced by c934c824
- Fix a failing unit test
- Fix build errors on older Rust versions
- Fix build
- Fix another fuzz failure
- Switch to `ok_or_abort_file`, and inline when that fails borrow checker
- Fix a build error
- Fix boxed_local warning (can borrow instead)
- Partial debug
- Fix more errors when parsing multiple extra fields
- Fix an error when decoding AES header
- Fix an error caused by not allowing 0xa11e field
- Bug fix: crypto_header was being counted toward extra_data_end
- Bug fix: revert a change where crypto_header was incorrectly treated as an extra field
- Fix a bug where a modulo of 0 was used
- Fix a bug when ZipCrypto, alignment *and* a custom header are used
- Fix a bug when both ZipCrypto and alignment are used
- Fix another bug: header_end vs extra_data_end
- Fix use of a stale value in a `debug_assert_eq!`
- Fix: may still get an incorrect size if opening an invalid file for append
- Fix: may need the absolute start as tiebreaker to ensure deterministic behavior

## [2.1.3](https://github.com/zip-rs/zip2/compare/v2.1.2...v2.1.3) - 2024-06-04

### šŸ› Bug Fixes

- Some date/time filters were previously unreliable (i.e. later-pass filters had no earliest-pass or latest-fail, and vice-versa)
- Decode Zip-Info UTF8 name and comment fields ([#159](https://github.com/zip-rs/zip2/pull/159))

### 🚜 Refactor

- Return extended timestamp fields copied rather than borrowed ([#183](https://github.com/zip-rs/zip2/pull/183))

### āš™ļø Miscellaneous Tasks

- Fix a new Clippy warning
- Fix a bug and inline `deserialize` for safety
- Add check for wrong-length blocks, and incorporate fixed-size requirement into the trait name
- Fix a fuzz failure by using checked_sub
- Add feature gate for new unit test

## [2.1.1](https://github.com/zip-rs/zip2/compare/v2.1.0...v2.1.1) - 2024-05-28

### šŸ› Bug Fixes

- Derive `Debug` for `ZipWriter`
- lower default version to 4.5 and use the version-needed-to-extract where feasible.
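The 2.1.1 entry above lowers the default version-needed-to-extract to 4.5 and derives it from the features a file actually uses. The general idea can be sketched in a few lines of plain Rust; this is an illustrative sketch only, not the zip crate's implementation, and `version_needed` here is a hypothetical name. The constants follow the PKWARE APPNOTE (1.0 for stored entries, 2.0 for deflate, 4.5 for ZIP64).

```rust
/// Illustrative sketch: derive a version-needed-to-extract value from
/// the features an entry uses. ZIP encodes versions as major * 10 +
/// minor, so 4.5 is stored as 45. Not the crate's real logic.
fn version_needed(uses_deflate: bool, uses_zip64: bool) -> u16 {
    let mut version = 10; // 1.0: baseline for stored entries
    if uses_deflate {
        version = version.max(20); // 2.0: deflate compression
    }
    if uses_zip64 {
        version = version.max(45); // 4.5: ZIP64 extensions
    }
    version
}

fn main() {
    assert_eq!(version_needed(false, false), 10);
    assert_eq!(version_needed(true, false), 20);
    assert_eq!(version_needed(true, true), 45);
    println!("ok");
}
```

Taking the maximum over the required features is what lets a writer advertise the lowest version that can still extract the entry, instead of pinning every archive to the newest feature level.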
### 🚜 Refactor

- use a MIN_VERSION constant

### āš™ļø Miscellaneous Tasks

- Bug fixes for debug implementation
- Update unit tests
- Remove unused import

## [2.1.0](https://github.com/zip-rs/zip2/compare/v2.0.0...v2.1.0) - 2024-05-25

### šŸš€ Features

- Support mutual conversion between `DateTime` and MS-DOS pair

### šŸ› Bug Fixes

- version-needed-to-extract was incorrect in central header, and version-made-by could be lower than that ([#100](https://github.com/zip-rs/zip2/pull/100))

### āš™ļø Miscellaneous Tasks

- Another tweak to ensure `version_needed` is applied
- Tweaks to make `version_needed` and `version_made_by` work with recently-merged changes

## [2.0.0](https://github.com/zip-rs/zip2/compare/v1.3.1...v2.0.0) - 2024-05-24

### šŸš€ Features

- Add `fmt::Display` for `DateTime`
- Implement more traits for `DateTime`

### 🚜 Refactor

- Change type of `last_modified_time` to `Option<DateTime>`
- [**breaking**] Rename `from_msdos` to `from_msdos_unchecked`, make it unsafe, and add `try_from_msdos` ([#145](https://github.com/zip-rs/zip2/pull/145))

### āš™ļø Miscellaneous Tasks

- Continue to accept archives with invalid DateTime, and use `now_utc()` as default only when writing, not reading

## [1.3.1](https://github.com/zip-rs/zip2/compare/v1.3.0...v1.3.1) - 2024-05-21

### 🚜 Refactor

- Make `deflate` enable both default implementations
- Merge the hidden deflate-flate2 flag into the public one
- Rename _deflate-non-zopfli to _deflate-flate2
- Reject encrypted and using_data_descriptor files slightly faster in read_zipfile_from_stream
- Convert `impl TryInto<NaiveDateTime> for DateTime` to `impl TryFrom<DateTime> for NaiveDateTime` ([#136](https://github.com/zip-rs/zip2/pull/136))

### ⚔ Performance

- Change default compression implementation to `flate2/zlib-ng`

### āš™ļø Miscellaneous Tasks
- chore([#132](https://github.com/zip-rs/zip2/pull/132)): Attribution for some copied test data
- chore([#133](https://github.com/zip-rs/zip2/pull/133)): chmod -x src/result.rs

## [1.3.0](https://github.com/zip-rs/zip2/compare/v1.2.3...v1.3.0) - 2024-05-17

### šŸš€ Features

- Add `is_symlink` method

### šŸ› Bug Fixes

- Extract symlinks into symlinks on Unix and Windows, and fix a bug that affected making directories writable on MacOS

### 🚜 Refactor

- Eliminate deprecation warning when `--all-features` implicitly enables the deprecated feature
- Check if archive contains a symlink's target, without borrowing both at the same time
- Eliminate a clone that's no longer necessary
- is_dir only needs to look at the filename
- Remove unnecessary #[cfg] attributes

### āš™ļø Miscellaneous Tasks

- Fix borrow-of-moved-value
- Box doesn't directly convert to PathBuf, so convert back to String first
- partial revert - only &str has chars(), but Box should auto-deref
- contains_key needs a `Box`, so generify `is_dir` to accept one
- Add missing `ZipFileData::is_dir()` method
- Fix another Windows-specific error
- More bug fixes for Windows-specific symlink code
- Bug fix: variable name change
- Bug fix: need both internal and output path to determine whether to symlink_dir
- Another bug fix
- Fix another error-type conversion error
- Fix error-type conversion on Windows
- Fix conditionally-unused import
- Fix continued issues, and factor out the Vec-to-OsString conversion (cc: [#125](https://github.com/zip-rs/zip2/pull/125))
- Fix CI failure involving conversion to OsString for symlinks (see my comments on [#125](https://github.com/zip-rs/zip2/pull/125))
- Move path join into platform-independent code

## [1.2.3](https://github.com/zip-rs/zip2/compare/v1.2.2...v1.2.3) - 2024-05-10

### šŸ› Bug Fixes

- Remove a window when an extracted directory might be unexpectedly listable and/or `cd`able by non-owners
- Extract directory
  contents on Unix even if the directory doesn't have write permission (https://github.com/zip-rs/zip-old/issues/423)

### āš™ļø Miscellaneous Tasks

- More conditionally-unused imports

## [1.2.2](https://github.com/zip-rs/zip2/compare/v1.2.1...v1.2.2) - 2024-05-09

### šŸ› Bug Fixes

- Failed to clear "writing_raw" before finishing a symlink, leading to dropped extra fields

### ⚔ Performance

- Use boxed slice for archive comment, since it can't be concatenated
- Optimize for the fact that false signatures can't overlap with real ones

## [1.2.1](https://github.com/zip-rs/zip2/compare/v1.2.0...v1.2.1) - 2024-05-06

### šŸ› Bug Fixes

- Prevent panic when trying to read a file with an unsupported compression method
- Prevent panic after reading an invalid LZMA file
- Make `Stored` the default compression method if `Deflated` isn't available, so that zip files are readable by as much software as possible
- version_needed was wrong when e.g. cfg(bzip2) but current file wasn't bzip2 ([#100](https://github.com/zip-rs/zip2/pull/100))
- file paths shouldn't start with slashes ([#102](https://github.com/zip-rs/zip2/pull/102))

### 🚜 Refactor

- Overhaul `impl Arbitrary for FileOptions`
- Remove unused `atomic` module

## [1.2.0](https://github.com/zip-rs/zip2/compare/v1.1.4...v1.2.0) - 2024-05-06

### šŸš€ Features

- Add method `decompressed_size()` so non-recursive ZIP bombs can be detected

### 🚜 Refactor

- Make `ZipWriter::finish()` consume the `ZipWriter`

### āš™ļø Miscellaneous Tasks

- Use panic!
  rather than abort to ensure the fuzz harness can process the failure
- Update fuzz_write to use replace_with
- Remove a drop that can no longer be explicit
- Add `#![allow(unexpected_cfgs)]` in nightly

## [1.1.4](https://github.com/zip-rs/zip2/compare/v1.1.3...v1.1.4) - 2024-05-04

### šŸ› Bug Fixes

- Build was failing with bzip2 enabled
- use is_dir in more places where Windows paths might be handled incorrectly

### ⚔ Performance

- Quick filter for paths that contain "/../" or "/./" or start with "./" or "../"
- Fast handling for separator-free paths
- Speed up logic if main separator isn't '/'
- Drop `normalized_components` slightly sooner when not using it
- Speed up `path_to_string` in cases where the path is already in the proper format

### āš™ļø Miscellaneous Tasks

- Refactor: can short-circuit handling of paths that start with MAIN_SEPARATOR, no matter what MAIN_SEPARATOR is
- Bug fix: non-canonical path detection when MAIN_SEPARATOR is not slash or occurs twice in a row
- Bug fix: must recreate if . or .. is a path element
- Bug fix

### ā—€ļø Revert

- [#58](https://github.com/zip-rs/zip2/pull/58) (partial): `bzip2-rs` can't replace `bzip2` because it's decompress-only

## [1.1.3](https://github.com/zip-rs/zip2/compare/v1.1.2...v1.1.3) - 2024-04-30

### šŸ› Bug Fixes

- Rare bug where find_and_parse would give up prematurely on detecting a false end-of-CDR header

## [1.1.2](https://github.com/Pr0methean/zip/compare/v1.1.1...v1.1.2) - 2024-04-28

### šŸ› Bug Fixes

- Alignment was previously handled incorrectly ([#33](https://github.com/Pr0methean/zip/pull/33))

### 🚜 Refactor

- deprecate `deflate-miniz` feature since it's now equivalent to `deflate` ([#35](https://github.com/Pr0methean/zip/pull/35))

## [1.1.1]

### Added

- `index_for_name`, `index_for_path`, `name_for_index`: get the index of a file given its path or vice-versa, without initializing metadata from the local-file header or needing to mutably borrow the `ZipArchive`.
- `add_symlink_from_path`, `shallow_copy_file_from_path`, `deep_copy_file_from_path`, `raw_copy_file_to_path`: copy a file or create a symlink using `AsRef<Path>` arguments

### Changed

- `add_directory_from_path` and `start_file_from_path` are no longer deprecated, and they now normalize `..` as well as `.`.

## [1.1.0]

### Added

- Support for decoding LZMA.

### Changed

- Eliminated a custom `AtomicU64` type by replacing it with `OnceLock` in the only place it's used.
- `FileOptions` now has the subtype `SimpleFileOptions` which implements `Copy` but has no extra data.

## [1.0.1]

### Changed

- The published package on crates.io no longer includes the tests or examples.

## [1.0.0]

### Changed

- Now uses boxed slices rather than `String` or `Vec` for metadata fields that aren't likely to grow.

## [0.11.0]

### Added

- Support for `DEFLATE64` (decompression only).
- Support for Zopfli compression levels up to `i64::MAX`.

### Changed

- `InvalidPassword` is now a kind of `ZipError` to eliminate the need for nested `Result` structs.
- Updated dependencies.

## [0.10.3]

### Changed

- Updated dependencies.
- MSRV increased to `1.67`.

### Fixed

- Fixed some rare bugs that could cause panics when trying to read an invalid ZIP file or using an incorrect password.

## [0.10.2]

### Changed

- Where possible, methods are now `const`. This improves performance, especially when reading.

## [0.10.1]

### Changed

- Date and time conversion methods now return `DateTimeRangeError` rather than `()` on error.

## [0.10.0]

### Changed

- Replaces the `flush_on_finish_file` parameter of `ZipWriter::new` and `ZipWriter::Append` with a `set_flush_on_finish_file` method.

### Fixed

- Fixes build errors that occur when all default features are disabled.
- Fixes more cases of a bug when ZIP64 magic bytes occur in filenames.

## [0.9.2]

### Added

- `zlib-ng` for fast Deflate compression. This is now the default for compression levels 0-9.
- `chrono` to convert zip::DateTime to and from chrono::NaiveDateTime

## [0.9.1]

### Added

- Zopfli for aggressive Deflate compression.

## [0.9.0]

### Added

- `flush_on_finish_file` parameter for `ZipWriter`.

## [0.8.3]

### Changed

- Uses the `aes::cipher::KeyInit` trait from `aes` 0.8.2 where appropriate.

### Fixed

- Calling `abort_file()` no longer corrupts the archive if called on a shallow copy of a remaining file, or on an archive whose CDR entries are out of sequence. However, it may leave an unused entry in the archive.
- Calling `abort_file()` while writing a ZipCrypto-encrypted file no longer causes a crash.
- Calling `abort_file()` on the last file before `finish()` no longer produces an invalid ZIP file or garbage in the comment.

### Added

- `ZipWriter` methods `get_comment()` and `get_raw_comment()`.

## [0.8.2]

### Fixed

- Fixed an issue where code might spuriously fail during write fuzzing.

### Added

- New method `with_alignment` on `FileOptions`.

## [0.8.1]

### Fixed

- `ZipWriter` now once again implements `Send` if the underlying writer does.

## [0.8.0]

### Deleted

- Methods `start_file_aligned`, `start_file_with_extra_data`, `end_local_start_central_extra_data` and `end_extra_data` (see below).

### Changed

- Alignment and extra-data fields are now attributes of [`zip::unstable::write::FileOptions`], allowing them to be specified for `add_directory` and `add_symlink`.
- Extra-data fields are now formatted by the `FileOptions` method `add_extra_data`.
- Improved performance, especially for `shallow_copy_file` and `deep_copy_file` on files with extra data.

### Fixed

- Fixes a rare bug where the size of the extra-data field could overflow when `large_file` was set.
- Fixes more cases of a bug when ZIP64 magic bytes occur in filenames.

## [0.7.5]

### Fixed

- Fixed a bug that occurs when ZIP64 magic bytes occur twice in a filename or across two filenames.
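Several entries above (the 0.7.5 fix here, and the 2.2.2 EOCD rework earlier) deal with the same underlying hazard: magic bytes such as the end-of-central-directory signature `PK\x05\x06` can legitimately occur inside a filename or comment, so a reader must validate each candidate record rather than trust the first signature match. A hedged sketch of that idea in plain Rust — this is not the crate's actual algorithm, which does considerably more validation:

```rust
/// Sketch: scan backwards for an EOCD signature and accept a candidate
/// only if its comment-length field reaches exactly to the end of the
/// buffer. Illustrates why a bare signature match is not enough.
const EOCD_SIG: [u8; 4] = [0x50, 0x4b, 0x05, 0x06]; // "PK\x05\x06"
const EOCD_LEN: usize = 22; // fixed-size part of the EOCD record

fn find_eocd(data: &[u8]) -> Option<usize> {
    if data.len() < EOCD_LEN {
        return None;
    }
    // Search from the end: the real EOCD sits near the tail of the file.
    for pos in (0..=data.len() - EOCD_LEN).rev() {
        if data[pos..pos + 4] == EOCD_SIG {
            // Comment length is a little-endian u16 at offset 20.
            let comment_len =
                u16::from_le_bytes([data[pos + 20], data[pos + 21]]) as usize;
            if pos + EOCD_LEN + comment_len == data.len() {
                return Some(pos); // consistent candidate: accept
            }
            // Inconsistent: a false positive (e.g. signature bytes
            // inside a filename); keep searching.
        }
    }
    None
}

fn main() {
    // A fake "archive": decoy signature bytes mid-buffer, plus a
    // consistent record at the tail with an empty comment.
    let mut data = vec![0u8; 8];
    data.extend_from_slice(&EOCD_SIG); // decoy inside "filename" data
    data.extend_from_slice(&[0u8; 8]);
    let real_start = data.len();
    data.extend_from_slice(&EOCD_SIG);
    data.extend_from_slice(&[0u8; 18]); // zeroed counts, sizes, comment len
    assert_eq!(find_eocd(&data), Some(real_start));
    println!("found EOCD at offset {real_start}");
}
```

The consistency check is what rejects the decoy: its comment-length field would have to "reach" the end of the buffer, and for a signature embedded in file data it almost never does.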
## [0.7.4]

### Added

- Added experimental [`zip::unstable::write::FileOptions::with_deprecated_encryption`] API to enable encrypting files with PKWARE encryption.

## [0.7.3]

### Fixed

- Fixed a bug that occurs when a filename in a ZIP32 file includes the ZIP64 magic bytes.

## [0.7.2]

### Added

- Method `abort_file` - removes the current or most recently-finished file from the archive.

### Fixed

- Fixed a bug where a file could remain open for writing after validations failed.

## [0.7.1]

### Changed

- Bumped the version number in order to upload an updated README to crates.io.

## [0.7.0]

### Fixed

- Calling `start_file` with invalid parameters no longer closes the `ZipWriter`.
- Attempting to write a 4GiB file without calling `FileOptions::large_file(true)` now removes the file from the archive but does not close the `ZipWriter`.
- Attempting to write a file with an unrepresentable or invalid last-modified date will instead add it with a date of 1980-01-01 00:00:00.

### Added

- Method `is_writing_file` - indicates whether a file is open for writing.

## [0.6.13]

### Fixed

- Fixed a possible bug in deep_copy_file.

## [0.6.12]

### Fixed

- Fixed a Clippy warning that was missed during the last release.

## [0.6.11]

### Fixed

- Fixed a bug that could cause later writes to fail after a `deep_copy_file` call.

## [0.6.10]

### Changed

- Updated dependency versions.

## [0.6.9]

### Fixed

- Fixed an issue that prevented `ZipWriter` from implementing `Send`.

## [0.6.8]

### Added

- Detects duplicate filenames.

### Fixed

- `deep_copy_file` could set incorrect Unix permissions.
- `deep_copy_file` could handle files incorrectly if their compressed size was u32::MAX bytes or less but their uncompressed size was not.
- Documented that `deep_copy_file` does not copy a directory's contents.

### Changed

- Improved performance of `deep_copy_file` by using a HashMap and eliminating a redundant search.
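The 0.6.8 entries above (duplicate-filename detection, and speeding up `deep_copy_file` with a HashMap) both come down to keeping a name-to-index map alongside the archive's file list. A minimal sketch of that pattern, with a hypothetical `NameIndex` type standing in for a real writer's internal bookkeeping (this is not the crate's API):

```rust
use std::collections::HashMap;

/// Sketch: map each written entry name to its index so that duplicate
/// detection and name lookup are O(1) instead of a linear rescan of
/// the archive's file list on every write.
struct NameIndex {
    by_name: HashMap<String, usize>,
}

impl NameIndex {
    fn new() -> Self {
        Self { by_name: HashMap::new() }
    }

    /// Rejects duplicates with a message naming the offending file,
    /// in the spirit of the "improve error message for duplicated
    /// file" fix in 2.5.0.
    fn insert(&mut self, name: &str) -> Result<usize, String> {
        if self.by_name.contains_key(name) {
            return Err(format!("duplicate filename: {name}"));
        }
        let index = self.by_name.len();
        self.by_name.insert(name.to_string(), index);
        Ok(index)
    }
}

fn main() {
    let mut index = NameIndex::new();
    assert_eq!(index.insert("a.txt"), Ok(0));
    assert_eq!(index.insert("b.txt"), Ok(1));
    assert!(index.insert("a.txt").is_err());
    println!("ok");
}
```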
## [0.6.7]

### Added

- `deep_copy_file` method: more standards-compliant way to copy a file from within the ZipWriter

## [0.6.6]

### Fixed

- Unused flag `#![feature(read_buf)]` was breaking compatibility with stable compiler.

### Changed

- Updated `aes` dependency to `0.8.2` (https://github.com/zip-rs/zip/pull/354)
- Updated other dependency versions.

## [0.6.5]

### Changed

- Added experimental [`zip::unstable::write::FileOptions::with_deprecated_encryption`] API to enable encrypting files with PKWARE encryption.

### Added

- `shallow_copy_file` method: copy a file from within the ZipWriter

## [0.6.4]

### Changed

- [#333](https://github.com/zip-rs/zip/pull/333): disabled the default features of the `time` dependency, and also `formatting` and `macros`, as they were enabled by mistake.
- Deprecated [`DateTime::from_time`](https://docs.rs/zip/0.6/zip/struct.DateTime.html#method.from_time) in favor of [`DateTime::try_from`](https://docs.rs/zip/0.6/zip/struct.DateTime.html#impl-TryFrom-for-DateTime)

--- zip-2.5.0/CODE_OF_CONDUCT.md ---

# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at ryan.levick@gmail.com.
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

--- zip-2.5.0/CONTRIBUTING.md ---

Pull requests are welcome, but they're subject to some requirements that a lot of them don't meet. See https://github.com/zip-rs/zip2/raw/master/pull_request_template.md for details.

--- zip-2.5.0/Cargo.lock ---

# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3

[[package]]
name = "adler2"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"

[[package]]
name = "aes"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b169f7a6d4742236a0a00c541b845991d0ac43e546831af1249753ab4c3aa3a0"
dependencies = [
 "cfg-if",
 "cipher",
 "cpufeatures",
]

[[package]]
name = "android-tzdata"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0"

[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
 "libc",
]

[[package]]
name = "anstream"
version = "0.6.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8acc5369981196006228e28809f761875c0327210a891e941f4c683b3a99529b"
dependencies = [
 "anstyle",
 "anstyle-parse",
 "anstyle-query",
 "anstyle-wincon",
 "colorchoice",
 "is_terminal_polyfill",
 "utf8parse",
]

[[package]]
name = "anstyle"
version = "1.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9"

[[package]]
name = "anstyle-parse"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b2d16507662817a6a20a9ea92df6652ee4f94f914589377d69f3b21bc5798a9"
dependencies = [
 "utf8parse",
]

[[package]]
name = "anstyle-query"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c"
dependencies = [
 "windows-sys",
]

[[package]]
name = "anstyle-wincon"
version = "3.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum =
"ca3534e77181a9cc07539ad51f2141fe32f6c3ffd4df76db8ad92346b003ae4e"
dependencies = [
 "anstyle",
 "once_cell",
 "windows-sys",
]

[[package]]
name = "anyhow"
version = "1.0.97"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcfed56ad506cb2c684a14971b8861fdc3baaaae314b9e5f9bb532cbe3ba7a4f"

[[package]]
name = "arbitrary"
version = "1.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223"
dependencies = [
 "derive_arbitrary",
]

[[package]]
name = "autocfg"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"

[[package]]
name = "bencher"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7dfdb4953a096c551ce9ace855a604d702e6e62d77fac690575ae347571717f5"

[[package]]
name = "bitflags"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"

[[package]]
name = "block-buffer"
version = "0.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71"
dependencies = [
 "generic-array",
]

[[package]]
name = "bumpalo"
version = "3.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1628fb46dfa0b37568d12e5edd512553eccf6a22a78e8bde00bb4aed84d5bdbf"

[[package]]
name = "byteorder"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b"

[[package]]
name = "bzip2"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49ecfb22d906f800d4fe833b6282cf4dc1c298f5057ca0b5445e5c209735ca47"
dependencies = [
 "bzip2-sys",
]

[[package]]
name = "bzip2-sys"
version =
"0.1.13+1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "225bff33b2141874fe80d71e07d6eec4f85c5c216453dd96388240f96e1acc14"
dependencies = [
 "cc",
 "pkg-config",
]

[[package]]
name = "cc"
version = "1.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fcb57c740ae1daf453ae85f16e37396f672b039e00d9d866e07ddb24e328e3a"
dependencies = [
 "jobserver",
 "libc",
 "shlex",
]

[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"

[[package]]
name = "chrono"
version = "0.4.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a7964611d71df112cb1730f2ee67324fcf4d0fc6606acbbe9bfe06df124637c"
dependencies = [
 "android-tzdata",
 "iana-time-zone",
 "js-sys",
 "num-traits",
 "wasm-bindgen",
 "windows-link",
]

[[package]]
name = "cipher"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773f3b9af64447d2ce9850330c473515014aa235e6a783b02db81ff39e4a3dad"
dependencies = [
 "crypto-common",
 "inout",
]

[[package]]
name = "clap"
version = "4.4.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e578d6ec4194633722ccf9544794b71b1385c3c027efe0c55db226fc880865c"
dependencies = [
 "clap_builder",
 "clap_derive",
]

[[package]]
name = "clap_builder"
version = "4.4.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4df4df40ec50c46000231c914968278b1eb05098cf8f1b3a518a95030e71d1c7"
dependencies = [
 "anstream",
 "anstyle",
 "clap_lex",
 "strsim",
]

[[package]]
name = "clap_derive"
version = "4.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf9804afaaf59a91e75b022a30fb7229a7901f60c755489cc61c9b423b836442"
dependencies = [
 "heck",
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "clap_lex"
version = "0.6.0"
source =
"registry+https://github.com/rust-lang/crates.io-index" checksum = "702fc72eb24e5a1e48ce58027a675bc24edd52096d5397d4aea7c6dd9eca0bd1" [[package]] name = "cmake" version = "0.1.54" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e7caa3f9de89ddbe2c607f4101924c5abec803763ae9534e4f4d7d8f84aa81f0" dependencies = [ "cc", ] [[package]] name = "colorchoice" version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b63caa9aa9397e2d9480a9b13673856c78d8ac123288526c37d7839f2a86990" [[package]] name = "constant_time_eq" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6" [[package]] name = "core-foundation-sys" version = "0.8.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b" [[package]] name = "cpufeatures" version = "0.2.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280" dependencies = [ "libc", ] [[package]] name = "crc" version = "3.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "69e6e4d7b33a94f0991c26729976b10ebde1d34c3ee82408fb536164fa10d636" dependencies = [ "crc-catalog", ] [[package]] name = "crc-catalog" version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "19d374276b40fb8bbdee95aef7c7fa6b5316ec764510eb64b8dd0e2ed0d7e7f5" [[package]] name = "crc32fast" version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3" dependencies = [ "cfg-if", ] [[package]] name = "crossbeam-utils" version = "0.8.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28" 
[[package]] name = "crypto-common" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" dependencies = [ "generic-array", "typenum", ] [[package]] name = "deflate64" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "da692b8d1080ea3045efaab14434d40468c3d8657e42abddfffca87b428f4c1b" [[package]] name = "deranged" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c9e6a11ca8224451684bc0d7d5a7adbf8f2fd6887261a1cfc3c0432f9d4068e" dependencies = [ "powerfmt", ] [[package]] name = "derive_arbitrary" version = "1.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "30542c1ad912e0e3d22a1935c290e12e8a29d704a420177a31faad4a601a0800" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "digest" version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ "block-buffer", "crypto-common", "subtle", ] [[package]] name = "equivalent" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f" [[package]] name = "errno" version = "0.3.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", "windows-sys", ] [[package]] name = "fastrand" version = "2.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be" [[package]] name = "flate2" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "11faaf5a5236997af9848be0bef4db95824b1d534ebc64d0f0c6cf3e67bd38dc" dependencies = [ "crc32fast", 
"libz-ng-sys", "libz-sys", "miniz_oxide", ] [[package]] name = "generic-array" version = "0.14.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" dependencies = [ "typenum", "version_check", ] [[package]] name = "getrandom" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "73fea8450eea4bac3940448fb7ae50d91f034f941199fcd9d909a5a07aa455f0" dependencies = [ "cfg-if", "js-sys", "libc", "r-efi", "wasi", "wasm-bindgen", ] [[package]] name = "hashbrown" version = "0.15.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289" [[package]] name = "heck" version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8" [[package]] name = "hmac" version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6c49c37c09c17a53d937dfbb742eb3a961d65a994e6bcdcf37e7399d0cc8ab5e" dependencies = [ "digest", ] [[package]] name = "iana-time-zone" version = "0.1.61" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "235e081f3925a06703c2d0117ea8b91f042756fd6e7a6e5d901e8ca1a996b220" dependencies = [ "android_system_properties", "core-foundation-sys", "iana-time-zone-haiku", "js-sys", "wasm-bindgen", "windows-core", ] [[package]] name = "iana-time-zone-haiku" version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f" dependencies = [ "cc", ] [[package]] name = "indexmap" version = "2.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3954d50fe15b02142bf25d3b8bdadb634ec3948f103d04ffe3031bc8fe9d7058" dependencies = [ "equivalent", "hashbrown", ] [[package]] name = "inout" version = 
"0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "879f10e63c20629ecabbb64a8010319738c66a5cd0c29b02d63d272b03751d01" dependencies = [ "generic-array", ] [[package]] name = "is_terminal_polyfill" version = "1.70.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" [[package]] name = "itoa" version = "1.0.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c" [[package]] name = "jiff" version = "0.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c102670231191d07d37a35af3eb77f1f0dbf7a71be51a962dcd57ea607be7260" dependencies = [ "jiff-static", "jiff-tzdb-platform", "log", "portable-atomic", "portable-atomic-util", "serde", "windows-sys", ] [[package]] name = "jiff-static" version = "0.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4cdde31a9d349f1b1f51a0b3714a5940ac022976f4b49485fc04be052b183b4c" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "jiff-tzdb" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c1283705eb0a21404d2bfd6eef2a7593d240bc42a0bdb39db0ad6fa2ec026524" [[package]] name = "jiff-tzdb-platform" version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "875a5a69ac2bab1a891711cf5eccbec1ce0341ea805560dcd90b7a2e925132e8" dependencies = [ "jiff-tzdb", ] [[package]] name = "jobserver" version = "0.1.32" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "48d1dbcbbeb6a7fec7e059840aa538bd62aaccf972c7346c4d9d2059312853d0" dependencies = [ "libc", ] [[package]] name = "js-sys" version = "0.3.77" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f" dependencies = [ 
"once_cell", "wasm-bindgen", ] [[package]] name = "libc" version = "0.2.171" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c19937216e9d3aa9956d9bb8dfc0b0c8beb6058fc4f7a4dc4d850edf86a237d6" [[package]] name = "libz-ng-sys" version = "1.1.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a7118c2c2a3c7b6edc279a8b19507672b9c4d716f95e671172dfa4e23f9fd824" dependencies = [ "cmake", "libc", ] [[package]] name = "libz-sys" version = "1.1.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b70e7a7df205e92a1a4cd9aaae7898dac0aa555503cc0a649494d0d60e7651d" dependencies = [ "cc", "pkg-config", "vcpkg", ] [[package]] name = "linux-raw-sys" version = "0.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fe7db12097d22ec582439daf8618b8fdd1a7bef6270e9af3b1ebcd30893cf413" [[package]] name = "lockfree-object-pool" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9374ef4228402d4b7e403e5838cb880d9ee663314b0a900d5a6aabf0c213552e" [[package]] name = "log" version = "0.4.26" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "30bde2b3dc3671ae49d8e2e9f044c7c005836e7a023ee57cffa25ab82764bb9e" [[package]] name = "lzma-rs" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "297e814c836ae64db86b36cf2a557ba54368d03f6afcd7d947c266692f71115e" dependencies = [ "byteorder", "crc", ] [[package]] name = "lzma-sys" version = "0.1.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5fda04ab3764e6cde78b9974eec4f779acaba7c4e84b36eca3cf77c581b85d27" dependencies = [ "cc", "libc", "pkg-config", ] [[package]] name = "memchr" version = "2.7.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3" [[package]] name = "miniz_oxide" version = "0.8.5" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "8e3e04debbb59698c15bacbb6d93584a8c0ca9cc3213cb423d31f760d8843ce5" dependencies = [ "adler2", ] [[package]] name = "nt-time" version = "0.10.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1c367e8edaff1f8a871e56343eb5e03888f6e0d1c2861880ccf4b7dc830899ed" dependencies = [ "time", ] [[package]] name = "num-conv" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9" [[package]] name = "num-traits" version = "0.2.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841" dependencies = [ "autocfg", ] [[package]] name = "once_cell" version = "1.21.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d75b0bedcc4fe52caa0e03d9f1151a323e4aa5e2d78ba3580400cd3c9e2bc4bc" [[package]] name = "pbkdf2" version = "0.12.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f8ed6a7761f76e3b9f92dfb0a60a6a6477c61024b775147ff0973a02653abaf2" dependencies = [ "digest", "hmac", ] [[package]] name = "pkg-config" version = "0.3.32" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c" [[package]] name = "portable-atomic" version = "1.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "350e9b48cbc6b0e028b0473b114454c6316e57336ee184ceab6e53f72c178b3e" [[package]] name = "portable-atomic-util" version = "0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d8a2f0d8d040d7848a709caf78912debcc3f33ee4b3cac47d73d1e1069e83507" dependencies = [ "portable-atomic", ] [[package]] name = "powerfmt" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"439ee305def115ba05938db6eb1644ff94165c5ab5e9420d1c1bcedbba909391" [[package]] name = "proc-macro2" version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a31971752e70b8b2686d7e46ec17fb38dad4051d94024c88df49b667caea9c84" dependencies = [ "unicode-ident", ] [[package]] name = "quote" version = "1.0.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d" dependencies = [ "proc-macro2", ] [[package]] name = "r-efi" version = "5.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "74765f6d916ee2faa39bc8e68e4f3ed8949b48cccdac59983d287a7cb71ce9c5" [[package]] name = "rustix" version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e56a18552996ac8d29ecc3b190b4fdbb2d91ca4ec396de7bbffaf43f3d637e96" dependencies = [ "bitflags", "errno", "libc", "linux-raw-sys", "windows-sys", ] [[package]] name = "rustversion" version = "1.0.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eded382c5f5f786b989652c49544c4877d9f015cc22e145a5ea8ea66c2921cd2" [[package]] name = "same-file" version = "1.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502" dependencies = [ "winapi-util", ] [[package]] name = "serde" version = "1.0.219" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6" dependencies = [ "serde_derive", ] [[package]] name = "serde_derive" version = "1.0.219" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "sha1" version = "0.10.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba" dependencies = [ "cfg-if", "cpufeatures", "digest", ] [[package]] name = "shlex" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" [[package]] name = "simd-adler32" version = "0.3.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d66dc143e6b11c1eddc06d5c423cfc97062865baf299914ab64caa38182078fe" [[package]] name = "strsim" version = "0.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623" [[package]] name = "subtle" version = "2.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292" [[package]] name = "syn" version = "2.0.100" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b09a44accad81e1ba1cd74a32461ba89dee89095ba17b32f5d03683b1b1fc2a0" dependencies = [ "proc-macro2", "quote", "unicode-ident", ] [[package]] name = "tempfile" version = "3.19.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7437ac7763b9b123ccf33c338a5cc1bac6f69b45a136c19bdd8a65e3916435bf" dependencies = [ "fastrand", "getrandom", "once_cell", "rustix", "windows-sys", ] [[package]] name = "time" version = "0.3.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9d9c75b47bdff86fa3334a3db91356b8d7d86a9b839dab7d0bdc5c3d3a077618" dependencies = [ "deranged", "itoa", "num-conv", "powerfmt", "serde", "time-core", "time-macros", ] [[package]] name = "time-core" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c9e9a38711f559d9e3ce1cdb06dd7c5b8ea546bc90052da6d06bb76da74bb07c" [[package]] name = "time-macros" version = "0.2.21" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "29aa485584182073ed57fd5004aa09c371f021325014694e432313345865fd04" dependencies = [ "num-conv", "time-core", ] [[package]] name = "typenum" version = "1.18.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1dccffe3ce07af9386bfd29e80c0ab1a8205a2fc34e4bcd40364df902cfa8f3f" [[package]] name = "unicode-ident" version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512" [[package]] name = "utf8parse" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" [[package]] name = "vcpkg" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" [[package]] name = "version_check" version = "0.9.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a" [[package]] name = "walkdir" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b" dependencies = [ "same-file", "winapi-util", ] [[package]] name = "wasi" version = "0.14.2+wasi-0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9683f9a5a998d873c0d21fcbe3c083009670149a8fab228644b8bd36b2c48cb3" dependencies = [ "wit-bindgen-rt", ] [[package]] name = "wasm-bindgen" version = "0.2.100" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1edc8929d7499fc4e8f0be2262a241556cfc54a0bea223790e71446f2aab1ef5" dependencies = [ "cfg-if", "once_cell", "rustversion", "wasm-bindgen-macro", ] [[package]] name = "wasm-bindgen-backend" version = "0.2.100" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "2f0a0651a5c2bc21487bde11ee802ccaf4c51935d0d3d42a6101f98161700bc6" dependencies = [ "bumpalo", "log", "proc-macro2", "quote", "syn", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-macro" version = "0.2.100" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7fe63fc6d09ed3792bd0897b314f53de8e16568c2b3f7982f468c0bf9bd0b407" dependencies = [ "quote", "wasm-bindgen-macro-support", ] [[package]] name = "wasm-bindgen-macro-support" version = "0.2.100" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ae87ea40c9f689fc23f209965b6fb8a99ad69aeeb0231408be24920604395de" dependencies = [ "proc-macro2", "quote", "syn", "wasm-bindgen-backend", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-shared" version = "0.2.100" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1a05d73b933a847d6cccdda8f838a22ff101ad9bf93e33684f39c1f5f0eece3d" dependencies = [ "unicode-ident", ] [[package]] name = "winapi-util" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ "windows-sys", ] [[package]] name = "windows-core" version = "0.52.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33ab640c8d7e35bf8ba19b884ba838ceb4fba93a4e8c65a9059d08afcfc683d9" dependencies = [ "windows-targets", ] [[package]] name = "windows-link" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "76840935b766e1b0a05c0066835fb9ec80071d4c09a16f6bd5f7e655e3c14c38" [[package]] name = "windows-sys" version = "0.59.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b" dependencies = [ "windows-targets", ] [[package]] name = "windows-targets" version = "0.52.6" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm", "windows_aarch64_msvc", "windows_i686_gnu", "windows_i686_gnullvm", "windows_i686_msvc", "windows_x86_64_gnu", "windows_x86_64_gnullvm", "windows_x86_64_msvc", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" [[package]] name = 
"wit-bindgen-rt" version = "0.39.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6f42320e61fe2cfd34354ecb597f86f413484a798ba44a8ca1165c58d42da6c1" dependencies = [ "bitflags", ] [[package]] name = "xz2" version = "0.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "388c44dc09d76f1536602ead6d325eb532f5c122f17782bd57fb47baeeb767e2" dependencies = [ "lzma-sys", ] [[package]] name = "zeroize" version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde" dependencies = [ "zeroize_derive", ] [[package]] name = "zeroize_derive" version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "zip" version = "2.5.0" dependencies = [ "aes", "anyhow", "arbitrary", "bencher", "bzip2", "chrono", "clap", "constant_time_eq", "crc32fast", "crossbeam-utils", "deflate64", "flate2", "getrandom", "hmac", "indexmap", "jiff", "lzma-rs", "memchr", "nt-time", "pbkdf2", "proc-macro2", "sha1", "tempfile", "time", "walkdir", "xz2", "zeroize", "zopfli", "zstd", ] [[package]] name = "zopfli" version = "0.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e5019f391bac5cf252e93bbcc53d039ffd62c7bfb7c150414d61369afe57e946" dependencies = [ "bumpalo", "crc32fast", "lockfree-object-pool", "log", "once_cell", "simd-adler32", ] [[package]] name = "zstd" version = "0.13.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e91ee311a569c327171651566e07972200e76fcfe2242a4fa446149a3881c08a" dependencies = [ "zstd-safe", ] [[package]] name = "zstd-safe" version = "7.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f49c4d5f0abb602a93fb8736af2a4f4dd9512e36f7f570d66e65ff867ed3b9d" dependencies = [ 
"zstd-sys", ] [[package]] name = "zstd-sys" version = "2.0.15+zstd.1.5.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eb81183ddd97d0c74cedf1d50d85c8d08c1b8b68ee863bdee9e706eedba1a237" dependencies = [ "cc", "pkg-config", ] zip-2.5.0/Cargo.toml0000644000000106220000000000100076410ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2021" rust-version = "1.73.0" name = "zip" version = "2.5.0" authors = [ "Mathijs van de Nes ", "Marli Frost ", "Ryan Levick ", "Chris Hennick ", ] build = "src/build.rs" exclude = [ "tests/**", "examples/**", ".github/**", "fuzz_read/**", "fuzz_write/**", ] autolib = false autobins = false autoexamples = false autotests = false autobenches = false description = """ Library to support the reading and writing of zip files. 
""" readme = "README.md" keywords = [ "zip", "archive", "compression", ] license = "MIT" repository = "https://github.com/zip-rs/zip2.git" [package.metadata.docs.rs] all-features = true rustdoc-args = [ "--cfg", "docsrs", ] [features] _all-features = [] _deflate-any = [] aes-crypto = [ "aes", "constant_time_eq", "hmac", "pbkdf2", "sha1", "getrandom", "zeroize", ] chrono = ["chrono/default"] default = [ "aes-crypto", "bzip2", "deflate64", "deflate", "lzma", "time", "zstd", "xz", ] deflate = [ "flate2/rust_backend", "deflate-zopfli", "deflate-flate2", ] deflate-flate2 = ["_deflate-any"] deflate-miniz = [ "deflate", "deflate-flate2", ] deflate-zlib = [ "flate2/zlib", "deflate-flate2", ] deflate-zlib-ng = [ "flate2/zlib-ng", "deflate-flate2", ] deflate-zopfli = [ "zopfli", "_deflate-any", ] jiff-02 = ["dep:jiff"] lzma = ["lzma-rs/stream"] nt-time = ["dep:nt-time"] unreserved = [] xz = ["dep:xz2"] [lib] name = "zip" path = "src/lib.rs" [[bench]] name = "merge_archive" path = "benches/merge_archive.rs" harness = false [[bench]] name = "read_entry" path = "benches/read_entry.rs" harness = false [[bench]] name = "read_metadata" path = "benches/read_metadata.rs" harness = false [dependencies.aes] version = "0.8" optional = true [dependencies.bzip2] version = "0.5.0" optional = true [dependencies.chrono] version = "0.4" optional = true [dependencies.constant_time_eq] version = "0.3" optional = true [dependencies.crc32fast] version = "1.4" [dependencies.deflate64] version = "0.1.9" optional = true [dependencies.flate2] version = "1.0" optional = true default-features = false [dependencies.getrandom] version = "0.3.1" features = [ "wasm_js", "std", ] optional = true [dependencies.hmac] version = "0.12" features = ["reset"] optional = true [dependencies.indexmap] version = "2" [dependencies.jiff] version = "0.2.4" optional = true [dependencies.lzma-rs] version = "0.3" optional = true default-features = false [dependencies.memchr] version = "2.7" [dependencies.nt-time] version = 
"0.10.6" optional = true default-features = false [dependencies.pbkdf2] version = "0.12" optional = true [dependencies.proc-macro2] version = ">=1.0.60" optional = true [dependencies.sha1] version = "0.10" optional = true [dependencies.time] version = "0.3.37" features = ["std"] optional = true default-features = false [dependencies.xz2] version = "0.1.7" optional = true [dependencies.zeroize] version = "1.8" features = ["zeroize_derive"] optional = true [dependencies.zopfli] version = "0.8" optional = true [dependencies.zstd] version = "0.13" optional = true default-features = false [dev-dependencies.anyhow] version = "1.0.95" [dev-dependencies.bencher] version = "0.1.5" [dev-dependencies.clap] version = "=4.4.18" features = ["derive"] [dev-dependencies.getrandom] version = "0.3.1" features = [ "wasm_js", "std", ] [dev-dependencies.tempfile] version = "3.15" [dev-dependencies.time] version = "0.3.37" features = [ "formatting", "macros", ] default-features = false [dev-dependencies.walkdir] version = "2.5" [target.'cfg(any(all(target_arch = "arm", target_pointer_width = "32"), target_arch = "mips", target_arch = "powerpc"))'.dependencies.crossbeam-utils] version = "0.8.21" [target."cfg(fuzzing)".dependencies.arbitrary] version = "1.4.1" features = ["derive"] zip-2.5.0/Cargo.toml.orig0000644000000066750000000000100106150ustar [package] name = "zip" version = "2.5.0" authors = [ "Mathijs van de Nes ", "Marli Frost ", "Ryan Levick ", "Chris Hennick ", ] license = "MIT" repository = "https://github.com/zip-rs/zip2.git" keywords = ["zip", "archive", "compression"] rust-version = "1.73.0" description = """ Library to support the reading and writing of zip files. 
""" edition = "2021" exclude = ["tests/**", "examples/**", ".github/**", "fuzz_read/**", "fuzz_write/**"] build = "src/build.rs" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [workspace.dependencies] time = { version = "0.3.37", default-features = false } [dependencies] aes = { version = "0.8", optional = true } bzip2 = { version = "0.5.0", optional = true } chrono = { version = "0.4", optional = true } constant_time_eq = { version = "0.3", optional = true } crc32fast = "1.4" flate2 = { version = "1.0", default-features = false, optional = true } getrandom = { version = "0.3.1", features = ["wasm_js", "std"], optional = true} hmac = { version = "0.12", optional = true, features = ["reset"] } indexmap = "2" jiff = { version = "0.2.4", optional = true } memchr = "2.7" nt-time = { version = "0.10.6", default-features = false, optional = true } pbkdf2 = { version = "0.12", optional = true } sha1 = { version = "0.10", optional = true } time = { workspace = true, optional = true, features = [ "std", ] } zeroize = { version = "1.8", optional = true, features = ["zeroize_derive"] } zstd = { version = "0.13", optional = true, default-features = false } zopfli = { version = "0.8", optional = true } deflate64 = { version = "0.1.9", optional = true } lzma-rs = { version = "0.3", default-features = false, optional = true } xz2 = { version = "0.1.7", optional = true } proc-macro2 = { version = ">=1.0.60", optional = true } # Override transitive dep on 1.0.59 due to https://github.com/rust-lang/rust/issues/113152 [target.'cfg(any(all(target_arch = "arm", target_pointer_width = "32"), target_arch = "mips", target_arch = "powerpc"))'.dependencies] crossbeam-utils = "0.8.21" [target.'cfg(fuzzing)'.dependencies] arbitrary = { version = "1.4.1", features = ["derive"] } [dev-dependencies] bencher = "0.1.5" getrandom = { version = "0.3.1", features = ["wasm_js", "std"] } walkdir = "2.5" time = { workspace = true, features = ["formatting", "macros"] } 
anyhow = "1.0.95" clap = { version = "=4.4.18", features = ["derive"] } tempfile = "3.15" [features] aes-crypto = ["aes", "constant_time_eq", "hmac", "pbkdf2", "sha1", "getrandom", "zeroize"] chrono = ["chrono/default"] _deflate-any = [] _all-features = [] # Detect when --all-features is used deflate = ["flate2/rust_backend", "deflate-zopfli", "deflate-flate2"] deflate-flate2 = ["_deflate-any"] # DEPRECATED: previously enabled `flate2/miniz_oxide` which is equivalent to `flate2/rust_backend` deflate-miniz = ["deflate", "deflate-flate2"] deflate-zlib = ["flate2/zlib", "deflate-flate2"] deflate-zlib-ng = ["flate2/zlib-ng", "deflate-flate2"] deflate-zopfli = ["zopfli", "_deflate-any"] jiff-02 = ["dep:jiff"] nt-time = ["dep:nt-time"] lzma = ["lzma-rs/stream"] unreserved = [] xz = ["dep:xz2"] default = [ "aes-crypto", "bzip2", "deflate64", "deflate", "lzma", "time", "zstd", "xz", ] [[bench]] name = "read_entry" harness = false [[bench]] name = "read_metadata" harness = false [[bench]] name = "merge_archive" harness = false
anyhow = "1.0.95" clap = { version = "=4.4.18", features = ["derive"] } tempfile = "3.15" [features] aes-crypto = ["aes", "constant_time_eq", "hmac", "pbkdf2", "sha1", "getrandom", "zeroize"] chrono = ["chrono/default"] _deflate-any = [] _all-features = [] # Detect when --all-features is used deflate = ["flate2/rust_backend", "deflate-zopfli", "deflate-flate2"] deflate-flate2 = ["_deflate-any"] # DEPRECATED: previously enabled `flate2/miniz_oxide` which is equivalent to `flate2/rust_backend` deflate-miniz = ["deflate", "deflate-flate2"] deflate-zlib = ["flate2/zlib", "deflate-flate2"] deflate-zlib-ng = ["flate2/zlib-ng", "deflate-flate2"] deflate-zopfli = ["zopfli", "_deflate-any"] jiff-02 = ["dep:jiff"] nt-time = ["dep:nt-time"] lzma = ["lzma-rs/stream"] unreserved = [] xz = ["dep:xz2"] default = [ "aes-crypto", "bzip2", "deflate64", "deflate", "lzma", "time", "zstd", "xz", ] [[bench]] name = "read_entry" harness = false [[bench]] name = "read_metadata" harness = false [[bench]] name = "merge_archive" harness = false zip-2.5.0/LICENSE000064400000000000000000000023011046102023000114330ustar 00000000000000The MIT License (MIT) Copyright (c) 2014 Mathijs van de Nes Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Some files in the "tests/data" subdirectory of this repository are under other licences; see files named LICENSE.*.txt for details.

zip-2.5.0/README.md

zip
========

[![Build Status](https://github.com/zip-rs/zip2/actions/workflows/ci.yaml/badge.svg)](https://github.com/Pr0methean/zip/actions?query=branch%3Amaster+workflow%3ACI)
[![Crates.io version](https://img.shields.io/crates/v/zip.svg)](https://crates.io/crates/zip)

[Documentation](https://docs.rs/zip/latest/zip/)

Info
----

A zip library for Rust which supports reading and writing of simple ZIP files. Formerly hosted at https://github.com/zip-rs/zip2.

Supported compression formats:

* stored (i.e. none)
* deflate
* deflate64 (decompression only)
* bzip2
* zstd
* lzma (decompression only)
* xz (decompression only)

Currently unsupported zip extensions:

* Multi-disk

Features
--------

The features available are:

* `aes-crypto`: Enables decryption of files which were encrypted with AES. Supports AE-1 and AE-2 methods.
* `deflate`: Enables compressing and decompressing an unspecified implementation (that may change in future versions) of the deflate compression algorithm, which is the default for zip files. Supports compression quality 1..=264.
* `deflate-flate2`: Combine this with any `flate2` feature flag that enables a back-end, to support deflate compression at quality 1..=9.
* `deflate-zopfli`: Enables deflating files with the `zopfli` library (used when compression quality is 10..=264). This is the most effective `deflate` implementation available, but also among the slowest.
* `deflate64`: Enables the deflate64 compression algorithm. Only decompression is supported.
* `lzma`: Enables the LZMA compression algorithm. Only decompression is supported.
* `bzip2`: Enables the BZip2 compression algorithm.
* `time`: Enables features using the [time](https://github.com/rust-lang-deprecated/time) crate.
* `chrono`: Enables converting last-modified `zip::DateTime` to and from `chrono::NaiveDateTime`.
* `jiff-02`: Enables converting last-modified `zip::DateTime` to and from `jiff::civil::DateTime`.
* `nt-time`: Enables returning timestamps stored in the NTFS extra field as `nt_time::FileTime`.
* `zstd`: Enables the Zstandard compression algorithm.

By default `aes-crypto`, `bzip2`, `deflate`, `deflate64`, `lzma`, `time`, `xz` and `zstd` are enabled.

The following feature flags are deprecated:

* `deflate-miniz`: Use `flate2`'s default backend for compression. Currently the same as `deflate`.

MSRV
----

Our current Minimum Supported Rust Version is **1.73**. When adding features, we will follow these guidelines:

- We will always support the latest four minor Rust versions. This gives you a 6 month window to upgrade your compiler.
- Any change to the MSRV will be accompanied with a **minor** version bump.

Examples
--------

See the [examples directory](examples) for:

* How to write a file to a zip.
* How to write a directory of files to a zip (using [walkdir](https://github.com/BurntSushi/walkdir)).
* How to extract a zip file.
* How to extract a single file from a zip.
* How to read a zip from the standard input.
* How to append a directory to an existing archive.

Fuzzing
-------

Fuzzing support is through [cargo fuzz](https://github.com/rust-fuzz/cargo-fuzz).
To install cargo fuzz:

```bash
cargo install cargo-fuzz
```

To list fuzz targets:

```bash
cargo +nightly fuzz list
```

To start fuzzing zip extraction:

```bash
cargo +nightly fuzz run fuzz_read
```

To start fuzzing zip creation:

```bash
cargo +nightly fuzz run fuzz_write
```

zip-2.5.0/SECURITY.md

# Security Policy

## Supported Versions

Use this section to tell people about which versions of your project are currently being supported with security updates.

| Version | Supported          |
| ------- | ------------------ |
| 2.4.x   | :white_check_mark: |
| 2.3.x   | :white_check_mark: |
| < 2.0   | :x:                |

## Reporting a Vulnerability

To report a vulnerability, please go to https://github.com/zip-rs/zip2/security/advisories/new.

We'll attempt to:

* Close the report within 7 days if it's invalid, or if a fix has already been released but some old versions needed to be yanked.
* Provide progress reports at least every 7 days to the original reporter.
* Fix vulnerabilities within 30 days of the initial report.

## Disclosure

A vulnerability will only be publicly disclosed once a fix is released. At that point, the delay before full public disclosure will be determined as follows:

* If the proof-of-concept is very simple, or an exploit is already in the wild (whether or not it specifically targets `zip`), all details will be made public right away.
* If the vulnerability is specific to `zip` and cannot easily be reverse-engineered from the code history, then the proof-of-concept and most of the details will be withheld until 14 days after the fix is released and all vulnerable versions are yanked with `cargo yank`.
* If a potential victim requests more time to deploy a fix based on a credible risk, then the withholding of details can be extended up to 30 days. This may be extended to 90 days if the victim is high-value (e.g. manages over US$1 billion worth of financial assets or intellectual property, or has evidence that they're a target of nation-state attackers) and there's a valid reason why they cannot deploy the fix as fast as most users (e.g. heavy reliance on an old version's interface, or infrastructure damage in a war zone).

zip-2.5.0/cliff.toml

# git-cliff ~ default configuration file
# https://git-cliff.org/docs/configuration
#
# Lines starting with "#" are comments.
# Configuration options are organized into tables and keys.
# See documentation for more information on available options.

[changelog]
# changelog header
header = """
# Changelog\n
All notable changes to this project will be documented in this file.\n
"""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
    ## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
    ## [unreleased]
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
    ### {{ group | striptags | trim | upper_first }}
    {% for commit in commits %}
        - {% if commit.scope %}*({{ commit.scope }})* {% endif %}\
            {% if commit.breaking %}[**breaking**] {% endif %}\
            {{ commit.message | upper_first }}\
    {% endfor %}
{% endfor %}\n
"""
# template for the changelog footer
footer = """
"""
# remove the leading and trailing s
trim = true
# postprocessors
postprocessors = [
    # { pattern = '', replace = "https://github.com/orhun/git-cliff" }, # replace repository URL
]

[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# process each line of a commit as an individual commit
split_commits = false
# regex for preprocessing the commit messages
commit_preprocessors = [
    # Replace issue numbers
    #{ pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](/issues/${2}))"},
    # Check spelling of the commit with https://github.com/crate-ci/typos
    # If the spelling is incorrect, it will be automatically fixed.
    #{ pattern = '.*', replace_command = 'typos --write-changes -' },
]
# regex for parsing and grouping commits
commit_parsers = [
    { message = "^feat", group = "šŸš€ Features" },
    { message = "^fix", group = "šŸ› Bug Fixes" },
    { message = "^doc", skip = true },
    { message = "^perf", group = "⚔ Performance" },
    { message = "^refactor", group = "🚜 Refactor" },
    { message = "^style", skip = true },
    { message = "^test", skip = true },
    { message = "^build", skip = true },
    { message = "^ci", skip = true },
    { message = "^chore\\(release\\)", skip = true },
    { message = "^chore\\(deps.*\\)", skip = true },
    { message = "^chore\\(pr\\)", skip = true },
    { message = "^chore\\(pull\\)", skip = true },
    { message = "^chore", group = "āš™ļø Miscellaneous Tasks" },
    { body = ".*security", group = "šŸ›”ļø Security" },
    { message = "^revert", group = "ā—€ļø Revert" },
]
# protect breaking changes from being skipped due to matching a skipping commit_parser
protect_breaking_commits = true
# filter out the commits that are not matched by commit parsers
filter_commits = false
# regex for matching git tags
# tag_pattern = "v[0-9].*"
# regex for skipping tags
# skip_tags = ""
# regex for ignoring tags
# ignore_tags = ""
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "oldest"
# limit the number of commits included in the changelog.
# limit_commits = 42

zip-2.5.0/pull_request_template.md

zip-2.5.0/release-plz.toml

[workspace]
dependencies_update = true # update dependencies with `cargo update`
pr_labels = ["release"] # add the `release` label to the release Pull Request
release_commits = "^(feat|fix|perf|refactor):" # prepare release only if at least one commit matches a regex
git_release_type = "auto"

[changelog]
protect_breaking_commits = true
commit_parsers = [
    { message = "^feat", group = "šŸš€ Features" },
    { message = "^fix", group = "šŸ› Bug Fixes" },
    { message = "^doc", skip = true },
    { message = "^perf", group = "⚔ Performance" },
    { message = "^refactor", group = "🚜 Refactor" },
    { message = "^style", skip = true },
    { message = "^test", skip = true },
    { message = "^build", skip = true },
    { message = "^ci", skip = true },
    { message = "^chore\\(release\\)", skip = true },
    { message = "^chore\\(deps.*\\)", skip = true },
    { message = "^chore\\(pr\\)", skip = true },
    { message = "^chore\\(pull\\)", skip = true },
    { message = "^chore", group = "āš™ļø Miscellaneous Tasks" },
    { body = ".*security", group = "šŸ›”ļø Security" },
    { message = "^revert", group = "ā—€ļø Revert" },
]

zip-2.5.0/src/aes.rs

//! Implementation of the AES decryption for zip files.
//!
//! This was implemented according to the [WinZip specification](https://www.winzip.com/win/en/aes_info.html).
//! Note that using CRC with AES depends on the used encryption specification, AE-1 or AE-2.
//! If the file is marked as encrypted with AE-2 the CRC field is ignored, even if it isn't set to 0.
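The salt/verifier/auth-code record layout described in the module comment above determines how many ciphertext bytes an encrypted entry actually contains. As a stdlib-only sketch (not part of this crate — the function names here are illustrative), assuming the WinZip AE-x layout: a salt half the key length, a 2-byte password verifier, and a 10-byte truncated HMAC-SHA1 tag:

```rust
// Illustrative sketch of the WinZip AES record layout arithmetic
// (assumed from the WinZip AE-x specification; names are hypothetical).

const PWD_VERIFY_LENGTH: u64 = 2; // password verification value
const AUTH_CODE_LENGTH: u64 = 10; // truncated HMAC-SHA1-80 tag

/// Salt length is half the AES key length: 8, 12 or 16 bytes.
fn salt_length(key_bits: u64) -> u64 {
    key_bits / 8 / 2
}

/// Number of actual ciphertext bytes inside a `compressed_size`-byte record.
fn data_length(compressed_size: u64, key_bits: u64) -> u64 {
    compressed_size - (salt_length(key_bits) + PWD_VERIFY_LENGTH + AUTH_CODE_LENGTH)
}

fn main() {
    // AES-256: 16-byte salt + 2 + 10 = 28 bytes of per-entry overhead.
    assert_eq!(data_length(1028, 256), 1000);
    // AES-128: 8-byte salt + 2 + 10 = 20 bytes of overhead.
    assert_eq!(data_length(1020, 128), 1000);
    println!("ok");
}
```

For AES-256 this works out to 28 bytes of per-entry overhead, which is what `AesReader::new` subtracts from `compressed_size` below.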
use crate::aes_ctr::AesCipher;
use crate::types::AesMode;
use crate::{aes_ctr, result::ZipError};
use constant_time_eq::constant_time_eq;
use hmac::{Hmac, Mac};
use sha1::Sha1;
use std::io::{self, Error, ErrorKind, Read, Write};
use zeroize::{Zeroize, Zeroizing};

/// The length of the password verification value in bytes
pub const PWD_VERIFY_LENGTH: usize = 2;
/// The length of the authentication code in bytes
const AUTH_CODE_LENGTH: usize = 10;
/// The number of iterations used with PBKDF2
const ITERATION_COUNT: u32 = 1000;

enum Cipher {
    Aes128(Box<aes_ctr::AesCtrZipKeyStream<aes_ctr::Aes128>>),
    Aes192(Box<aes_ctr::AesCtrZipKeyStream<aes_ctr::Aes192>>),
    Aes256(Box<aes_ctr::AesCtrZipKeyStream<aes_ctr::Aes256>>),
}

impl Cipher {
    /// Create a `Cipher` depending on the used `AesMode` and the given `key`.
    ///
    /// # Panics
    ///
    /// This panics if `key` doesn't have the correct size for the chosen aes mode.
    fn from_mode(aes_mode: AesMode, key: &[u8]) -> Self {
        match aes_mode {
            AesMode::Aes128 => Cipher::Aes128(Box::new(aes_ctr::AesCtrZipKeyStream::<
                aes_ctr::Aes128,
            >::new(key))),
            AesMode::Aes192 => Cipher::Aes192(Box::new(aes_ctr::AesCtrZipKeyStream::<
                aes_ctr::Aes192,
            >::new(key))),
            AesMode::Aes256 => Cipher::Aes256(Box::new(aes_ctr::AesCtrZipKeyStream::<
                aes_ctr::Aes256,
            >::new(key))),
        }
    }

    fn crypt_in_place(&mut self, target: &mut [u8]) {
        match self {
            Self::Aes128(cipher) => cipher.crypt_in_place(target),
            Self::Aes192(cipher) => cipher.crypt_in_place(target),
            Self::Aes256(cipher) => cipher.crypt_in_place(target),
        }
    }
}

// An aes encrypted file starts with a salt, whose length depends on the used aes mode
// followed by a 2 byte password verification value
// then the variable length encrypted data
// and lastly a 10 byte authentication code
pub struct AesReader<R> {
    reader: R,
    aes_mode: AesMode,
    data_length: u64,
}

impl<R: Read> AesReader<R> {
    pub const fn new(reader: R, aes_mode: AesMode, compressed_size: u64) -> AesReader<R> {
        let data_length = compressed_size
            - (PWD_VERIFY_LENGTH + AUTH_CODE_LENGTH + aes_mode.salt_length()) as u64;

        Self {
            reader,
            aes_mode,
            data_length,
        }
    }

    /// Read the AES header bytes and validate the password.
    ///
    /// Even if the validation succeeds, there is still a 1 in 65536 chance that an incorrect
    /// password was provided.
    /// It isn't possible to check the authentication code in this step. This will be done after
    /// reading and decrypting the file.
    pub fn validate(mut self, password: &[u8]) -> Result<AesReaderValid<R>, ZipError> {
        let salt_length = self.aes_mode.salt_length();
        let key_length = self.aes_mode.key_length();

        let mut salt = vec![0; salt_length];
        self.reader.read_exact(&mut salt)?;

        // next are 2 bytes used for password verification
        let mut pwd_verification_value = vec![0; PWD_VERIFY_LENGTH];
        self.reader.read_exact(&mut pwd_verification_value)?;

        // derive a key from the password and salt
        // the length depends on the aes key length
        let derived_key_len = 2 * key_length + PWD_VERIFY_LENGTH;
        let mut derived_key: Box<[u8]> = vec![0; derived_key_len].into_boxed_slice();

        // use PBKDF2 with HMAC-Sha1 to derive the key
        pbkdf2::pbkdf2::<Hmac<Sha1>>(password, &salt, ITERATION_COUNT, &mut derived_key)
            .map_err(|e| Error::new(ErrorKind::InvalidInput, e))?;
        let decrypt_key = &derived_key[0..key_length];
        let hmac_key = &derived_key[key_length..key_length * 2];
        let pwd_verify = &derived_key[derived_key_len - 2..];

        // the last 2 bytes should equal the password verification value
        if pwd_verification_value != pwd_verify {
            // wrong password
            return Err(ZipError::InvalidPassword);
        }

        let cipher = Cipher::from_mode(self.aes_mode, decrypt_key);
        let hmac = Hmac::<Sha1>::new_from_slice(hmac_key).unwrap();

        Ok(AesReaderValid {
            reader: self.reader,
            data_remaining: self.data_length,
            cipher,
            hmac,
            finalized: false,
        })
    }

    /// Read the AES header bytes and return the verification value and salt.
    ///
    /// # Returns
    ///
    /// the verification value and the salt
    pub fn get_verification_value_and_salt(
        mut self,
    ) -> io::Result<([u8; PWD_VERIFY_LENGTH], Vec<u8>)> {
        let salt_length = self.aes_mode.salt_length();

        let mut salt = vec![0; salt_length];
        self.reader.read_exact(&mut salt)?;

        // next are 2 bytes used for password verification
        let mut pwd_verification_value = [0; PWD_VERIFY_LENGTH];
        self.reader.read_exact(&mut pwd_verification_value)?;
        Ok((pwd_verification_value, salt))
    }
}

/// A reader for aes encrypted files, which has already passed the first password check.
///
/// There is a 1 in 65536 chance that an invalid password passes that check.
/// After the data has been read and decrypted an HMAC will be checked and provide a final means
/// to check if either the password is invalid or if the data has been changed.
pub struct AesReaderValid<R: Read> {
    reader: R,
    data_remaining: u64,
    cipher: Cipher,
    hmac: Hmac<Sha1>,
    finalized: bool,
}

impl<R: Read> Read for AesReaderValid<R> {
    /// This implementation does not fulfill all requirements set in the trait documentation.
    ///
    /// ```txt
    /// "If an error is returned then it must be guaranteed that no bytes were read."
    /// ```
    ///
    /// Whether this applies to errors that occur while reading the encrypted data depends on the
    /// underlying reader. If the error occurs while verifying the HMAC, the reader might become
    /// practically unusable, since its position after the error is not known.
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.data_remaining == 0 {
            return Ok(0);
        }

        // get the number of bytes to read, compare as u64 to make sure we can read more than
        // 2^32 bytes even on 32 bit systems.
        let bytes_to_read = self.data_remaining.min(buf.len() as u64) as usize;
        let read = self.reader.read(&mut buf[0..bytes_to_read])?;
        self.data_remaining -= read as u64;

        // Update the hmac with the encrypted data
        self.hmac.update(&buf[0..read]);

        // decrypt the data
        self.cipher.crypt_in_place(&mut buf[0..read]);

        // if there is no data left to read, check the integrity of the data
        if self.data_remaining == 0 {
            assert!(
                !self.finalized,
                "Tried to use an already finalized HMAC. This is a bug!"
            );
            self.finalized = true;

            // Zip uses HMAC-Sha1-80, which only uses the first half of the hash
            // see https://www.winzip.com/win/en/aes_info.html#auth-faq
            let mut read_auth_code = [0; AUTH_CODE_LENGTH];
            self.reader.read_exact(&mut read_auth_code)?;
            let computed_auth_code = &self.hmac.finalize_reset().into_bytes()[0..AUTH_CODE_LENGTH];

            // use constant time comparison to mitigate timing attacks
            if !constant_time_eq(computed_auth_code, &read_auth_code) {
                return Err(
                    Error::new(
                        ErrorKind::InvalidData,
                        "Invalid authentication code, this could be due to an invalid password or errors in the data"
                    )
                );
            }
        }
        Ok(read)
    }
}

impl<R: Read> AesReaderValid<R> {
    /// Consumes this decoder, returning the underlying reader.
    pub fn into_inner(self) -> R {
        self.reader
    }
}

pub struct AesWriter<W> {
    writer: W,
    cipher: Cipher,
    hmac: Hmac<Sha1>,
    buffer: Zeroizing<Vec<u8>>,
    encrypted_file_header: Option<Vec<u8>>,
}

impl<W: Write> AesWriter<W> {
    pub fn new(writer: W, aes_mode: AesMode, password: &[u8]) -> io::Result<Self> {
        let salt_length = aes_mode.salt_length();
        let key_length = aes_mode.key_length();

        let mut encrypted_file_header = Vec::with_capacity(salt_length + 2);

        let mut salt = vec![0; salt_length];
        getrandom::fill(&mut salt)?;
        encrypted_file_header.write_all(&salt)?;

        // Derive a key from the password and salt. The length depends on the aes key length
        let derived_key_len = 2 * key_length + PWD_VERIFY_LENGTH;
        let mut derived_key: Zeroizing<Vec<u8>> = Zeroizing::new(vec![0; derived_key_len]);

        // Use PBKDF2 with HMAC-Sha1 to derive the key.
        pbkdf2::pbkdf2::<Hmac<Sha1>>(password, &salt, ITERATION_COUNT, &mut derived_key)
            .map_err(|e| Error::new(ErrorKind::InvalidInput, e))?;
        let encryption_key = &derived_key[0..key_length];
        let hmac_key = &derived_key[key_length..key_length * 2];
        let pwd_verify = derived_key[derived_key_len - 2..].to_vec();
        encrypted_file_header.write_all(&pwd_verify)?;

        let cipher = Cipher::from_mode(aes_mode, encryption_key);
        let hmac = Hmac::<Sha1>::new_from_slice(hmac_key).unwrap();

        Ok(Self {
            writer,
            cipher,
            hmac,
            buffer: Default::default(),
            encrypted_file_header: Some(encrypted_file_header),
        })
    }

    pub fn finish(mut self) -> io::Result<W> {
        self.write_encrypted_file_header()?;

        // Zip uses HMAC-Sha1-80, which only uses the first half of the hash
        // see https://www.winzip.com/win/en/aes_info.html#auth-faq
        let computed_auth_code = &self.hmac.finalize_reset().into_bytes()[0..AUTH_CODE_LENGTH];
        self.writer.write_all(computed_auth_code)?;
        Ok(self.writer)
    }

    /// The AES encryption specification requires some metadata being written at the start of the
    /// file data section, but this can only be done once the extra data writing has been finished
    /// so we can't do it when the writer is constructed.
    fn write_encrypted_file_header(&mut self) -> io::Result<()> {
        if let Some(header) = self.encrypted_file_header.take() {
            self.writer.write_all(&header)?;
        }
        Ok(())
    }
}

impl<W: Write> Write for AesWriter<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.write_encrypted_file_header()?;

        // Fill the internal buffer and encrypt it in-place.
        self.buffer.extend_from_slice(buf);
        self.cipher.crypt_in_place(&mut self.buffer[..]);

        // Update the hmac with the encrypted data.
        self.hmac.update(&self.buffer[..]);

        // Write the encrypted buffer to the inner writer. We need to use `write_all` here as if
        // we only write parts of the data we can't easily reverse the keystream in the cipher
        // implementation.
        self.writer.write_all(&self.buffer[..])?;

        // Zeroize the backing memory before clearing the buffer to prevent cleartext data from
        // being left in memory.
        self.buffer.zeroize();
        self.buffer.clear();

        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        self.writer.flush()
    }
}

#[cfg(all(test, feature = "aes-crypto"))]
mod tests {
    use std::io::{self, Read, Write};

    use crate::{
        aes::{AesReader, AesWriter},
        result::ZipError,
        types::AesMode,
    };

    /// Checks whether `AesReader` can successfully decrypt what `AesWriter` produces.
    fn roundtrip(aes_mode: AesMode, password: &[u8], plaintext: &[u8]) -> Result<bool, ZipError> {
        let mut buf = io::Cursor::new(vec![]);
        let mut read_buffer = vec![];

        {
            let mut writer = AesWriter::new(&mut buf, aes_mode, password)?;
            writer.write_all(plaintext)?;
            writer.finish()?;
        }

        // Reset cursor position to the beginning.
        buf.set_position(0);

        {
            let compressed_length = buf.get_ref().len() as u64;
            let mut reader =
                AesReader::new(&mut buf, aes_mode, compressed_length).validate(password)?;
            reader.read_to_end(&mut read_buffer)?;
        }

        Ok(plaintext == read_buffer)
    }

    #[test]
    fn crypt_aes_256_0_byte() {
        let plaintext = &[];
        let password = b"some super secret password";
        assert!(roundtrip(AesMode::Aes256, password, plaintext).expect("could encrypt and decrypt"));
    }

    #[test]
    fn crypt_aes_128_5_byte() {
        let plaintext = b"asdf\n";
        let password = b"some super secret password";
        assert!(roundtrip(AesMode::Aes128, password, plaintext).expect("could encrypt and decrypt"));
    }

    #[test]
    fn crypt_aes_192_5_byte() {
        let plaintext = b"asdf\n";
        let password = b"some super secret password";
        assert!(roundtrip(AesMode::Aes192, password, plaintext).expect("could encrypt and decrypt"));
    }

    #[test]
    fn crypt_aes_256_5_byte() {
        let plaintext = b"asdf\n";
        let password = b"some super secret password";
        assert!(roundtrip(AesMode::Aes256, password, plaintext).expect("could encrypt and decrypt"));
    }

    #[test]
    fn crypt_aes_128_40_byte() {
        let plaintext = b"Lorem ipsum dolor sit amet, consectetur\n";
        let
password = b"some super secret password"; assert!(roundtrip(AesMode::Aes128, password, plaintext).expect("could encrypt and decrypt")); } #[test] fn crypt_aes_192_40_byte() { let plaintext = b"Lorem ipsum dolor sit amet, consectetur\n"; let password = b"some super secret password"; assert!(roundtrip(AesMode::Aes192, password, plaintext).expect("could encrypt and decrypt")); } #[test] fn crypt_aes_256_40_byte() { let plaintext = b"Lorem ipsum dolor sit amet, consectetur\n"; let password = b"some super secret password"; assert!(roundtrip(AesMode::Aes256, password, plaintext).expect("could encrypt and decrypt")); } } zip-2.5.0/src/aes_ctr.rs000064400000000000000000000221351046102023000132120ustar 00000000000000//! A counter mode (CTR) for AES to work with the encryption used in zip files. //! //! This was implemented since the zip specification requires the mode to not use a nonce and uses a //! different byte order (little endian) than NIST (big endian). //! See [AesCtrZipKeyStream] for more information. use crate::unstable::LittleEndianWriteExt; use aes::cipher::generic_array::GenericArray; use aes::cipher::{BlockEncrypt, KeyInit}; use std::{any, fmt}; /// Internal block size of an AES cipher. const AES_BLOCK_SIZE: usize = 16; /// AES-128. #[derive(Debug)] pub struct Aes128; /// AES-192 #[derive(Debug)] pub struct Aes192; /// AES-256. #[derive(Debug)] pub struct Aes256; /// An AES cipher kind. pub trait AesKind { /// Key type. type Key: AsRef<[u8]>; /// Cipher used to decrypt. type Cipher: KeyInit; } impl AesKind for Aes128 { type Key = [u8; 16]; type Cipher = aes::Aes128; } impl AesKind for Aes192 { type Key = [u8; 24]; type Cipher = aes::Aes192; } impl AesKind for Aes256 { type Key = [u8; 32]; type Cipher = aes::Aes256; } /// An AES-CTR key stream generator. /// /// Implements the slightly non-standard AES-CTR variant used by WinZip AES encryption. /// /// Typical AES-CTR implementations combine a nonce with a 64 bit counter. 
WinZIP AES instead uses /// no nonce and also uses a different byte order (little endian) than NIST (big endian). /// /// The stream implements the `Read` trait; encryption or decryption is performed by XOR-ing the /// bytes from the key stream with the ciphertext/plaintext. pub struct AesCtrZipKeyStream { /// Current AES counter. counter: u128, /// AES cipher instance. cipher: C::Cipher, /// Stores the currently available keystream bytes. buffer: [u8; AES_BLOCK_SIZE], /// Number of bytes already used up from `buffer`. pos: usize, } impl fmt::Debug for AesCtrZipKeyStream where C: AesKind, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!( f, "AesCtrZipKeyStream<{}>(counter: {})", any::type_name::(), self.counter ) } } impl AesCtrZipKeyStream where C: AesKind, C::Cipher: KeyInit, { /// Creates a new zip variant AES-CTR key stream. /// /// # Panics /// /// This panics if `key` doesn't have the correct size for cipher `C`. pub fn new(key: &[u8]) -> AesCtrZipKeyStream { AesCtrZipKeyStream { counter: 1, cipher: C::Cipher::new(GenericArray::from_slice(key)), buffer: [0u8; AES_BLOCK_SIZE], pos: AES_BLOCK_SIZE, } } } impl AesCipher for AesCtrZipKeyStream where C: AesKind, C::Cipher: BlockEncrypt, { /// Decrypt or encrypt `target`. #[inline] fn crypt_in_place(&mut self, mut target: &mut [u8]) { while !target.is_empty() { if self.pos == AES_BLOCK_SIZE { // Note: AES block size is always 16 bytes, same as u128. self.buffer .as_mut() .write_u128_le(self.counter) .expect("did not expect u128 le conversion to fail"); self.cipher .encrypt_block(GenericArray::from_mut_slice(&mut self.buffer)); self.counter += 1; self.pos = 0; } let target_len = target.len().min(AES_BLOCK_SIZE - self.pos); xor( &mut target[0..target_len], &self.buffer[self.pos..(self.pos + target_len)], ); target = &mut target[target_len..]; self.pos += target_len; } } } /// This trait allows using generic AES ciphers with different key sizes. 
pub trait AesCipher { fn crypt_in_place(&mut self, target: &mut [u8]); } /// XORs a slice in place with another slice. #[inline] fn xor(dest: &mut [u8], src: &[u8]) { assert_eq!(dest.len(), src.len()); for (lhs, rhs) in dest.iter_mut().zip(src.iter()) { *lhs ^= *rhs; } } #[cfg(test)] mod tests { use super::{Aes128, Aes192, Aes256, AesCipher, AesCtrZipKeyStream, AesKind}; use aes::cipher::{BlockEncrypt, KeyInit}; /// Checks whether `crypt_in_place` produces the correct plaintext after one use and yields the /// cipertext again after applying it again. fn roundtrip(key: &[u8], ciphertext: &[u8], expected_plaintext: &[u8]) where Aes: AesKind, Aes::Cipher: KeyInit + BlockEncrypt, { let mut key_stream = AesCtrZipKeyStream::::new(key); let mut plaintext = ciphertext.to_vec().into_boxed_slice(); key_stream.crypt_in_place(&mut plaintext); assert_eq!(*plaintext, *expected_plaintext); // Round-tripping should yield the ciphertext again. let mut key_stream = AesCtrZipKeyStream::::new(key); key_stream.crypt_in_place(&mut plaintext); assert_eq!(*plaintext, *ciphertext); } #[test] #[should_panic] fn new_with_wrong_key_size() { AesCtrZipKeyStream::::new(&[1, 2, 3, 4, 5]); } // The data used in these tests was generated with p7zip without any compression. // It's not possible to recreate the exact same data, since a random salt is used for encryption. 
// `7z a -phelloworld -mem=AES256 -mx=0 aes256_40byte.zip 40byte_data.txt` #[test] fn crypt_aes_256_0_byte() { let ciphertext = []; let expected_plaintext = &[]; let key = [ 0x0b, 0xec, 0x2e, 0xf2, 0x46, 0xf0, 0x7e, 0x35, 0x16, 0x54, 0xe0, 0x98, 0x10, 0xb3, 0x18, 0x55, 0x24, 0xa3, 0x9e, 0x0e, 0x40, 0xe7, 0x92, 0xad, 0xb2, 0x8a, 0x48, 0xf4, 0x5c, 0xd0, 0xc0, 0x54, ]; roundtrip::(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_128_5_byte() { let ciphertext = [0x98, 0xa9, 0x8c, 0x26, 0x0e]; let expected_plaintext = b"asdf\n"; let key = [ 0xe0, 0x25, 0x7b, 0x57, 0x97, 0x6a, 0xa4, 0x23, 0xab, 0x94, 0xaa, 0x44, 0xfd, 0x47, 0x4f, 0xa5, ]; roundtrip::(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_192_5_byte() { let ciphertext = [0x36, 0x55, 0x5c, 0x61, 0x3c]; let expected_plaintext = b"asdf\n"; let key = [ 0xe4, 0x4a, 0x88, 0x52, 0x8f, 0xf7, 0x0b, 0x81, 0x7b, 0x75, 0xf1, 0x74, 0x21, 0x37, 0x8c, 0x90, 0xad, 0xbe, 0x4a, 0x65, 0xa8, 0x96, 0x0e, 0xcc, ]; roundtrip::(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_256_5_byte() { let ciphertext = [0xc2, 0x47, 0xc0, 0xdc, 0x56]; let expected_plaintext = b"asdf\n"; let key = [ 0x79, 0x5e, 0x17, 0xf2, 0xc6, 0x3d, 0x28, 0x9b, 0x4b, 0x4b, 0xbb, 0xa9, 0xba, 0xc9, 0xa5, 0xee, 0x3a, 0x4f, 0x0f, 0x4b, 0x29, 0xbd, 0xe9, 0xb8, 0x41, 0x9c, 0x41, 0xa5, 0x15, 0xb2, 0x86, 0xab, ]; roundtrip::(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_128_40_byte() { let ciphertext = [ 0xcf, 0x72, 0x6b, 0xa1, 0xb2, 0x0f, 0xdf, 0xaa, 0x10, 0xad, 0x9c, 0x7f, 0x6d, 0x1c, 0x8d, 0xb5, 0x16, 0x7e, 0xbb, 0x11, 0x69, 0x52, 0x8c, 0x89, 0x80, 0x32, 0xaa, 0x76, 0xa6, 0x18, 0x31, 0x98, 0xee, 0xdd, 0x22, 0x68, 0xb7, 0xe6, 0x77, 0xd2, ]; let expected_plaintext = b"Lorem ipsum dolor sit amet, consectetur\n"; let key = [ 0x43, 0x2b, 0x6d, 0xbe, 0x05, 0x76, 0x6c, 0x9e, 0xde, 0xca, 0x3b, 0xf8, 0xaf, 0x5d, 0x81, 0xb6, ]; roundtrip::(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_192_40_byte() { 
let ciphertext = [ 0xa6, 0xfc, 0x52, 0x79, 0x2c, 0x6c, 0xfe, 0x68, 0xb1, 0xa8, 0xb3, 0x07, 0x52, 0x8b, 0x82, 0xa6, 0x87, 0x9c, 0x72, 0x42, 0x3a, 0xf8, 0xc6, 0xa9, 0xc9, 0xfb, 0x61, 0x19, 0x37, 0xb9, 0x56, 0x62, 0xf4, 0xfc, 0x5e, 0x7a, 0xdd, 0x55, 0x0a, 0x48, ]; let expected_plaintext = b"Lorem ipsum dolor sit amet, consectetur\n"; let key = [ 0xac, 0x92, 0x41, 0xba, 0xde, 0xd9, 0x02, 0xfe, 0x40, 0x92, 0x20, 0xf6, 0x56, 0x03, 0xfe, 0xae, 0x1b, 0xba, 0x01, 0x97, 0x97, 0x79, 0xbb, 0xa6, ]; roundtrip::<Aes192>(&key, &ciphertext, expected_plaintext); } #[test] fn crypt_aes_256_40_byte() { let ciphertext = [ 0xa9, 0x99, 0xbd, 0xea, 0x82, 0x9b, 0x8f, 0x2f, 0xb7, 0x52, 0x2f, 0x6b, 0xd8, 0xf6, 0xab, 0x0e, 0x24, 0x51, 0x9e, 0x18, 0x0f, 0xc0, 0x8f, 0x54, 0x15, 0x80, 0xae, 0xbc, 0xa0, 0x5c, 0x8a, 0x11, 0x8d, 0x14, 0x7e, 0xc5, 0xb4, 0xae, 0xd3, 0x37, ]; let expected_plaintext = b"Lorem ipsum dolor sit amet, consectetur\n"; let key = [ 0x64, 0x7c, 0x7a, 0xde, 0xf0, 0xf2, 0x61, 0x49, 0x1c, 0xf1, 0xf1, 0xe3, 0x37, 0xfc, 0xe1, 0x4d, 0x4a, 0x77, 0xd4, 0xeb, 0x9e, 0x3d, 0x75, 0xce, 0x9a, 0x3e, 0x10, 0x50, 0xc2, 0x07, 0x36, 0xb6, ]; roundtrip::<Aes256>(&key, &ciphertext, expected_plaintext); } } zip-2.5.0/src/build.rs000064400000000000000000000003571046102023000126730ustar 00000000000000use std::env::var; fn main() { if var("CARGO_FEATURE_DEFLATE_MINIZ").is_ok() && var("CARGO_FEATURE__ALL_FEATURES").is_err() { println!("cargo:warning=Feature `deflate-miniz` is deprecated; replace it with `deflate`"); } } zip-2.5.0/src/compression.rs000064400000000000000000000264221046102023000141360ustar 00000000000000//! Possible ZIP compression methods. use std::{fmt, io}; #[allow(deprecated)] /// Identifies the storage format used to compress a file within a ZIP archive. /// /// Each file's compression method is stored alongside it, allowing the /// contents to be read without context. 
/// /// When creating ZIP files, you may choose the method to use with /// [`crate::write::FileOptions::compression_method`] #[derive(Copy, Clone, PartialEq, Eq, Debug)] #[cfg_attr(fuzzing, derive(arbitrary::Arbitrary))] #[non_exhaustive] pub enum CompressionMethod { /// Store the file as is Stored, /// Compress the file using Deflate #[cfg(feature = "_deflate-any")] Deflated, /// Compress the file using Deflate64. /// Decoding deflate64 is supported but encoding deflate64 is not supported. #[cfg(feature = "deflate64")] Deflate64, /// Compress the file using BZIP2 #[cfg(feature = "bzip2")] Bzip2, /// Encrypted using AES. /// /// The actual compression method has to be taken from the AES extra data field /// or from `ZipFileData`. #[cfg(feature = "aes-crypto")] Aes, /// Compress the file using ZStandard #[cfg(feature = "zstd")] Zstd, /// Compress the file using LZMA #[cfg(feature = "lzma")] Lzma, /// Compress the file using XZ #[cfg(feature = "xz")] Xz, /// Unsupported compression method #[cfg_attr( not(fuzzing), deprecated(since = "0.5.7", note = "use the constants instead") )] Unsupported(u16), } #[allow(deprecated, missing_docs)] /// All compression methods defined for the ZIP format impl CompressionMethod { pub const STORE: Self = CompressionMethod::Stored; pub const SHRINK: Self = CompressionMethod::Unsupported(1); pub const REDUCE_1: Self = CompressionMethod::Unsupported(2); pub const REDUCE_2: Self = CompressionMethod::Unsupported(3); pub const REDUCE_3: Self = CompressionMethod::Unsupported(4); pub const REDUCE_4: Self = CompressionMethod::Unsupported(5); pub const IMPLODE: Self = CompressionMethod::Unsupported(6); #[cfg(feature = "_deflate-any")] pub const DEFLATE: Self = CompressionMethod::Deflated; #[cfg(not(feature = "_deflate-any"))] pub const DEFLATE: Self = CompressionMethod::Unsupported(8); #[cfg(feature = "deflate64")] pub const DEFLATE64: Self = CompressionMethod::Deflate64; #[cfg(not(feature = "deflate64"))] pub const DEFLATE64: Self = 
CompressionMethod::Unsupported(9); pub const PKWARE_IMPLODE: Self = CompressionMethod::Unsupported(10); #[cfg(feature = "bzip2")] pub const BZIP2: Self = CompressionMethod::Bzip2; #[cfg(not(feature = "bzip2"))] pub const BZIP2: Self = CompressionMethod::Unsupported(12); #[cfg(not(feature = "lzma"))] pub const LZMA: Self = CompressionMethod::Unsupported(14); #[cfg(feature = "lzma")] pub const LZMA: Self = CompressionMethod::Lzma; pub const IBM_ZOS_CMPSC: Self = CompressionMethod::Unsupported(16); pub const IBM_TERSE: Self = CompressionMethod::Unsupported(18); pub const ZSTD_DEPRECATED: Self = CompressionMethod::Unsupported(20); #[cfg(feature = "zstd")] pub const ZSTD: Self = CompressionMethod::Zstd; #[cfg(not(feature = "zstd"))] pub const ZSTD: Self = CompressionMethod::Unsupported(93); pub const MP3: Self = CompressionMethod::Unsupported(94); #[cfg(feature = "xz")] pub const XZ: Self = CompressionMethod::Xz; #[cfg(not(feature = "xz"))] pub const XZ: Self = CompressionMethod::Unsupported(95); pub const JPEG: Self = CompressionMethod::Unsupported(96); pub const WAVPACK: Self = CompressionMethod::Unsupported(97); pub const PPMD: Self = CompressionMethod::Unsupported(98); #[cfg(feature = "aes-crypto")] pub const AES: Self = CompressionMethod::Aes; #[cfg(not(feature = "aes-crypto"))] pub const AES: Self = CompressionMethod::Unsupported(99); } impl CompressionMethod { pub(crate) const fn parse_from_u16(val: u16) -> Self { match val { 0 => CompressionMethod::Stored, #[cfg(feature = "_deflate-any")] 8 => CompressionMethod::Deflated, #[cfg(feature = "deflate64")] 9 => CompressionMethod::Deflate64, #[cfg(feature = "bzip2")] 12 => CompressionMethod::Bzip2, #[cfg(feature = "lzma")] 14 => CompressionMethod::Lzma, #[cfg(feature = "xz")] 95 => CompressionMethod::Xz, #[cfg(feature = "zstd")] 93 => CompressionMethod::Zstd, #[cfg(feature = "aes-crypto")] 99 => CompressionMethod::Aes, #[allow(deprecated)] v => CompressionMethod::Unsupported(v), } } /// Converts a u16 to its 
corresponding CompressionMethod #[deprecated( since = "0.5.7", note = "use a constant to construct a compression method" )] pub const fn from_u16(val: u16) -> CompressionMethod { Self::parse_from_u16(val) } pub(crate) const fn serialize_to_u16(self) -> u16 { match self { CompressionMethod::Stored => 0, #[cfg(feature = "_deflate-any")] CompressionMethod::Deflated => 8, #[cfg(feature = "deflate64")] CompressionMethod::Deflate64 => 9, #[cfg(feature = "bzip2")] CompressionMethod::Bzip2 => 12, #[cfg(feature = "aes-crypto")] CompressionMethod::Aes => 99, #[cfg(feature = "zstd")] CompressionMethod::Zstd => 93, #[cfg(feature = "lzma")] CompressionMethod::Lzma => 14, #[cfg(feature = "xz")] CompressionMethod::Xz => 95, #[allow(deprecated)] CompressionMethod::Unsupported(v) => v, } } /// Converts a CompressionMethod to a u16 #[deprecated( since = "0.5.7", note = "to match on other compression methods, use a constant" )] pub const fn to_u16(self) -> u16 { self.serialize_to_u16() } } impl Default for CompressionMethod { fn default() -> Self { #[cfg(feature = "_deflate-any")] return CompressionMethod::Deflated; #[cfg(not(feature = "_deflate-any"))] return CompressionMethod::Stored; } } impl fmt::Display for CompressionMethod { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { // Just duplicate what the Debug format looks like, i.e, the enum key: write!(f, "{self:?}") } } /// The compression methods which have been implemented. 
pub const SUPPORTED_COMPRESSION_METHODS: &[CompressionMethod] = &[ CompressionMethod::Stored, #[cfg(feature = "_deflate-any")] CompressionMethod::Deflated, #[cfg(feature = "deflate64")] CompressionMethod::Deflate64, #[cfg(feature = "bzip2")] CompressionMethod::Bzip2, #[cfg(feature = "zstd")] CompressionMethod::Zstd, #[cfg(feature = "xz")] CompressionMethod::Xz, ]; pub(crate) enum Decompressor<R: io::BufRead> { Stored(R), #[cfg(feature = "_deflate-any")] Deflated(flate2::bufread::DeflateDecoder<R>), #[cfg(feature = "deflate64")] Deflate64(deflate64::Deflate64Decoder<R>), #[cfg(feature = "bzip2")] Bzip2(bzip2::bufread::BzDecoder<R>), #[cfg(feature = "zstd")] Zstd(zstd::Decoder<'static, R>), #[cfg(feature = "lzma")] Lzma(Box<crate::read::lzma::LzmaDecoder<R>>), #[cfg(feature = "xz")] Xz(xz2::bufread::XzDecoder<R>), } impl<R: io::BufRead> io::Read for Decompressor<R> { fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { match self { Decompressor::Stored(r) => r.read(buf), #[cfg(feature = "_deflate-any")] Decompressor::Deflated(r) => r.read(buf), #[cfg(feature = "deflate64")] Decompressor::Deflate64(r) => r.read(buf), #[cfg(feature = "bzip2")] Decompressor::Bzip2(r) => r.read(buf), #[cfg(feature = "zstd")] Decompressor::Zstd(r) => r.read(buf), #[cfg(feature = "lzma")] Decompressor::Lzma(r) => r.read(buf), #[cfg(feature = "xz")] Decompressor::Xz(r) => r.read(buf), } } } impl<R: io::BufRead> Decompressor<R> { pub fn new(reader: R, compression_method: CompressionMethod) -> crate::result::ZipResult<Self> { Ok(match compression_method { CompressionMethod::Stored => Decompressor::Stored(reader), #[cfg(feature = "_deflate-any")] CompressionMethod::Deflated => { Decompressor::Deflated(flate2::bufread::DeflateDecoder::new(reader)) } #[cfg(feature = "deflate64")] CompressionMethod::Deflate64 => { Decompressor::Deflate64(deflate64::Deflate64Decoder::with_buffer(reader)) } #[cfg(feature = "bzip2")] CompressionMethod::Bzip2 => Decompressor::Bzip2(bzip2::bufread::BzDecoder::new(reader)), #[cfg(feature = "zstd")] CompressionMethod::Zstd => 
Decompressor::Zstd(zstd::Decoder::with_buffer(reader)?), #[cfg(feature = "lzma")] CompressionMethod::Lzma => { Decompressor::Lzma(Box::new(crate::read::lzma::LzmaDecoder::new(reader))) } #[cfg(feature = "xz")] CompressionMethod::Xz => Decompressor::Xz(xz2::bufread::XzDecoder::new(reader)), _ => { return Err(crate::result::ZipError::UnsupportedArchive( "Compression method not supported", )) } }) } /// Consumes this decoder, returning the underlying reader. pub fn into_inner(self) -> R { match self { Decompressor::Stored(r) => r, #[cfg(feature = "_deflate-any")] Decompressor::Deflated(r) => r.into_inner(), #[cfg(feature = "deflate64")] Decompressor::Deflate64(r) => r.into_inner(), #[cfg(feature = "bzip2")] Decompressor::Bzip2(r) => r.into_inner(), #[cfg(feature = "zstd")] Decompressor::Zstd(r) => r.finish(), #[cfg(feature = "lzma")] Decompressor::Lzma(r) => r.into_inner(), #[cfg(feature = "xz")] Decompressor::Xz(r) => r.into_inner(), } } } #[cfg(test)] mod test { use super::{CompressionMethod, SUPPORTED_COMPRESSION_METHODS}; #[test] fn from_eq_to() { for v in 0..(u16::MAX as u32 + 1) { let from = CompressionMethod::parse_from_u16(v as u16); let to = from.serialize_to_u16() as u32; assert_eq!(v, to); } } #[test] fn to_eq_from() { fn check_match(method: CompressionMethod) { let to = method.serialize_to_u16(); let from = CompressionMethod::parse_from_u16(to); let back = from.serialize_to_u16(); assert_eq!(to, back); } for &method in SUPPORTED_COMPRESSION_METHODS { check_match(method); } } #[test] fn to_display_fmt() { fn check_match(method: CompressionMethod) { let debug_str = format!("{method:?}"); let display_str = format!("{method}"); assert_eq!(debug_str, display_str); } for &method in SUPPORTED_COMPRESSION_METHODS { check_match(method); } } } zip-2.5.0/src/cp437.rs000064400000000000000000000117611046102023000124350ustar 00000000000000//! 
Convert a string in IBM codepage 437 to UTF-8 /// Trait to convert IBM codepage 437 to the target type pub trait FromCp437 { /// Target type type Target; /// Function that does the conversion from cp437. /// Generally allocations will be avoided if all data falls into the ASCII range. #[allow(clippy::wrong_self_convention)] fn from_cp437(self) -> Self::Target; } impl<'a> FromCp437 for &'a [u8] { type Target = ::std::borrow::Cow<'a, str>; fn from_cp437(self) -> Self::Target { if self.iter().all(|c| *c < 0x80) { ::std::str::from_utf8(self).unwrap().into() } else { self.iter().map(|c| to_char(*c)).collect::<String>().into() } } } impl FromCp437 for Box<[u8]> { type Target = Box<str>; fn from_cp437(self) -> Self::Target { if self.iter().all(|c| *c < 0x80) { String::from_utf8(self.into()).unwrap() } else { self.iter().copied().map(to_char).collect() } .into_boxed_str() } } fn to_char(input: u8) -> char { let output = match input { 0x00..=0x7f => input as u32, 0x80 => 0x00c7, 0x81 => 0x00fc, 0x82 => 0x00e9, 0x83 => 0x00e2, 0x84 => 0x00e4, 0x85 => 0x00e0, 0x86 => 0x00e5, 0x87 => 0x00e7, 0x88 => 0x00ea, 0x89 => 0x00eb, 0x8a => 0x00e8, 0x8b => 0x00ef, 0x8c => 0x00ee, 0x8d => 0x00ec, 0x8e => 0x00c4, 0x8f => 0x00c5, 0x90 => 0x00c9, 0x91 => 0x00e6, 0x92 => 0x00c6, 0x93 => 0x00f4, 0x94 => 0x00f6, 0x95 => 0x00f2, 0x96 => 0x00fb, 0x97 => 0x00f9, 0x98 => 0x00ff, 0x99 => 0x00d6, 0x9a => 0x00dc, 0x9b => 0x00a2, 0x9c => 0x00a3, 0x9d => 0x00a5, 0x9e => 0x20a7, 0x9f => 0x0192, 0xa0 => 0x00e1, 0xa1 => 0x00ed, 0xa2 => 0x00f3, 0xa3 => 0x00fa, 0xa4 => 0x00f1, 0xa5 => 0x00d1, 0xa6 => 0x00aa, 0xa7 => 0x00ba, 0xa8 => 0x00bf, 0xa9 => 0x2310, 0xaa => 0x00ac, 0xab => 0x00bd, 0xac => 0x00bc, 0xad => 0x00a1, 0xae => 0x00ab, 0xaf => 0x00bb, 0xb0 => 0x2591, 0xb1 => 0x2592, 0xb2 => 0x2593, 0xb3 => 0x2502, 0xb4 => 0x2524, 0xb5 => 0x2561, 0xb6 => 0x2562, 0xb7 => 0x2556, 0xb8 => 0x2555, 0xb9 => 0x2563, 0xba => 0x2551, 0xbb => 0x2557, 0xbc => 0x255d, 0xbd => 0x255c, 0xbe => 0x255b, 0xbf => 0x2510, 0xc0 => 0x2514, 
0xc1 => 0x2534, 0xc2 => 0x252c, 0xc3 => 0x251c, 0xc4 => 0x2500, 0xc5 => 0x253c, 0xc6 => 0x255e, 0xc7 => 0x255f, 0xc8 => 0x255a, 0xc9 => 0x2554, 0xca => 0x2569, 0xcb => 0x2566, 0xcc => 0x2560, 0xcd => 0x2550, 0xce => 0x256c, 0xcf => 0x2567, 0xd0 => 0x2568, 0xd1 => 0x2564, 0xd2 => 0x2565, 0xd3 => 0x2559, 0xd4 => 0x2558, 0xd5 => 0x2552, 0xd6 => 0x2553, 0xd7 => 0x256b, 0xd8 => 0x256a, 0xd9 => 0x2518, 0xda => 0x250c, 0xdb => 0x2588, 0xdc => 0x2584, 0xdd => 0x258c, 0xde => 0x2590, 0xdf => 0x2580, 0xe0 => 0x03b1, 0xe1 => 0x00df, 0xe2 => 0x0393, 0xe3 => 0x03c0, 0xe4 => 0x03a3, 0xe5 => 0x03c3, 0xe6 => 0x00b5, 0xe7 => 0x03c4, 0xe8 => 0x03a6, 0xe9 => 0x0398, 0xea => 0x03a9, 0xeb => 0x03b4, 0xec => 0x221e, 0xed => 0x03c6, 0xee => 0x03b5, 0xef => 0x2229, 0xf0 => 0x2261, 0xf1 => 0x00b1, 0xf2 => 0x2265, 0xf3 => 0x2264, 0xf4 => 0x2320, 0xf5 => 0x2321, 0xf6 => 0x00f7, 0xf7 => 0x2248, 0xf8 => 0x00b0, 0xf9 => 0x2219, 0xfa => 0x00b7, 0xfb => 0x221a, 0xfc => 0x207f, 0xfd => 0x00b2, 0xfe => 0x25a0, 0xff => 0x00a0, }; ::std::char::from_u32(output).unwrap() } #[cfg(test)] mod test { #[test] fn to_char_valid() { for i in 0x00_u32..0x100 { super::to_char(i as u8); } } #[test] fn ascii() { for i in 0x00..0x80 { assert_eq!(super::to_char(i), i as char); } } #[test] #[allow(unknown_lints)] // invalid_from_utf8 was added in rust 1.72 #[allow(invalid_from_utf8)] fn example_slice() { use super::FromCp437; let data = b"Cura\x87ao"; assert!(::std::str::from_utf8(data).is_err()); assert_eq!(data.from_cp437(), "CuraƧao"); } #[test] fn example_vec() { use super::FromCp437; let data = vec![0xCC, 0xCD, 0xCD, 0xB9]; assert!(String::from_utf8(data.clone()).is_err()); assert_eq!(&*data.from_cp437(), "╠══╣"); } } zip-2.5.0/src/crc32.rs000064400000000000000000000067061046102023000125140ustar 00000000000000//! Helper module to compute a CRC32 checksum use std::io; use std::io::prelude::*; use crc32fast::Hasher; /// Reader that validates the CRC32 when it reaches the EOF. 
pub struct Crc32Reader<R> { inner: R, hasher: Hasher, check: u32, /// Signals if `inner` stores aes encrypted data. /// AE-2 encrypted data doesn't use crc and sets the value to 0. enabled: bool, } impl<R> Crc32Reader<R> { /// Get a new Crc32Reader which checks the inner reader against checksum. /// The check is disabled if `ae2_encrypted == true`. pub(crate) fn new(inner: R, checksum: u32, ae2_encrypted: bool) -> Crc32Reader<R> { Crc32Reader { inner, hasher: Hasher::new(), check: checksum, enabled: !ae2_encrypted, } } fn check_matches(&self) -> bool { self.check == self.hasher.clone().finalize() } pub fn into_inner(self) -> R { self.inner } } #[cold] fn invalid_checksum() -> io::Error { io::Error::new(io::ErrorKind::InvalidData, "Invalid checksum") } impl<R: Read> Read for Crc32Reader<R> { fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { let count = self.inner.read(buf)?; if self.enabled { if count == 0 && !buf.is_empty() && !self.check_matches() { return Err(invalid_checksum()); } self.hasher.update(&buf[..count]); } Ok(count) } fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> { let start = buf.len(); let n = self.inner.read_to_end(buf)?; if self.enabled { self.hasher.update(&buf[start..]); if !self.check_matches() { return Err(invalid_checksum()); } } Ok(n) } fn read_to_string(&mut self, buf: &mut String) -> io::Result<usize> { let start = buf.len(); let n = self.inner.read_to_string(buf)?; if self.enabled { self.hasher.update(&buf.as_bytes()[start..]); if !self.check_matches() { return Err(invalid_checksum()); } } Ok(n) } } #[cfg(test)] mod test { use super::*; #[test] fn test_empty_reader() { let data: &[u8] = b""; let mut buf = [0; 1]; let mut reader = Crc32Reader::new(data, 0, false); assert_eq!(reader.read(&mut buf).unwrap(), 0); let mut reader = Crc32Reader::new(data, 1, false); assert!(reader .read(&mut buf) .unwrap_err() .to_string() .contains("Invalid checksum")); } #[test] fn test_byte_by_byte() { let data: &[u8] = b"1234"; let mut buf = [0; 1]; let mut reader = 
Crc32Reader::new(data, 0x9be3e0a3, false); assert_eq!(reader.read(&mut buf).unwrap(), 1); assert_eq!(reader.read(&mut buf).unwrap(), 1); assert_eq!(reader.read(&mut buf).unwrap(), 1); assert_eq!(reader.read(&mut buf).unwrap(), 1); assert_eq!(reader.read(&mut buf).unwrap(), 0); // Can keep reading 0 bytes after the end assert_eq!(reader.read(&mut buf).unwrap(), 0); } #[test] fn test_zero_read() { let data: &[u8] = b"1234"; let mut buf = [0; 5]; let mut reader = Crc32Reader::new(data, 0x9be3e0a3, false); assert_eq!(reader.read(&mut buf[..0]).unwrap(), 0); assert_eq!(reader.read(&mut buf).unwrap(), 4); } } zip-2.5.0/src/extra_fields/extended_timestamp.rs000064400000000000000000000054311046102023000201260ustar 00000000000000use crate::result::{ZipError, ZipResult}; use crate::unstable::LittleEndianReadExt; use std::io::Read; /// extended timestamp, as described in #[derive(Debug, Clone)] pub struct ExtendedTimestamp { mod_time: Option<u32>, ac_time: Option<u32>, cr_time: Option<u32>, } impl ExtendedTimestamp { /// creates an extended timestamp struct by reading the required bytes from the reader. /// /// This method assumes that the length has already been read, therefore /// it must be passed as an argument pub fn try_from_reader<R>(reader: &mut R, len: u16) -> ZipResult<Self> where R: Read, { let mut flags = [0u8]; reader.read_exact(&mut flags)?; let flags = flags[0]; // the `flags` field refers to the local headers and might not correspond // to the len field. If the length field is 1+4, we assume that only // the modification time has been set // > Those times that are present will appear in the order indicated, but // > any combination of times may be omitted. (Creation time may be // > present without access time, for example.) TSize should equal // > (1 + 4*(number of set bits in Flags)), as the block is currently // > defined. 
if len != 5 && len as u32 != 1 + 4 * flags.count_ones() { //panic!("found len {len} and flags {flags:08b}"); return Err(ZipError::UnsupportedArchive( "flags and len don't match in extended timestamp field", )); } if flags & 0b11111000 != 0 { return Err(ZipError::UnsupportedArchive( "found unsupported timestamps in the extended timestamp header", )); } let mod_time = if (flags & 0b00000001u8 == 0b00000001u8) || len == 5 { Some(reader.read_u32_le()?) } else { None }; let ac_time = if flags & 0b00000010u8 == 0b00000010u8 && len > 5 { Some(reader.read_u32_le()?) } else { None }; let cr_time = if flags & 0b00000100u8 == 0b00000100u8 && len > 5 { Some(reader.read_u32_le()?) } else { None }; Ok(Self { mod_time, ac_time, cr_time, }) } /// returns the last modification timestamp, if defined, as UNIX epoch seconds pub fn mod_time(&self) -> Option<u32> { self.mod_time } /// returns the last access timestamp, if defined, as UNIX epoch seconds pub fn ac_time(&self) -> Option<u32> { self.ac_time } /// returns the creation timestamp, if defined, as UNIX epoch seconds pub fn cr_time(&self) -> Option<u32> { self.cr_time } } zip-2.5.0/src/extra_fields/mod.rs000064400000000000000000000015411046102023000150200ustar 00000000000000//! 
types for extra fields /// marker trait to denote the place where this extra field has been stored pub trait ExtraFieldVersion {} /// use this to mark extra fields specified in a local header #[derive(Debug, Clone)] pub struct LocalHeaderVersion; /// use this to mark extra fields specified in the central header #[derive(Debug, Clone)] pub struct CentralHeaderVersion; impl ExtraFieldVersion for LocalHeaderVersion {} impl ExtraFieldVersion for CentralHeaderVersion {} mod extended_timestamp; mod ntfs; mod zipinfo_utf8; pub use extended_timestamp::*; pub use ntfs::Ntfs; pub use zipinfo_utf8::*; /// contains one extra field #[derive(Debug, Clone)] pub enum ExtraField { /// NTFS extra field Ntfs(Ntfs), /// extended timestamp, as described in ExtendedTimestamp(ExtendedTimestamp), } zip-2.5.0/src/extra_fields/ntfs.rs000064400000000000000000000054631046102023000152200ustar 00000000000000use std::io::Read; use crate::{ result::{ZipError, ZipResult}, unstable::LittleEndianReadExt, }; /// The NTFS extra field as described in [PKWARE's APPNOTE.TXT v6.3.9]. /// /// This field stores [Windows file times], which are 64-bit unsigned integer /// values that represents the number of 100-nanosecond intervals that have /// elapsed since "1601-01-01 00:00:00 UTC". /// /// [PKWARE's APPNOTE.TXT v6.3.9]: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT /// [Windows file times]: https://docs.microsoft.com/en-us/windows/win32/sysinfo/file-times #[derive(Clone, Debug)] pub struct Ntfs { mtime: u64, atime: u64, ctime: u64, } impl Ntfs { /// Creates a NTFS extra field struct by reading the required bytes from the /// reader. /// /// This method assumes that the length has already been read, therefore it /// must be passed as an argument. pub fn try_from_reader<R>(reader: &mut R, len: u16) -> ZipResult<Self> where R: Read, { if len != 32 { return Err(ZipError::UnsupportedArchive( "NTFS extra field has an unsupported length", )); } // Read reserved for future use. 
let _ = reader.read_u32_le()?; let tag = reader.read_u16_le()?; if tag != 0x0001 { return Err(ZipError::UnsupportedArchive( "NTFS extra field has an unsupported attribute tag", )); } let size = reader.read_u16_le()?; if size != 24 { return Err(ZipError::UnsupportedArchive( "NTFS extra field has an unsupported attribute size", )); } let mtime = reader.read_u64_le()?; let atime = reader.read_u64_le()?; let ctime = reader.read_u64_le()?; Ok(Self { mtime, atime, ctime, }) } /// Returns the file last modification time as a file time. pub fn mtime(&self) -> u64 { self.mtime } /// Returns the file last modification time as a file time. #[cfg(feature = "nt-time")] pub fn modified_file_time(&self) -> nt_time::FileTime { nt_time::FileTime::new(self.mtime) } /// Returns the file last access time as a file time. pub fn atime(&self) -> u64 { self.atime } /// Returns the file last access time as a file time. #[cfg(feature = "nt-time")] pub fn accessed_file_time(&self) -> nt_time::FileTime { nt_time::FileTime::new(self.atime) } /// Returns the file creation time as a file time. pub fn ctime(&self) -> u64 { self.ctime } /// Returns the file creation time as a file time. #[cfg(feature = "nt-time")] pub fn created_file_time(&self) -> nt_time::FileTime { nt_time::FileTime::new(self.ctime) } } zip-2.5.0/src/extra_fields/zipinfo_utf8.rs000064400000000000000000000025351046102023000166710ustar 00000000000000use crate::result::{invalid, ZipResult}; use crate::unstable::LittleEndianReadExt; use core::mem::size_of; use std::io::Read; /// Info-ZIP Unicode Path Extra Field (0x7075) or Unicode Comment Extra Field (0x6375), as /// specified in APPNOTE 4.6.8 and 4.6.9 #[derive(Clone, Debug)] pub struct UnicodeExtraField { crc32: u32, content: Box<[u8]>, } impl UnicodeExtraField { /// Verifies the checksum and returns the content. 
pub fn unwrap_valid(self, ascii_field: &[u8]) -> ZipResult<Box<[u8]>> { let mut crc32 = crc32fast::Hasher::new(); crc32.update(ascii_field); let actual_crc32 = crc32.finalize(); if self.crc32 != actual_crc32 { return Err(invalid!("CRC32 checksum failed on Unicode extra field")); } Ok(self.content) } } impl UnicodeExtraField { pub(crate) fn try_from_reader<R: Read>(reader: &mut R, len: u16) -> ZipResult<Self> { // Read and discard version byte reader.read_exact(&mut [0u8])?; let crc32 = reader.read_u32_le()?; let content_len = (len as usize) .checked_sub(size_of::<u8>() + size_of::<u32>()) .ok_or(invalid!("Unicode extra field is too small"))?; let mut content = vec![0u8; content_len].into_boxed_slice(); reader.read_exact(&mut content)?; Ok(Self { crc32, content }) } } zip-2.5.0/src/lib.rs000064400000000000000000000042331046102023000123370ustar 00000000000000//! A library for reading and writing ZIP archives. //! ZIP is a format designed for cross-platform file "archiving". //! That is, storing a collection of files in a single datastream //! to make them easier to share between computers. //! Additionally, ZIP is able to compress and encrypt files in its //! archives. //! //! The current implementation is based on [PKWARE's APPNOTE.TXT v6.3.9](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) //! //! --- //! //! [`zip`](`crate`) has support for the most common ZIP archives found in common use. //! However, in special cases, //! there are some zip archives that are difficult to read or write. //! //! This is a list of supported features: //! //! | | Reading | Writing | //! | ------- | ------ | ------- | //! | Stored | āœ… | āœ… | //! | Deflate | āœ… [->](`crate::ZipArchive::by_name`) | āœ… [->](`crate::write::FileOptions::compression_method`) | //! | Deflate64 | āœ… | | //! | Bzip2 | āœ… | āœ… | //! | ZStandard | āœ… | āœ… | //! | LZMA | āœ… | | //! | XZ | āœ… | | //! | AES encryption | āœ… | āœ… | //! | ZipCrypto deprecated encryption | āœ… | āœ… | //! //! 
#![cfg_attr(docsrs, feature(doc_auto_cfg))] #![warn(missing_docs)] #![allow(unexpected_cfgs)] // Needed for cfg(fuzzing) on nightly as of 2024-05-06 pub use crate::compression::{CompressionMethod, SUPPORTED_COMPRESSION_METHODS}; pub use crate::read::HasZipMetadata; pub use crate::read::ZipArchive; pub use crate::spec::{ZIP64_BYTES_THR, ZIP64_ENTRY_THR}; pub use crate::types::{AesMode, DateTime}; pub use crate::write::ZipWriter; #[cfg(feature = "aes-crypto")] mod aes; #[cfg(feature = "aes-crypto")] mod aes_ctr; mod compression; mod cp437; mod crc32; pub mod extra_fields; mod path; pub mod read; pub mod result; mod spec; mod types; pub mod write; mod zipcrypto; pub use extra_fields::ExtraField; #[doc = "Unstable APIs\n\ \ All APIs accessible by importing this module are unstable; They may be changed in patch \ releases. You MUST use an exact version specifier in `Cargo.toml`, to indicate the version of this \ API you're using:\n\ \ ```toml\n [dependencies]\n zip = \"="] #[doc=env!("CARGO_PKG_VERSION")] #[doc = "\"\n\ ```"] pub mod unstable; zip-2.5.0/src/path.rs000064400000000000000000000012161046102023000125230ustar 00000000000000//! Path manipulation utilities use std::{ ffi::OsStr, path::{Component, Path}, }; /// Simplify a path by removing the prefix and parent directories and only return normal components pub(crate) fn simplified_components(input: &Path) -> Option<Vec<&OsStr>> { let mut out = Vec::new(); for component in input.components() { match component { Component::Prefix(_) | Component::RootDir => return None, Component::ParentDir => { out.pop()?; } Component::Normal(_) => out.push(component.as_os_str()), Component::CurDir => (), } } Some(out) } zip-2.5.0/src/read/config.rs000064400000000000000000000016031046102023000137470ustar 00000000000000/// Configuration for reading ZIP archives. #[repr(transparent)] #[derive(Debug, Default, Clone, Copy)] pub struct Config { /// An offset into the reader to use to find the start of the archive. 
pub archive_offset: ArchiveOffset, } /// The offset of the start of the archive from the beginning of the reader. #[derive(Debug, Default, Clone, Copy, PartialEq, Eq, Hash)] pub enum ArchiveOffset { /// Try to detect the archive offset automatically. /// /// This will look at the central directory specified by `FromCentralDirectory` for a header. /// If missing, this will behave as if `None` were specified. #[default] Detect, /// Use the central directory length and offset to determine the start of the archive. #[deprecated(since = "2.3.0", note = "use `Detect` instead")] FromCentralDirectory, /// Specify a fixed archive offset. Known(u64), } zip-2.5.0/src/read/lzma.rs000064400000000000000000000026241046102023000134510ustar 00000000000000use lzma_rs::decompress::{Options, Stream, UnpackedSize}; use std::collections::VecDeque; use std::io::{BufRead, Error, ErrorKind, Read, Result, Write}; const OPTIONS: Options = Options { unpacked_size: UnpackedSize::ReadFromHeader, memlimit: None, allow_incomplete: true, }; #[derive(Debug)] pub struct LzmaDecoder<R> { compressed_reader: R, stream: Stream<VecDeque<u8>>, } impl<R> LzmaDecoder<R> { pub fn new(inner: R) -> Self { LzmaDecoder { compressed_reader: inner, stream: Stream::new_with_options(&OPTIONS, VecDeque::new()), } } pub fn into_inner(self) -> R { self.compressed_reader } } impl<R: BufRead> Read for LzmaDecoder<R> { fn read(&mut self, buf: &mut [u8]) -> Result<usize> { let mut bytes_read = self .stream .get_output_mut() .ok_or(Error::new(ErrorKind::InvalidData, "Invalid LZMA stream"))? 
.read(buf)?; while bytes_read < buf.len() { let compressed_bytes = self.compressed_reader.fill_buf()?; if compressed_bytes.is_empty() { break; } self.stream.write_all(compressed_bytes)?; let consumed = compressed_bytes.len(); // Mark the buffered input as consumed; otherwise `fill_buf` would keep returning the same bytes. self.compressed_reader.consume(consumed); bytes_read += self .stream .get_output_mut() .unwrap() .read(&mut buf[bytes_read..])?; } Ok(bytes_read) } } zip-2.5.0/src/read/magic_finder.rs000064400000000000000000000216741046102023000151210ustar 00000000000000use std::io::{Read, Seek, SeekFrom}; use memchr::memmem::{Finder, FinderRev}; use crate::result::ZipResult; pub trait FinderDirection<'a> { fn new(needle: &'a [u8]) -> Self; fn reset_cursor(bounds: (u64, u64), window_size: usize) -> u64; fn scope_window(window: &[u8], mid_window_offset: usize) -> (&[u8], usize); fn needle(&self) -> &[u8]; fn find(&self, haystack: &[u8]) -> Option<usize>; fn move_cursor(&self, cursor: u64, bounds: (u64, u64), window_size: usize) -> Option<u64>; fn move_scope(&self, offset: usize) -> usize; } pub struct Forward<'a>(Finder<'a>); impl<'a> FinderDirection<'a> for Forward<'a> { fn new(needle: &'a [u8]) -> Self { Self(Finder::new(needle)) } fn reset_cursor((start_inclusive, _): (u64, u64), _: usize) -> u64 { start_inclusive } fn scope_window(window: &[u8], mid_window_offset: usize) -> (&[u8], usize) { (&window[mid_window_offset..], mid_window_offset) } fn find(&self, haystack: &[u8]) -> Option<usize> { self.0.find(haystack) } fn needle(&self) -> &[u8] { self.0.needle() } fn move_cursor(&self, cursor: u64, bounds: (u64, u64), window_size: usize) -> Option<u64> { let magic_overlap = self.needle().len().saturating_sub(1) as u64; let next = cursor.saturating_add(window_size as u64 - magic_overlap); if next >= bounds.1 { None } else { Some(next) } } fn move_scope(&self, offset: usize) -> usize { offset + self.needle().len() } } pub struct Backwards<'a>(FinderRev<'a>); impl<'a> FinderDirection<'a> for Backwards<'a> { fn new(needle: &'a [u8]) -> Self { Self(FinderRev::new(needle)) } fn reset_cursor(bounds: (u64, u64), window_size: usize) -> u64 { bounds .1 
.saturating_sub(window_size as u64) .clamp(bounds.0, bounds.1) } fn scope_window(window: &[u8], mid_window_offset: usize) -> (&[u8], usize) { (&window[..mid_window_offset], 0) } fn find(&self, haystack: &[u8]) -> Option<usize> { self.0.rfind(haystack) } fn needle(&self) -> &[u8] { self.0.needle() } fn move_cursor(&self, cursor: u64, bounds: (u64, u64), window_size: usize) -> Option<u64> { let magic_overlap = self.needle().len().saturating_sub(1) as u64; if cursor <= bounds.0 { None } else { Some( cursor .saturating_add(magic_overlap) .saturating_sub(window_size as u64) .clamp(bounds.0, bounds.1), ) } } fn move_scope(&self, offset: usize) -> usize { offset } } /// A utility for finding magic symbols from the end of a seekable reader. /// /// Can be repurposed to recycle the internal buffer. pub struct MagicFinder<Direction> { buffer: Box<[u8]>, pub(self) finder: Direction, cursor: u64, mid_buffer_offset: Option<usize>, bounds: (u64, u64), } impl<'a, T: FinderDirection<'a>> MagicFinder<T> { /// Create a new magic bytes finder to look within specific bounds. pub fn new(magic_bytes: &'a [u8], start_inclusive: u64, end_exclusive: u64) -> Self { const BUFFER_SIZE: usize = 2048; // Smaller buffer size would be unable to locate bytes. // Equal buffer size would stall (the window could not be moved). debug_assert!(BUFFER_SIZE >= magic_bytes.len()); Self { buffer: vec![0; BUFFER_SIZE].into_boxed_slice(), finder: T::new(magic_bytes), cursor: T::reset_cursor((start_inclusive, end_exclusive), BUFFER_SIZE), mid_buffer_offset: None, bounds: (start_inclusive, end_exclusive), } } /// Repurpose the finder for different bytes or bounds. pub fn repurpose(&mut self, magic_bytes: &'a [u8], bounds: (u64, u64)) -> &mut Self { debug_assert!(self.buffer.len() >= magic_bytes.len()); self.finder = T::new(magic_bytes); self.cursor = T::reset_cursor(bounds, self.buffer.len()); self.bounds = bounds; // Reset the mid-buffer offset, to invalidate buffer content. 
self.mid_buffer_offset = None; self } /// Find the next magic bytes in the direction specified in the type. pub fn next(&mut self, reader: &mut R) -> ZipResult> { loop { if self.cursor < self.bounds.0 || self.cursor >= self.bounds.1 { // The finder is consumed break; } /* Position the window and ensure correct length */ let window_start = self.cursor; let window_end = self .cursor .saturating_add(self.buffer.len() as u64) .min(self.bounds.1); if window_end <= window_start { // Short-circuit on zero-sized windows to prevent loop break; } let window = &mut self.buffer[..(window_end - window_start) as usize]; if self.mid_buffer_offset.is_none() { reader.seek(SeekFrom::Start(window_start))?; reader.read_exact(window)?; } let (window, window_start_offset) = match self.mid_buffer_offset { Some(mid_buffer_offset) => T::scope_window(window, mid_buffer_offset), None => (&*window, 0usize), }; if let Some(offset) = self.finder.find(window) { let magic_pos = window_start + window_start_offset as u64 + offset as u64; reader.seek(SeekFrom::Start(magic_pos))?; self.mid_buffer_offset = Some(self.finder.move_scope(window_start_offset + offset)); return Ok(Some(magic_pos)); } self.mid_buffer_offset = None; match self .finder .move_cursor(self.cursor, self.bounds, self.buffer.len()) { Some(new_cursor) => { self.cursor = new_cursor; } None => { // Destroy the finder when we've reached the end of the bounds. self.bounds.0 = self.bounds.1; break; } } } Ok(None) } } /// A magic bytes finder with an optimistic guess that is tried before /// the inner finder begins searching from end. This enables much faster /// lookup in files without appended junk, because the magic bytes will be /// found directly. /// /// The guess can be marked as mandatory to produce an error. This is useful /// if the ArchiveOffset is known and auto-detection is not desired. 
pub struct OptimisticMagicFinder { inner: MagicFinder, initial_guess: Option<(u64, bool)>, } /// This is a temporary restriction, to avoid heap allocation in [`Self::next_back`]. /// /// We only use magic bytes of size 4 at the moment. const STACK_BUFFER_SIZE: usize = 8; impl<'a, Direction: FinderDirection<'a>> OptimisticMagicFinder { /// Create a new empty optimistic magic bytes finder. pub fn new_empty() -> Self { Self { inner: MagicFinder::new(&[], 0, 0), initial_guess: None, } } /// Repurpose the finder for different bytes, bounds and initial guesses. pub fn repurpose( &mut self, magic_bytes: &'a [u8], bounds: (u64, u64), initial_guess: Option<(u64, bool)>, ) -> &mut Self { debug_assert!(magic_bytes.len() <= STACK_BUFFER_SIZE); self.inner.repurpose(magic_bytes, bounds); self.initial_guess = initial_guess; self } /// Equivalent to `next_back`, with an optional initial guess attempted before /// proceeding with reading from the back of the reader. pub fn next(&mut self, reader: &mut R) -> ZipResult> { if let Some((v, mandatory)) = self.initial_guess { reader.seek(SeekFrom::Start(v))?; let mut buffer = [0; STACK_BUFFER_SIZE]; let buffer = &mut buffer[..self.inner.finder.needle().len()]; // Attempt to match only if there's enough space for the needle if v.saturating_add(buffer.len() as u64) <= self.inner.bounds.1 { reader.read_exact(buffer)?; // If a match is found, yield it. if self.inner.finder.needle() == buffer { self.initial_guess.take(); reader.seek(SeekFrom::Start(v))?; return Ok(Some(v)); } } // If a match is not found, but the initial guess was mandatory, return an error. if mandatory { return Ok(None); } // If the initial guess was not mandatory, remove it, as it was not found. 
self.initial_guess.take(); } self.inner.next(reader) } } zip-2.5.0/src/read/stream.rs000064400000000000000000000323171046102023000140030ustar 00000000000000use super::{ central_header_to_zip_file_inner, make_symlink, read_zipfile_from_stream, ZipCentralEntryBlock, ZipFile, ZipFileData, ZipResult, }; use crate::spec::FixedSizeBlock; use indexmap::IndexMap; use std::fs; use std::fs::create_dir_all; use std::io::{self, Read}; use std::path::{Path, PathBuf}; /// Stream decoder for zip. #[derive(Debug)] pub struct ZipStreamReader(R); impl ZipStreamReader { /// Create a new ZipStreamReader pub const fn new(reader: R) -> Self { Self(reader) } } impl ZipStreamReader { fn parse_central_directory(&mut self) -> ZipResult { // Give archive_offset and central_header_start dummy value 0, since // they are not used in the output. let archive_offset = 0; let central_header_start = 0; // Parse central header let block = ZipCentralEntryBlock::parse(&mut self.0)?; let file = central_header_to_zip_file_inner( &mut self.0, archive_offset, central_header_start, block, )?; Ok(ZipStreamFileMetadata(file)) } /// Iterate over the stream and extract all file and their /// metadata. pub fn visit(mut self, visitor: &mut V) -> ZipResult<()> { while let Some(mut file) = read_zipfile_from_stream(&mut self.0)? { visitor.visit_file(&mut file)?; } while let Ok(metadata) = self.parse_central_directory() { visitor.visit_additional_metadata(&metadata)?; } Ok(()) } /// Extract a Zip archive into a directory, overwriting files if they /// already exist. Paths are sanitized with [`ZipFile::enclosed_name`]. /// /// Extraction is not atomic; If an error is encountered, some of the files /// may be left on disk. 
    pub fn extract<P: AsRef<Path>>(self, directory: P) -> ZipResult<()> {
        create_dir_all(&directory)?;
        let directory = directory.as_ref().canonicalize()?;
        struct Extractor(PathBuf, IndexMap<Box<str>, ()>);
        impl ZipStreamVisitor for Extractor {
            fn visit_file(&mut self, file: &mut ZipFile<'_>) -> ZipResult<()> {
                self.1.insert(file.name().into(), ());
                let mut outpath = self.0.clone();
                file.safe_prepare_path(&self.0, &mut outpath, None::<&(_, fn(&Path) -> bool)>)?;

                if file.is_symlink() {
                    let mut target = Vec::with_capacity(file.size() as usize);
                    file.read_to_end(&mut target)?;
                    make_symlink(&outpath, &target, &self.1)?;
                    return Ok(());
                }

                if file.is_dir() {
                    fs::create_dir_all(&outpath)?;
                } else {
                    let mut outfile = fs::File::create(&outpath)?;
                    io::copy(file, &mut outfile)?;
                }

                Ok(())
            }

            #[allow(unused)]
            fn visit_additional_metadata(
                &mut self,
                metadata: &ZipStreamFileMetadata,
            ) -> ZipResult<()> {
                #[cfg(unix)]
                {
                    use super::ZipError;
                    let filepath = metadata
                        .enclosed_name()
                        .ok_or(crate::result::invalid!("Invalid file path"))?;

                    let outpath = self.0.join(filepath);

                    use std::os::unix::fs::PermissionsExt;
                    if let Some(mode) = metadata.unix_mode() {
                        fs::set_permissions(outpath, fs::Permissions::from_mode(mode))?;
                    }
                }

                Ok(())
            }
        }

        self.visit(&mut Extractor(directory, IndexMap::new()))
    }
}

/// Visitor for ZipStreamReader
pub trait ZipStreamVisitor {
    /// * `file` - contains the content of the file and most of the metadata,
    ///   except:
    ///   - `comment`: set to an empty string
    ///   - `data_start`: set to 0
    ///   - `external_attributes`: `unix_mode()`: will return None
    fn visit_file(&mut self, file: &mut ZipFile<'_>) -> ZipResult<()>;

    /// This function is guaranteed to be called after all `visit_file`s.
    ///
    /// * `metadata` - Provides missing metadata in `visit_file`.
    fn visit_additional_metadata(&mut self, metadata: &ZipStreamFileMetadata) -> ZipResult<()>;
}

/// Additional metadata for the file.
#[derive(Debug)]
pub struct ZipStreamFileMetadata(ZipFileData);

impl ZipStreamFileMetadata {
    /// Get the name of the file
    ///
    /// # Warnings
    ///
    /// It is dangerous to use this name directly when extracting an archive.
    /// It may contain an absolute path (`/etc/shadow`), or break out of the
    /// current directory (`../runtime`). Carelessly writing to these paths
    /// allows an attacker to craft a ZIP archive that will overwrite critical
    /// files.
    ///
    /// You can use the [`ZipFile::enclosed_name`] method to validate the name
    /// as a safe path.
    pub fn name(&self) -> &str {
        &self.0.file_name
    }

    /// Get the name of the file, in the raw (internal) byte representation.
    ///
    /// The encoding of this data is currently undefined.
    pub fn name_raw(&self) -> &[u8] {
        &self.0.file_name_raw
    }

    /// Rewrite the path, ignoring any path components with special meaning.
    ///
    /// - Absolute paths are made relative
    /// - [std::path::Component::ParentDir]s are ignored
    /// - Truncates the filename at a NULL byte
    ///
    /// This is appropriate if you need to be able to extract *something* from
    /// any archive, but will easily misrepresent trivial paths like
    /// `foo/../bar` as `foo/bar` (instead of `bar`). Because of this,
    /// [`ZipFile::enclosed_name`] is the better option in most scenarios.
    pub fn mangled_name(&self) -> PathBuf {
        self.0.file_name_sanitized()
    }

    /// Ensure the file path is safe to use as a [`Path`].
    ///
    /// - It can't contain NULL bytes
    /// - It can't resolve to a path outside the current directory
    ///   > `foo/../bar` is fine, `foo/../../bar` is not.
    /// - It can't be an absolute path
    ///
    /// This will read well-formed ZIP files correctly, and is resistant
    /// to path-based exploits. It is recommended over
    /// [`ZipFile::mangled_name`].
    pub fn enclosed_name(&self) -> Option<PathBuf> {
        self.0.enclosed_name()
    }

    /// Returns whether the file is actually a directory
    pub fn is_dir(&self) -> bool {
        self.name()
            .chars()
            .next_back()
            .is_some_and(|c| c == '/' || c == '\\')
    }

    /// Returns whether the file is a regular file
    pub fn is_file(&self) -> bool {
        !self.is_dir()
    }

    /// Get the comment of the file
    pub fn comment(&self) -> &str {
        &self.0.file_comment
    }

    /// Get unix mode for the file
    pub const fn unix_mode(&self) -> Option<u32> {
        self.0.unix_mode()
    }
}

#[cfg(test)]
mod test {
    use tempfile::TempDir;

    use super::*;
    use crate::write::SimpleFileOptions;
    use crate::ZipWriter;
    use std::collections::BTreeSet;
    use std::io::Cursor;

    struct DummyVisitor;
    impl ZipStreamVisitor for DummyVisitor {
        fn visit_file(&mut self, _file: &mut ZipFile<'_>) -> ZipResult<()> {
            Ok(())
        }

        fn visit_additional_metadata(
            &mut self,
            _metadata: &ZipStreamFileMetadata,
        ) -> ZipResult<()> {
            Ok(())
        }
    }

    #[allow(dead_code)]
    #[derive(Default, Debug, Eq, PartialEq)]
    struct CounterVisitor(u64, u64);
    impl ZipStreamVisitor for CounterVisitor {
        fn visit_file(&mut self, _file: &mut ZipFile<'_>) -> ZipResult<()> {
            self.0 += 1;
            Ok(())
        }

        fn visit_additional_metadata(
            &mut self,
            _metadata: &ZipStreamFileMetadata,
        ) -> ZipResult<()> {
            self.1 += 1;
            Ok(())
        }
    }

    #[test]
    fn invalid_offset() {
        ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/invalid_offset.zip"
        )))
        .visit(&mut DummyVisitor)
        .unwrap_err();
    }

    #[test]
    fn invalid_offset2() {
        ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/invalid_offset2.zip"
        )))
        .visit(&mut DummyVisitor)
        .unwrap_err();
    }

    #[test]
    fn zip_read_streaming() {
        let reader = ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/mimetype.zip"
        )));

        #[derive(Default)]
        struct V {
            filenames: BTreeSet<Box<str>>,
        }
        impl ZipStreamVisitor for V {
            fn visit_file(&mut self, file: &mut ZipFile<'_>) -> ZipResult<()> {
                if file.is_file() {
                    self.filenames.insert(file.name().into());
                }
                Ok(())
            }

            fn visit_additional_metadata(
                &mut self,
                metadata: &ZipStreamFileMetadata,
            ) -> ZipResult<()> {
                if metadata.is_file() {
                    assert!(
                        self.filenames.contains(metadata.name()),
                        "{} is missing its file content",
                        metadata.name()
                    );
                }
                Ok(())
            }
        }

        reader.visit(&mut V::default()).unwrap();
    }

    #[test]
    fn file_and_dir_predicates() {
        let reader = ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/files_and_dirs.zip"
        )));

        #[derive(Default)]
        struct V {
            filenames: BTreeSet<Box<str>>,
        }
        impl ZipStreamVisitor for V {
            fn visit_file(&mut self, file: &mut ZipFile<'_>) -> ZipResult<()> {
                let full_name = file.enclosed_name().unwrap();
                let file_name = full_name.file_name().unwrap().to_str().unwrap();
                assert!(
                    (file_name.starts_with("dir") && file.is_dir())
                        || (file_name.starts_with("file") && file.is_file())
                );

                if file.is_file() {
                    self.filenames.insert(file.name().into());
                }
                Ok(())
            }

            fn visit_additional_metadata(
                &mut self,
                metadata: &ZipStreamFileMetadata,
            ) -> ZipResult<()> {
                if metadata.is_file() {
                    assert!(
                        self.filenames.contains(metadata.name()),
                        "{} is missing its file content",
                        metadata.name()
                    );
                }
                Ok(())
            }
        }

        reader.visit(&mut V::default()).unwrap();
    }

    /// test case to ensure we don't preemptively over allocate based on the
    /// declared number of files in the CDE of an invalid zip when the number of
    /// files declared is more than the alleged offset in the CDE
    #[test]
    fn invalid_cde_number_of_files_allocation_smaller_offset() {
        ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/invalid_cde_number_of_files_allocation_smaller_offset.zip"
        )))
        .visit(&mut DummyVisitor)
        .unwrap_err();
    }

    /// test case to ensure we don't preemptively over allocate based on the
    /// declared number of files in the CDE of an invalid zip when the number of
    /// files declared is less than the alleged offset in the CDE
    #[test]
    fn invalid_cde_number_of_files_allocation_greater_offset() {
        ZipStreamReader::new(io::Cursor::new(include_bytes!(
            "../../tests/data/invalid_cde_number_of_files_allocation_greater_offset.zip"
        )))
        .visit(&mut DummyVisitor)
        .unwrap_err();
    }

    /// Symlinks being extracted shouldn't be followed out of the destination directory.
    #[test]
    fn test_cannot_symlink_outside_destination() -> ZipResult<()> {
        use std::fs::create_dir;

        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.add_symlink("symlink/", "../dest-sibling/", SimpleFileOptions::default())?;
        writer.start_file("symlink/dest-file", SimpleFileOptions::default())?;
        let reader = ZipStreamReader::new(writer.finish()?);
        let dest_parent = TempDir::with_prefix("stream__cannot_symlink_outside_destination")?;
        let dest_sibling = dest_parent.path().join("dest-sibling");
        create_dir(&dest_sibling)?;
        let dest = dest_parent.path().join("dest");
        create_dir(&dest)?;
        assert!(reader.extract(dest).is_err());
        assert!(!dest_sibling.join("dest-file").exists());
        Ok(())
    }

    #[test]
    fn test_can_create_destination() -> ZipResult<()> {
        let mut v = Vec::new();
        v.extend_from_slice(include_bytes!("../../tests/data/mimetype.zip"));

        let reader = ZipStreamReader::new(v.as_slice());
        let dest = TempDir::with_prefix("stream_test_can_create_destination").unwrap();
        reader.extract(&dest)?;
        assert!(dest.path().join("mimetype").exists());
        Ok(())
    }
}
zip-2.5.0/src/read.rs
//! Types for reading ZIP archives

#[cfg(feature = "aes-crypto")]
use crate::aes::{AesReader, AesReaderValid};
use crate::compression::{CompressionMethod, Decompressor};
use crate::cp437::FromCp437;
use crate::crc32::Crc32Reader;
use crate::extra_fields::{ExtendedTimestamp, ExtraField, Ntfs};
use crate::read::zip_archive::{Shared, SharedBuilder};
use crate::result::invalid;
use crate::result::{ZipError, ZipResult};
use crate::spec::{self, CentralDirectoryEndInfo, DataAndPosition, FixedSizeBlock, Pod};
use crate::types::{
    AesMode, AesVendorVersion, DateTime, System, ZipCentralEntryBlock, ZipFileData,
    ZipLocalEntryBlock,
};
use crate::write::SimpleFileOptions;
use crate::zipcrypto::{ZipCryptoReader, ZipCryptoReaderValid, ZipCryptoValidator};
use crate::ZIP64_BYTES_THR;
use indexmap::IndexMap;
use std::borrow::Cow;
use std::ffi::OsStr;
use std::fs::create_dir_all;
use std::io::{self, copy, prelude::*, sink, SeekFrom};
use std::mem;
use std::mem::size_of;
use std::ops::Deref;
use std::path::{Component, Path, PathBuf};
use std::sync::{Arc, OnceLock};

mod config;

pub use config::*;

/// Provides high level API for reading from a stream.
pub(crate) mod stream;

#[cfg(feature = "lzma")]
pub(crate) mod lzma;

pub(crate) mod magic_finder;

// Put the struct declaration in a private module to convince rustdoc to display ZipArchive nicely
pub(crate) mod zip_archive {
    use indexmap::IndexMap;
    use std::sync::Arc;

    /// Extract immutable data from `ZipArchive` to make it cheap to clone
    #[derive(Debug)]
    pub(crate) struct Shared {
        pub(crate) files: IndexMap<Box<str>, super::ZipFileData>,
        pub(super) offset: u64,
        pub(super) dir_start: u64,
        // This isn't yet used anywhere, but it is here for use cases in the future.
#[allow(dead_code)] pub(super) config: super::Config, pub(crate) comment: Box<[u8]>, pub(crate) zip64_comment: Option>, } #[derive(Debug)] pub(crate) struct SharedBuilder { pub(crate) files: Vec, pub(super) offset: u64, pub(super) dir_start: u64, // This isn't yet used anywhere, but it is here for use cases in the future. #[allow(dead_code)] pub(super) config: super::Config, } impl SharedBuilder { pub fn build(self, comment: Box<[u8]>, zip64_comment: Option>) -> Shared { let mut index_map = IndexMap::with_capacity(self.files.len()); self.files.into_iter().for_each(|file| { index_map.insert(file.file_name.clone(), file); }); Shared { files: index_map, offset: self.offset, dir_start: self.dir_start, config: self.config, comment, zip64_comment, } } } /// ZIP archive reader /// /// At the moment, this type is cheap to clone if this is the case for the /// reader it uses. However, this is not guaranteed by this crate and it may /// change in the future. /// /// ```no_run /// use std::io::prelude::*; /// fn list_zip_contents(reader: impl Read + Seek) -> zip::result::ZipResult<()> { /// use zip::HasZipMetadata; /// let mut zip = zip::ZipArchive::new(reader)?; /// /// for i in 0..zip.len() { /// let mut file = zip.by_index(i)?; /// println!("Filename: {}", file.name()); /// std::io::copy(&mut file, &mut std::io::stdout())?; /// } /// /// Ok(()) /// } /// ``` #[derive(Clone, Debug)] pub struct ZipArchive { pub(super) reader: R, pub(super) shared: Arc, } } #[cfg(feature = "aes-crypto")] use crate::aes::PWD_VERIFY_LENGTH; use crate::extra_fields::UnicodeExtraField; use crate::result::ZipError::InvalidPassword; use crate::spec::is_dir; use crate::types::ffi::{S_IFLNK, S_IFREG}; use crate::unstable::{path_to_string, LittleEndianReadExt}; pub use zip_archive::ZipArchive; #[allow(clippy::large_enum_variant)] pub(crate) enum CryptoReader<'a> { Plaintext(io::Take<&'a mut dyn Read>), ZipCrypto(ZipCryptoReaderValid>), #[cfg(feature = "aes-crypto")] Aes { reader: AesReaderValid>, 
vendor_version: AesVendorVersion, }, } impl Read for CryptoReader<'_> { fn read(&mut self, buf: &mut [u8]) -> io::Result { match self { CryptoReader::Plaintext(r) => r.read(buf), CryptoReader::ZipCrypto(r) => r.read(buf), #[cfg(feature = "aes-crypto")] CryptoReader::Aes { reader: r, .. } => r.read(buf), } } fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { match self { CryptoReader::Plaintext(r) => r.read_to_end(buf), CryptoReader::ZipCrypto(r) => r.read_to_end(buf), #[cfg(feature = "aes-crypto")] CryptoReader::Aes { reader: r, .. } => r.read_to_end(buf), } } fn read_to_string(&mut self, buf: &mut String) -> io::Result { match self { CryptoReader::Plaintext(r) => r.read_to_string(buf), CryptoReader::ZipCrypto(r) => r.read_to_string(buf), #[cfg(feature = "aes-crypto")] CryptoReader::Aes { reader: r, .. } => r.read_to_string(buf), } } } impl<'a> CryptoReader<'a> { /// Consumes this decoder, returning the underlying reader. pub fn into_inner(self) -> io::Take<&'a mut dyn Read> { match self { CryptoReader::Plaintext(r) => r, CryptoReader::ZipCrypto(r) => r.into_inner(), #[cfg(feature = "aes-crypto")] CryptoReader::Aes { reader: r, .. } => r.into_inner(), } } /// Returns `true` if the data is encrypted using AE2. pub const fn is_ae2_encrypted(&self) -> bool { #[cfg(feature = "aes-crypto")] return matches!( self, CryptoReader::Aes { vendor_version: AesVendorVersion::Ae2, .. 
} ); #[cfg(not(feature = "aes-crypto"))] false } } #[cold] fn invalid_state() -> io::Result { Err(io::Error::new( io::ErrorKind::Other, "ZipFileReader was in an invalid state", )) } pub(crate) enum ZipFileReader<'a> { NoReader, Raw(io::Take<&'a mut dyn Read>), Compressed(Box>>>>), } impl Read for ZipFileReader<'_> { fn read(&mut self, buf: &mut [u8]) -> io::Result { match self { ZipFileReader::NoReader => invalid_state(), ZipFileReader::Raw(r) => r.read(buf), ZipFileReader::Compressed(r) => r.read(buf), } } fn read_exact(&mut self, buf: &mut [u8]) -> io::Result<()> { match self { ZipFileReader::NoReader => invalid_state(), ZipFileReader::Raw(r) => r.read_exact(buf), ZipFileReader::Compressed(r) => r.read_exact(buf), } } fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { match self { ZipFileReader::NoReader => invalid_state(), ZipFileReader::Raw(r) => r.read_to_end(buf), ZipFileReader::Compressed(r) => r.read_to_end(buf), } } fn read_to_string(&mut self, buf: &mut String) -> io::Result { match self { ZipFileReader::NoReader => invalid_state(), ZipFileReader::Raw(r) => r.read_to_string(buf), ZipFileReader::Compressed(r) => r.read_to_string(buf), } } } impl<'a> ZipFileReader<'a> { fn into_inner(self) -> io::Result> { match self { ZipFileReader::NoReader => invalid_state(), ZipFileReader::Raw(r) => Ok(r), ZipFileReader::Compressed(r) => { Ok(r.into_inner().into_inner().into_inner().into_inner()) } } } } /// A struct for reading a zip file pub struct ZipFile<'a> { pub(crate) data: Cow<'a, ZipFileData>, pub(crate) reader: ZipFileReader<'a>, } /// A struct for reading and seeking a zip file pub struct ZipFileSeek<'a, R> { data: Cow<'a, ZipFileData>, reader: ZipFileSeekReader<'a, R>, } enum ZipFileSeekReader<'a, R> { Raw(SeekableTake<'a, R>), } struct SeekableTake<'a, R> { inner: &'a mut R, inner_starting_offset: u64, length: u64, current_offset: u64, } impl<'a, R: Seek> SeekableTake<'a, R> { pub fn new(inner: &'a mut R, length: u64) -> io::Result { let 
inner_starting_offset = inner.stream_position()?; Ok(Self { inner, inner_starting_offset, length, current_offset: 0, }) } } impl Seek for SeekableTake<'_, R> { fn seek(&mut self, pos: SeekFrom) -> io::Result { let offset = match pos { SeekFrom::Start(offset) => Some(offset), SeekFrom::End(offset) => self.length.checked_add_signed(offset), SeekFrom::Current(offset) => self.current_offset.checked_add_signed(offset), }; match offset { None => Err(io::Error::new( io::ErrorKind::InvalidInput, "invalid seek to a negative or overflowing position", )), Some(offset) => { let clamped_offset = std::cmp::min(self.length, offset); let new_inner_offset = self .inner .seek(SeekFrom::Start(self.inner_starting_offset + clamped_offset))?; self.current_offset = new_inner_offset - self.inner_starting_offset; Ok(new_inner_offset) } } } } impl Read for SeekableTake<'_, R> { fn read(&mut self, buf: &mut [u8]) -> io::Result { let written = self .inner .take(self.length - self.current_offset) .read(buf)?; self.current_offset += written as u64; Ok(written) } } pub(crate) fn make_writable_dir_all>(outpath: T) -> Result<(), ZipError> { create_dir_all(outpath.as_ref())?; #[cfg(unix)] { // Dirs must be writable until all normal files are extracted use std::os::unix::fs::PermissionsExt; std::fs::set_permissions( outpath.as_ref(), std::fs::Permissions::from_mode( 0o700 | std::fs::metadata(outpath.as_ref())?.permissions().mode(), ), )?; } Ok(()) } pub(crate) fn find_content<'a>( data: &ZipFileData, reader: &'a mut (impl Read + Seek), ) -> ZipResult> { // TODO: use .get_or_try_init() once stabilized to provide a closure returning a Result! 
let data_start = match data.data_start.get() { Some(data_start) => *data_start, None => find_data_start(data, reader)?, }; reader.seek(SeekFrom::Start(data_start))?; Ok((reader as &mut dyn Read).take(data.compressed_size)) } fn find_content_seek<'a, R: Read + Seek>( data: &ZipFileData, reader: &'a mut R, ) -> ZipResult> { // Parse local header let data_start = find_data_start(data, reader)?; reader.seek(SeekFrom::Start(data_start))?; // Explicit Ok and ? are needed to convert io::Error to ZipError Ok(SeekableTake::new(reader, data.compressed_size)?) } fn find_data_start( data: &ZipFileData, reader: &mut (impl Read + Seek + Sized), ) -> Result { // Go to start of data. reader.seek(SeekFrom::Start(data.header_start))?; // Parse static-sized fields and check the magic value. let block = ZipLocalEntryBlock::parse(reader)?; // Calculate the end of the local header from the fields we just parsed. let variable_fields_len = // Each of these fields must be converted to u64 before adding, as the result may // easily overflow a u16. block.file_name_length as u64 + block.extra_field_length as u64; let data_start = data.header_start + size_of::() as u64 + variable_fields_len; // Set the value so we don't have to read it again. match data.data_start.set(data_start) { Ok(()) => (), // If the value was already set in the meantime, ensure it matches (this is probably // unnecessary). 
Err(_) => { debug_assert_eq!(*data.data_start.get().unwrap(), data_start); } } Ok(data_start) } #[allow(clippy::too_many_arguments)] pub(crate) fn make_crypto_reader<'a>( data: &ZipFileData, reader: io::Take<&'a mut dyn Read>, password: Option<&[u8]>, aes_info: Option<(AesMode, AesVendorVersion, CompressionMethod)>, ) -> ZipResult> { #[allow(deprecated)] { if let CompressionMethod::Unsupported(_) = data.compression_method { return unsupported_zip_error("Compression method not supported"); } } let reader = match (password, aes_info) { #[cfg(not(feature = "aes-crypto"))] (Some(_), Some(_)) => { return Err(ZipError::UnsupportedArchive( "AES encrypted files cannot be decrypted without the aes-crypto feature.", )) } #[cfg(feature = "aes-crypto")] (Some(password), Some((aes_mode, vendor_version, _))) => CryptoReader::Aes { reader: AesReader::new(reader, aes_mode, data.compressed_size).validate(password)?, vendor_version, }, (Some(password), None) => { let mut last_modified_time = data.last_modified_time; if !data.using_data_descriptor { last_modified_time = None; } let validator = if let Some(last_modified_time) = last_modified_time { ZipCryptoValidator::InfoZipMsdosTime(last_modified_time.timepart()) } else { ZipCryptoValidator::PkzipCrc32(data.crc32) }; CryptoReader::ZipCrypto(ZipCryptoReader::new(reader, password).validate(validator)?) 
} (None, Some(_)) => return Err(InvalidPassword), (None, None) => CryptoReader::Plaintext(reader), }; Ok(reader) } pub(crate) fn make_reader( compression_method: CompressionMethod, crc32: u32, reader: CryptoReader, ) -> ZipResult { let ae2_encrypted = reader.is_ae2_encrypted(); Ok(ZipFileReader::Compressed(Box::new(Crc32Reader::new( Decompressor::new(io::BufReader::new(reader), compression_method)?, crc32, ae2_encrypted, )))) } pub(crate) fn make_symlink( outpath: &Path, target: &[u8], #[allow(unused)] existing_files: &IndexMap, T>, ) -> ZipResult<()> { let Ok(target_str) = std::str::from_utf8(target) else { return Err(invalid!("Invalid UTF-8 as symlink target")); }; #[cfg(not(any(unix, windows)))] { use std::fs::File; let output = File::create(outpath); output?.write_all(target)?; } #[cfg(unix)] { std::os::unix::fs::symlink(Path::new(&target_str), outpath)?; } #[cfg(windows)] { let target = Path::new(OsStr::new(&target_str)); let target_is_dir_from_archive = existing_files.contains_key(target_str) && is_dir(target_str); let target_is_dir = if target_is_dir_from_archive { true } else if let Ok(meta) = std::fs::metadata(target) { meta.is_dir() } else { false }; if target_is_dir { std::os::windows::fs::symlink_dir(target, outpath)?; } else { std::os::windows::fs::symlink_file(target, outpath)?; } } Ok(()) } #[derive(Debug)] pub(crate) struct CentralDirectoryInfo { pub(crate) archive_offset: u64, pub(crate) directory_start: u64, pub(crate) number_of_files: usize, pub(crate) disk_number: u32, pub(crate) disk_with_central_directory: u32, } impl<'a> TryFrom<&'a CentralDirectoryEndInfo> for CentralDirectoryInfo { type Error = ZipError; fn try_from(value: &'a CentralDirectoryEndInfo) -> Result { let (relative_cd_offset, number_of_files, disk_number, disk_with_central_directory) = match &value.eocd64 { Some(DataAndPosition { data: eocd64, .. 
}) => { if eocd64.number_of_files_on_this_disk > eocd64.number_of_files { return Err(invalid!("ZIP64 footer indicates more files on this disk than in the whole archive")); } else if eocd64.version_needed_to_extract > eocd64.version_made_by { return Err(invalid!("ZIP64 footer indicates a new version is needed to extract this archive than the \ version that wrote it")); } ( eocd64.central_directory_offset, eocd64.number_of_files as usize, eocd64.disk_number, eocd64.disk_with_central_directory, ) } _ => ( value.eocd.data.central_directory_offset as u64, value.eocd.data.number_of_files_on_this_disk as usize, value.eocd.data.disk_number as u32, value.eocd.data.disk_with_central_directory as u32, ), }; let directory_start = relative_cd_offset .checked_add(value.archive_offset) .ok_or(invalid!("Invalid central directory size or offset"))?; Ok(Self { archive_offset: value.archive_offset, directory_start, number_of_files, disk_number, disk_with_central_directory, }) } } impl ZipArchive { pub(crate) fn from_finalized_writer( files: IndexMap, ZipFileData>, comment: Box<[u8]>, zip64_comment: Option>, reader: R, central_start: u64, ) -> ZipResult { let initial_offset = match files.first() { Some((_, file)) => file.header_start, None => central_start, }; let shared = Arc::new(Shared { files, offset: initial_offset, dir_start: central_start, config: Config { archive_offset: ArchiveOffset::Known(initial_offset), }, comment, zip64_comment, }); Ok(Self { reader, shared }) } /// Total size of the files in the archive, if it can be known. Doesn't include directories or /// metadata. 
pub fn decompressed_size(&self) -> Option { let mut total = 0u128; for file in self.shared.files.values() { if file.using_data_descriptor { return None; } total = total.checked_add(file.uncompressed_size as u128)?; } Some(total) } } impl ZipArchive { pub(crate) fn merge_contents( &mut self, mut w: W, ) -> ZipResult, ZipFileData>> { if self.shared.files.is_empty() { return Ok(IndexMap::new()); } let mut new_files = self.shared.files.clone(); /* The first file header will probably start at the beginning of the file, but zip doesn't * enforce that, and executable zips like PEX files will have a shebang line so will * definitely be greater than 0. * * assert_eq!(0, new_files[0].header_start); // Avoid this. */ let first_new_file_header_start = w.stream_position()?; /* Push back file header starts for all entries in the covered files. */ new_files.values_mut().try_for_each(|f| { /* This is probably the only really important thing to change. */ f.header_start = f .header_start .checked_add(first_new_file_header_start) .ok_or(invalid!( "new header start from merge would have been too large" ))?; /* This is only ever used internally to cache metadata lookups (it's not part of the * zip spec), and 0 is the sentinel value. */ f.central_header_start = 0; /* This is an atomic variable so it can be updated from another thread in the * implementation (which is good!). */ if let Some(old_data_start) = f.data_start.take() { let new_data_start = old_data_start .checked_add(first_new_file_header_start) .ok_or(invalid!( "new data start from merge would have been too large" ))?; f.data_start.get_or_init(|| new_data_start); } Ok::<_, ZipError>(()) })?; /* Rewind to the beginning of the file. * * NB: we *could* decide to start copying from new_files[0].header_start instead, which * would avoid copying over e.g. any pex shebangs or other file contents that start before * the first zip file entry. 
However, zip files actually shouldn't care about garbage data * in *between* real entries, since the central directory header records the correct start * location of each, and keeping track of that math is more complicated logic that will only * rarely be used, since most zips that get merged together are likely to be produced * specifically for that purpose (and therefore are unlikely to have a shebang or other * preface). Finally, this preserves any data that might actually be useful. */ self.reader.rewind()?; /* Find the end of the file data. */ let length_to_read = self.shared.dir_start; /* Produce a Read that reads bytes up until the start of the central directory header. * This "as &mut dyn Read" trick is used elsewhere to avoid having to clone the underlying * handle, which it really shouldn't need to anyway. */ let mut limited_raw = (&mut self.reader as &mut dyn Read).take(length_to_read); /* Copy over file data from source archive directly. */ io::copy(&mut limited_raw, &mut w)?; /* Return the files we've just written to the data stream. */ Ok(new_files) } /// Get the directory start offset and number of files. This is done in a /// separate function to ease the control flow design. pub(crate) fn get_metadata(config: Config, reader: &mut R) -> ZipResult { // End of the probed region, initially set to the end of the file let file_len = reader.seek(io::SeekFrom::End(0))?; let mut end_exclusive = file_len; loop { // Find the EOCD and possibly EOCD64 entries and determine the archive offset. let cde = spec::find_central_directory( reader, config.archive_offset, end_exclusive, file_len, )?; // Turn EOCD into internal representation. let Ok(shared) = CentralDirectoryInfo::try_from(&cde) .and_then(|info| Self::read_central_header(info, config, reader)) else { // The next EOCD candidate should start before the current one. 
                end_exclusive = cde.eocd.position;
                continue;
            };

            return Ok(shared.build(
                cde.eocd.data.zip_file_comment,
                cde.eocd64.map(|v| v.data.extensible_data_sector),
            ));
        }
    }

    fn read_central_header(
        dir_info: CentralDirectoryInfo,
        config: Config,
        reader: &mut R,
    ) -> Result<SharedBuilder, ZipError> {
        // If the parsed number of files is greater than the offset then
        // something fishy is going on and we shouldn't trust number_of_files.
        let file_capacity = if dir_info.number_of_files > dir_info.directory_start as usize {
            0
        } else {
            dir_info.number_of_files
        };

        if dir_info.disk_number != dir_info.disk_with_central_directory {
            return unsupported_zip_error("Support for multi-disk files is not implemented");
        }

        if file_capacity.saturating_mul(size_of::<ZipFileData>()) > isize::MAX as usize {
            return unsupported_zip_error("Oversized central directory");
        }

        let mut files = Vec::with_capacity(file_capacity);
        reader.seek(SeekFrom::Start(dir_info.directory_start))?;
        for _ in 0..dir_info.number_of_files {
            let file = central_header_to_zip_file(reader, &dir_info)?;
            files.push(file);
        }

        Ok(SharedBuilder {
            files,
            offset: dir_info.archive_offset,
            dir_start: dir_info.directory_start,
            config,
        })
    }

    /// Returns the verification value and salt for the AES encryption of the file
    ///
    /// It fails if the file number is invalid.
    ///
    /// # Returns
    ///
    /// - None if the file is not encrypted with AES
    #[cfg(feature = "aes-crypto")]
    pub fn get_aes_verification_key_and_salt(
        &mut self,
        file_number: usize,
    ) -> ZipResult<Option<AesInfo>> {
        let (_, data) = self
            .shared
            .files
            .get_index(file_number)
            .ok_or(ZipError::FileNotFound)?;

        let limit_reader = find_content(data, &mut self.reader)?;
        match data.aes_mode {
            None => Ok(None),
            Some((aes_mode, _, _)) => {
                let (verification_value, salt) =
                    AesReader::new(limit_reader, aes_mode, data.compressed_size)
                        .get_verification_value_and_salt()?;
                let aes_info = AesInfo {
                    aes_mode,
                    verification_value,
                    salt,
                };
                Ok(Some(aes_info))
            }
        }
    }

    /// Read a ZIP archive, collecting the files it contains.
    ///
    /// This uses the central directory record of the ZIP file, and ignores local file headers.
    ///
    /// A default [`Config`] is used.
    pub fn new(reader: R) -> ZipResult<ZipArchive<R>> {
        Self::with_config(Default::default(), reader)
    }

    /// Read a ZIP archive providing a read configuration, collecting the files it contains.
    ///
    /// This uses the central directory record of the ZIP file, and ignores local file headers.
    pub fn with_config(config: Config, mut reader: R) -> ZipResult<ZipArchive<R>> {
        let shared = Self::get_metadata(config, &mut reader)?;
        Ok(ZipArchive {
            reader,
            shared: shared.into(),
        })
    }

    /// Extract a Zip archive into a directory, overwriting files if they
    /// already exist. Paths are sanitized with [`ZipFile::enclosed_name`]. Symbolic links are only
    /// created and followed if the target is within the destination directory (this is checked
    /// conservatively using [`std::fs::canonicalize`]).
    ///
    /// Extraction is not atomic. If an error is encountered, some of the files
    /// may be left on disk. However, on Unix targets, no newly-created directories with part but
    /// not all of their contents extracted will be readable, writable or usable as process working
    /// directories by any non-root user except you.
    ///
    /// On Unix and Windows, symbolic links are extracted correctly. On other platforms such as
    /// WebAssembly, symbolic links aren't supported, so they're extracted as normal files
    /// containing the target path in UTF-8.
    pub fn extract<P: AsRef<Path>>(&mut self, directory: P) -> ZipResult<()> {
        self.extract_internal(directory, None::<fn(&Path) -> bool>)
    }

    /// Extracts a Zip archive into a directory in the same fashion as
    /// [`ZipArchive::extract`], but detects a "root" directory in the archive
    /// (a single top-level directory that contains the rest of the archive's
    /// entries) and extracts its contents directly.
    ///
    /// For a sensible default `filter`, you can use [`root_dir_common_filter`].
    /// For a custom `filter`, see [`RootDirFilter`].
    ///
    /// See [`ZipArchive::root_dir`] for more information on how the root
    /// directory is detected and the meaning of the `filter` parameter.
    ///
    /// ## Example
    ///
    /// Imagine a Zip archive with the following structure:
    ///
    /// ```text
    /// root/file1.txt
    /// root/file2.txt
    /// root/sub/file3.txt
    /// root/sub/subsub/file4.txt
    /// ```
    ///
    /// If the archive is extracted to `foo` using [`ZipArchive::extract`],
    /// the resulting directory structure will be:
    ///
    /// ```text
    /// foo/root/file1.txt
    /// foo/root/file2.txt
    /// foo/root/sub/file3.txt
    /// foo/root/sub/subsub/file4.txt
    /// ```
    ///
    /// If the archive is extracted to `foo` using
    /// [`ZipArchive::extract_unwrapped_root_dir`], the resulting directory
    /// structure will be:
    ///
    /// ```text
    /// foo/file1.txt
    /// foo/file2.txt
    /// foo/sub/file3.txt
    /// foo/sub/subsub/file4.txt
    /// ```
    ///
    /// ## Example - No Root Directory
    ///
    /// Imagine a Zip archive with the following structure:
    ///
    /// ```text
    /// root/file1.txt
    /// root/file2.txt
    /// root/sub/file3.txt
    /// root/sub/subsub/file4.txt
    /// other/file5.txt
    /// ```
    ///
    /// Due to the presence of the `other` directory,
    /// [`ZipArchive::extract_unwrapped_root_dir`] will extract this in the same
    /// fashion as [`ZipArchive::extract`] as there is now no "root directory."
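    ///
    /// ## Example - Usage
    ///
    /// A minimal usage sketch (not compiled here; `archive.zip` and the output
    /// path are placeholders, and the example assumes `root_dir_common_filter`
    /// is reachable at `zip::read::root_dir_common_filter`):
    ///
    /// ```no_run
    /// let file = std::fs::File::open("archive.zip")?;
    /// let mut archive = zip::ZipArchive::new(file)?;
    /// archive.extract_unwrapped_root_dir("out", zip::read::root_dir_common_filter)?;
    /// # Ok::<(), Box<dyn std::error::Error>>(())
    /// ```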
    pub fn extract_unwrapped_root_dir<P: AsRef<Path>>(
        &mut self,
        directory: P,
        root_dir_filter: impl RootDirFilter,
    ) -> ZipResult<()> {
        self.extract_internal(directory, Some(root_dir_filter))
    }

    fn extract_internal<P: AsRef<Path>>(
        &mut self,
        directory: P,
        root_dir_filter: Option<impl RootDirFilter>,
    ) -> ZipResult<()> {
        use std::fs;

        create_dir_all(&directory)?;
        let directory = directory.as_ref().canonicalize()?;

        let root_dir = root_dir_filter
            .and_then(|filter| {
                self.root_dir(&filter)
                    .transpose()
                    .map(|root_dir| root_dir.map(|root_dir| (root_dir, filter)))
            })
            .transpose()?;

        // If we have a root dir, simplify the path components to be more
        // appropriate for passing to `safe_prepare_path`
        let root_dir = root_dir
            .as_ref()
            .map(|(root_dir, filter)| {
                crate::path::simplified_components(root_dir)
                    .ok_or_else(|| {
                        // Should be unreachable
                        debug_assert!(false, "Invalid root dir path");
                        invalid!("Invalid root dir path")
                    })
                    .map(|root_dir| (root_dir, filter))
            })
            .transpose()?;

        #[cfg(unix)]
        let mut files_by_unix_mode = Vec::new();
        for i in 0..self.len() {
            let mut file = self.by_index(i)?;
            let mut outpath = directory.clone();
            file.safe_prepare_path(directory.as_ref(), &mut outpath, root_dir.as_ref())?;

            let symlink_target = if file.is_symlink() && (cfg!(unix) || cfg!(windows)) {
                let mut target = Vec::with_capacity(file.size() as usize);
                file.read_to_end(&mut target)?;
                Some(target)
            } else {
                if file.is_dir() {
                    crate::read::make_writable_dir_all(&outpath)?;
                    continue;
                }
                None
            };
            drop(file);

            if let Some(target) = symlink_target {
                make_symlink(&outpath, &target, &self.shared.files)?;
                continue;
            }

            let mut file = self.by_index(i)?;
            let mut outfile = fs::File::create(&outpath)?;
            io::copy(&mut file, &mut outfile)?;
            #[cfg(unix)]
            {
                // Check for real permissions, which we'll set in a second pass
                if let Some(mode) = file.unix_mode() {
                    files_by_unix_mode.push((outpath.clone(), mode));
                }
            }
        }
        #[cfg(unix)]
        {
            use std::cmp::Reverse;
            use std::os::unix::fs::PermissionsExt;

            if files_by_unix_mode.len() > 1 {
                // Ensure we update children's permissions before making a parent unwritable
                files_by_unix_mode.sort_by_key(|(path, _)| Reverse(path.clone()));
            }
            for (path, mode) in files_by_unix_mode.into_iter() {
                fs::set_permissions(&path, fs::Permissions::from_mode(mode))?;
            }
        }
        Ok(())
    }

    /// Number of files contained in this zip.
    pub fn len(&self) -> usize {
        self.shared.files.len()
    }

    /// Get the starting offset of the zip central directory.
    pub fn central_directory_start(&self) -> u64 {
        self.shared.dir_start
    }

    /// Whether this zip archive contains no files
    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    /// Get the offset from the beginning of the underlying reader that this zip begins at, in bytes.
    ///
    /// Normally this value is zero, but if the zip has arbitrary data prepended to it, then this
    /// value will be the size of that prepended data.
    pub fn offset(&self) -> u64 {
        self.shared.offset
    }

    /// Get the comment of the zip archive.
    pub fn comment(&self) -> &[u8] {
        &self.shared.comment
    }

    /// Get the ZIP64 comment of the zip archive, if it is ZIP64.
    pub fn zip64_comment(&self) -> Option<&[u8]> {
        self.shared.zip64_comment.as_deref()
    }

    /// Returns an iterator over all the file and directory names in this archive.
    pub fn file_names(&self) -> impl Iterator<Item = &str> {
        self.shared.files.keys().map(|s| s.as_ref())
    }

    /// Search for a file entry by name, decrypt with given password
    ///
    /// # Warning
    ///
    /// The implementation of the cryptographic algorithms has not
    /// gone through a correctness review, and you should assume it is insecure:
    /// passwords used with this API may be compromised.
    ///
    /// This function sometimes accepts a wrong password. This is because the ZIP spec only allows
    /// us to check for a 1/256 chance that the password is correct.
    /// There are many passwords out there that will also pass the validity checks
    /// we are able to perform. This is a weakness of the ZipCrypto algorithm,
    /// due to its fairly primitive approach to cryptography.
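    ///
    /// ## Example
    ///
    /// An illustrative sketch (not compiled here; the archive path, entry name,
    /// and password are placeholders):
    ///
    /// ```no_run
    /// use std::io::Read;
    /// let file = std::fs::File::open("archive.zip")?;
    /// let mut archive = zip::ZipArchive::new(file)?;
    /// let mut entry = archive.by_name_decrypt("secret.txt", b"password")?;
    /// let mut contents = String::new();
    /// entry.read_to_string(&mut contents)?;
    /// # Ok::<(), Box<dyn std::error::Error>>(())
    /// ```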
    pub fn by_name_decrypt(&mut self, name: &str, password: &[u8]) -> ZipResult<ZipFile<'_>> {
        self.by_name_with_optional_password(name, Some(password))
    }

    /// Search for a file entry by name
    pub fn by_name(&mut self, name: &str) -> ZipResult<ZipFile<'_>> {
        self.by_name_with_optional_password(name, None)
    }

    /// Get the index of a file entry by name, if it's present.
    #[inline(always)]
    pub fn index_for_name(&self, name: &str) -> Option<usize> {
        self.shared.files.get_index_of(name)
    }

    /// Get the index of a file entry by path, if it's present.
    #[inline(always)]
    pub fn index_for_path<T: AsRef<Path>>(&self, path: T) -> Option<usize> {
        self.index_for_name(&path_to_string(path))
    }

    /// Get the name of a file entry, if it's present.
    #[inline(always)]
    pub fn name_for_index(&self, index: usize) -> Option<&str> {
        self.shared
            .files
            .get_index(index)
            .map(|(name, _)| name.as_ref())
    }

    /// Search for a file entry by name and return a seekable object.
    pub fn by_name_seek(&mut self, name: &str) -> ZipResult<ZipFileSeek<R>> {
        self.by_index_seek(self.index_for_name(name).ok_or(ZipError::FileNotFound)?)
    }

    /// Search for a file entry by index and return a seekable object.
    pub fn by_index_seek(&mut self, index: usize) -> ZipResult<ZipFileSeek<R>> {
        let reader = &mut self.reader;
        self.shared
            .files
            .get_index(index)
            .ok_or(ZipError::FileNotFound)
            .and_then(move |(_, data)| {
                let seek_reader = match data.compression_method {
                    CompressionMethod::Stored => {
                        ZipFileSeekReader::Raw(find_content_seek(data, reader)?)
                    }
                    _ => {
                        return Err(ZipError::UnsupportedArchive(
                            "Seekable compressed files are not yet supported",
                        ))
                    }
                };
                Ok(ZipFileSeek {
                    reader: seek_reader,
                    data: Cow::Borrowed(data),
                })
            })
    }

    fn by_name_with_optional_password<'a>(
        &'a mut self,
        name: &str,
        password: Option<&[u8]>,
    ) -> ZipResult<ZipFile<'a>> {
        let Some(index) = self.shared.files.get_index_of(name) else {
            return Err(ZipError::FileNotFound);
        };
        self.by_index_with_optional_password(index, password)
    }

    /// Get a contained file by index, decrypt with given password
    ///
    /// # Warning
    ///
    /// The implementation of the cryptographic algorithms has not
    /// gone through a correctness review, and you should assume it is insecure:
    /// passwords used with this API may be compromised.
    ///
    /// This function sometimes accepts a wrong password. This is because the ZIP spec only allows
    /// us to check for a 1/256 chance that the password is correct.
    /// There are many passwords out there that will also pass the validity checks
    /// we are able to perform. This is a weakness of the ZipCrypto algorithm,
    /// due to its fairly primitive approach to cryptography.
    pub fn by_index_decrypt(
        &mut self,
        file_number: usize,
        password: &[u8],
    ) -> ZipResult<ZipFile<'_>> {
        self.by_index_with_optional_password(file_number, Some(password))
    }

    /// Get a contained file by index
    pub fn by_index(&mut self, file_number: usize) -> ZipResult<ZipFile<'_>> {
        self.by_index_with_optional_password(file_number, None)
    }

    /// Get a contained file by index without decompressing it
    pub fn by_index_raw(&mut self, file_number: usize) -> ZipResult<ZipFile<'_>> {
        let reader = &mut self.reader;
        let (_, data) = self
            .shared
            .files
            .get_index(file_number)
            .ok_or(ZipError::FileNotFound)?;
        Ok(ZipFile {
            reader: ZipFileReader::Raw(find_content(data, reader)?),
            data: Cow::Borrowed(data),
        })
    }

    fn by_index_with_optional_password(
        &mut self,
        file_number: usize,
        mut password: Option<&[u8]>,
    ) -> ZipResult<ZipFile<'_>> {
        let (_, data) = self
            .shared
            .files
            .get_index(file_number)
            .ok_or(ZipError::FileNotFound)?;

        match (password, data.encrypted) {
            (None, true) => return Err(ZipError::UnsupportedArchive(ZipError::PASSWORD_REQUIRED)),
            (Some(_), false) => password = None, // Password supplied, but none needed! Discard.
            _ => {}
        }
        let limit_reader = find_content(data, &mut self.reader)?;

        let crypto_reader = make_crypto_reader(data, limit_reader, password, data.aes_mode)?;
        Ok(ZipFile {
            data: Cow::Borrowed(data),
            reader: make_reader(data.compression_method, data.crc32, crypto_reader)?,
        })
    }

    /// Find the "root directory" of an archive if it exists, filtering out
    /// irrelevant entries when searching.
    ///
    /// Our definition of a "root directory" is a single top-level directory
    /// that contains the rest of the archive's entries. This is useful for
    /// extracting archives that contain a single top-level directory that
    /// you want to "unwrap" and extract directly.
    ///
    /// For a sensible default filter, you can use [`root_dir_common_filter`].
    /// For a custom filter, see [`RootDirFilter`].
    pub fn root_dir(&self, filter: impl RootDirFilter) -> ZipResult<Option<PathBuf>> {
        let mut root_dir: Option<PathBuf> = None;

        for i in 0..self.len() {
            let (_, file) = self
                .shared
                .files
                .get_index(i)
                .ok_or(ZipError::FileNotFound)?;
            let path = match file.enclosed_name() {
                Some(path) => path,
                None => return Ok(None),
            };
            if !filter(&path) {
                continue;
            }

            macro_rules! replace_root_dir {
                ($path:ident) => {
                    match &mut root_dir {
                        Some(root_dir) => {
                            if *root_dir != $path {
                                // We've found multiple root directories, abort.
                                return Ok(None);
                            } else {
                                continue;
                            }
                        }
                        None => {
                            root_dir = Some($path.into());
                            continue;
                        }
                    }
                };
            }

            // If this entry is located at the root of the archive...
            if path.components().count() == 1 {
                if file.is_dir() {
                    // If it's a directory, it could be the root directory.
                    replace_root_dir!(path);
                } else {
                    // If it's anything else, this archive does not have a
                    // root directory.
                    return Ok(None);
                }
            }

            // Find the root directory for this entry.
            let mut path = path.as_path();
            while let Some(parent) = path.parent().filter(|path| *path != Path::new("")) {
                path = parent;
            }
            replace_root_dir!(path);
        }

        Ok(root_dir)
    }

    /// Unwrap and return the inner reader object
    ///
    /// The position of the reader is undefined.
    pub fn into_inner(self) -> R {
        self.reader
    }
}

/// Holds the AES information of a file in the zip archive
#[derive(Debug)]
#[cfg(feature = "aes-crypto")]
pub struct AesInfo {
    /// The AES encryption mode
    pub aes_mode: AesMode,
    /// The verification key
    pub verification_value: [u8; PWD_VERIFY_LENGTH],
    /// The salt
    pub salt: Vec<u8>,
}

const fn unsupported_zip_error<T>(detail: &'static str) -> ZipResult<T> {
    Err(ZipError::UnsupportedArchive(detail))
}

/// Parse a central directory entry to collect the information for the file.
pub(crate) fn central_header_to_zip_file<R: Read + Seek>(
    reader: &mut R,
    central_directory: &CentralDirectoryInfo,
) -> ZipResult<ZipFileData> {
    let central_header_start = reader.stream_position()?;

    // Parse central header
    let block = ZipCentralEntryBlock::parse(reader)?;

    let file = central_header_to_zip_file_inner(
        reader,
        central_directory.archive_offset,
        central_header_start,
        block,
    )?;
    let central_header_end = reader.stream_position()?;

    if file.header_start >= central_directory.directory_start {
        return Err(invalid!(
            "A local file entry can't start after the central directory"
        ));
    }

    let data_start = find_data_start(&file, reader)?;
    if data_start > central_directory.directory_start {
        return Err(invalid!(
            "File data can't start after the central directory"
        ));
    }

    reader.seek(SeekFrom::Start(central_header_end))?;
    Ok(file)
}

#[inline]
fn read_variable_length_byte_field<R: Read>(reader: &mut R, len: usize) -> io::Result<Box<[u8]>> {
    let mut data = vec![0; len].into_boxed_slice();
    reader.read_exact(&mut data)?;
    Ok(data)
}

/// Parse a central directory entry to collect the information for the file.
fn central_header_to_zip_file_inner<R: Read>(
    reader: &mut R,
    archive_offset: u64,
    central_header_start: u64,
    block: ZipCentralEntryBlock,
) -> ZipResult<ZipFileData> {
    let ZipCentralEntryBlock {
        // magic,
        version_made_by,
        // version_to_extract,
        flags,
        compression_method,
        last_mod_time,
        last_mod_date,
        crc32,
        compressed_size,
        uncompressed_size,
        file_name_length,
        extra_field_length,
        file_comment_length,
        // disk_number,
        // internal_file_attributes,
        external_file_attributes,
        offset,
        ..
    } = block;

    let encrypted = flags & 1 == 1;
    let is_utf8 = flags & (1 << 11) != 0;
    let using_data_descriptor = flags & (1 << 3) != 0;

    let file_name_raw = read_variable_length_byte_field(reader, file_name_length as usize)?;
    let extra_field = read_variable_length_byte_field(reader, extra_field_length as usize)?;
    let file_comment_raw = read_variable_length_byte_field(reader, file_comment_length as usize)?;
    let file_name: Box<str> = match is_utf8 {
        true => String::from_utf8_lossy(&file_name_raw).into(),
        false => file_name_raw.clone().from_cp437(),
    };
    let file_comment: Box<str> = match is_utf8 {
        true => String::from_utf8_lossy(&file_comment_raw).into(),
        false => file_comment_raw.from_cp437(),
    };

    // Construct the result
    let mut result = ZipFileData {
        system: System::from((version_made_by >> 8) as u8),
        /* NB: this strips the top 8 bits! */
        version_made_by: version_made_by as u8,
        encrypted,
        using_data_descriptor,
        is_utf8,
        compression_method: CompressionMethod::parse_from_u16(compression_method),
        compression_level: None,
        last_modified_time: DateTime::try_from_msdos(last_mod_date, last_mod_time).ok(),
        crc32,
        compressed_size: compressed_size.into(),
        uncompressed_size: uncompressed_size.into(),
        file_name,
        file_name_raw,
        extra_field: Some(Arc::new(extra_field.to_vec())),
        central_extra_field: None,
        file_comment,
        header_start: offset.into(),
        extra_data_start: None,
        central_header_start,
        data_start: OnceLock::new(),
        external_attributes: external_file_attributes,
        large_file: false,
        aes_mode: None,
        aes_extra_data_start: 0,
        extra_fields: Vec::new(),
    };

    match parse_extra_field(&mut result) {
        Ok(stripped_extra_field) => {
            result.extra_field = stripped_extra_field;
        }
        Err(ZipError::Io(..)) => {}
        Err(e) => return Err(e),
    }

    let aes_enabled = result.compression_method == CompressionMethod::AES;
    if aes_enabled && result.aes_mode.is_none() {
        return Err(invalid!("AES encryption without AES extra data field"));
    }

    // Account for shifted zip offsets.
    result.header_start = result
        .header_start
        .checked_add(archive_offset)
        .ok_or(invalid!("Archive header is too large"))?;

    Ok(result)
}

pub(crate) fn parse_extra_field(file: &mut ZipFileData) -> ZipResult<Option<Arc<Vec<u8>>>> {
    let Some(ref extra_field) = file.extra_field else {
        return Ok(None);
    };
    let extra_field = extra_field.clone();
    let mut processed_extra_field = extra_field.clone();
    let len = extra_field.len();
    let mut reader = io::Cursor::new(&**extra_field);

    /* TODO: codify this structure into Zip64ExtraFieldBlock fields! */
    let mut position = reader.position() as usize;
    while (position) < len {
        let old_position = position;
        let remove = parse_single_extra_field(file, &mut reader, position as u64, false)?;
        position = reader.position() as usize;
        if remove {
            let remaining = len - (position - old_position);
            if remaining == 0 {
                return Ok(None);
            }
            let mut new_extra_field = Vec::with_capacity(remaining);
            new_extra_field.extend_from_slice(&extra_field[0..old_position]);
            new_extra_field.extend_from_slice(&extra_field[position..]);
            processed_extra_field = Arc::new(new_extra_field);
        }
    }
    Ok(Some(processed_extra_field))
}

pub(crate) fn parse_single_extra_field<R: Read>(
    file: &mut ZipFileData,
    reader: &mut R,
    bytes_already_read: u64,
    disallow_zip64: bool,
) -> ZipResult<bool> {
    let kind = reader.read_u16_le()?;
    let len = reader.read_u16_le()?;
    match kind {
        // Zip64 extended information extra field
        0x0001 => {
            if disallow_zip64 {
                return Err(invalid!("Can't write a custom field using the ZIP64 ID"));
            }
            file.large_file = true;
            let mut consumed_len = 0;
            if len >= 24 || file.uncompressed_size == spec::ZIP64_BYTES_THR {
                file.uncompressed_size = reader.read_u64_le()?;
                consumed_len += size_of::<u64>();
            }
            if len >= 24 || file.compressed_size == spec::ZIP64_BYTES_THR {
                file.compressed_size = reader.read_u64_le()?;
                consumed_len += size_of::<u64>();
            }
            if len >= 24 || file.header_start == spec::ZIP64_BYTES_THR {
                file.header_start = reader.read_u64_le()?;
                consumed_len += size_of::<u64>();
            }
            let Some(leftover_len) = (len as usize).checked_sub(consumed_len) else {
                return Err(invalid!("ZIP64 extra-data field is the wrong length"));
            };
            reader.read_exact(&mut vec![0u8; leftover_len])?;
            return Ok(true);
        }
        0x000a => {
            // NTFS extra field
            file.extra_fields
                .push(ExtraField::Ntfs(Ntfs::try_from_reader(reader, len)?));
        }
        0x9901 => {
            // AES
            if len != 7 {
                return Err(ZipError::UnsupportedArchive(
                    "AES extra data field has an unsupported length",
                ));
            }
            let vendor_version = reader.read_u16_le()?;
            let vendor_id = reader.read_u16_le()?;
            let mut out = [0u8];
            reader.read_exact(&mut out)?;
            let aes_mode = out[0];
            let compression_method = CompressionMethod::parse_from_u16(reader.read_u16_le()?);

            if vendor_id != 0x4541 {
                return Err(invalid!("Invalid AES vendor"));
            }
            let vendor_version = match vendor_version {
                0x0001 => AesVendorVersion::Ae1,
                0x0002 => AesVendorVersion::Ae2,
                _ => return Err(invalid!("Invalid AES vendor version")),
            };
            match aes_mode {
                0x01 => file.aes_mode = Some((AesMode::Aes128, vendor_version, compression_method)),
                0x02 => file.aes_mode = Some((AesMode::Aes192, vendor_version, compression_method)),
                0x03 => file.aes_mode = Some((AesMode::Aes256, vendor_version, compression_method)),
                _ => return Err(invalid!("Invalid AES encryption strength")),
            };
            file.compression_method = compression_method;
            file.aes_extra_data_start = bytes_already_read;
        }
        0x5455 => {
            // extended timestamp
            // https://libzip.org/specifications/extrafld.txt
            file.extra_fields.push(ExtraField::ExtendedTimestamp(
                ExtendedTimestamp::try_from_reader(reader, len)?,
            ));
        }
        0x6375 => {
            // Info-ZIP Unicode Comment Extra Field
            // APPNOTE 4.6.8 and https://libzip.org/specifications/extrafld.txt
            file.file_comment = String::from_utf8(
                UnicodeExtraField::try_from_reader(reader, len)?
                    .unwrap_valid(file.file_comment.as_bytes())?
                    .into_vec(),
            )?
            .into();
        }
        0x7075 => {
            // Info-ZIP Unicode Path Extra Field
            // APPNOTE 4.6.9 and https://libzip.org/specifications/extrafld.txt
            file.file_name_raw = UnicodeExtraField::try_from_reader(reader, len)?
                .unwrap_valid(&file.file_name_raw)?;
            file.file_name =
                String::from_utf8(file.file_name_raw.clone().into_vec())?.into_boxed_str();
            file.is_utf8 = true;
        }
        _ => {
            reader.read_exact(&mut vec![0u8; len as usize])?; // Other fields are ignored
        }
    }
    Ok(false)
}

/// A trait for exposing file metadata inside the zip.
pub trait HasZipMetadata {
    /// Get the file metadata
    fn get_metadata(&self) -> &ZipFileData;
}

/// Methods for retrieving information on zip files
impl<'a> ZipFile<'a> {
    pub(crate) fn take_raw_reader(&mut self) -> io::Result<io::Take<&'a mut dyn Read>> {
        mem::replace(&mut self.reader, ZipFileReader::NoReader).into_inner()
    }

    /// Get the version of the file
    pub fn version_made_by(&self) -> (u8, u8) {
        (
            self.get_metadata().version_made_by / 10,
            self.get_metadata().version_made_by % 10,
        )
    }

    /// Get the name of the file
    ///
    /// # Warnings
    ///
    /// It is dangerous to use this name directly when extracting an archive.
    /// It may contain an absolute path (`/etc/shadow`), or break out of the
    /// current directory (`../runtime`). Carelessly writing to these paths
    /// allows an attacker to craft a ZIP archive that will overwrite critical
    /// files.
    ///
    /// You can use the [`ZipFile::enclosed_name`] method to validate the name
    /// as a safe path.
    pub fn name(&self) -> &str {
        &self.get_metadata().file_name
    }

    /// Get the name of the file, in the raw (internal) byte representation.
    ///
    /// The encoding of this data is currently undefined.
    pub fn name_raw(&self) -> &[u8] {
        &self.get_metadata().file_name_raw
    }

    /// Get the name of the file in a sanitized form. It truncates the name to the first NULL byte,
    /// removes a leading '/' and removes '..' parts.
    #[deprecated(
        since = "0.5.7",
        note = "by stripping `..`s from the path, the meaning of paths can change. `mangled_name` can be used if this behaviour is desirable"
    )]
    pub fn sanitized_name(&self) -> PathBuf {
        self.mangled_name()
    }

    /// Rewrite the path, ignoring any path components with special meaning.
    ///
    /// - Absolute paths are made relative
    /// - [`ParentDir`]s are ignored
    /// - Truncates the filename at a NULL byte
    ///
    /// This is appropriate if you need to be able to extract *something* from
    /// any archive, but will easily misrepresent trivial paths like
    /// `foo/../bar` as `foo/bar` (instead of `bar`). Because of this,
    /// [`ZipFile::enclosed_name`] is the better option in most scenarios.
    ///
    /// [`ParentDir`]: `PathBuf::Component::ParentDir`
    pub fn mangled_name(&self) -> PathBuf {
        self.get_metadata().file_name_sanitized()
    }

    /// Ensure the file path is safe to use as a [`Path`].
    ///
    /// - It can't contain NULL bytes
    /// - It can't resolve to a path outside the current directory
    ///   > `foo/../bar` is fine, `foo/../../bar` is not.
    /// - It can't be an absolute path
    ///
    /// This will read well-formed ZIP files correctly, and is resistant
    /// to path-based exploits. It is recommended over
    /// [`ZipFile::mangled_name`].
    pub fn enclosed_name(&self) -> Option<PathBuf> {
        self.get_metadata().enclosed_name()
    }

    pub(crate) fn simplified_components(&self) -> Option<Vec<&OsStr>> {
        self.get_metadata().simplified_components()
    }

    /// Prepare the path for extraction by creating necessary missing directories and checking
    /// that symlinks are contained within the base path.
    ///
    /// The `base_path` parameter is assumed to be canonicalized.
    pub(crate) fn safe_prepare_path(
        &self,
        base_path: &Path,
        outpath: &mut PathBuf,
        root_dir: Option<&(Vec<&OsStr>, impl RootDirFilter)>,
    ) -> ZipResult<()> {
        let components = self
            .simplified_components()
            .ok_or(invalid!("Invalid file path"))?;

        let components = match root_dir {
            Some((root_dir, filter)) => match components.strip_prefix(&**root_dir) {
                Some(components) => components,
                // In this case, we expect that the file was not in the root
                // directory, but was filtered out when searching for the
                // root directory.
                None => {
                    // We could technically find ourselves at this code
                    // path if the user provides an unstable or
                    // non-deterministic `filter` function.
                    //
                    // If debug assertions are on, we should panic here.
                    // Otherwise, the safest thing to do here is to just
                    // extract as-is.
                    debug_assert!(
                        !filter(&PathBuf::from_iter(components.iter())),
                        "Root directory filter should not match at this point"
                    );

                    // Extract as-is.
                    &components[..]
                }
            },
            None => &components[..],
        };

        let components_len = components.len();
        for (is_last, component) in components
            .iter()
            .copied()
            .enumerate()
            .map(|(i, c)| (i == components_len - 1, c))
        {
            // We can skip the target directory itself because the base path is assumed to be
            // "trusted" (if the user says extract to a symlink, we can follow it).
            outpath.push(component);

            // Check if the path is a symlink; the target must be _inherently_ within the directory.
            for limit in (0..5u8).rev() {
                let meta = match std::fs::symlink_metadata(&outpath) {
                    Ok(meta) => meta,
                    Err(e) if e.kind() == io::ErrorKind::NotFound => {
                        if !is_last {
                            crate::read::make_writable_dir_all(&outpath)?;
                        }
                        break;
                    }
                    Err(e) => return Err(e.into()),
                };

                if !meta.is_symlink() {
                    break;
                }

                if limit == 0 {
                    return Err(invalid!("Extraction followed a symlink too deep"));
                }

                // Note that we cannot accept links that do not inherently resolve to a path
                // inside the directory, to prevent:
                // - disclosure that an unrelated path exists (no checking whether a path exists
                //   and then `../`-ing out)
                // - issues with file-system specific path resolution (case sensitivity, etc.)
                let target = std::fs::read_link(&outpath)?;

                if !crate::path::simplified_components(&target)
                    .ok_or(invalid!("Invalid symlink target path"))?
                    .starts_with(
                        &crate::path::simplified_components(base_path)
                            .ok_or(invalid!("Invalid base path"))?,
                    )
                {
                    let is_absolute_enclosed = base_path
                        .components()
                        .map(Some)
                        .chain(std::iter::once(None))
                        .zip(target.components().map(Some).chain(std::iter::repeat(None)))
                        .all(|(a, b)| match (a, b) {
                            // both components are normal
                            (Some(Component::Normal(a)), Some(Component::Normal(b))) => a == b,
                            // both components consumed fully
                            (None, None) => true,
                            // target consumed fully but base path is not
                            (Some(_), None) => false,
                            // base path consumed fully but target is not (and normal)
                            (None, Some(Component::CurDir | Component::Normal(_))) => true,
                            _ => false,
                        });

                    if !is_absolute_enclosed {
                        return Err(invalid!("Symlink is not inherently safe"));
                    }
                }
                outpath.push(target);
            }
        }
        Ok(())
    }

    /// Get the comment of the file
    pub fn comment(&self) -> &str {
        &self.get_metadata().file_comment
    }

    /// Get the compression method used to store the file
    pub fn compression(&self) -> CompressionMethod {
        self.get_metadata().compression_method
    }

    /// Get whether the file is encrypted or not
    pub fn encrypted(&self) -> bool {
        self.data.encrypted
    }

    /// Get the size of the file, in bytes, in the archive
    pub fn compressed_size(&self) -> u64 {
        self.get_metadata().compressed_size
    }

    /// Get the size of the file, in bytes, when uncompressed
    pub fn size(&self) -> u64 {
        self.get_metadata().uncompressed_size
    }

    /// Get the time the file was last modified
    pub fn last_modified(&self) -> Option<DateTime> {
        self.data.last_modified_time
    }

    /// Returns whether the file is actually a directory
    pub fn is_dir(&self) -> bool {
        is_dir(self.name())
    }

    /// Returns whether the file is actually a symbolic link
    pub fn is_symlink(&self) -> bool {
        self.unix_mode()
            .is_some_and(|mode| mode & S_IFLNK == S_IFLNK)
    }

    /// Returns whether the file is a normal file (i.e. not a directory or symlink)
    pub fn is_file(&self) -> bool {
        !self.is_dir() && !self.is_symlink()
    }

    /// Get unix mode for the file
    pub fn unix_mode(&self) -> Option<u32> {
        self.get_metadata().unix_mode()
    }

    /// Get the CRC32 hash of the original file
    pub fn crc32(&self) -> u32 {
        self.get_metadata().crc32
    }

    /// Get the extra data of the zip header for this file
    pub fn extra_data(&self) -> Option<&[u8]> {
        self.get_metadata()
            .extra_field
            .as_ref()
            .map(|v| v.deref().deref())
    }

    /// Get the starting offset of the data of the compressed file
    pub fn data_start(&self) -> u64 {
        *self.data.data_start.get().unwrap()
    }

    /// Get the starting offset of the zip header for this file
    pub fn header_start(&self) -> u64 {
        self.get_metadata().header_start
    }

    /// Get the starting offset of the zip header in the central directory for this file
    pub fn central_header_start(&self) -> u64 {
        self.get_metadata().central_header_start
    }

    /// Get the [`SimpleFileOptions`] that would be used to write this file to
    /// a new zip archive.
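    ///
    /// ## Example
    ///
    /// An illustrative sketch of re-writing an entry into a new archive with
    /// the same options (not compiled here; paths and the entry name are
    /// placeholders):
    ///
    /// ```no_run
    /// let mut archive = zip::ZipArchive::new(std::fs::File::open("in.zip")?)?;
    /// let mut writer = zip::ZipWriter::new(std::fs::File::create("out.zip")?);
    /// let mut entry = archive.by_name("a.txt")?;
    /// let options = entry.options();
    /// writer.start_file(entry.name(), options)?;
    /// std::io::copy(&mut entry, &mut writer)?;
    /// writer.finish()?;
    /// # Ok::<(), Box<dyn std::error::Error>>(())
    /// ```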
    pub fn options(&self) -> SimpleFileOptions {
        let mut options = SimpleFileOptions::default()
            .large_file(self.compressed_size().max(self.size()) > ZIP64_BYTES_THR)
            .compression_method(self.compression())
            .unix_permissions(self.unix_mode().unwrap_or(0o644) | S_IFREG)
            .last_modified_time(
                self.last_modified()
                    .filter(|m| m.is_valid())
                    .unwrap_or_else(DateTime::default_for_write),
            );
        options.normalize();
        options
    }
}

/// Methods for retrieving information on zip files
impl ZipFile<'_> {
    /// Iterate through all extra fields
    pub fn extra_data_fields(&self) -> impl Iterator<Item = &ExtraField> {
        self.data.extra_fields.iter()
    }
}

impl HasZipMetadata for ZipFile<'_> {
    fn get_metadata(&self) -> &ZipFileData {
        self.data.as_ref()
    }
}

impl Read for ZipFile<'_> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.reader.read(buf)
    }

    fn read_exact(&mut self, buf: &mut [u8]) -> io::Result<()> {
        self.reader.read_exact(buf)
    }

    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
        self.reader.read_to_end(buf)
    }

    fn read_to_string(&mut self, buf: &mut String) -> io::Result<usize> {
        self.reader.read_to_string(buf)
    }
}

impl<R: Read> Read for ZipFileSeek<'_, R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match &mut self.reader {
            ZipFileSeekReader::Raw(r) => r.read(buf),
        }
    }
}

impl<R: Seek> Seek for ZipFileSeek<'_, R> {
    fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
        match &mut self.reader {
            ZipFileSeekReader::Raw(r) => r.seek(pos),
        }
    }
}

impl<R> HasZipMetadata for ZipFileSeek<'_, R> {
    fn get_metadata(&self) -> &ZipFileData {
        self.data.as_ref()
    }
}

impl Drop for ZipFile<'_> {
    fn drop(&mut self) {
        // If self.data is Owned, this reader was constructed by a streaming reader.
        // In this case, we want to exhaust the reader so that the next file is accessible.
        if let Cow::Owned(_) = self.data {
            // Get the inner `Take` reader so all decryption, decompression and CRC calculation
            // is skipped.
if let Ok(mut inner) = self.take_raw_reader() {
                let _ = copy(&mut inner, &mut sink());
            }
        }
    }
}

/// Read ZipFile structures from a non-seekable reader.
///
/// This is an alternative method to read a zip file. If possible, use the ZipArchive functions
/// as some information will be missing when reading in this manner.
///
/// Reads a file header from the start of the stream. Will return `Ok(Some(..))` if a file is
/// present at the start of the stream. Returns `Ok(None)` if the start of the central directory
/// is encountered. No more files should be read after this.
///
/// The Drop implementation of ZipFile ensures that the reader will be correctly positioned after
/// the structure is done.
///
/// Missing fields are:
/// * `comment`: set to an empty string
/// * `data_start`: set to 0
/// * `external_attributes`: `unix_mode()`: will return None
pub fn read_zipfile_from_stream<R: Read>(reader: &mut R) -> ZipResult<Option<ZipFile<'_>>> {
    // We can't use the typical ::parse() method, as we follow separate code paths depending on the
    // "magic" value (since the magic value will be from the central directory header if we've
    // finished iterating over all the actual files).
    /* TODO: smallvec? */
    let mut block = ZipLocalEntryBlock::zeroed();
    reader.read_exact(block.as_bytes_mut())?;

    match block.magic().from_le() {
        spec::Magic::LOCAL_FILE_HEADER_SIGNATURE => (),
        spec::Magic::CENTRAL_DIRECTORY_HEADER_SIGNATURE => return Ok(None),
        _ => return Err(ZipLocalEntryBlock::WRONG_MAGIC_ERROR),
    }

    let block = block.from_le();

    let mut result = ZipFileData::from_local_block(block, reader)?;

    match parse_extra_field(&mut result) {
        Ok(..) | Err(ZipError::Io(..)) => {}
        Err(e) => return Err(e),
    }

    let limit_reader = (reader as &mut dyn Read).take(result.compressed_size);

    let result_crc32 = result.crc32;
    let result_compression_method = result.compression_method;
    let crypto_reader = make_crypto_reader(&result, limit_reader, None, None)?;

    Ok(Some(ZipFile {
        data: Cow::Owned(result),
        reader: make_reader(result_compression_method, result_crc32, crypto_reader)?,
    }))
}

/// A filter that determines whether an entry should be ignored when searching
/// for the root directory of a Zip archive.
///
/// Returns `true` if the entry should be considered, and `false` if it should
/// be ignored.
///
/// See [`root_dir_common_filter`] for a sensible default filter.
pub trait RootDirFilter: Fn(&Path) -> bool {}
impl<F: Fn(&Path) -> bool> RootDirFilter for F {}

/// Common filters when finding the root directory of a Zip archive.
///
/// This filter is a sensible default for most use cases and filters out common
/// system files that are usually irrelevant to the contents of the archive.
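The magic-based dispatch in the streaming reader above can be sketched with plain `std` I/O. The `next_entry` helper and its constants here are illustrative stand-ins, not this crate's API; the signature values come from the ZIP specification (APPNOTE.TXT section 4.3).

```rust
use std::io::{Cursor, Read};

// Signature values from APPNOTE.TXT section 4.3; the streaming reader
// distinguishes these two cases when reading the next header.
const LOCAL_FILE_HEADER: u32 = 0x04034b50;
const CENTRAL_DIRECTORY_HEADER: u32 = 0x02014b50;

/// Reads a 4-byte little-endian signature and decides whether another
/// local file entry follows (`Some(())`), the central directory has
/// started (`None`), or the stream is corrupt (`Err`).
fn next_entry(reader: &mut impl Read) -> std::io::Result<Option<()>> {
    let mut sig = [0u8; 4];
    reader.read_exact(&mut sig)?;
    match u32::from_le_bytes(sig) {
        LOCAL_FILE_HEADER => Ok(Some(())),
        CENTRAL_DIRECTORY_HEADER => Ok(None),
        _ => Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "wrong magic",
        )),
    }
}

fn main() -> std::io::Result<()> {
    // A local file header signature, serialized little-endian.
    let mut local = Cursor::new(LOCAL_FILE_HEADER.to_le_bytes());
    assert!(next_entry(&mut local)?.is_some());

    // A central directory header signature ends iteration.
    let mut cd = Cursor::new(CENTRAL_DIRECTORY_HEADER.to_le_bytes());
    assert!(next_entry(&mut cd)?.is_none());
    Ok(())
}
```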
/// /// Currently, the filter ignores: /// - `/__MACOSX/` /// - `/.DS_Store` /// - `/Thumbs.db` /// /// **This function is not guaranteed to be stable and may change in future versions.** /// /// # Example /// /// ```rust /// # use std::path::Path; /// assert!(zip::read::root_dir_common_filter(Path::new("foo.txt"))); /// assert!(!zip::read::root_dir_common_filter(Path::new(".DS_Store"))); /// assert!(!zip::read::root_dir_common_filter(Path::new("Thumbs.db"))); /// assert!(!zip::read::root_dir_common_filter(Path::new("__MACOSX"))); /// assert!(!zip::read::root_dir_common_filter(Path::new("__MACOSX/foo.txt"))); /// ``` pub fn root_dir_common_filter(path: &Path) -> bool { const COMMON_FILTER_ROOT_FILES: &[&str] = &[".DS_Store", "Thumbs.db"]; if path.starts_with("__MACOSX") { return false; } if path.components().count() == 1 && path.file_name().is_some_and(|file_name| { COMMON_FILTER_ROOT_FILES .iter() .map(OsStr::new) .any(|cmp| cmp == file_name) }) { return false; } true } #[cfg(test)] mod test { use crate::result::ZipResult; use crate::write::SimpleFileOptions; use crate::CompressionMethod::Stored; use crate::{ZipArchive, ZipWriter}; use std::io::{Cursor, Read, Write}; use tempfile::TempDir; #[test] fn invalid_offset() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/invalid_offset.zip")); let reader = ZipArchive::new(Cursor::new(v)); assert!(reader.is_err()); } #[test] fn invalid_offset2() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/invalid_offset2.zip")); let reader = ZipArchive::new(Cursor::new(v)); assert!(reader.is_err()); } #[test] fn zip64_with_leading_junk() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/zip64_demo.zip")); let reader = ZipArchive::new(Cursor::new(v)).unwrap(); assert_eq!(reader.len(), 1); } #[test] fn zip_contents() { use super::ZipArchive; let mut v = Vec::new(); 
v.extend_from_slice(include_bytes!("../tests/data/mimetype.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); assert_eq!(reader.comment(), b""); assert_eq!(reader.by_index(0).unwrap().central_header_start(), 77); } #[test] fn zip_read_streaming() { use super::read_zipfile_from_stream; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/mimetype.zip")); let mut reader = Cursor::new(v); loop { if read_zipfile_from_stream(&mut reader).unwrap().is_none() { break; } } } #[test] fn zip_clone() { use super::ZipArchive; use std::io::Read; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/mimetype.zip")); let mut reader1 = ZipArchive::new(Cursor::new(v)).unwrap(); let mut reader2 = reader1.clone(); let mut file1 = reader1.by_index(0).unwrap(); let mut file2 = reader2.by_index(0).unwrap(); let t = file1.last_modified().unwrap(); assert_eq!( ( t.year(), t.month(), t.day(), t.hour(), t.minute(), t.second() ), (1980, 1, 1, 0, 0, 0) ); let mut buf1 = [0; 5]; let mut buf2 = [0; 5]; let mut buf3 = [0; 5]; let mut buf4 = [0; 5]; file1.read_exact(&mut buf1).unwrap(); file2.read_exact(&mut buf2).unwrap(); file1.read_exact(&mut buf3).unwrap(); file2.read_exact(&mut buf4).unwrap(); assert_eq!(buf1, buf2); assert_eq!(buf3, buf4); assert_ne!(buf1, buf3); } #[test] fn file_and_dir_predicates() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/files_and_dirs.zip")); let mut zip = ZipArchive::new(Cursor::new(v)).unwrap(); for i in 0..zip.len() { let zip_file = zip.by_index(i).unwrap(); let full_name = zip_file.enclosed_name().unwrap(); let file_name = full_name.file_name().unwrap().to_str().unwrap(); assert!( (file_name.starts_with("dir") && zip_file.is_dir()) || (file_name.starts_with("file") && zip_file.is_file()) ); } } #[test] fn zip64_magic_in_filenames() { let files = vec![ include_bytes!("../tests/data/zip64_magic_in_filename_1.zip").to_vec(), 
include_bytes!("../tests/data/zip64_magic_in_filename_2.zip").to_vec(), include_bytes!("../tests/data/zip64_magic_in_filename_3.zip").to_vec(), include_bytes!("../tests/data/zip64_magic_in_filename_4.zip").to_vec(), include_bytes!("../tests/data/zip64_magic_in_filename_5.zip").to_vec(), ]; // Although we don't allow adding files whose names contain the ZIP64 CDB-end or // CDB-end-locator signatures, we still read them when they aren't genuinely ambiguous. for file in files { ZipArchive::new(Cursor::new(file)).unwrap(); } } /// test case to ensure we don't preemptively over allocate based on the /// declared number of files in the CDE of an invalid zip when the number of /// files declared is more than the alleged offset in the CDE #[test] fn invalid_cde_number_of_files_allocation_smaller_offset() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!( "../tests/data/invalid_cde_number_of_files_allocation_smaller_offset.zip" )); let reader = ZipArchive::new(Cursor::new(v)); assert!(reader.is_err() || reader.unwrap().is_empty()); } /// test case to ensure we don't preemptively over allocate based on the /// declared number of files in the CDE of an invalid zip when the number of /// files declared is less than the alleged offset in the CDE #[test] fn invalid_cde_number_of_files_allocation_greater_offset() { use super::ZipArchive; let mut v = Vec::new(); v.extend_from_slice(include_bytes!( "../tests/data/invalid_cde_number_of_files_allocation_greater_offset.zip" )); let reader = ZipArchive::new(Cursor::new(v)); assert!(reader.is_err()); } #[cfg(feature = "deflate64")] #[test] fn deflate64_index_out_of_bounds() -> std::io::Result<()> { let mut v = Vec::new(); v.extend_from_slice(include_bytes!( "../tests/data/raw_deflate64_index_out_of_bounds.zip" )); let mut reader = ZipArchive::new(Cursor::new(v))?; std::io::copy(&mut reader.by_index(0)?, &mut std::io::sink()).expect_err("Invalid file"); Ok(()) } #[cfg(feature = "deflate64")] #[test] fn 
deflate64_not_enough_space() { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/deflate64_issue_25.zip")); ZipArchive::new(Cursor::new(v)).expect_err("Invalid file"); } #[cfg(feature = "_deflate-any")] #[test] fn test_read_with_data_descriptor() { use std::io::Read; let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/data_descriptor.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); let mut decompressed = [0u8; 16]; let mut file = reader.by_index(0).unwrap(); assert_eq!(file.read(&mut decompressed).unwrap(), 12); } #[test] fn test_is_symlink() -> std::io::Result<()> { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/symlink.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); assert!(reader.by_index(0).unwrap().is_symlink()); let tempdir = TempDir::with_prefix("test_is_symlink")?; reader.extract(&tempdir).unwrap(); assert!(tempdir.path().join("bar").is_symlink()); Ok(()) } #[test] #[cfg(feature = "_deflate-any")] fn test_utf8_extra_field() { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/chinese.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); reader.by_name("äøƒäøŖęˆæé—“.txt").unwrap(); } #[test] fn test_utf8() { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/linux-7z.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); reader.by_name("你儽.txt").unwrap(); } #[test] fn test_utf8_2() { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/windows-7zip.zip")); let mut reader = ZipArchive::new(Cursor::new(v)).unwrap(); reader.by_name("你儽.txt").unwrap(); } #[test] fn test_64k_files() -> ZipResult<()> { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = SimpleFileOptions { compression_method: Stored, ..Default::default() }; for i in 0..=u16::MAX { let file_name = format!("{i}.txt"); writer.start_file(&*file_name, options)?; 
writer.write_all(i.to_string().as_bytes())?; } let mut reader = ZipArchive::new(writer.finish()?)?; for i in 0..=u16::MAX { let expected_name = format!("{i}.txt"); let expected_contents = i.to_string(); let expected_contents = expected_contents.as_bytes(); let mut file = reader.by_name(&expected_name)?; let mut contents = Vec::with_capacity(expected_contents.len()); file.read_to_end(&mut contents)?; assert_eq!(contents, expected_contents); drop(file); contents.clear(); let mut file = reader.by_index(i as usize)?; file.read_to_end(&mut contents)?; assert_eq!(contents, expected_contents); } Ok(()) } /// Symlinks being extracted shouldn't be followed out of the destination directory. #[test] fn test_cannot_symlink_outside_destination() -> ZipResult<()> { use std::fs::create_dir; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.add_symlink("symlink/", "../dest-sibling/", SimpleFileOptions::default())?; writer.start_file("symlink/dest-file", SimpleFileOptions::default())?; let mut reader = writer.finish_into_readable()?; let dest_parent = TempDir::with_prefix("read__test_cannot_symlink_outside_destination").unwrap(); let dest_sibling = dest_parent.path().join("dest-sibling"); create_dir(&dest_sibling)?; let dest = dest_parent.path().join("dest"); create_dir(&dest)?; assert!(reader.extract(dest).is_err()); assert!(!dest_sibling.join("dest-file").exists()); Ok(()) } #[test] fn test_can_create_destination() -> ZipResult<()> { let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/mimetype.zip")); let mut reader = ZipArchive::new(Cursor::new(v))?; let dest = TempDir::with_prefix("read__test_can_create_destination").unwrap(); reader.extract(&dest)?; assert!(dest.path().join("mimetype").exists()); Ok(()) } } zip-2.5.0/src/result.rs000064400000000000000000000100211046102023000130770ustar 00000000000000#![allow(unknown_lints)] // non_local_definitions isn't in Rust 1.70 #![allow(non_local_definitions)] //! 
//! Error types that can be emitted from this library

use std::borrow::Cow;
use std::error::Error;
use std::fmt::{self, Display, Formatter};
use std::io;
use std::num::TryFromIntError;
use std::string::FromUtf8Error;

/// Generic result type with ZipError as its error variant
pub type ZipResult<T> = Result<T, ZipError>;

/// Error type for Zip
#[derive(Debug)]
#[non_exhaustive]
pub enum ZipError {
    /// i/o error
    Io(io::Error),
    /// invalid Zip archive
    InvalidArchive(Cow<'static, str>),
    /// unsupported Zip archive
    UnsupportedArchive(&'static str),
    /// specified file not found in archive
    FileNotFound,
    /// provided password is incorrect
    InvalidPassword,
}

impl ZipError {
    /// The text used as an error when a password is required and not supplied
    ///
    /// ```rust,no_run
    /// # use zip::result::ZipError;
    /// # let mut archive = zip::ZipArchive::new(std::io::Cursor::new(&[])).unwrap();
    /// match archive.by_index(1) {
    ///     Err(ZipError::UnsupportedArchive(ZipError::PASSWORD_REQUIRED)) => eprintln!("a password is needed to unzip this file"),
    ///     _ => (),
    /// }
    /// # ()
    /// ```
    pub const PASSWORD_REQUIRED: &'static str = "Password required to decrypt file";
}

impl Display for ZipError {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        match self {
            Self::Io(_) => f.write_str("i/o error"),
            Self::InvalidArchive(e) => write!(f, "invalid Zip archive: {}", e),
            Self::UnsupportedArchive(e) => write!(f, "unsupported Zip archive: {}", e),
            Self::FileNotFound => f.write_str("specified file not found in archive"),
            Self::InvalidPassword => f.write_str("provided password is incorrect"),
        }
    }
}

impl Error for ZipError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            Self::Io(e) => Some(e),
            Self::InvalidArchive(_)
            | Self::UnsupportedArchive(_)
            | Self::FileNotFound
            | Self::InvalidPassword => None,
        }
    }
}

impl From<ZipError> for io::Error {
    fn from(err: ZipError) -> io::Error {
        let kind = match &err {
            ZipError::Io(err) => err.kind(),
            ZipError::InvalidArchive(_) => io::ErrorKind::InvalidData,
            ZipError::UnsupportedArchive(_) => io::ErrorKind::Unsupported,
            ZipError::FileNotFound => io::ErrorKind::NotFound,
            ZipError::InvalidPassword => io::ErrorKind::InvalidInput,
        };
        io::Error::new(kind, err)
    }
}

impl From<io::Error> for ZipError {
    fn from(value: io::Error) -> Self {
        Self::Io(value)
    }
}

impl From<DateTimeRangeError> for ZipError {
    fn from(_: DateTimeRangeError) -> Self {
        invalid!("Invalid date or time")
    }
}

impl From<FromUtf8Error> for ZipError {
    fn from(_: FromUtf8Error) -> Self {
        invalid!("Invalid UTF-8")
    }
}

/// Error type for time parsing
#[derive(Debug)]
pub struct DateTimeRangeError;

// TryFromIntError is also an out-of-range error.
impl From<TryFromIntError> for DateTimeRangeError {
    fn from(_value: TryFromIntError) -> Self {
        DateTimeRangeError
    }
}

impl fmt::Display for DateTimeRangeError {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        write!(
            fmt,
            "a date could not be represented within the bounds of the MS-DOS date range (1980-2107)"
        )
    }
}

impl Error for DateTimeRangeError {}

pub(crate) fn invalid_archive<M: Into<Cow<'static, str>>>(message: M) -> ZipError {
    ZipError::InvalidArchive(message.into())
}

pub(crate) const fn invalid_archive_const(message: &'static str) -> ZipError {
    ZipError::InvalidArchive(Cow::Borrowed(message))
}

macro_rules! invalid {
    ($message:literal) => {
        crate::result::invalid_archive_const($message)
    };
    ($($arg:tt)*) => {
        crate::result::invalid_archive(format!($($arg)*))
    };
}

pub(crate) use invalid;
zip-2.5.0/src/spec.rs000064400000000000000000000660101046102023000125240ustar 00000000000000#![macro_use]

use crate::read::magic_finder::{Backwards, Forward, MagicFinder, OptimisticMagicFinder};
use crate::read::ArchiveOffset;
use crate::result::{invalid, ZipError, ZipResult};
use core::mem;
use std::io;
use std::io::prelude::*;
use std::slice;

/// "Magic" header values used in the zip spec to locate metadata records.
///
/// These values currently always take up a fixed four bytes, so we can parse and wrap them in this
/// struct to enforce some small amount of type safety.
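The type-safety idea behind the signature wrapper described above can be sketched std-only. This `Magic` newtype is a minimal stand-in mirroring the crate's design, not its actual definition:

```rust
// A minimal stand-in for a 4-byte little-endian signature wrapper:
// the newtype prevents mixing raw u32s with parsed signatures.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(transparent)]
struct Magic(u32);

impl Magic {
    const LOCAL_FILE_HEADER_SIGNATURE: Self = Magic(0x04034b50);

    const fn from_le_bytes(bytes: [u8; 4]) -> Self {
        Magic(u32::from_le_bytes(bytes))
    }

    const fn to_le_bytes(self) -> [u8; 4] {
        self.0.to_le_bytes()
    }
}

fn main() {
    // "PK\x03\x04" is how the local file header signature appears on disk.
    let on_disk = *b"PK\x03\x04";
    let parsed = Magic::from_le_bytes(on_disk);
    assert_eq!(parsed, Magic::LOCAL_FILE_HEADER_SIGNATURE);
    // Round-trip back to the serialized form.
    assert_eq!(parsed.to_le_bytes(), on_disk);
}
```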
#[derive(Copy, Clone, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)] #[repr(transparent)] pub(crate) struct Magic(u32); impl Magic { pub const fn literal(x: u32) -> Self { Self(x) } #[inline(always)] #[allow(dead_code)] pub const fn from_le_bytes(bytes: [u8; 4]) -> Self { Self(u32::from_le_bytes(bytes)) } #[inline(always)] pub const fn to_le_bytes(self) -> [u8; 4] { self.0.to_le_bytes() } #[allow(clippy::wrong_self_convention)] #[inline(always)] pub fn from_le(self) -> Self { Self(u32::from_le(self.0)) } #[allow(clippy::wrong_self_convention)] #[inline(always)] pub fn to_le(self) -> Self { Self(u32::to_le(self.0)) } pub const LOCAL_FILE_HEADER_SIGNATURE: Self = Self::literal(0x04034b50); pub const CENTRAL_DIRECTORY_HEADER_SIGNATURE: Self = Self::literal(0x02014b50); pub const CENTRAL_DIRECTORY_END_SIGNATURE: Self = Self::literal(0x06054b50); pub const ZIP64_CENTRAL_DIRECTORY_END_SIGNATURE: Self = Self::literal(0x06064b50); pub const ZIP64_CENTRAL_DIRECTORY_END_LOCATOR_SIGNATURE: Self = Self::literal(0x07064b50); } /// Similar to [`Magic`], but used for extra field tags as per section 4.5.3 of APPNOTE.TXT. #[derive(Copy, Clone, Debug, PartialOrd, Ord, PartialEq, Eq, Hash)] #[repr(transparent)] pub(crate) struct ExtraFieldMagic(u16); /* TODO: maybe try to use this for parsing extra fields as well as writing them? */ #[allow(dead_code)] impl ExtraFieldMagic { pub const fn literal(x: u16) -> Self { Self(x) } #[inline(always)] pub const fn from_le_bytes(bytes: [u8; 2]) -> Self { Self(u16::from_le_bytes(bytes)) } #[inline(always)] pub const fn to_le_bytes(self) -> [u8; 2] { self.0.to_le_bytes() } #[allow(clippy::wrong_self_convention)] #[inline(always)] pub fn from_le(self) -> Self { Self(u16::from_le(self.0)) } #[allow(clippy::wrong_self_convention)] #[inline(always)] pub fn to_le(self) -> Self { Self(u16::to_le(self.0)) } pub const ZIP64_EXTRA_FIELD_TAG: Self = Self::literal(0x0001); } /// The file size at which a ZIP64 record becomes necessary. 
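The zip64 size threshold described here can be illustrated with a small std-only helper. It mirrors the `large_file` check used when regenerating `SimpleFileOptions` (`compressed_size().max(size()) > ZIP64_BYTES_THR`); the helper name is ours, not the crate's:

```rust
// The legacy ZIP fields are 32-bit, so any size that doesn't fit in a
// u32 forces a ZIP64 record. Mirrors `ZIP64_BYTES_THR = u32::MAX as u64`.
const ZIP64_BYTES_THR: u64 = u32::MAX as u64;

/// Whether an entry with these sizes needs a zip64 record.
fn needs_zip64(uncompressed: u64, compressed: u64) -> bool {
    uncompressed.max(compressed) > ZIP64_BYTES_THR
}

fn main() {
    assert!(!needs_zip64(1024, 512));
    // Exactly at the limit still fits in the legacy 32-bit fields.
    assert!(!needs_zip64(ZIP64_BYTES_THR, ZIP64_BYTES_THR));
    assert!(needs_zip64(ZIP64_BYTES_THR + 1, 512));
}
```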
/// /// If a file larger than this threshold attempts to be written, compressed or uncompressed, and /// [`FileOptions::large_file()`](crate::write::FileOptions) was not true, then [`ZipWriter`] will /// raise an [`io::Error`] with [`io::ErrorKind::Other`]. /// /// If the zip file itself is larger than this value, then a zip64 central directory record will be /// written to the end of the file. /// ///``` /// # fn main() -> Result<(), zip::result::ZipError> { /// # #[cfg(target_pointer_width = "64")] /// # { /// use std::io::{self, Cursor, prelude::*}; /// use std::error::Error; /// use zip::{ZipWriter, write::SimpleFileOptions}; /// /// let mut zip = ZipWriter::new(Cursor::new(Vec::new())); /// // Writing an extremely large file for this test is faster without compression. /// let options = SimpleFileOptions::default().compression_method(zip::CompressionMethod::Stored); /// /// let big_len: usize = (zip::ZIP64_BYTES_THR as usize) + 1; /// let big_buf = vec![0u8; big_len]; /// zip.start_file("zero.dat", options)?; /// // This is too big! /// let res = zip.write_all(&big_buf[..]).err().unwrap(); /// assert_eq!(res.kind(), io::ErrorKind::Other); /// let description = format!("{}", &res); /// assert_eq!(description, "Large file option has not been set"); /// // Attempting to write anything further to the same zip will still succeed, but the previous /// // failing entry has been removed. /// zip.start_file("one.dat", options)?; /// let zip = zip.finish_into_readable()?; /// let names: Vec<_> = zip.file_names().collect(); /// assert_eq!(&names, &["one.dat"]); /// /// // Create a new zip output. /// let mut zip = ZipWriter::new(Cursor::new(Vec::new())); /// // This time, create a zip64 record for the file. /// let options = options.large_file(true); /// zip.start_file("zero.dat", options)?; /// // This succeeds because we specified that it could be a large file. 
/// assert!(zip.write_all(&big_buf[..]).is_ok());
/// # }
/// # Ok(())
/// # }
///```
pub const ZIP64_BYTES_THR: u64 = u32::MAX as u64;

/// The number of entries within a single zip necessary to allocate a zip64 central
/// directory record.
///
/// If more than this number of entries is written to a [`ZipWriter`], then [`ZipWriter::finish()`]
/// will write out extra zip64 data to the end of the zip file.
pub const ZIP64_ENTRY_THR: usize = u16::MAX as usize;

/// # Safety
///
/// - No padding/uninit bytes
/// - All byte patterns must be valid
/// - No cell, pointers
///
/// See `bytemuck::Pod` for more details.
pub(crate) unsafe trait Pod: Copy + 'static {
    #[inline]
    fn zeroed() -> Self {
        unsafe { mem::zeroed() }
    }

    #[inline]
    fn as_bytes(&self) -> &[u8] {
        unsafe { slice::from_raw_parts(self as *const Self as *const u8, mem::size_of::<Self>()) }
    }

    #[inline]
    fn as_bytes_mut(&mut self) -> &mut [u8] {
        unsafe { slice::from_raw_parts_mut(self as *mut Self as *mut u8, mem::size_of::<Self>()) }
    }
}

pub(crate) trait FixedSizeBlock: Pod {
    const MAGIC: Magic;

    fn magic(self) -> Magic;

    const WRONG_MAGIC_ERROR: ZipError;

    #[allow(clippy::wrong_self_convention)]
    fn from_le(self) -> Self;

    fn parse<R: Read>(reader: &mut R) -> ZipResult<Self> {
        let mut block = Self::zeroed();
        reader.read_exact(block.as_bytes_mut())?;
        let block = Self::from_le(block);

        if block.magic() != Self::MAGIC {
            return Err(Self::WRONG_MAGIC_ERROR);
        }

        Ok(block)
    }

    fn to_le(self) -> Self;

    fn write<T: Write>(self, writer: &mut T) -> ZipResult<()> {
        let block = self.to_le();
        writer.write_all(block.as_bytes())?;
        Ok(())
    }
}

/// Convert all the fields of a struct *from* little-endian representations.
macro_rules!
from_le { ($obj:ident, $field:ident, $type:ty) => { $obj.$field = <$type>::from_le($obj.$field); }; ($obj:ident, [($field:ident, $type:ty) $(,)?]) => { from_le![$obj, $field, $type]; }; ($obj:ident, [($field:ident, $type:ty), $($rest:tt),+ $(,)?]) => { from_le![$obj, $field, $type]; from_le!($obj, [$($rest),+]); }; } /// Convert all the fields of a struct *into* little-endian representations. macro_rules! to_le { ($obj:ident, $field:ident, $type:ty) => { $obj.$field = <$type>::to_le($obj.$field); }; ($obj:ident, [($field:ident, $type:ty) $(,)?]) => { to_le![$obj, $field, $type]; }; ($obj:ident, [($field:ident, $type:ty), $($rest:tt),+ $(,)?]) => { to_le![$obj, $field, $type]; to_le!($obj, [$($rest),+]); }; } /* TODO: derive macro to generate these fields? */ /// Implement `from_le()` and `to_le()`, providing the field specification to both macros /// and methods. macro_rules! to_and_from_le { ($($args:tt),+ $(,)?) => { #[inline(always)] fn from_le(mut self) -> Self { from_le![self, [$($args),+]]; self } #[inline(always)] fn to_le(mut self) -> Self { to_le![self, [$($args),+]]; self } }; } #[derive(Copy, Clone, Debug)] #[repr(packed, C)] pub(crate) struct Zip32CDEBlock { magic: Magic, pub disk_number: u16, pub disk_with_central_directory: u16, pub number_of_files_on_this_disk: u16, pub number_of_files: u16, pub central_directory_size: u32, pub central_directory_offset: u32, pub zip_file_comment_length: u16, } unsafe impl Pod for Zip32CDEBlock {} impl FixedSizeBlock for Zip32CDEBlock { const MAGIC: Magic = Magic::CENTRAL_DIRECTORY_END_SIGNATURE; #[inline(always)] fn magic(self) -> Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("Invalid digital signature header"); to_and_from_le![ (magic, Magic), (disk_number, u16), (disk_with_central_directory, u16), (number_of_files_on_this_disk, u16), (number_of_files, u16), (central_directory_size, u32), (central_directory_offset, u32), (zip_file_comment_length, u16) ]; } #[derive(Debug)] pub(crate) struct 
Zip32CentralDirectoryEnd { pub disk_number: u16, pub disk_with_central_directory: u16, pub number_of_files_on_this_disk: u16, pub number_of_files: u16, pub central_directory_size: u32, pub central_directory_offset: u32, pub zip_file_comment: Box<[u8]>, } impl Zip32CentralDirectoryEnd { fn into_block_and_comment(self) -> (Zip32CDEBlock, Box<[u8]>) { let Self { disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, zip_file_comment, } = self; let block = Zip32CDEBlock { magic: Zip32CDEBlock::MAGIC, disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, zip_file_comment_length: zip_file_comment.len() as u16, }; (block, zip_file_comment) } pub fn parse(reader: &mut T) -> ZipResult { let Zip32CDEBlock { // magic, disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, zip_file_comment_length, .. 
} = Zip32CDEBlock::parse(reader)?; let mut zip_file_comment = vec![0u8; zip_file_comment_length as usize].into_boxed_slice(); if let Err(e) = reader.read_exact(&mut zip_file_comment) { if e.kind() == io::ErrorKind::UnexpectedEof { return Err(invalid!("EOCD comment exceeds file boundary")); } return Err(e.into()); } Ok(Zip32CentralDirectoryEnd { disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, zip_file_comment, }) } pub fn write(self, writer: &mut T) -> ZipResult<()> { let (block, comment) = self.into_block_and_comment(); if comment.len() > u16::MAX as usize { return Err(invalid!("EOCD comment length exceeds u16::MAX")); } block.write(writer)?; writer.write_all(&comment)?; Ok(()) } pub fn may_be_zip64(&self) -> bool { self.number_of_files == u16::MAX || self.central_directory_offset == u32::MAX } } #[derive(Copy, Clone)] #[repr(packed, C)] pub(crate) struct Zip64CDELocatorBlock { magic: Magic, pub disk_with_central_directory: u32, pub end_of_central_directory_offset: u64, pub number_of_disks: u32, } unsafe impl Pod for Zip64CDELocatorBlock {} impl FixedSizeBlock for Zip64CDELocatorBlock { const MAGIC: Magic = Magic::ZIP64_CENTRAL_DIRECTORY_END_LOCATOR_SIGNATURE; #[inline(always)] fn magic(self) -> Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("Invalid zip64 locator digital signature header"); to_and_from_le![ (magic, Magic), (disk_with_central_directory, u32), (end_of_central_directory_offset, u64), (number_of_disks, u32), ]; } pub(crate) struct Zip64CentralDirectoryEndLocator { pub disk_with_central_directory: u32, pub end_of_central_directory_offset: u64, pub number_of_disks: u32, } impl Zip64CentralDirectoryEndLocator { pub fn parse(reader: &mut T) -> ZipResult { let Zip64CDELocatorBlock { // magic, disk_with_central_directory, end_of_central_directory_offset, number_of_disks, .. 
} = Zip64CDELocatorBlock::parse(reader)?; Ok(Zip64CentralDirectoryEndLocator { disk_with_central_directory, end_of_central_directory_offset, number_of_disks, }) } pub fn block(self) -> Zip64CDELocatorBlock { let Self { disk_with_central_directory, end_of_central_directory_offset, number_of_disks, } = self; Zip64CDELocatorBlock { magic: Zip64CDELocatorBlock::MAGIC, disk_with_central_directory, end_of_central_directory_offset, number_of_disks, } } pub fn write(self, writer: &mut T) -> ZipResult<()> { self.block().write(writer) } } #[derive(Copy, Clone)] #[repr(packed, C)] pub(crate) struct Zip64CDEBlock { magic: Magic, pub record_size: u64, pub version_made_by: u16, pub version_needed_to_extract: u16, pub disk_number: u32, pub disk_with_central_directory: u32, pub number_of_files_on_this_disk: u64, pub number_of_files: u64, pub central_directory_size: u64, pub central_directory_offset: u64, } unsafe impl Pod for Zip64CDEBlock {} impl FixedSizeBlock for Zip64CDEBlock { const MAGIC: Magic = Magic::ZIP64_CENTRAL_DIRECTORY_END_SIGNATURE; fn magic(self) -> Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("Invalid digital signature header"); to_and_from_le![ (magic, Magic), (record_size, u64), (version_made_by, u16), (version_needed_to_extract, u16), (disk_number, u32), (disk_with_central_directory, u32), (number_of_files_on_this_disk, u64), (number_of_files, u64), (central_directory_size, u64), (central_directory_offset, u64), ]; } pub(crate) struct Zip64CentralDirectoryEnd { pub record_size: u64, pub version_made_by: u16, pub version_needed_to_extract: u16, pub disk_number: u32, pub disk_with_central_directory: u32, pub number_of_files_on_this_disk: u64, pub number_of_files: u64, pub central_directory_size: u64, pub central_directory_offset: u64, pub extensible_data_sector: Box<[u8]>, } impl Zip64CentralDirectoryEnd { pub fn parse(reader: &mut T, max_size: u64) -> ZipResult { let Zip64CDEBlock { record_size, version_made_by, version_needed_to_extract, 
disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, .. } = Zip64CDEBlock::parse(reader)?; if record_size < 44 { return Err(invalid!("Low EOCD64 record size")); } else if record_size.saturating_add(12) > max_size { return Err(invalid!("EOCD64 extends beyond EOCD64 locator")); } let mut zip_file_comment = vec![0u8; record_size as usize - 44].into_boxed_slice(); reader.read_exact(&mut zip_file_comment)?; Ok(Self { record_size, version_made_by, version_needed_to_extract, disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, extensible_data_sector: zip_file_comment, }) } pub fn into_block_and_comment(self) -> (Zip64CDEBlock, Box<[u8]>) { let Self { record_size, version_made_by, version_needed_to_extract, disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, extensible_data_sector, } = self; ( Zip64CDEBlock { magic: Zip64CDEBlock::MAGIC, record_size, version_made_by, version_needed_to_extract, disk_number, disk_with_central_directory, number_of_files_on_this_disk, number_of_files, central_directory_size, central_directory_offset, }, extensible_data_sector, ) } pub fn write(self, writer: &mut T) -> ZipResult<()> { let (block, comment) = self.into_block_and_comment(); block.write(writer)?; writer.write_all(&comment)?; Ok(()) } } pub(crate) struct DataAndPosition { pub data: T, #[allow(dead_code)] pub position: u64, } impl From<(T, u64)> for DataAndPosition { fn from(value: (T, u64)) -> Self { Self { data: value.0, position: value.1, } } } pub(crate) struct CentralDirectoryEndInfo { pub eocd: DataAndPosition, pub eocd64: Option>, pub archive_offset: u64, } /// Finds the EOCD and possibly the EOCD64 block and determines the archive offset. 
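The EOCD search described here can be sketched as a toy backwards scan over an in-memory buffer. This is a simplification of `MagicFinder`, which streams over a bounded window of the file and must also tolerate false positives (the signature bytes can legally appear inside an archive comment):

```rust
// End-of-central-directory signature, serialized little-endian as on disk.
const EOCD_SIG: [u8; 4] = 0x06054b50u32.to_le_bytes();

/// Returns the offset of the last occurrence of the EOCD signature,
/// scanning from the end of the buffer (where the EOCD normally lives).
fn find_eocd(buf: &[u8]) -> Option<usize> {
    buf.windows(4).rposition(|w| w == &EOCD_SIG[..])
}

fn main() {
    // 22 bytes is the minimum EOCD size; here it sits after some payload.
    let mut file = b"payload".to_vec();
    let eocd_offset = file.len();
    file.extend_from_slice(&EOCD_SIG);
    file.extend_from_slice(&[0u8; 18]); // rest of a zeroed EOCD record
    assert_eq!(find_eocd(&file), Some(eocd_offset));
    assert_eq!(find_eocd(b"no signature here"), None);
}
```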
/// In the best case scenario (no prepended junk), this function will not backtrack
/// in the reader.
pub(crate) fn find_central_directory<R: Read + Seek>(
    reader: &mut R,
    archive_offset: ArchiveOffset,
    end_exclusive: u64,
    file_len: u64,
) -> ZipResult<CentralDirectoryEndInfo> {
    const EOCD_SIG_BYTES: [u8; mem::size_of::<Magic>()] =
        Magic::CENTRAL_DIRECTORY_END_SIGNATURE.to_le_bytes();

    const EOCD64_SIG_BYTES: [u8; mem::size_of::<Magic>()] =
        Magic::ZIP64_CENTRAL_DIRECTORY_END_SIGNATURE.to_le_bytes();

    const CDFH_SIG_BYTES: [u8; mem::size_of::<Magic>()] =
        Magic::CENTRAL_DIRECTORY_HEADER_SIGNATURE.to_le_bytes();

    // Instantiate the mandatory finder
    let mut eocd_finder =
        MagicFinder::<Backwards<'static>>::new(&EOCD_SIG_BYTES, 0, end_exclusive);
    let mut subfinder: Option<OptimisticMagicFinder<Forward<'static>>> = None;

    // Keep the last errors for cases of improper EOCD instances.
    let mut parsing_error = None;

    while let Some(eocd_offset) = eocd_finder.next(reader)? {
        // Attempt to parse the EOCD block
        let eocd = match Zip32CentralDirectoryEnd::parse(reader) {
            Ok(eocd) => eocd,
            Err(e) => {
                if parsing_error.is_none() {
                    parsing_error = Some(e);
                }
                continue;
            }
        };

        // !
Relaxed (inequality) due to garbage-after-comment Python files // Consistency check: the EOCD comment must terminate before the end of file if eocd.zip_file_comment.len() as u64 + eocd_offset + 22 > file_len { parsing_error = Some(invalid!("Invalid EOCD comment length")); continue; } let zip64_metadata = if eocd.may_be_zip64() { fn try_read_eocd64_locator( reader: &mut (impl Read + Seek), eocd_offset: u64, ) -> ZipResult<(u64, Zip64CentralDirectoryEndLocator)> { if eocd_offset < mem::size_of::() as u64 { return Err(invalid!("EOCD64 Locator does not fit in file")); } let locator64_offset = eocd_offset - mem::size_of::() as u64; reader.seek(io::SeekFrom::Start(locator64_offset))?; Ok(( locator64_offset, Zip64CentralDirectoryEndLocator::parse(reader)?, )) } try_read_eocd64_locator(reader, eocd_offset).ok() } else { None }; let Some((locator64_offset, locator64)) = zip64_metadata else { // Branch out for zip32 let relative_cd_offset = eocd.central_directory_offset as u64; // If the archive is empty, there is nothing more to be checked, the archive is correct. if eocd.number_of_files == 0 { return Ok(CentralDirectoryEndInfo { eocd: (eocd, eocd_offset).into(), eocd64: None, archive_offset: eocd_offset.saturating_sub(relative_cd_offset), }); } // Consistency check: the CD relative offset cannot be after the EOCD if relative_cd_offset >= eocd_offset { parsing_error = Some(invalid!("Invalid CDFH offset in EOCD")); continue; } // Attempt to find the first CDFH let subfinder = subfinder .get_or_insert_with(OptimisticMagicFinder::new_empty) .repurpose( &CDFH_SIG_BYTES, // The CDFH must be before the EOCD and after the relative offset, // because prepended junk can only move it forward. (relative_cd_offset, eocd_offset), match archive_offset { ArchiveOffset::Known(n) => { Some((relative_cd_offset.saturating_add(n).min(eocd_offset), true)) } _ => Some((relative_cd_offset, false)), }, ); // Consistency check: find the first CDFH if let Some(cd_offset) = subfinder.next(reader)? 
{ // The first CDFH will define the archive offset let archive_offset = cd_offset - relative_cd_offset; return Ok(CentralDirectoryEndInfo { eocd: (eocd, eocd_offset).into(), eocd64: None, archive_offset, }); } parsing_error = Some(invalid!("No CDFH found")); continue; }; // Consistency check: the EOCD64 offset must be before EOCD64 Locator offset if locator64.end_of_central_directory_offset >= locator64_offset { parsing_error = Some(invalid!("Invalid EOCD64 Locator CD offset")); continue; } if locator64.number_of_disks > 1 { parsing_error = Some(invalid!("Multi-disk ZIP files are not supported")); continue; } // This was hidden inside a function to collect errors in a single place. // Once try blocks are stabilized, this can go away. fn try_read_eocd64( reader: &mut R, locator64: &Zip64CentralDirectoryEndLocator, expected_length: u64, ) -> ZipResult { let z64 = Zip64CentralDirectoryEnd::parse(reader, expected_length)?; // Consistency check: EOCD64 locator should agree with the EOCD64 if z64.disk_with_central_directory != locator64.disk_with_central_directory { return Err(invalid!("Invalid EOCD64: inconsistency with Locator data")); } // Consistency check: the EOCD64 must have the expected length if z64.record_size + 12 != expected_length { return Err(invalid!("Invalid EOCD64: inconsistent length")); } Ok(z64) } // Attempt to find the EOCD64 with an initial guess let subfinder = subfinder .get_or_insert_with(OptimisticMagicFinder::new_empty) .repurpose( &EOCD64_SIG_BYTES, (locator64.end_of_central_directory_offset, locator64_offset), match archive_offset { ArchiveOffset::Known(n) => Some(( locator64 .end_of_central_directory_offset .saturating_add(n) .min(locator64_offset), true, )), _ => Some((locator64.end_of_central_directory_offset, false)), }, ); // Consistency check: Find the EOCD64 let mut local_error = None; while let Some(eocd64_offset) = subfinder.next(reader)?
{ let archive_offset = eocd64_offset - locator64.end_of_central_directory_offset; match try_read_eocd64( reader, &locator64, locator64_offset.saturating_sub(eocd64_offset), ) { Ok(eocd64) => { if eocd64_offset < eocd64 .number_of_files .saturating_mul( mem::size_of::() as u64 ) .saturating_add(eocd64.central_directory_offset) { local_error = Some(invalid!("Invalid EOCD64: inconsistent number of files")); continue; } return Ok(CentralDirectoryEndInfo { eocd: (eocd, eocd_offset).into(), eocd64: Some((eocd64, eocd64_offset).into()), archive_offset, }); } Err(e) => { local_error = Some(e); } } } parsing_error = local_error.or(Some(invalid!("Could not find EOCD64"))); } Err(parsing_error.unwrap_or(invalid!("Could not find EOCD"))) } pub(crate) fn is_dir(filename: &str) -> bool { filename .chars() .next_back() .is_some_and(|c| c == '/' || c == '\\') } #[cfg(test)] mod test { use super::*; use std::io::Cursor; #[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)] #[repr(packed, C)] pub struct TestBlock { magic: Magic, pub file_name_length: u16, } unsafe impl Pod for TestBlock {} impl FixedSizeBlock for TestBlock { const MAGIC: Magic = Magic::literal(0x01111); fn magic(self) -> Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("unreachable"); to_and_from_le![(magic, Magic), (file_name_length, u16)]; } /// Demonstrate that a block object can be safely written to memory and deserialized back out. #[test] fn block_serde() { let block = TestBlock { magic: TestBlock::MAGIC, file_name_length: 3, }; let mut c = Cursor::new(Vec::new()); block.write(&mut c).unwrap(); c.set_position(0); let block2 = TestBlock::parse(&mut c).unwrap(); assert_eq!(block, block2); } }
zip-2.5.0/src/types.rs
//! Types that specify what is contained in a ZIP.
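The central-directory search in `spec.rs` above scans for magic signatures near the end of the archive. A minimal standalone sketch of a backward scan for the 4-byte EOCD magic (`PK\x05\x06`); the function name is illustrative and nothing here is the crate's API:

```rust
/// Find the last occurrence of the end-of-central-directory magic
/// "PK\x05\x06" by scanning the buffer from the back, since the EOCD
/// record sits near the end of a ZIP file.
fn find_eocd_backward(data: &[u8]) -> Option<usize> {
    const SIG: [u8; 4] = [0x50, 0x4b, 0x05, 0x06];
    if data.len() < SIG.len() {
        return None;
    }
    (0..=data.len() - SIG.len()).rev().find(|&i| data[i..i + 4] == SIG)
}

fn main() {
    // An empty archive is just a 22-byte EOCD record with zeroed fields.
    let mut eocd = vec![0u8; 22];
    eocd[..4].copy_from_slice(&[0x50, 0x4b, 0x05, 0x06]);
    assert_eq!(find_eocd_backward(&eocd), Some(0));

    // Prepended junk (e.g. a self-extractor stub) shifts the record forward.
    let mut with_junk = vec![0xffu8; 10];
    with_junk.extend_from_slice(&eocd);
    assert_eq!(find_eocd_backward(&with_junk), Some(10));
}
```

A production scanner also bounds the search window by the maximum comment length (65,535 bytes) instead of walking the whole file, which is what the best-case "no backtracking" note above is about.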
use crate::cp437::FromCp437; use crate::write::{FileOptionExtension, FileOptions}; use path::{Component, Path, PathBuf}; use std::cmp::Ordering; use std::ffi::OsStr; use std::fmt; use std::fmt::{Debug, Formatter}; use std::mem; use std::path; use std::sync::{Arc, OnceLock}; #[cfg(feature = "chrono")] use chrono::{Datelike, NaiveDate, NaiveDateTime, NaiveTime, Timelike}; #[cfg(feature = "jiff-02")] use jiff::civil; use crate::result::{invalid, ZipError, ZipResult}; use crate::spec::{self, FixedSizeBlock, Pod}; pub(crate) mod ffi { pub const S_IFDIR: u32 = 0o0040000; pub const S_IFREG: u32 = 0o0100000; pub const S_IFLNK: u32 = 0o0120000; } use crate::extra_fields::ExtraField; use crate::result::DateTimeRangeError; use crate::spec::is_dir; use crate::types::ffi::S_IFDIR; use crate::{CompressionMethod, ZIP64_BYTES_THR}; #[cfg(feature = "time")] use time::{error::ComponentRange, Date, Month, OffsetDateTime, PrimitiveDateTime, Time}; pub(crate) struct ZipRawValues { pub(crate) crc32: u32, pub(crate) compressed_size: u64, pub(crate) uncompressed_size: u64, } #[derive(Clone, Copy, Debug, PartialEq, Eq, Default)] #[repr(u8)] pub enum System { Dos = 0, Unix = 3, #[default] Unknown, } impl From for System { fn from(system: u8) -> Self { match system { 0 => Self::Dos, 3 => Self::Unix, _ => Self::Unknown, } } } impl From for u8 { fn from(system: System) -> Self { match system { System::Dos => 0, System::Unix => 3, System::Unknown => 4, } } } /// Representation of a moment in time. /// /// Zip files use an old format from DOS to store timestamps, /// with its own set of peculiarities. /// For example, it has a resolution of 2 seconds! /// /// A [`DateTime`] can be stored directly in a zipfile with [`FileOptions::last_modified_time`], /// or read from one with [`ZipFile::last_modified`](crate::read::ZipFile::last_modified). /// /// # Warning /// /// Because there is no timezone associated with the [`DateTime`], they should ideally only /// be used for user-facing descriptions. 
/// /// Modern zip files store more precise timestamps; see [`crate::extra_fields::ExtendedTimestamp`] /// for details. #[derive(Clone, Copy, Eq, Hash, PartialEq)] pub struct DateTime { datepart: u16, timepart: u16, } impl Debug for DateTime { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { if *self == Self::default() { return f.write_str("DateTime::default()"); } f.write_fmt(format_args!( "DateTime::from_date_and_time({}, {}, {}, {}, {}, {})?", self.year(), self.month(), self.day(), self.hour(), self.minute(), self.second() )) } } impl Ord for DateTime { fn cmp(&self, other: &Self) -> Ordering { if let ord @ (Ordering::Less | Ordering::Greater) = self.year().cmp(&other.year()) { return ord; } if let ord @ (Ordering::Less | Ordering::Greater) = self.month().cmp(&other.month()) { return ord; } if let ord @ (Ordering::Less | Ordering::Greater) = self.day().cmp(&other.day()) { return ord; } if let ord @ (Ordering::Less | Ordering::Greater) = self.hour().cmp(&other.hour()) { return ord; } if let ord @ (Ordering::Less | Ordering::Greater) = self.minute().cmp(&other.minute()) { return ord; } self.second().cmp(&other.second()) } } impl PartialOrd for DateTime { fn partial_cmp(&self, other: &Self) -> Option { Some(self.cmp(other)) } } impl DateTime { /// Returns the current time if possible, otherwise the default of 1980-01-01. #[cfg(feature = "time")] pub fn default_for_write() -> Self { let now = OffsetDateTime::now_utc(); PrimitiveDateTime::new(now.date(), now.time()) .try_into() .unwrap_or_else(|_| DateTime::default()) } /// Returns the current time if possible, otherwise the default of 1980-01-01. 
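Because the MS-DOS layout stores more significant calendar fields in higher bits, the field-by-field `Ord` above agrees with plain numeric comparison of the raw `(datepart, timepart)` pair. A sketch, with a hypothetical `pack` helper mirroring the shifts used in this file:

```rust
/// Pack a date and time into MS-DOS words:
/// datepart = day | month << 5 | (year - 1980) << 9
/// timepart = second / 2 | minute << 5 | hour << 11
fn pack(year: u16, month: u16, day: u16, hour: u16, minute: u16, second: u16) -> (u16, u16) {
    (
        day | (month << 5) | ((year - 1980) << 9),
        (second >> 1) | (minute << 5) | (hour << 11),
    )
}

fn main() {
    let a = pack(2018, 11, 17, 10, 38, 30);
    // A later year dominates regardless of the smaller fields.
    assert!(a < pack(2019, 1, 1, 0, 0, 0));
    // Same date, later minute: the timepart breaks the tie.
    assert!(a < pack(2018, 11, 17, 10, 39, 0));
    // The epoch 1980-01-01 00:00:00 packs to (0x0021, 0).
    assert_eq!(pack(1980, 1, 1, 0, 0, 0), (0x0021, 0));
}
```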
#[cfg(not(feature = "time"))] pub fn default_for_write() -> Self { DateTime::default() } } #[cfg(fuzzing)] impl arbitrary::Arbitrary<'_> for DateTime { fn arbitrary(u: &mut arbitrary::Unstructured) -> arbitrary::Result { let year: u16 = u.int_in_range(1980..=2107)?; let month: u16 = u.int_in_range(1..=12)?; let day: u16 = u.int_in_range(1..=31)?; let datepart = day | (month << 5) | ((year - 1980) << 9); let hour: u16 = u.int_in_range(0..=23)?; let minute: u16 = u.int_in_range(0..=59)?; let second: u16 = u.int_in_range(0..=58)?; let timepart = (second >> 1) | (minute << 5) | (hour << 11); Ok(DateTime { datepart, timepart }) } } #[cfg(feature = "chrono")] impl TryFrom for DateTime { type Error = DateTimeRangeError; fn try_from(value: NaiveDateTime) -> Result { DateTime::from_date_and_time( value.year().try_into()?, value.month().try_into()?, value.day().try_into()?, value.hour().try_into()?, value.minute().try_into()?, value.second().try_into()?, ) } } #[cfg(feature = "chrono")] impl TryFrom for NaiveDateTime { type Error = DateTimeRangeError; fn try_from(value: DateTime) -> Result { let date = NaiveDate::from_ymd_opt( value.year().into(), value.month().into(), value.day().into(), ) .ok_or(DateTimeRangeError)?; let time = NaiveTime::from_hms_opt( value.hour().into(), value.minute().into(), value.second().into(), ) .ok_or(DateTimeRangeError)?; Ok(NaiveDateTime::new(date, time)) } } #[cfg(feature = "jiff-02")] impl TryFrom for DateTime { type Error = DateTimeRangeError; fn try_from(value: civil::DateTime) -> Result { Self::from_date_and_time( value.year().try_into()?, value.month() as u8, value.day() as u8, value.hour() as u8, value.minute() as u8, value.second() as u8, ) } } #[cfg(feature = "jiff-02")] impl TryFrom for civil::DateTime { type Error = jiff::Error; fn try_from(value: DateTime) -> Result { Self::new( value.year() as i16, value.month() as i8, value.day() as i8, value.hour() as i8, value.minute() as i8, value.second() as i8, 0, ) } } impl TryFrom<(u16, 
u16)> for DateTime { type Error = DateTimeRangeError; #[inline] fn try_from(values: (u16, u16)) -> Result { Self::try_from_msdos(values.0, values.1) } } impl From for (u16, u16) { #[inline] fn from(dt: DateTime) -> Self { (dt.datepart(), dt.timepart()) } } impl Default for DateTime { /// Constructs an 'default' datetime of 1980-01-01 00:00:00 fn default() -> DateTime { DateTime { datepart: 0b0000000000100001, timepart: 0, } } } impl fmt::Display for DateTime { #[inline] fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!( f, "{:04}-{:02}-{:02} {:02}:{:02}:{:02}", self.year(), self.month(), self.day(), self.hour(), self.minute(), self.second() ) } } impl DateTime { /// Converts an msdos (u16, u16) pair to a DateTime object /// /// # Safety /// The caller must ensure the date and time are valid. pub const unsafe fn from_msdos_unchecked(datepart: u16, timepart: u16) -> DateTime { DateTime { datepart, timepart } } /// Converts an msdos (u16, u16) pair to a DateTime object if it represents a valid date and /// time. 
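Going the other way, the two MS-DOS words unpack with the same shifts and masks used throughout this file; a standalone sketch (the helper name is illustrative):

```rust
/// Unpack MS-DOS date/time words into (year, month, day, hour, minute, second).
/// timepart: hour (5 bits) | minute (6 bits) | second / 2 (5 bits), high to low.
/// datepart: year - 1980 (7 bits) | month (4 bits) | day (5 bits), high to low.
fn unpack_msdos(datepart: u16, timepart: u16) -> (u16, u8, u8, u8, u8, u8) {
    let second = ((timepart & 0x001f) << 1) as u8;
    let minute = ((timepart >> 5) & 0x003f) as u8;
    let hour = (timepart >> 11) as u8;
    let day = (datepart & 0x001f) as u8;
    let month = ((datepart >> 5) & 0x000f) as u8;
    let year = (datepart >> 9) + 1980;
    (year, month, day, hour, minute, second)
}

fn main() {
    // 2018-11-17 10:38:30 packs to datepart 0x4D71, timepart 0x54CF.
    assert_eq!(unpack_msdos(0x4D71, 0x54CF), (2018, 11, 17, 10, 38, 30));
    // An all-zero timepart with the minimal datepart is the 1980-01-01 epoch.
    assert_eq!(unpack_msdos(0x0021, 0x0000), (1980, 1, 1, 0, 0, 0));
}
```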
pub fn try_from_msdos(datepart: u16, timepart: u16) -> Result { let seconds = (timepart & 0b0000000000011111) << 1; let minutes = (timepart & 0b0000011111100000) >> 5; let hours = (timepart & 0b1111100000000000) >> 11; let days = datepart & 0b0000000000011111; let months = (datepart & 0b0000000111100000) >> 5; let years = (datepart & 0b1111111000000000) >> 9; Self::from_date_and_time( years.checked_add(1980).ok_or(DateTimeRangeError)?, months.try_into()?, days.try_into()?, hours.try_into()?, minutes.try_into()?, seconds.try_into()?, ) } /// Constructs a DateTime from a specific date and time /// /// The bounds are: /// * year: [1980, 2107] /// * month: [1, 12] /// * day: [1, 28..=31] /// * hour: [0, 23] /// * minute: [0, 59] /// * second: [0, 58] pub fn from_date_and_time( year: u16, month: u8, day: u8, hour: u8, minute: u8, second: u8, ) -> Result { fn is_leap_year(year: u16) -> bool { (year % 4 == 0) && ((year % 25 != 0) || (year % 16 == 0)) } if (1980..=2107).contains(&year) && (1..=12).contains(&month) && (1..=31).contains(&day) && hour <= 23 && minute <= 59 && second <= 60 { let second = second.min(58); // exFAT can't store leap seconds let max_day = match month { 1 | 3 | 5 | 7 | 8 | 10 | 12 => 31, 4 | 6 | 9 | 11 => 30, 2 if is_leap_year(year) => 29, 2 => 28, _ => unreachable!(), }; if day > max_day { return Err(DateTimeRangeError); } let datepart = (day as u16) | ((month as u16) << 5) | ((year - 1980) << 9); let timepart = ((second as u16) >> 1) | ((minute as u16) << 5) | ((hour as u16) << 11); Ok(DateTime { datepart, timepart }) } else { Err(DateTimeRangeError) } } /// Indicates whether this date and time can be written to a zip archive. 
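`from_date_and_time` above tests leap years with `(year % 4 == 0) && ((year % 25 != 0) || (year % 16 == 0))`. For multiples of 4, `year % 25` agrees with `year % 100`, and once `year % 25 == 0` holds, `year % 16 == 0` is exactly `year % 400 == 0`, so this is the Gregorian rule in cheaper divisions. An exhaustive check sketch:

```rust
// The compact test used in this file.
fn is_leap_fast(year: u16) -> bool {
    (year % 4 == 0) && ((year % 25 != 0) || (year % 16 == 0))
}

// The textbook Gregorian rule, for comparison.
fn is_leap_gregorian(year: u16) -> bool {
    (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
}

fn main() {
    // The two definitions agree everywhere, not just in the DOS range.
    assert!((0u16..=9999).all(|y| is_leap_fast(y) == is_leap_gregorian(y)));
    assert!(is_leap_fast(2000)); // divisible by 400
    assert!(!is_leap_fast(2100)); // divisible by 100 but not 400
}
```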
pub fn is_valid(&self) -> bool { Self::try_from_msdos(self.datepart, self.timepart).is_ok() } #[cfg(feature = "time")] /// Converts a OffsetDateTime object to a DateTime /// /// Returns `Err` when this object is out of bounds #[deprecated(since = "0.6.4", note = "use `DateTime::try_from()` instead")] pub fn from_time(dt: OffsetDateTime) -> Result { dt.try_into() } /// Gets the time portion of this datetime in the msdos representation pub const fn timepart(&self) -> u16 { self.timepart } /// Gets the date portion of this datetime in the msdos representation pub const fn datepart(&self) -> u16 { self.datepart } #[cfg(feature = "time")] /// Converts the DateTime to a OffsetDateTime structure #[deprecated(since = "1.3.1", note = "use `OffsetDateTime::try_from()` instead")] pub fn to_time(&self) -> Result { (*self).try_into() } /// Get the year. There is no epoch, i.e. 2018 will be returned as 2018. pub const fn year(&self) -> u16 { (self.datepart >> 9) + 1980 } /// Get the month, where 1 = january and 12 = december /// /// # Warning /// /// When read from a zip file, this may not be a reasonable value pub const fn month(&self) -> u8 { ((self.datepart & 0b0000000111100000) >> 5) as u8 } /// Get the day /// /// # Warning /// /// When read from a zip file, this may not be a reasonable value pub const fn day(&self) -> u8 { (self.datepart & 0b0000000000011111) as u8 } /// Get the hour /// /// # Warning /// /// When read from a zip file, this may not be a reasonable value pub const fn hour(&self) -> u8 { (self.timepart >> 11) as u8 } /// Get the minute /// /// # Warning /// /// When read from a zip file, this may not be a reasonable value pub const fn minute(&self) -> u8 { ((self.timepart & 0b0000011111100000) >> 5) as u8 } /// Get the second /// /// # Warning /// /// When read from a zip file, this may not be a reasonable value pub const fn second(&self) -> u8 { ((self.timepart & 0b0000000000011111) << 1) as u8 } } #[cfg(feature = "time")] impl TryFrom for DateTime { type 
Error = DateTimeRangeError; #[allow(useless_deprecated)] #[deprecated( since = "2.5.0", note = "use `TryFrom for DateTime` instead" )] fn try_from(dt: OffsetDateTime) -> Result { Self::try_from(PrimitiveDateTime::new(dt.date(), dt.time())) } } #[cfg(feature = "time")] impl TryFrom for DateTime { type Error = DateTimeRangeError; fn try_from(dt: PrimitiveDateTime) -> Result { Self::from_date_and_time( dt.year().try_into()?, dt.month().into(), dt.day(), dt.hour(), dt.minute(), dt.second(), ) } } #[cfg(feature = "time")] impl TryFrom for OffsetDateTime { type Error = ComponentRange; #[allow(useless_deprecated)] #[deprecated( since = "2.5.0", note = "use `TryFrom for PrimitiveDateTime` instead" )] fn try_from(dt: DateTime) -> Result { PrimitiveDateTime::try_from(dt).map(PrimitiveDateTime::assume_utc) } } #[cfg(feature = "time")] impl TryFrom for PrimitiveDateTime { type Error = ComponentRange; fn try_from(dt: DateTime) -> Result { let date = Date::from_calendar_date(dt.year() as i32, Month::try_from(dt.month())?, dt.day())?; let time = Time::from_hms(dt.hour(), dt.minute(), dt.second())?; Ok(PrimitiveDateTime::new(date, time)) } } pub const MIN_VERSION: u8 = 10; pub const DEFAULT_VERSION: u8 = 45; /// Structure representing a ZIP file. #[derive(Debug, Clone, Default)] pub struct ZipFileData { /// Compatibility of the file attribute information pub system: System, /// Specification version pub version_made_by: u8, /// True if the file is encrypted. pub encrypted: bool, /// True if file_name and file_comment are UTF8 pub is_utf8: bool, /// True if the file uses a data-descriptor section pub using_data_descriptor: bool, /// Compression method used to store the file pub compression_method: crate::compression::CompressionMethod, /// Compression level to store the file pub compression_level: Option, /// Last modified time. This will only have a 2 second precision. 
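As the `last_modified_time` doc notes, the format keeps only 2-second precision: seconds are stored as `second >> 1` and doubled on the way out, so odd values round down. A small illustrative sketch:

```rust
/// Round-trip a seconds value through the 5-bit DOS field.
fn roundtrip_second(second: u8) -> u8 {
    let stored = (second as u16) >> 1; // pack: halve, keeping 5 bits
    ((stored & 0x1f) << 1) as u8 // unpack: double
}

fn main() {
    assert_eq!(roundtrip_second(30), 30); // even seconds survive
    assert_eq!(roundtrip_second(31), 30); // odd seconds lose a tick
    assert_eq!(roundtrip_second(58), 58); // 58 is the largest storable value
}
```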
pub last_modified_time: Option, /// CRC32 checksum pub crc32: u32, /// Size of the file in the ZIP pub compressed_size: u64, /// Size of the file when extracted pub uncompressed_size: u64, /// Name of the file pub file_name: Box, /// Raw file name. To be used when file_name was incorrectly decoded. pub file_name_raw: Box<[u8]>, /// Extra field usually used for storage expansion pub extra_field: Option>>, /// Extra field only written to central directory pub central_extra_field: Option>>, /// File comment pub file_comment: Box, /// Specifies where the local header of the file starts pub header_start: u64, /// Specifies where the extra data of the file starts pub extra_data_start: Option, /// Specifies where the central header of the file starts /// /// Note that when this is not known, it is set to 0 pub central_header_start: u64, /// Specifies where the compressed data of the file starts pub data_start: OnceLock, /// External file attributes pub external_attributes: u32, /// Reserve local ZIP64 extra field pub large_file: bool, /// AES mode if applicable pub aes_mode: Option<(AesMode, AesVendorVersion, CompressionMethod)>, /// Specifies where in the extra data the AES metadata starts pub aes_extra_data_start: u64, /// extra fields, see pub extra_fields: Vec, } impl ZipFileData { /// Get the starting offset of the data of the compressed file pub fn data_start(&self) -> u64 { *self.data_start.get().unwrap() } #[allow(dead_code)] pub fn is_dir(&self) -> bool { is_dir(&self.file_name) } pub fn file_name_sanitized(&self) -> PathBuf { let no_null_filename = match self.file_name.find('\0') { Some(index) => &self.file_name[0..index], None => &self.file_name, } .to_string(); // zip files can contain both / and \ as separators regardless of the OS // and as we want to return a sanitized PathBuf that only supports the // OS separator let's convert incompatible separators to compatible ones let separator = path::MAIN_SEPARATOR; let opposite_separator = match separator { '/' => 
'\\', _ => '/', }; let filename = no_null_filename.replace(&opposite_separator.to_string(), &separator.to_string()); Path::new(&filename) .components() .filter(|component| matches!(*component, Component::Normal(..))) .fold(PathBuf::new(), |mut path, ref cur| { path.push(cur.as_os_str()); path }) } /// Simplify the file name by removing the prefix and parent directories and only return normal components pub(crate) fn simplified_components(&self) -> Option> { if self.file_name.contains('\0') { return None; } let input = Path::new(OsStr::new(&*self.file_name)); crate::path::simplified_components(input) } pub(crate) fn enclosed_name(&self) -> Option { if self.file_name.contains('\0') { return None; } let path = PathBuf::from(self.file_name.to_string()); let mut depth = 0usize; for component in path.components() { match component { Component::Prefix(_) | Component::RootDir => return None, Component::ParentDir => depth = depth.checked_sub(1)?, Component::Normal(_) => depth += 1, Component::CurDir => (), } } Some(path) } /// Get unix mode for the file pub(crate) const fn unix_mode(&self) -> Option { if self.external_attributes == 0 { return None; } match self.system { System::Unix => Some(self.external_attributes >> 16), System::Dos => { // Interpret MS-DOS directory bit let mut mode = if 0x10 == (self.external_attributes & 0x10) { ffi::S_IFDIR | 0o0775 } else { ffi::S_IFREG | 0o0664 }; if 0x01 == (self.external_attributes & 0x01) { // Read-only bit; strip write permissions mode &= 0o0555; } Some(mode) } _ => None, } } /// PKZIP version needed to open this file (from APPNOTE 4.4.3.2). 
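`enclosed_name` above rejects names that could escape the extraction root (the classic zip-slip problem) by counting path depth; a standalone sketch of the same check, with an illustrative function name:

```rust
use std::path::{Component, Path};

/// Return true only if `name` stays inside a hypothetical extraction root:
/// no NUL bytes, no absolute components, and no `..` that climbs past the top.
fn is_enclosed(name: &str) -> bool {
    if name.contains('\0') {
        return false;
    }
    let mut depth: usize = 0;
    for component in Path::new(name).components() {
        match component {
            Component::Prefix(_) | Component::RootDir => return false,
            Component::ParentDir => {
                if depth == 0 {
                    return false;
                }
                depth -= 1;
            }
            Component::Normal(_) => depth += 1,
            Component::CurDir => {}
        }
    }
    true
}

fn main() {
    assert!(is_enclosed("docs/readme.txt"));
    assert!(is_enclosed("a/b/../c")); // resolves to a/c, still inside
    assert!(!is_enclosed("../escape")); // climbs above the root
    assert!(!is_enclosed("a/../../b")); // dips below zero depth mid-path
}
```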
pub fn version_needed(&self) -> u16 { let compression_version: u16 = match self.compression_method { CompressionMethod::Stored => MIN_VERSION.into(), #[cfg(feature = "_deflate-any")] CompressionMethod::Deflated => 20, #[cfg(feature = "bzip2")] CompressionMethod::Bzip2 => 46, #[cfg(feature = "deflate64")] CompressionMethod::Deflate64 => 21, #[cfg(feature = "lzma")] CompressionMethod::Lzma => 63, #[cfg(feature = "xz")] CompressionMethod::Xz => 63, // APPNOTE doesn't specify a version for Zstandard _ => DEFAULT_VERSION as u16, }; let crypto_version: u16 = if self.aes_mode.is_some() { 51 } else if self.encrypted { 20 } else { 10 }; let misc_feature_version: u16 = if self.large_file { 45 } else if self .unix_mode() .is_some_and(|mode| mode & S_IFDIR == S_IFDIR) { // file is directory 20 } else { 10 }; compression_version .max(crypto_version) .max(misc_feature_version) } #[inline(always)] pub(crate) fn extra_field_len(&self) -> usize { self.extra_field .as_ref() .map(|v| v.len()) .unwrap_or_default() } #[inline(always)] pub(crate) fn central_extra_field_len(&self) -> usize { self.central_extra_field .as_ref() .map(|v| v.len()) .unwrap_or_default() } #[allow(clippy::too_many_arguments)] pub(crate) fn initialize_local_block( name: S, options: &FileOptions, raw_values: ZipRawValues, header_start: u64, extra_data_start: Option, aes_extra_data_start: u64, compression_method: crate::compression::CompressionMethod, aes_mode: Option<(AesMode, AesVendorVersion, CompressionMethod)>, extra_field: &[u8], ) -> Self where S: ToString, { let permissions = options.permissions.unwrap_or(0o100644); let file_name: Box = name.to_string().into_boxed_str(); let file_name_raw: Box<[u8]> = file_name.bytes().collect(); let mut local_block = ZipFileData { system: System::Unix, version_made_by: DEFAULT_VERSION, encrypted: options.encrypt_with.is_some(), using_data_descriptor: false, is_utf8: !file_name.is_ascii(), compression_method, compression_level: options.compression_level, 
last_modified_time: Some(options.last_modified_time), crc32: raw_values.crc32, compressed_size: raw_values.compressed_size, uncompressed_size: raw_values.uncompressed_size, file_name, // Never used for saving, but used as map key in insert_file_data() file_name_raw, extra_field: Some(extra_field.to_vec().into()), central_extra_field: options.extended_options.central_extra_data().cloned(), file_comment: String::with_capacity(0).into_boxed_str(), header_start, data_start: OnceLock::new(), central_header_start: 0, external_attributes: permissions << 16, large_file: options.large_file, aes_mode, extra_fields: Vec::new(), extra_data_start, aes_extra_data_start, }; local_block.version_made_by = local_block.version_needed() as u8; local_block } pub(crate) fn from_local_block( block: ZipLocalEntryBlock, reader: &mut R, ) -> ZipResult { let ZipLocalEntryBlock { // magic, version_made_by, flags, compression_method, last_mod_time, last_mod_date, crc32, compressed_size, uncompressed_size, file_name_length, extra_field_length, .. } = block; let encrypted: bool = flags & 1 == 1; if encrypted { return Err(ZipError::UnsupportedArchive( "Encrypted files are not supported", )); } /* FIXME: these were previously incorrect: add testing! 
*/ /* flags & (1 << 3) != 0 */ let using_data_descriptor: bool = flags & (1 << 3) == 1 << 3; if using_data_descriptor { return Err(ZipError::UnsupportedArchive( "The file length is not available in the local header", )); } /* flags & (1 << 1) != 0 */ let is_utf8: bool = flags & (1 << 11) != 0; let compression_method = crate::CompressionMethod::parse_from_u16(compression_method); let file_name_length: usize = file_name_length.into(); let extra_field_length: usize = extra_field_length.into(); let mut file_name_raw = vec![0u8; file_name_length]; reader.read_exact(&mut file_name_raw)?; let mut extra_field = vec![0u8; extra_field_length]; reader.read_exact(&mut extra_field)?; let file_name: Box = match is_utf8 { true => String::from_utf8_lossy(&file_name_raw).into(), false => file_name_raw.clone().from_cp437().into(), }; let system: u8 = (version_made_by >> 8).try_into().unwrap(); Ok(ZipFileData { system: System::from(system), /* NB: this strips the top 8 bits! */ version_made_by: version_made_by as u8, encrypted, using_data_descriptor, is_utf8, compression_method, compression_level: None, last_modified_time: DateTime::try_from_msdos(last_mod_date, last_mod_time).ok(), crc32, compressed_size: compressed_size.into(), uncompressed_size: uncompressed_size.into(), file_name, file_name_raw: file_name_raw.into(), extra_field: Some(Arc::new(extra_field)), central_extra_field: None, file_comment: String::with_capacity(0).into_boxed_str(), // file comment is only available in the central directory // header_start and data start are not available, but also don't matter, since seeking is // not available. header_start: 0, data_start: OnceLock::new(), central_header_start: 0, // The external_attributes field is only available in the central directory. // We set this to zero, which should be valid as the docs state 'If input came // from standard input, this field is set to zero.' 
external_attributes: 0, large_file: false, aes_mode: None, extra_fields: Vec::new(), extra_data_start: None, aes_extra_data_start: 0, }) } fn is_utf8(&self) -> bool { std::str::from_utf8(&self.file_name_raw).is_ok() } fn is_ascii(&self) -> bool { self.file_name_raw.is_ascii() } fn flags(&self) -> u16 { let utf8_bit: u16 = if self.is_utf8() && !self.is_ascii() { 1u16 << 11 } else { 0 }; let encrypted_bit: u16 = if self.encrypted { 1u16 << 0 } else { 0 }; utf8_bit | encrypted_bit } fn clamp_size_field(&self, field: u64) -> u32 { if self.large_file { spec::ZIP64_BYTES_THR as u32 } else { field.min(spec::ZIP64_BYTES_THR).try_into().unwrap() } } pub(crate) fn local_block(&self) -> ZipResult { let compressed_size: u32 = self.clamp_size_field(self.compressed_size); let uncompressed_size: u32 = self.clamp_size_field(self.uncompressed_size); let extra_field_length: u16 = self .extra_field_len() .try_into() .map_err(|_| invalid!("Extra data field is too large"))?; let last_modified_time = self .last_modified_time .unwrap_or_else(DateTime::default_for_write); Ok(ZipLocalEntryBlock { magic: ZipLocalEntryBlock::MAGIC, version_made_by: self.version_needed(), flags: self.flags(), compression_method: self.compression_method.serialize_to_u16(), last_mod_time: last_modified_time.timepart(), last_mod_date: last_modified_time.datepart(), crc32: self.crc32, compressed_size, uncompressed_size, file_name_length: self.file_name_raw.len().try_into().unwrap(), extra_field_length, }) } pub(crate) fn block(&self) -> ZipResult { let extra_field_len: u16 = self.extra_field_len().try_into().unwrap(); let central_extra_field_len: u16 = self.central_extra_field_len().try_into().unwrap(); let last_modified_time = self .last_modified_time .unwrap_or_else(DateTime::default_for_write); let version_to_extract = self.version_needed(); let version_made_by = (self.version_made_by as u16).max(version_to_extract); Ok(ZipCentralEntryBlock { magic: ZipCentralEntryBlock::MAGIC, version_made_by: ((self.system 
as u16) << 8) | version_made_by, version_to_extract, flags: self.flags(), compression_method: self.compression_method.serialize_to_u16(), last_mod_time: last_modified_time.timepart(), last_mod_date: last_modified_time.datepart(), crc32: self.crc32, compressed_size: self .compressed_size .min(spec::ZIP64_BYTES_THR) .try_into() .unwrap(), uncompressed_size: self .uncompressed_size .min(spec::ZIP64_BYTES_THR) .try_into() .unwrap(), file_name_length: self.file_name_raw.len().try_into().unwrap(), extra_field_length: extra_field_len.checked_add(central_extra_field_len).ok_or( invalid!("Extra field length in central directory exceeds 64KiB"), )?, file_comment_length: self.file_comment.len().try_into().unwrap(), disk_number: 0, internal_file_attributes: 0, external_file_attributes: self.external_attributes, offset: self .header_start .min(spec::ZIP64_BYTES_THR) .try_into() .unwrap(), }) } pub(crate) fn zip64_extra_field_block(&self) -> Option { Zip64ExtraFieldBlock::maybe_new( self.large_file, self.uncompressed_size, self.compressed_size, self.header_start, ) } } #[derive(Copy, Clone, Debug)] #[repr(packed, C)] pub(crate) struct ZipCentralEntryBlock { magic: spec::Magic, pub version_made_by: u16, pub version_to_extract: u16, pub flags: u16, pub compression_method: u16, pub last_mod_time: u16, pub last_mod_date: u16, pub crc32: u32, pub compressed_size: u32, pub uncompressed_size: u32, pub file_name_length: u16, pub extra_field_length: u16, pub file_comment_length: u16, pub disk_number: u16, pub internal_file_attributes: u16, pub external_file_attributes: u32, pub offset: u32, } unsafe impl Pod for ZipCentralEntryBlock {} impl FixedSizeBlock for ZipCentralEntryBlock { const MAGIC: spec::Magic = spec::Magic::CENTRAL_DIRECTORY_HEADER_SIGNATURE; #[inline(always)] fn magic(self) -> spec::Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("Invalid Central Directory header"); to_and_from_le![ (magic, spec::Magic), (version_made_by, u16), (version_to_extract, u16), 
(flags, u16), (compression_method, u16), (last_mod_time, u16), (last_mod_date, u16), (crc32, u32), (compressed_size, u32), (uncompressed_size, u32), (file_name_length, u16), (extra_field_length, u16), (file_comment_length, u16), (disk_number, u16), (internal_file_attributes, u16), (external_file_attributes, u32), (offset, u32), ]; } #[derive(Copy, Clone, Debug)] #[repr(packed, C)] pub(crate) struct ZipLocalEntryBlock { magic: spec::Magic, pub version_made_by: u16, pub flags: u16, pub compression_method: u16, pub last_mod_time: u16, pub last_mod_date: u16, pub crc32: u32, pub compressed_size: u32, pub uncompressed_size: u32, pub file_name_length: u16, pub extra_field_length: u16, } unsafe impl Pod for ZipLocalEntryBlock {} impl FixedSizeBlock for ZipLocalEntryBlock { const MAGIC: spec::Magic = spec::Magic::LOCAL_FILE_HEADER_SIGNATURE; #[inline(always)] fn magic(self) -> spec::Magic { self.magic } const WRONG_MAGIC_ERROR: ZipError = invalid!("Invalid local file header"); to_and_from_le![ (magic, spec::Magic), (version_made_by, u16), (flags, u16), (compression_method, u16), (last_mod_time, u16), (last_mod_date, u16), (crc32, u32), (compressed_size, u32), (uncompressed_size, u32), (file_name_length, u16), (extra_field_length, u16), ]; } #[derive(Copy, Clone, Debug)] pub(crate) struct Zip64ExtraFieldBlock { magic: spec::ExtraFieldMagic, size: u16, uncompressed_size: Option, compressed_size: Option, header_start: Option, // Excluded fields: // u32: disk start number } impl Zip64ExtraFieldBlock { pub(crate) fn maybe_new( large_file: bool, uncompressed_size: u64, compressed_size: u64, header_start: u64, ) -> Option { let mut size: u16 = 0; let uncompressed_size = if uncompressed_size >= ZIP64_BYTES_THR || large_file { size += mem::size_of::() as u16; Some(uncompressed_size) } else { None }; let compressed_size = if compressed_size >= ZIP64_BYTES_THR || large_file { size += mem::size_of::() as u16; Some(compressed_size) } else { None }; let header_start = if header_start >= 
ZIP64_BYTES_THR { size += mem::size_of::() as u16; Some(header_start) } else { None }; if size == 0 { return None; } Some(Zip64ExtraFieldBlock { magic: spec::ExtraFieldMagic::ZIP64_EXTRA_FIELD_TAG, size, uncompressed_size, compressed_size, header_start, }) } } impl Zip64ExtraFieldBlock { pub fn full_size(&self) -> usize { assert!(self.size > 0); self.size as usize + mem::size_of::() + mem::size_of::() } pub fn serialize(self) -> Box<[u8]> { let Self { magic, size, uncompressed_size, compressed_size, header_start, } = self; let full_size = self.full_size(); let mut ret = Vec::with_capacity(full_size); ret.extend(magic.to_le_bytes()); ret.extend(u16::to_le_bytes(size)); if let Some(uncompressed_size) = uncompressed_size { ret.extend(u64::to_le_bytes(uncompressed_size)); } if let Some(compressed_size) = compressed_size { ret.extend(u64::to_le_bytes(compressed_size)); } if let Some(header_start) = header_start { ret.extend(u64::to_le_bytes(header_start)); } debug_assert_eq!(ret.len(), full_size); ret.into_boxed_slice() } } /// The encryption specification used to encrypt a file with AES. /// /// According to the [specification](https://www.winzip.com/win/en/aes_info.html#winzip11) AE-2 /// does not make use of the CRC check. #[derive(Copy, Clone, Debug)] #[repr(u16)] pub enum AesVendorVersion { Ae1 = 0x0001, Ae2 = 0x0002, } /// AES variant used. #[derive(Copy, Clone, Debug, Eq, PartialEq)] #[cfg_attr(fuzzing, derive(arbitrary::Arbitrary))] #[repr(u8)] pub enum AesMode { /// 128-bit AES encryption. Aes128 = 0x01, /// 192-bit AES encryption. Aes192 = 0x02, /// 256-bit AES encryption. Aes256 = 0x03, } #[cfg(feature = "aes-crypto")] impl AesMode { /// Length of the salt for the given AES mode. pub const fn salt_length(&self) -> usize { self.key_length() / 2 } /// Length of the key for the given AES mode. 
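The WinZip AES metadata above fixes the key length per mode (16/24/32 bytes) and derives the salt as half the key; sketched standalone, using the `repr(u8)` mode codes from this file:

```rust
/// Key length in bytes for each WinZip AES mode code (0x01..=0x03).
fn key_len(mode: u8) -> Option<usize> {
    match mode {
        0x01 => Some(16), // AES-128
        0x02 => Some(24), // AES-192
        0x03 => Some(32), // AES-256
        _ => None,
    }
}

/// Salt length is defined as half the key length.
fn salt_len(mode: u8) -> Option<usize> {
    key_len(mode).map(|k| k / 2)
}

fn main() {
    assert_eq!(key_len(0x03), Some(32));
    assert_eq!(salt_len(0x03), Some(16));
    assert_eq!(salt_len(0x01), Some(8));
    assert_eq!(key_len(0x07), None); // unknown mode code
}
```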
    pub const fn key_length(&self) -> usize {
        match self {
            Self::Aes128 => 16,
            Self::Aes192 => 24,
            Self::Aes256 => 32,
        }
    }
}

#[cfg(test)]
mod test {
    #[test]
    fn system() {
        use super::System;
        assert_eq!(u8::from(System::Dos), 0u8);
        assert_eq!(System::Dos as u8, 0u8);
        assert_eq!(System::Unix as u8, 3u8);
        assert_eq!(u8::from(System::Unix), 3u8);
        assert_eq!(System::from(0), System::Dos);
        assert_eq!(System::from(3), System::Unix);
        assert_eq!(u8::from(System::Unknown), 4u8);
        assert_eq!(System::Unknown as u8, 4u8);
    }

    #[test]
    fn sanitize() {
        use super::*;
        let file_name = "/path/../../../../etc/./passwd\0/etc/shadow".to_string();
        let data = ZipFileData {
            system: System::Dos,
            version_made_by: 0,
            encrypted: false,
            using_data_descriptor: false,
            is_utf8: true,
            compression_method: crate::compression::CompressionMethod::Stored,
            compression_level: None,
            last_modified_time: None,
            crc32: 0,
            compressed_size: 0,
            uncompressed_size: 0,
            file_name: file_name.clone().into_boxed_str(),
            file_name_raw: file_name.into_bytes().into_boxed_slice(),
            extra_field: None,
            central_extra_field: None,
            file_comment: String::with_capacity(0).into_boxed_str(),
            header_start: 0,
            extra_data_start: None,
            data_start: OnceLock::new(),
            central_header_start: 0,
            external_attributes: 0,
            large_file: false,
            aes_mode: None,
            aes_extra_data_start: 0,
            extra_fields: Vec::new(),
        };
        assert_eq!(data.file_name_sanitized(), PathBuf::from("path/etc/passwd"));
    }

    #[test]
    #[allow(clippy::unusual_byte_groupings)]
    fn datetime_default() {
        use super::DateTime;
        let dt = DateTime::default();
        assert_eq!(dt.timepart(), 0);
        assert_eq!(dt.datepart(), 0b0000000_0001_00001);
    }

    #[test]
    #[allow(clippy::unusual_byte_groupings)]
    fn datetime_max() {
        use super::DateTime;
        let dt = DateTime::from_date_and_time(2107, 12, 31, 23, 59, 58).unwrap();
        assert_eq!(dt.timepart(), 0b10111_111011_11101);
        assert_eq!(dt.datepart(), 0b1111111_1100_11111);
    }

    #[test]
    fn datetime_equality() {
        use super::DateTime;

        let dt = DateTime::from_date_and_time(2018, 11, 17, 10, 38, 30).unwrap();
        assert_eq!(
            dt,
            DateTime::from_date_and_time(2018, 11, 17, 10, 38, 30).unwrap()
        );
        assert_ne!(dt, DateTime::default());
    }

    #[test]
    fn datetime_order() {
        use std::cmp::Ordering;

        use super::DateTime;

        let dt = DateTime::from_date_and_time(2018, 11, 17, 10, 38, 30).unwrap();
        assert_eq!(
            dt.cmp(&DateTime::from_date_and_time(2018, 11, 17, 10, 38, 30).unwrap()),
            Ordering::Equal
        );
        // year
        assert!(dt < DateTime::from_date_and_time(2019, 11, 17, 10, 38, 30).unwrap());
        assert!(dt > DateTime::from_date_and_time(2017, 11, 17, 10, 38, 30).unwrap());
        // month
        assert!(dt < DateTime::from_date_and_time(2018, 12, 17, 10, 38, 30).unwrap());
        assert!(dt > DateTime::from_date_and_time(2018, 10, 17, 10, 38, 30).unwrap());
        // day
        assert!(dt < DateTime::from_date_and_time(2018, 11, 18, 10, 38, 30).unwrap());
        assert!(dt > DateTime::from_date_and_time(2018, 11, 16, 10, 38, 30).unwrap());
        // hour
        assert!(dt < DateTime::from_date_and_time(2018, 11, 17, 11, 38, 30).unwrap());
        assert!(dt > DateTime::from_date_and_time(2018, 11, 17, 9, 38, 30).unwrap());
        // minute
        assert!(dt < DateTime::from_date_and_time(2018, 11, 17, 10, 39, 30).unwrap());
        assert!(dt > DateTime::from_date_and_time(2018, 11, 17, 10, 37, 30).unwrap());
        // second (MS-DOS timestamps have two-second resolution, so 31 compares equal to 30)
        assert!(dt < DateTime::from_date_and_time(2018, 11, 17, 10, 38, 32).unwrap());
        assert_eq!(
            dt.cmp(&DateTime::from_date_and_time(2018, 11, 17, 10, 38, 31).unwrap()),
            Ordering::Equal
        );
        assert!(dt > DateTime::from_date_and_time(2018, 11, 17, 10, 38, 29).unwrap());
        assert!(dt > DateTime::from_date_and_time(2018, 11, 17, 10, 38, 28).unwrap());
    }

    #[test]
    fn datetime_display() {
        use super::DateTime;

        assert_eq!(format!("{}", DateTime::default()), "1980-01-01 00:00:00");
        assert_eq!(
            format!(
                "{}",
                DateTime::from_date_and_time(2018, 11, 17, 10, 38, 30).unwrap()
            ),
            "2018-11-17 10:38:30"
        );
        assert_eq!(
            format!(
                "{}",
                DateTime::from_date_and_time(2107, 12, 31, 23, 59, 58).unwrap()
            ),
            "2107-12-31 23:59:58"
        );
    }

    #[test]
    fn datetime_bounds() {
        use super::DateTime;
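        // Hedged addition (not in upstream zip2): since the MS-DOS format stores seconds
        // in two-second increments, an odd seconds value should round down to the even
        // value below it, as the `datetime_order` test above implies. Sketch:
        assert_eq!(
            DateTime::from_date_and_time(2000, 1, 1, 0, 0, 31).unwrap(),
            DateTime::from_date_and_time(2000, 1, 1, 0, 0, 30).unwrap()
        );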
        assert!(DateTime::from_date_and_time(2000, 1, 1, 23, 59, 60).is_ok());
        assert!(DateTime::from_date_and_time(2000, 1, 1, 24, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2000, 1, 1, 0, 60, 0).is_err());
        assert!(DateTime::from_date_and_time(2000, 1, 1, 0, 0, 61).is_err());

        assert!(DateTime::from_date_and_time(2107, 12, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(1980, 1, 1, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(1979, 1, 1, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(1980, 0, 1, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(1980, 1, 0, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2108, 12, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2107, 13, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2107, 12, 32, 0, 0, 0).is_err());

        assert!(DateTime::from_date_and_time(2018, 1, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 2, 28, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 2, 29, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2018, 3, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 4, 30, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 4, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2018, 5, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 6, 30, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 6, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2018, 7, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 8, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 9, 30, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 9, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2018, 10, 31, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 11, 30, 0, 0, 0).is_ok());
        assert!(DateTime::from_date_and_time(2018, 11, 31, 0, 0, 0).is_err());
        assert!(DateTime::from_date_and_time(2018, 12, 31, 0, 0, 0).is_ok());

        // leap year: divisible by 4
        assert!(DateTime::from_date_and_time(2024, 2, 29, 0, 0, 0).is_ok());
        // leap year: divisible by 100 and by 400
        assert!(DateTime::from_date_and_time(2000, 2, 29, 0, 0, 0).is_ok());
        // common year: divisible by 100 but not by 400
        assert!(DateTime::from_date_and_time(2100, 2, 29, 0, 0, 0).is_err());
    }

    #[cfg(feature = "time")]
    use time::{format_description::well_known::Rfc3339, OffsetDateTime, PrimitiveDateTime};

    #[cfg(feature = "time")]
    #[test]
    fn datetime_try_from_offset_datetime() {
        use time::macros::datetime;

        use super::DateTime;

        // 2018-11-17 10:38:30
        let dt = DateTime::try_from(datetime!(2018-11-17 10:38:30 UTC)).unwrap();
        assert_eq!(dt.year(), 2018);
        assert_eq!(dt.month(), 11);
        assert_eq!(dt.day(), 17);
        assert_eq!(dt.hour(), 10);
        assert_eq!(dt.minute(), 38);
        assert_eq!(dt.second(), 30);
    }

    #[cfg(feature = "time")]
    #[test]
    fn datetime_try_from_primitive_datetime() {
        use time::macros::datetime;

        use super::DateTime;

        // 2018-11-17 10:38:30
        let dt = DateTime::try_from(datetime!(2018-11-17 10:38:30)).unwrap();
        assert_eq!(dt.year(), 2018);
        assert_eq!(dt.month(), 11);
        assert_eq!(dt.day(), 17);
        assert_eq!(dt.hour(), 10);
        assert_eq!(dt.minute(), 38);
        assert_eq!(dt.second(), 30);
    }

    #[cfg(feature = "time")]
    #[test]
    fn datetime_try_from_bounds() {
        use super::DateTime;
        use time::macros::datetime;

        // 1979-12-31 23:59:59
        assert!(DateTime::try_from(datetime!(1979-12-31 23:59:59)).is_err());
        // 1980-01-01 00:00:00
        assert!(DateTime::try_from(datetime!(1980-01-01 00:00:00)).is_ok());
        // 2107-12-31 23:59:59
        assert!(DateTime::try_from(datetime!(2107-12-31 23:59:59)).is_ok());
        // 2108-01-01 00:00:00
        assert!(DateTime::try_from(datetime!(2108-01-01 00:00:00)).is_err());
    }

    #[cfg(feature = "time")]
    #[test]
    fn offset_datetime_try_from_datetime() {
        use time::macros::datetime;

        use super::DateTime;

        // 2018-11-17 10:38:30 UTC
        let dt =
            OffsetDateTime::try_from(DateTime::try_from_msdos(0x4D71, 0x54CF).unwrap()).unwrap();
        assert_eq!(dt, datetime!(2018-11-17 10:38:30 UTC));
    }

    #[cfg(feature = "time")]
    #[test]
    fn primitive_datetime_try_from_datetime() {
        use time::macros::datetime;

        use super::DateTime;

        // 2018-11-17 10:38:30
        let dt =
            PrimitiveDateTime::try_from(DateTime::try_from_msdos(0x4D71, 0x54CF).unwrap()).unwrap();
        assert_eq!(dt, datetime!(2018-11-17 10:38:30));
    }

    #[cfg(feature = "time")]
    #[test]
    fn offset_datetime_try_from_bounds() {
        use super::DateTime;

        // 1980-00-00 00:00:00
        assert!(
            OffsetDateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0x0000, 0x0000) })
                .is_err()
        );

        // 2107-15-31 31:63:62
        assert!(
            OffsetDateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0xFFFF, 0xFFFF) })
                .is_err()
        );
    }

    #[cfg(feature = "time")]
    #[test]
    fn primitive_datetime_try_from_bounds() {
        use super::DateTime;

        // 1980-00-00 00:00:00
        assert!(
            PrimitiveDateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0x0000, 0x0000) })
                .is_err()
        );

        // 2107-15-31 31:63:62
        assert!(
            PrimitiveDateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0xFFFF, 0xFFFF) })
                .is_err()
        );
    }

    #[cfg(feature = "jiff-02")]
    #[test]
    fn datetime_try_from_civil_datetime() {
        use jiff::civil;

        use super::DateTime;

        // 2018-11-17 10:38:30
        let dt = DateTime::try_from(civil::datetime(2018, 11, 17, 10, 38, 30, 0)).unwrap();
        assert_eq!(dt.year(), 2018);
        assert_eq!(dt.month(), 11);
        assert_eq!(dt.day(), 17);
        assert_eq!(dt.hour(), 10);
        assert_eq!(dt.minute(), 38);
        assert_eq!(dt.second(), 30);
    }

    #[cfg(feature = "jiff-02")]
    #[test]
    fn datetime_try_from_civil_datetime_bounds() {
        use jiff::civil;

        use super::DateTime;

        // 1979-12-31 23:59:59
        assert!(DateTime::try_from(civil::datetime(1979, 12, 31, 23, 59, 59, 0)).is_err());
        // 1980-01-01 00:00:00
        assert!(DateTime::try_from(civil::datetime(1980, 1, 1, 0, 0, 0, 0)).is_ok());
        // 2107-12-31 23:59:59
        assert!(DateTime::try_from(civil::datetime(2107, 12, 31, 23, 59, 59, 0)).is_ok());
        // 2108-01-01 00:00:00
        assert!(DateTime::try_from(civil::datetime(2108, 1, 1, 0, 0, 0, 0)).is_err());
    }

    #[cfg(feature = "jiff-02")]
    #[test]
    fn civil_datetime_try_from_datetime() {
        use jiff::civil;

        use super::DateTime;

        // 2018-11-17 10:38:30 UTC
        let dt =
            civil::DateTime::try_from(DateTime::try_from_msdos(0x4D71, 0x54CF).unwrap()).unwrap();
        assert_eq!(dt, civil::datetime(2018, 11, 17, 10, 38, 30, 0));
    }

    #[cfg(feature = "jiff-02")]
    #[test]
    fn civil_datetime_try_from_datetime_bounds() {
        use jiff::civil;

        use super::DateTime;

        // 1980-00-00 00:00:00
        assert!(
            civil::DateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0x0000, 0x0000) })
                .is_err()
        );

        // 2107-15-31 31:63:62
        assert!(
            civil::DateTime::try_from(unsafe { DateTime::from_msdos_unchecked(0xFFFF, 0xFFFF) })
                .is_err()
        );
    }

    #[test]
    #[allow(deprecated)]
    fn time_conversion() {
        use super::DateTime;
        let dt = DateTime::try_from_msdos(0x4D71, 0x54CF).unwrap();
        assert_eq!(dt.year(), 2018);
        assert_eq!(dt.month(), 11);
        assert_eq!(dt.day(), 17);
        assert_eq!(dt.hour(), 10);
        assert_eq!(dt.minute(), 38);
        assert_eq!(dt.second(), 30);

        let dt = DateTime::try_from((0x4D71, 0x54CF)).unwrap();
        assert_eq!(dt.year(), 2018);
        assert_eq!(dt.month(), 11);
        assert_eq!(dt.day(), 17);
        assert_eq!(dt.hour(), 10);
        assert_eq!(dt.minute(), 38);
        assert_eq!(dt.second(), 30);

        #[cfg(feature = "time")]
        assert_eq!(
            dt.to_time().unwrap().format(&Rfc3339).unwrap(),
            "2018-11-17T10:38:30Z"
        );

        assert_eq!(<(u16, u16)>::from(dt), (0x4D71, 0x54CF));
    }

    #[test]
    #[allow(deprecated)]
    fn time_out_of_bounds() {
        use super::DateTime;
        let dt = unsafe { DateTime::from_msdos_unchecked(0xFFFF, 0xFFFF) };
        assert_eq!(dt.year(), 2107);
        assert_eq!(dt.month(), 15);
        assert_eq!(dt.day(), 31);
        assert_eq!(dt.hour(), 31);
        assert_eq!(dt.minute(), 63);
        assert_eq!(dt.second(), 62);

        #[cfg(feature = "time")]
        assert!(dt.to_time().is_err());

        let dt = unsafe { DateTime::from_msdos_unchecked(0x0000, 0x0000) };
        assert_eq!(dt.year(), 1980);
        assert_eq!(dt.month(), 0);
        assert_eq!(dt.day(), 0);
        assert_eq!(dt.hour(), 0);
        assert_eq!(dt.minute(), 0);
        assert_eq!(dt.second(), 0);

        #[cfg(feature = "time")]
        assert!(dt.to_time().is_err());
    }

    #[cfg(feature = "time")]
    #[test]
    fn time_at_january() {
        use super::DateTime;

        // 2020-01-01 00:00:00
        let clock = OffsetDateTime::from_unix_timestamp(1_577_836_800).unwrap();
        assert!(DateTime::try_from(PrimitiveDateTime::new(clock.date(), clock.time())).is_ok());
    }
}

// ===== zip-2.5.0/src/unstable.rs =====

#![allow(missing_docs)]
use std::borrow::Cow;
use std::io;
use std::io::{Read, Write};
use std::path::{Component, Path, MAIN_SEPARATOR};

/// Provides high level API for reading from a stream.
pub mod stream {
    pub use crate::read::stream::*;
}

/// Types for creating ZIP archives.
pub mod write {
    use crate::write::{FileOptionExtension, FileOptions};
    /// Unstable methods for [`FileOptions`].
    pub trait FileOptionsExt {
        /// Write the file with the given password using the deprecated ZipCrypto algorithm.
        ///
        /// This is not recommended for new archives, as ZipCrypto is not secure.
        fn with_deprecated_encryption(self, password: &[u8]) -> Self;
    }
    impl<T: FileOptionExtension> FileOptionsExt for FileOptions<'_, T> {
        fn with_deprecated_encryption(self, password: &[u8]) -> FileOptions<'static, T> {
            self.with_deprecated_encryption(password)
        }
    }
}

/// Helper methods for writing unsigned integers in little-endian form.
pub trait LittleEndianWriteExt: Write {
    fn write_u16_le(&mut self, input: u16) -> io::Result<()> {
        self.write_all(&input.to_le_bytes())
    }

    fn write_u32_le(&mut self, input: u32) -> io::Result<()> {
        self.write_all(&input.to_le_bytes())
    }

    fn write_u64_le(&mut self, input: u64) -> io::Result<()> {
        self.write_all(&input.to_le_bytes())
    }

    fn write_u128_le(&mut self, input: u128) -> io::Result<()> {
        self.write_all(&input.to_le_bytes())
    }
}

impl<W: Write> LittleEndianWriteExt for W {}

/// Helper methods for reading unsigned integers in little-endian form.
pub trait LittleEndianReadExt: Read {
    fn read_u16_le(&mut self) -> io::Result<u16> {
        let mut out = [0u8; 2];
        self.read_exact(&mut out)?;
        Ok(u16::from_le_bytes(out))
    }

    fn read_u32_le(&mut self) -> io::Result<u32> {
        let mut out = [0u8; 4];
        self.read_exact(&mut out)?;
        Ok(u32::from_le_bytes(out))
    }

    fn read_u64_le(&mut self) -> io::Result<u64> {
        let mut out = [0u8; 8];
        self.read_exact(&mut out)?;
        Ok(u64::from_le_bytes(out))
    }
}

impl<R: Read> LittleEndianReadExt for R {}

/// Converts a path to the ZIP format (forward-slash-delimited and normalized).
pub fn path_to_string<T: AsRef<Path>>(path: T) -> Box<str> {
    let mut maybe_original = None;
    if let Some(original) = path.as_ref().to_str() {
        if original.is_empty() || original == "." || original == ".." {
            return String::new().into_boxed_str();
        }
        if original.starts_with(MAIN_SEPARATOR) {
            if original.len() == 1 {
                return MAIN_SEPARATOR.to_string().into_boxed_str();
            } else if (MAIN_SEPARATOR == '/' || !original[1..].contains(MAIN_SEPARATOR))
                && !original.ends_with('.')
                && !original.contains([MAIN_SEPARATOR, MAIN_SEPARATOR])
                && !original.contains([MAIN_SEPARATOR, '.', MAIN_SEPARATOR])
                && !original.contains([MAIN_SEPARATOR, '.', '.', MAIN_SEPARATOR])
            {
                maybe_original = Some(&original[1..]);
            }
        } else if !original.contains(MAIN_SEPARATOR) {
            return original.into();
        }
    }
    let mut recreate = maybe_original.is_none();
    let mut normalized_components = Vec::new();

    for component in path.as_ref().components() {
        match component {
            Component::Normal(os_str) => match os_str.to_str() {
                Some(valid_str) => normalized_components.push(Cow::Borrowed(valid_str)),
                None => {
                    recreate = true;
                    normalized_components.push(os_str.to_string_lossy());
                }
            },
            Component::ParentDir => {
                recreate = true;
                normalized_components.pop();
            }
            _ => {
                recreate = true;
            }
        }
    }
    if recreate {
        normalized_components.join("/").into()
    } else {
        maybe_original.unwrap().into()
    }
}

// ===== zip-2.5.0/src/write.rs =====

//! Types for creating ZIP archives

#[cfg(feature = "aes-crypto")]
use crate::aes::AesWriter;
use crate::compression::CompressionMethod;
use crate::read::{parse_single_extra_field, Config, ZipArchive, ZipFile};
use crate::result::{invalid, ZipError, ZipResult};
use crate::spec::{self, FixedSizeBlock, Zip32CDEBlock};
#[cfg(feature = "aes-crypto")]
use crate::types::AesMode;
use crate::types::{
    ffi, AesVendorVersion, DateTime, Zip64ExtraFieldBlock, ZipFileData, ZipLocalEntryBlock,
    ZipRawValues, MIN_VERSION,
};
use crate::write::ffi::S_IFLNK;
#[cfg(any(feature = "_deflate-any", feature = "bzip2", feature = "zstd",))]
use core::num::NonZeroU64;
use crc32fast::Hasher;
use indexmap::IndexMap;
use std::borrow::ToOwned;
use std::default::Default;
use std::fmt::{Debug, Formatter};
use std::io;
use std::io::prelude::*;
use std::io::Cursor;
use std::io::{BufReader, SeekFrom};
use std::marker::PhantomData;
use std::mem;
use std::str::{from_utf8, Utf8Error};
use std::sync::Arc;

#[cfg(feature = "deflate-flate2")]
use flate2::{write::DeflateEncoder, Compression};

#[cfg(feature = "bzip2")]
use bzip2::write::BzEncoder;

#[cfg(feature = "deflate-zopfli")]
use zopfli::Options;

#[cfg(feature = "deflate-zopfli")]
use std::io::BufWriter;
use std::mem::size_of;
use std::path::Path;

#[cfg(feature = "zstd")]
use zstd::stream::write::Encoder as ZstdEncoder;

enum MaybeEncrypted<W: Write> {
    Unencrypted(W),
    #[cfg(feature = "aes-crypto")]
    Aes(AesWriter<W>),
    ZipCrypto(crate::zipcrypto::ZipCryptoWriter<W>),
}

impl<W: Write> Debug for MaybeEncrypted<W> {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        // Don't print W, since it may be a huge Vec
        f.write_str(match self {
            MaybeEncrypted::Unencrypted(_) => "Unencrypted",
            #[cfg(feature = "aes-crypto")]
            MaybeEncrypted::Aes(_) => "AES",
            MaybeEncrypted::ZipCrypto(_) => "ZipCrypto",
        })
    }
}

impl<W: Write> Write for MaybeEncrypted<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        match self {
            MaybeEncrypted::Unencrypted(w) => w.write(buf),
            #[cfg(feature = "aes-crypto")]
            MaybeEncrypted::Aes(w) => w.write(buf),
            MaybeEncrypted::ZipCrypto(w) => w.write(buf),
        }
    }

    fn flush(&mut self) -> io::Result<()> {
        match self {
            MaybeEncrypted::Unencrypted(w) => w.flush(),
            #[cfg(feature = "aes-crypto")]
            MaybeEncrypted::Aes(w) => w.flush(),
            MaybeEncrypted::ZipCrypto(w) => w.flush(),
        }
    }
}

enum GenericZipWriter<W: Write + Seek> {
    Closed,
    Storer(MaybeEncrypted<W>),
    #[cfg(feature = "deflate-flate2")]
    Deflater(DeflateEncoder<MaybeEncrypted<W>>),
    #[cfg(feature = "deflate-zopfli")]
    ZopfliDeflater(zopfli::DeflateEncoder<MaybeEncrypted<W>>),
    #[cfg(feature = "deflate-zopfli")]
    BufferedZopfliDeflater(BufWriter<zopfli::DeflateEncoder<MaybeEncrypted<W>>>),
    #[cfg(feature = "bzip2")]
    Bzip2(BzEncoder<MaybeEncrypted<W>>),
    #[cfg(feature = "zstd")]
    Zstd(ZstdEncoder<'static, MaybeEncrypted<W>>),
    #[cfg(feature = "xz")]
    Xz(xz2::write::XzEncoder<MaybeEncrypted<W>>),
}

impl<W: Write + Seek> Debug for GenericZipWriter<W> {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            Closed => f.write_str("Closed"),
            Storer(w) => f.write_fmt(format_args!("Storer({:?})", w)),
            #[cfg(feature = "deflate-flate2")]
            GenericZipWriter::Deflater(w) => {
                f.write_fmt(format_args!("Deflater({:?})", w.get_ref()))
            }
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::ZopfliDeflater(_) => f.write_str("ZopfliDeflater"),
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::BufferedZopfliDeflater(_) => f.write_str("BufferedZopfliDeflater"),
            #[cfg(feature = "bzip2")]
            GenericZipWriter::Bzip2(w) => f.write_fmt(format_args!("Bzip2({:?})", w.get_ref())),
            #[cfg(feature = "zstd")]
            GenericZipWriter::Zstd(w) => f.write_fmt(format_args!("Zstd({:?})", w.get_ref())),
            #[cfg(feature = "xz")]
            GenericZipWriter::Xz(w) => f.write_fmt(format_args!("Xz({:?})", w.get_ref())),
        }
    }
}

// Put the struct declaration in a private module to convince rustdoc to display ZipWriter nicely
pub(crate) mod zip_writer {
    use super::*;
    /// ZIP archive generator
    ///
    /// Handles the bookkeeping involved in building an archive, and provides an
    /// API to edit its contents.
    ///
    /// ```
    /// # fn doit() -> zip::result::ZipResult<()>
    /// # {
    /// # use zip::ZipWriter;
    /// use std::io::Write;
    /// use zip::write::SimpleFileOptions;
    ///
    /// // We use a buffer here, though you'd normally use a `File`
    /// let mut buf = [0; 65536];
    /// let mut zip = ZipWriter::new(std::io::Cursor::new(&mut buf[..]));
    ///
    /// let options = SimpleFileOptions::default().compression_method(zip::CompressionMethod::Stored);
    /// zip.start_file("hello_world.txt", options)?;
    /// zip.write(b"Hello, World!")?;
    ///
    /// // Apply the changes you've made.
    /// // Dropping the `ZipWriter` will have the same effect, but may silently fail
    /// zip.finish()?;
    ///
    /// # Ok(())
    /// # }
    /// # doit().unwrap();
    /// ```
    pub struct ZipWriter<W: Write + Seek> {
        pub(super) inner: GenericZipWriter<W>,
        pub(super) files: IndexMap<Box<str>, ZipFileData>,
        pub(super) stats: ZipWriterStats,
        pub(super) writing_to_file: bool,
        pub(super) writing_raw: bool,
        pub(super) comment: Box<[u8]>,
        pub(super) zip64_comment: Option<Box<[u8]>>,
        pub(super) flush_on_finish_file: bool,
    }

    impl<W: Write + Seek> Debug for ZipWriter<W> {
        fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
            f.write_fmt(format_args!(
                "ZipWriter {{files: {:?}, stats: {:?}, writing_to_file: {}, writing_raw: {}, comment: {:?}, flush_on_finish_file: {}}}",
                self.files, self.stats, self.writing_to_file, self.writing_raw, self.comment,
                self.flush_on_finish_file
            ))
        }
    }
}

#[doc(inline)]
pub use self::sealed::FileOptionExtension;
use crate::result::ZipError::UnsupportedArchive;
use crate::unstable::path_to_string;
use crate::unstable::LittleEndianWriteExt;
use crate::write::GenericZipWriter::{Closed, Storer};
use crate::zipcrypto::ZipCryptoKeys;
use crate::CompressionMethod::Stored;
pub use zip_writer::ZipWriter;

#[derive(Default, Debug)]
struct ZipWriterStats {
    hasher: Hasher,
    start: u64,
    bytes_written: u64,
}

mod sealed {
    use std::sync::Arc;

    use super::ExtendedFileOptions;

    pub trait Sealed {}

    /// File options Extensions
    #[doc(hidden)]
    pub trait FileOptionExtension: Default + Sealed {
        /// Extra Data
        fn extra_data(&self) -> Option<&Arc<Vec<u8>>>;
        /// Central Extra Data
        fn central_extra_data(&self) -> Option<&Arc<Vec<u8>>>;
    }
    impl Sealed for () {}
    impl FileOptionExtension for () {
        fn extra_data(&self) -> Option<&Arc<Vec<u8>>> {
            None
        }
        fn central_extra_data(&self) -> Option<&Arc<Vec<u8>>> {
            None
        }
    }
    impl Sealed for ExtendedFileOptions {}

    impl FileOptionExtension for ExtendedFileOptions {
        fn extra_data(&self) -> Option<&Arc<Vec<u8>>> {
            Some(&self.extra_data)
        }
        fn central_extra_data(&self) -> Option<&Arc<Vec<u8>>> {
            Some(&self.central_extra_data)
        }
    }
}

#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub(crate) enum EncryptWith<'k> {
    #[cfg(feature = "aes-crypto")]
    Aes {
        mode: AesMode,
        password: &'k str,
    },
    ZipCrypto(ZipCryptoKeys, PhantomData<&'k ()>),
}

#[cfg(fuzzing)]
impl<'a> arbitrary::Arbitrary<'a> for EncryptWith<'a> {
    fn arbitrary(u: &mut arbitrary::Unstructured<'a>) -> arbitrary::Result<Self> {
        #[cfg(feature = "aes-crypto")]
        if bool::arbitrary(u)? {
            return Ok(EncryptWith::Aes {
                mode: AesMode::arbitrary(u)?,
                password: u.arbitrary::<&str>()?,
            });
        }

        Ok(EncryptWith::ZipCrypto(
            ZipCryptoKeys::arbitrary(u)?,
            PhantomData,
        ))
    }
}

/// Metadata for a file to be written
#[derive(Clone, Debug, Copy, Eq, PartialEq)]
pub struct FileOptions<'k, T: FileOptionExtension> {
    pub(crate) compression_method: CompressionMethod,
    pub(crate) compression_level: Option<i64>,
    pub(crate) last_modified_time: DateTime,
    pub(crate) permissions: Option<u32>,
    pub(crate) large_file: bool,
    pub(crate) encrypt_with: Option<EncryptWith<'k>>,
    pub(crate) extended_options: T,
    pub(crate) alignment: u16,
    #[cfg(feature = "deflate-zopfli")]
    pub(super) zopfli_buffer_size: Option<usize>,
}

/// Simple File Options. Can be copied and good for simple writing zip files
pub type SimpleFileOptions = FileOptions<'static, ()>;
/// Adds Extra Data and Central Extra Data. It does not implement copy.
pub type FullFileOptions<'k> = FileOptions<'k, ExtendedFileOptions>;
/// The Extension for Extra Data and Central Extra Data
#[derive(Clone, Default, Eq, PartialEq)]
pub struct ExtendedFileOptions {
    extra_data: Arc<Vec<u8>>,
    central_extra_data: Arc<Vec<u8>>,
}

impl ExtendedFileOptions {
    /// Adds an extra data field, unless we detect that it's invalid.
    pub fn add_extra_data(
        &mut self,
        header_id: u16,
        data: Box<[u8]>,
        central_only: bool,
    ) -> ZipResult<()> {
        let len = data.len() + 4;
        if self.extra_data.len() + self.central_extra_data.len() + len > u16::MAX as usize {
            Err(invalid!("Extra data field would be longer than allowed"))
        } else {
            let field = if central_only {
                &mut self.central_extra_data
            } else {
                &mut self.extra_data
            };
            let vec = Arc::get_mut(field);
            let vec = match vec {
                Some(exclusive) => exclusive,
                None => {
                    *field = Arc::new(field.to_vec());
                    Arc::get_mut(field).unwrap()
                }
            };
            Self::add_extra_data_unchecked(vec, header_id, data)?;
            Self::validate_extra_data(vec, true)?;
            Ok(())
        }
    }

    pub(crate) fn add_extra_data_unchecked(
        vec: &mut Vec<u8>,
        header_id: u16,
        data: Box<[u8]>,
    ) -> Result<(), ZipError> {
        vec.reserve_exact(data.len() + 4);
        vec.write_u16_le(header_id)?;
        vec.write_u16_le(data.len() as u16)?;
        vec.write_all(&data)?;
        Ok(())
    }

    fn validate_extra_data(data: &[u8], disallow_zip64: bool) -> ZipResult<()> {
        let len = data.len() as u64;
        if len == 0 {
            return Ok(());
        }
        if len > u16::MAX as u64 {
            return Err(ZipError::Io(io::Error::new(
                io::ErrorKind::Other,
                "Extra-data field can't exceed u16::MAX bytes",
            )));
        }
        let mut data = Cursor::new(data);
        let mut pos = data.position();
        while pos < len {
            if len - data.position() < 4 {
                return Err(ZipError::Io(io::Error::new(
                    io::ErrorKind::Other,
                    "Extra-data field doesn't have room for ID and length",
                )));
            }

            #[cfg(not(feature = "unreserved"))]
            {
                use crate::unstable::LittleEndianReadExt;
                let header_id = data.read_u16_le()?;
                if EXTRA_FIELD_MAPPING.contains(&header_id) {
                    return Err(ZipError::Io(io::Error::new(
                        io::ErrorKind::Other,
                        format!(
                            "Extra data header ID {header_id:#06} requires crate feature \"unreserved\"",
                        ),
                    )));
                }
                data.seek(SeekFrom::Current(-2))?;
            }

            parse_single_extra_field(&mut ZipFileData::default(), &mut data, pos, disallow_zip64)?;
            pos = data.position();
        }
        Ok(())
    }
}

impl Debug for ExtendedFileOptions {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), std::fmt::Error> {
        f.write_fmt(format_args!(
            "ExtendedFileOptions {{extra_data: vec!{:?}.into(), central_extra_data: vec!{:?}.into()}}",
            self.extra_data, self.central_extra_data
        ))
    }
}

#[cfg(fuzzing)]
impl<'a> arbitrary::Arbitrary<'a> for FileOptions<'a, ExtendedFileOptions> {
    fn arbitrary(u: &mut arbitrary::Unstructured<'a>) -> arbitrary::Result<Self> {
        let mut options = FullFileOptions {
            compression_method: CompressionMethod::arbitrary(u)?,
            compression_level: if bool::arbitrary(u)? {
                Some(u.int_in_range(0..=24)?)
            } else {
                None
            },
            last_modified_time: DateTime::arbitrary(u)?,
            permissions: Option::<u32>::arbitrary(u)?,
            large_file: bool::arbitrary(u)?,
            encrypt_with: Option::<EncryptWith>::arbitrary(u)?,
            alignment: u16::arbitrary(u)?,
            #[cfg(feature = "deflate-zopfli")]
            zopfli_buffer_size: None,
            ..Default::default()
        };
        #[cfg(feature = "deflate-zopfli")]
        if options.compression_method == CompressionMethod::Deflated && bool::arbitrary(u)? {
            options.zopfli_buffer_size =
                Some(if bool::arbitrary(u)? { 2 } else { 3 } << u.int_in_range(8..=20)?);
        }
        u.arbitrary_loop(Some(0), Some(10), |u| {
            options
                .add_extra_data(
                    u.int_in_range(2..=u16::MAX)?,
                    Box::<[u8]>::arbitrary(u)?,
                    bool::arbitrary(u)?,
                )
                .map_err(|_| arbitrary::Error::IncorrectFormat)?;
            Ok(core::ops::ControlFlow::Continue(()))
        })?;
        ZipWriter::new(Cursor::new(Vec::new()))
            .start_file("", options.clone())
            .map_err(|_| arbitrary::Error::IncorrectFormat)?;
        Ok(options)
    }
}

impl<T: FileOptionExtension> FileOptions<'_, T> {
    pub(crate) fn normalize(&mut self) {
        if !self.last_modified_time.is_valid() {
            self.last_modified_time = FileOptions::<T>::default().last_modified_time;
        }

        *self.permissions.get_or_insert(0o644) |= ffi::S_IFREG;
    }

    /// Set the compression method for the new file
    ///
    /// The default is `CompressionMethod::Deflated` if it is enabled. If not,
    /// `CompressionMethod::Bzip2` is the default if it is enabled. If neither `bzip2` nor `deflate`
    /// is enabled, `CompressionMethod::Zstd` is the default. If all else fails,
    /// `CompressionMethod::Stored` becomes the default and files are written uncompressed.
    #[must_use]
    pub const fn compression_method(mut self, method: CompressionMethod) -> Self {
        self.compression_method = method;
        self
    }

    /// Set the compression level for the new file
    ///
    /// `None` value specifies default compression level.
    ///
    /// Range of values depends on compression method:
    /// * `Deflated`: 10 - 264 for Zopfli, 0 - 9 for other encoders. Default is 24 if Zopfli is the
    ///   only encoder, or 6 otherwise.
    /// * `Bzip2`: 0 - 9. Default is 6
    /// * `Zstd`: -7 - 22, with zero being mapped to default level. Default is 3
    /// * others: only `None` is allowed
    #[must_use]
    pub const fn compression_level(mut self, level: Option<i64>) -> Self {
        self.compression_level = level;
        self
    }

    /// Set the last modified time
    ///
    /// The default is the current timestamp if the 'time' feature is enabled, and 1980-01-01
    /// otherwise
    #[must_use]
    pub const fn last_modified_time(mut self, mod_time: DateTime) -> Self {
        self.last_modified_time = mod_time;
        self
    }

    /// Set the permissions for the new file.
    ///
    /// The format is represented with unix-style permissions.
    /// The default is `0o644`, which represents `rw-r--r--` for files,
    /// and `0o755`, which represents `rwxr-xr-x` for directories.
    ///
    /// This method only preserves the file permissions bits (via a `& 0o777`) and discards
    /// higher file mode bits. So it cannot be used to denote an entry as a directory,
    /// symlink, or other special file type.
    #[must_use]
    pub const fn unix_permissions(mut self, mode: u32) -> Self {
        self.permissions = Some(mode & 0o777);
        self
    }

    /// Set whether the new file's compressed and uncompressed size is less than 4 GiB.
    ///
    /// If set to `false` and the file exceeds the limit, an I/O error is thrown and the file is
    /// aborted. If set to `true`, readers will require ZIP64 support and if the file does not
    /// exceed the limit, 20 B are wasted. The default is `false`.
    #[must_use]
    pub const fn large_file(mut self, large: bool) -> Self {
        self.large_file = large;
        self
    }

    pub(crate) fn with_deprecated_encryption(self, password: &[u8]) -> FileOptions<'static, T> {
        FileOptions {
            encrypt_with: Some(EncryptWith::ZipCrypto(
                ZipCryptoKeys::derive(password),
                PhantomData,
            )),
            ..self
        }
    }

    /// Set the AES encryption parameters.
    #[cfg(feature = "aes-crypto")]
    pub fn with_aes_encryption(self, mode: AesMode, password: &str) -> FileOptions<'_, T> {
        FileOptions {
            encrypt_with: Some(EncryptWith::Aes { mode, password }),
            ..self
        }
    }

    /// Sets the size of the buffer used to hold the next block that Zopfli will compress. The
    /// larger the buffer, the more effective the compression, but the more memory is required.
    /// A value of `None` indicates no buffer, which is recommended only when all non-empty writes
    /// are larger than about 32 KiB.
    #[must_use]
    #[cfg(feature = "deflate-zopfli")]
    pub const fn with_zopfli_buffer(mut self, size: Option<usize>) -> Self {
        self.zopfli_buffer_size = size;
        self
    }

    /// Returns the compression level currently set.
    pub const fn get_compression_level(&self) -> Option<i64> {
        self.compression_level
    }

    /// Sets the alignment to the given number of bytes.
    #[must_use]
    pub const fn with_alignment(mut self, alignment: u16) -> Self {
        self.alignment = alignment;
        self
    }
}

impl FileOptions<'_, ExtendedFileOptions> {
    /// Adds an extra data field.
    pub fn add_extra_data(
        &mut self,
        header_id: u16,
        data: Box<[u8]>,
        central_only: bool,
    ) -> ZipResult<()> {
        self.extended_options
            .add_extra_data(header_id, data, central_only)
    }

    /// Removes the extra data fields.
    #[must_use]
    pub fn clear_extra_data(mut self) -> Self {
        if !self.extended_options.extra_data.is_empty() {
            self.extended_options.extra_data = Arc::new(vec![]);
        }
        if !self.extended_options.central_extra_data.is_empty() {
            self.extended_options.central_extra_data = Arc::new(vec![]);
        }
        self
    }
}

impl<T: FileOptionExtension> Default for FileOptions<'_, T> {
    /// Construct a new FileOptions object
    fn default() -> Self {
        Self {
            compression_method: Default::default(),
            compression_level: None,
            last_modified_time: DateTime::default_for_write(),
            permissions: None,
            large_file: false,
            encrypt_with: None,
            extended_options: T::default(),
            alignment: 1,
            #[cfg(feature = "deflate-zopfli")]
            zopfli_buffer_size: Some(1 << 15),
        }
    }
}

impl<W: Write + Seek> Write for ZipWriter<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        if !self.writing_to_file {
            return Err(io::Error::new(
                io::ErrorKind::Other,
                "No file has been started",
            ));
        }
        if buf.is_empty() {
            return Ok(0);
        }
        match self.inner.ref_mut() {
            Some(ref mut w) => {
                let write_result = w.write(buf);
                if let Ok(count) = write_result {
                    self.stats.update(&buf[0..count]);
                    if self.stats.bytes_written > spec::ZIP64_BYTES_THR
                        && !self.files.last_mut().unwrap().1.large_file
                    {
                        let _ = self.abort_file();
                        return Err(io::Error::new(
                            io::ErrorKind::Other,
                            "Large file option has not been set",
                        ));
                    }
                }
                write_result
            }
            None => Err(io::Error::new(
                io::ErrorKind::BrokenPipe,
                "write(): ZipWriter was already closed",
            )),
        }
    }

    fn flush(&mut self) -> io::Result<()> {
        match self.inner.ref_mut() {
            Some(ref mut w) => w.flush(),
            None => Err(io::Error::new(
                io::ErrorKind::BrokenPipe,
                "flush(): ZipWriter was already closed",
            )),
        }
    }
}

impl ZipWriterStats {
    fn update(&mut self, buf: &[u8]) {
        self.hasher.update(buf);
        self.bytes_written += buf.len() as u64;
    }
}

impl<A: Read + Write + Seek> ZipWriter<A> {
    /// Initializes the archive from an existing ZIP archive, making it ready for append.
    ///
    /// This uses a default configuration to initially read the archive.
    pub fn new_append(readwriter: A) -> ZipResult<ZipWriter<A>> {
        Self::new_append_with_config(Default::default(), readwriter)
    }

    /// Initializes the archive from an existing ZIP archive, making it ready for append.
    ///
    /// This uses the given read configuration to initially read the archive.
    pub fn new_append_with_config(config: Config, mut readwriter: A) -> ZipResult<ZipWriter<A>> {
        readwriter.seek(SeekFrom::Start(0))?;
        let shared = ZipArchive::get_metadata(config, &mut readwriter)?;
        Ok(ZipWriter {
            inner: Storer(MaybeEncrypted::Unencrypted(readwriter)),
            files: shared.files,
            stats: Default::default(),
            writing_to_file: false,
            comment: shared.comment,
            zip64_comment: shared.zip64_comment,
            writing_raw: true, // avoid recomputing the last file's header
            flush_on_finish_file: false,
        })
    }

    /// `flush_on_finish_file` is designed to support a streaming `inner` that may unload flushed
    /// bytes. It flushes a file's header and body once it starts writing another file.
    /// A ZipWriter
    /// will not try to seek back to where a previous file was written unless
    /// either [`ZipWriter::abort_file`] is called while [`ZipWriter::is_writing_file`] returns
    /// false, or [`ZipWriter::deep_copy_file`] is called. In the latter case, it will only need to
    /// read previously-written files and not overwrite them.
    ///
    /// Note: when using an `inner` that cannot overwrite flushed bytes, do not wrap it in a
    /// [BufWriter], because that has a [Seek::seek] method that implicitly calls
    /// [BufWriter::flush], and ZipWriter needs to seek backward to update each file's header with
    /// the size and checksum after writing the body.
    ///
    /// This setting is false by default.
    pub fn set_flush_on_finish_file(&mut self, flush_on_finish_file: bool) {
        self.flush_on_finish_file = flush_on_finish_file;
    }
}

impl<A: Read + Write + Seek> ZipWriter<A> {
    /// Adds another copy of a file already in this archive. This will produce a larger but more
    /// widely-compatible archive compared to [Self::shallow_copy_file]. Does not copy alignment.
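The 4 GiB guard applied on every `write` call above can be sketched in isolation. This is a minimal std-only sketch, not a crate API: the constant value and the helper name are assumptions mirroring `spec::ZIP64_BYTES_THR` and the check in `ZipWriter::write`.

```rust
// Assumed value of `spec::ZIP64_BYTES_THR`: the largest size representable
// in a 32-bit ZIP header field.
const ZIP64_BYTES_THR: u64 = u32::MAX as u64;

// Mirror of the check `write` performs after updating its running stats:
// a file not flagged with `large_file(true)` may not grow past 4 GiB.
fn check_size_limit(bytes_written: u64, large_file: bool) -> Result<(), &'static str> {
    if bytes_written > ZIP64_BYTES_THR && !large_file {
        Err("Large file option has not been set")
    } else {
        Ok(())
    }
}

fn main() {
    assert!(check_size_limit(1024, false).is_ok());
    assert!(check_size_limit(ZIP64_BYTES_THR + 1, false).is_err());
    // With `large_file(true)`, the writer emits ZIP64 fields instead of failing.
    assert!(check_size_limit(ZIP64_BYTES_THR + 1, true).is_ok());
    println!("size-limit checks passed");
}
```

When the limit is exceeded, the real writer also aborts the current entry before returning the error, so the archive stays consistent.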
    pub fn deep_copy_file(&mut self, src_name: &str, dest_name: &str) -> ZipResult<()> {
        self.finish_file()?;
        if src_name == dest_name || self.files.contains_key(dest_name) {
            return Err(invalid!("That file already exists"));
        }
        let write_position = self.inner.get_plain().stream_position()?;
        let src_index = self.index_by_name(src_name)?;
        let src_data = &mut self.files[src_index];
        let src_data_start = src_data.data_start();
        debug_assert!(src_data_start <= write_position);
        let mut compressed_size = src_data.compressed_size;
        if compressed_size > (write_position - src_data_start) {
            compressed_size = write_position - src_data_start;
            src_data.compressed_size = compressed_size;
        }
        let mut reader = BufReader::new(self.inner.get_plain());
        reader.seek(SeekFrom::Start(src_data_start))?;
        let mut copy = vec![0; compressed_size as usize];
        reader.take(compressed_size).read_exact(&mut copy)?;
        self.inner
            .get_plain()
            .seek(SeekFrom::Start(write_position))?;
        let mut new_data = src_data.clone();
        let dest_name_raw = dest_name.as_bytes();
        new_data.file_name = dest_name.into();
        new_data.file_name_raw = dest_name_raw.into();
        new_data.is_utf8 = !dest_name.is_ascii();
        new_data.header_start = write_position;
        let extra_data_start = write_position
            + size_of::<ZipLocalEntryBlock>() as u64
            + new_data.file_name_raw.len() as u64;
        new_data.extra_data_start = Some(extra_data_start);
        let mut data_start = extra_data_start;
        if let Some(extra) = &src_data.extra_field {
            data_start += extra.len() as u64;
        }
        new_data.data_start.take();
        new_data.data_start.get_or_init(|| data_start);
        new_data.central_header_start = 0;
        let block = new_data.local_block()?;
        let index = self.insert_file_data(new_data)?;
        let new_data = &self.files[index];
        let result: io::Result<()> = (|| {
            let plain_writer = self.inner.get_plain();
            block.write(plain_writer)?;
            plain_writer.write_all(&new_data.file_name_raw)?;
            if let Some(data) = &new_data.extra_field {
                plain_writer.write_all(data)?;
            }
            debug_assert_eq!(data_start, plain_writer.stream_position()?);
            self.writing_to_file = true;
            plain_writer.write_all(&copy)?;
            if self.flush_on_finish_file {
                plain_writer.flush()?;
            }
            Ok(())
        })();
        self.ok_or_abort_file(result)?;
        self.writing_to_file = false;
        Ok(())
    }

    /// Like `deep_copy_file`, but uses Path arguments.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`. It
    /// ignores any `..` or Windows drive letter that would produce a path outside the ZIP file's
    /// root.
    pub fn deep_copy_file_from_path<T: AsRef<Path>, U: AsRef<Path>>(
        &mut self,
        src_path: T,
        dest_path: U,
    ) -> ZipResult<()> {
        let src = path_to_string(src_path);
        let dest = path_to_string(dest_path);
        self.deep_copy_file(&src, &dest)
    }

    /// Write the zip file into the backing stream, then produce a readable archive of that data.
    ///
    /// This method avoids parsing the central directory records at the end of the stream for
    /// a slight performance improvement over running [`ZipArchive::new()`] on the output of
    /// [`Self::finish()`].
    ///
    ///```
    /// # fn main() -> Result<(), zip::result::ZipError> {
    /// use std::io::{Cursor, prelude::*};
    /// use zip::{ZipArchive, ZipWriter, write::SimpleFileOptions};
    ///
    /// let buf = Cursor::new(Vec::new());
    /// let mut zip = ZipWriter::new(buf);
    /// let options = SimpleFileOptions::default();
    /// zip.start_file("a.txt", options)?;
    /// zip.write_all(b"hello\n")?;
    ///
    /// let mut zip = zip.finish_into_readable()?;
    /// let mut s: String = String::new();
    /// zip.by_name("a.txt")?.read_to_string(&mut s)?;
    /// assert_eq!(s, "hello\n");
    /// # Ok(())
    /// # }
    ///```
    pub fn finish_into_readable(mut self) -> ZipResult<ZipArchive<A>> {
        let central_start = self.finalize()?;
        let inner = mem::replace(&mut self.inner, Closed).unwrap();
        let comment = mem::take(&mut self.comment);
        let zip64_comment = mem::take(&mut self.zip64_comment);
        let files = mem::take(&mut self.files);
        let archive =
            ZipArchive::from_finalized_writer(files, comment, zip64_comment, inner, central_start)?;
        Ok(archive)
    }
}

impl<W: Write + Seek> ZipWriter<W> {
    /// Initializes the archive.
    ///
    /// Before writing to this object, the [`ZipWriter::start_file`] function should be called.
    /// After a successful write, the file remains open for writing. After a failed write, call
    /// [`ZipWriter::is_writing_file`] to determine if the file remains open.
    pub fn new(inner: W) -> ZipWriter<W> {
        ZipWriter {
            inner: Storer(MaybeEncrypted::Unencrypted(inner)),
            files: IndexMap::new(),
            stats: Default::default(),
            writing_to_file: false,
            writing_raw: false,
            comment: Box::new([]),
            zip64_comment: None,
            flush_on_finish_file: false,
        }
    }

    /// Returns true if a file is currently open for writing.
    pub const fn is_writing_file(&self) -> bool {
        self.writing_to_file && !self.inner.is_closed()
    }

    /// Set ZIP archive comment.
    pub fn set_comment<S>(&mut self, comment: S)
    where
        S: Into<Box<str>>,
    {
        self.set_raw_comment(comment.into().into_boxed_bytes())
    }

    /// Set ZIP archive comment.
    ///
    /// This sets the raw bytes of the comment. The comment
    /// is typically expected to be encoded in UTF-8.
    pub fn set_raw_comment(&mut self, comment: Box<[u8]>) {
        self.comment = comment;
    }

    /// Get ZIP archive comment.
    pub fn get_comment(&mut self) -> Result<&str, Utf8Error> {
        from_utf8(self.get_raw_comment())
    }

    /// Get ZIP archive comment.
    ///
    /// This returns the raw bytes of the comment. The comment
    /// is typically expected to be encoded in UTF-8.
    pub const fn get_raw_comment(&self) -> &[u8] {
        &self.comment
    }

    /// Set ZIP64 archive comment.
    pub fn set_zip64_comment<S>(&mut self, comment: Option<S>)
    where
        S: Into<Box<str>>,
    {
        self.set_raw_zip64_comment(comment.map(|v| v.into().into_boxed_bytes()))
    }

    /// Set ZIP64 archive comment.
    ///
    /// This sets the raw bytes of the comment. The comment
    /// is typically expected to be encoded in UTF-8.
    pub fn set_raw_zip64_comment(&mut self, comment: Option<Box<[u8]>>) {
        self.zip64_comment = comment;
    }

    /// Get ZIP64 archive comment.
    pub fn get_zip64_comment(&mut self) -> Option<Result<&str, Utf8Error>> {
        self.get_raw_zip64_comment().map(from_utf8)
    }

    /// Get ZIP64 archive comment.
    ///
    /// This returns the raw bytes of the comment.
    /// The comment
    /// is typically expected to be encoded in UTF-8.
    pub fn get_raw_zip64_comment(&self) -> Option<&[u8]> {
        self.zip64_comment.as_deref()
    }

    /// Set the file length and crc32 manually.
    ///
    /// # Safety
    ///
    /// This overwrites the internal crc32 calculation. It should only be used in case
    /// the underlying [Write] is written independently and you need to adjust the zip metadata.
    pub unsafe fn set_file_metadata(&mut self, length: u64, crc32: u32) -> ZipResult<()> {
        if !self.writing_to_file {
            return Err(ZipError::Io(io::Error::new(
                io::ErrorKind::Other,
                "No file has been started",
            )));
        }
        self.stats.hasher = Hasher::new_with_initial_len(crc32, length);
        self.stats.bytes_written = length;
        Ok(())
    }

    fn ok_or_abort_file<T, E: Into<ZipError>>(&mut self, result: Result<T, E>) -> ZipResult<T> {
        match result {
            Err(e) => {
                let _ = self.abort_file();
                Err(e.into())
            }
            Ok(t) => Ok(t),
        }
    }

    /// Start a new file with the requested options.
    fn start_entry<S: ToString, T: FileOptionExtension>(
        &mut self,
        name: S,
        options: FileOptions<T>,
        raw_values: Option<ZipRawValues>,
    ) -> ZipResult<()> {
        self.finish_file()?;

        let header_start = self.inner.get_plain().stream_position()?;

        let raw_values = raw_values.unwrap_or(ZipRawValues {
            crc32: 0,
            compressed_size: 0,
            uncompressed_size: 0,
        });

        let mut extra_data = match options.extended_options.extra_data() {
            Some(data) => data.to_vec(),
            None => vec![],
        };
        let central_extra_data = options.extended_options.central_extra_data();
        if let Some(zip64_block) =
            Zip64ExtraFieldBlock::maybe_new(options.large_file, 0, 0, header_start)
        {
            let mut new_extra_data = zip64_block.serialize().into_vec();
            new_extra_data.append(&mut extra_data);
            extra_data = new_extra_data;
        }

        // Write AES encryption extra data.
        #[allow(unused_mut)]
        let mut aes_extra_data_start = 0;
        #[cfg(feature = "aes-crypto")]
        if let Some(EncryptWith::Aes {
            mode,
            ..
        }) = options.encrypt_with
        {
            let aes_dummy_extra_data =
                vec![0x02, 0x00, 0x41, 0x45, mode as u8, 0x00, 0x00].into_boxed_slice();
            aes_extra_data_start = extra_data.len() as u64;
            ExtendedFileOptions::add_extra_data_unchecked(
                &mut extra_data,
                0x9901,
                aes_dummy_extra_data,
            )?;
        }

        let (compression_method, aes_mode) = match options.encrypt_with {
            #[cfg(feature = "aes-crypto")]
            Some(EncryptWith::Aes { mode, .. }) => (
                CompressionMethod::Aes,
                Some((mode, AesVendorVersion::Ae2, options.compression_method)),
            ),
            _ => (options.compression_method, None),
        };

        let header_end =
            header_start + size_of::<ZipLocalEntryBlock>() as u64 + name.to_string().len() as u64;

        if options.alignment > 1 {
            let extra_data_end = header_end + extra_data.len() as u64;
            let align = options.alignment as u64;
            let unaligned_header_bytes = extra_data_end % align;
            if unaligned_header_bytes != 0 {
                let mut pad_length = (align - unaligned_header_bytes) as usize;
                while pad_length < 6 {
                    pad_length += align as usize;
                }
                // Add an extra field to the extra_data, per APPNOTE 4.6.11
                let mut pad_body = vec![0; pad_length - 4];
                debug_assert!(pad_body.len() >= 2);
                [pad_body[0], pad_body[1]] = options.alignment.to_le_bytes();
                ExtendedFileOptions::add_extra_data_unchecked(
                    &mut extra_data,
                    0xa11e,
                    pad_body.into_boxed_slice(),
                )?;
                debug_assert_eq!((extra_data.len() as u64 + header_end) % align, 0);
            }
        }

        let extra_data_len = extra_data.len();
        if let Some(data) = central_extra_data {
            if extra_data_len + data.len() > u16::MAX as usize {
                return Err(invalid!(
                    "Extra data and central extra data must be less than 64KiB when combined"
                ));
            }
            ExtendedFileOptions::validate_extra_data(data, true)?;
        }

        let mut file = ZipFileData::initialize_local_block(
            name,
            &options,
            raw_values,
            header_start,
            None,
            aes_extra_data_start,
            compression_method,
            aes_mode,
            &extra_data,
        );
        file.version_made_by = file.version_made_by.max(file.version_needed() as u8);
        file.extra_data_start = Some(header_end);
        let index = self.insert_file_data(file)?;
        self.writing_to_file = true;
        let result: ZipResult<()> = (|| {
            ExtendedFileOptions::validate_extra_data(&extra_data, false)?;
            let file = &mut self.files[index];
            let block = file.local_block()?;
            let writer = self.inner.get_plain();
            block.write(writer)?;
            // file name
            writer.write_all(&file.file_name_raw)?;
            if extra_data_len > 0 {
                writer.write_all(&extra_data)?;
                file.extra_field = Some(extra_data.into());
            }
            Ok(())
        })();
        self.ok_or_abort_file(result)?;
        let writer = self.inner.get_plain();
        self.stats.start = writer.stream_position()?;
        match options.encrypt_with {
            #[cfg(feature = "aes-crypto")]
            Some(EncryptWith::Aes { mode, password }) => {
                let aeswriter = AesWriter::new(
                    mem::replace(&mut self.inner, Closed).unwrap(),
                    mode,
                    password.as_bytes(),
                )?;
                self.inner = Storer(MaybeEncrypted::Aes(aeswriter));
            }
            Some(EncryptWith::ZipCrypto(keys, ..)) => {
                let mut zipwriter = crate::zipcrypto::ZipCryptoWriter {
                    writer: mem::replace(&mut self.inner, Closed).unwrap(),
                    buffer: vec![],
                    keys,
                };
                self.stats.start = zipwriter.writer.stream_position()?;
                // crypto_header is counted as part of the data
                let crypto_header = [0u8; 12];
                let result = zipwriter.write_all(&crypto_header);
                self.ok_or_abort_file(result)?;
                self.inner = Storer(MaybeEncrypted::ZipCrypto(zipwriter));
            }
            None => {}
        }
        let file = &mut self.files[index];
        debug_assert!(file.data_start.get().is_none());
        file.data_start.get_or_init(|| self.stats.start);
        self.stats.bytes_written = 0;
        self.stats.hasher = Hasher::new();
        Ok(())
    }

    fn insert_file_data(&mut self, file: ZipFileData) -> ZipResult<usize> {
        if self.files.contains_key(&file.file_name) {
            return Err(invalid!("Duplicate filename: {}", file.file_name));
        }
        let name = file.file_name.to_owned();
        self.files.insert(name.clone(), file);
        Ok(self.files.get_index_of(&name).unwrap())
    }

    fn finish_file(&mut self) -> ZipResult<()> {
        if !self.writing_to_file {
            return Ok(());
        }
        let make_plain_writer = self.inner.prepare_next_writer(
            Stored,
            None,
            #[cfg(feature = "deflate-zopfli")]
            None,
        )?;
        self.inner.switch_to(make_plain_writer)?;
        self.switch_to_non_encrypting_writer()?;
        let writer = self.inner.get_plain();

        if !self.writing_raw {
            let file = match self.files.last_mut() {
                None => return Ok(()),
                Some((_, f)) => f,
            };
            file.uncompressed_size = self.stats.bytes_written;

            let file_end = writer.stream_position()?;
            debug_assert!(file_end >= self.stats.start);
            file.compressed_size = file_end - self.stats.start;

            let mut crc = true;
            if let Some(aes_mode) = &mut file.aes_mode {
                // We prefer using AE-1 which provides an extra CRC check, but for small files we
                // switch to AE-2 to prevent being able to use the CRC value to reconstruct the
                // unencrypted contents.
                //
                // C.f. https://www.winzip.com/en/support/aes-encryption/#crc-faq
                aes_mode.1 = if self.stats.bytes_written < 20 {
                    crc = false;
                    AesVendorVersion::Ae2
                } else {
                    AesVendorVersion::Ae1
                };
            }
            file.crc32 = if crc {
                self.stats.hasher.clone().finalize()
            } else {
                0
            };
            update_aes_extra_data(writer, file)?;
            update_local_file_header(writer, file)?;
            writer.seek(SeekFrom::Start(file_end))?;
        }
        if self.flush_on_finish_file {
            let result = writer.flush();
            self.ok_or_abort_file(result)?;
        }

        self.writing_to_file = false;
        Ok(())
    }

    fn switch_to_non_encrypting_writer(&mut self) -> Result<(), ZipError> {
        match mem::replace(&mut self.inner, Closed) {
            #[cfg(feature = "aes-crypto")]
            Storer(MaybeEncrypted::Aes(writer)) => {
                self.inner = Storer(MaybeEncrypted::Unencrypted(writer.finish()?));
            }
            Storer(MaybeEncrypted::ZipCrypto(writer)) => {
                let crc32 = self.stats.hasher.clone().finalize();
                self.inner = Storer(MaybeEncrypted::Unencrypted(writer.finish(crc32)?))
            }
            Storer(MaybeEncrypted::Unencrypted(w)) => {
                self.inner = Storer(MaybeEncrypted::Unencrypted(w))
            }
            _ => unreachable!(),
        }
        Ok(())
    }

    /// Removes the file currently being written from the archive if there is one, or else removes
    /// the file most recently written.
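The alignment padding that `start_entry` computes (the `0xa11e` extra field per APPNOTE 4.6.11) is pure arithmetic and can be sketched on its own. This is an illustrative std-only helper, not a crate API; the function name is an assumption.

```rust
// Mirror of the padding arithmetic in `start_entry`: given where the extra
// data currently ends and the requested alignment, return the total length
// of the padding extra field (4-byte header + body), or 0 if already aligned.
// The field needs at least 6 bytes, so it grows by whole alignment units.
fn alignment_pad_length(extra_data_end: u64, align: u64) -> usize {
    let unaligned = extra_data_end % align;
    if unaligned == 0 {
        return 0;
    }
    let mut pad = (align - unaligned) as usize;
    while pad < 6 {
        pad += align as usize;
    }
    pad
}

fn main() {
    // 10 bytes short of a 64-byte boundary: a 10-byte padding field suffices.
    assert_eq!(alignment_pad_length(54, 64), 10);
    // 2 bytes short: below the 6-byte minimum, so add another alignment unit.
    assert_eq!(alignment_pad_length(62, 64), 66);
    // Already aligned: no padding field is written at all.
    assert_eq!(alignment_pad_length(64, 64), 0);
    println!("alignment padding checks passed");
}
```

The resulting field's first two body bytes carry the alignment value, as in the code above.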
    pub fn abort_file(&mut self) -> ZipResult<()> {
        let (_, last_file) = self.files.pop().ok_or(ZipError::FileNotFound)?;
        let make_plain_writer = self.inner.prepare_next_writer(
            Stored,
            None,
            #[cfg(feature = "deflate-zopfli")]
            None,
        )?;
        self.inner.switch_to(make_plain_writer)?;
        self.switch_to_non_encrypting_writer()?;
        // Make sure this is the last file, and that no shallow copies of it remain; otherwise we'd
        // overwrite a valid file and corrupt the archive
        let rewind_safe: bool = match last_file.data_start.get() {
            None => self.files.is_empty(),
            Some(last_file_start) => self.files.values().all(|file| {
                file.data_start
                    .get()
                    .is_some_and(|start| start < last_file_start)
            }),
        };
        if rewind_safe {
            self.inner
                .get_plain()
                .seek(SeekFrom::Start(last_file.header_start))?;
        }
        self.writing_to_file = false;
        Ok(())
    }

    /// Create a file in the archive and start writing its contents. The file must not have the
    /// same name as a file already in the archive.
    ///
    /// The data should be written using the [`Write`] implementation on this [`ZipWriter`].
    pub fn start_file<S: ToString, T: FileOptionExtension>(
        &mut self,
        name: S,
        mut options: FileOptions<T>,
    ) -> ZipResult<()> {
        options.normalize();

        let make_new_self = self.inner.prepare_next_writer(
            options.compression_method,
            options.compression_level,
            #[cfg(feature = "deflate-zopfli")]
            options.zopfli_buffer_size,
        )?;
        self.start_entry(name, options, None)?;
        let result = self.inner.switch_to(make_new_self);
        self.ok_or_abort_file(result)?;
        self.writing_raw = false;
        Ok(())
    }

    /* TODO: link to/use Self::finish_into_readable() from https://github.com/zip-rs/zip/pull/400 in
     * this docstring. */
    /// Copy over the entire contents of another archive verbatim.
    ///
    /// This method extracts file metadata from the `source` archive, then simply performs a single
    /// big [`io::copy()`](io::copy) to transfer all the actual file contents without any
    /// decompression or decryption.
    /// This is more performant than the equivalent operation of
    /// calling [`Self::raw_copy_file()`] for each entry from the `source` archive in sequence.
    ///
    ///```
    /// # fn main() -> Result<(), zip::result::ZipError> {
    /// use std::io::{Cursor, prelude::*};
    /// use zip::{ZipArchive, ZipWriter, write::SimpleFileOptions};
    ///
    /// let buf = Cursor::new(Vec::new());
    /// let mut zip = ZipWriter::new(buf);
    /// zip.start_file("a.txt", SimpleFileOptions::default())?;
    /// zip.write_all(b"hello\n")?;
    /// let src = ZipArchive::new(zip.finish()?)?;
    ///
    /// let buf = Cursor::new(Vec::new());
    /// let mut zip = ZipWriter::new(buf);
    /// zip.start_file("b.txt", SimpleFileOptions::default())?;
    /// zip.write_all(b"hey\n")?;
    /// let src2 = ZipArchive::new(zip.finish()?)?;
    ///
    /// let buf = Cursor::new(Vec::new());
    /// let mut zip = ZipWriter::new(buf);
    /// zip.merge_archive(src)?;
    /// zip.merge_archive(src2)?;
    /// let mut result = ZipArchive::new(zip.finish()?)?;
    ///
    /// let mut s: String = String::new();
    /// result.by_name("a.txt")?.read_to_string(&mut s)?;
    /// assert_eq!(s, "hello\n");
    /// s.clear();
    /// result.by_name("b.txt")?.read_to_string(&mut s)?;
    /// assert_eq!(s, "hey\n");
    /// # Ok(())
    /// # }
    ///```
    pub fn merge_archive<R>(&mut self, mut source: ZipArchive<R>) -> ZipResult<()>
    where
        R: Read + Seek,
    {
        self.finish_file()?;

        /* Ensure we accept the file contents on faith (and avoid overwriting the data).
         * See raw_copy_file_rename(). */
        self.writing_to_file = true;
        self.writing_raw = true;

        let writer = self.inner.get_plain();
        /* Get the file entries from the source archive. */
        let new_files = source.merge_contents(writer)?;

        /* These file entries are now ours! */
        self.files.extend(new_files);

        Ok(())
    }

    /// Starts a file, taking a Path as argument.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`. It
    /// ignores any `..` or Windows drive letter that would produce a path outside the ZIP file's
    /// root.
    pub fn start_file_from_path<P: AsRef<Path>, T: FileOptionExtension>(
        &mut self,
        path: P,
        options: FileOptions<T>,
    ) -> ZipResult<()> {
        self.start_file(path_to_string(path), options)
    }

    /// Add a new file using the already compressed data from a ZIP file being read and renames it, this
    /// allows faster copies of the `ZipFile` since there is no need to decompress and compress it again.
    /// Any `ZipFile` metadata is copied and not checked, for example the file CRC.
    ///
    /// ```no_run
    /// use std::fs::File;
    /// use std::io::{Read, Seek, Write};
    /// use zip::{ZipArchive, ZipWriter};
    ///
    /// fn copy_rename<R, W>(
    ///     src: &mut ZipArchive<R>,
    ///     dst: &mut ZipWriter<W>,
    /// ) -> zip::result::ZipResult<()>
    /// where
    ///     R: Read + Seek,
    ///     W: Write + Seek,
    /// {
    ///     // Retrieve file entry by name
    ///     let file = src.by_name("src_file.txt")?;
    ///
    ///     // Copy and rename the previously obtained file entry to the destination zip archive
    ///     dst.raw_copy_file_rename(file, "new_name.txt")?;
    ///
    ///     Ok(())
    /// }
    /// ```
    pub fn raw_copy_file_rename<S: ToString, R: Read>(
        &mut self,
        file: ZipFile<R>,
        name: S,
    ) -> ZipResult<()> {
        let options = file.options();
        self.raw_copy_file_rename_internal(file, name, options)
    }

    fn raw_copy_file_rename_internal<S: ToString, R: Read>(
        &mut self,
        mut file: ZipFile<R>,
        name: S,
        options: SimpleFileOptions,
    ) -> ZipResult<()> {
        let raw_values = ZipRawValues {
            crc32: file.crc32(),
            compressed_size: file.compressed_size(),
            uncompressed_size: file.size(),
        };

        self.start_entry(name, options, Some(raw_values))?;
        self.writing_to_file = true;
        self.writing_raw = true;

        io::copy(&mut file.take_raw_reader()?, self)?;
        self.finish_file()
    }

    /// Like `raw_copy_file_to_path`, but uses Path arguments.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`. It
    /// ignores any `..` or Windows drive letter that would produce a path outside the ZIP file's
    /// root.
    pub fn raw_copy_file_to_path<P: AsRef<Path>, R: Read>(
        &mut self,
        file: ZipFile<R>,
        path: P,
    ) -> ZipResult<()> {
        self.raw_copy_file_rename(file, path_to_string(path))
    }

    /// Add a new file using the already compressed data from a ZIP file being read, this allows faster
    /// copies of the `ZipFile` since there is no need to decompress and compress it again. Any `ZipFile`
    /// metadata is copied and not checked, for example the file CRC.
    ///
    /// ```no_run
    /// use std::fs::File;
    /// use std::io::{Read, Seek, Write};
    /// use zip::{ZipArchive, ZipWriter};
    ///
    /// fn copy<R, W>(src: &mut ZipArchive<R>, dst: &mut ZipWriter<W>) -> zip::result::ZipResult<()>
    /// where
    ///     R: Read + Seek,
    ///     W: Write + Seek,
    /// {
    ///     // Retrieve file entry by name
    ///     let file = src.by_name("src_file.txt")?;
    ///
    ///     // Copy the previously obtained file entry to the destination zip archive
    ///     dst.raw_copy_file(file)?;
    ///
    ///     Ok(())
    /// }
    /// ```
    pub fn raw_copy_file<R: Read>(&mut self, file: ZipFile<R>) -> ZipResult<()> {
        let name = file.name().to_owned();
        self.raw_copy_file_rename(file, name)
    }

    /// Add a new file using the already compressed data from a ZIP file being read and set the last
    /// modified date and unix mode. This allows faster copies of the `ZipFile` since there is no need
    /// to decompress and compress it again. Any `ZipFile` metadata other than the last modified date
    /// and the unix mode is copied and not checked, for example the file CRC.
    ///
    /// ```no_run
    /// use std::io::{Read, Seek, Write};
    /// use zip::{DateTime, ZipArchive, ZipWriter};
    ///
    /// fn copy<R, W>(src: &mut ZipArchive<R>, dst: &mut ZipWriter<W>) -> zip::result::ZipResult<()>
    /// where
    ///     R: Read + Seek,
    ///     W: Write + Seek,
    /// {
    ///     // Retrieve file entry by name
    ///     let file = src.by_name("src_file.txt")?;
    ///
    ///     // Copy the previously obtained file entry to the destination zip archive
    ///     dst.raw_copy_file_touch(file, DateTime::default(), Some(0o644))?;
    ///
    ///     Ok(())
    /// }
    /// ```
    pub fn raw_copy_file_touch<R: Read>(
        &mut self,
        file: ZipFile<R>,
        last_modified_time: DateTime,
        unix_mode: Option<u32>,
    ) -> ZipResult<()> {
        let name = file.name().to_owned();

        let mut options = file.options();
        options = options.last_modified_time(last_modified_time);
        if let Some(perms) = unix_mode {
            options = options.unix_permissions(perms);
        }
        options.normalize();

        self.raw_copy_file_rename_internal(file, name, options)
    }

    /// Add a directory entry.
    ///
    /// As directories have no content, you must not call [`ZipWriter::write`] before adding a new file.
    pub fn add_directory<S, T: FileOptionExtension>(
        &mut self,
        name: S,
        mut options: FileOptions<T>,
    ) -> ZipResult<()>
    where
        S: Into<String>,
    {
        if options.permissions.is_none() {
            options.permissions = Some(0o755);
        }
        *options.permissions.as_mut().unwrap() |= 0o40000;
        options.compression_method = Stored;
        options.encrypt_with = None;

        let name_as_string = name.into();
        // Append a slash to the filename if it does not end with it.
        let name_with_slash = match name_as_string.chars().last() {
            Some('/') | Some('\\') => name_as_string,
            _ => name_as_string + "/",
        };

        self.start_entry(name_with_slash, options, None)?;
        self.writing_to_file = false;
        self.switch_to_non_encrypting_writer()?;
        Ok(())
    }

    /// Add a directory entry, taking a Path as argument.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`. It
    /// ignores any `..` or Windows drive letter that would produce a path outside the ZIP file's
    /// root.
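The name and mode handling in `add_directory` can be sketched as a pure function. This is an illustrative std-only helper mirroring that logic; the function name is an assumption, not a crate API.

```rust
// Mirror of `add_directory`'s normalization: OR the unix directory bit
// (0o40000) into the permissions (defaulting to 0o755), and ensure the
// entry name ends with a path separator.
fn directory_entry(name: &str, permissions: Option<u32>) -> (String, u32) {
    let mode = permissions.unwrap_or(0o755) | 0o40000;
    let name = match name.chars().last() {
        Some('/') | Some('\\') => name.to_string(),
        _ => format!("{name}/"),
    };
    (name, mode)
}

fn main() {
    // Default permissions become rwxr-xr-x plus the directory bit.
    assert_eq!(directory_entry("docs", None), ("docs/".to_string(), 0o40755));
    // An existing trailing separator is kept rather than doubled.
    assert_eq!(
        directory_entry("docs/", Some(0o700)),
        ("docs/".to_string(), 0o40700)
    );
    println!("directory entry checks passed");
}
```

The real method additionally forces `Stored` compression and disables encryption, since a directory entry carries no content.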
    pub fn add_directory_from_path<P: AsRef<Path>, T: FileOptionExtension>(
        &mut self,
        path: P,
        options: FileOptions<T>,
    ) -> ZipResult<()> {
        self.add_directory(path_to_string(path), options)
    }

    /// Finish the last file and write all other zip structures.
    ///
    /// This will return the writer, but one should normally not append any data to the end of the file.
    /// Note that the zipfile will also be finished on drop.
    pub fn finish(mut self) -> ZipResult<W> {
        let _central_start = self.finalize()?;
        let inner = mem::replace(&mut self.inner, Closed);
        Ok(inner.unwrap())
    }

    /// Add a symlink entry.
    ///
    /// The zip archive will contain an entry for path `name` which is a symlink to `target`.
    ///
    /// No validation or normalization of the paths is performed. For best results,
    /// callers should normalize `\` to `/` and ensure symlinks are relative to other
    /// paths within the zip archive.
    ///
    /// WARNING: not all zip implementations preserve symlinks on extract. Some zip
    /// implementations may materialize a symlink as a regular file, possibly with the
    /// content incorrectly set to the symlink target. For maximum portability, consider
    /// storing a regular file instead.
    pub fn add_symlink<N: ToString, T: ToString, E: FileOptionExtension>(
        &mut self,
        name: N,
        target: T,
        mut options: FileOptions<E>,
    ) -> ZipResult<()> {
        if options.permissions.is_none() {
            options.permissions = Some(0o777);
        }
        *options.permissions.as_mut().unwrap() |= S_IFLNK;
        // The symlink target is stored as file content. And compressing the target path
        // likely wastes space. So always store.
        options.compression_method = Stored;
        self.start_entry(name, options, None)?;
        self.writing_to_file = true;
        let result = self.write_all(target.to_string().as_bytes());
        self.ok_or_abort_file(result)?;
        self.writing_raw = false;
        self.finish_file()?;

        Ok(())
    }

    /// Add a symlink entry, taking Paths to the location and target as arguments.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`.
    /// It ignores any `..` or Windows drive letter that would produce a path outside the ZIP
    /// file's root.
    pub fn add_symlink_from_path<P: AsRef<Path>, T: AsRef<Path>, E: FileOptionExtension>(
        &mut self,
        path: P,
        target: T,
        options: FileOptions<E>,
    ) -> ZipResult<()> {
        self.add_symlink(path_to_string(path), path_to_string(target), options)
    }

    fn finalize(&mut self) -> ZipResult<u64> {
        self.finish_file()?;

        let mut central_start = self.write_central_and_footer()?;
        let writer = self.inner.get_plain();
        let footer_end = writer.stream_position()?;
        let archive_end = writer.seek(SeekFrom::End(0))?;
        if footer_end < archive_end {
            // Data from an aborted file is past the end of the footer.

            // Overwrite the magic so the footer is no longer valid.
            writer.seek(SeekFrom::Start(central_start))?;
            writer.write_u32_le(0)?;
            writer.seek(SeekFrom::Start(
                footer_end - size_of::<Zip32CDEBlock>() as u64 - self.comment.len() as u64,
            ))?;
            writer.write_u32_le(0)?;

            // Rewrite the footer at the actual end.
            let central_and_footer_size = footer_end - central_start;
            writer.seek(SeekFrom::End(-(central_and_footer_size as i64)))?;
            central_start = self.write_central_and_footer()?;
            debug_assert!(self.inner.get_plain().stream_position()? == archive_end);
        }

        Ok(central_start)
    }

    fn write_central_and_footer(&mut self) -> Result<u64, ZipError> {
        let writer = self.inner.get_plain();

        let mut version_needed = MIN_VERSION as u16;
        let central_start = writer.stream_position()?;
        for file in self.files.values() {
            write_central_directory_header(writer, file)?;
            version_needed = version_needed.max(file.version_needed());
        }
        let central_size = writer.stream_position()?
            - central_start;

        let is64 = self.files.len() > spec::ZIP64_ENTRY_THR
            || central_size.max(central_start) > spec::ZIP64_BYTES_THR
            || self.zip64_comment.is_some();

        if is64 {
            let comment = self.zip64_comment.clone().unwrap_or_default();

            let zip64_footer = spec::Zip64CentralDirectoryEnd {
                record_size: comment.len() as u64 + 44,
                version_made_by: version_needed,
                version_needed_to_extract: version_needed,
                disk_number: 0,
                disk_with_central_directory: 0,
                number_of_files_on_this_disk: self.files.len() as u64,
                number_of_files: self.files.len() as u64,
                central_directory_size: central_size,
                central_directory_offset: central_start,
                extensible_data_sector: comment,
            };

            zip64_footer.write(writer)?;

            let zip64_footer = spec::Zip64CentralDirectoryEndLocator {
                disk_with_central_directory: 0,
                end_of_central_directory_offset: central_start + central_size,
                number_of_disks: 1,
            };

            zip64_footer.write(writer)?;
        }

        let number_of_files = self.files.len().min(spec::ZIP64_ENTRY_THR) as u16;
        let footer = spec::Zip32CentralDirectoryEnd {
            disk_number: 0,
            disk_with_central_directory: 0,
            zip_file_comment: self.comment.clone(),
            number_of_files_on_this_disk: number_of_files,
            number_of_files,
            central_directory_size: central_size.min(spec::ZIP64_BYTES_THR) as u32,
            central_directory_offset: central_start.min(spec::ZIP64_BYTES_THR) as u32,
        };

        footer.write(writer)?;

        Ok(central_start)
    }

    fn index_by_name(&self, name: &str) -> ZipResult<usize> {
        self.files.get_index_of(name).ok_or(ZipError::FileNotFound)
    }

    /// Adds another entry to the central directory referring to the same content as an existing
    /// entry. The file's local-file header will still refer to it by its original name, so
    /// unzipping the file will technically be unspecified behavior. [ZipArchive] ignores the
    /// filename in the local-file header and treats the central directory as authoritative.
    /// However, some other software (e.g. Minecraft) will refuse to extract a file copied this way.
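The decision above of whether a ZIP64 end-of-central-directory record is needed can be sketched as a standalone predicate. This is an illustrative std-only sketch; the constant values mirror what `spec::ZIP64_ENTRY_THR` and `spec::ZIP64_BYTES_THR` are assumed to be (the 16-bit and 32-bit field maxima), and the function name is not a crate API.

```rust
// Assumed threshold values: the largest counts representable in the 16-bit
// entry-count and 32-bit size/offset fields of the classic EOCD record.
const ZIP64_ENTRY_THR: usize = u16::MAX as usize;
const ZIP64_BYTES_THR: u64 = u32::MAX as u64;

// Mirror of the `is64` predicate in `write_central_and_footer`: ZIP64 is
// required when any footer field would overflow, or a ZIP64 comment is set.
fn needs_zip64(entries: usize, central_size: u64, central_start: u64, zip64_comment: bool) -> bool {
    entries > ZIP64_ENTRY_THR
        || central_size.max(central_start) > ZIP64_BYTES_THR
        || zip64_comment
}

fn main() {
    // A small archive fits entirely in the classic footer.
    assert!(!needs_zip64(3, 1_000, 500, false));
    // More than 65 535 entries overflows the 16-bit count.
    assert!(needs_zip64(70_000, 0, 0, false));
    // A ZIP64 comment forces the record even for tiny archives.
    assert!(needs_zip64(1, 0, 0, true));
    println!("zip64 predicate checks passed");
}
```

Note that even when ZIP64 records are written, the classic footer follows with its overflowing fields clamped, as the code above shows.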
    pub fn shallow_copy_file(&mut self, src_name: &str, dest_name: &str) -> ZipResult<()> {
        self.finish_file()?;
        if src_name == dest_name {
            return Err(invalid!("Trying to copy a file to itself"));
        }
        let src_index = self.index_by_name(src_name)?;
        let mut dest_data = self.files[src_index].to_owned();
        dest_data.file_name = dest_name.to_string().into();
        dest_data.file_name_raw = dest_name.to_string().into_bytes().into();
        dest_data.central_header_start = 0;
        self.insert_file_data(dest_data)?;
        Ok(())
    }

    /// Like `shallow_copy_file`, but uses Path arguments.
    ///
    /// This function ensures that the '/' path separator is used and normalizes `.` and `..`. It
    /// ignores any `..` or Windows drive letter that would produce a path outside the ZIP file's
    /// root.
    pub fn shallow_copy_file_from_path<T: AsRef<Path>, U: AsRef<Path>>(
        &mut self,
        src_path: T,
        dest_path: U,
    ) -> ZipResult<()> {
        self.shallow_copy_file(&path_to_string(src_path), &path_to_string(dest_path))
    }
}

impl<W: Write + Seek> Drop for ZipWriter<W> {
    fn drop(&mut self) {
        if !self.inner.is_closed() {
            if let Err(e) = self.finalize() {
                let _ = write!(io::stderr(), "ZipWriter drop failed: {:?}", e);
            }
        }
    }
}

type SwitchWriterFunction<W> = Box<dyn FnOnce(MaybeEncrypted<W>) -> GenericZipWriter<W>>;

impl<W: Write + Seek> GenericZipWriter<W> {
    fn prepare_next_writer(
        &self,
        compression: CompressionMethod,
        compression_level: Option<i64>,
        #[cfg(feature = "deflate-zopfli")] zopfli_buffer_size: Option<usize>,
    ) -> ZipResult<SwitchWriterFunction<W>> {
        if let Closed = self {
            return Err(
                io::Error::new(io::ErrorKind::BrokenPipe, "ZipWriter was already closed").into(),
            );
        }

        {
            #[allow(deprecated)]
            #[allow(unreachable_code)]
            match compression {
                Stored => {
                    if compression_level.is_some() {
                        Err(UnsupportedArchive("Unsupported compression level"))
                    } else {
                        Ok(Box::new(|bare| Storer(bare)))
                    }
                }
                #[cfg(feature = "_deflate-any")]
                CompressionMethod::Deflated => {
                    let default = if cfg!(all(
                        feature = "deflate-zopfli",
                        not(feature = "deflate-flate2")
                    )) {
                        24
                    } else {
                        Compression::default().level() as i64
                    };
                    let level = clamp_opt(
                        compression_level.unwrap_or(default),
                        deflate_compression_level_range(),
                    )
                    .ok_or(UnsupportedArchive("Unsupported compression level"))?
                        as u32;

                    #[cfg(feature = "deflate-zopfli")]
                    {
                        let best_non_zopfli = Compression::best().level();
                        if level > best_non_zopfli {
                            let options = Options {
                                iteration_count: NonZeroU64::try_from(
                                    (level - best_non_zopfli) as u64,
                                )
                                .unwrap(),
                                ..Default::default()
                            };
                            return Ok(Box::new(move |bare| match zopfli_buffer_size {
                                Some(size) => GenericZipWriter::BufferedZopfliDeflater(
                                    BufWriter::with_capacity(
                                        size,
                                        zopfli::DeflateEncoder::new(
                                            options,
                                            Default::default(),
                                            bare,
                                        ),
                                    ),
                                ),
                                None => GenericZipWriter::ZopfliDeflater(
                                    zopfli::DeflateEncoder::new(options, Default::default(), bare),
                                ),
                            }));
                        }
                    }

                    #[cfg(feature = "deflate-flate2")]
                    {
                        Ok(Box::new(move |bare| {
                            GenericZipWriter::Deflater(DeflateEncoder::new(
                                bare,
                                Compression::new(level),
                            ))
                        }))
                    }
                }
                #[cfg(feature = "deflate64")]
                CompressionMethod::Deflate64 => {
                    Err(UnsupportedArchive("Compressing Deflate64 is not supported"))
                }
                #[cfg(feature = "bzip2")]
                CompressionMethod::Bzip2 => {
                    let level = clamp_opt(
                        compression_level.unwrap_or(bzip2::Compression::default().level() as i64),
                        bzip2_compression_level_range(),
                    )
                    .ok_or(UnsupportedArchive("Unsupported compression level"))?
                        as u32;
                    Ok(Box::new(move |bare| {
                        GenericZipWriter::Bzip2(BzEncoder::new(
                            bare,
                            bzip2::Compression::new(level),
                        ))
                    }))
                }
                CompressionMethod::AES => Err(UnsupportedArchive(
                    "AES encryption is enabled through FileOptions::with_aes_encryption",
                )),
                #[cfg(feature = "zstd")]
                CompressionMethod::Zstd => {
                    let level = clamp_opt(
                        compression_level.unwrap_or(zstd::DEFAULT_COMPRESSION_LEVEL as i64),
                        zstd::compression_level_range(),
                    )
                    .ok_or(UnsupportedArchive("Unsupported compression level"))?;
                    Ok(Box::new(move |bare| {
                        GenericZipWriter::Zstd(ZstdEncoder::new(bare, level as i32).unwrap())
                    }))
                }
                #[cfg(feature = "lzma")]
                CompressionMethod::Lzma => {
                    Err(UnsupportedArchive("LZMA isn't supported for compression"))
                }
                #[cfg(feature = "xz")]
                CompressionMethod::Xz => {
                    let level = clamp_opt(compression_level.unwrap_or(6), 0..=9)
                        .ok_or(UnsupportedArchive("Unsupported compression level"))?
                        as u32;
                    Ok(Box::new(move |bare| {
                        GenericZipWriter::Xz(xz2::write::XzEncoder::new(bare, level))
                    }))
                }
                CompressionMethod::Unsupported(..) => {
                    Err(UnsupportedArchive("Unsupported compression"))
                }
            }
        }
    }

    fn switch_to(&mut self, make_new_self: SwitchWriterFunction<W>) -> ZipResult<()> {
        let bare = match mem::replace(self, Closed) {
            Storer(w) => w,
            #[cfg(feature = "deflate-flate2")]
            GenericZipWriter::Deflater(w) => w.finish()?,
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::ZopfliDeflater(w) => w.finish()?,
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::BufferedZopfliDeflater(w) => w
                .into_inner()
                .map_err(|e| ZipError::Io(e.into_error()))?
                .finish()?,
            #[cfg(feature = "bzip2")]
            GenericZipWriter::Bzip2(w) => w.finish()?,
            #[cfg(feature = "zstd")]
            GenericZipWriter::Zstd(w) => w.finish()?,
            #[cfg(feature = "xz")]
            GenericZipWriter::Xz(w) => w.finish()?,
            Closed => {
                return Err(io::Error::new(
                    io::ErrorKind::BrokenPipe,
                    "ZipWriter was already closed",
                )
                .into());
            }
        };
        *self = make_new_self(bare);
        Ok(())
    }

    fn ref_mut(&mut self) -> Option<&mut dyn Write> {
        match self {
            Storer(ref mut w) => Some(w as &mut dyn Write),
            #[cfg(feature = "deflate-flate2")]
            GenericZipWriter::Deflater(ref mut w) => Some(w as &mut dyn Write),
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::ZopfliDeflater(w) => Some(w as &mut dyn Write),
            #[cfg(feature = "deflate-zopfli")]
            GenericZipWriter::BufferedZopfliDeflater(w) => Some(w as &mut dyn Write),
            #[cfg(feature = "bzip2")]
            GenericZipWriter::Bzip2(ref mut w) => Some(w as &mut dyn Write),
            #[cfg(feature = "zstd")]
            GenericZipWriter::Zstd(ref mut w) => Some(w as &mut dyn Write),
            #[cfg(feature = "xz")]
            GenericZipWriter::Xz(ref mut w) => Some(w as &mut dyn Write),
            Closed => None,
        }
    }

    const fn is_closed(&self) -> bool {
        matches!(*self, Closed)
    }

    fn get_plain(&mut self) -> &mut W {
        match *self {
            Storer(MaybeEncrypted::Unencrypted(ref mut w)) => w,
            _ => panic!("Should have switched to stored and unencrypted beforehand"),
        }
    }

    fn unwrap(self) -> W {
        match self {
            Storer(MaybeEncrypted::Unencrypted(w)) => w,
            _ => panic!("Should have switched to stored and unencrypted beforehand"),
        }
    }
}

#[cfg(feature = "_deflate-any")]
fn deflate_compression_level_range() -> std::ops::RangeInclusive<i64> {
    let min = if cfg!(feature = "deflate-flate2") {
        Compression::fast().level() as i64
    } else {
        Compression::best().level() as i64 + 1
    };

    let max = Compression::best().level() as i64
        + if cfg!(feature = "deflate-zopfli") {
            u8::MAX as i64
        } else {
            0
        };

    min..=max
}

#[cfg(feature = "bzip2")]
fn bzip2_compression_level_range() -> std::ops::RangeInclusive<i64> {
    let min = bzip2::Compression::fast().level() as i64;
    let max =
        bzip2::Compression::best().level() as i64;
    min..=max
}

#[cfg(any(feature = "_deflate-any", feature = "bzip2", feature = "zstd"))]
fn clamp_opt<T: Ord + Copy, U: Ord + Copy + TryFrom<T>>(
    value: T,
    range: std::ops::RangeInclusive<U>,
) -> Option<T> {
    if range.contains(&value.try_into().ok()?) {
        Some(value)
    } else {
        None
    }
}

fn update_aes_extra_data<W: Write + Seek>(writer: &mut W, file: &mut ZipFileData) -> ZipResult<()> {
    let Some((aes_mode, version, compression_method)) = file.aes_mode else {
        return Ok(());
    };

    let extra_data_start = file.extra_data_start.unwrap();

    writer.seek(SeekFrom::Start(
        extra_data_start + file.aes_extra_data_start,
    ))?;

    let mut buf = Vec::new();

    /* TODO: implement this using the Block trait! */
    // Extra field header ID.
    buf.write_u16_le(0x9901)?;
    // Data size.
    buf.write_u16_le(7)?;
    // Integer version number.
    buf.write_u16_le(version as u16)?;
    // Vendor ID.
    buf.write_all(b"AE")?;
    // AES encryption strength.
    buf.write_all(&[aes_mode as u8])?;
    // Real compression method.
    buf.write_u16_le(compression_method.serialize_to_u16())?;

    writer.write_all(&buf)?;

    let aes_extra_data_start = file.aes_extra_data_start as usize;
    let extra_field = Arc::get_mut(file.extra_field.as_mut().unwrap()).unwrap();
    extra_field[aes_extra_data_start..aes_extra_data_start + buf.len()].copy_from_slice(&buf);

    Ok(())
}

fn update_local_file_header<T: Write + Seek>(
    writer: &mut T,
    file: &mut ZipFileData,
) -> ZipResult<()> {
    const CRC32_OFFSET: u64 = 14;
    writer.seek(SeekFrom::Start(file.header_start + CRC32_OFFSET))?;
    writer.write_u32_le(file.crc32)?;
    if file.large_file {
        writer.write_u32_le(spec::ZIP64_BYTES_THR as u32)?;
        writer.write_u32_le(spec::ZIP64_BYTES_THR as u32)?;
        update_local_zip64_extra_field(writer, file)?;
        file.compressed_size = spec::ZIP64_BYTES_THR;
        file.uncompressed_size = spec::ZIP64_BYTES_THR;
    } else {
        // check compressed size as well as it can also be slightly larger than uncompressed size
        if file.compressed_size > spec::ZIP64_BYTES_THR {
            return Err(ZipError::Io(io::Error::new(
                io::ErrorKind::Other,
                "Large file option has not been set",
            )));
        }
        writer.write_u32_le(file.compressed_size as u32)?;
        // uncompressed size is already checked on write to catch it as soon as possible
        writer.write_u32_le(file.uncompressed_size as u32)?;
    }
    Ok(())
}

fn write_central_directory_header<T: Write>(writer: &mut T, file: &ZipFileData) -> ZipResult<()> {
    let block = file.block()?;
    block.write(writer)?;
    // file name
    writer.write_all(&file.file_name_raw)?;
    // extra field
    if let Some(extra_field) = &file.extra_field {
        writer.write_all(extra_field)?;
    }
    if let Some(central_extra_field) = &file.central_extra_field {
        writer.write_all(central_extra_field)?;
    }
    // file comment
    writer.write_all(file.file_comment.as_bytes())?;

    Ok(())
}

fn update_local_zip64_extra_field<T: Write + Seek>(
    writer: &mut T,
    file: &mut ZipFileData,
) -> ZipResult<()> {
    let block = file.zip64_extra_field_block().ok_or(invalid!(
        "Attempted to update a nonexistent ZIP64 extra field"
    ))?;

    let zip64_extra_field_start = file.header_start
        + size_of::<ZipLocalEntryBlock>() as u64
        + file.file_name_raw.len() as u64;

    writer.seek(SeekFrom::Start(zip64_extra_field_start))?;
    let block = block.serialize();
    writer.write_all(&block)?;

    let extra_field = Arc::get_mut(file.extra_field.as_mut().unwrap()).unwrap();
    extra_field[..block.len()].copy_from_slice(&block);

    Ok(())
}

#[cfg(not(feature = "unreserved"))]
const EXTRA_FIELD_MAPPING: [u16; 43] = [
    0x0007, 0x0008, 0x0009, 0x000a, 0x000c, 0x000d, 0x000e, 0x000f, 0x0014, 0x0015, 0x0016,
    0x0017, 0x0018, 0x0019, 0x0020, 0x0021, 0x0022, 0x0023, 0x0065, 0x0066, 0x4690, 0x07c8,
    0x2605, 0x2705, 0x2805, 0x334d, 0x4341, 0x4453, 0x4704, 0x470f, 0x4b46, 0x4c41, 0x4d49,
    0x4f4c, 0x5356, 0x554e, 0x5855, 0x6542, 0x756e, 0x7855, 0xa220, 0xfd4a, 0x9902,
];

#[cfg(test)]
#[allow(unknown_lints)] // needless_update is new in clippy pre 1.29.0
#[allow(clippy::needless_update)] // So we can use the same FileOptions decls with and without zopfli_buffer_size
#[allow(clippy::octal_escapes)] // many false positives in converted fuzz cases
mod test {
    use super::{ExtendedFileOptions, FileOptions,
FullFileOptions, ZipWriter}; use crate::compression::CompressionMethod; use crate::result::ZipResult; use crate::types::DateTime; use crate::write::EncryptWith::ZipCrypto; use crate::write::SimpleFileOptions; use crate::zipcrypto::ZipCryptoKeys; use crate::CompressionMethod::Stored; use crate::ZipArchive; use std::io::{Cursor, Read, Write}; use std::marker::PhantomData; use std::path::PathBuf; #[test] fn write_empty_zip() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_comment("ZIP"); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 25); assert_eq!( *result.get_ref(), [80, 75, 5, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 90, 73, 80] ); } #[test] fn unix_permissions_bitmask() { // unix_permissions() throws away upper bits. let options = SimpleFileOptions::default().unix_permissions(0o120777); assert_eq!(options.permissions, Some(0o777)); } #[test] fn write_zip_dir() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .add_directory( "test", SimpleFileOptions::default().last_modified_time( DateTime::from_date_and_time(2018, 8, 15, 20, 45, 6).unwrap(), ), ) .unwrap(); assert!(writer .write(b"writing to a directory is not allowed, and will not write any data") .is_err()); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 108); assert_eq!( *result.get_ref(), &[ 80u8, 75, 3, 4, 20, 0, 0, 0, 0, 0, 163, 165, 15, 77, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 116, 101, 115, 116, 47, 80, 75, 1, 2, 20, 3, 20, 0, 0, 0, 0, 0, 163, 165, 15, 77, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 237, 65, 0, 0, 0, 0, 116, 101, 115, 116, 47, 80, 75, 5, 6, 0, 0, 0, 0, 1, 0, 1, 0, 51, 0, 0, 0, 35, 0, 0, 0, 0, 0, ] as &[u8] ); } #[test] fn write_symlink_simple() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .add_symlink( "name", "target", SimpleFileOptions::default().last_modified_time( DateTime::from_date_and_time(2018, 8, 15, 20, 45, 
6).unwrap(), ), ) .unwrap(); assert!(writer .write(b"writing to a symlink is not allowed and will not write any data") .is_err()); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 112); assert_eq!( *result.get_ref(), &[ 80u8, 75, 3, 4, 10, 0, 0, 0, 0, 0, 163, 165, 15, 77, 252, 47, 111, 70, 6, 0, 0, 0, 6, 0, 0, 0, 4, 0, 0, 0, 110, 97, 109, 101, 116, 97, 114, 103, 101, 116, 80, 75, 1, 2, 10, 3, 10, 0, 0, 0, 0, 0, 163, 165, 15, 77, 252, 47, 111, 70, 6, 0, 0, 0, 6, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 161, 0, 0, 0, 0, 110, 97, 109, 101, 80, 75, 5, 6, 0, 0, 0, 0, 1, 0, 1, 0, 50, 0, 0, 0, 40, 0, 0, 0, 0, 0 ] as &[u8], ); } #[test] fn test_path_normalization() { let mut path = PathBuf::new(); path.push("foo"); path.push("bar"); path.push(".."); path.push("."); path.push("example.txt"); let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file_from_path(path, SimpleFileOptions::default()) .unwrap(); let archive = writer.finish_into_readable().unwrap(); assert_eq!(Some("foo/example.txt"), archive.name_for_index(0)); } #[test] fn write_symlink_wonky_paths() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .add_symlink( "directory\\link", "/absolute/symlink\\with\\mixed/slashes", SimpleFileOptions::default().last_modified_time( DateTime::from_date_and_time(2018, 8, 15, 20, 45, 6).unwrap(), ), ) .unwrap(); assert!(writer .write(b"writing to a symlink is not allowed and will not write any data") .is_err()); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 162); assert_eq!( *result.get_ref(), &[ 80u8, 75, 3, 4, 10, 0, 0, 0, 0, 0, 163, 165, 15, 77, 95, 41, 81, 245, 36, 0, 0, 0, 36, 0, 0, 0, 14, 0, 0, 0, 100, 105, 114, 101, 99, 116, 111, 114, 121, 92, 108, 105, 110, 107, 47, 97, 98, 115, 111, 108, 117, 116, 101, 47, 115, 121, 109, 108, 105, 110, 107, 92, 119, 105, 116, 104, 92, 109, 105, 120, 101, 100, 47, 115, 108, 97, 115, 104, 101, 115, 80, 75, 1, 2, 10, 3, 10, 0, 0, 0, 0, 0, 
163, 165, 15, 77, 95, 41, 81, 245, 36, 0, 0, 0, 36, 0, 0, 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 161, 0, 0, 0, 0, 100, 105, 114, 101, 99, 116, 111, 114, 121, 92, 108, 105, 110, 107, 80, 75, 5, 6, 0, 0, 0, 0, 1, 0, 1, 0, 60, 0, 0, 0, 80, 0, 0, 0, 0, 0 ] as &[u8], ); } #[test] fn write_mimetype_zip() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = FileOptions { compression_method: Stored, compression_level: None, last_modified_time: DateTime::default(), permissions: Some(33188), large_file: false, encrypt_with: None, extended_options: (), alignment: 1, #[cfg(feature = "deflate-zopfli")] zopfli_buffer_size: None, }; writer.start_file("mimetype", options).unwrap(); writer .write_all(b"application/vnd.oasis.opendocument.text") .unwrap(); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 153); let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/mimetype.zip")); assert_eq!(result.get_ref(), &v); } const RT_TEST_TEXT: &str = "And I can't stop thinking about the moments that I lost to you\ And I can't stop thinking of things I used to do\ And I can't stop making bad decisions\ And I can't stop eating stuff you make me chew\ I put on a smile like you wanna see\ Another day goes by that I long to be like you"; const RT_TEST_FILENAME: &str = "subfolder/sub-subfolder/can't_stop.txt"; const SECOND_FILENAME: &str = "different_name.xyz"; const THIRD_FILENAME: &str = "third_name.xyz"; #[test] fn write_non_utf8() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = FileOptions { compression_method: Stored, compression_level: None, last_modified_time: DateTime::default(), permissions: Some(33188), large_file: false, encrypt_with: None, extended_options: (), alignment: 1, #[cfg(feature = "deflate-zopfli")] zopfli_buffer_size: None, }; // GB18030 // "äø­ę–‡" = [214, 208, 206, 196] let filename = unsafe { String::from_utf8_unchecked(vec![214, 208, 206, 196]) }; writer.start_file(filename, 
options).unwrap(); writer.write_all(b"encoding GB18030").unwrap(); // SHIFT_JIS // "ę—„ę–‡" = [147, 250, 149, 182] let filename = unsafe { String::from_utf8_unchecked(vec![147, 250, 149, 182]) }; writer.start_file(filename, options).unwrap(); writer.write_all(b"encoding SHIFT_JIS").unwrap(); let result = writer.finish().unwrap(); assert_eq!(result.get_ref().len(), 224); let mut v = Vec::new(); v.extend_from_slice(include_bytes!("../tests/data/non_utf8.zip")); assert_eq!(result.get_ref(), &v); } #[test] fn path_to_string() { let mut path = PathBuf::new(); #[cfg(windows)] path.push(r"C:\"); #[cfg(unix)] path.push("/"); path.push("windows"); path.push(".."); path.push("."); path.push("system32"); let path_str = super::path_to_string(&path); assert_eq!(&*path_str, "system32"); } #[test] fn test_shallow_copy() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = FileOptions { compression_method: CompressionMethod::default(), compression_level: None, last_modified_time: DateTime::default(), permissions: Some(33188), large_file: false, encrypt_with: None, extended_options: (), alignment: 0, #[cfg(feature = "deflate-zopfli")] zopfli_buffer_size: None, }; writer.start_file(RT_TEST_FILENAME, options).unwrap(); writer.write_all(RT_TEST_TEXT.as_ref()).unwrap(); writer .shallow_copy_file(RT_TEST_FILENAME, SECOND_FILENAME) .unwrap(); writer .shallow_copy_file(RT_TEST_FILENAME, SECOND_FILENAME) .expect_err("Duplicate filename"); let zip = writer.finish().unwrap(); let mut writer = ZipWriter::new_append(zip).unwrap(); writer .shallow_copy_file(SECOND_FILENAME, SECOND_FILENAME) .expect_err("Duplicate filename"); let mut reader = writer.finish_into_readable().unwrap(); let mut file_names: Vec<&str> = reader.file_names().collect(); file_names.sort(); let mut expected_file_names = vec![RT_TEST_FILENAME, SECOND_FILENAME]; expected_file_names.sort(); assert_eq!(file_names, expected_file_names); let mut first_file_content = String::new(); reader 
.by_name(RT_TEST_FILENAME) .unwrap() .read_to_string(&mut first_file_content) .unwrap(); assert_eq!(first_file_content, RT_TEST_TEXT); let mut second_file_content = String::new(); reader .by_name(SECOND_FILENAME) .unwrap() .read_to_string(&mut second_file_content) .unwrap(); assert_eq!(second_file_content, RT_TEST_TEXT); } #[test] fn test_deep_copy() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = FileOptions { compression_method: CompressionMethod::default(), compression_level: None, last_modified_time: DateTime::default(), permissions: Some(33188), large_file: false, encrypt_with: None, extended_options: (), alignment: 0, #[cfg(feature = "deflate-zopfli")] zopfli_buffer_size: None, }; writer.start_file(RT_TEST_FILENAME, options).unwrap(); writer.write_all(RT_TEST_TEXT.as_ref()).unwrap(); writer .deep_copy_file(RT_TEST_FILENAME, SECOND_FILENAME) .unwrap(); let zip = writer.finish().unwrap().into_inner(); zip.iter().copied().for_each(|x| print!("{:02x}", x)); println!(); let mut writer = ZipWriter::new_append(Cursor::new(zip)).unwrap(); writer .deep_copy_file(RT_TEST_FILENAME, THIRD_FILENAME) .unwrap(); let zip = writer.finish().unwrap(); let mut reader = ZipArchive::new(zip).unwrap(); let mut file_names: Vec<&str> = reader.file_names().collect(); file_names.sort(); let mut expected_file_names = vec![RT_TEST_FILENAME, SECOND_FILENAME, THIRD_FILENAME]; expected_file_names.sort(); assert_eq!(file_names, expected_file_names); let mut first_file_content = String::new(); reader .by_name(RT_TEST_FILENAME) .unwrap() .read_to_string(&mut first_file_content) .unwrap(); assert_eq!(first_file_content, RT_TEST_TEXT); let mut second_file_content = String::new(); reader .by_name(SECOND_FILENAME) .unwrap() .read_to_string(&mut second_file_content) .unwrap(); assert_eq!(second_file_content, RT_TEST_TEXT); } #[test] fn duplicate_filenames() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file("foo/bar/test", 
SimpleFileOptions::default()) .unwrap(); writer .write_all("The quick brown 🦊 jumps over the lazy šŸ•".as_bytes()) .unwrap(); writer .start_file("foo/bar/test", SimpleFileOptions::default()) .expect_err("Expected duplicate filename not to be allowed"); } #[test] fn test_filename_looks_like_zip64_locator() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file( "PK\u{6}\u{7}\0\0\0\u{11}\0\0\0\0\0\0\0\0\0\0\0\0", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); } #[test] fn test_filename_looks_like_zip64_locator_2() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file( "PK\u{6}\u{6}\0\0\0\0\0\0\0\0\0\0PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); } #[test] fn test_filename_looks_like_zip64_locator_2a() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file( "PK\u{6}\u{6}PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); } #[test] fn test_filename_looks_like_zip64_locator_3() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file("\0PK\u{6}\u{6}", SimpleFileOptions::default()) .unwrap(); writer .start_file( "\0\u{4}\0\0PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{3}", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); } #[test] fn test_filename_looks_like_zip64_locator_4() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file("PK\u{6}\u{6}", SimpleFileOptions::default()) .unwrap(); writer .start_file("\0\0\0\0\0\0", SimpleFileOptions::default()) .unwrap(); writer .start_file("\0", SimpleFileOptions::default()) .unwrap(); writer.start_file("", SimpleFileOptions::default()).unwrap(); 
writer .start_file("\0\0", SimpleFileOptions::default()) .unwrap(); writer .start_file( "\0\0\0PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); } #[test] fn test_filename_looks_like_zip64_locator_5() -> ZipResult<()> { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .add_directory("", SimpleFileOptions::default().with_alignment(21)) .unwrap(); let mut writer = ZipWriter::new_append(writer.finish().unwrap()).unwrap(); writer.shallow_copy_file("/", "").unwrap(); writer.shallow_copy_file("", "\0").unwrap(); writer.shallow_copy_file("\0", "PK\u{6}\u{6}").unwrap(); let mut writer = ZipWriter::new_append(writer.finish().unwrap()).unwrap(); writer .start_file("\0\0\0\0\0\0", SimpleFileOptions::default()) .unwrap(); let mut writer = ZipWriter::new_append(writer.finish().unwrap()).unwrap(); writer .start_file( "#PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", SimpleFileOptions::default(), ) .unwrap(); let zip = writer.finish().unwrap(); let _ = ZipArchive::new(zip).unwrap(); Ok(()) } #[test] fn remove_shallow_copy_keeps_original() -> ZipResult<()> { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file("original", SimpleFileOptions::default()) .unwrap(); writer.write_all(RT_TEST_TEXT.as_bytes()).unwrap(); writer .shallow_copy_file("original", "shallow_copy") .unwrap(); writer.abort_file().unwrap(); let mut zip = ZipArchive::new(writer.finish().unwrap()).unwrap(); let mut file = zip.by_name("original").unwrap(); let mut contents = Vec::new(); file.read_to_end(&mut contents).unwrap(); assert_eq!(RT_TEST_TEXT.as_bytes(), contents); Ok(()) } #[test] fn remove_encrypted_file() -> ZipResult<()> { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let first_file_options = SimpleFileOptions::default() .with_alignment(65535) .with_deprecated_encryption(b"Password"); writer.start_file("", first_file_options).unwrap(); 
writer.abort_file().unwrap(); let zip = writer.finish().unwrap(); let mut writer = ZipWriter::new(zip); writer.start_file("", SimpleFileOptions::default()).unwrap(); Ok(()) } #[test] fn remove_encrypted_aligned_symlink() -> ZipResult<()> { let mut options = SimpleFileOptions::default(); options = options.with_deprecated_encryption(b"Password"); options.alignment = 65535; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.add_symlink("", "s\t\0\0ggggg\0\0", options).unwrap(); writer.abort_file().unwrap(); let zip = writer.finish().unwrap(); let mut writer = ZipWriter::new_append(zip).unwrap(); writer.start_file("", SimpleFileOptions::default()).unwrap(); Ok(()) } #[cfg(feature = "deflate-zopfli")] #[test] fn zopfli_empty_write() -> ZipResult<()> { let mut options = SimpleFileOptions::default(); options = options .compression_method(CompressionMethod::default()) .compression_level(Some(264)); let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.start_file("", options).unwrap(); writer.write_all(&[]).unwrap(); writer.write_all(&[]).unwrap(); Ok(()) } #[test] fn crash_with_no_features() -> ZipResult<()> { const ORIGINAL_FILE_NAME: &str = "PK\u{6}\u{6}\0\0\0\0\0\0\0\0\0\u{2}g\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\0\0PK\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\u{7}\0\t'"; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let mut options = SimpleFileOptions::default(); options = options.with_alignment(3584).compression_method(Stored); writer.start_file(ORIGINAL_FILE_NAME, options)?; let archive = writer.finish()?; let mut writer = ZipWriter::new_append(archive)?; writer.shallow_copy_file(ORIGINAL_FILE_NAME, "\u{6}\\")?; writer.finish()?; Ok(()) } #[test] fn test_alignment() { let page_size = 4096; let options = SimpleFileOptions::default() .compression_method(Stored) .with_alignment(page_size); let mut zip = ZipWriter::new(Cursor::new(Vec::new())); let contents = b"sleeping"; let () = zip.start_file("sleep", options).unwrap(); 
let _count = zip.write(&contents[..]).unwrap(); let mut zip = zip.finish_into_readable().unwrap(); let file = zip.by_index(0).unwrap(); assert_eq!(file.name(), "sleep"); assert_eq!(file.data_start(), page_size.into()); } #[test] fn test_alignment_2() { let page_size = 4096; let mut data = Vec::new(); { let options = SimpleFileOptions::default() .compression_method(Stored) .with_alignment(page_size); let mut zip = ZipWriter::new(Cursor::new(&mut data)); let contents = b"sleeping"; let () = zip.start_file("sleep", options).unwrap(); let _count = zip.write(&contents[..]).unwrap(); } assert_eq!(data[4096..4104], b"sleeping"[..]); { let mut zip = ZipArchive::new(Cursor::new(&mut data)).unwrap(); let file = zip.by_index(0).unwrap(); assert_eq!(file.name(), "sleep"); assert_eq!(file.data_start(), page_size.into()); } } #[test] fn test_crash_short_read() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let comment = vec![ 1, 80, 75, 5, 6, 237, 237, 237, 237, 237, 237, 237, 237, 44, 255, 191, 255, 255, 255, 255, 255, 255, 255, 255, 16, ] .into_boxed_slice(); writer.set_raw_comment(comment); let options = SimpleFileOptions::default() .compression_method(Stored) .with_alignment(11823); writer.start_file("", options).unwrap(); writer.write_all(&[255, 255, 44, 255, 0]).unwrap(); let written = writer.finish().unwrap(); let _ = ZipWriter::new_append(written).unwrap(); } #[cfg(all(feature = "_deflate-any", feature = "aes-crypto"))] #[test] fn test_fuzz_failure_2024_05_08() -> ZipResult<()> { let mut first_writer = ZipWriter::new(Cursor::new(Vec::new())); let mut second_writer = ZipWriter::new(Cursor::new(Vec::new())); let options = SimpleFileOptions::default() .compression_method(Stored) .with_alignment(46036); second_writer.add_symlink("\0", "", options)?; let second_archive = second_writer.finish_into_readable()?.into_inner(); let mut second_writer = ZipWriter::new_append(second_archive)?; let options = SimpleFileOptions::default() 
.compression_method(CompressionMethod::Deflated) .large_file(true) .with_alignment(46036) .with_aes_encryption(crate::AesMode::Aes128, "\0\0"); second_writer.add_symlink("", "", options)?; let second_archive = second_writer.finish_into_readable()?.into_inner(); let mut second_writer = ZipWriter::new_append(second_archive)?; let options = SimpleFileOptions::default().compression_method(Stored); second_writer.start_file(" ", options)?; let second_archive = second_writer.finish_into_readable()?; first_writer.merge_archive(second_archive)?; let _ = ZipArchive::new(first_writer.finish()?)?; Ok(()) } #[cfg(all(feature = "bzip2", not(miri)))] #[test] fn test_fuzz_failure_2024_06_08() -> ZipResult<()> { use crate::write::ExtendedFileOptions; use CompressionMethod::Bzip2; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); const SYMLINK_PATH: &str = "PK\u{6}\u{6}K\u{6}\u{6}\u{6}\0\0\0\0\u{18}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{2},\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{2},\0\0\0\0\0\0\0\0\0\0l\0\0\0\0\0\0PK\u{6}\u{7}P\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0"; let sub_writer = { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FileOptions { compression_method: Bzip2, compression_level: None, last_modified_time: DateTime::from_date_and_time(1980, 5, 20, 21, 0, 57)?, permissions: None, large_file: false, encrypt_with: None, extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![].into(), }, alignment: 2048, ..Default::default() }; writer.add_symlink_from_path(SYMLINK_PATH, "||\0\0\0\0", options)?; writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?; writer.deep_copy_file_from_path(SYMLINK_PATH, "")?; writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?; writer.abort_file()?; writer }; writer.merge_archive(sub_writer.finish_into_readable()?)?; 
writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?; writer.deep_copy_file_from_path(SYMLINK_PATH, "foo")?; let _ = writer.finish_into_readable()?; Ok(()) } #[test] fn test_short_extra_data() { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FileOptions { extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![99, 0, 15, 0, 207].into(), }, ..Default::default() }; assert!(writer.start_file_from_path("", options).is_err()); } #[test] #[cfg(not(feature = "unreserved"))] fn test_invalid_extra_data() -> ZipResult<()> { use crate::write::ExtendedFileOptions; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FileOptions { compression_method: Stored, compression_level: None, last_modified_time: DateTime::from_date_and_time(1980, 1, 4, 6, 54, 0)?, permissions: None, large_file: false, encrypt_with: None, extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![ 7, 0, 15, 0, 207, 117, 177, 117, 112, 2, 0, 255, 255, 131, 255, 255, 255, 80, 185, ] .into(), }, alignment: 32787, ..Default::default() }; assert!(writer.start_file_from_path("", options).is_err()); Ok(()) } #[test] #[cfg(not(feature = "unreserved"))] fn test_invalid_extra_data_unreserved() { use crate::write::ExtendedFileOptions; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); let options = FileOptions { compression_method: Stored, compression_level: None, last_modified_time: DateTime::from_date_and_time(2021, 8, 8, 1, 0, 29).unwrap(), permissions: None, large_file: true, encrypt_with: None, extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![ 1, 41, 4, 0, 1, 255, 245, 117, 117, 112, 5, 0, 80, 255, 149, 255, 247, ] .into(), }, alignment: 4103, ..Default::default() }; assert!(writer.start_file_from_path("", options).is_err()); } #[cfg(feature = 
"deflate64")] #[test] fn test_fuzz_crash_2024_06_13a() -> ZipResult<()> { use crate::write::ExtendedFileOptions; use CompressionMethod::Deflate64; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FileOptions { compression_method: Deflate64, compression_level: None, last_modified_time: DateTime::from_date_and_time(2039, 4, 17, 6, 18, 19)?, permissions: None, large_file: true, encrypt_with: None, extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![].into(), }, alignment: 4, ..Default::default() }; writer.add_directory_from_path("", options)?; let _ = writer.finish_into_readable()?; Ok(()) } #[test] fn test_fuzz_crash_2024_06_13b() -> ZipResult<()> { use crate::write::ExtendedFileOptions; let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let sub_writer = { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FileOptions { compression_method: Stored, compression_level: None, last_modified_time: DateTime::from_date_and_time(1980, 4, 14, 6, 11, 54)?, permissions: None, large_file: false, encrypt_with: None, extended_options: ExtendedFileOptions { extra_data: vec![].into(), central_extra_data: vec![].into(), }, alignment: 185, ..Default::default() }; writer.add_symlink_from_path("", "", options)?; writer }; writer.merge_archive(sub_writer.finish_into_readable()?)?; writer.deep_copy_file_from_path("", "_copy")?; let _ = writer.finish_into_readable()?; Ok(()) } #[test] fn test_fuzz_crash_2024_06_14() -> ZipResult<()> { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let sub_writer = { let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer.set_flush_on_finish_file(false); let options = FullFileOptions { compression_method: Stored, large_file: true, alignment: 93, ..Default::default() }; 
        writer.start_file_from_path("\0", options)?;
        writer = ZipWriter::new_append(writer.finish()?)?;
        writer.deep_copy_file_from_path("\0", "")?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    writer.deep_copy_file_from_path("", "copy")?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_14a() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2083, 5, 30, 21, 45, 35)?,
        permissions: None,
        large_file: false,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 2565,
        ..Default::default()
    };
    writer.add_symlink_from_path("", "", options)?;
    writer.abort_file()?;
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::default(),
        permissions: None,
        large_file: false,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 0,
        ..Default::default()
    };
    writer.start_file_from_path("", options)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[allow(deprecated)]
#[test]
fn test_fuzz_crash_2024_06_14b() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2078, 3, 6, 12, 48, 58)?,
        permissions: None,
        large_file: true,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 65521,
        ..Default::default()
    };
    writer.start_file_from_path("\u{4}\0@\n//\u{c}", options)?;
    writer = ZipWriter::new_append(writer.finish()?)?;
    writer.abort_file()?;
    let options = FileOptions {
        compression_method: CompressionMethod::Unsupported(65535),
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2055, 10, 2, 11, 48, 49)?,
        permissions: None,
        large_file: true,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![255, 255, 1, 0, 255, 0, 0, 0, 0].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 65535,
        ..Default::default()
    };
    writer.add_directory_from_path("", options)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_14c() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let options = FileOptions {
            compression_method: Stored,
            compression_level: None,
            last_modified_time: DateTime::from_date_and_time(2060, 4, 6, 13, 13, 3)?,
            permissions: None,
            large_file: true,
            encrypt_with: None,
            extended_options: ExtendedFileOptions {
                extra_data: vec![].into(),
                central_extra_data: vec![].into(),
            },
            alignment: 0,
            ..Default::default()
        };
        writer.start_file_from_path("\0", options)?;
        writer.write_all(&([]))?;
        writer = ZipWriter::new_append(writer.finish()?)?;
        writer.deep_copy_file_from_path("\0", "")?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    writer.deep_copy_file_from_path("", "_copy")?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[cfg(all(feature = "_deflate-any", feature = "aes-crypto"))]
#[test]
fn test_fuzz_crash_2024_06_14d() -> ZipResult<()> {
    use crate::write::EncryptWith::Aes;
    use crate::AesMode::Aes256;
    use CompressionMethod::Deflated;
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Deflated,
        compression_level: Some(5),
        last_modified_time: DateTime::from_date_and_time(2107, 4, 8, 15, 54, 19)?,
        permissions: None,
        large_file: true,
        encrypt_with: Some(Aes {
            mode: Aes256,
            password: "",
        }),
        extended_options: ExtendedFileOptions {
            extra_data: vec![2, 0, 1, 0, 0].into(),
            central_extra_data: vec![
                35, 229, 2, 0, 41, 41, 231, 44, 2, 0, 52, 233, 82, 201, 0, 0, 3, 0, 2, 0, 233,
                255, 3, 0, 2, 0, 26, 154, 38, 251, 0, 0,
            ]
            .into(),
        },
        alignment: 65535,
        ..Default::default()
    };
    assert!(writer.add_directory_from_path("", options).is_err());
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_14e() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(1988, 1, 1, 1, 6, 26)?,
        permissions: None,
        large_file: true,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![76, 0, 1, 0, 0, 2, 0, 0, 0].into(),
            central_extra_data: vec![
                1, 149, 1, 0, 255, 3, 0, 0, 0, 2, 255, 0, 0, 12, 65, 1, 0, 0, 67, 149, 0, 0, 76,
                149, 2, 0, 149, 149, 67, 149, 0, 0,
            ]
            .into(),
        },
        alignment: 65535,
        ..Default::default()
    };
    assert!(writer.add_directory_from_path("", options).is_err());
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[allow(deprecated)]
#[test]
fn test_fuzz_crash_2024_06_17() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let sub_writer = {
            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
            writer.set_flush_on_finish_file(false);
            let sub_writer = {
                let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                writer.set_flush_on_finish_file(false);
                let sub_writer = {
                    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                    writer.set_flush_on_finish_file(false);
                    let sub_writer = {
                        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                        writer.set_flush_on_finish_file(false);
                        let sub_writer = {
                            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                            writer.set_flush_on_finish_file(false);
                            let sub_writer = {
                                let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                writer.set_flush_on_finish_file(false);
                                let sub_writer = {
                                    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                    writer.set_flush_on_finish_file(false);
                                    let sub_writer = {
                                        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                        writer.set_flush_on_finish_file(false);
                                        let sub_writer = {
                                            let mut writer =
                                                ZipWriter::new(Cursor::new(Vec::new()));
                                            writer.set_flush_on_finish_file(false);
                                            let options = FileOptions {
                                                compression_method: CompressionMethod::Unsupported(
                                                    65535,
                                                ),
                                                compression_level: Some(5),
                                                last_modified_time: DateTime::from_date_and_time(
                                                    2107, 2, 8, 15, 0, 0,
                                                )?,
                                                permissions: None,
                                                large_file: true,
                                                encrypt_with: Some(ZipCrypto(
                                                    ZipCryptoKeys::of(
                                                        0x63ff, 0xc62d3103, 0xfffe00ea,
                                                    ),
                                                    PhantomData,
                                                )),
                                                extended_options: ExtendedFileOptions {
                                                    extra_data: vec![].into(),
                                                    central_extra_data: vec![].into(),
                                                },
                                                alignment: 255,
                                                ..Default::default()
                                            };
                                            writer.add_symlink_from_path("1\0PK\u{6}\u{6}\u{b}\u{6}\u{6}\u{6}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\u{b}\0\0PK\u{1}\u{2},\0\0\0\0\0\0\0\0\0\0\0\u{10}\0\0\0K\u{6}\u{6}\0\0\0\0\0\0\0\0PK\u{2}\u{6}", "", options)?;
                                            writer = ZipWriter::new_append(
                                                writer.finish_into_readable()?.into_inner(),
                                            )?;
                                            writer
                                        };
                                        writer.merge_archive(sub_writer.finish_into_readable()?)?;
                                        writer = ZipWriter::new_append(
                                            writer.finish_into_readable()?.into_inner(),
                                        )?;
                                        let options = FileOptions {
                                            compression_method: Stored,
                                            compression_level: None,
                                            last_modified_time: DateTime::from_date_and_time(
                                                1992, 7, 3, 0, 0, 0,
                                            )?,
                                            permissions: None,
                                            large_file: true,
                                            encrypt_with: None,
                                            extended_options: ExtendedFileOptions {
                                                extra_data: vec![].into(),
                                                central_extra_data: vec![].into(),
                                            },
                                            alignment: 43,
                                            ..Default::default()
                                        };
                                        writer.start_file_from_path(
                                            "\0\0\0\u{3}\0\u{1a}\u{1a}\u{1a}\u{1a}\u{1a}\u{1a}",
                                            options,
                                        )?;
                                        let options = FileOptions {
                                            compression_method: Stored,
                                            compression_level: None,
                                            last_modified_time: DateTime::from_date_and_time(
                                                2006, 3, 27, 2, 24, 26,
                                            )?,
                                            permissions: None,
                                            large_file: false,
                                            encrypt_with: None,
                                            extended_options: ExtendedFileOptions {
                                                extra_data: vec![].into(),
                                                central_extra_data: vec![].into(),
                                            },
                                            alignment: 26,
                                            ..Default::default()
                                        };
                                        writer.start_file_from_path("\0K\u{6}\u{6}\0PK\u{6}\u{7}PK\u{6}\u{6}\0\0\0\0\0\0\0\0PK\u{2}\u{6}", options)?;
                                        writer = ZipWriter::new_append(
                                            writer.finish_into_readable()?.into_inner(),
                                        )?;
                                        let options = FileOptions {
                                            compression_method: Stored,
                                            compression_level: Some(17),
                                            last_modified_time: DateTime::from_date_and_time(
                                                2103, 4, 10, 23, 15, 18,
                                            )?,
                                            permissions: Some(3284386755),
                                            large_file: true,
                                            encrypt_with: Some(ZipCrypto(
                                                ZipCryptoKeys::of(
                                                    0x8888c5bf, 0x88888888, 0xff888888,
                                                ),
                                                PhantomData,
                                            )),
                                            extended_options: ExtendedFileOptions {
                                                extra_data: vec![3, 0, 1, 0, 255, 144, 136, 0, 0]
                                                    .into(),
                                                central_extra_data: vec![].into(),
                                            },
                                            alignment: 65535,
                                            ..Default::default()
                                        };
                                        writer.add_symlink_from_path("", "\nu", options)?;
                                        writer = ZipWriter::new_append(writer.finish()?)?;
                                        writer
                                    };
                                    writer.merge_archive(sub_writer.finish_into_readable()?)?;
                                    writer = ZipWriter::new_append(
                                        writer.finish_into_readable()?.into_inner(),
                                    )?;
                                    writer
                                };
                                writer.merge_archive(sub_writer.finish_into_readable()?)?;
                                writer = ZipWriter::new_append(writer.finish()?)?;
                                writer
                            };
                            writer.merge_archive(sub_writer.finish_into_readable()?)?;
                            writer =
                                ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
                            writer.abort_file()?;
                            let options = FileOptions {
                                compression_method: CompressionMethod::Unsupported(49603),
                                compression_level: Some(20),
                                last_modified_time: DateTime::from_date_and_time(
                                    2047, 4, 14, 3, 15, 14,
                                )?,
                                permissions: Some(3284386755),
                                large_file: true,
                                encrypt_with: Some(ZipCrypto(
                                    ZipCryptoKeys::of(0xc3, 0x0, 0x0),
                                    PhantomData,
                                )),
                                extended_options: ExtendedFileOptions {
                                    extra_data: vec![].into(),
                                    central_extra_data: vec![].into(),
                                },
                                alignment: 0,
                                ..Default::default()
                            };
                            writer.add_directory_from_path("", options)?;
                            writer.deep_copy_file_from_path("/", "")?;
                            writer.shallow_copy_file_from_path("", "copy")?;
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            assert!(writer.shallow_copy_file_from_path("", "copy").is_err());
                            writer
                        };
                        writer.merge_archive(sub_writer.finish_into_readable()?)?;
                        writer
                    };
                    writer.merge_archive(sub_writer.finish_into_readable()?)?;
                    writer
                };
                writer.merge_archive(sub_writer.finish_into_readable()?)?;
                writer
            };
            writer.merge_archive(sub_writer.finish_into_readable()?)?;
            writer
        };
        writer.merge_archive(sub_writer.finish_into_readable()?)?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_17a() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    const PATH_1: &str = "\0I\01\0P\0\0\u{2}\0\0\u{1a}\u{1a}\u{1a}\u{1a}\u{1b}\u{1a}UT\u{5}\0\0\u{1a}\u{1a}\u{1a}\u{1a}UT\u{5}\0\u{1}\0\u{1a}\u{1a}\u{1a}UT\t\0uc\u{5}\0\0\0\0\u{7f}\u{7f}\u{7f}\u{7f}PK\u{6}";
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let sub_writer = {
            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
            writer.set_flush_on_finish_file(false);
            let options = FileOptions {
                compression_method: Stored,
                compression_level: None,
                last_modified_time: DateTime::from_date_and_time(1981, 1, 1, 0, 24, 21)?,
                permissions: Some(16908288),
                large_file: false,
                encrypt_with: None,
                extended_options: ExtendedFileOptions {
                    extra_data: vec![].into(),
                    central_extra_data: vec![].into(),
                },
                alignment: 20555,
                ..Default::default()
            };
            writer.start_file_from_path(
                "\0\u{7}\u{1}\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{2};",
                options,
            )?;
            writer.write_all(&([
                255, 255, 255, 255, 253, 253, 253, 203, 203, 203, 253, 253, 253, 253, 255,
                255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 249, 191, 225, 225,
                241, 197,
            ]))?;
            writer.write_all(&([
                197, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
                255, 75, 0,
            ]))?;
            writer
        };
        writer.merge_archive(sub_writer.finish_into_readable()?)?;
        writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
        let options = FileOptions {
            compression_method: Stored,
            compression_level: None,
            last_modified_time: DateTime::from_date_and_time(1980, 11, 14, 10, 46, 47)?,
            permissions: None,
            large_file: false,
            encrypt_with: None,
            extended_options: ExtendedFileOptions {
                extra_data: vec![].into(),
                central_extra_data: vec![].into(),
            },
            alignment: 0,
            ..Default::default()
        };
        writer.start_file_from_path(PATH_1, options)?;
        writer.deep_copy_file_from_path(PATH_1, "eee\u{6}\0\0\0\0\0\0\0\0\0\0\0$\0\0\0\0\0\0\u{7f}\u{7f}PK\u{6}\u{6}K\u{6}\u{6}\u{6}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{1e},\0\0\0\0\0\0\0\0\0\0\0\u{8}\0*\0\0\u{1}PK\u{6}\u{7}PK\u{6}\u{6}\0\0\0\0\0\0\0\0}K\u{2}\u{6}")?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    writer.deep_copy_file_from_path(PATH_1, "")?;
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    writer.shallow_copy_file_from_path("", "copy")?;
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
#[allow(clippy::octal_escapes)]
#[cfg(all(feature = "bzip2", not(miri)))]
fn test_fuzz_crash_2024_06_17b() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let sub_writer = {
            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
            writer.set_flush_on_finish_file(false);
            let sub_writer = {
                let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                writer.set_flush_on_finish_file(false);
                let sub_writer = {
                    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                    writer.set_flush_on_finish_file(false);
                    let sub_writer = {
                        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                        writer.set_flush_on_finish_file(false);
                        let sub_writer = {
                            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                            writer.set_flush_on_finish_file(false);
                            let sub_writer = {
                                let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                writer.set_flush_on_finish_file(false);
                                let sub_writer = {
                                    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                    writer.set_flush_on_finish_file(false);
                                    let sub_writer = {
                                        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                                        writer.set_flush_on_finish_file(false);
                                        let options = FileOptions {
                                            compression_method: Stored,
                                            compression_level: None,
                                            last_modified_time: DateTime::from_date_and_time(
                                                1981, 1, 1, 0, 0, 21,
                                            )?,
                                            permissions: Some(16908288),
                                            large_file: false,
                                            encrypt_with: None,
                                            extended_options: ExtendedFileOptions {
                                                extra_data: vec![].into(),
                                                central_extra_data: vec![].into(),
                                            },
                                            alignment: 20555,
                                            ..Default::default()
                                        };
                                        writer.start_file_from_path("\0\u{7}\u{1}\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{2};\u{1a}\u{18}\u{1a}UT\t.........................\0u", options)?;
                                        writer
                                    };
                                    writer.merge_archive(sub_writer.finish_into_readable()?)?;
                                    let options = FileOptions {
                                        compression_method: CompressionMethod::Bzip2,
                                        compression_level: Some(5),
                                        last_modified_time: DateTime::from_date_and_time(
                                            2055, 7, 7, 3, 6, 6,
                                        )?,
                                        permissions: None,
                                        large_file: false,
                                        encrypt_with: None,
                                        extended_options: ExtendedFileOptions {
                                            extra_data: vec![].into(),
                                            central_extra_data: vec![].into(),
                                        },
                                        alignment: 0,
                                        ..Default::default()
                                    };
                                    writer.start_file_from_path("\0\0\0\0..\0\0\0\0\0\u{7f}\u{7f}PK\u{6}\u{6}K\u{6}\u{6}\u{6}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\u{1}\0\0PK\u{1}\u{1e},\0\0\0\0\0\0\0\0\0\0\0\u{8}\0*\0\0\u{1}PK\u{6}\u{7}PK\u{6}\u{6}\0\0\0\0\0\0\0\0}K\u{2}\u{6}", options)?;
                                    writer = ZipWriter::new_append(
                                        writer.finish_into_readable()?.into_inner(),
                                    )?;
                                    writer
                                };
                                writer.merge_archive(sub_writer.finish_into_readable()?)?;
                                writer = ZipWriter::new_append(
                                    writer.finish_into_readable()?.into_inner(),
                                )?;
                                writer
                            };
                            writer.merge_archive(sub_writer.finish_into_readable()?)?;
                            writer =
                                ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
                            writer
                        };
                        writer.merge_archive(sub_writer.finish_into_readable()?)?;
                        writer =
                            ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
                        writer
                    };
                    writer.merge_archive(sub_writer.finish_into_readable()?)?;
                    writer
                };
                writer.merge_archive(sub_writer.finish_into_readable()?)?;
                writer
            };
            writer.merge_archive(sub_writer.finish_into_readable()?)?;
            writer
        };
        writer.merge_archive(sub_writer.finish_into_readable()?)?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_18() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_raw_comment(Box::<[u8]>::from([
        80, 75, 5, 6, 255, 255, 255, 255, 255, 255, 80, 75, 5, 6, 255, 255, 255, 255, 255,
        255, 13, 0, 13, 13, 13, 13, 13, 255, 255, 255, 255, 255, 255, 255, 255,
    ]));
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        writer.set_raw_comment(Box::new([]));
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    writer = ZipWriter::new_append(writer.finish()?)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_18a() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    writer.set_raw_comment(Box::<[u8]>::from([]));
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let sub_writer = {
            let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
            writer.set_flush_on_finish_file(false);
            let sub_writer = {
                let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
                writer.set_flush_on_finish_file(false);
                let options = FullFileOptions {
                    compression_method: Stored,
                    compression_level: None,
                    last_modified_time: DateTime::from_date_and_time(2107, 4, 8, 14, 0, 19)?,
                    permissions: None,
                    large_file: false,
                    encrypt_with: None,
                    extended_options: ExtendedFileOptions {
                        extra_data: vec![
                            182, 180, 1, 0, 180, 182, 74, 0, 0, 200, 0, 0, 0, 2, 0, 0, 0,
                        ]
                        .into(),
                        central_extra_data: vec![].into(),
                    },
                    alignment: 1542,
                    ..Default::default()
                };
                writer.start_file_from_path("\0\0PK\u{6}\u{6}K\u{6}PK\u{3}\u{4}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\u{1}\u{1}\0PK\u{1}\u{2},\0\0\0\0\0\0\0\0\0\0\0P\u{7}\u{4}/.\0KP\0\0;\0\0\0\u{1e}\0\0\0\0\0\0\0\0\0\0\0\0\0", options)?;
                let finished = writer.finish_into_readable()?;
                assert_eq!(1, finished.file_names().count());
                writer = ZipWriter::new_append(finished.into_inner())?;
                let options = FullFileOptions {
                    compression_method: Stored,
                    compression_level: Some(5),
                    last_modified_time: DateTime::from_date_and_time(2107, 4, 1, 0, 0, 0)?,
                    permissions: None,
                    large_file: false,
                    encrypt_with: Some(ZipCrypto(
                        ZipCryptoKeys::of(0x0, 0x62e4b50, 0x100),
                        PhantomData,
                    )),
                    ..Default::default()
                };
                writer.add_symlink_from_path(
                    "\0K\u{6}\0PK\u{6}\u{7}PK\u{6}\u{6}\0\0\0\0\0\0\0\0PK\u{2}\u{6}",
                    "\u{8}\0\0\0\0/\0",
                    options,
                )?;
                let finished = writer.finish_into_readable()?;
                assert_eq!(2, finished.file_names().count());
                writer = ZipWriter::new_append(finished.into_inner())?;
                assert_eq!(2, writer.files.len());
                writer
            };
            let finished = sub_writer.finish_into_readable()?;
            assert_eq!(2, finished.file_names().count());
            writer.merge_archive(finished)?;
            writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
            writer
        };
        writer.merge_archive(sub_writer.finish_into_readable()?)?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[cfg(all(feature = "bzip2", feature = "aes-crypto", not(miri)))]
#[test]
fn test_fuzz_crash_2024_06_18b() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(true);
    writer.set_raw_comment([0].into());
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    assert_eq!(writer.get_raw_comment()[0], 0);
    let options = FileOptions {
        compression_method: CompressionMethod::Bzip2,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2009, 6, 3, 13, 37, 39)?,
        permissions: Some(2644352413),
        large_file: true,
        encrypt_with: Some(crate::write::EncryptWith::Aes {
            mode: crate::AesMode::Aes256,
            password: "",
        }),
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 255,
        ..Default::default()
    };
    writer.add_symlink_from_path("", "", options)?;
    writer.deep_copy_file_from_path("", "PK\u{5}\u{6}\0\0\0\0\0\0\0\0\0\0\0\0\0\u{4}\0\0\0")?;
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    assert_eq!(writer.get_raw_comment()[0], 0);
    writer.deep_copy_file_from_path(
        "PK\u{5}\u{6}\0\0\0\0\0\0\0\0\0\0\0\0\0\u{4}\0\0\0",
        "\u{2}yy\u{5}qu\0",
    )?;
    let finished = writer.finish()?;
    let archive = ZipArchive::new(finished.clone())?;
    assert_eq!(archive.comment(), [0]);
    writer = ZipWriter::new_append(finished)?;
    assert_eq!(writer.get_raw_comment()[0], 0);
    let _ = writer.finish_into_readable()?;
    Ok(())
}

#[test]
fn test_fuzz_crash_2024_06_19() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(1980, 3, 1, 19, 55, 58)?,
        permissions: None,
        large_file: false,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 256,
        ..Default::default()
    };
    writer.start_file_from_path(
        "\0\0\0PK\u{5}\u{6}\0\0\0\0\u{1}\0\u{12}\u{6}\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\0",
        options,
    )?;
    writer.set_flush_on_finish_file(false);
    writer.shallow_copy_file_from_path(
        "\0\0\0PK\u{5}\u{6}\0\0\0\0\u{1}\0\u{12}\u{6}\0\0\0\0\0\u{1}\0\0\0\0\0\0\0\0\0",
        "",
    )?;
    writer.set_flush_on_finish_file(false);
    writer.deep_copy_file_from_path("", "copy")?;
    writer.abort_file()?;
    writer.set_flush_on_finish_file(false);
    writer.set_raw_comment([255, 0].into());
    writer.abort_file()?;
    assert_eq!(writer.get_raw_comment(), [255, 0]);
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    assert_eq!(writer.get_raw_comment(), [255, 0]);
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::default(),
        permissions: None,
        large_file: false,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        ..Default::default()
    };
    writer.start_file_from_path("", options)?;
    assert_eq!(writer.get_raw_comment(), [255, 0]);
    let archive = writer.finish_into_readable()?;
    assert_eq!(archive.comment(), [255, 0]);
    Ok(())
}

#[test]
fn fuzz_crash_2024_06_21() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FullFileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(1980, 2, 1, 0, 0, 0)?,
        permissions: None,
        large_file: false,
        encrypt_with: None,
        ..Default::default()
    };
    const LONG_PATH: &str = "\0@PK\u{6}\u{6}\u{7}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0@/\0\0\00ĪPK\u{5}\u{6}O\0\u{10}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0@PK\u{6}\u{7}\u{6}\0/@\0\0\0\0\0\0\0\0 \0\0";
    writer.start_file_from_path(LONG_PATH, options)?;
    writer = ZipWriter::new_append(writer.finish()?)?;
    writer.deep_copy_file_from_path(LONG_PATH, "oo\0\0\0")?;
    writer.abort_file()?;
    writer.set_raw_comment([33].into());
    let archive = writer.finish_into_readable()?;
    writer = ZipWriter::new_append(archive.into_inner())?;
    assert!(writer.get_raw_comment().starts_with(&[33]));
    let archive = writer.finish_into_readable()?;
    assert!(archive.comment().starts_with(&[33]));
    Ok(())
}

#[test]
#[cfg(all(feature = "bzip2", not(miri)))]
fn fuzz_crash_2024_07_17() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: CompressionMethod::Bzip2,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2095, 2, 16, 21, 0, 1)?,
        permissions: Some(84238341),
        large_file: true,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![117, 99, 6, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 2, 0, 0, 0].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 65535,
        ..Default::default()
    };
    writer.start_file_from_path("", options)?;
    //writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    writer.deep_copy_file_from_path("", "copy")?;
    let _ = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    Ok(())
}

#[test]
fn fuzz_crash_2024_07_19() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(1980, 6, 1, 0, 34, 47)?,
        permissions: None,
        large_file: true,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 45232,
        ..Default::default()
    };
    writer.add_directory_from_path("", options)?;
    writer.deep_copy_file_from_path("/", "")?;
    writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    writer.deep_copy_file_from_path("", "copy")?;
    let _ = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    Ok(())
}

#[test]
#[cfg(feature = "aes-crypto")]
fn fuzz_crash_2024_07_19a() -> ZipResult<()> {
    use crate::write::EncryptWith::Aes;
    use crate::AesMode::Aes128;
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(false);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2107, 6, 5, 13, 0, 21)?,
        permissions: None,
        large_file: true,
        encrypt_with: Some(Aes {
            mode: Aes128,
            password: "",
        }),
        extended_options: ExtendedFileOptions {
            extra_data: vec![3, 0, 4, 0, 209, 53, 53, 8, 2, 61, 0, 0].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 65535,
        ..Default::default()
    };
    writer.start_file_from_path("", options)?;
    let _ = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    Ok(())
}

#[test]
fn fuzz_crash_2024_07_20() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    writer.set_flush_on_finish_file(true);
    let options = FileOptions {
        compression_method: Stored,
        compression_level: None,
        last_modified_time: DateTime::from_date_and_time(2041, 8, 2, 19, 38, 0)?,
        permissions: None,
        large_file: false,
        encrypt_with: None,
        extended_options: ExtendedFileOptions {
            extra_data: vec![].into(),
            central_extra_data: vec![].into(),
        },
        alignment: 0,
        ..Default::default()
    };
    writer.add_directory_from_path("\0\0\0\0\0\0\07é»»", options)?;
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.set_flush_on_finish_file(false);
        let options = FileOptions {
            compression_method: Stored,
            compression_level: None,
            last_modified_time: DateTime::default(),
            permissions: None,
            large_file: false,
            encrypt_with: None,
            extended_options: ExtendedFileOptions {
                extra_data: vec![].into(),
                central_extra_data: vec![].into(),
            },
            alignment: 4,
            ..Default::default()
        };
        writer.add_directory_from_path("\0\0\0é»»", options)?;
        writer = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
        writer.abort_file()?;
        let options = FileOptions {
            compression_method: Stored,
            compression_level: None,
            last_modified_time: DateTime::from_date_and_time(1980, 1, 1, 0, 7, 0)?,
            permissions: Some(2663103419),
            large_file: false,
            encrypt_with: None,
            extended_options: ExtendedFileOptions {
                extra_data: vec![].into(),
                central_extra_data: vec![].into(),
            },
            alignment: 32256,
            ..Default::default()
        };
        writer.add_directory_from_path("\0", options)?;
        writer = ZipWriter::new_append(writer.finish()?)?;
        writer
    };
    writer.merge_archive(sub_writer.finish_into_readable()?)?;
    let _ = ZipWriter::new_append(writer.finish_into_readable()?.into_inner())?;
    Ok(())
}

#[test]
fn fuzz_crash_2024_07_21() -> ZipResult<()> {
    let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
    let sub_writer = {
        let mut writer = ZipWriter::new(Cursor::new(Vec::new()));
        writer.add_directory_from_path(
            "",
            FileOptions {
                compression_method: Stored,
                compression_level: None,
                last_modified_time: DateTime::from_date_and_time(2105, 8, 1, 15, 0, 0)?,
                permissions: None,
                large_file: false,
                encrypt_with: None,
                extended_options: ExtendedFileOptions {
                    extra_data: vec![].into(),
                    central_extra_data: vec![].into(),
                },
                alignment: 0,
                ..Default::default()
            },
        )?;
        writer.abort_file()?;
        let mut writer = ZipWriter::new_append(writer.finish()?)?;
        writer.add_directory_from_path(
            "",
            FileOptions {
                compression_method: Stored,
                compression_level: None,
                last_modified_time: DateTime::default(),
                permissions: None,
                large_file: false,
                encrypt_with: None,
                extended_options: ExtendedFileOptions {
                    extra_data: vec![].into(),
                    central_extra_data: vec![].into(),
                },
                alignment: 16,
                ..Default::default()
            },
        )?;
        ZipWriter::new_append(writer.finish()?)?
}; writer.merge_archive(sub_writer.finish_into_readable()?)?; let writer = ZipWriter::new_append(writer.finish()?)?; let _ = writer.finish_into_readable()?; Ok(()) } } zip-2.5.0/src/zipcrypto.rs000064400000000000000000000240571046102023000136420ustar 00000000000000//! Implementation of the ZipCrypto algorithm //! //! The following paper was used to implement the ZipCrypto algorithm: //! [https://courses.cs.ut.ee/MTAT.07.022/2015_fall/uploads/Main/dmitri-report-f15-16.pdf](https://courses.cs.ut.ee/MTAT.07.022/2015_fall/uploads/Main/dmitri-report-f15-16.pdf) use std::fmt::{Debug, Formatter}; use std::hash::Hash; use std::num::Wrapping; use crate::result::ZipError; /// A container to hold the current key state #[cfg_attr(fuzzing, derive(arbitrary::Arbitrary))] #[derive(Clone, Copy, Hash, Ord, PartialOrd, Eq, PartialEq)] pub(crate) struct ZipCryptoKeys { key_0: Wrapping, key_1: Wrapping, key_2: Wrapping, } impl Debug for ZipCryptoKeys { #[allow(unreachable_code)] fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { #[cfg(not(any(test, fuzzing)))] { use std::collections::hash_map::DefaultHasher; use std::hash::Hasher; let mut t = DefaultHasher::new(); self.hash(&mut t); f.write_fmt(format_args!("ZipCryptoKeys(hash {})", t.finish())) } #[cfg(any(test, fuzzing))] f.write_fmt(format_args!( "ZipCryptoKeys::of({:#10x},{:#10x},{:#10x})", self.key_0, self.key_1, self.key_2 )) } } impl ZipCryptoKeys { const fn new() -> ZipCryptoKeys { ZipCryptoKeys { key_0: Wrapping(0x12345678), key_1: Wrapping(0x23456789), key_2: Wrapping(0x34567890), } } #[allow(unused)] pub const fn of(key_0: u32, key_1: u32, key_2: u32) -> ZipCryptoKeys { ZipCryptoKeys { key_0: Wrapping(key_0), key_1: Wrapping(key_1), key_2: Wrapping(key_2), } } fn update(&mut self, input: u8) { self.key_0 = ZipCryptoKeys::crc32(self.key_0, input); self.key_1 = (self.key_1 + (self.key_0 & Wrapping(0xff))) * Wrapping(0x08088405) + Wrapping(1); self.key_2 = ZipCryptoKeys::crc32(self.key_2, (self.key_1 >> 24).0 as u8); } 
fn stream_byte(&mut self) -> u8 { let temp: Wrapping = Wrapping(self.key_2.0 as u16) | Wrapping(3); ((temp * (temp ^ Wrapping(1))) >> 8).0 as u8 } fn decrypt_byte(&mut self, cipher_byte: u8) -> u8 { let plain_byte: u8 = self.stream_byte() ^ cipher_byte; self.update(plain_byte); plain_byte } #[allow(dead_code)] fn encrypt_byte(&mut self, plain_byte: u8) -> u8 { let cipher_byte: u8 = self.stream_byte() ^ plain_byte; self.update(plain_byte); cipher_byte } fn crc32(crc: Wrapping, input: u8) -> Wrapping { (crc >> 8) ^ Wrapping(CRCTABLE[((crc & Wrapping(0xff)).0 as u8 ^ input) as usize]) } pub(crate) fn derive(password: &[u8]) -> ZipCryptoKeys { let mut keys = ZipCryptoKeys::new(); for byte in password.iter() { keys.update(*byte); } keys } } /// A ZipCrypto reader with unverified password pub struct ZipCryptoReader { file: R, keys: ZipCryptoKeys, } pub enum ZipCryptoValidator { PkzipCrc32(u32), InfoZipMsdosTime(u16), } impl ZipCryptoReader { /// Note: The password is `&[u8]` and not `&str` because the /// [zip specification](https://pkware.cachefly.net/webdocs/APPNOTE/APPNOTE-6.3.3.TXT) /// does not specify password encoding (see function `update_keys` in the specification). /// Therefore, if `&str` was used, the password would be UTF-8 and it /// would be impossible to decrypt files that were encrypted with a /// password byte sequence that is unrepresentable in UTF-8. pub fn new(file: R, password: &[u8]) -> ZipCryptoReader { ZipCryptoReader { file, keys: ZipCryptoKeys::derive(password), } } /// Read the ZipCrypto header bytes and validate the password. pub fn validate( mut self, validator: ZipCryptoValidator, ) -> Result, ZipError> { // ZipCrypto prefixes a file with a 12 byte header let mut header_buf = [0u8; 12]; self.file.read_exact(&mut header_buf)?; for byte in header_buf.iter_mut() { *byte = self.keys.decrypt_byte(*byte); } match validator { ZipCryptoValidator::PkzipCrc32(crc32_plaintext) => { // PKZIP before 2.0 used 2 byte CRC check. 
                // PKZIP 2.0+ used 1 byte CRC check. It's more secure.
                // We also use 1 byte CRC.
                if (crc32_plaintext >> 24) as u8 != header_buf[11] {
                    return Err(ZipError::InvalidPassword);
                }
            }
            ZipCryptoValidator::InfoZipMsdosTime(last_mod_time) => {
                // Info-ZIP modification to ZipCrypto format:
                // If bit 3 of the general purpose bit flag is set
                // (indicates that the file uses a data-descriptor section),
                // it uses high byte of 16-bit File Time.
                // Info-ZIP code probably writes 2 bytes of File Time.
                // We check only 1 byte.
                if (last_mod_time >> 8) as u8 != header_buf[11] {
                    return Err(ZipError::InvalidPassword);
                }
            }
        }

        Ok(ZipCryptoReaderValid { reader: self })
    }
}

#[allow(unused)]
pub(crate) struct ZipCryptoWriter<W> {
    pub(crate) writer: W,
    pub(crate) buffer: Vec<u8>,
    pub(crate) keys: ZipCryptoKeys,
}

impl<W: std::io::Write> ZipCryptoWriter<W> {
    #[allow(unused)]
    pub(crate) fn finish(mut self, crc32: u32) -> std::io::Result<W> {
        self.buffer[11] = (crc32 >> 24) as u8;
        for byte in self.buffer.iter_mut() {
            *byte = self.keys.encrypt_byte(*byte);
        }
        self.writer.write_all(&self.buffer)?;
        self.writer.flush()?;
        Ok(self.writer)
    }
}

impl<W: std::io::Write> std::io::Write for ZipCryptoWriter<W> {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.buffer.extend_from_slice(buf);
        Ok(buf.len())
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

/// A ZipCrypto reader with verified password
pub struct ZipCryptoReaderValid<R> {
    reader: ZipCryptoReader<R>,
}

impl<R: std::io::Read> std::io::Read for ZipCryptoReaderValid<R> {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        // Note: There might be potential for optimization. Inspiration can be found at:
        // https://github.com/kornelski/7z/blob/master/CPP/7zip/Crypto/ZipCrypto.cpp
        let n = self.reader.file.read(buf)?;
        for byte in buf.iter_mut().take(n) {
            *byte = self.reader.keys.decrypt_byte(*byte);
        }
        Ok(n)
    }
}

impl<R> ZipCryptoReaderValid<R> {
    /// Consumes this decoder, returning the underlying reader.
    pub fn into_inner(self) -> R {
        self.reader.file
    }
}

static CRCTABLE: [u32; 256] = [
    0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
    0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
    0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
    0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
    0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
    0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
    0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
    0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
    0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
    0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
    0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
    0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
    0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
    0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
    0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
    0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
    0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
    0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
    0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
    0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
    0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
    0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
    0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
    0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
    0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
    0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
    0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
    0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
    0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
    0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
    0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
    0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d,
];