nftables-0.6.3/.cargo_vcs_info.json0000644000000001360000000000100126400ustar { "git": { "sha1": "be84bdbafbd3b4e7f32aa46f6d28284752aba7d7" }, "path_in_vcs": "" }nftables-0.6.3/.gitignore000064400000000000000000000000101046102023000134070ustar 00000000000000/target nftables-0.6.3/CHANGELOG.md000064400000000000000000000224531046102023000132470ustar 00000000000000# Changelog All notable changes to this project will be documented in this file. ## [Unreleased] ## [0.6.3](https://github.com/nftables-rs/nftables-rs/compare/v0.6.2...v0.6.3) ### 🐛 Bug Fixes - Bitwise flags or with more than two operands - ([9024a91](https://github.com/nftables-rs/nftables-rs/commit/9024a91fc4d8764bc70c66eee8ce54ca599bfc9e)) - Changed BinaryOperation::OR to accept Vec instead of exactly two. - Added regression test (tests/test_bit_flags) - Accept single or multiple set/map flags - ([54ba9a3](https://github.com/nftables-rs/nftables-rs/commit/54ba9a30e9d083be64dbf2b777fed56ffe173236)) - Accept single or multiple synproxy flags - ([c7ce1b0](https://github.com/nftables-rs/nftables-rs/commit/c7ce1b0c4a3a26bf3b8e86fffd5b4519a518bad9)) - Accept fib flags as string or array - ([d234af6](https://github.com/nftables-rs/nftables-rs/commit/d234af6ea17015dfd62cc249c38b15565fbfa3dd)) - Correctly parse log opts as array - ([0294fb2](https://github.com/nftables-rs/nftables-rs/commit/0294fb260173669ed556ba007642c875a6c3d71e)) - Do not panic when parsing an unsupported log flag - ([4c6a943](https://github.com/nftables-rs/nftables-rs/commit/4c6a943fef48457ca55f4b62ab944e58ad61607f)) ### 🧪 Testing - *(expr)* Bitwise flags or with more than two operands - ([f1d6e16](https://github.com/nftables-rs/nftables-rs/commit/f1d6e16bd6d91d6185cdf501be36401c58dfefaa)) ## [0.6.2](https://github.com/nftables-rs/nftables-rs/compare/v0.6.1...v0.6.2) ### 🐛 Bug Fixes - Clippy string format lint - ([39b5796](https://github.com/nftables-rs/nftables-rs/commit/39b57961da47dd7dedb520b42f3a136e4d4ad1c9)) ### 📚 Documentation - *(expr)* Fix Payload docs - ([11f9657](https://github.com/nftables-rs/nftables-rs/commit/11f9657f42308c238fbbf76319c0956d24936b9c)) ## [0.6.1](https://github.com/nftables-rs/nftables-rs/compare/v0.6.0...v0.6.1) This release adds the command `./nftables-rs schema ` to export a *JSON Schema* of our implementation of the nftables JSON API. ### ⛰️ Features - *(cli)* Add json schema export using schemars - ([79fe2f8](https://github.com/nftables-rs/nftables-rs/commit/79fe2f81ad3ab4d48de5784289914707e608a4af)) ### 📚 Documentation - Update license-mit copyright owner - ([d90ddb9](https://github.com/nftables-rs/nftables-rs/commit/d90ddb9edb3f38ee30dbea050d55db3dff5b6a79)) - Update new github org URL - ([95f1f68](https://github.com/nftables-rs/nftables-rs/commit/95f1f68ae246d87ed74872314ef08c931c68ce61)) Thanks to @joelMuehlena for adding the JSON Schema export. ## [0.6.0](https://github.com/nftables-rs/nftables-rs/compare/v0.5.0...v0.6.0) This release includes memory optimizations, adds async helpers (optionally via tokio) and improves expressions documentation. 
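A minimal usage sketch for the new async helpers (with the crate's `tokio` feature enabled); the function name `apply_ruleset_async` and its exact signature are assumed here rather than taken from this changelog, so check the `helper` module documentation for the actual API.

```rust
use nftables::{batch::Batch, helper, schema, types};

/// Applies a small ruleset without blocking the async runtime.
/// `apply_ruleset_async` is the assumed name of the async counterpart to `apply_ruleset`.
async fn apply_example_ruleset() -> Result<(), helper::NftablesError> {
    let mut batch = Batch::new();
    batch.add(schema::NfListObject::Table(schema::Table {
        family: types::NfFamily::IP,
        name: "async-demo".into(),
        ..Default::default()
    }));
    helper::apply_ruleset_async(&batch.to_nftables()).await
}
```
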
### ⛰️ Features - *(expr)* [**breaking**] Add documentation, default impls for expressions, add attributes to socket expression - ([13c0849](https://github.com/nftables-rs/nftables-rs/commit/13c084968b04bba73a8161f8947f9d4901580a93)) - *(expr)* [**breaking**] Make range fixed-sized array, not slice - ([1ce8021](https://github.com/nftables-rs/nftables-rs/commit/1ce80215bdf4d6ce0d42794127caa11d4b270626)) - *(helper)* Add async helpers - ([81cd4f3](https://github.com/nftables-rs/nftables-rs/commit/81cd4f37387519eb7bfba833e9be13ed5ed728f6)) - *(helper)* Generalize helper arguments - ([021668a](https://github.com/nftables-rs/nftables-rs/commit/021668a9231864d597b9165719df9830ca8b0c92)) - *(helper)* [**breaking**] Make helper APIs accept borrowed values - ([091adb4](https://github.com/nftables-rs/nftables-rs/commit/091adb43134f523c4ae7276d59f87e55e3436d93)) - [**breaking**] Replace Cow<'static, _> with 'a - ([c22a2a4](https://github.com/nftables-rs/nftables-rs/commit/c22a2a47d68888441028e4921711b72ac15aee2a)) - [**breaking**] Reduce stack usage by selectively wrapping large values in Box - ([583b2d5](https://github.com/nftables-rs/nftables-rs/commit/583b2d58cb3a8d55a348752b7ef248a00df899bf)) - [**breaking**] Use `Cow` whenever possible instead of owned values - ([8ddb5ff](https://github.com/nftables-rs/nftables-rs/commit/8ddb5ff132e757b95ac8b4cb8e05295f38a7098e)) ### 🐛 Bug Fixes - *(expr)* [**breaking**] Revert recursive Cow<[Expression]> back to Vec - ([75b7f48](https://github.com/nftables-rs/nftables-rs/commit/75b7f48795fe87857f2e9dfcd859eb5075de30ac)) - *(stmt)* Allow port-range for nat port - ([07d062a](https://github.com/nftables-rs/nftables-rs/commit/07d062a8de0827a8a50f865d9ceaf61975ad8415)) - *(stmt)* [**breaking**] Match anonymous and named quotas - ([61ba8ea](https://github.com/nftables-rs/nftables-rs/commit/61ba8eaec6502674104b77666dc89f8bc052e7ad)) - *(tests)* Fix datatest_stable::harness macro usage - ([3948819](https://github.com/nftables-rs/nftables-rs/commit/3948819e109e4fe66ed1f7a954c9bd6d2f6530e6)) ### 📚 Documentation - *(helper)* Add docs for async helpers - ([3a6be32](https://github.com/nftables-rs/nftables-rs/commit/3a6be325a8f97bc42ca15cc4c4e183aa369c80ac)) - *(readme)* Fix call to apply_ruleset() - ([210e4ee](https://github.com/nftables-rs/nftables-rs/commit/210e4ee7c3eafd265be7e997294ba68571732ecc)) - *(readme)* Update examples - ([4857791](https://github.com/nftables-rs/nftables-rs/commit/48577917d67703819a9b73f3866df0bfaa3773eb)) - Define msrv - ([dfc8517](https://github.com/nftables-rs/nftables-rs/commit/dfc8517372dd8360dac27fbf8859d32b2f8f8bad)) ### 🧪 Testing - *(deserialize)* Generate deserialize tests with harness - ([68332fd](https://github.com/nftables-rs/nftables-rs/commit/68332fd8dfe3d03921b8f0fad64a324ba4b6b326)) - *(stmt)* Extend nat test with port range - ([ad0b46a](https://github.com/nftables-rs/nftables-rs/commit/ad0b46a0f5b6a739e10e0d8b2a39b50547ab02f3)) ### ⚙️ Miscellaneous Tasks - *(msrv)* [**breaking**] Increase msrv to 1.76 - ([76e7e7a](https://github.com/nftables-rs/nftables-rs/commit/76e7e7ad6b277bb63dd632adfe022cccf9959c5c)) ## [0.5.0](https://github.com/namib-project/nftables-rs/compare/v0.4.1...v0.5.0) This release completes documentation for `schema` and adds support for **tproxy**, **synproxy** and **flow**/**flowtable** statements/objects. ### ⚠️ Breaking Changes - Enum `stmt::Statement`: - adds variants `Flow`, `SynProxy` and `TProxy`, - removes variant `CounterRef`, - receives a `#[non_exhaustive]` mark. - Struct `stmt::Counter` became enum. 
- Enum `schema::NfListObject` adds variant `SynProxy`. - Removed functions `schema::Table::new()`, `schema::Table::new()` and `schema::Rule::new()`. ### ⛰️ Features - *(schema)* [**breaking**] Add default impl, add doc comments - ([abd3156](https://github.com/namib-project/nftables-rs/commit/abd3156e846c13be3a9c8a9df31395580ba0d75b)) - *(schema)* Qualify limit's per-attribute as time unit enum - ([42c399d](https://github.com/namib-project/nftables-rs/commit/42c399d2d26e8cb4ae9324e5315bcb746beb6f10)) - *(stmt)* Implement flow statement - ([a3209cb](https://github.com/namib-project/nftables-rs/commit/a3209cb2c293f64043d96a454dee9970eeda679a)) - Add synproxy statement and list object - ([0108fbf](https://github.com/namib-project/nftables-rs/commit/0108fbfc9ecf6523083b4bd77215431a90e11c16)) ### 🐛 Bug Fixes - *(stmt)* [**breaking**] Fix named counter - ([9f109c5](https://github.com/namib-project/nftables-rs/commit/9f109c51e4b657acf1194e4342f175b0394d2cd8)) - Add doc comment and trait derive to counters - ([617b071](https://github.com/namib-project/nftables-rs/commit/617b071330960cc8092ded5fcbaf91c0579e35d1)) - [**breaking**] Store NfListObjects in heap - ([51ccf10](https://github.com/namib-project/nftables-rs/commit/51ccf106dac1b810eec6d61af602284d594c440a)) ### 📚 Documentation - *(lib)* Add library description - ([2e98483](https://github.com/namib-project/nftables-rs/commit/2e98483b74a75c0e3dfed9dc53cc8d87ee0edda4)) - *(readme)* Add @JKRhb as maintainer - ([021abc1](https://github.com/namib-project/nftables-rs/commit/021abc1cbf636f980084e8390924691fa873d3df)) - *(visitor)* Fix doc comment syntax - ([d8e0c68](https://github.com/namib-project/nftables-rs/commit/d8e0c68391fdaa07c66ebb53e202239fae53be4b)) - Fix long doc comments in expr, stmt - ([290c5bb](https://github.com/namib-project/nftables-rs/commit/290c5bbb0c3890c0fa94b915e27b1d26b48f5042)) - Add doc comments for tproxy - ([e13a5ed](https://github.com/namib-project/nftables-rs/commit/e13a5ed90d9dcc9475e66e64ad0dc29a7bc71514)) ### 🧪 Testing - *(schema)* Add set and map nft/json test - ([03db827](https://github.com/namib-project/nftables-rs/commit/03db827a9a8630a3f10129b91eb47b06cb667c36)) - *(stmt)* Add serialization test for flow, flowtable - ([fd88573](https://github.com/namib-project/nftables-rs/commit/fd8857314d8a611724d753567664fd9301d4299e)) - Refactor nftables-json test script with unshare - ([3799022](https://github.com/namib-project/nftables-rs/commit/3799022069311f47770aa061da5c05bf70e306bb)) - Add test for synproxy - ([910315b](https://github.com/namib-project/nftables-rs/commit/910315ba22a8fc2f38e3d0e2ac84c670deb2ec82)) - Re-convert json data from nftables files - ([1ca5421](https://github.com/namib-project/nftables-rs/commit/1ca5421807e4663087cdcf5801ead27b74eb6b72)) ## [0.4.1] - 2024-05-27 ### ⚙️ Miscellaneous Tasks - Add dependabot, git-cliff, release-plz - Add github issue templates - Add rust fmt check for pull requests - Consolidate rust-fmt into rust workflow - *(dep)* Bump dependencies serde, serde_json, serial_test ### Build - Add devcontainer configuration nftables-0.6.3/Cargo.lock0000644000000772120000000000100106240ustar # This file is automatically @generated by Cargo. # It is not intended for manual editing. 
version = 3 [[package]] name = "addr2line" version = "0.24.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1" dependencies = [ "gimli", ] [[package]] name = "adler2" version = "2.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "320119579fcad9c21884f5c4861d16174d0e06250625266f50fe6898340abefa" [[package]] name = "aho-corasick" version = "1.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916" dependencies = [ "memchr", ] [[package]] name = "anstream" version = "0.6.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3ae563653d1938f79b1ab1b5e668c87c76a9930414574a6583a7b7e11a8e6192" dependencies = [ "anstyle", "anstyle-parse", "anstyle-query", "anstyle-wincon", "colorchoice", "is_terminal_polyfill", "utf8parse", ] [[package]] name = "anstyle" version = "1.0.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "862ed96ca487e809f1c8e5a8447f6ee2cf102f846893800b20cebdf541fc6bbd" [[package]] name = "anstyle-parse" version = "0.2.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2" dependencies = [ "utf8parse", ] [[package]] name = "anstyle-query" version = "1.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9e231f6134f61b71076a3eab506c379d4f36122f2af15a9ff04415ea4c3339e2" dependencies = [ "windows-sys 0.60.2", ] [[package]] name = "anstyle-wincon" version = "3.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3e0633414522a32ffaac8ac6cc8f748e090c5717661fddeea04219e2344f5f2a" dependencies = [ "anstyle", "once_cell_polyfill", "windows-sys 0.60.2", ] [[package]] name = "async-channel" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "924ed96dd52d1b75e9c1a3e6275715fd320f5f9439fb5a4a11fa51f4221158d2" dependencies = [ "concurrent-queue", "event-listener-strategy", "futures-core", "pin-project-lite", ] [[package]] name = "async-io" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "19634d6336019ef220f09fd31168ce5c184b295cbf80345437cc36094ef223ca" dependencies = [ "async-lock", "cfg-if", "concurrent-queue", "futures-io", "futures-lite", "parking", "polling", "rustix", "slab", "windows-sys 0.60.2", ] [[package]] name = "async-lock" version = "3.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5fd03604047cee9b6ce9de9f70c6cd540a0520c813cbd49bae61f33ab80ed1dc" dependencies = [ "event-listener", "event-listener-strategy", "pin-project-lite", ] [[package]] name = "async-process" version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "65daa13722ad51e6ab1a1b9c01299142bc75135b337923cfa10e79bbbd669f00" dependencies = [ "async-channel", "async-io", "async-lock", "async-signal", "async-task", "blocking", "cfg-if", "event-listener", "futures-lite", "rustix", ] [[package]] name = "async-signal" version = "0.2.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f567af260ef69e1d52c2b560ce0ea230763e6fbb9214a85d768760a920e3e3c1" dependencies = [ "async-io", "async-lock", "atomic-waker", "cfg-if", "futures-core", "futures-io", "rustix", "signal-hook-registry", "slab", "windows-sys 0.60.2", ] [[package]] name = 
"async-task" version = "4.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b75356056920673b02621b35afd0f7dda9306d03c79a30f5c56c44cf256e3de" [[package]] name = "atomic-waker" version = "1.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0" [[package]] name = "autocfg" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8" [[package]] name = "backtrace" version = "0.3.75" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6806a6321ec58106fea15becdad98371e28d92ccbc7c8f1b3b6dd724fe8f1002" dependencies = [ "addr2line", "cfg-if", "libc", "miniz_oxide", "object", "rustc-demangle", "windows-targets 0.52.6", ] [[package]] name = "bit-set" version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3" dependencies = [ "bit-vec", ] [[package]] name = "bit-vec" version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7" [[package]] name = "bitflags" version = "2.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b8e56985ec62d17e9c1001dc89c88ecd7dc08e47eba5ec7c29c7b5eeecde967" [[package]] name = "blocking" version = "1.6.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e83f8d02be6967315521be875afa792a316e28d57b5a2d401897e2a7921b7f21" dependencies = [ "async-channel", "async-task", "futures-io", "futures-lite", "piper", ] [[package]] name = "bytes" version = "1.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d71b6127be86fdcfddb610f7182ac57211d4b18a3e9c82eb2d17662f2227ad6a" [[package]] name = "camino" version = "1.1.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5d07aa9a93b00c76f71bc35d598bed923f6d4f3a9ca5c24b7737ae1a292841c0" [[package]] name = "cfg-if" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9555578bc9e57714c812a1f84e4fc5b4d21fcb063490c624de019f7464c91268" [[package]] name = "clap" version = "4.5.45" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1fc0e74a703892159f5ae7d3aac52c8e6c392f5ae5f359c70b5881d60aaac318" dependencies = [ "clap_builder", "clap_derive", ] [[package]] name = "clap_builder" version = "4.5.44" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b3e7f4214277f3c7aa526a59dd3fbe306a370daee1f8b7b8c987069cd8e888a8" dependencies = [ "anstream", "anstyle", "clap_lex", "strsim", ] [[package]] name = "clap_derive" version = "4.5.45" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "14cb31bb0a7d536caef2639baa7fad459e15c3144efefa6dbd1c84562c4739f6" dependencies = [ "heck", "proc-macro2", "quote", "syn", ] [[package]] name = "clap_lex" version = "0.7.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b94f61472cee1439c0b966b47e3aca9ae07e45d070759512cd390ea2bebc6675" [[package]] name = "colorchoice" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75" [[package]] name = "concurrent-queue" version = "2.5.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "4ca0197aee26d1ae37445ee532fefce43251d24cc7c166799f4d46817f1d3973" dependencies = [ "crossbeam-utils", ] [[package]] name = "crossbeam-utils" version = "0.8.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28" [[package]] name = "datatest-stable" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "19ebbb3c403031a3739980c2864e3b5ee4efca009dd83d2c0f80a31555243981" dependencies = [ "camino", "fancy-regex", "libtest-mimic", "walkdir", ] [[package]] name = "dyn-clone" version = "1.0.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555" [[package]] name = "errno" version = "0.3.13" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "778e2ac28f6c47af28e4907f13ffd1e1ddbd400980a9abd7c8df189bf578a5ad" dependencies = [ "libc", "windows-sys 0.60.2", ] [[package]] name = "escape8259" version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5692dd7b5a1978a5aeb0ce83b7655c58ca8efdcb79d21036ea249da95afec2c6" [[package]] name = "event-listener" version = "5.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e13b66accf52311f30a0db42147dadea9850cb48cd070028831ae5f5d4b856ab" dependencies = [ "concurrent-queue", "parking", "pin-project-lite", ] [[package]] name = "event-listener-strategy" version = "0.5.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8be9f3dfaaffdae2972880079a491a1a8bb7cbed0b8dd7a347f668b4150a3b93" dependencies = [ "event-listener", "pin-project-lite", ] [[package]] name = "fancy-regex" version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298" dependencies = [ "bit-set", "regex-automata", "regex-syntax", ] [[package]] name = "fastrand" version = "2.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be" [[package]] name = "futures" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876" dependencies = [ "futures-channel", "futures-core", "futures-executor", "futures-io", "futures-sink", "futures-task", "futures-util", ] [[package]] name = "futures-channel" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10" dependencies = [ "futures-core", "futures-sink", ] [[package]] name = "futures-core" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e" [[package]] name = "futures-executor" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f" dependencies = [ "futures-core", "futures-task", "futures-util", ] [[package]] name = "futures-io" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6" [[package]] name = "futures-lite" version = "2.6.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "f78e10609fe0e0b3f4157ffab1876319b5b0db102a2c60dc4626306dc46b44ad" dependencies = [ "fastrand", "futures-core", "futures-io", "parking", "pin-project-lite", ] [[package]] name = "futures-sink" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e575fab7d1e0dcb8d0c7bcf9a63ee213816ab51902e6d244a95819acacf1d4f7" [[package]] name = "futures-task" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f90f7dce0722e95104fcb095585910c0977252f286e354b5e3bd38902cd99988" [[package]] name = "futures-util" version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81" dependencies = [ "futures-channel", "futures-core", "futures-io", "futures-sink", "futures-task", "memchr", "pin-project-lite", "pin-utils", "slab", ] [[package]] name = "getrandom" version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4" dependencies = [ "cfg-if", "libc", "r-efi", "wasi 0.14.2+wasi-0.2.4", ] [[package]] name = "gimli" version = "0.31.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f" [[package]] name = "heck" version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" [[package]] name = "hermit-abi" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc0fef456e4baa96da950455cd02c081ca953b141298e41db3fc7e36b1da849c" [[package]] name = "io-uring" version = "0.7.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d93587f37623a1a17d94ef2bc9ada592f5465fe7732084ab7beefabe5c77c0c4" dependencies = [ "bitflags", "cfg-if", "libc", ] [[package]] name = "is_terminal_polyfill" version = "1.70.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" [[package]] name = "itoa" version = "1.0.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c" [[package]] name = "libc" version = "0.2.175" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6a82ae493e598baaea5209805c49bbf2ea7de956d50d7da0da1164f9c6d28543" [[package]] name = "libtest-mimic" version = "0.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5297962ef19edda4ce33aaa484386e0a5b3d7f2f4e037cbeee00503ef6b29d33" dependencies = [ "anstream", "anstyle", "clap", "escape8259", ] [[package]] name = "linux-raw-sys" version = "0.9.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12" [[package]] name = "lock_api" version = "0.4.13" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "96936507f153605bddfcda068dd804796c84324ed2510809e5b2a624c81da765" dependencies = [ "autocfg", "scopeguard", ] [[package]] name = "log" version = "0.4.27" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94" [[package]] name = "memchr" version = "2.7.5" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0" [[package]] name = "miniz_oxide" version = "0.8.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1fa76a2c86f704bdb222d66965fb3d63269ce38518b83cb0575fca855ebb6316" dependencies = [ "adler2", ] [[package]] name = "mio" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "78bed444cc8a2160f01cbcf811ef18cac863ad68ae8ca62092e8db51d51c761c" dependencies = [ "libc", "wasi 0.11.1+wasi-snapshot-preview1", "windows-sys 0.59.0", ] [[package]] name = "nftables" version = "0.6.3" dependencies = [ "async-process", "datatest-stable", "futures-lite", "schemars", "serde", "serde_json", "serde_path_to_error", "serial_test", "strum", "strum_macros", "tempfile", "thiserror", "tokio", ] [[package]] name = "object" version = "0.36.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "62948e14d923ea95ea2c7c86c71013138b66525b86bdc08d2dcc262bdb497b87" dependencies = [ "memchr", ] [[package]] name = "once_cell" version = "1.21.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d" [[package]] name = "once_cell_polyfill" version = "1.70.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a4895175b425cb1f87721b59f0f286c2092bd4af812243672510e1ac53e2e0ad" [[package]] name = "parking" version = "2.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f38d5652c16fde515bb1ecef450ab0f6a219d619a7274976324d5e377f7dceba" [[package]] name = "parking_lot" version = "0.12.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "70d58bf43669b5795d1576d0641cfb6fbb2057bf629506267a92807158584a13" dependencies = [ "lock_api", "parking_lot_core", ] [[package]] name = "parking_lot_core" version = "0.9.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bc838d2a56b5b1a6c25f55575dfc605fabb63bb2365f6c2353ef9159aa69e4a5" dependencies = [ "cfg-if", "libc", "redox_syscall", "smallvec", "windows-targets 0.52.6", ] [[package]] name = "pin-project-lite" version = "0.2.16" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3b3cff922bd51709b605d9ead9aa71031d81447142d828eb4a6eba76fe619f9b" [[package]] name = "pin-utils" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" [[package]] name = "piper" version = "0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "96c8c490f422ef9a4efd2cb5b42b76c8613d7e7dfc1caf667b8a3350a5acc066" dependencies = [ "atomic-waker", "fastrand", "futures-io", ] [[package]] name = "polling" version = "3.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b5bd19146350fe804f7cb2669c851c03d69da628803dab0d98018142aaa5d829" dependencies = [ "cfg-if", "concurrent-queue", "hermit-abi", "pin-project-lite", "rustix", "windows-sys 0.60.2", ] [[package]] name = "proc-macro2" version = "1.0.97" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d61789d7719defeb74ea5fe81f2fdfdbd28a803847077cecce2ff14e1472f6f1" dependencies = [ "unicode-ident", ] [[package]] name = "quote" version = "1.0.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d" dependencies = [ "proc-macro2", ] [[package]] name = "r-efi" version = "5.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f" [[package]] name = "redox_syscall" version = "0.5.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5407465600fb0548f1442edf71dd20683c6ed326200ace4b1ef0763521bb3b77" dependencies = [ "bitflags", ] [[package]] name = "ref-cast" version = "1.0.24" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4a0ae411dbe946a674d89546582cea4ba2bb8defac896622d6496f14c23ba5cf" dependencies = [ "ref-cast-impl", ] [[package]] name = "ref-cast-impl" version = "1.0.24" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1165225c21bff1f3bbce98f5a1f889949bc902d3575308cc7b0de30b4f6d27c7" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "regex-automata" version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "809e8dc61f6de73b46c85f4c96486310fe304c434cfa43669d7b40f711150908" dependencies = [ "aho-corasick", "memchr", "regex-syntax", ] [[package]] name = "regex-syntax" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c" [[package]] name = "rustc-demangle" version = "0.1.26" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace" [[package]] name = "rustix" version = "1.0.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "11181fbabf243db407ef8df94a6ce0b2f9a733bd8be4ad02b4eda9602296cac8" dependencies = [ "bitflags", "errno", "libc", "linux-raw-sys", "windows-sys 0.60.2", ] [[package]] name = "ryu" version = "1.0.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f" [[package]] name = "same-file" version = "1.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502" dependencies = [ "winapi-util", ] [[package]] name = "scc" version = "2.3.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "22b2d775fb28f245817589471dd49c5edf64237f4a19d10ce9a92ff4651a27f4" dependencies = [ "sdd", ] [[package]] name = "schemars" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "82d20c4491bc164fa2f6c5d44565947a52ad80b9505d8e36f8d54c27c739fcd0" dependencies = [ "dyn-clone", "ref-cast", "schemars_derive", "serde", "serde_json", ] [[package]] name = "schemars_derive" version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d020396d1d138dc19f1165df7545479dcd58d93810dc5d646a16e55abefa80" dependencies = [ "proc-macro2", "quote", "serde_derive_internals", "syn", ] [[package]] name = "scopeguard" version = "1.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" [[package]] name = "sdd" version = "3.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "490dcfcbfef26be6800d11870ff2df8774fa6e86d047e3e8c8a76b25655e41ca" [[package]] name = "serde" version = "1.0.219" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6" dependencies = [ "serde_derive", ] [[package]] name = "serde_derive" version = "1.0.219" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "serde_derive_internals" version = "0.29.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "serde_json" version = "1.0.142" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "030fedb782600dcbd6f02d479bf0d817ac3bb40d644745b769d6a96bc3afc5a7" dependencies = [ "itoa", "memchr", "ryu", "serde", ] [[package]] name = "serde_path_to_error" version = "0.1.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "59fab13f937fa393d08645bf3a84bdfe86e296747b506ada67bb15f10f218b2a" dependencies = [ "itoa", "serde", ] [[package]] name = "serial_test" version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b258109f244e1d6891bf1053a55d63a5cd4f8f4c30cf9a1280989f80e7a1fa9" dependencies = [ "futures", "log", "once_cell", "parking_lot", "scc", "serial_test_derive", ] [[package]] name = "serial_test_derive" version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5d69265a08751de7844521fd15003ae0a888e035773ba05695c5c759a6f89eef" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "signal-hook-registry" version = "1.4.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b2a4719bff48cee6b39d12c020eeb490953ad2443b7055bd0b21fca26bd8c28b" dependencies = [ "libc", ] [[package]] name = "slab" version = "0.4.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7a2ae44ef20feb57a68b23d846850f861394c2e02dc425a50098ae8c90267589" [[package]] name = "smallvec" version = "1.15.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03" [[package]] name = "strsim" version = "0.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" [[package]] name = "strum" version = "0.27.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af23d6f6c1a224baef9d3f61e287d2761385a5b88fdab4eb4c6f11aeb54c4bcf" [[package]] name = "strum_macros" version = "0.27.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7695ce3845ea4b33927c055a39dc438a45b059f7c1b3d91d38d10355fb8cbca7" dependencies = [ "heck", "proc-macro2", "quote", "syn", ] [[package]] name = "syn" version = "2.0.105" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7bc3fcb250e53458e712715cf74285c1f889686520d79294a9ef3bd7aa1fc619" dependencies = [ "proc-macro2", "quote", "unicode-ident", ] [[package]] name = "tempfile" version = "3.20.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e8a64e3985349f2441a1a9ef0b853f869006c3855f2cda6862a94d26ebb9d6a1" dependencies = [ "fastrand", "getrandom", "once_cell", "rustix", "windows-sys 0.59.0", ] [[package]] name = "thiserror" version = "2.0.14" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "0b0949c3a6c842cbde3f1686d6eea5a010516deb7085f79db747562d4102f41e" dependencies = [ "thiserror-impl", ] [[package]] name = "thiserror-impl" version = "2.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cc5b44b4ab9c2fdd0e0512e6bece8388e214c0749f5862b114cc5b7a25daf227" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "tokio" version = "1.47.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "89e49afdadebb872d3145a5638b59eb0691ea23e46ca484037cfab3b76b95038" dependencies = [ "backtrace", "bytes", "io-uring", "libc", "mio", "pin-project-lite", "signal-hook-registry", "slab", "windows-sys 0.59.0", ] [[package]] name = "unicode-ident" version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512" [[package]] name = "utf8parse" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" [[package]] name = "walkdir" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b" dependencies = [ "same-file", "winapi-util", ] [[package]] name = "wasi" version = "0.11.1+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b" [[package]] name = "wasi" version = "0.14.2+wasi-0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9683f9a5a998d873c0d21fcbe3c083009670149a8fab228644b8bd36b2c48cb3" dependencies = [ "wit-bindgen-rt", ] [[package]] name = "winapi-util" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ "windows-sys 0.59.0", ] [[package]] name = "windows-link" version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5e6ad25900d524eaabdbbb96d20b4311e1e7ae1699af4fb28c17ae66c80d798a" [[package]] name = "windows-sys" version = "0.59.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b" dependencies = [ "windows-targets 0.52.6", ] [[package]] name = "windows-sys" version = "0.60.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb" dependencies = [ "windows-targets 0.53.3", ] [[package]] name = "windows-targets" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm 0.52.6", "windows_aarch64_msvc 0.52.6", "windows_i686_gnu 0.52.6", "windows_i686_gnullvm 0.52.6", "windows_i686_msvc 0.52.6", "windows_x86_64_gnu 0.52.6", "windows_x86_64_gnullvm 0.52.6", "windows_x86_64_msvc 0.52.6", ] [[package]] name = "windows-targets" version = "0.53.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d5fe6031c4041849d7c496a8ded650796e7b6ecc19df1a431c1a363342e5dc91" dependencies = [ "windows-link", "windows_aarch64_gnullvm 0.53.0", "windows_aarch64_msvc 0.53.0", "windows_i686_gnu 0.53.0", "windows_i686_gnullvm 0.53.0", 
"windows_i686_msvc 0.53.0", "windows_x86_64_gnu 0.53.0", "windows_x86_64_gnullvm 0.53.0", "windows_x86_64_msvc 0.53.0", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_gnullvm" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_aarch64_msvc" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnu" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_gnullvm" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_i686_msvc" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnu" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_gnullvm" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" [[package]] name = "windows_x86_64_msvc" version = "0.53.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486" [[package]] name = "wit-bindgen-rt" version = "0.39.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"6f42320e61fe2cfd34354ecb597f86f413484a798ba44a8ca1165c58d42da6c1" dependencies = [ "bitflags", ] nftables-0.6.3/Cargo.toml0000644000000046470000000000100106510ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2021" rust-version = "1.76" name = "nftables" version = "0.6.3" authors = [ "Jasper Wiegratz ", "Jan Romann ", ] build = false exclude = [ ".devcontainer/*", ".github/*", "cliff.toml", "release-plz.toml", ] autolib = false autobins = false autoexamples = false autotests = false autobenches = false description = "Safe abstraction for nftables JSON API. It can be used to create nftables rulesets in Rust and parse existing nftables rulesets from JSON." readme = "README.md" keywords = [ "nftables", "netfilter", "firewall", ] categories = [ "os", "network-programming", ] license = "MIT OR Apache-2.0" repository = "https://github.com/nftables-rs/nftables-rs" [features] async-process = [ "dep:async-process", "dep:futures-lite", ] tokio = ["dep:tokio"] [lib] name = "nftables" path = "src/lib.rs" [[bin]] name = "nftables" path = "src/main.rs" [[test]] name = "deserialize" path = "tests/deserialize.rs" harness = false [[test]] name = "fixtures" path = "tests/fixtures.rs" [[test]] name = "helper_tests" path = "tests/helper_tests.rs" [[test]] name = "json_tests" path = "tests/json_tests.rs" [[test]] name = "serialize" path = "tests/serialize.rs" [dependencies.async-process] version = "2.4.0" optional = true [dependencies.futures-lite] version = "2.6.1" optional = true [dependencies.schemars] version = "1.0.4" [dependencies.serde] version = "1.0.219" features = ["derive"] [dependencies.serde_json] version = "1.0.142" [dependencies.serde_path_to_error] version = "0.1" [dependencies.strum] version = "0.27.2" [dependencies.strum_macros] version = "0.27.2" [dependencies.thiserror] version = "2.0.14" [dependencies.tokio] version = "1.47.1" features = [ "process", "io-util", ] optional = true [dev-dependencies.datatest-stable] version = "0.3.2" [dev-dependencies.serial_test] version = "3.2.0" [dev-dependencies.tempfile] version = "3.20.0" nftables-0.6.3/Cargo.toml.orig000064400000000000000000000023651046102023000143250ustar 00000000000000[package] name = "nftables" version = "0.6.3" authors = ["Jasper Wiegratz ", "Jan Romann "] edition = "2021" rust-version = "1.76" description = "Safe abstraction for nftables JSON API. It can be used to create nftables rulesets in Rust and parse existing nftables rulesets from JSON." 
repository = "https://github.com/nftables-rs/nftables-rs" readme = "README.md" license = "MIT OR Apache-2.0" keywords = ["nftables", "netfilter", "firewall"] categories = ["os", "network-programming"] exclude = [ ".devcontainer/*", ".github/*", "cliff.toml", "release-plz.toml", ] [dependencies] async-process = { version = "2.4.0", optional = true } futures-lite = { version = "2.6.1", optional = true } schemars = "1.0.4" serde = { version = "1.0.219", features = ["derive"] } serde_json = { version = "1.0.142" } serde_path_to_error = "0.1" strum = "0.27.2" strum_macros = "0.27.2" thiserror = "2.0.14" tokio = { version = "1.47.1", optional = true, features = ["process", "io-util"] } [dev-dependencies] datatest-stable = "0.3.2" serial_test = "3.2.0" tempfile = "3.20.0" [[test]] name = "deserialize" harness = false [features] tokio = ["dep:tokio"] async-process = ["dep:async-process", "dep:futures-lite"] nftables-0.6.3/LICENSE-APACHE000064400000000000000000000261361046102023000133640ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. nftables-0.6.3/LICENSE-MIT000064400000000000000000000021551046102023000130670ustar 00000000000000MIT License Copyright (c) 2021 The NAMIB Project Developers Copyright (c) 2022 The nftables-rs Contributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. nftables-0.6.3/README.md000064400000000000000000000152031046102023000127100ustar 00000000000000

Logo
nftables-rs

Automate modern Linux firewalls with nftables through its declarative and imperative JSON API in Rust.

Crates.io Total Downloads rs Actions Workflow Status License


## Features 🌟

- 🛡️ **Safe and Easy-to-Use Abstraction**: Provides a high-level, safe abstraction over the [nftables JSON API](https://manpages.debian.org/testing/libnftables1/libnftables-json.5.en.html), making it easier and safer to work with nftables in Rust.
- 🛠️ **Comprehensive Functions**: Includes a wide range of functions to create, read, and apply nftables rulesets directly from Rust, streamlining the management of firewall rules.
- 📄 **JSON Parsing and Generation**: Offers detailed parsing and generation capabilities for nftables rulesets in JSON format, enabling seamless integration and manipulation of rulesets.
- 💾 **JSON Schema generation for nftables**: Allows creating and exporting a JSON Schema, derived from the explicit Rust types, for further use.
- 💡 **Inspired by nftnl-rs**: While taking inspiration from [nftnl-rs](https://github.com/mullvad/nftnl-rs), `nftables-rs` focuses on utilizing the JSON API for broader accessibility and catering to diverse use cases.

## Motivation

`nftables-rs` is a Rust library that provides a safe and easy-to-use abstraction over the nftables JSON API, known as libnftables-json. It is aimed at developers who need to interact with nftables, the Linux kernel's next-generation firewalling tool, directly from Rust applications. By abstracting the underlying JSON API, nftables-rs facilitates the creation, manipulation, and application of firewall rulesets without requiring deep knowledge of nftables' internal workings.

## Installation

```toml
[dependencies]
nftables = "0.6"
```

Linux nftables v0.9.3 or newer is required at runtime (check with `nft --version`).

## Example

Here are some examples that show use cases of this library. Check out the `tests/` directory for more usage examples.

### Apply ruleset to nftables

This example applies a ruleset that creates and then deletes a table in nftables.

```rust
use nftables::{batch::Batch, helper, schema, types};

/// Applies a ruleset to nftables.
fn test_apply_ruleset() {
    let ruleset = example_ruleset();
    helper::apply_ruleset(&ruleset).unwrap();
}

fn example_ruleset() -> schema::Nftables<'static> {
    let mut batch = Batch::new();
    batch.add(schema::NfListObject::Table(schema::Table {
        family: types::NfFamily::IP,
        name: "test-table-01".into(),
        ..Default::default()
    }));
    batch.delete(schema::NfListObject::Table(schema::Table {
        family: types::NfFamily::IP,
        name: "test-table-01".into(),
        ..Default::default()
    }));
    batch.to_nftables()
}
```

### Parse/Generate nftables ruleset in JSON format

This example compares nftables' native JSON output to the JSON payload generated by this library.
```rust
fn test_chain_table_rule_inet() {
    // nft add table inet some_inet_table
    // nft add chain inet some_inet_table some_inet_chain '{ type filter hook forward priority 0; policy accept; }'
    let expected: Nftables = Nftables {
        objects: Cow::Borrowed(&[
            NfObject::CmdObject(NfCmd::Add(NfListObject::Table(Table {
                family: NfFamily::INet,
                name: Cow::Borrowed("some_inet_table"),
                handle: None,
            }))),
            NfObject::CmdObject(NfCmd::Add(NfListObject::Chain(Chain {
                family: NfFamily::INet,
                table: Cow::Borrowed("some_inet_table"),
                name: Cow::Borrowed("some_inet_chain"),
                newname: None,
                handle: None,
                _type: Some(NfChainType::Filter),
                hook: Some(NfHook::Forward),
                prio: None,
                dev: None,
                policy: Some(NfChainPolicy::Accept),
            }))),
        ]),
    };
    let json = json!({"nftables":[{"add":{"table":{"family":"inet","name":"some_inet_table"}}},{"add":{"chain":{"family":"inet","table":"some_inet_table","name":"some_inet_chain","type":"filter","hook":"forward","policy":"accept"}}}]});
    println!("{}", &json);
    let parsed: Nftables = serde_json::from_value(json).unwrap();
    assert_eq!(expected, parsed);
}
```

### Export JSON Schema

Export a JSON Schema to a file (if no path is given, it defaults to `./nftables.schema.json`).

```bash
./nftables-rs schema
```

## MSRV (Minimum Supported Rust Version)

The MSRV of this crate is currently: **Rust 1.76**

The MSRV will only be increased by a minor or major release of this crate.

## License

Licensed under either of

* Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

## Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

## Maintainers

This project is currently maintained by the following developers:

| Name            | Email Address            | GitHub Username                    |
|:---------------:|:------------------------:|:----------------------------------:|
| Jasper Wiegratz | wiegratz@uni-bremen.de   | [@jwhb](https://github.com/jwhb)   |
| Jan Romann      | jan.romann@uni-bremen.de | [@JKRhb](https://github.com/JKRhb) |

Write access to the main branch and to crates.io is exclusively granted to the maintainers listed above.

nftables-0.6.3/resources/test/fixtures/README.md000064400000000000000000000014131046102023000175500ustar 00000000000000These files were either generated by a specific nftables version or written by hand to test specific JSON parsing behavior.
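A minimal sketch of how one of these fixtures can be round-tripped through the crate's typed schema is shown below; the function name, the chosen fixture path, and the test shape are illustrative only and do not reflect the project's actual test harness.

```rust
use nftables::schema::Nftables;

/// Illustrative sketch: parse a JSON fixture and check that it survives a
/// serialize/deserialize round trip.
fn roundtrip_fixture() {
    // Hypothetical choice of fixture; real tests would iterate over the files in this directory.
    let json = std::fs::read_to_string("resources/test/fixtures/set-map-flag-1.json").unwrap();

    // Deserialize into the crate's typed representation ...
    let parsed: Nftables = serde_json::from_str(&json).expect("fixture should deserialize");

    // ... then serialize it back and parse it again to check that no information is lost.
    let reserialized = serde_json::to_string(&parsed).expect("ruleset should serialize");
    let reparsed: Nftables = serde_json::from_str(&reserialized).unwrap();
    assert_eq!(parsed, reparsed);
}
```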
## synproxy test The synproxy files should contain the json for this ``` table ip test { synproxy syn1 { mss 1100 wscale 7 timestamp } synproxy syn2 { mss 1000 wscale 6 timestamp sack-perm } chain chain1 { synproxy mss 1460 wscale 7 timestamp sack-perm synproxy mss 1500 wscale 5 timestamp } } ``` ## set and map test ``` table ip test { set set1 { type inet_service flags interval } set set2 { type inet_service flags interval,timeout timeout 10s } map map1 { type inet_service : inet_service flags interval } map map2 { type inet_service : inet_service flags interval,timeout timeout 20s } } ``` nftables-0.6.3/resources/test/fixtures/set-map-flag-1.json000064400000000000000000000033321046102023000216010ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "set": { "family": "ip", "name": "set1", "table": "test", "type": "inet_service", "handle": 1, "flags": [ "interval" ] } }, { "set": { "family": "ip", "name": "set2", "table": "test", "type": "inet_service", "handle": 2, "flags": [ "interval", "timeout" ], "timeout": 10 } }, { "map": { "family": "ip", "name": "map1", "table": "test", "type": "inet_service", "handle": 3, "map": "inet_service", "flags": [ "interval" ] } }, { "map": { "family": "ip", "name": "map2", "table": "test", "type": "inet_service", "handle": 4, "map": "inet_service", "flags": [ "interval", "timeout" ], "timeout": 20 } } ] } nftables-0.6.3/resources/test/fixtures/set-map-flag-2.json000064400000000000000000000032121046102023000215770ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "set": { "family": "ip", "name": "set1", "table": "test", "type": "inet_service", "handle": 1, "flags": "interval" } }, { "set": { "family": "ip", "name": "set2", "table": "test", "type": "inet_service", "handle": 2, "flags": [ "interval", "timeout" ], "timeout": 10 } }, { "map": { "family": "ip", "name": "map1", "table": "test", "type": "inet_service", "handle": 3, "map": "inet_service", "flags": "interval" } }, { "map": { "family": "ip", "name": "map2", "table": "test", "type": "inet_service", "handle": 4, "map": "inet_service", "flags": [ "interval", "timeout" ], "timeout": 20 } } ] } nftables-0.6.3/resources/test/fixtures/single-fib-flag-1.json000064400000000000000000000016521046102023000222550ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "chain": { "family": "ip", "table": "test", "name": "prerouting", "handle": 1 } }, { "rule": { "family": "ip", "table": "test", "chain": "prerouting", "handle": 2, "expr": [ { "match": { "op": "==", "left": { "fib": { "result": "type", "flags": [ "daddr" ] } }, "right": "local" } }, { "accept": null } ] } } ] } nftables-0.6.3/resources/test/fixtures/single-fib-flag-2.json000064400000000000000000000016001046102023000222470ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "chain": { "family": "ip", "table": "test", "name": "prerouting", "handle": 1 } }, { "rule": { "family": "ip", "table": "test", "chain": "prerouting", "handle": 2, "expr": [ { "match": { "op": "==", "left": { "fib": { 
"result": "type", "flags": "daddr" } }, "right": "local" } }, { "accept": null } ] } } ] } nftables-0.6.3/resources/test/fixtures/synproxy-flag-1.json000064400000000000000000000044351046102023000221530ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "synproxy": { "family": "ip", "name": "syn1", "table": "test", "handle": 4, "mss": 1100, "wscale": 7, "flags": [ "timestamp" ] } }, { "synproxy": { "family": "ip", "name": "syn2", "table": "test", "handle": 5, "mss": 1000, "wscale": 6, "flags": [ "timestamp", "sack-perm" ] } }, { "chain": { "family": "ip", "table": "test", "name": "chain1", "handle": 1 } }, { "rule": { "family": "ip", "table": "test", "chain": "chain1", "handle": 2, "expr": [ { "synproxy": { "mss": 1460, "wscale": 7, "flags": [ "timestamp", "sack-perm" ] } } ] } }, { "rule": { "family": "ip", "table": "test", "chain": "chain1", "handle": 3, "expr": [ { "synproxy": { "mss": 1500, "wscale": 5, "flags": [ "timestamp" ] } } ] } } ] } nftables-0.6.3/resources/test/fixtures/synproxy-flag-2.json000064400000000000000000000042651046102023000221550ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.1.0", "release_name": "some name", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "test", "handle": 1 } }, { "synproxy": { "family": "ip", "name": "syn1", "table": "test", "handle": 4, "mss": 1100, "wscale": 7, "flags": "timestamp" } }, { "synproxy": { "family": "ip", "name": "syn2", "table": "test", "handle": 5, "mss": 1000, "wscale": 6, "flags": [ "timestamp", "sack-perm" ] } }, { "chain": { "family": "ip", "table": "test", "name": "chain1", "handle": 1 } }, { "rule": { "family": "ip", "table": "test", "chain": "chain1", "handle": 2, "expr": [ { "synproxy": { "mss": 1460, "wscale": 7, "flags": [ "timestamp", "sack-perm" ] } } ] } }, { "rule": { "family": "ip", "table": "test", "chain": "chain1", "handle": 3, "expr": [ { "synproxy": { "mss": 1500, "wscale": 5, "flags": "timestamp" } } ] } } ] } nftables-0.6.3/resources/test/json/basic.json000064400000000000000000000067551046102023000173630ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "filter", "handle": 1 } }, { "chain": { "family": "ip", "table": "filter", "name": "output", "handle": 1, "type": "filter", "hook": "output", "prio": 100, "policy": "accept" } }, { "chain": { "family": "ip", "table": "filter", "name": "input", "handle": 2, "type": "filter", "hook": "input", "prio": 0, "policy": "accept" } }, { "chain": { "family": "ip", "table": "filter", "name": "forward", "handle": 3, "type": "filter", "hook": "forward", "prio": 0, "policy": "drop" } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iifname" } }, "right": "lan0" } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 5, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iifname" } }, "right": "wan0" } }, { "drop": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "forward", "handle": 6, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iifname" } }, "right": "lan0" } }, { "match": { "op": "==", "left": { "meta": { "key": "oifname" } }, "right": "wan0" } }, { "accept": null } ] } }, { "rule": 
{ "family": "ip", "table": "filter", "chain": "forward", "handle": 7, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iifname" } }, "right": "wan0" } }, { "match": { "op": "==", "left": { "meta": { "key": "oifname" } }, "right": "lan0" } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "established", "related" ] } }, { "accept": null } ] } } ] } nftables-0.6.3/resources/test/json/bitflags.json000064400000000000000000000050641046102023000200650ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "inet", "name": "filter", "handle": 1 } }, { "chain": { "family": "inet", "table": "filter", "name": "input", "handle": 1 } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 2, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, "syn" ] }, "right": [ "syn", "ack" ] } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 3, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, [ "fin", "syn", "rst", "ack" ] ] }, "right": "syn" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, [ "fin", "syn", "rst", "ack" ] ] }, "right": [ "syn", "ack" ] } }, { "drop": null } ] } } ] } nftables-0.6.3/resources/test/json/counter.json000064400000000000000000000056231046102023000177520ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "inet", "name": "named_counter_demo", "handle": 1 } }, { "counter": { "family": "inet", "name": "cnt_http", "table": "named_counter_demo", "handle": 2, "comment": "count both http and https packets", "packets": 0, "bytes": 0 } }, { "counter": { "family": "inet", "name": "cnt_smtp", "table": "named_counter_demo", "handle": 3, "packets": 0, "bytes": 0 } }, { "chain": { "family": "inet", "table": "named_counter_demo", "name": "IN", "handle": 1 } }, { "rule": { "family": "inet", "table": "named_counter_demo", "chain": "IN", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 21 } }, { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "inet", "table": "named_counter_demo", "chain": "IN", "handle": 5, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 25 } }, { "counter": "cnt_smtp" } ] } }, { "rule": { "family": "inet", "table": "named_counter_demo", "chain": "IN", "handle": 6, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 80 } }, { "counter": "cnt_http" } ] } }, { "rule": { "family": "inet", "table": "named_counter_demo", "chain": "IN", "handle": 7, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 443 } }, { "counter": "cnt_http" } ] } } ] } nftables-0.6.3/resources/test/json/flow.json000064400000000000000000000024271046102023000172410ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "inet", "name": "named_counter_demo", 
"handle": 3 } }, { "flowtable": { "family": "inet", "name": "flowed", "table": "named_counter_demo", "handle": 2, "hook": "ingress", "prio": 0, "dev": "lo" } }, { "chain": { "family": "inet", "table": "named_counter_demo", "name": "forward", "handle": 1, "type": "filter", "hook": "forward", "prio": 0, "policy": "accept" } }, { "rule": { "family": "inet", "table": "named_counter_demo", "chain": "forward", "handle": 3, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "established" } }, { "flow": { "op": "add", "flowtable": "@flowed" } } ] } } ] } nftables-0.6.3/resources/test/json/nat.json000064400000000000000000000042461046102023000170550ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "nat", "handle": 1 } }, { "chain": { "family": "ip", "table": "nat", "name": "prerouting", "handle": 1, "type": "nat", "hook": "prerouting", "prio": 0, "policy": "accept" } }, { "chain": { "family": "ip", "table": "nat", "name": "postrouting", "handle": 2, "type": "nat", "hook": "postrouting", "prio": 100, "policy": "accept" } }, { "rule": { "family": "ip", "table": "nat", "chain": "postrouting", "handle": 3, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "l4proto" } }, "right": "tcp" } }, { "match": { "op": "!=", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": { "prefix": { "addr": "192.168.122.0", "len": 24 } } } }, { "masquerade": { "port": { "range": [ 1024, 65535 ] } } } ] } }, { "rule": { "family": "ip", "table": "nat", "chain": "postrouting", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "oifname" } }, "right": "wan0" } }, { "masquerade": null } ] } } ] } nftables-0.6.3/resources/test/json/nftables-init.json000064400000000000000000000522331046102023000210310ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "nat", "handle": 1 } }, { "chain": { "family": "ip", "table": "nat", "name": "prerouting", "handle": 1, "type": "nat", "hook": "prerouting", "prio": 0, "policy": "accept" } }, { "chain": { "family": "ip", "table": "nat", "name": "postrouting", "handle": 2, "type": "nat", "hook": "postrouting", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip", "table": "nat", "chain": "prerouting", "handle": 3, "expr": [ { "redirect": null } ] } }, { "rule": { "family": "ip", "table": "nat", "chain": "prerouting", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 21 } }, { "redirect": { "port": 21212 } } ] } }, { "table": { "family": "inet", "name": "filter", "handle": 2 } }, { "set": { "family": "inet", "name": "blackhole", "table": "filter", "type": "ipv4_addr", "handle": 4, "flags": [ "timeout" ], "timeout": 86400 } }, { "chain": { "family": "inet", "table": "filter", "name": "input", "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "accept" } }, { "chain": { "family": "inet", "table": "filter", "name": "output", "handle": 2, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, { "chain": { "family": "inet", "table": "filter", "name": "admin", "handle": 3 } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 5, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "saddr" } }, "right": 
"@blackhole" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 6, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "established", "related" ] } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 7, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 8, "expr": [ { "match": { "op": "!=", "left": { "payload": { "protocol": "tcp", "field": "flags" } }, "right": "syn" } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "new" } }, { "log": { "prefix": "FIRST PACKET IS NOT SYN" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 9, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, [ "fin", "syn" ] ] }, "right": [ "fin", "syn" ] } }, { "log": { "prefix": "SCANNER1" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 10, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, [ "syn", "rst" ] ] }, "right": [ "syn", "rst" ] } }, { "log": { "prefix": "SCANNER2" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 11, "expr": [ { "match": { "op": "<", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, { "|": [ { "|": [ { "|": [ { "|": [ { "|": [ "fin", "syn" ] }, "rst" ] }, "psh" ] }, "ack" ] }, "urg" ] } ] }, "right": "fin" } }, { "log": { "prefix": "SCANNER3" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 12, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, [ "fin", "syn", "rst", "psh", "ack", "urg" ] ] }, "right": [ "fin", "psh", "urg" ] } }, { "log": { "prefix": "SCANNER4" } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 13, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "invalid" } }, { "log": { "prefix": "Invalid conntrack state: ", "flags": [ "skuid", "ether" ] } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 15, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": { "set": [ 22, 80, 443 ] } } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "new" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 17, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "saddr" } }, "right": { "set": [ { "prefix": { "addr": "10.0.0.0", "len": 8 } }, { "prefix": { "addr": "12.34.56.72", "len": 29 } }, { "prefix": { "addr": "172.16.0.0", "len": 16 } } ] } } }, { "jump": { "target": "admin" } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 19, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip6", "field": "nexthdr" } }, "right": "ipv6-icmp" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "icmpv6", "field": "type" } }, "right": { "set": [ "destination-unreachable", 
"packet-too-big", "time-exceeded", "parameter-problem", "nd-router-advert", "nd-neighbor-solicit", "nd-neighbor-advert" ] } } }, { "limit": { "rate": 100, "burst": 5, "per": "second" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 21, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "protocol" } }, "right": "icmp" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "icmp", "field": "type" } }, "right": { "set": [ "destination-unreachable", "router-advertisement", "time-exceeded", "parameter-problem" ] } } }, { "limit": { "rate": 100, "burst": 5, "per": "second" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 22, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "established", "related" ] } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 23, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "oif" } }, "right": "lo" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 25, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "udp", "field": "dport" } }, "right": 53 } }, { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": { "set": [ "8.8.4.4", "8.8.8.8" ] } } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 27, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 53 } }, { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": { "set": [ "8.8.4.4", "8.8.8.8" ] } } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 28, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "udp", "field": "dport" } }, "right": 67 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 29, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "udp", "field": "dport" } }, "right": 443 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 31, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": { "set": [ 25, 465, 587 ] } } }, { "match": { "op": "!=", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": "127.0.0.1" } }, { "log": { "prefix": "SPAMALERT!" 
} }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 33, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": { "set": [ 80, 443 ] } } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "new" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 34, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "protocol" } }, "right": "icmp" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "icmp", "field": "type" } }, "right": "echo-request" } }, { "limit": { "rate": 1, "burst": 5, "per": "second" } }, { "log": null }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 35, "expr": [ { "log": { "prefix": "Outgoing packet dropped: ", "flags": "all" } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "admin", "handle": 36, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 22 } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "new" } }, { "log": { "prefix": "Admin connection:" } }, { "accept": null } ] } } ] } nftables-0.6.3/resources/test/json/setmap.json000064400000000000000000000043241046102023000175610ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "nat", "handle": 9 } }, { "map": { "family": "ip", "name": "porttoip", "table": "nat", "type": "inet_service", "handle": 3, "map": "ipv4_addr", "elem": [ [ 80, "192.168.1.100" ], [ 8888, "192.168.1.101" ] ] } }, { "chain": { "family": "ip", "table": "nat", "name": "prerouting", "handle": 1 } }, { "chain": { "family": "ip", "table": "nat", "name": "postrouting", "handle": 2 } }, { "rule": { "family": "ip", "table": "nat", "chain": "prerouting", "handle": 5, "expr": [ { "dnat": { "addr": { "map": { "key": { "payload": { "protocol": "tcp", "field": "dport" } }, "data": { "set": [ [ 80, "192.168.1.100" ], [ 8888, "192.168.1.101" ] ] } } } } } ] } }, { "rule": { "family": "ip", "table": "nat", "chain": "postrouting", "handle": 6, "expr": [ { "snat": { "addr": { "map": { "key": { "payload": { "protocol": "tcp", "field": "dport" } }, "data": "@porttoip" } } } } ] } } ] } nftables-0.6.3/resources/test/json/space-keys.json000064400000000000000000000121241046102023000203310ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "filter", "handle": 1 } }, { "ct expectation": { "family": "ip", "name": "e_pgsql", "table": "filter", "handle": 4, "protocol": "tcp", "dport": 5432, "timeout": 3600000, "size": 12, "l3proto": "ip" } }, { "ct helper": { "family": "ip", "name": "ftp-standard", "table": "filter", "handle": 5, "type": "ftp", "protocol": "tcp", "l3proto": "ip" } }, { "chain": { "family": "ip", "table": "filter", "name": "INPUT", "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "accept" } }, { "chain": { "family": "ip", "table": "filter", "name": "FORWARD", "handle": 2, "type": "filter", "hook": "forward", "prio": 0, "policy": "accept" } }, { "chain": { "family": "ip", "table": "filter", "name": "OUTPUT", "handle": 3, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip", 
"table": "filter", "chain": "INPUT", "handle": 6, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 22 } }, { "ct count": { "val": 10 } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "INPUT", "handle": 7, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "new" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 8888 } }, { "ct expectation": "e_pgsql" } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "INPUT", "handle": 8, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "established", "related" ] } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "FORWARD", "handle": 9, "expr": [ { "match": { "op": "in", "left": { "payload": { "protocol": "tcp", "field": "flags" } }, "right": "syn" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "mangle": { "key": { "tcp option": { "name": "maxseg", "field": "size" } }, "value": { "rt": { "key": "mtu" } } } } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "FORWARD", "handle": 10, "expr": [ { "match": { "op": "==", "left": { "sctp chunk": { "name": "data", "field": "flags" } }, "right": 2 } } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "FORWARD", "handle": 11, "expr": [ { "match": { "op": "==", "left": { "ct": { "key": "helper" } }, "right": "ftp-standard" } }, { "accept": null } ] } } ] } nftables-0.6.3/resources/test/json/synproxy.json000064400000000000000000000141731046102023000202060ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.6", "release_name": "Lester Gooch #5", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "synproxy_anonymous", "handle": 1 } }, { "chain": { "family": "ip", "table": "synproxy_anonymous", "name": "PREROUTING", "handle": 1, "type": "filter", "hook": "prerouting", "prio": -300, "policy": "accept" } }, { "chain": { "family": "ip", "table": "synproxy_anonymous", "name": "INPUT", "handle": 2, "type": "filter", "hook": "input", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip", "table": "synproxy_anonymous", "chain": "PREROUTING", "handle": 3, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 8080 } }, { "match": { "op": "in", "left": { "payload": { "protocol": "tcp", "field": "flags" } }, "right": "syn" } }, { "notrack": null } ] } }, { "rule": { "family": "ip", "table": "synproxy_anonymous", "chain": "INPUT", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 8080 } }, { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "invalid", "untracked" ] } }, { "synproxy": { "mss": 1460, "wscale": 7, "flags": [ "timestamp", "sack-perm" ] } } ] } }, { "rule": { "family": "ip", "table": "synproxy_anonymous", "chain": "INPUT", "handle": 5, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "invalid" } }, { "drop": null } ] } }, { "table": { "family": "ip", "name": "synproxy_named", "handle": 2 } }, { "synproxy": { "family": "ip", "name": "synproxy_named_1", "table": "synproxy_named", "handle": 3, "mss": 1460, "wscale": 7, "flags": [ "timestamp", "sack-perm" ] } }, { "synproxy": { "family": "ip", "name": "synproxy_named_2", "table": "synproxy_named", "handle": 4, "mss": 1460, 
"wscale": 5 } }, { "chain": { "family": "ip", "table": "synproxy_named", "name": "PREROUTING", "handle": 1, "type": "filter", "hook": "prerouting", "prio": -300, "policy": "accept" } }, { "chain": { "family": "ip", "table": "synproxy_named", "name": "FORWARD", "handle": 2, "type": "filter", "hook": "forward", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip", "table": "synproxy_named", "chain": "PREROUTING", "handle": 5, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 8080 } }, { "match": { "op": "in", "left": { "payload": { "protocol": "tcp", "field": "flags" } }, "right": "syn" } }, { "notrack": null } ] } }, { "rule": { "family": "ip", "table": "synproxy_named", "chain": "FORWARD", "handle": 7, "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": [ "invalid", "untracked" ] } }, { "synproxy": { "map": { "key": { "payload": { "protocol": "ip", "field": "saddr" } }, "data": { "set": [ [ { "prefix": { "addr": "192.168.1.0", "len": 24 } }, "synproxy_named_1" ], [ { "prefix": { "addr": "192.168.2.0", "len": 24 } }, "synproxy_named_2" ] ] } } } } ] } } ] } nftables-0.6.3/resources/test/json/tproxy.json000064400000000000000000000052631046102023000176400ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "inet", "name": "filter", "handle": 1 } }, { "chain": { "family": "inet", "table": "filter", "name": "tproxy_ipv4", "handle": 1 } }, { "chain": { "family": "inet", "table": "filter", "name": "tproxy_ipv6", "handle": 2 } }, { "rule": { "family": "inet", "table": "filter", "chain": "tproxy_ipv4", "handle": 3, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "l4proto" } }, "right": "tcp" } }, { "tproxy": { "family": "ip", "addr": "127.0.0.1", "port": 12345 } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "tproxy_ipv4", "handle": 4, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "l4proto" } }, "right": "tcp" } }, { "tproxy": { "family": "ip", "port": 12345 } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "tproxy_ipv6", "handle": 5, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "l4proto" } }, "right": "tcp" } }, { "tproxy": { "family": "ip6", "addr": "::1", "port": 12345 } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "tproxy_ipv6", "handle": 6, "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "l4proto" } }, "right": "tcp" } }, { "tproxy": { "family": "ip6", "port": 12345 } } ] } } ] } nftables-0.6.3/resources/test/json/workstation.json000064400000000000000000000273521046102023000206620ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "ip", "name": "filter", "handle": 1 } }, { "chain": { "family": "ip", "table": "filter", "name": "input", "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "drop" } }, { "chain": { "family": "ip", "table": "filter", "name": "forward", "handle": 2, "type": "filter", "hook": "forward", "prio": 0, "policy": "drop" } }, { "chain": { "family": "ip", "table": "filter", "name": "output", "handle": 3, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 4, "comment": "early drop of invalid packets", "expr": [ { "match": { "op": "in", "left": 
{ "ct": { "key": "state" } }, "right": "invalid" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 6, "comment": "accept all connections related to connections made by us", "expr": [ { "match": { "op": "==", "left": { "ct": { "key": "state" } }, "right": { "set": [ "established", "related" ] } } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 7, "comment": "accept loopback", "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 8, "comment": "drop connections to loopback not coming from loopback", "expr": [ { "match": { "op": "!=", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": { "prefix": { "addr": "127.0.0.0", "len": 8 } } } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 9, "comment": "accept all ICMP types", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "protocol" } }, "right": "icmp" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 10, "comment": "accept SSH", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 22 } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "input", "handle": 11, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "forward", "handle": 12, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "ip", "table": "filter", "chain": "output", "handle": 13, "comment": "count accepted packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "table": { "family": "ip6", "name": "filter", "handle": 2 } }, { "chain": { "family": "ip6", "table": "filter", "name": "input", "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "drop" } }, { "chain": { "family": "ip6", "table": "filter", "name": "forward", "handle": 2, "type": "filter", "hook": "forward", "prio": 0, "policy": "drop" } }, { "chain": { "family": "ip6", "table": "filter", "name": "output", "handle": 3, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 4, "comment": "early drop of invalid packets", "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "invalid" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 6, "comment": "accept all connections related to connections made by us", "expr": [ { "match": { "op": "==", "left": { "ct": { "key": "state" } }, "right": { "set": [ "established", "related" ] } } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 7, "comment": "accept loopback", "expr": [ { 
"match": { "op": "==", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "accept": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 8, "comment": "drop connections to loopback not coming from loopback", "expr": [ { "match": { "op": "!=", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "ip6", "field": "daddr" } }, "right": "::1" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 9, "comment": "accept all ICMP types", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip6", "field": "nexthdr" } }, "right": "ipv6-icmp" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 10, "comment": "accept SSH", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 22 } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "input", "handle": 11, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "forward", "handle": 12, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "ip6", "table": "filter", "chain": "output", "handle": 13, "comment": "count accepted packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } } ] } nftables-0.6.3/resources/test/json/workstation_combined.json000064400000000000000000000171031046102023000225130ustar 00000000000000{ "nftables": [ { "metainfo": { "version": "1.0.9", "release_name": "Old Doc Yak #3", "json_schema_version": 1 } }, { "table": { "family": "inet", "name": "filter", "handle": 1 } }, { "chain": { "family": "inet", "table": "filter", "name": "input", "handle": 1, "type": "filter", "hook": "input", "prio": 0, "policy": "drop" } }, { "chain": { "family": "inet", "table": "filter", "name": "forward", "handle": 2, "type": "filter", "hook": "forward", "prio": 0, "policy": "drop" } }, { "chain": { "family": "inet", "table": "filter", "name": "output", "handle": 3, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 4, "comment": "early drop of invalid packets", "expr": [ { "match": { "op": "in", "left": { "ct": { "key": "state" } }, "right": "invalid" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 6, "comment": "accept all connections related to connections made by us", "expr": [ { "match": { "op": "==", "left": { "ct": { "key": "state" } }, "right": { "set": [ "established", "related" ] } } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 7, "comment": "accept loopback", "expr": [ { "match": { "op": "==", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 8, "comment": "drop connections to loopback not coming from loopback", "expr": [ { "match": { "op": "!=", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "match": { 
"op": "==", "left": { "payload": { "protocol": "ip", "field": "daddr" } }, "right": { "prefix": { "addr": "127.0.0.0", "len": 8 } } } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 9, "comment": "drop connections to loopback not coming from loopback", "expr": [ { "match": { "op": "!=", "left": { "meta": { "key": "iif" } }, "right": "lo" } }, { "match": { "op": "==", "left": { "payload": { "protocol": "ip6", "field": "daddr" } }, "right": "::1" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "drop": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 10, "comment": "accept all ICMP types", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip", "field": "protocol" } }, "right": "icmp" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 11, "comment": "accept all ICMP types", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "ip6", "field": "nexthdr" } }, "right": "ipv6-icmp" } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 12, "comment": "accept SSH", "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 22 } }, { "counter": { "packets": 0, "bytes": 0 } }, { "accept": null } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "input", "handle": 13, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "forward", "handle": 14, "comment": "count dropped packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } }, { "rule": { "family": "inet", "table": "filter", "chain": "output", "handle": 15, "comment": "count accepted packets", "expr": [ { "counter": { "packets": 0, "bytes": 0 } } ] } } ] } nftables-0.6.3/resources/test/nft/NOTICE000064400000000000000000000026671046102023000161270ustar 00000000000000Nftables/Examples (files basic.nft, nat.nft, workstation_combined.nft, workstation.nft) Copyright 2001–2022 Gentoo Foundation, Inc. This product includes software developed at Gentoo Foundation, Inc. (https://gentoo.org), licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License (https://creativecommons.org/licenses/by-sa/3.0/). ===== nftables-example (file nftables-init.nft) Copyright 2021 Yoram van de Velde Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. nftables-0.6.3/resources/test/nft/basic.nft000064400000000000000000000011431046102023000170010ustar 00000000000000#!/sbin/nft -f flush ruleset table ip filter { # allow all packets sent by the firewall machine itself chain output { type filter hook output priority 100; policy accept; } # allow LAN to firewall, disallow WAN to firewall chain input { type filter hook input priority 0; policy accept; iifname "lan0" accept iifname "wan0" drop } # allow packets from LAN to WAN, and WAN to LAN if LAN initiated the connection chain forward { type filter hook forward priority 0; policy drop; iifname "lan0" oifname "wan0" accept iifname "wan0" oifname "lan0" ct state related,established accept } } nftables-0.6.3/resources/test/nft/bitflags.nft000064400000000000000000000003301046102023000175100ustar 00000000000000#!/sbin/nft -f flush ruleset table inet filter { chain input { tcp flags and syn == syn|ack drop tcp flags and (syn|ack|fin|rst) == syn drop tcp flags and (syn|ack|fin|rst) == syn|ack drop } } nftables-0.6.3/resources/test/nft/counter.nft000064400000000000000000000005041046102023000173770ustar 00000000000000table inet named_counter_demo { counter cnt_http { comment "count both http and https packets" packets 0 bytes 0 } counter cnt_smtp { packets 0 bytes 0 } chain IN { tcp dport 21 counter tcp dport 25 counter name "cnt_smtp" tcp dport 80 counter name "cnt_http" tcp dport 443 counter name "cnt_http" } } nftables-0.6.3/resources/test/nft/flow.nft000064400000000000000000000004161046102023000166710ustar 00000000000000#!/sbin/nft -f flush ruleset table inet named_counter_demo { flowtable flowed { hook ingress priority filter devices = { lo } } chain forward { type filter hook forward priority filter; policy accept; ct state established flow add @flowed } } nftables-0.6.3/resources/test/nft/nat.nft000064400000000000000000000004651046102023000165100ustar 00000000000000#!/sbin/nft -f flush ruleset table ip nat { chain prerouting { type nat hook prerouting priority 0; policy accept; } chain postrouting { type nat hook postrouting priority 100; policy accept; meta l4proto tcp ip daddr != 192.168.122.0/24 masquerade to :1024-65535 oifname "wan0" masquerade } } nftables-0.6.3/resources/test/nft/nftables-init.nft000064400000000000000000000122421046102023000204610ustar 00000000000000# # Netfilter's NFTable firewall # # This is just a ruleset to play around with the syntax introduced # in nftables and itis my way of getting to know it. # # Here might be dragons! # # To invoke: # # $ sudo iptable-save > iptables.backup # $ sudo iptables -P INPUT DROP # $ sudo iptables -F # $ sudo iptables -X # $ sudo nft flush ruleset && sudo nft -f nftables-init.rules # # To get back to your iptables ruleset: # # $ sudo nft flush ruleset # $ sudo iptables-restore < iptables.backup # # BEWARE: during the above commands there is a short moment where # there are no firewall rules active. That is why the default # policy is changed to drop all traffic. But still you # should make sure to only try this on trusted networks! 
# flush ruleset define admin = { 12.34.56.78/29, 10.11.12.0/8, 172.16.1.0/16 } define google_dns = { 8.8.8.8, 8.8.4.4 } define mailout = { 127.0.0.1 } table nat { chain prerouting { type nat hook prerouting priority 0 # initiate redirecting on the local machine and redirect incoming # traffic on port 21 to 21212 which is nice for docker for example redirect tcp dport 21 redirect to 21212 } chain postrouting { type nat hook postrouting priority 0 # we need this chain even if there are no rules for the return # path otherwise the path will not exist } } table inet filter { chain input { type filter hook input priority 0; policy accept # drop all bad actors before we do rel/est ip saddr @blackhole drop # connection track and accept previous accepted traffic ct state established,related accept # localhost godmode iif lo accept # if the connection is NEW and is not SYN then drop tcp flags != syn ct state new log prefix "FIRST PACKET IS NOT SYN" drop # new and sending FIN the connection? DROP! tcp flags & (fin|syn) == (fin|syn) log prefix "SCANNER1" drop # i don't think we've met but you're sending a reset? tcp flags & (syn|rst) == (syn|rst) log prefix "SCANNER2" drop # 0 attack? tcp flags & (fin|syn|rst|psh|ack|urg) < (fin) log prefix "SCANNER3" drop # xmas attack. lights up everything tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) log prefix "SCANNER4" drop # if the ctstate is invalid ct state invalid log flags skuid flags ether prefix "Invalid conntrack state: "counter drop # open ssh, http and https and give them the new state tcp dport { ssh, http, https } ct state new accept # handle packets from iprange to admin chain ip saddr $admin jump admin # icmpv6 for ipv6 connections ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } limit rate 100/second accept # icmp for ipv4 connections ip protocol icmp icmp type { destination-unreachable, router-advertisement, time-exceeded, parameter-problem } limit rate 100/second accept # otherwise we drop, drop, drop # # when you are troubleshooting uncomment the next line. # log prefix "Incoming packet dropped: " } chain output { type filter hook output priority 0; policy accept # connection track and accept previous accepted traffic ct state established,related accept # all powerfull... as long as it is to localhost oif lo accept # allow DNS request if they are not to Google's DNS # i think this would qualify as torture, but I # have never claimed this set to be technically # or morraly sound. udp dport 53 ip daddr $google_dns accept tcp dport 53 ip daddr $google_dns accept # allow dhcp udp dport 67 accept # youtube needs this for tracking where you are in the video... weird. udp dport 443 accept # mail, really? are you malwa... -uhm- mailware! tcp dport {25,465,587} ip daddr != $mailout log prefix "SPAMALERT!" 
drop # allow web requests tcp dport { http, https } ct state new accept # limit outgoing icmp type 8 traffic ip protocol icmp icmp type echo-request limit rate 1/second log accept # log packet before it is dropped log flags all prefix "Outgoing packet dropped: " } chain admin { tcp dport ssh ct state new log prefix "Admin connection:" accept } set blackhole { # to add ip's to the blacklist you could use the commandline _nft_ tool ie: # nft add element ip filter blackhole { 192.168.1.4, 192.168.1.5 } # blackhole ipset where we set the type of element as ipv4 type ipv4_addr # we will set a timer on the element after which it is cleared flags timeout # the value of the timer timeout 1d } } nftables-0.6.3/resources/test/nft/setmap.nft000064400000000000000000000005571046102023000172210ustar 00000000000000#!/sbin/nft -f # https://wiki.nftables.org/wiki-nftables/index.php/Maps flush ruleset table ip nat { map porttoip { type inet_service : ipv4_addr elements = { 80 : 192.168.1.100, 8888 : 192.168.1.101 } } chain prerouting { dnat to tcp dport map { 80 : 192.168.1.100, 8888 : 192.168.1.101 } } chain postrouting { snat to tcp dport map @porttoip } } nftables-0.6.3/resources/test/nft/space-keys.nft000064400000000000000000000016171046102023000177720ustar 00000000000000# this tests various key names with spaces: # * ct count # * ct expectation # * ct helper # * ct timeout # * sctp chunk # * tcp option # nft rule snippets are taken from wiki.nftables.org table ip filter { ct expectation e_pgsql { protocol tcp dport 5432 timeout 1h size 12 l3proto ip } ct helper ftp-standard { type "ftp" protocol tcp l3proto ip } chain INPUT { type filter hook input priority filter; policy accept; tcp dport 22 ct count 10 accept ct state new tcp dport 8888 ct expectation set "e_pgsql" ct state established,related counter packets 0 bytes 0 accept } chain FORWARD { type filter hook forward priority filter; policy accept; tcp flags syn counter packets 0 bytes 0 tcp option maxseg size set rt mtu sctp chunk data flags 2 ct helper "ftp-standard" accept } chain OUTPUT { type filter hook output priority filter; policy accept; } } nftables-0.6.3/resources/test/nft/synproxy.nft000064400000000000000000000017441046102023000176420ustar 00000000000000table ip synproxy_anonymous { chain PREROUTING { type filter hook prerouting priority raw; policy accept; tcp dport 8080 tcp flags syn notrack } chain INPUT { type filter hook input priority filter; policy accept; tcp dport 8080 ct state invalid,untracked synproxy mss 1460 wscale 7 timestamp sack-perm ct state invalid drop } } table ip synproxy_named { synproxy synproxy_named_1 { mss 1460 wscale 7 timestamp sack-perm } synproxy synproxy_named_2 { mss 1460 wscale 5 } chain PREROUTING { type filter hook prerouting priority raw; policy accept; tcp dport 8080 tcp flags syn notrack } chain FORWARD { type filter hook forward priority filter; policy accept; ct state invalid,untracked synproxy name ip saddr map { 192.168.1.0/24 : "synproxy_named_1", 192.168.2.0/24 : "synproxy_named_2", } } } nftables-0.6.3/resources/test/nft/tproxy.nft000064400000000000000000000004431046102023000172670ustar 00000000000000#!/sbin/nft -f flush ruleset table inet filter { chain tproxy_ipv4 { meta l4proto tcp tproxy ip to 127.0.0.1:12345 meta l4proto tcp tproxy ip to :12345 } chain tproxy_ipv6 { meta l4proto tcp tproxy ip6 to [::1]:12345 meta l4proto tcp tproxy ip6 to :12345 } } nftables-0.6.3/resources/test/nft/workstation.nft000064400000000000000000000034471046102023000203150ustar 00000000000000#!/sbin/nft -f 
flush ruleset # ----- IPv4 ----- table ip filter { chain input { type filter hook input priority 0; policy drop; ct state invalid counter drop comment "early drop of invalid packets" ct state {established, related} counter accept comment "accept all connections related to connections made by us" iif lo accept comment "accept loopback" iif != lo ip daddr 127.0.0.1/8 counter drop comment "drop connections to loopback not coming from loopback" ip protocol icmp counter accept comment "accept all ICMP types" tcp dport 22 counter accept comment "accept SSH" counter comment "count dropped packets" } chain forward { type filter hook forward priority 0; policy drop; counter comment "count dropped packets" } # If you're not counting packets, this chain can be omitted. chain output { type filter hook output priority 0; policy accept; counter comment "count accepted packets" } } # ----- IPv6 ----- table ip6 filter { chain input { type filter hook input priority 0; policy drop; ct state invalid counter drop comment "early drop of invalid packets" ct state {established, related} counter accept comment "accept all connections related to connections made by us" iif lo accept comment "accept loopback" iif != lo ip6 daddr ::1/128 counter drop comment "drop connections to loopback not coming from loopback" ip6 nexthdr icmpv6 counter accept comment "accept all ICMP types" tcp dport 22 counter accept comment "accept SSH" counter comment "count dropped packets" } chain forward { type filter hook forward priority 0; policy drop; counter comment "count dropped packets" } # If you're not counting packets, this chain can be omitted. chain output { type filter hook output priority 0; policy accept; counter comment "count accepted packets" } } nftables-0.6.3/resources/test/nft/workstation_combined.nft000064400000000000000000000021001046102023000221360ustar 00000000000000#!/sbin/nft -f flush ruleset table inet filter { chain input { type filter hook input priority 0; policy drop; ct state invalid counter drop comment "early drop of invalid packets" ct state {established, related} counter accept comment "accept all connections related to connections made by us" iif lo accept comment "accept loopback" iif != lo ip daddr 127.0.0.1/8 counter drop comment "drop connections to loopback not coming from loopback" iif != lo ip6 daddr ::1/128 counter drop comment "drop connections to loopback not coming from loopback" ip protocol icmp counter accept comment "accept all ICMP types" ip6 nexthdr icmpv6 counter accept comment "accept all ICMP types" tcp dport 22 counter accept comment "accept SSH" counter comment "count dropped packets" } chain forward { type filter hook forward priority 0; policy drop; counter comment "count dropped packets" } # If you're not counting packets, this chain can be omitted. 
chain output { type filter hook output priority 0; policy accept; counter comment "count accepted packets" } } nftables-0.6.3/resources/test/nft-to-json.sh000075500000000000000000000004531046102023000171400ustar 00000000000000#!/bin/sh set -e cd "$(dirname "$0")" INPUT_DIR=./nft OUTPUT_DIR=./json convert_file () { INFILE=$1 unshare -rn sh -exc "nft -f \"${INFILE}\" && nft -j list ruleset" } for nftfile in "$INPUT_DIR"/*.nft; do convert_file "$nftfile" | jq > "$OUTPUT_DIR/$(basename "$nftfile" .nft).json" done nftables-0.6.3/src/batch.rs000064400000000000000000000030641046102023000136510ustar 00000000000000use serde::{Deserialize, Serialize}; use crate::schema::{NfCmd, NfListObject, NfObject, Nftables}; #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize)] /// Batch manages nftables objects and is used to prepare an nftables payload. pub struct Batch<'a> { data: Vec>, } impl Default for Batch<'_> { fn default() -> Self { Self::new() } } impl<'a> Batch<'a> { /// Creates an empty Batch instance. pub fn new() -> Batch<'a> { Batch { data: Vec::new() } } /// Adds object with `add` command to Batch. pub fn add(&mut self, obj: NfListObject<'a>) { self.data.push(NfObject::CmdObject(NfCmd::Add(obj))) } /// Adds object with `delete` command to Batch. pub fn delete(&mut self, obj: NfListObject<'a>) { self.data.push(NfObject::CmdObject(NfCmd::Delete(obj))) } /// Adds a command to Batch. pub fn add_cmd(&mut self, cmd: NfCmd<'a>) { self.data.push(NfObject::CmdObject(cmd)) } /// Adds a list object (without a command) directly to Batch. /// This corresponds to the descriptive output format of `nft -j list ruleset`. pub fn add_obj(&mut self, obj: NfListObject<'a>) { self.data.push(NfObject::ListObject(obj)) } /// Adds all given objects to the batch. pub fn add_all>>(&mut self, objs: I) { self.data.extend(objs) } /// Wraps Batch in nftables object. pub fn to_nftables(self) -> Nftables<'a> { Nftables { objects: self.data.into(), } } } nftables-0.6.3/src/cli.rs000064400000000000000000000074571046102023000133510ustar 00000000000000use crate::schema::Nftables; use schemars::schema_for; use std::{env::args, fs, io::Read, process::exit}; /// Get command arguments. /// /// This skips the first argument, because it is the program path itself. pub fn collect_command_args() -> Vec { args().skip(1).collect() } /// Dispatch command line arguments to commands. pub fn handle_args(args: Vec) { let mut args = args.into_iter(); if let Some(command) = &args.next() { if command == "schema" { generate_json_schema(args.next().unwrap_or("./nftables.schema.json".to_string())); return; } eprintln!("Unknown command: `{command}`. Try again with the schema command to generate a JSON Schema or call with stdin only."); exit(1); } deserialize_stdin(); } fn generate_json_schema(schema_dst_path: String) { let schema = schema_for!(Nftables); if let Err(err) = fs::write( schema_dst_path.clone(), serde_json::to_string_pretty(&schema).expect("Serde should serialize the document"), ) { eprintln!("Failed to write data to file: {err}"); exit(1); } println!("Wrote schema data to: {schema_dst_path}"); } /// Deserializes nftables JSON from the standard input and prints the result. /// /// This is the default behavior when the executable is called without any /// arguments. 
fn deserialize_stdin() { use std::io; let mut buffer = String::new(); match io::stdin().read_to_string(&mut buffer) { Err(error) => panic!("Problem opening the file: {error:?}"), Ok(_) => { println!("Document: {}", &buffer); let deserializer = &mut serde_json::Deserializer::from_str(&buffer); let result: Result = serde_path_to_error::deserialize(deserializer); match result { Ok(_) => println!("Result: {result:?}"), Err(err) => { panic!("Deserialization error: {err}"); } } } }; } #[cfg(test)] mod tests { use super::*; use serial_test::serial; use std::{env, fs}; use tempfile::TempDir; #[test] // Use serial due to altering the value // of CWD #[serial] fn test_handle_args_schema_default_path() { let tmp_dir = TempDir::new().expect("Should create a temp dir inside `env::tmp_dir`"); let path = tmp_dir.path().join("nftables.schema.json"); // Little hack, to have "control" over the directory // in which the default file is created let cwd = env::current_dir().expect("Should get current dir"); let _ = env::set_current_dir(tmp_dir.path()); handle_args(vec!["schema".to_string()]); let _ = env::set_current_dir(cwd); assert!(fs::metadata(&path).is_ok()); } #[test] fn test_handle_args_schema_custom_path() { let tmp_dir = TempDir::new().expect("Should create a temp dir inside `env::tmp_dir`"); let path = tmp_dir.path().join("test_nftables.schema.json"); handle_args(vec![ "schema".to_string(), path.to_str().unwrap().to_string(), ]); assert!(fs::metadata(&path).is_ok()); } #[test] fn test_generate_json_schema() { let tmp_dir = TempDir::new().expect("Should create a temp dir inside `env::tmp_dir`"); let path = tmp_dir.path().join("nftables.schema.json"); generate_json_schema(path.to_str().unwrap().to_string()); assert!(fs::metadata(&path).is_ok()); let content = fs::read_to_string(&path).unwrap(); // Check if generated file contains JSON schema "$schema" field assert!(content.contains("$schema")); assert_eq!( content.to_string(), serde_json::to_string_pretty(&schema_for!(Nftables)).expect("") ) } } nftables-0.6.3/src/expr.rs000064400000000000000000000625351046102023000135560ustar 00000000000000use schemars::JsonSchema; use serde::{Deserialize, Serialize}; use std::{borrow::Cow, collections::HashSet}; use crate::stmt::{Counter, JumpTarget, Statement}; use crate::visitor::deserialize_flags; use strum_macros::EnumString; #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Expressions are the building blocks of (most) [statements](crate::stmt::Statement). /// In their most basic form, they are just immediate values represented as a /// JSON string, integer or boolean type. pub enum Expression<'a> { // immediates /// A string expression (*immediate expression*). /// For string expressions there are two special cases: /// * `@STRING`: The remaining part is taken as [set](crate::schema::Set) /// name to create a set reference. /// * `\*`: Construct a wildcard expression. String(Cow<'a, str>), /// An integer expression (*immediate expression*). Number(u32), /// A boolean expression (*immediate expression*). Boolean(bool), /// List expressions are constructed by plain arrays containing of an arbitrary number of expressions. List(Vec>), /// A [binary operation](BinaryOperation) expression. BinaryOperation(Box>), /// Construct a range of values. /// /// The first array item denotes the lower boundary, the second one the upper boundary. Range(Box>), /// Wrapper for non-immediate expressions. 
Named(NamedExpression<'a>), /// A verdict expression (used in [verdict maps](crate::stmt::VerdictMap)). Verdict(Verdict<'a>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Wrapper for non-immediate [Expressions](Expression). pub enum NamedExpression<'a> { /// Concatenate several expressions. Concat(Vec>), /// This object constructs an anonymous set with [items](SetItem). /// For mappings, an array of arrays with exactly two elements is expected. Set(Vec>), /// Map a key to a value. Map(Box>), /// Construct an IPv4 or IPv6 [prefix](Prefix) consisting of address part and prefix length. Prefix(Prefix<'a>), /// Construct a [payload](Payload) expression, i.e. a reference to a certain part of packet data. Payload(Payload<'a>), /// Create a reference to a field in an IPv6 extension header. Exthdr(Exthdr<'a>), #[serde(rename = "tcp option")] /// Create a reference to a field of a TCP option header. TcpOption(TcpOption<'a>), #[serde(rename = "sctp chunk")] /// Create a reference to a field of an SCTP chunk. SctpChunk(SctpChunk<'a>), // TODO: DCCP Option /// Create a reference to packet meta data. Meta(Meta), /// Create a reference to packet routing data. RT(RT), /// Create a reference to packet conntrack data. CT(CT<'a>), /// Create a number generator. Numgen(Numgen), /// Hash packet data (Jenkins Hash). JHash(JHash<'a>), /// Hash packet data (Symmetric Hash). SymHash(SymHash), /// Perform kernel Forwarding Information Base lookups. Fib(Fib), /// Explicitly set element object, in case `timeout`, `expires`, or `comment` /// are desired. Elem(Elem<'a>), /// Construct a reference to a packet’s socket. Socket(Socket<'a>), /// Perform OS fingerprinting. /// /// This expression is typically used in the [LHS](crate::stmt::Match::left) /// of a [match](crate::stmt::Match) statement. Osf(Osf<'a>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "map")] /// Map a key to a value. pub struct Map<'a> { /// Map key. pub key: Expression<'a>, /// Mapping expression consisting of value/target pairs. pub data: Expression<'a>, } /// Default map expression (`true -> false`). impl Default for Map<'_> { fn default() -> Self { Map { key: Expression::Boolean(true), data: Expression::Boolean(false), } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Item in an anonymous set. pub enum SetItem<'a> { /// A set item containing a single expression. Element(Expression<'a>), /// A set item mapping two expressions. Mapping(Expression<'a>, Expression<'a>), /// A set item mapping an expression to a statement. MappingStatement(Expression<'a>, Statement<'a>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "prefix")] /// Construct an IPv4 or IPv6 prefix consisting of address part in /// [addr](Prefix::addr) and prefix length in [len](Prefix::len). pub struct Prefix<'a> { /// An IPv4 or IPv6 address. pub addr: Box>, /// The prefix length. pub len: u32, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "range")] /// Construct a range of values. /// The first array item denotes the lower boundary, the second one the upper /// boundary. pub struct Range<'a> { /// The range boundaries. /// /// The first array item denotes the lower boundary, the second one the /// upper boundary. 
pub range: [Expression<'a>; 2], } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Construct a payload expression, i.e. a reference to a certain part of packet /// data. pub enum Payload<'a> { /// Allows one to reference a field by name in a named packet header. PayloadField(PayloadField<'a>), /// Creates a raw payload expression to point at a random number of bits at /// a certain offset from a given reference point. PayloadRaw(PayloadRaw), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Creates a raw payload expression to point at a random number /// ([len](PayloadRaw::len)) of bits at a certain offset /// ([offset](PayloadRaw::offset)) from a given reference point /// ([base](PayloadRaw::base)). pub struct PayloadRaw { /// The (protocol layer) reference point. pub base: PayloadBase, /// Offset from the reference point in bits. pub offset: u32, /// Number of bits. pub len: u32, } /// Default raw payload expression (0-length at link layer). impl Default for PayloadRaw { fn default() -> Self { PayloadRaw { base: PayloadBase::LL, offset: 0, len: 0, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Construct a payload expression, i.e. a reference to a certain part of packet /// data. /// /// Allows to reference a field by name ([field](PayloadField::field)) in a /// named packet header ([protocol](PayloadField::protocol)). pub struct PayloadField<'a> { /// A named packet header. pub protocol: Cow<'a, str>, /// The field name. pub field: Cow<'a, str>, } /// Default payload field reference (`arp ptype`). impl Default for PayloadField<'_> { fn default() -> Self { PayloadField { protocol: "arp".into(), field: "ptype".into(), } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a protocol layer for [payload](Payload) references. pub enum PayloadBase { /// Link layer, for example the Ethernet header. LL, /// Network header, for example IPv4 or IPv6. NH, /// Transport Header, for example TCP. /// /// *Added in nftables 0.9.2 and Linux kernel 5.3.* TH, /// Inner Header / Payload, i.e. after the L4 transport level header. /// /// *Added in Kernel version 6.2.* IH, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "exthdr")] /// Create a reference to a field ([field](Exthdr::field)) in an IPv6 extension /// header ([name](Exthdr::name)). /// /// [offset](Exthdr::offset) is used only for `rt0` protocol. pub struct Exthdr<'a> { /// The IPv6 extension header name. pub name: Cow<'a, str>, /// The field name. /// /// If the [field][Exthdr::field] property is not given, the expression is /// to be used as a header existence check in a [match](crate::stmt::Match) /// statement with a [boolean](Expression::Boolean) on the /// [right](crate::stmt::Match::right) hand side. pub field: Option>, /// The offset length. Used only for `rt0` protocol. pub offset: Option, } /// Default [Exthdr] for `frag` extension header. impl Default for Exthdr<'_> { fn default() -> Self { Exthdr { name: "frag".into(), field: None, offset: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "tcp option")] /// Create a reference to a field ([field](TcpOption::field)) of a TCP option /// header ([name](TcpOption::field)). pub struct TcpOption<'a> { /// The TCP option header name. pub name: Cow<'a, str>, /// The field name. 
/// /// If the field property is not given, the expression is to be used as a /// TCP option existence check in a [match](crate::stmt::Match) /// statement with a [boolean](Expression::Boolean) on the /// [right](crate::stmt::Match::right) hand side. pub field: Option>, } /// Default TCP option for `maxseg` option. impl Default for TcpOption<'_> { fn default() -> Self { TcpOption { name: "maxseg".into(), field: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "sctp chunk")] /// Create a reference to a field ([field](SctpChunk::field)) of an SCTP chunk /// ((name)[SctpChunk::name]). pub struct SctpChunk<'a> { /// The SCTP chunk name. pub name: Cow<'a, str>, /// The field name. /// /// If the field property is not given, the expression is to be used as an /// SCTP chunk existence check in a [match](crate::stmt::Match) statement /// with a [boolean](Expression::Boolean) on the /// [right](crate::stmt::Match::right) hand side. pub field: Cow<'a, str>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "meta")] /// Create a reference to packet meta data. /// /// See [this page](https://wiki.nftables.org/wiki-nftables/index.php/Matching_packet_metainformation) /// for more information. pub struct Meta { /// The packet [meta data key](MetaKey). pub key: MetaKey, } /// Default impl for meta key `l4proto`. impl Default for Meta { fn default() -> Self { Meta { key: MetaKey::L4proto, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a `meta` key for packet meta data. /// /// See [this page](https://wiki.nftables.org/wiki-nftables/index.php/Matching_packet_metainformation) /// for more information. pub enum MetaKey { // matching by packet info: /// Packet type (unicast, broadcast, multicast, other). Pkttype, /// Packet length in bytes. Length, /// Packet protocol / EtherType protocol value. Protocol, /// Netfilter packet protocol family. Nfproto, /// Layer 4 protocol. L4proto, // matching by interface: /// Input interface index. Iif, /// Input interface name. Iifname, /// Input interface type. Iiftype, /// Input interface kind name. Iifkind, /// Input interface group. Iifgroup, /// Output interface index. Oif, /// Output interface name. Oifname, /// Output interface type. Oiftype, /// Output interface kind name. Oifkind, /// Output interface group. Oifgroup, /// Input bridge interface name. Ibridgename, /// Output bridge interface name. Obridgename, /// Input bridge interface name Ibriport, /// Output bridge interface name Obriport, // matching by packet mark, routing class and realm: /// Packet mark. Mark, /// TC packet priority. Priority, /// Routing realm. Rtclassid, // matching by socket uid/gid: /// UID associated with originating socket. Skuid, /// GID associated with originating socket. Skgid, // matching by security selectors: /// CPU number processing the packet. Cpu, /// Socket control group ID. Cgroup, /// `true` if packet was ipsec encrypted. (*obsolete*) Secpath, // matching by miscellaneous selectors: /// Pseudo-random number. Random, /// [nftrace debugging] bit. /// /// [nftract debugging]: Nftrace, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "rt")] /// Create a reference to packet routing data. pub struct RT { /// The routing data key. pub key: RTKey, #[serde(skip_serializing_if = "Option::is_none")] /// The protocol family. 
/// /// The `family` property is optional and defaults to unspecified. pub family: Option, } /// Default impl for [RT] with key [nexthop](RTKey::NextHop). impl Default for RT { fn default() -> Self { RT { key: RTKey::NextHop, family: None, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a key to reference to packet routing data. pub enum RTKey { /// Routing realm. ClassId, /// Routing nexthop. NextHop, /// TCP maximum segment size of route. MTU, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a protocol family for use by the [rt](RT) expression. pub enum RTFamily { /// IPv4 RT protocol family. IP, /// IPv6 RT protocol family. IP6, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "ct")] /// Create a reference to packet conntrack data. pub struct CT<'a> { /// The conntrack expression. /// /// See also: *CONNTRACK EXPRESSIONS* in *ntf(8)*. pub key: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The [conntrack protocol family](CTFamily). pub family: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Conntrack flow [direction](CTDir). /// /// Some CT keys do not support a direction. /// In this case, `dir` must not be given. pub dir: Option, } /// Default impl for conntrack with `l3proto` conntrack key. impl Default for CT<'_> { fn default() -> Self { CT { key: "l3proto".into(), family: None, dir: None, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a protocol family for use by the [ct](CT) expression. pub enum CTFamily { /// IPv4 conntrack protocol family. IP, /// IPv6 conntrack protocol family. IP6, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a direction for use by the [ct](CT) expression. pub enum CTDir { /// Original direction. Original, /// Reply direction. Reply, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "numgen")] /// Create a number generator. pub struct Numgen { /// The [number generator mode](NgMode). pub mode: NgMode, #[serde(rename = "mod")] /// Specifies an upper boundary ("modulus") which is not reached by returned /// numbers. pub ng_mod: u32, #[serde(skip_serializing_if = "Option::is_none")] /// Allows one to increment the returned value by a fixed offset. pub offset: Option, } /// Default impl for [numgen](Numgen) with mode [inc](NgMode::Inc) and mod `7`. impl Default for Numgen { fn default() -> Self { Numgen { mode: NgMode::Inc, ng_mod: 7, offset: None, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a number generator mode. pub enum NgMode { /// The last returned value is simply incremented. Inc, /// A new random number is returned. Random, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "jhash")] /// Hash packet data (Jenkins Hash). pub struct JHash<'a> { #[serde(rename = "mod")] /// Specifies an upper boundary ("modulus") which is not reached by returned numbers. pub hash_mod: u32, #[serde(skip_serializing_if = "Option::is_none")] /// Increment the returned value by a fixed offset. 
pub offset: Option, /// Determines the parameters of the packet header to apply the hashing, /// concatenations are possible as well. pub expr: Box>, #[serde(skip_serializing_if = "Option::is_none")] /// Specify an init value used as seed in the hashing function pub seed: Option, } /// Default impl for [jhash](JHash). impl Default for JHash<'_> { fn default() -> Self { JHash { hash_mod: 7, offset: None, expr: Box::new(Expression::Named(NamedExpression::Payload( Payload::PayloadField(PayloadField { protocol: "ip".into(), field: "saddr".into(), }), ))), seed: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "symhash")] /// Hash packet data (Symmetric Hash). pub struct SymHash { #[serde(rename = "mod")] /// Specifies an upper boundary ("modulus") which is not reached by returned numbers. pub hash_mod: u32, /// Increment the returned value by a fixed offset. pub offset: Option, } /// Default impl for [symhash](SymHash). impl Default for SymHash { fn default() -> Self { SymHash { hash_mod: 2, offset: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "fib")] /// Perform kernel Forwarding Information Base lookups. pub struct Fib { /// The data to be queried by fib lookup. pub result: FibResult, #[serde(deserialize_with = "deserialize_flags")] /// The tuple of elements ([FibFlags](FibFlag)) that is used as input to the /// fib lookup functions. pub flags: HashSet, } /// Default impl for [fib](Fib). impl Default for Fib { fn default() -> Self { let mut flags = HashSet::with_capacity(1); flags.insert(FibFlag::Iif); Fib { result: FibResult::Oif, flags, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents which data is queried by [fib](Fib) lookup. pub enum FibResult { /// Output interface index. Oif, /// Output interface name. Oifname, /// Address type. Type, } #[derive( Debug, Clone, Copy, Eq, PartialEq, Serialize, Deserialize, EnumString, Hash, JsonSchema, )] #[serde(rename_all = "lowercase")] #[strum(serialize_all = "lowercase")] /// Represents flags for `fib` lookup. pub enum FibFlag { /// Consider the source address of a packet. Saddr, /// Consider the destination address of a packet. Daddr, /// Consider the packet mark. Mark, /// Consider the packet's input interface. Iif, /// Consider the packet's output interface. Oif, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Represents a binary operation to be used in an `Expression`. pub enum BinaryOperation<'a> { #[serde(rename = "&")] /// Binary AND (`&`) AND(Expression<'a>, Expression<'a>), #[serde(rename = "|")] /// Binary OR (`|`) OR(Vec>), #[serde(rename = "^")] /// Binary XOR (`^`) XOR(Expression<'a>, Expression<'a>), #[serde(rename = "<<")] /// Left shift (`<<`) LSHIFT(Expression<'a>, Expression<'a>), #[serde(rename = ">>")] /// Right shift (`>>`) RSHIFT(Expression<'a>, Expression<'a>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A verdict expression (used in [verdict maps](crate::stmt::VerdictMap)). /// /// There are also verdict [statements](crate::stmt::Statement), such as /// [accept](crate::stmt::Statement::Accept). pub enum Verdict<'a> { /// Terminate ruleset evaluation and accept the packet. 
/// /// The packet can still be dropped later by another hook, for instance /// accept in the forward hook still allows one to drop the packet later in /// the postrouting hook, or another forward base chain that has a higher /// priority number and is evaluated afterwards in the processing pipeline. Accept, /// Terminate ruleset evaluation and drop the packet. /// /// The drop occurs instantly, no further chains or hooks are evaluated. /// It is not possible to accept the packet in a later chain again, as those /// are not evaluated anymore for the packet. Drop, /// Continue ruleset evaluation with the next rule. /// /// This is the default behaviour in case a rule issues no verdict. Continue, /// Return from the current chain and continue evaluation at the next rule /// in the last chain. /// /// If issued in a base chain, it is equivalent to the base chain policy. Return, /// Continue evaluation at the first rule in chain. /// /// The current position in the ruleset is pushed to a call stack and /// evaluation will continue there when the new chain is entirely evaluated /// or a [return](Verdict::Return) verdict is issued. In case an absolute /// verdict is issued by a rule in the chain, ruleset evaluation terminates /// immediately and the specific action is taken. Jump(JumpTarget<'a>), /// Similar to jump, but the current position is not pushed to the call /// stack. /// /// That means that after the new chain evaluation will continue at the /// last chain instead of the one containing the goto statement. Goto(JumpTarget<'a>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "elem")] /// Explicitly set element object. /// /// Element-related commands allow one to change contents of named /// [sets](crate::schema::Set) and [maps](crate::schema::Map). pub struct Elem<'a> { /// The element value. pub val: Box>, /// Timeout value for [sets](crate::schema::Set)/[maps](crate::schema::Map). /// with flag [timeout](crate::schema::SetFlag::Timeout) pub timeout: Option, /// The time until given element expires, useful for ruleset replication only. pub expires: Option, /// Per element comment field. pub comment: Option>, /// Enable a [counter][crate::stmt::Counter] per element. /// /// Added in nftables version *0.9.5*. pub counter: Option>, } /// Default impl for [Elem]. impl Default for Elem<'_> { fn default() -> Self { Elem { val: Box::new(Expression::String("10.2.3.4".into())), timeout: None, expires: None, comment: None, counter: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "socket")] /// Construct a reference to packet’s socket. pub struct Socket<'a> { /// The socket attribute to match on. pub key: Cow<'a, SocketAttr>, } /// Default impl for [Socket] with [wildcard](SocketAttr::Wildcard) key. impl Default for Socket<'_> { fn default() -> Self { Socket { key: Cow::Borrowed(&SocketAttr::Wildcard), } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A [socket][Socket] attribute to match on. pub enum SocketAttr { /// Match on the `IP_TRANSPARENT` socket option in the found socket. Transparent, /// Match on the socket mark (`SOL_SOCKET`, `SO_MARK`). Mark, /// Indicates whether the socket is wildcard-bound (e.g. 0.0.0.0 or ::0). Wildcard, /// The cgroup version 2 for this socket (path from `/sys/fs/cgroup`). 
Cgroupv2, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "osf")] /// Perform OS fingerprinting. /// /// This expression is typically used in the [LHS](crate::stmt::Match::left) of /// a [match](crate::stmt::Match) statement. pub struct Osf<'a> { /// Name of the OS signature to match. /// /// All signatures can be found at `pf.os` file. /// Use "unknown" for OS signatures that the expression could not detect. pub key: Cow<'a, str>, /// Do TTL checks on the packet to determine the operating system. pub ttl: OsfTtl, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// TTL check mode for [osf](Osf). pub enum OsfTtl { /// Check if the IP header's TTL is less than the fingerprint one. /// /// Works for globally-routable addresses. Loose, /// Do not compare the TTL at all. Skip, } nftables-0.6.3/src/helper.rs000064400000000000000000000375051046102023000140560ustar 00000000000000use std::string::FromUtf8Error; use std::{ ffi::{OsStr, OsString}, io::{self, Write}, process::{Command, Stdio}, }; use thiserror::Error; use crate::schema::Nftables; /// Default `nft` executable. const NFT_EXECUTABLE: &str = "nft"; // search in PATH /// Use the default `nft` executable. pub const DEFAULT_NFT: Option<&str> = None; /// Do not use additional arguments to the `nft` executable. pub const DEFAULT_ARGS: &[&str] = &[]; #[cfg(all(feature = "tokio", feature = "async-process"))] compile_error!("features `tokio` and `async-process` are mutually exclusive"); /// Error during `nft` execution. #[derive(Error, Debug)] pub enum NftablesError { #[error("unable to execute {program:?}: {inner}")] NftExecution { program: OsString, inner: io::Error }, #[error("{program:?}'s output contained invalid utf8: {inner}")] NftOutputEncoding { program: OsString, inner: FromUtf8Error, }, #[error("got invalid json: {0}")] NftInvalidJson(serde_json::Error), #[error("{program:?} did not return successfully while {hint}")] NftFailed { program: OsString, hint: String, stdout: String, stderr: String, }, } /// Get the rule set that is currently active in the kernel. /// /// This is done by calling the default `nft` executable with default arguments. pub fn get_current_ruleset() -> Result, NftablesError> { get_current_ruleset_with_args(DEFAULT_NFT, DEFAULT_ARGS) } /// Get the current rule set by calling a custom `nft` with custom arguments. /// /// If `program` is [Some], then this program will be called instead of the /// default `nft` executable. /// [DEFAULT_NFT] can be passed to call the default `nft`. /// /// If `args` is not empty, then these `nft` arguments will be used instead of the /// default arguments `list` and `ruleset`. /// [DEFAULT_ARGS] can be passed to use the default arguments. /// Note that the argument `-j` is always added in front of `args`. pub fn get_current_ruleset_with_args<'a, P, A, I>( program: Option<&P>, args: I, ) -> Result, NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let output = get_current_ruleset_raw(program, args)?; serde_json::from_str(&output).map_err(NftablesError::NftInvalidJson) } /// Get the current raw rule set json by calling a custom `nft` with custom arguments. /// /// If `program` is [Some], then this program will be called instead of the /// default `nft` executable. /// [DEFAULT_NFT] can be passed to call the default `nft`. 
/// /// If `args` is not empty, then these `nft` arguments will be used instead of the /// default arguments `list` and `ruleset`. /// [DEFAULT_ARGS] can be passed to use the default arguments. /// Note that the argument `-j` is always added in front of `args`. pub fn get_current_ruleset_raw<'a, P, A, I>( program: Option<&P>, args: I, ) -> Result where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let program = program .map(AsRef::as_ref) .unwrap_or(NFT_EXECUTABLE.as_ref()); let mut nft_cmd = Command::new(program); let nft_cmd = nft_cmd.arg("-j"); let mut args = args.into_iter(); let nft_cmd = match args.next() { Some(arg) => nft_cmd.arg(arg).args(args), None => nft_cmd.args(["list", "ruleset"]), }; let process_result = nft_cmd.output(); let process_result = process_result.map_err(|e| NftablesError::NftExecution { inner: e, program: program.into(), })?; let stdout = read_output(program, process_result.stdout)?; if !process_result.status.success() { let stderr = read_output(program, process_result.stderr)?; return Err(NftablesError::NftFailed { program: program.into(), hint: "getting the current ruleset".to_string(), stdout, stderr, }); } Ok(stdout) } /// Get the rule set that is currently active in the kernel asynchronously. /// /// See the synchronous [`get_current_ruleset`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn get_current_ruleset_async() -> Result, NftablesError> { get_current_ruleset_with_args_async(DEFAULT_NFT, DEFAULT_ARGS).await } /// Get the current rule set asynchronously by calling a custom `nft` with custom arguments. /// /// See the synchronous [`get_current_ruleset_with_args`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn get_current_ruleset_with_args_async<'a, P, A, I>( program: Option<&P>, args: I, ) -> Result, NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let output = get_current_ruleset_raw_async(program, args).await?; serde_json::from_str(&output).map_err(NftablesError::NftInvalidJson) } /// Get the current raw rule set json asynchronously by calling a custom `nft` with custom arguments. /// /// See the synchronous [`get_current_ruleset_raw`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn get_current_ruleset_raw_async<'a, P, A, I>( program: Option<&P>, args: I, ) -> Result where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { #[cfg(feature = "async-process")] use async_process::Command; #[cfg(feature = "tokio")] use tokio::process::Command; let program = program .map(AsRef::as_ref) .unwrap_or(NFT_EXECUTABLE.as_ref()); let mut nft_cmd = Command::new(program); let nft_cmd = nft_cmd.arg("-j"); let mut args = args.into_iter(); let nft_cmd = match args.next() { Some(arg) => nft_cmd.arg(arg).args(args), None => nft_cmd.args(["list", "ruleset"]), }; let process_result = nft_cmd.output().await; let process_result = process_result.map_err(|e| NftablesError::NftExecution { inner: e, program: program.into(), })?; let stdout = read_output(program, process_result.stdout)?; if !process_result.status.success() { let stderr = read_output(program, process_result.stderr)?; return Err(NftablesError::NftFailed { program: program.into(), hint: "getting the current ruleset".to_string(), stdout, stderr, }); } Ok(stdout) } /// Apply the given rule set to the kernel. /// /// This is done by calling the default `nft` executable with default arguments. 
pub fn apply_ruleset(nftables: &Nftables) -> Result<(), NftablesError> { apply_ruleset_with_args(nftables, DEFAULT_NFT, DEFAULT_ARGS) } /// Apply the given rule set by calling a custom `nft` with custom arguments. /// /// If `program` is [Some], then this program will be called instead of the /// default `nft` executable. /// [DEFAULT_NFT] can be passed to call the default `nft`. /// /// If `args` is not empty, then these `nft` arguments will be added in front of the /// other arguments `-j` and `-f -` that are always required internally. pub fn apply_ruleset_with_args<'a, P, A, I>( nftables: &Nftables, program: Option<&P>, args: I, ) -> Result<(), NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let nftables = serde_json::to_string(nftables).expect("failed to serialize Nftables struct"); apply_ruleset_raw(&nftables, program, args)?; Ok(()) } /// Apply the given rule set to the kernel, and returns the processed rule set with /// extra information. /// /// This is done by using `nft --echo`. One can get rule handles from the returned /// objects for future modifications, positional inserts, as well as removal. pub fn apply_and_return_ruleset(nftables: &Nftables) -> Result, NftablesError> { apply_and_return_ruleset_with_args(nftables, DEFAULT_NFT, DEFAULT_ARGS) } /// Apply the given rule set by calling a custom `nft` with custom arguments, and /// returns the processed rule set with extra information. /// /// This is done by using `nft --echo`. One can get rule handles from the returned /// objects for future modifications, positional inserts, as well as removal. pub fn apply_and_return_ruleset_with_args<'a, P, A, I>( nftables: &Nftables, program: Option<&P>, args: I, ) -> Result, NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let nftables = serde_json::to_string(nftables).expect("failed to serialize Nftables struct"); let args = args .into_iter() .map(AsRef::as_ref) .chain(Some("--echo".as_ref())); let output = apply_ruleset_raw(&nftables, program, args)?; serde_json::from_str(&output).map_err(NftablesError::NftInvalidJson) } /// Apply the given raw rule set json by calling a custom `nft` with custom arguments. /// /// If `program` is [Some], then this program will be called instead of the /// default `nft` executable. /// [DEFAULT_NFT] can be passed to call the default `nft`. /// /// If `args` is not empty, then these `nft` arguments will be added in front of the /// other arguments `-j` and `-f -` that are always required internally. /// /// The command's stdout is returned as a [`String`]. 
pub fn apply_ruleset_raw<'a, P, A, I>( payload: &str, program: Option<&P>, args: I, ) -> Result where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let program = program .map(AsRef::as_ref) .unwrap_or(NFT_EXECUTABLE.as_ref()); let mut nft_cmd = Command::new(program); let default_args = ["-j", "-f", "-"]; let process = nft_cmd .args(args) .args(default_args) .stdin(Stdio::piped()) .stdout(Stdio::piped()) .spawn(); let mut process = process.map_err(|e| NftablesError::NftExecution { program: program.into(), inner: e, })?; let mut stdin = process.stdin.take().unwrap(); stdin .write_all(payload.as_bytes()) .map_err(|e| NftablesError::NftExecution { program: program.into(), inner: e, })?; drop(stdin); let result = process.wait_with_output(); match result { Ok(output) if output.status.success() => read_output(program, output.stdout), Ok(process_result) => { let stdout = read_output(program, process_result.stdout)?; let stderr = read_output(program, process_result.stderr)?; Err(NftablesError::NftFailed { program: program.into(), hint: "applying ruleset".to_string(), stdout, stderr, }) } Err(e) => Err(NftablesError::NftExecution { program: program.into(), inner: e, }), } } /// Apply the given rule set to the kernel asynchronously. /// /// See the synchronous [`apply_ruleset`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn apply_ruleset_async(nftables: &Nftables<'_>) -> Result<(), NftablesError> { apply_ruleset_with_args_async(nftables, DEFAULT_NFT, DEFAULT_ARGS).await } /// Apply the given rule set asynchronously by calling a custom `nft` with custom arguments. /// /// See the synchronous [`apply_ruleset_with_args`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn apply_ruleset_with_args_async<'a, P, A, I>( nftables: &Nftables<'_>, program: Option<&P>, args: I, ) -> Result<(), NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let nftables = serde_json::to_string(nftables).expect("failed to serialize Nftables struct"); apply_ruleset_raw_async(&nftables, program, args).await?; Ok(()) } /// Apply the given rule set to the kernel asynchronously, and returns the processed /// rule set with extra information. /// /// See the synchronous [`apply_and_return_ruleset`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn apply_and_return_ruleset_async( nftables: &Nftables<'_>, ) -> Result, NftablesError> { apply_and_return_ruleset_with_args_async(nftables, DEFAULT_NFT, DEFAULT_ARGS).await } /// Apply the given rule set asynchronously by calling a custom `nft` with custom /// arguments, and returns the processed rule set with extra information. /// /// See the synchronous [`apply_and_return_ruleset_with_args`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn apply_and_return_ruleset_with_args_async<'a, P, A, I>( nftables: &Nftables<'_>, program: Option<&P>, args: I, ) -> Result, NftablesError> where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { let nftables = serde_json::to_string(nftables).expect("failed to serialize Nftables struct"); let args = args .into_iter() .map(AsRef::as_ref) .chain(Some("--echo".as_ref())); let output = apply_ruleset_raw_async(&nftables, program, args).await?; serde_json::from_str(&output).map_err(NftablesError::NftInvalidJson) } /// Apply the given raw rule set json asynchronously by calling a custom `nft` with custom arguments. 
/// /// See the synchronous [`apply_ruleset_raw`] for more information. #[cfg(any(feature = "tokio", feature = "async-process"))] pub async fn apply_ruleset_raw_async<'a, P, A, I>( payload: &str, program: Option<&P>, args: I, ) -> Result where P: AsRef + ?Sized, A: AsRef + ?Sized + 'a, I: IntoIterator + 'a, { #[cfg(feature = "async-process")] use async_process::Command; #[cfg(feature = "async-process")] use futures_lite::io::AsyncWriteExt; #[cfg(feature = "tokio")] use tokio::io::AsyncWriteExt; #[cfg(feature = "tokio")] use tokio::process::Command; let program = program .map(AsRef::as_ref) .unwrap_or(NFT_EXECUTABLE.as_ref()); let mut nft_cmd = Command::new(program); let default_args = ["-j", "-f", "-"]; let process = nft_cmd .args(args) .args(default_args) .stdin(Stdio::piped()) .stdout(Stdio::piped()) .spawn(); let mut process = process.map_err(|e| NftablesError::NftExecution { program: program.into(), inner: e, })?; let mut stdin = process.stdin.take().unwrap(); stdin .write_all(payload.as_bytes()) .await .map_err(|e| NftablesError::NftExecution { program: program.into(), inner: e, })?; drop(stdin); #[cfg(feature = "tokio")] let result = process.wait_with_output().await; #[cfg(feature = "async-process")] let result = process.output().await; match result { Ok(output) if output.status.success() => read_output(program, output.stdout), Ok(process_result) => { let stdout = read_output(program, process_result.stdout)?; let stderr = read_output(program, process_result.stderr)?; Err(NftablesError::NftFailed { program: program.into(), hint: "applying ruleset".to_string(), stdout, stderr, }) } Err(e) => Err(NftablesError::NftExecution { program: program.into(), inner: e, }), } } fn read_output(program: impl Into, bytes: Vec) -> Result { String::from_utf8(bytes).map_err(|e| NftablesError::NftOutputEncoding { inner: e, program: program.into(), }) } nftables-0.6.3/src/lib.rs000064400000000000000000000033111046102023000133310ustar 00000000000000//! nftables-rs is a Rust library designed to provide a safe and easy-to-use abstraction over the nftables JSON API, known as libnftables-json. //! //! This library is engineered for developers who need to interact with nftables, //! the Linux kernel's next-generation firewalling tool, directly from Rust applications. //! //! By abstracting the underlying JSON API, nftables-rs facilitates the creation, manipulation, //! and application of firewall rulesets without requiring deep knowledge of nftables' internal workings. // TODO: add example usage to library doc /// Contains Batch object to be used to prepare Nftables payloads. pub mod batch; /// Contains [expressions](crate::expr::Expression). /// Expressions are the building blocks of (most) statements. /// /// See . pub mod expr; /// Contains the global structure of an Nftables document. /// /// See . pub mod schema; /// Contains Statements. /// Statements are the building blocks for rules. /// /// See . pub mod stmt; /// Contains common type definitions referred to in the schema. pub mod types; /// Contains methods to communicate with nftables JSON API. pub mod helper; /// Contains node visitors for serde. pub mod visitor; /// Contains handling and parsing of command line arguments. pub mod cli; // Default values for Default implementations. 
const DEFAULT_FAMILY: types::NfFamily = types::NfFamily::INet; const DEFAULT_TABLE: &str = "filter"; const DEFAULT_CHAIN: &str = "forward"; nftables-0.6.3/src/main.rs000064400000000000000000000001521046102023000135070ustar 00000000000000use nftables::cli; fn main() { let args = cli::collect_command_args(); cli::handle_args(args); } nftables-0.6.3/src/schema.rs000064400000000000000000001032671046102023000140360ustar 00000000000000use schemars::JsonSchema; use std::{borrow::Cow, collections::HashSet}; use crate::visitor::deserialize_optional_flags; use crate::{ expr::Expression, stmt::Statement, types::*, visitor::single_string_to_option_vec, DEFAULT_CHAIN, DEFAULT_FAMILY, DEFAULT_TABLE, }; use serde::{Deserialize, Serialize}; use strum_macros::EnumString; #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// In general, any JSON input or output is enclosed in an object with a single property named **nftables**. /// /// See [libnftables-json global structure](Global Structure). /// /// (Global Structure): pub struct Nftables<'a> { /// An array containing [commands](NfCmd) (for input) or [ruleset elements](NfListObject) (for output). #[serde(rename = "nftables")] pub objects: Cow<'a, [NfObject<'a>]>, } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// A [ruleset element](NfListObject) or [command](NfCmd) in an [nftables document](Nftables). pub enum NfObject<'a> { /// A command. CmdObject(NfCmd<'a>), /// A ruleset element. ListObject(NfListObject<'a>), } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A ruleset element in an [nftables document](Nftables). pub enum NfListObject<'a> { /// A table element. Table(Table<'a>), /// A chain element. Chain(Chain<'a>), /// A rule element. Rule(Rule<'a>), /// A set element. Set(Box>), /// A map element. Map(Box>), /// An element manipulation. Element(Element<'a>), /// A flow table. FlowTable(FlowTable<'a>), /// A counter. Counter(Counter<'a>), /// A quota. Quota(Quota<'a>), #[serde(rename = "ct helper")] /// A conntrack helper (ct helper). CTHelper(CTHelper<'a>), /// A limit. Limit(Limit<'a>), #[serde(rename = "metainfo")] /// The metainfo object. MetainfoObject(MetainfoObject<'a>), /// A conntrack timeout (ct timeout). CTTimeout(CTTimeout<'a>), #[serde(rename = "ct expectation")] /// A conntrack expectation (ct expectation). CTExpectation(CTExpectation<'a>), /// A synproxy object. SynProxy(SynProxy<'a>), } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A command is an object with a single property whose name identifies the command. /// /// Its value is a ruleset element - basically identical to output elements, /// apart from certain properties which may be interpreted differently or are /// required when output generally omits them. pub enum NfCmd<'a> { /// Add a new ruleset element to the kernel. Add(NfListObject<'a>), /// Replace a rule. /// /// In [RULE](Rule), the **handle** property is mandatory and identifies /// the rule to be replaced. Replace(Rule<'a>), /// Identical to [add command](NfCmd::Add), but returns an error if the object already exists. Create(NfListObject<'a>), // TODO: ADD_OBJECT is subset of NfListObject /// Insert an object. /// /// This command is identical to [add](NfCmd::Add) for rules, but instead of /// appending the rule to the chain by default, it inserts at first position. 
/// If a handle or index property is given, the rule is inserted before the /// rule identified by those properties. Insert(NfListObject<'a>), /// Delete an object from the ruleset. /// /// Only the minimal number of properties required to uniquely identify an /// object is generally needed in the enclosed object. /// For most ruleset elements, this is **family** and **table** plus either /// **handle** or **name** (except rules since they don’t have a name). Delete(NfListObject<'a>), // TODO: ADD_OBJECT is subset of NfListObject /// List ruleset elements. /// /// The plural forms are used to list all objects of that kind, /// optionally filtered by family and for some, also table. List(NfListObject<'a>), /// Reset state in suitable objects, i.e. zero their internal counter. Reset(ResetObject<'a>), /// Empty contents in given object, e.g. remove all chains from given table /// or remove all elements from given set. Flush(FlushObject<'a>), /// Rename a [chain](Chain). /// /// The new name is expected in a dedicated property named **newname**. Rename(Chain<'a>), } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Reset state in suitable objects, i.e. zero their internal counter. pub enum ResetObject<'a> { /// A counter to reset. Counter(Counter<'a>), /// A list of counters to reset. Counters(Cow<'a, [Counter<'a>]>), /// A quota to reset. Quota(Quota<'a>), /// A list of quotas to reset. Quotas(Cow<'a, [Quota<'a>]>), } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Empty contents in given object, e.g. remove all chains from given table or remove all elements from given set. pub enum FlushObject<'a> { /// A table to flush (i.e., remove all chains from table). Table(Table<'a>), /// A chain to flush (i.e., remove all rules from chain). Chain(Chain<'a>), /// A set to flush (i.e., remove all elements from set). Set(Box>), /// A map to flush (i.e., remove all elements from map). Map(Box>), /// A meter to flush. Meter(Meter<'a>), /// Flush the live ruleset (i.e., remove all elements from live ruleset). Ruleset(Option), } // Ruleset Elements #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object describes a table. pub struct Table<'a> { /// The table’s [family](NfFamily), e.g. "ip" or "ip6". pub family: NfFamily, /// The table’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The table’s handle. /// /// In input, it is used only in [delete command](NfCmd::Delete) as /// alternative to **name**. pub handle: Option, } /// Default table. impl Default for Table<'_> { fn default() -> Self { Table { family: DEFAULT_FAMILY, name: DEFAULT_TABLE.into(), handle: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object describes a chain. pub struct Chain<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The chain’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// New name of the chain when supplied to the [rename command](NfCmd::Rename). pub newname: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The chain’s handle. /// In input, it is used only in [delete command](NfCmd::Delete) as alternative to **name**. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none", rename = "type")] /// The chain’s type. /// Required for [base chains](Base chains). 
/// /// (Base chains): pub _type: Option, // type #[serde(skip_serializing_if = "Option::is_none")] /// The chain’s hook. /// Required for [base chains](Base chains). /// /// (Base chains): pub hook: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The chain’s priority. /// Required for [base chains](Base chains). /// /// (Base chains): pub prio: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The chain’s bound interface (if in the netdev family). /// Required for [base chains](Base chains). /// /// (Base chains): pub dev: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The chain’s [policy](NfChainPolicy). /// Required for [base chains](Base chains). /// /// (Base chains): pub policy: Option, } /// Default Chain. impl Default for Chain<'_> { fn default() -> Self { Chain { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: DEFAULT_CHAIN.into(), newname: None, handle: None, _type: None, hook: None, prio: None, dev: None, policy: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object describes a rule. /// /// Basic building blocks of rules are statements. /// Each rule consists of at least one. pub struct Rule<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The chain’s name. pub chain: Cow<'a, str>, /// An array of statements this rule consists of. /// /// In input, it is used in [add](NfCmd::Add)/[insert](NfCmd::Insert)/[replace](NfCmd::Replace) commands only. pub expr: Cow<'a, [Statement<'a>]>, #[serde(skip_serializing_if = "Option::is_none")] /// The rule’s handle. /// /// In [delete](NfCmd::Delete)/[replace](NfCmd::Replace) commands, it serves as an identifier of the rule to delete/replace. /// In [add](NfCmd::Add)/[insert](NfCmd::Insert) commands, it serves as an identifier of an existing rule to append/prepend the rule to. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The rule’s position for [add](NfCmd::Add)/[insert](NfCmd::Insert) commands. /// /// It is used as an alternative to **handle** then. pub index: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Optional rule comment. pub comment: Option>, } /// Default rule with no expressions. impl Default for Rule<'_> { fn default() -> Self { Rule { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), chain: DEFAULT_CHAIN.into(), expr: [][..].into(), handle: None, index: None, comment: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Named set that holds expression elements. pub struct Set<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The set’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The set’s handle. For input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(rename = "type")] /// The set’s datatype. /// /// The set type might be a string, such as `"ipv4_addr"` or an array consisting of strings (for concatenated types). pub set_type: SetTypeValue<'a>, #[serde(skip_serializing_if = "Option::is_none")] /// The set’s policy. pub policy: Option, #[serde( skip_serializing_if = "Option::is_none", deserialize_with = "deserialize_optional_flags", default )] /// The set’s flags. pub flags: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Initial set element(s). /// /// A single set element might be given as string, integer or boolean value for simple cases. 
If additional properties are required, a formal elem object may be used. /// Multiple elements may be given in an array. pub elem: Option]>>, #[serde(skip_serializing_if = "Option::is_none")] /// Element timeout in seconds. pub timeout: Option, #[serde(rename = "gc-interval", skip_serializing_if = "Option::is_none")] /// Garbage collector interval in seconds. pub gc_interval: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Maximum number of elements supported. pub size: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Optional set comment. /// /// Set comment attribute requires at least nftables 0.9.7 and kernel 5.10 pub comment: Option>, } /// Default set `"myset"` with type `ipv4_addr`. impl Default for Set<'_> { fn default() -> Self { Set { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "myset".into(), handle: None, set_type: SetTypeValue::Single(SetType::Ipv4Addr), policy: None, flags: None, elem: None, timeout: None, gc_interval: None, size: None, comment: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Named map that holds expression elements. /// Maps are a special form of sets in that they translate a unique key to a value. pub struct Map<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The map’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The map’s handle. For input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(rename = "type")] /// The map set’s datatype. /// /// The set type might be a string, such as `"ipv4_addr"`` or an array /// consisting of strings (for concatenated types). pub set_type: SetTypeValue<'a>, /// Type of values this set maps to (i.e. this set is a map). pub map: SetTypeValue<'a>, #[serde(skip_serializing_if = "Option::is_none")] /// The map’s policy. pub policy: Option, #[serde( skip_serializing_if = "Option::is_none", deserialize_with = "deserialize_optional_flags", default )] /// The map’s flags. pub flags: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Initial map set element(s). /// /// A single set element might be given as string, integer or boolean value for simple cases. If additional properties are required, a formal elem object may be used. /// Multiple elements may be given in an array. pub elem: Option]>>, #[serde(skip_serializing_if = "Option::is_none")] /// Element timeout in seconds. pub timeout: Option, #[serde(rename = "gc-interval", skip_serializing_if = "Option::is_none")] /// Garbage collector interval in seconds. pub gc_interval: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Maximum number of elements supported. pub size: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Optional map comment. /// /// The map/set comment attribute requires at least nftables 0.9.7 and kernel 5.10 pub comment: Option>, } /// Default map "mymap" that maps ipv4addrs. impl Default for Map<'_> { fn default() -> Self { Map { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "mymap".into(), handle: None, set_type: SetTypeValue::Single(SetType::Ipv4Addr), map: SetTypeValue::Single(SetType::Ipv4Addr), policy: None, flags: None, elem: None, timeout: None, gc_interval: None, size: None, comment: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Wrapper for single or concatenated set types. 
/// The set type might be a string, such as `"ipv4_addr"` or an array consisting of strings (for concatenated types). pub enum SetTypeValue<'a> { /// Single set type. Single(SetType), /// Concatenated set types. Concatenated(Cow<'a, [SetType]>), } #[derive( Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, EnumString, JsonSchema, )] #[serde(rename_all = "lowercase")] /// Describes a set’s datatype. pub enum SetType { #[serde(rename = "ipv4_addr")] #[strum(serialize = "ipv4_addr")] /// IPv4 address. Ipv4Addr, #[serde(rename = "ipv6_addr")] #[strum(serialize = "ipv6_addr")] /// IPv6 address. Ipv6Addr, #[serde(rename = "ether_addr")] #[strum(serialize = "ether_addr")] /// Ethernet address. EtherAddr, #[serde(rename = "inet_proto")] #[strum(serialize = "inet_proto")] /// Internet protocol type. InetProto, #[serde(rename = "inet_service")] #[strum(serialize = "inet_service")] /// Internet service. InetService, #[serde(rename = "mark")] #[strum(serialize = "mark")] /// Mark type. Mark, #[serde(rename = "ifname")] #[strum(serialize = "ifname")] /// Network interface name (eth0, eth1..). Ifname, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Describes a set’s policy. pub enum SetPolicy { /// Performance policy (default). Performance, /// Memory policy. Memory, } #[derive( Clone, Copy, Debug, Eq, PartialEq, Serialize, Deserialize, EnumString, Hash, JsonSchema, )] #[serde(rename_all = "lowercase")] #[strum(serialize_all = "lowercase")] /// Describes a [set](Set)’s flags. pub enum SetFlag { /// Set content may not change while bound. Constant, /// Set contains intervals. Interval, /// Elements can be added with a timeout. Timeout, // TODO: undocumented upstream /// *Undocumented flag.* Dynamic, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Describes an operator on set. pub enum SetOp { /// Operator for adding elements. Add, /// Operator for updating elements. Update, } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Manipulate element(s) in a named set. pub struct Element<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The set’s name. pub name: Cow<'a, str>, /// A single set element might be given as string, integer or boolean value for simple cases. /// If additional properties are required, a formal `elem` object may be used. /// Multiple elements may be given in an array. pub elem: Cow<'a, [Expression<'a>]>, } /// Default manipulation element for [set](Set) "myset". impl Default for Element<'_> { fn default() -> Self { Element { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "myset".into(), elem: [][..].into(), } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// [Flowtables] allow you to accelerate packet forwarding in software (and in hardware if your NIC supports it) /// by using a conntrack-based network stack bypass. /// /// [Flowtables]: https://wiki.nftables.org/wiki-nftables/index.php/Flowtables pub struct FlowTable<'a> { /// The [table](Table)’s family. pub family: NfFamily, /// The [table](Table)’s name. pub table: Cow<'a, str>, /// The flow table’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The flow table’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, /// The flow table’s [hook](NfHook). 
pub hook: Option, /// The flow table's *priority* can be a signed integer or *filter* which stands for 0. /// Addition and subtraction can be used to set relative priority, e.g., filter + 5 is equal to 5. pub prio: Option, #[serde( default, skip_serializing_if = "Option::is_none", deserialize_with = "single_string_to_option_vec" )] /// The *devices* are specified as iifname(s) of the input interface(s) of the traffic that should be offloaded. /// /// Devices are required for both traffic directions. /// Cow slice of device names, e.g. `vec!["wg0".into(), "wg1".into()].into()`. pub dev: Option]>>, } /// Default [flowtable](FlowTable) named "myflowtable". impl Default for FlowTable<'_> { fn default() -> Self { FlowTable { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "myflowtable".into(), handle: None, hook: None, prio: None, dev: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a named [counter]. /// /// A counter counts both the total number of packets and the total bytes it has seen since it was last reset. /// With nftables you need to explicitly specify a counter for each rule you want to count. /// /// [counter]: https://wiki.nftables.org/wiki-nftables/index.php/Counters pub struct Counter<'a> { /// The [table](Table)’s family. pub family: NfFamily, /// The [table](Table)’s name. pub table: Cow<'a, str>, /// The counter’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The counter’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Packet counter value. pub packets: Option, /// Byte counter value. pub bytes: Option, } /// Default [counter](Counter) named "mycounter". impl Default for Counter<'_> { fn default() -> Self { Counter { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "mycounter".into(), handle: None, packets: None, bytes: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a named [quota](Quota). /// /// A quota: /// * defines a threshold number of bytes; /// * sets an initial byte count (defaults to 0 bytes if not specified); /// * counts the total number of bytes, starting from the initial count; and /// * matches either: /// * only until the byte count exceeds the threshold, or /// * only after the byte count is over the threshold. /// /// (Quota): pub struct Quota<'a> { /// The [table](Table)’s family. pub family: NfFamily, /// The [table](Table)’s name. pub table: Cow<'a, str>, /// The quota’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The quota’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Quota threshold. pub bytes: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Quota used so far. pub used: Option, #[serde(skip_serializing_if = "Option::is_none")] /// If `true`, match if the quota has been exceeded (i.e., "invert" the quota). pub inv: Option, } /// Default [quota](Quota) named "myquota". 
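///
/// A minimal construction sketch (editorial example; the byte threshold is an
/// arbitrary illustrative value) building a named quota on top of the default:
///
/// ```ignore
/// use nftables::schema::Quota;
///
/// let quota = Quota {
///     name: "download_quota".into(),
///     bytes: Some(1_000_000),
///     ..Quota::default()
/// };
/// ```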
impl Default for Quota<'_> { fn default() -> Self { Quota { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "myquota".into(), handle: None, bytes: None, used: None, inv: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "ct helper")] /// Enable the specified [conntrack helper][Conntrack helpers] for this packet. /// /// [Conntrack helpers]: pub struct CTHelper<'a> { /// The [table](Table)’s family. pub family: NfFamily, /// The [table](Table)’s name. pub table: Cow<'a, str>, /// The ct helper’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct helper’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(rename = "type")] /// The ct helper type name, e.g. "ftp" or "tftp". pub _type: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct helper’s layer 4 protocol. pub protocol: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct helper’s layer 3 protocol, e.g. "ip" or "ip6". pub l3proto: Option>, } /// Default ftp [ct helper](CTHelper) named "mycthelper". impl Default for CTHelper<'_> { fn default() -> Self { CTHelper { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "mycthelper".into(), handle: None, _type: "ftp".into(), protocol: None, l3proto: None, } } } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a named [limit](Limit). /// /// A limit uses a [token bucket](Token bucket) filter to match packets: /// * only until its rate is exceeded; or /// * only after its rate is exceeded, if defined as an over limit. /// /// (Limit): /// (Token bucket): pub struct Limit<'a> { /// The [table](Table)’s family. pub family: NfFamily, /// The [table](Table)’s name. pub table: Cow<'a, str>, /// The limit’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The limit’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The limit’s rate value. pub rate: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Time unit to apply the limit to, e.g. "week", "day", "hour", etc. /// /// If omitted, defaults to "second". pub per: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The limit’s burst value. If omitted, defaults to 0. pub burst: Option, #[serde(skip_serializing_if = "Option::is_none")] /// [Unit](LimitUnit) of rate and burst values. If omitted, defaults to "packets". pub unit: Option, /// If `true`, match if limit was exceeded. If omitted, defaults to `false`. pub inv: Option, } /// Default [limit](Limit) named "mylimit". impl Default for Limit<'_> { fn default() -> Self { Limit { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "mylimit".into(), handle: None, rate: None, per: None, burst: None, unit: None, inv: None, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A unit used in [limits](Limit). pub enum LimitUnit { /// Limit by number of packets. Packets, /// Limit by number of bytes. Bytes, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] pub struct Meter<'a> { pub name: Cow<'a, str>, pub key: Expression<'a>, pub stmt: Box>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Represents the live ruleset (to be [flushed](NfCmd::Flush)). 
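///
/// Flushing the live ruleset goes through a [flush command](NfCmd::Flush); a minimal
/// sketch (editorial example, mirroring the helper tests shipped with this crate):
///
/// ```ignore
/// use nftables::batch::Batch;
/// use nftables::schema::{FlushObject, NfCmd};
///
/// let mut batch = Batch::new();
/// batch.add_cmd(NfCmd::Flush(FlushObject::Ruleset(None)));
/// let ruleset = batch.to_nftables();
/// ```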
pub struct Ruleset {} /// Default ruleset. impl Default for Ruleset { fn default() -> Self { Ruleset {} } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Library information in output. /// /// In output, the first object in an nftables array is a special one containing library information. pub struct MetainfoObject<'a> { #[serde(skip_serializing_if = "Option::is_none")] /// The value of version property is equal to the package version as printed by `nft -v`. pub version: Option>, /// The value of release_name property is equal to the release name as printed by `nft -v`. #[serde(skip_serializing_if = "Option::is_none")] pub release_name: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The JSON Schema version. /// /// If supplied in (libnftables) library input, the parser will verify the /// `json_schema_version` value to not exceed the internally hardcoded one /// (to make sure the given schema is fully understood). /// In future, a lower number than the internal one may activate /// compatibility mode to parse outdated and incompatible JSON input. pub json_schema_version: Option, } /// Default (empty) [metainfo object](MetainfoObject). impl Default for MetainfoObject<'_> { fn default() -> Self { MetainfoObject { version: None, release_name: None, json_schema_version: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a named [conntrack timeout][Ct timeout] policy. /// /// You can use a ct timeout object to specify a connection tracking timeout policy for a particular flow. /// /// [Ct timeout]: pub struct CTTimeout<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The ct timeout object’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct timeout object’s handle. In input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The ct timeout object’s [layer 4 protocol](CTHProto). pub protocol: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The connection state name, e.g. "established", "syn_sent", "close" or "close_wait", for which the timeout value has to be updated. pub state: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The updated timeout value for the specified connection state. pub value: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The ct timeout object’s layer 3 protocol, e.g. "ip" or "ip6". pub l3proto: Option>, } /// Default [ct timeout](CTTimeout) named "mycttimeout" impl Default for CTTimeout<'_> { fn default() -> Self { CTTimeout { family: DEFAULT_FAMILY, table: DEFAULT_TABLE.into(), name: "mycttimeout".into(), handle: None, protocol: None, state: None, value: None, l3proto: None, } } } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a named [conntrack expectation][Ct expectation]. /// /// [Ct expectation]: pub struct CTExpectation<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The ct expectation object’s name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct expectation object’s handle. In input, it is used by delete command only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The ct expectation object’s layer 3 protocol, e.g. "ip" or "ip6". 
pub l3proto: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// The ct expectation object’s layer 4 protocol. pub protocol: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The destination port of the expected connection. pub dport: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The time in millisecond that this expectation will live. pub timeout: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The maximum count of expectations to be living in the same time. pub size: Option, } /// [SynProxy] intercepts new TCP connections and handles the initial 3-way handshake using /// syncookies instead of conntrack to establish the connection. /// /// Named SynProxy requires **nftables 0.9.3 or newer**. /// /// [SynProxy]: https://wiki.nftables.org/wiki-nftables/index.php/Synproxy #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] pub struct SynProxy<'a> { /// The table’s family. pub family: NfFamily, /// The table’s name. pub table: Cow<'a, str>, /// The synproxy's name. pub name: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// The synproxy's handle. For input, it is used by the [delete command](NfCmd::Delete) only. pub handle: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The maximum segment size (must match your backend server). pub mss: Option, #[serde(skip_serializing_if = "Option::is_none")] /// The window scale (must match your backend server). pub wscale: Option, #[serde( skip_serializing_if = "Option::is_none", deserialize_with = "deserialize_optional_flags", default )] /// The synproxy's [flags](crate::types::SynProxyFlag). pub flags: Option>, } nftables-0.6.3/src/stmt.rs000064400000000000000000000410641046102023000135610ustar 00000000000000use std::collections::HashSet; use schemars::JsonSchema; use serde::{Deserialize, Serialize}; use strum_macros::EnumString; use crate::types::{RejectCode, SynProxyFlag}; use crate::visitor::deserialize_optional_flags; use crate::expr::Expression; use std::borrow::Cow; #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] #[non_exhaustive] /// Statements are the building blocks for rules. Each rule consists of at least one. /// /// See . pub enum Statement<'a> { /// `accept` verdict. Accept(Option), /// `drop` verdict. Drop(Option), /// `continue` verdict. Continue(Option), /// `return` verdict. Return(Option), /// `jump` verdict. Expects a target chain name. Jump(JumpTarget<'a>), /// `goto` verdict. Expects a target chain name. Goto(JumpTarget<'a>), Match(Match<'a>), /// anonymous or named counter. Counter(Counter<'a>), Mangle(Mangle<'a>), /// anonymous or named quota. Quota(QuotaOrQuotaRef<'a>), // TODO: last Limit(Limit<'a>), /// The Flow statement offloads matching network traffic to flowtables, /// enabling faster forwarding by bypassing standard processing. Flow(Flow<'a>), FWD(Option>), /// Disable connection tracking for the packet. Notrack, Dup(Dup<'a>), SNAT(Option>), DNAT(Option>), Masquerade(Option>), // masquerade is subset of NAT options Redirect(Option>), // redirect is subset of NAT options Reject(Option), Set(Set<'a>), // TODO: map Log(Option>), #[serde(rename = "ct helper")] /// Enable the specified conntrack helper for this packet. CTHelper(Cow<'a, str>), // CT helper reference. Meter(Meter<'a>), Queue(Queue<'a>), #[serde(rename = "vmap")] // TODO: vmap is expr, not stmt! 
VerdictMap(VerdictMap<'a>), #[serde(rename = "ct count")] CTCount(CTCount<'a>), #[serde(rename = "ct timeout")] /// Assign connection tracking timeout policy. CTTimeout(Expression<'a>), // CT timeout reference. #[serde(rename = "ct expectation")] /// Assign connection tracking expectation. CTExpectation(Expression<'a>), // CT expectation reference. /// This represents an xt statement from xtables compat interface. /// Sadly, at this point, it is not possible to provide any further information about its content. XT(Option), /// A netfilter synproxy intercepts new TCP connections and handles the initial 3-way handshake using syncookies instead of conntrack to establish the connection. SynProxy(SynProxy), /// Redirects the packet to a local socket without changing the packet header in any way. TProxy(TProxy<'a>), // TODO: reset // TODO: secmark } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// `accept` verdict. pub struct Accept {} #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// `drop` verdict. pub struct Drop {} #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// `continue` verdict. pub struct Continue {} #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// `return` verdict. pub struct Return {} #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] pub struct JumpTarget<'a> { pub target: Cow<'a, str>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This matches the expression on left hand side (typically a packet header or packet meta info) with the expression on right hand side (typically a constant value). /// /// If the statement evaluates to true, the next statement in this rule is considered. /// If not, processing continues with the next rule in the same chain. pub struct Match<'a> { /// Left hand side of this match. pub left: Expression<'a>, /// Right hand side of this match. pub right: Expression<'a>, /// Operator indicating the type of comparison. pub op: Operator, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Anonymous or named Counter. pub enum Counter<'a> { /// A counter referenced by name. Named(Cow<'a, str>), /// An anonymous counter. Anonymous(Option), } #[derive(Debug, Default, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This object represents a byte/packet counter. /// In input, no properties are required. /// If given, they act as initial values for the counter. pub struct AnonymousCounter { /// Packets counted. #[serde(serialize_with = "crate::visitor::serialize_none_to_zero")] pub packets: Option, /// Bytes counted. #[serde(serialize_with = "crate::visitor::serialize_none_to_zero")] pub bytes: Option, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// This changes the packet data or meta info. pub struct Mangle<'a> { /// The packet data to be changed, given as an `exthdr`, `payload`, `meta`, `ct` or `ct helper` expression. pub key: Expression<'a>, /// Value to change data to. pub value: Expression<'a>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(untagged)] /// Represents an anonymous or named quota object. pub enum QuotaOrQuotaRef<'a> { /// Anonymous quota object. Quota(Quota<'a>), /// Reference to a named quota object. 
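///
/// For instance (editorial sketch), a rule statement can point at an existing named
/// quota object instead of embedding an anonymous one:
///
/// ```ignore
/// use nftables::stmt::{QuotaOrQuotaRef, Statement};
///
/// let stmt = Statement::Quota(QuotaOrQuotaRef::QuotaRef("myquota".into()));
/// ```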
QuotaRef(Cow<'a, str>), } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Creates an anonymous quota which lives in the rule it appears in. pub struct Quota<'a> { /// Quota value. pub val: u32, /// Unit of `val`, e.g. `"kbytes"` or `"mbytes"`. If omitted, defaults to `"bytes"`. pub val_unit: Cow<'a, str>, #[serde(skip_serializing_if = "Option::is_none")] /// Quota used so far. Optional on input. If given, serves as initial value. pub used: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Unit of `used`. Defaults to `"bytes"`. pub used_unit: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// If `true`, will match if quota was exceeded. Defaults to `false`. pub inv: Option, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Creates an anonymous limit which lives in the rule it appears in. pub struct Limit<'a> { /// Rate value to limit to. pub rate: u32, #[serde(skip_serializing_if = "Option::is_none")] /// Unit of `rate`, e.g. `"packets"` or `"mbytes"`. If omitted, defaults to `"packets"`. pub rate_unit: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Denominator of rate, e.g. "week" or "minutes". pub per: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Burst value. Defaults to `0`. pub burst: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Unit of `burst`, ignored if `rate_unit` is `"packets"`. Defaults to `"bytes"`. pub burst_unit: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// If `true`, will match if the limit was exceeded. Defaults to `false`. pub inv: Option, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Forward a packet to a different destination. pub struct Flow<'a> { /// Operator on flow/set. pub op: SetOp, /// The [flow table][crate::schema::FlowTable]'s name. pub flowtable: Cow<'a, str>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Forward a packet to a different destination. pub struct FWD<'a> { #[serde(skip_serializing_if = "Option::is_none")] /// Interface to forward the packet on. pub dev: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Family of addr. pub family: Option, #[serde(skip_serializing_if = "Option::is_none")] /// IP(v6) address to forward the packet to. pub addr: Option>, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Protocol family for `FWD`. pub enum FWDFamily { IP, IP6, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Duplicate a packet to a different destination. pub struct Dup<'a> { /// Address to duplicate packet to. pub addr: Expression<'a>, #[serde(skip_serializing_if = "Option::is_none")] /// Interface to duplicate packet on. May be omitted to not specify an interface explicitly. pub dev: Option>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Perform Network Address Translation. /// Referenced by `SNAT` and `DNAT` statements. pub struct NAT<'a> { #[serde(skip_serializing_if = "Option::is_none")] /// Address to translate to. pub addr: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Family of addr, either ip or ip6. Required in inet table family. pub family: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Port to translate to. pub port: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Flag(s). 
pub flags: Option>, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Protocol family for `NAT`. pub enum NATFamily { IP, IP6, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Serialize, Deserialize, Hash, JsonSchema)] #[serde(rename_all = "lowercase")] /// Flags for `NAT`. pub enum NATFlag { Random, #[serde(rename = "fully-random")] FullyRandom, Persistent, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Reject the packet and send the given error reply. pub struct Reject { #[serde(skip_serializing_if = "Option::is_none", rename = "type")] /// Type of reject. pub _type: Option, #[serde(skip_serializing_if = "Option::is_none")] /// ICMP code to reject with. pub expr: Option, } impl Reject { pub fn new(_type: Option, code: Option) -> Reject { Reject { _type, expr: code } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Types of `Reject`. pub enum RejectType { #[serde(rename = "tcp reset")] TCPReset, ICMPX, ICMP, ICMPv6, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Dynamically add/update elements to a set. pub struct Set<'a> { /// Operator on set. pub op: SetOp, /// Set element to add or update. pub elem: Expression<'a>, /// Set reference. pub set: Cow<'a, str>, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Operators on `Set`. pub enum SetOp { Add, Update, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Log the packet. /// All properties are optional. pub struct Log<'a> { #[serde(skip_serializing_if = "Option::is_none")] /// Prefix for log entries. pub prefix: Option>, #[serde(skip_serializing_if = "Option::is_none")] /// Log group. pub group: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Snaplen for logging. pub snaplen: Option, #[serde(skip_serializing_if = "Option::is_none", rename = "queue-threshold")] /// Queue threshold. pub queue_threshold: Option, #[serde(skip_serializing_if = "Option::is_none")] /// Log level. Defaults to `"warn"`. pub level: Option, #[serde( default, skip_serializing_if = "Option::is_none", deserialize_with = "deserialize_optional_flags" )] /// Log flags. pub flags: Option>, } impl Log<'_> { pub fn new(group: Option) -> Self { Log { prefix: None, group, snaplen: None, queue_threshold: None, level: None, flags: None, } } } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Levels of `Log`. pub enum LogLevel { Emerg, Alert, Crit, Err, Warn, Notice, Info, Debug, Audit, } #[derive( Debug, Clone, Copy, Eq, PartialEq, Serialize, Deserialize, Hash, EnumString, JsonSchema, )] #[serde(rename_all = "lowercase")] #[strum(serialize_all = "lowercase")] /// Flags of `Log`. pub enum LogFlag { #[serde(rename = "tcp sequence")] TCPSequence, #[serde(rename = "tcp options")] TCPOptions, #[serde(rename = "ip options")] IPOptions, Skuid, Ether, All, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Apply a given statement using a meter. pub struct Meter<'a> { /// Meter name. pub name: Cow<'a, str>, /// Meter key. pub key: Expression<'a>, /// Meter statement. pub stmt: Box>, } #[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Queue the packet to userspace. pub struct Queue<'a> { /// Queue number. 
pub num: Expression<'a>, #[serde(skip_serializing_if = "Option::is_none")] /// Queue flags. pub flags: Option<HashSet<QueueFlag>>, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Serialize, Deserialize, Hash, JsonSchema)] #[serde(rename_all = "lowercase")] /// Flags of `Queue`. pub enum QueueFlag { Bypass, Fanout, }
#[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "vmap")] /// Apply a verdict conditionally. pub struct VerdictMap<'a> { /// Map key. pub key: Expression<'a>, /// Mapping expression consisting of value/verdict pairs. pub data: Expression<'a>, }
#[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename = "ct count")] /// Limit the number of connections using conntrack. pub struct CTCount<'a> { /// Connection count threshold. pub val: Expression<'a>, #[serde(skip_serializing_if = "Option::is_none")] /// If `true`, match if `val` was exceeded. If omitted, defaults to `false`. pub inv: Option<bool>, }
#[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] /// Intercepts new TCP connections and handles the initial 3-way handshake using syncookies instead of conntrack to establish the connection. /// /// Anonymous synproxy requires **nftables 0.9.2 or newer**. pub struct SynProxy { #[serde(skip_serializing_if = "Option::is_none")] /// The maximum segment size (must match your backend server). pub mss: Option<u16>, #[serde(skip_serializing_if = "Option::is_none")] /// The window scale (must match your backend server). pub wscale: Option<u8>, #[serde( skip_serializing_if = "Option::is_none", deserialize_with = "deserialize_optional_flags", default )] /// The synproxy's [flags][crate::types::SynProxyFlag]. pub flags: Option<HashSet<SynProxyFlag>>, }
#[derive(Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Redirects the packet to a local socket without changing the packet header in any way. pub struct TProxy<'a> { #[serde(skip_serializing_if = "Option::is_none")] pub family: Option<Cow<'a, str>>, pub port: u16, #[serde(skip_serializing_if = "Option::is_none")] pub addr: Option<Cow<'a, str>>, }
#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] /// Represents an operator for `Match`. pub enum Operator { #[serde(rename = "&")] /// Binary AND (`&`) AND, #[serde(rename = "|")] /// Binary OR (`|`) OR, #[serde(rename = "^")] /// Binary XOR (`^`) XOR, #[serde(rename = "<<")] /// Left shift (`<<`) LSHIFT, #[serde(rename = ">>")] /// Right shift (`>>`) RSHIFT, #[serde(rename = "==")] /// Equal (`==`) EQ, #[serde(rename = "!=")] /// Not equal (`!=`) NEQ, #[serde(rename = "<")] /// Less than (`<`) LT, #[serde(rename = ">")] /// Greater than (`>`) GT, #[serde(rename = "<=")] /// Less than or equal to (`<=`) LEQ, #[serde(rename = ">=")] /// Greater than or equal to (`>=`) GEQ, #[serde(rename = "in")] /// Perform a lookup, i.e. test if bits on RHS are contained in LHS value (`in`) IN, } nftables-0.6.3/src/types.rs000064400000000000000000000065121046102023000137350ustar 00000000000000use schemars::JsonSchema; use serde::{Deserialize, Serialize}; use strum_macros::EnumString; /// Families in nftables. /// /// See . #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] pub enum NfFamily { IP, IP6, INet, ARP, Bridge, NetDev, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents the type of a Chain.
pub enum NfChainType { Filter, Route, NAT, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents the policy of a Chain. pub enum NfChainPolicy { Accept, Drop, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a netfilter hook. /// /// See . pub enum NfHook { Ingress, Prerouting, Forward, Input, Output, Postrouting, Egress, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// Represents a conntrack helper protocol. pub enum CTHProto { TCP, UDP, DCCP, SCTP, GRE, ICMPv6, ICMP, Generic, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] pub enum RejectCode { #[serde(rename = "admin-prohibited")] /// Host administratively prohibited (ICMPX, ICMP, ICMPv6) AdminProhibited, #[serde(rename = "port-unreachable")] /// Destination port unreachable (ICMPX, ICMP, ICMPv6) PortUnreach, #[serde(rename = "no-route")] /// No route to destination (ICMPX, ICMP, ICMPv6) NoRoute, #[serde(rename = "host-unreachable")] /// Destination host unreachable (ICMPX, ICMP, ICMPv6) HostUnreach, #[serde(rename = "net-unreachable")] /// Destination network unreachable (ICMP) NetUnreach, #[serde(rename = "prot-unreachable")] /// Destination protocol unreachable (ICMP) ProtUnreach, #[serde(rename = "net-prohibited")] /// Network administratively prohibited (ICMP) NetProhibited, #[serde(rename = "host-prohibited")] /// Host administratively prohibited (ICMP) HostProhibited, #[serde(rename = "addr-unreachable")] /// Address unreachable (ICMPv6) AddrUnreach, } #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, EnumString, Hash, JsonSchema)] #[serde(rename_all = "lowercase")] #[strum(serialize_all = "lowercase")] /// Describes a SynProxy's flags. pub enum SynProxyFlag { /// Pass client timestamp option to backend. Timestamp, #[serde(rename = "sack-perm")] #[strum(serialize = "sack-perm")] /// Pass client selective acknowledgement option to backend. SackPerm, } #[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, JsonSchema)] #[serde(rename_all = "lowercase")] /// A time unit (used by [limits][crate::schema::Limit]). pub enum NfTimeUnit { /// A second. Second, /// A minute (60 seconds). Minute, /// An hour (3600 seconds). Hour, /// A day (86400 seconds). Day, /// A week (604800 seconds). Week, } nftables-0.6.3/src/visitor.rs000064400000000000000000000117271046102023000142740ustar 00000000000000use serde::{de, Deserialize}; use std::{borrow::Cow, collections::HashSet, fmt::Formatter, marker::PhantomData, str::FromStr}; type CowCowStrs<'a> = Cow<'a, [Cow<'a, str>]>; /// Deserialize null, a string, or string sequence into an `Option]>>`. pub fn single_string_to_option_vec<'a, 'de, D>( deserializer: D, ) -> Result>, D::Error> where D: de::Deserializer<'de>, { match single_string_to_vec::<'a, 'de, D>(deserializer) { Ok(value) => match value.len() { 0 => Ok(None), _ => Ok(Some(value)), }, Err(err) => Err(err), } } /// Deserialize null, a string or string sequence into a `Cow<'a, [Cow<'a, str>]>`. 
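///
/// Fields opt in via serde's `deserialize_with`; for example, `FlowTable::dev` in this
/// crate uses the `Option` variant above, so that both `"dev": "lo"` and `"dev": ["lo"]`
/// deserialize to the same value:
///
/// ```ignore
/// #[serde(
///     default,
///     skip_serializing_if = "Option::is_none",
///     deserialize_with = "single_string_to_option_vec"
/// )]
/// pub dev: Option<Cow<'a, [Cow<'a, str>]>>,
/// ```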
pub fn single_string_to_vec<'a, 'de, D>(deserializer: D) -> Result, D::Error> where D: de::Deserializer<'de>, { struct StringOrVec<'a>(PhantomData>); impl<'a, 'de> de::Visitor<'de> for StringOrVec<'a> { type Value = CowCowStrs<'a>; fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result { formatter.write_str("single string or list of strings") } fn visit_none(self) -> Result where E: de::Error, { Ok([][..].into()) } fn visit_str(self, value: &str) -> Result where E: de::Error, { Ok(Cow::Owned(vec![Cow::Owned(value.to_owned())])) } fn visit_seq(self, visitor: S) -> Result where S: de::SeqAccess<'de>, { Deserialize::deserialize(de::value::SeqAccessDeserializer::new(visitor)) } } deserializer.deserialize_any(StringOrVec(PhantomData)) } /// Deserialize null, a string or string sequence into an `Option>`. pub fn deserialize_optional_flags<'de, D, T>( deserializer: D, ) -> Result>, D::Error> where T: FromStr + Eq + core::hash::Hash + Deserialize<'de>, ::Err: std::fmt::Display, D: de::Deserializer<'de>, { struct FlagSet(PhantomData); impl<'de, T> de::Visitor<'de> for FlagSet where T: FromStr + Eq + core::hash::Hash + Deserialize<'de>, ::Err: std::fmt::Display, { type Value = Option>; fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result { formatter.write_str("single string or list of strings") } fn visit_none(self) -> Result where E: de::Error, { Ok(None) } fn visit_str(self, value: &str) -> Result where E: de::Error, { let mut h: HashSet = HashSet::new(); h.insert(T::from_str(value).map_err(::custom)?); Ok(Some(h)) } fn visit_seq(self, visitor: S) -> Result where S: de::SeqAccess<'de>, { let h: HashSet = Deserialize::deserialize(de::value::SeqAccessDeserializer::new(visitor))?; Ok(Some(h)) } } deserializer.deserialize_any(FlagSet(PhantomData)) } /// Serialize an [Option] with [Option::None] value as `0`. pub fn serialize_none_to_zero(x: &Option, s: S) -> Result where S: serde::Serializer, T: serde::Serialize, { match x { Some(v) => s.serialize_some(v), None => s.serialize_some(&0_usize), } } /// Deserialize string or array of strings into the given HashSet type. 
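///
/// A hedged usage sketch (the struct and field below are hypothetical, not part of this
/// crate): any flag enum deriving `EnumString` can be collected this way, accepting both
/// `"flags": "constant"` and `"flags": ["constant", "interval"]`:
///
/// ```ignore
/// #[derive(serde::Deserialize)]
/// struct Example {
///     #[serde(deserialize_with = "deserialize_flags")]
///     flags: std::collections::HashSet<nftables::schema::SetFlag>,
/// }
/// ```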
pub fn deserialize_flags<'de, D, T>(deserializer: D) -> Result, D::Error> where D: de::Deserializer<'de>, T: FromStr + Eq + core::hash::Hash + Deserialize<'de>, ::Err: std::fmt::Display, { struct FlagSet(PhantomData); impl<'de, T> de::Visitor<'de> for FlagSet where T: FromStr + Eq + core::hash::Hash + Deserialize<'de>, ::Err: std::fmt::Display, { type Value = HashSet; fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result { formatter.write_str("single string or list of strings") } fn visit_none(self) -> Result where E: de::Error, { Ok(HashSet::default()) } fn visit_str(self, value: &str) -> Result where E: de::Error, ::Err: std::fmt::Display, { let mut h: HashSet = HashSet::new(); h.insert(T::from_str(value).map_err(::custom)?); Ok(h) } fn visit_seq(self, visitor: S) -> Result where S: de::SeqAccess<'de>, { Deserialize::deserialize(de::value::SeqAccessDeserializer::new(visitor)) } } deserializer.deserialize_any(FlagSet(PhantomData)) } nftables-0.6.3/tests/deserialize.rs000064400000000000000000000016201046102023000154370ustar 00000000000000use std::{fs::File, io::BufReader, path::Path}; use nftables::schema::Nftables; use serde::de::Error; fn test_deserialize_json_files(path: &Path) -> datatest_stable::Result<()> { println!("Deserializing file: {}", path.display()); let file = File::open(path).expect("Cannot open file"); let reader = BufReader::new(file); let jd = &mut serde_json::Deserializer::from_reader(reader); let result: Result = serde_path_to_error::deserialize(jd); match result { Ok(nf) => { println!("Deserialized document: {nf:?}"); Ok(()) } Err(err) => Err(serde_json::error::Error::custom(format!( "Path: {}. Original error: {}", err.path(), err )) .into()), } } datatest_stable::harness! { {test = test_deserialize_json_files, root = "resources/test/json", pattern = r"^.*/*"}, } nftables-0.6.3/tests/fixtures.rs000064400000000000000000000027111046102023000150120ustar 00000000000000use std::{fs::File, io::BufReader}; use nftables::schema::Nftables; // nft 1.1.4 changed behavior where the flag is printed as single string instead of array // As such this lib should be able to parse both and return the same result. 
// https://bugzilla.netfilter.org/show_bug.cgi?id=1806 fn parse_and_compare_files(path1: &str, path2: &str) { let file1 = BufReader::new(File::open(path1).expect("Cannot open file1")); let json1: Nftables = serde_path_to_error::deserialize(&mut serde_json::Deserializer::from_reader(file1)) .expect("failed to parse json1"); let file2 = BufReader::new(File::open(path2).expect("Cannot open file2")); let json2: Nftables = serde_path_to_error::deserialize(&mut serde_json::Deserializer::from_reader(file2)) .expect("failed to parse json2"); assert_eq!(json1, json2, "Both parsed files should be identical"); } #[test] fn test_parse_fib_flags() { parse_and_compare_files( "resources/test/fixtures/single-fib-flag-1.json", "resources/test/fixtures/single-fib-flag-2.json", ); } #[test] fn test_parse_synproxy_flags() { parse_and_compare_files( "resources/test/fixtures/synproxy-flag-1.json", "resources/test/fixtures/synproxy-flag-2.json", ); } #[test] fn test_parse_set_map_flags() { parse_and_compare_files( "resources/test/fixtures/set-map-flag-1.json", "resources/test/fixtures/set-map-flag-2.json", ); } nftables-0.6.3/tests/helper_tests.rs000064400000000000000000000140001046102023000156340ustar 00000000000000use std::{borrow::Cow, vec}; use nftables::{ batch::Batch, expr, helper::{self, NftablesError}, schema::{self, Chain, Rule, Table}, stmt, types, }; use serial_test::serial; #[test] #[ignore] #[serial] /// Reads current ruleset from nftables and reads it to `Nftables` Rust struct. fn test_list_ruleset() { flush_ruleset().expect("failed to flush ruleset"); helper::get_current_ruleset().unwrap(); } #[test] #[ignore] /// Attempts to read current ruleset from nftables using non-existing nft binary. fn test_list_ruleset_invalid_program() { let result = helper::get_current_ruleset_with_args(Some("/dev/null/nft"), helper::DEFAULT_ARGS); let err = result.expect_err("getting the current ruleset should fail with non-existing nft binary"); assert!(matches!(err, NftablesError::NftExecution { .. })); } #[test] #[ignore] #[serial] /// Applies an example ruleset to nftables, lists single map/set through nft args. fn test_nft_args_list_map_set() { flush_ruleset().expect("failed to flush ruleset"); let ruleset = example_ruleset(false); nftables::helper::apply_ruleset(&ruleset).unwrap(); // nft should return two list object: metainfo and the set/map let applied = helper::get_current_ruleset_with_args( helper::DEFAULT_NFT, ["list", "map", "ip", "test-table-01", "test_map"], ) .unwrap(); assert_eq!(2, applied.objects.len()); let applied = helper::get_current_ruleset_with_args( helper::DEFAULT_NFT, ["list", "set", "ip", "test-table-01", "test_set"], ) .unwrap(); assert_eq!(2, applied.objects.len()); } #[test] #[ignore] #[serial] /// Test that AnonymousCounter can be applied with [Option::None] values. fn test_regr_anoncounter_none() { flush_ruleset().expect("failed to flush ruleset"); let mut batch = Batch::new(); // create table "test-table-02" and chain "test-chain-02". let table_name: &'static str = "test-table-02"; batch.add(schema::NfListObject::Table(Table { name: table_name.into(), family: types::NfFamily::IP, ..Table::default() })); batch.add(schema::NfListObject::Chain(Chain { name: "test-chain-02".into(), family: types::NfFamily::IP, table: table_name.into(), ..Chain::default() })); // create rule with multiple forms of [nftables::stmt::AnonymousCounter]. 
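    // Note (editorial): because `AnonymousCounter` serializes its fields with
    // `serialize_none_to_zero`, the all-`None` form below is expected to reach nft as
    // {"counter": {"packets": 0, "bytes": 0}}, i.e. equivalent to the explicit zeros.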
batch.add(schema::NfListObject::Rule(Rule { chain: "test-chain-02".into(), family: types::NfFamily::IP, table: table_name.into(), expr: [ stmt::Statement::Counter(nftables::stmt::Counter::Anonymous(Some( nftables::stmt::AnonymousCounter { packets: None, bytes: None, }, ))), stmt::Statement::Counter(nftables::stmt::Counter::Anonymous(Some( nftables::stmt::AnonymousCounter { packets: Some(0), bytes: Some(0), }, ))), ][..] .into(), ..Rule::default() })); let ruleset = batch.to_nftables(); let result = nftables::helper::apply_ruleset(&ruleset); assert!(result.is_ok()); } #[test] #[ignore] #[serial] /// Applies a ruleset to nftables. fn test_apply_ruleset() { flush_ruleset().expect("failed to flush ruleset"); let ruleset = example_ruleset(true); nftables::helper::apply_ruleset(&ruleset).unwrap(); } #[test] #[ignore] #[serial] /// Attempts to delete an unknown table, expecting an error. fn test_remove_unknown_table() { flush_ruleset().expect("failed to flush ruleset"); let mut batch = Batch::new(); batch.delete(schema::NfListObject::Table(schema::Table { family: types::NfFamily::IP6, name: "i-do-not-exist".into(), ..Table::default() })); let ruleset = batch.to_nftables(); let result = nftables::helper::apply_ruleset(&ruleset); let err = result.expect_err("Expecting nftables error for unknown table."); assert!(matches!(err, NftablesError::NftFailed { .. })); } fn example_ruleset(with_undo: bool) -> schema::Nftables<'static> { let mut batch = Batch::new(); // create table "test-table-01" let table_name: &'static str = "test-table-01"; batch.add(schema::NfListObject::Table(Table { name: table_name.into(), family: types::NfFamily::IP, ..Table::default() })); // create named set "test_set" let set_name = "test_set"; batch.add(schema::NfListObject::Set(Box::new(schema::Set { family: types::NfFamily::IP, table: table_name.into(), name: set_name.into(), set_type: schema::SetTypeValue::Single(schema::SetType::Ipv4Addr), ..schema::Set::default() }))); // create named map "test_map" batch.add(schema::NfListObject::Map(Box::new(schema::Map { family: types::NfFamily::IP, table: table_name.into(), name: "test_map".into(), map: schema::SetTypeValue::Single(schema::SetType::EtherAddr), set_type: schema::SetTypeValue::Single(schema::SetType::Ipv4Addr), ..schema::Map::default() }))); // add element to set batch.add(schema::NfListObject::Element(schema::Element { family: types::NfFamily::IP, table: table_name.into(), name: set_name.into(), elem: Cow::Owned(vec![ expr::Expression::String("127.0.0.1".into()), expr::Expression::String("127.0.0.2".into()), ]), })); if with_undo { batch.delete(schema::NfListObject::Table(schema::Table { family: types::NfFamily::IP, name: "test-table-01".into(), ..Table::default() })); } batch.to_nftables() } fn get_flush_ruleset() -> schema::Nftables<'static> { let mut batch = Batch::new(); batch.add_cmd(schema::NfCmd::Flush(schema::FlushObject::Ruleset(None))); batch.to_nftables() } fn flush_ruleset() -> Result<(), NftablesError> { let ruleset = get_flush_ruleset(); nftables::helper::apply_ruleset(&ruleset) } nftables-0.6.3/tests/json_tests.rs000064400000000000000000000337051046102023000153430ustar 00000000000000use nftables::expr::{self, BinaryOperation, Expression, Meta, MetaKey, NamedExpression}; use nftables::stmt::{self, Counter, Match, Operator, Queue, Statement}; use nftables::{schema::*, types::*}; use serde_json::json; use std::borrow::Cow; #[test] fn test_chain_table_rule_inet() { // Equivalent nft command: // ``` // nft "add table inet some_inet_table; // add chain inet 
some_inet_table some_inet_chain // '{ type filter hook forward priority 0; policy accept; }'" // ``` let expected: Nftables = Nftables { objects: Cow::Borrowed(&[ NfObject::CmdObject(NfCmd::Add(NfListObject::Table(Table { family: NfFamily::INet, name: Cow::Borrowed("some_inet_table"), handle: None, }))), NfObject::CmdObject(NfCmd::Add(NfListObject::Chain(Chain { family: NfFamily::INet, table: Cow::Borrowed("some_inet_table"), name: Cow::Borrowed("some_inet_chain"), newname: None, handle: None, _type: Some(NfChainType::Filter), hook: Some(NfHook::Forward), prio: None, dev: None, policy: Some(NfChainPolicy::Accept), }))), ]), }; let json = json!({"nftables":[ {"add":{"table":{"family":"inet","name":"some_inet_table"}}}, {"add":{"chain":{"family":"inet","table":"some_inet_table", "name":"some_inet_chain","type":"filter","hook":"forward","policy":"accept"}}} ]}); println!("{}", &json); let parsed: Nftables = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } #[test] /// Test JSON serialization of flow and flowtable. fn test_flowtable() { // equivalent nft command: // ``` // nft 'flush ruleset; add table inet some_inet_table; // add chain inet some_inet_table forward; // add flowtable inet some_inet_table flowed { hook ingress priority filter; devices = { lo }; }; // add rule inet some_inet_table forward ct state established flow add @flowed' // ``` let expected: Nftables = Nftables { objects: Cow::Borrowed(&[ NfObject::ListObject(NfListObject::Table(Table { family: NfFamily::INet, name: Cow::Borrowed("some_inet_table"), handle: None, })), NfObject::ListObject(NfListObject::FlowTable(FlowTable { family: NfFamily::INet, table: Cow::Borrowed("some_inet_table"), name: Cow::Borrowed("flowed"), handle: None, hook: Some(NfHook::Ingress), prio: Some(0), dev: Some(Cow::Borrowed(&[Cow::Borrowed("lo")])), })), NfObject::ListObject(NfListObject::Chain(Chain { family: NfFamily::INet, table: Cow::Borrowed("some_inet_table"), name: Cow::Borrowed("some_inet_chain"), newname: None, handle: None, _type: Some(NfChainType::Filter), hook: Some(NfHook::Forward), prio: None, dev: None, policy: Some(NfChainPolicy::Accept), })), NfObject::ListObject(NfListObject::Rule(Rule { family: NfFamily::INet, table: Cow::Borrowed("some_inet_table"), chain: Cow::Borrowed("some_inet_chain"), expr: Cow::Borrowed(&[ Statement::Flow(stmt::Flow { op: stmt::SetOp::Add, flowtable: Cow::Borrowed("@flowed"), }), Statement::Match(Match { left: Expression::Named(NamedExpression::CT(expr::CT { key: Cow::Borrowed("state"), family: None, dir: None, })), op: Operator::IN, right: Expression::String(Cow::Borrowed("established")), }), ]), handle: None, index: None, comment: None, })), ]), }; let json = json!({"nftables":[ {"table":{"family":"inet","name":"some_inet_table"}}, {"flowtable":{"family":"inet","table":"some_inet_table","name":"flowed", "hook":"ingress","prio":0,"dev":["lo"]}}, {"chain":{"family":"inet","table":"some_inet_table","name":"some_inet_chain", "type":"filter","hook":"forward","policy":"accept"}}, {"rule":{"family":"inet","table":"some_inet_table","chain":"some_inet_chain", "expr":[{"flow":{"op":"add","flowtable":"@flowed"}}, {"match":{"left":{"ct":{"key":"state"}},"right":"established","op":"in"}}]}}]}); println!("{}", &json); let parsed: Nftables = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } #[test] fn test_insert() { // Equivalent nft command: // ``` // nft 'insert rule inet some_inet_table some_inet_chain position 0 // iifname "br-lan" oifname "wg_exit" counter accept' // ``` 
let expected: Nftables = Nftables { objects: vec![NfObject::CmdObject(NfCmd::Insert(NfListObject::Rule( Rule { family: NfFamily::INet, table: "some_inet_table".into(), chain: "some_inet_chain".into(), expr: vec![ Statement::Match(Match { left: Expression::Named(NamedExpression::Meta(Meta { key: MetaKey::Iifname, })), right: Expression::String("br-lan".into()), op: Operator::EQ, }), Statement::Match(Match { left: Expression::Named(NamedExpression::Meta(Meta { key: MetaKey::Oifname, })), right: Expression::String("wg_exit".into()), op: Operator::EQ, }), Statement::Counter(Counter::Anonymous(None)), Statement::Accept(None), ] .into(), handle: None, index: Some(0), comment: None, }, )))] .into(), }; let json = json!({"nftables":[{"insert": {"rule":{"family":"inet","table":"some_inet_table","chain":"some_inet_chain","expr":[ {"match":{"left":{"meta":{"key":"iifname"}},"right":"br-lan","op":"=="}}, {"match":{"left":{"meta":{"key":"oifname"}},"right":"wg_exit","op":"=="}}, {"counter":null},{"accept":null} ],"index":0,"comment":null}}}]}); println!("{}", &json); let parsed: Nftables = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } #[test] fn test_parsing_of_queue_without_flags() { let expected = Nftables { objects: Cow::Borrowed(&[NfObject::ListObject(NfListObject::Rule(Rule { family: NfFamily::IP, table: Cow::Borrowed("test_table"), chain: Cow::Borrowed("test_chain"), expr: Cow::Borrowed(&[ Statement::Match(Match { left: Expression::Named(NamedExpression::Payload( nftables::expr::Payload::PayloadField(nftables::expr::PayloadField { protocol: Cow::Borrowed("udp"), field: Cow::Borrowed("dport"), }), )), right: Expression::Number(20000), op: Operator::EQ, }), Statement::Queue(Queue { num: Expression::Number(0), flags: None, }), ]), handle: Some(2), index: None, comment: None, }))]), }; let json = json!({ "nftables": [ { "rule": { "family": "ip", "table": "test_table", "chain": "test_chain", "handle": 2, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "udp", "field": "dport" } }, "right": 20000 } }, { "queue": { "num": 0 } } ] } } ] }); let parsed: Nftables = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } #[test] fn test_queue_json_serialisation() { let queue = Statement::Queue(Queue { num: Expression::Number(0), flags: None, }); let expected_json = String::from(r#"{"queue":{"num":0}}"#); assert_eq!(expected_json, serde_json::to_string(&queue).unwrap()); } #[test] fn test_parse_payload() { let expected = Nftables { objects: Cow::Borrowed(&[NfObject::ListObject(NfListObject::Rule(Rule { family: NfFamily::IP, table: Cow::Borrowed("test_table"), chain: Cow::Borrowed("test_chain"), expr: Cow::Borrowed(&[ Statement::Match(Match { left: Expression::Named(NamedExpression::Payload( nftables::expr::Payload::PayloadField(nftables::expr::PayloadField { protocol: Cow::Borrowed("udp"), field: Cow::Borrowed("dport"), }), )), right: Expression::Number(20000), op: Operator::EQ, }), Statement::Match(Match { left: Expression::Named(NamedExpression::Payload( nftables::expr::Payload::PayloadRaw(nftables::expr::PayloadRaw { base: nftables::expr::PayloadBase::TH, offset: 10, len: 4, }), )), right: Expression::Number(20), op: Operator::EQ, }), ]), handle: Some(2), index: None, comment: None, }))]), }; let json = json!({ "nftables": [ { "rule": { "family": "ip", "table": "test_table", "chain": "test_chain", "handle": 2, "expr": [ { "match": { "op": "==", "left": { "payload": { "protocol": "udp", "field": "dport" } }, "right": 20000 } }, { "match": { "op": 
"==", "left": { "payload": { "base": "th", "offset": 10, "len": 4 } }, "right": 20 } }, ] } } ] }); let parsed: Nftables = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } #[test] fn test_bit_flags() { let expected = NfListObject::Rule(Rule { family: NfFamily::INet, table: Cow::Borrowed("test_table"), chain: Cow::Borrowed("input"), expr: Cow::Owned(vec![Statement::Match(Match { op: Operator::EQ, left: Expression::BinaryOperation(Box::new(BinaryOperation::AND( Expression::Named(NamedExpression::Payload( nftables::expr::Payload::PayloadField(nftables::expr::PayloadField { protocol: Cow::Borrowed("tcp"), field: Cow::Borrowed("flags"), }), )), Expression::BinaryOperation(Box::new(BinaryOperation::OR(vec![ Expression::String(Cow::Borrowed("fin")), Expression::String(Cow::Borrowed("syn")), Expression::String(Cow::Borrowed("rst")), Expression::String(Cow::Borrowed("ack")), ]))), ))), right: Expression::String(Cow::Borrowed("syn")), })]), handle: Some(27), index: None, comment: None, }); let json = json!({ "rule": { "family": "inet", "table": "test_table", "chain": "input", "handle": 27, "expr": [ { "match": { "op": "==", "left": { "&": [ { "payload": { "protocol": "tcp", "field": "flags" } }, { "|": [ "fin", "syn", "rst", "ack" ] } ] }, "right": "syn" } } ] } }); let parsed: NfListObject = serde_json::from_value(json).unwrap(); assert_eq!(expected, parsed); } nftables-0.6.3/tests/serialize.rs000064400000000000000000000040011046102023000151220ustar 00000000000000use nftables::{expr::*, schema::*, stmt::*, types::*}; use std::borrow::Cow; #[test] fn test_serialize() { let _a: Nftables = Nftables { objects: Cow::Owned(vec![ NfObject::CmdObject(NfCmd::Add(NfListObject::Table(Table { family: NfFamily::INet, name: Cow::Borrowed("namib"), handle: None, }))), NfObject::CmdObject(NfCmd::Add(NfListObject::Chain(Chain { family: NfFamily::INet, table: Cow::Borrowed("namib"), name: Cow::Borrowed("one_chain"), newname: None, handle: None, _type: Some(NfChainType::Filter), hook: Some(NfHook::Forward), prio: None, dev: None, policy: Some(NfChainPolicy::Accept), }))), NfObject::CmdObject(NfCmd::Add(NfListObject::Rule(Rule { family: NfFamily::INet, table: Cow::Borrowed("namib"), chain: Cow::Borrowed("one_chain"), expr: Cow::Owned(vec![ Statement::Match(Match { left: Expression::List(vec![ Expression::Number(123), Expression::String(Cow::Borrowed("asd")), ]), right: Expression::Named(NamedExpression::CT(CT { key: Cow::Borrowed("state"), family: None, dir: None, })), op: Operator::EQ, }), Statement::Drop(Some(Drop {})), ]), handle: None, index: None, comment: None, }))), ]), }; let j = serde_json::to_string(&_a).unwrap(); let result: Nftables = serde_json::from_str(&j).unwrap(); println!("JSON: {j}"); println!("Parsed: {result:?}"); }