sqlparse-0.5.4/.flake8

[flake8]
exclude = tests, docs, dist
max-complexity = 31
statistics = True
show-source = True

sqlparse-0.5.4/.pre-commit-hooks.yaml

- id: sqlformat
  name: sqlformat
  description: Format SQL files using sqlparse
  entry: sqlformat
  language: python
  types: [sql]
  args: [--in-place, --reindent]
  minimum_pre_commit_version: '2.9.0'

sqlparse-0.5.4/.readthedocs.yaml

# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  # You can also specify other tool versions:
  # nodejs: "20"
  # rust: "1.70"
  # golang: "1.20"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/source/conf.py
  # You can configure Sphinx to use a different builder, for instance use the
  # dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt

sqlparse-0.5.4/AGENTS.md

# AGENTS.md

This file provides guidance to Agents when working with code in this repository.

## Project Overview

sqlparse is a non-validating SQL parser for Python that provides support for parsing, splitting, and formatting SQL statements. It's compatible with Python 3.8+ and supports multiple SQL dialects (Oracle, MySQL, PostgreSQL/PL/pgSQL, HQL, MS Access, Snowflake, BigQuery).

## Development Commands

This project uses `pixi` for dependency and environment management. Common commands:

### Testing

- Run all tests across Python versions: `pixi run test-all`
- Run tests for a specific Python version: `pixi run -e py311 pytest tests/`
- Run a single test file: `pixi run -e py311 pytest tests/test_format.py`
- Run a specific test: `pixi run -e py311 pytest tests/test_format.py::test_name`
- Using the Makefile: `make test`

### Linting

- `pixi run lint` or `make lint`

### Coverage

- `make coverage` (runs tests with coverage and shows a report)
- `make coverage-xml` (generates an XML coverage report)

### Building

- `python -m build` (builds distribution packages)

## Architecture

### Core Processing Pipeline

The parsing and formatting workflow follows this sequence:

1. **Lexing** (`sqlparse/lexer.py`): Tokenizes SQL text into `(token_type, value)` pairs using regex-based pattern matching
2. **Filtering** (`sqlparse/engine/filter_stack.py`): Processes the token stream through a `FilterStack` with three stages:
   - `preprocess`: Token-level filters
   - `stmtprocess`: Statement-level filters
   - `postprocess`: Final output filters
3. **Statement Splitting** (`sqlparse/engine/statement_splitter.py`): Splits the token stream into individual SQL statements
4. **Grouping** (`sqlparse/engine/grouping.py`): Groups tokens into higher-level syntactic structures (parentheses, functions, identifiers, etc.)
5. **Formatting** (`sqlparse/formatter.py` + `sqlparse/filters/`): Applies formatting filters based on options
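All of these stages run behind `sqlparse.parse()`. A minimal sketch of inspecting the pipeline's output (the exact tree shape may vary between sqlparse versions):

```python
import sqlparse

# parse() runs lexing, statement splitting and grouping, and returns
# a tuple of sqlparse.sql.Statement objects.
statements = sqlparse.parse("SELECT id, name FROM users WHERE id = 1;")
stmt = statements[0]

# The top-level children are already grouped, e.g. an IdentifierList
# for "id, name" and a Where group for the WHERE clause.
for token in stmt.tokens:
    print(type(token).__name__, repr(token.value))
```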
### Token Hierarchy

The token system is defined in `sqlparse/sql.py`:

- `Token`: Base class with `value`, `ttype` (token type), and `parent` attributes
- `TokenList`: Group of tokens, base for all syntactic structures
- `Statement`: Top-level SQL statement
- `Identifier`: Table/column names, possibly with aliases
- `IdentifierList`: Comma-separated identifiers
- `Function`: Function calls with parameters
- `Parenthesis`, `SquareBrackets`: Bracketed expressions
- `Case`, `If`, `For`, `Begin`: Control structures
- `Where`, `Having`, `Over`: SQL clauses
- `Comparison`, `Operation`: Expressions

All tokens maintain parent-child relationships for tree traversal.
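To make the hierarchy concrete, the classes in `sqlparse.sql` can be matched with `isinstance`. A small sketch (how a given statement is grouped can differ between versions):

```python
import sqlparse
from sqlparse import sql

stmt = sqlparse.parse("SELECT t.name AS n FROM table1 t")[0]

# Identifier groups expose helpers such as get_real_name() and get_alias().
for token in stmt.tokens:
    if isinstance(token, sql.Identifier):
        print(token.get_real_name(), token.get_alias())
# Expected output, roughly: "name n" and "table1 t"
```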
### Token Types

Token types are defined in `sqlparse/tokens.py` and used for classification during lexing (e.g., `T.Keyword.DML`, `T.Name`, `T.Punctuation`).

### Keywords and Lexer Configuration

`sqlparse/keywords.py` contains:

- `SQL_REGEX`: List of regex patterns for tokenization
- Multiple `KEYWORDS_*` dictionaries for different SQL dialects
- The `Lexer` class uses a singleton pattern (`Lexer.get_default_instance()`) that can be configured with different keyword sets
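A sketch of extending the default lexer instance; `KEYWORDS_HQL` stands in for any of the bundled `KEYWORDS_*` dictionaries, and the custom entry is purely illustrative:

```python
from sqlparse import keywords, tokens as T
from sqlparse.lexer import Lexer

# The default instance is a process-wide singleton, so this mutates
# global parser state.
lex = Lexer.get_default_instance()

# Load one of the bundled dialect dictionaries ...
lex.add_keywords(keywords.KEYWORDS_HQL)

# ... or register custom keywords (a plain dict mapping the uppercase
# keyword to a token type).
lex.add_keywords({"MY_CUSTOM_KEYWORD": T.Keyword})
```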
### Grouping Algorithm

`sqlparse/engine/grouping.py` contains the grouping logic that transforms flat token lists into nested tree structures. Key functions:

- `_group_matching()`: Groups tokens with matching open/close markers (parentheses, CASE/END, etc.)
- Various `group_*()` functions for specific constructs (identifiers, functions, comparisons, etc.)
- Includes DoS protection via `MAX_GROUPING_DEPTH` and `MAX_GROUPING_TOKENS` limits

### Formatting Filters

`sqlparse/filters/` contains various formatting filters:

- `reindent.py`: Indentation logic
- `aligned_indent.py`: Aligned indentation style
- `right_margin.py`: Line wrapping
- `tokens.py`: Token-level transformations (keyword case, etc.)
- `output.py`: Output format serialization (SQL, Python, PHP)
- `others.py`: Miscellaneous filters (strip comments, whitespace, etc.)
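These filters are normally driven through the options of `sqlparse.format()` rather than instantiated directly; for example:

```python
import sqlparse

raw = "select id, name from users where id = 1 -- lookup"

print(sqlparse.format(
    raw,
    reindent=True,         # reindent.py
    keyword_case="upper",  # tokens.py (keyword case filter)
    strip_comments=True,   # others.py
))
```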
## Public API

The main entry points in `sqlparse/__init__.py`:

- `parse(sql, encoding=None)`: Parse SQL into a tuple of `Statement` objects
- `format(sql, encoding=None, **options)`: Format SQL with options (reindent, keyword_case, etc.)
- `split(sql, encoding=None, strip_semicolon=False)`: Split SQL into individual statement strings
- `parsestream(stream, encoding=None)`: Generator version of parse for file-like objects
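A short sketch of these entry points, using only the signatures documented above:

```python
import io
import sqlparse

raw = "select 1; select 2;"

# split() returns one string per statement.
for statement in sqlparse.split(raw):
    print(statement)

# parsestream() lazily yields Statement objects from a file-like object.
for stmt in sqlparse.parsestream(io.StringIO(raw)):
    print(stmt.get_type())  # prints SELECT twice
```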
## Important Patterns

### Token Traversal

- `token.flatten()`: Recursively yields all leaf tokens (ungrouped)
- `token_first()`, `token_next()`, `token_prev()`: Navigate token lists
- `token_next_by(i=, m=, t=)`: Find next token by instance type, match criteria, or token type
- `token.match(ttype, values, regex=False)`: Check if token matches criteria
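A sketch of these helpers on a parsed statement; note that in current releases `token_next_by()` returns an `(index, token)` pair while `token_first()` returns just the token:

```python
import sqlparse
from sqlparse import sql, tokens as T

stmt = sqlparse.parse("SELECT a, b FROM t WHERE a > 1")[0]

first = stmt.token_first()                     # first non-whitespace token
print(first.match(T.Keyword.DML, ["SELECT"]))  # True

# Find groups by instance type or leaves by token type.
_, where = stmt.token_next_by(i=sql.Where)
_, dml = stmt.token_next_by(t=T.Keyword.DML)

# flatten() yields every leaf token, ignoring grouping.
print([tok.value for tok in stmt.flatten()])
```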
### Adding Keyword Support

Use `Lexer.add_keywords()` to extend the parser with new keywords for different SQL dialects.

### DoS Prevention

Be aware of recursion limits and token count limits in grouping operations when handling untrusted SQL input.
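A sketch of adjusting those limits, assuming they are the module-level names in `sqlparse/engine/grouping.py` mentioned above (per the 0.5.4 changelog, setting a limit to None disables it):

```python
from sqlparse.engine import grouping

# Defaults per the 0.5.4 changelog: MAX_GROUPING_DEPTH=100,
# MAX_GROUPING_TOKENS=10000. Raising or disabling them trades DoS
# protection for support of very large statements.
grouping.MAX_GROUPING_DEPTH = 200
grouping.MAX_GROUPING_TOKENS = None  # disable the token-count limit
```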
## Testing Conventions

- Tests are in the `tests/` directory
- Test files follow the pattern `test_*.py`
- Uses the pytest framework
- Test data often includes SQL strings with expected parsing/formatting results
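A minimal test in this style might look like the following (hypothetical file `tests/test_example.py`, not part of the actual suite):

```python
import sqlparse


def test_keyword_case_upper():
    assert sqlparse.format("select 1", keyword_case="upper") == "SELECT 1"


def test_split_two_statements():
    assert len(sqlparse.split("select 1; select 2;")) == 2
```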
sqlparse-0.5.4/CHANGELOG

Release 0.5.4 (Nov 28, 2025)
----------------------------

Enhancements

* Add support for Python 3.14.
* Add type annotations to top-level API functions and include a py.typed
  marker for PEP 561 compliance, enabling type checking with mypy and other
  tools (issue756).
* Add pre-commit hook support. sqlparse can now be used as a pre-commit hook
  to automatically format SQL files. The CLI now supports multiple files and
  an `--in-place` flag for in-place editing (issue537).
* Add `ATTACH` and `DETACH` to PostgreSQL keywords (pr808).
* Add `INTERSECT` to close keywords in WHERE clause (pr820).
* Support `REGEXP BINARY` comparison operator (pr817).

Bug Fixes

* Add additional protection against denial of service attacks when parsing
  very large lists of tuples. This enhances the existing recursion
  protections with configurable limits for token processing to prevent DoS
  through algorithmic complexity attacks. The new limits
  (MAX_GROUPING_DEPTH=100, MAX_GROUPING_TOKENS=10000) can be adjusted or
  disabled (by setting them to None) if needed for legitimate large SQL
  statements.
* Remove shebang from cli.py and remove executable flag (pr818).
* Fix strip_comments not removing all comments when the input contains only
  comments (issue801, pr803 by stropysh).
* Fix splitting statements with IF EXISTS/IF NOT EXISTS inside BEGIN...END
  blocks (issue812).
* Fix splitting on semicolons inside BEGIN...END blocks (issue809).

Release 0.5.3 (Dec 10, 2024)
----------------------------

Bug Fixes

* This version introduces a more generalized handling of potential denial of
  service attacks (DoS) due to recursion errors for deeply nested
  statements. Brought up and fixed by @living180. Thanks a lot!

Release 0.5.2 (Nov 14, 2024)
----------------------------

Bug Fixes

* EXTENSION is now recognized as a keyword (issue785).
* SQL hints are not removed when removing comments (issue262, by skryzh).

Release 0.5.1 (Jul 15, 2024)
----------------------------

Enhancements

* New "compact" option for the formatter. If set, the formatter tries to
  produce a more compact output by avoiding some line breaks (issue783).

Bug Fixes

* The strip comments filter was a bit greedy and removed too much whitespace
  (issue772). Note: In some cases you might want to add
  `strip_whitespace=True` where you previously used just
  `strip_comments=True`. `strip_comments` did some of the work that
  `strip_whitespace` should do.
* Fix error when splitting statements that contain multiple CASE clauses
  within a BEGIN block (issue784).
* Fix whitespace removal with nested expressions (issue782).
* Fix parsing and formatting of ORDER clauses containing NULLS FIRST or
  NULLS LAST (issue532).

Release 0.5.0 (Apr 13, 2024)
----------------------------

Notable Changes

* Drop support for Python 3.5, 3.6, and 3.7.
* Python 3.12 is now supported (pr725, by hugovk).
* IMPORTANT: Fixes a potential denial of service attack (DoS) due to a
  recursion error for deeply nested statements. Instead of a recursion
  error, a generic SQLParseError is raised. See the security advisory for
  details:
  https://github.com/andialbrecht/sqlparse/security/advisories/GHSA-2m57-hf25-phgg
  The vulnerability was discovered by @uriyay-jfrog. Thanks for reporting!

Enhancements

* Splitting statements now allows removing the semicolon at the end. Some
  database backends love statements without semicolon (issue742).
* Support TypedLiterals in get_parameters (pr749, by Khrol).
* Improve splitting of Transact SQL when using GO keyword (issue762).
* Support for some JSON operators (issue682).
* Improve formatting of statements containing JSON operators (issue542).
* Support for BigQuery and Snowflake keywords (pr699, by griffatrasgo).
* Support parsing of OVER clause (issue701, pr768 by r33s3n6).

Bug Fixes

* Ignore dunder attributes when creating Tokens (issue672).
* Allow operators to precede dollar-quoted strings (issue763).
* Fix parsing of nested order clauses (issue745, pr746 by john-bodley).
* Thread-safe initialization of Lexer class (issue730).
* Classify TRUNCATE as DDL and GRANT/REVOKE as DCL keywords (based on pr719
  by josuc1, thanks for bringing this up!).
* Fix parsing of PRIMARY KEY (issue740).

Other

* Optimize performance of matching function (pr799, by admachainz).

Release 0.4.4 (Apr 18, 2023)
----------------------------

Notable Changes

* IMPORTANT: This release fixes a security vulnerability in the parser where
  a regular expression vulnerable to ReDoS (Regular Expression Denial of
  Service) was used. See the security advisory for details:
  https://github.com/andialbrecht/sqlparse/security/advisories/GHSA-rrm6-wvj7-cwh2
  The vulnerability was discovered by @erik-krogh from GitHub Security Lab
  (GHSL). Thanks for reporting!

Bug Fixes

* Revert a change from 0.4.0 that changed IN to be a comparison (issue694).
  The primary expectation is that IN is treated as a keyword and not as a
  comparison operator. That also follows the definition of reserved keywords
  for the major SQL syntax definitions.
* Fix regular expressions for string parsing.

Other

* sqlparse now uses pyproject.toml instead of setup.cfg (issue685).

Release 0.4.3 (Sep 23, 2022)
----------------------------

Enhancements

* Add support for DIV operator (pr664, by chezou).
* Add support for additional SPARK keywords (pr643, by mrmasterplan).
* Avoid tokens copy (pr622, by living180).
* Add REGEXP as a comparison (pr647, by PeterSandwich).
* Add DISTINCTROW keyword for MS Access (issue677).
* Improve parsing of CREATE TABLE AS SELECT (pr662, by chezou).

Bug Fixes

* Fix spelling of INDICATOR keyword (pr653, by ptld).
* Fix formatting error in EXTRACT function (issue562, issue670, pr676,
  by ecederstrand).
* Fix bad parsing of create table statements that use lower case (issue217,
  pr642, by mrmasterplan).
* Handle backtick as valid quote char (issue628, pr629, by codenamelxl).
* Allow any unicode character as valid identifier name (issue641).

Other

* Update github actions to test on Python 3.10 as well (pr661, by cclaus).

Release 0.4.2 (Sep 10, 2021)
----------------------------

Notable Changes

* IMPORTANT: This release fixes a security vulnerability in the strip
  comments filter. In this filter a regular expression that was vulnerable
  to ReDoS (Regular Expression Denial of Service) was used. See the security
  advisory for details:
  https://github.com/andialbrecht/sqlparse/security/advisories/GHSA-p5w8-wqhj-9hhf
  The vulnerability was discovered by @erik-krogh and @yoff from GitHub
  Security Lab (GHSL). Thanks for reporting!

Enhancements

* Add ELSIF as keyword (issue584).
* Add CONFLICT and ON_ERROR_STOP keywords (pr595, by j-martin).

Bug Fixes

* Fix parsing of backticks (issue588).
* Fix parsing of scientific numbers (issue399).

Release 0.4.1 (Oct 08, 2020)
----------------------------

Bug Fixes

* Just removed a debug print statement, sorry...

Release 0.4.0 (Oct 07, 2020)
----------------------------

Notable Changes

* Remove support for end-of-life Python 2.7 and 3.4. Python 3.5+ is now
  required.
* Remaining strings that only consist of whitespace are not treated as
  statements anymore. Code that ignored the last element from
  sqlparse.split() should be updated accordingly, since that function now
  doesn't return an empty string as the last element in some cases
  (issue496).

Enhancements

* Add WINDOW keyword (pr579 by ali-tny).
* Add RLIKE keyword (pr582 by wjones1).

Bug Fixes

* Improved parsing of IN(...) statements (issue566, pr567 by hurcy).
* Preserve line breaks when removing comments (issue484).
* Fix parsing error when using square bracket notation (issue583).
* Fix splitting when using DECLARE ... HANDLER (issue581).
* Fix splitting of statements using CASE ... WHEN (issue580).
* Improve formatting of type casts in parentheses.
* Stabilize formatting of invalid SQL statements.

Release 0.3.1 (Feb 29, 2020)
----------------------------

Enhancements

* Add HQL keywords (pr475, by matwalk).
* Add support for time zone casts (issue489).
* Enhance formatting of AS keyword (issue507, by john-bodley).
* Stabilize grouping engine when parsing invalid SQL statements.

Bug Fixes

* Fix splitting of SQL with multiple statements inside parentheses
  (issue485, pr486 by win39).
* Correctly identify NULLS FIRST / NULLS LAST as keywords (issue487).
* Fix splitting of SQL statements that contain dollar signs in identifiers
  (issue491).
* Remove support for parsing double slash comments introduced in 0.3.0
  (issue456), as it had some side-effects with other dialects and doesn't
  seem to be widely used (issue476).
* Restrict detection of alias names to objects that actually could have an
  alias (issue455, adopted some parts of pr509 by john-bodley).
* Fix parsing of date/time literals (issue438, by vashek).
* Fix initialization of TokenList (issue499, pr505 by john-bodley).
* Fix parsing of LIKE (issue493, pr525 by dbczumar).
* Improve parsing of identifiers (pr527 by liulk).

Release 0.3.0 (Mar 11, 2019)
----------------------------

Notable Changes

* Remove support for Python 3.3.

Enhancements

* New formatting option "--indent_after_first" (pr345, by johshoff).
* New formatting option "--indent_columns" (pr393, by digitalarbeiter).
* Add UPSERT keyword (issue408).
* Strip multiple whitespace within parentheses (issue473, by john-bodley).
* Support double slash (//) comments (issue456, by theianrobertson).
* Support for Calcite temporal keywords (pr468, by john-bodley).

Bug Fixes

* Fix occasional IndexError (pr390, by circld, issue313).
* Fix incorrect splitting of strings containing new lines (pr396, by fredyw).
* Fix reindent issue for parenthesis (issue427, by fredyw).
* Fix from( parsing issue (issue446, by fredyw).
* Fix for get_real_name() to return correct name (issue369, by fredyw).
* Wrap function params when wrap_after is set (pr398, by soloman1124).
* Fix parsing of "WHEN name" clauses (pr418, by andrew deryabin).
* Add missing EXPLAIN keyword (issue421).
* Fix issue with strip_comments causing a syntax error (issue425, by fredyw).
* Fix formatting on INSERT which caused a staircase effect on values
  (issue329, by fredyw).
* Avoid formatting of psql commands (issue469).

Internal Changes

* Unify handling of GROUP BY/ORDER BY (pr457, by john-bodley).
* Remove unnecessary compat shim for bytes (pr453, by jdufresne).

Release 0.2.4 (Sep 27, 2017)
----------------------------

Enhancements

* Add more keywords for MySQL table options (pr328, pr333, by phdru).
* Add more PL/pgSQL keywords (pr357, by Demetrio92).
* Improve parsing of floats (pr330, by atronah).

Bug Fixes

* Fix parsing of MySQL table names starting with digits (issue337).
* Fix detection of identifiers using comparisons (issue327).
* Fix parsing of UNION ALL after WHERE (issue349).
* Fix handling of semicolon in assignments (issue359, issue358).

Release 0.2.3 (Mar 02, 2017)
----------------------------

Enhancements

* New command line option "--encoding" (by twang2218, pr317).
* Support CONCURRENTLY keyword (issue322, by rowanseymour).

Bug Fixes

* Fix some edge cases when parsing invalid SQL statements.
* Fix indentation of LIMIT (by romainr, issue320).
* Fix parsing of INTO keyword (issue324).

Internal Changes

* Several improvements regarding encodings.

Release 0.2.2 (Oct 22, 2016)
----------------------------

Enhancements

* Add comma_first option: when splitting lists, "comma first" notation is
  used (issue141).

Bug Fixes

* Fix parsing of incomplete AS (issue284, by vmuriart).
* Fix parsing of Oracle names containing dollars (issue291).
* Fix parsing of UNION ALL (issue294).
* Fix grouping of identifiers containing typecasts (issue297).
* Add Changelog to sdist again (issue302).

Internal Changes

* `is_whitespace` and `is_group` changed into properties.

Release 0.2.1 (Aug 13, 2016)
----------------------------

Notable Changes

* PostgreSQL: Function bodies are parsed as literal strings. Previously
  sqlparse assumed that all function bodies are parsable psql strings
  (see issue277).

Bug Fixes

* Fix a regression to parse streams again (issue273, reported and test case
  by gmccreight).
* Improve Python 2/3 compatibility when using parsestream (issue190,
  by phdru).
* Improve splitting of PostgreSQL functions (issue277).

Release 0.2.0 (Jul 20, 2016)
----------------------------

IMPORTANT: The supported Python versions have changed with this release.
sqlparse 0.2.x supports Python 2.7 and Python >= 3.3.

Thanks to the many contributors for writing bug reports and working on pull
requests who made this version possible!

Internal Changes

* sqlparse.SQLParseError was removed from the top-level module and moved to
  sqlparse.exceptions.
* sqlparse.sql.Token.to_unicode was removed.
* The signature of a filter's process method has changed from
  process(stack, stream) to process(stream). Stack was never used at all.
* Lots of code cleanups and modernization (thanks esp. to vmuriart!).
* Improved grouping performance. (sjoerdjob)

Enhancements

* Support WHILE loops (issue215, by shenlongxing).
* Better support for CTEs (issue217, by Andrew Tipton).
* Recognize USING as a keyword more consistently (issue236, by koljonen).
* Improve alignment of columns (issue207, issue235, by vmuriat).
* Add wrap_after option for better alignment when formatting lists
  (issue248, by Dennis Taylor).
* Add reindent-aligned option for alternate formatting (Adam Greenhall).
* Improved grouping of operations (issue211, by vmuriat).

Bug Fixes

* Leading whitespaces are now removed when format() is called with
  strip_whitespace=True (issue213, by shenlongxing).
* Fix typo in keywords list (issue229, by cbeloni).
* Fix parsing of functions in comparisons (issue230, by saaj).
* Fix grouping of identifiers (issue233).
* Fix parsing of CREATE TABLE statements (issue242, by Tenghuan).
* Minor bug fixes (issue101).
* Improve formatting of CASE WHEN constructs (issue164, by vmuriat).

Release 0.1.19 (Mar 07, 2016)
-----------------------------

Bug Fixes

* Fix IndexError when statement contains WITH clauses (issue205).

Release 0.1.18 (Oct 25, 2015)
-----------------------------

Bug Fixes

* Remove universal wheel support, added in 0.1.17 by mistake.

Release 0.1.17 (Oct 24, 2015)
-----------------------------

Enhancements

* Speed up parsing of large SQL statements (pull request: issue201, fixes
  the following issues: issue199, issue135, issue62, issue41, by Ryan
  Wooden).

Bug Fixes

* Fix another splitter bug regarding DECLARE (issue194).

Misc

* Packages on PyPI are signed from now on.

Release 0.1.16 (Jul 26, 2015)
-----------------------------

Bug Fixes

* Fix a regression in get_alias() introduced in 0.1.15 (issue185).
* Fix a bug in the splitter regarding DECLARE (issue193).
* sqlformat command line tool doesn't duplicate newlines anymore (issue191).
* Don't mix up MySQL comments starting with hash and MSSQL temp tables
  (issue192).
* Statement.get_type() now ignores comments at the beginning of a statement
  (issue186).

Release 0.1.15 (Apr 15, 2015)
-----------------------------

Bug Fixes

* Fix a regression for identifiers with square brackets notation (issue153,
  by darikg).
* Add missing SQL types (issue154, issue155, issue156, by jukebox).
* Fix parsing of multi-line comments (issue172, by JacekPliszka).
* Fix parsing of escaped backslashes (issue174, by caseyching).
* Fix parsing of identifiers starting with underscore (issue175).
* Fix misinterpretation of IN keyword (issue183).

Enhancements

* Improve formatting of HAVING statements.
* Improve parsing of inline comments (issue163).
* Group comments to parent object (issue128, issue160).
* Add double precision builtin (issue169, by darikg).
* Add support for square bracket array indexing (issue170, issue176,
  issue177 by darikg).
* Improve grouping of aliased elements (issue167, by darikg).
* Support comments starting with '#' character (issue178).

Release 0.1.14 (Nov 30, 2014)
-----------------------------

Bug Fixes

* Floats in UPDATE statements are now handled correctly (issue145).
* Properly handle string literals in comparisons (issue148, change proposed
  by aadis).
* Fix indentation when using tabs (issue146).

Enhancements

* Improved formatting in lists when newlines precede commas (issue140).

Release 0.1.13 (Oct 09, 2014)
-----------------------------

Bug Fixes

* Fix a regression in handling of NULL keywords introduced in 0.1.12.

Release 0.1.12 (Sep 20, 2014)
-----------------------------

Bug Fixes

* Fix handling of NULL keywords in aliased identifiers.
* Fix SerializerUnicode to split unquoted newlines (issue131, by Michael
  Schuller).
* Fix handling of modulo operators without spaces (by gavinwahl).

Enhancements

* Improve parsing of identifier lists containing placeholders.
* Speed up query parsing of unquoted lines (by Michael Schuller).

Release 0.1.11 (Feb 07, 2014)
-----------------------------

Bug Fixes

* Fix incorrect parsing of string literals containing line breaks
  (issue118).
* Fix typo in keywords, add MERGE, COLLECT keywords (issue122/124,
  by Cristian Orellana).
* Improve parsing of string literals in columns.
* Fix parsing and formatting of statements containing EXCEPT keyword.
* Fix Function.get_parameters() (issue126/127, by spigwitmer).

Enhancements

* Classify DML keywords (issue116, by Victor Hahn).
* Add missing FOREACH keyword.
* Grouping of BEGIN/END blocks.

Other

* Python 2.5 isn't automatically tested anymore; neither Travis nor Tox
  still support it out of the box.

Release 0.1.10 (Nov 02, 2013)
-----------------------------

Bug Fixes

* Removed buffered reading again; it obviously causes wrong parsing in some
  rare cases (issue114).
* Fix regression in setup.py introduced 10 months ago (issue115).

Enhancements

* Improved support for JOINs, by Alexander Beedie.

Release 0.1.9 (Sep 28, 2013)
----------------------------

Bug Fixes

* Fix a regression introduced in 0.1.5 where sqlparse didn't properly
  distinguish between single and double quoted strings when tagging
  identifiers (issue111).

Enhancements

* New option to truncate long string literals when formatting.
* Scientific numbers are parsed correctly (issue107).
* Support for arithmetic expressions (issue109, issue106; by prudhvi).

Release 0.1.8 (Jun 29, 2013)
----------------------------

Bug Fixes

* Whitespaces within certain keywords are now allowed (issue97, patch
  proposed by xcombelle).

Enhancements

* Improve parsing of assignments in UPDATE statements (issue90).
* Add STRAIGHT_JOIN statement (by Yago Riveiro).
* Function.get_parameters() now returns the parameter if only one parameter
  is given (issue94, by wayne.wuw).
* sqlparse.split() now removes leading and trailing whitespaces from split
  statements.
* Add USE as keyword token (by mulos).
* Improve parsing of PEP249-style placeholders (issue103).

Release 0.1.7 (Apr 06, 2013)
----------------------------

Bug Fixes

* Fix Python 3 compatibility of sqlformat script (by Pi Delport).
* Fix parsing of SQL statements that contain binary data (by Alexey
  Malyshev).
* Fix a bug where keywords were identified as aliased identifiers in
  invalid SQL statements.
* Fix parsing of identifier lists where identifiers are keywords too
  (issue10).

Enhancements

* Top-level API functions now accept an encoding keyword to parse statements
  in certain encodings more reliably (issue20).
* Improve parsing speed when SQL contains CLOBs or BLOBs (issue86).
* Improve formatting of ORDER BY clauses (issue89).
* Formatter now tries to detect runaway indentations caused by parsing
  errors or invalid SQL statements. When re-indenting such statements the
  formatter flips back to column 0 before going crazy.

Other

* Documentation updates.

Release 0.1.6 (Jan 01, 2013)
----------------------------

sqlparse is now compatible with Python 3 without any patches. The Python 3
version is generated during install by 2to3. You'll need distribute to
install sqlparse for Python 3.

Bug Fixes

* Fix parsing error with dollar-quoted procedure bodies (issue83).

Other

* Documentation updates.
* Test suite now uses tox and pytest.
* py3k fixes (by vthriller).
* py3k fixes in setup.py (by Florian Bauer).
* setup.py now requires distribute (by Florian Bauer).

Release 0.1.5 (Nov 13, 2012)
----------------------------

Bug Fixes

* Improve handling of quoted identifiers (issue78).
* Improve grouping and formatting of identifiers with operators (issue53).
* Improve grouping and formatting of concatenated strings (issue53).
* Improve handling of varchar() (by Mike Amy).
* Clean up handling of various SQL elements.
* Switch to pytest and clean up tests.
* Several minor fixes.

Other

* Deprecate sqlparse.SQLParseError. Please use
  sqlparse.exceptions.SQLParseError instead.
* Add caching to speed up processing.
* Add experimental filters for token processing.
* Add sqlformat.parsestream (by quest).

Release 0.1.4 (Apr 20, 2012)
----------------------------

Bug Fixes

* Avoid "staircase" effects when identifiers, functions, placeholders or
  keywords are mixed in identifier lists (issue45, issue49, issue52) and
  when asterisks are used as operators (issue58).
* Make keyword detection more strict (issue47).
* Improve handling of CASE statements (issue46).
* Fix statement splitting when parsing recursive statements (issue57,
  thanks to piranna).
* Fix for negative numbers (issue56, thanks to kevinjqiu).
* Pretty format comments in identifier lists (issue59).
* Several minor bug fixes and improvements.

Release 0.1.3 (Jul 29, 2011)
----------------------------

Bug Fixes

* Improve parsing of floats (thanks to Kris).
* When formatting a statement a space before LIMIT was removed (issue35).
* Fix strip_comments flag (issue38, reported by ooberm...@gmail.com).
* Avoid parsing names as keywords (issue39, reported by djo...@taket.org).
* Make sure identifier lists in subselects are grouped (issue40, reported
  by djo...@taket.org).
* Split statements with IF as functions correctly (issue33 and issue29,
  reported by charles....@unige.ch).
* Relax detection of keywords, esp. when used as function names (issue36,
  nyuhu...@gmail.com).
* Don't treat single characters as keywords (issue32).
* Improve parsing of stand-alone comments (issue26).
* Detection of placeholders in parameterized queries (issue22, reported by
  Glyph Lefkowitz).
* Add parsing of MS Access column names with braces (issue27, reported by
  frankz...@gmail.com).

Other

* Replace Django by Flask in App Engine frontend (issue11).

Release 0.1.2 (Nov 23, 2010)
----------------------------

Bug Fixes

* Fixed incorrect detection of keyword fragments embedded in names (issue7,
  reported and initial patch by andyboyko).
* Stricter detection of identifier aliases (issue8, reported by estama).
* WHERE grouping consumed closing parenthesis (issue9, reported by estama).
* Fixed an issue with trailing whitespaces (reported by Kris).
* Better detection of escaped single quotes (issue13, reported by Martin
  Brochhaus, patch by bluemaro with test case by Dan Carley).
* Ignore identifiers in double quotes when changing cases (issue21).
* Lots of minor fixes targeting encoding, indentation, statement parsing
  and more (issues 12, 14, 15, 16, 18, 19).
* Code cleanup with a pinch of refactoring.

Release 0.1.1 (May 6, 2009)
---------------------------

Bug Fixes

* Lexer preserves original line breaks (issue1).
* Improved identifier parsing: backtick quotes, wildcards, T-SQL variables
  prefixed with @.
* Improved parsing of identifier lists (issue2).
* Recursive recognition of AS (issue4) and CASE.
* Improved support for UPDATE statements.

Other

* Code cleanup and better test coverage.

Release 0.1.0 (Apr 8, 2009)
---------------------------

Initial release.

sqlparse-0.5.4/CLAUDE.md

@AGENTS.md

sqlparse-0.5.4/CONTRIBUTING.md

# Contributing to `sqlparse`

Thanks for your interest in contributing to the `sqlparse` project!

All contributors are expected to follow the
[Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/).

Head over to the
[Discussions Page](https://github.com/andialbrecht/sqlparse/discussions)
if you have any questions.

We're still working on a more elaborate developer guide.
sqlparse-0.5.4/Makefile

# Makefile to simplify some common development tasks.
# Run 'make help' for a list of commands.

PYTHON=`which python`

default: help

help:
	@echo "Available commands:"
	@sed -n '/^[a-zA-Z0-9_.]*:/s/:.*//p'

(remainder of archive: a machine-generated package lock file with conda and PyPI pins for the development environments; contents omitted)
>=10.13 license: BSD-2-Clause license_family: BSD purls: [] size: 77667 timestamp: 1748393757154 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/libmpdec-4.0.0-h5505292_0.conda sha256: 0a1875fc1642324ebd6c4ac864604f3f18f57fbcf558a8264f6ced028a3c75b2 md5: 85ccccb47823dd9f7a99d2c7f530342f depends: - __osx >=11.0 license: BSD-2-Clause license_family: BSD purls: [] size: 71829 timestamp: 1748393749336 - conda: https://conda.anaconda.org/conda-forge/win-64/libmpdec-4.0.0-h2466b09_0.conda sha256: fc529fc82c7caf51202cc5cec5bb1c2e8d90edbac6d0a4602c966366efe3c7bf md5: 74860100b2029e2523cf480804c76b9b depends: - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 license: BSD-2-Clause license_family: BSD purls: [] size: 88657 timestamp: 1723861474602 - conda: https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hb9d3cd8_1.conda sha256: 927fe72b054277cde6cb82597d0fcf6baf127dcbce2e0a9d8925a68f1265eef5 md5: d864d34357c3b65a4b731f78c0801dc4 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 license: LGPL-2.1-only license_family: GPL purls: [] size: 33731 timestamp: 1750274110928 - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.50.4-h0c1763c_0.conda sha256: 6d9c32fc369af5a84875725f7ddfbfc2ace795c28f246dc70055a79f9b2003da md5: 0b367fad34931cb79e0d6b7e5c06bb1c depends: - __glibc >=2.17,<3.0.a0 - libgcc >=14 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 932581 timestamp: 1753948484112 - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.51.0-hee844dc_0.conda sha256: 4c992dcd0e34b68f843e75406f7f303b1b97c248d18f3c7c330bdc0bc26ae0b3 md5: 729a572a3ebb8c43933b30edcc628ceb depends: - __glibc >=2.17,<3.0.a0 - icu >=75.1,<76.0a0 - libgcc >=14 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 945576 timestamp: 1762299687230 - conda: https://conda.anaconda.org/conda-forge/osx-64/libsqlite-3.50.4-h39a8b3b_0.conda sha256: 466366b094c3eb4b1d77320530cbf5400e7a10ab33e4824c200147488eebf7a6 md5: 156bfb239b6a67ab4a01110e6718cbc4 depends: - __osx >=10.13 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 980121 timestamp: 1753948554003 - conda: https://conda.anaconda.org/conda-forge/osx-64/libsqlite-3.51.0-h86bffb9_0.conda sha256: ad151af8192c17591fad0b68c9ffb7849ad9f4be9da2020b38b8befd2c5f6f02 md5: 1ee9b74571acd6dd87e6a0f783989426 depends: - __osx >=10.13 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 986898 timestamp: 1762300146976 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/libsqlite-3.50.4-h4237e3c_0.conda sha256: 802ebe62e6bc59fc26b26276b793e0542cfff2d03c086440aeaf72fb8bbcec44 md5: 1dcb0468f5146e38fae99aef9656034b depends: - __osx >=11.0 - icu >=75.1,<76.0a0 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 902645 timestamp: 1753948599139 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/libsqlite-3.51.0-h8adb53f_0.conda sha256: b43d198f147f46866e5336c4a6b91668beef698bfba69d1706158460eadb2c1b md5: 5fb1945dbc6380e6fe7e939a62267772 depends: - __osx >=11.0 - icu >=75.1,<76.0a0 - libzlib >=1.3.1,<2.0a0 license: blessing purls: [] size: 909508 timestamp: 1762300078624 - conda: https://conda.anaconda.org/conda-forge/win-64/libsqlite-3.50.4-hf5d6505_0.conda sha256: 5dc4f07b2d6270ac0c874caec53c6984caaaa84bc0d3eb593b0edf3dc8492efa md5: ccb20d946040f86f0c05b644d5eadeca depends: - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 license: blessing purls: [] size: 1288499 timestamp: 1753948889360 - conda: https://conda.anaconda.org/conda-forge/win-64/libsqlite-3.51.0-hf5d6505_0.conda 
sha256: 2373bd7450693bd0f624966e1bee2f49b0bf0ffbc114275ed0a43cf35aec5b21 md5: d2c9300ebd2848862929b18c264d1b1e depends: - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 license: blessing purls: [] size: 1292710 timestamp: 1762299749044 - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-15.2.0-h934c35e_12.conda sha256: 2954f7b21ad6f0f1b9b5eabf0595039c425f6f6267087e58310dc4855fee8383 md5: b8ef46cab65ab6676c7d5c9581b17ebf depends: - __glibc >=2.17,<3.0.a0 - libgcc 15.2.0 he0feb66_12 constrains: - libstdcxx-ng ==15.2.0=*_12 license: GPL-3.0-only WITH GCC-exception-3.1 license_family: GPL purls: [] size: 5854408 timestamp: 1764036151142 - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-15.2.0-hdf11a46_12.conda sha256: 7540d3b3577b058962d110dfa08ec2a278254dd6f9397d33ad0ede7bf222094e md5: ac15e685fa88f7d070b60b396dd91017 depends: - libstdcxx 15.2.0 h934c35e_12 license: GPL-3.0-only WITH GCC-exception-3.1 license_family: GPL purls: [] size: 29230 timestamp: 1764036201717 - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.41.1-he9a06e4_0.conda sha256: 776e28735cee84b97e4d05dd5d67b95221a3e2c09b8b13e3d6dbe6494337d527 md5: af930c65e9a79a3423d6d36e265cef65 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=14 license: BSD-3-Clause license_family: BSD purls: [] size: 37087 timestamp: 1757334557450 - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.41.2-he9a06e4_0.conda sha256: e5ec6d2ad7eef538ddcb9ea62ad4346fde70a4736342c4ad87bd713641eb9808 md5: 80c07c68d2f6870250959dcc95b209d1 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=14 license: BSD-3-Clause license_family: BSD purls: [] size: 37135 timestamp: 1758626800002 - conda: https://conda.anaconda.org/conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda sha256: 6ae68e0b86423ef188196fff6207ed0c8195dd84273cb5623b85aa08033a410c md5: 5aa797f8787fe7a17d1b0821485b5adc depends: - libgcc-ng >=12 license: LGPL-2.1-or-later purls: [] size: 100393 timestamp: 1702724383534 - conda: https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda sha256: d4bfe88d7cb447768e31650f06257995601f89076080e76df55e3112d4e47dc4 md5: edb0dca6bc32e4f4789199455a1dbeb8 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 constrains: - zlib 1.3.1 *_2 license: Zlib license_family: Other purls: [] size: 60963 timestamp: 1727963148474 - conda: https://conda.anaconda.org/conda-forge/osx-64/libzlib-1.3.1-hd23fc13_2.conda sha256: 8412f96504fc5993a63edf1e211d042a1fd5b1d51dedec755d2058948fcced09 md5: 003a54a4e32b02f7355b50a837e699da depends: - __osx >=10.13 constrains: - zlib 1.3.1 *_2 license: Zlib license_family: Other purls: [] size: 57133 timestamp: 1727963183990 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/libzlib-1.3.1-h8359307_2.conda sha256: ce34669eadaba351cd54910743e6a2261b67009624dbc7daeeafdef93616711b md5: 369964e85dc26bfe78f41399b366c435 depends: - __osx >=11.0 constrains: - zlib 1.3.1 *_2 license: Zlib license_family: Other purls: [] size: 46438 timestamp: 1727963202283 - conda: https://conda.anaconda.org/conda-forge/win-64/libzlib-1.3.1-h2466b09_2.conda sha256: ba945c6493449bed0e6e29883c4943817f7c79cbff52b83360f7b341277c6402 md5: 41fbfac52c601159df6c01f875de31b9 depends: - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - zlib 1.3.1 *_2 license: Zlib license_family: Other purls: [] size: 55476 timestamp: 1727963768015 - pypi: 
https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl name: markupsafe version: 3.0.2 sha256: 15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396 requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl name: markupsafe version: 3.0.2 sha256: e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl name: markupsafe version: 3.0.2 sha256: f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430 requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl name: markupsafe version: 3.0.2 sha256: ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd requires_python: '>=3.9' - conda: https://conda.anaconda.org/conda-forge/noarch/mccabe-0.7.0-pyhd8ed1ab_0.tar.bz2 sha256: 0466ad9490b761e9a8c57fab574fc099136b45fa19a0746ce33acdeb2a84766b md5: 34fc335fc50eef0b5ea708f2b5f54e0c depends: - python >=3.6 license: MIT license_family: MIT purls: - pkg:pypi/mccabe?source=hash-mapping size: 10909 timestamp: 1643049714491 - conda: https://conda.anaconda.org/conda-forge/noarch/mccabe-0.7.0-pyhd8ed1ab_1.conda sha256: 9b0037171dad0100f0296699a11ae7d355237b55f42f9094aebc0f41512d96a1 md5: 827064ddfe0de2917fb29f1da4f8f533 depends: - python >=3.9 license: MIT license_family: MIT purls: - pkg:pypi/mccabe?source=hash-mapping size: 12934 timestamp: 1733216573915 - conda: https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.5-h2d0b736_3.conda sha256: 3fde293232fa3fca98635e1167de6b7c7fda83caf24b9d6c91ec9eefb4f4d586 md5: 47e340acb35de30501a76c7c799c41d7 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 license: X11 AND BSD-3-Clause purls: [] size: 891641 timestamp: 1738195959188 - conda: https://conda.anaconda.org/conda-forge/osx-64/ncurses-6.5-h0622a9a_3.conda sha256: ea4a5d27ded18443749aefa49dc79f6356da8506d508b5296f60b8d51e0c4bd9 md5: ced34dd9929f491ca6dab6a2927aff25 depends: - __osx >=10.13 license: X11 AND BSD-3-Clause purls: [] size: 822259 timestamp: 1738196181298 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/ncurses-6.5-h5e97a16_3.conda sha256: 2827ada40e8d9ca69a153a45f7fd14f32b2ead7045d3bbb5d10964898fe65733 md5: 068d497125e4bf8a66bf707254fff5ae depends: - __osx >=11.0 license: X11 AND BSD-3-Clause purls: [] size: 797030 timestamp: 1738196177597 - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.5.3-h26f9b46_1.conda sha256: 0572be1b7d3c4f4c288bb8ab1cb6007b5b8b9523985b34b862b5222dea3c45f5 md5: 4fc6c4c88da64c0219c0c6c0408cedd4 depends: - __glibc >=2.17,<3.0.a0 - ca-certificates - libgcc >=14 license: Apache-2.0 license_family: Apache purls: [] size: 3128517 timestamp: 1758597915858 - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.6.0-h26f9b46_0.conda sha256: a47271202f4518a484956968335b2521409c8173e123ab381e775c358c67fe6d md5: 9ee58d5c534af06558933af3c845a780 depends: - __glibc >=2.17,<3.0.a0 - ca-certificates - libgcc >=14 license: Apache-2.0 license_family: Apache purls: [] size: 3165399 timestamp: 1762839186699 - conda: 
https://conda.anaconda.org/conda-forge/osx-64/openssl-3.5.3-h230baf5_1.conda sha256: 8eeb0d7e01784c1644c93947ba5e6e55d79f9f9c8dd53b33a6523efb93afd56c md5: f601470d724024fec8dbb98a2dd5b39c depends: - __osx >=10.13 - ca-certificates license: Apache-2.0 license_family: Apache purls: [] size: 2742974 timestamp: 1758599496115 - conda: https://conda.anaconda.org/conda-forge/osx-64/openssl-3.6.0-h230baf5_0.conda sha256: 36fe9fb316be22fcfb46d5fa3e2e85eec5ef84f908b7745f68f768917235b2d5 md5: 3f50cdf9a97d0280655758b735781096 depends: - __osx >=10.13 - ca-certificates license: Apache-2.0 license_family: Apache purls: [] size: 2778996 timestamp: 1762840724922 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/openssl-3.5.3-h5503f6c_1.conda sha256: d5499ee2611a0ca9d84e9d60a5978d1f17350e94915c89026f5d9346ccf0a987 md5: 4b23b1e2aa9d81b16204e1304241ccae depends: - __osx >=11.0 - ca-certificates license: Apache-2.0 license_family: Apache purls: [] size: 3069376 timestamp: 1758598263612 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/openssl-3.6.0-h5503f6c_0.conda sha256: ebe93dafcc09e099782fe3907485d4e1671296bc14f8c383cb6f3dfebb773988 md5: b34dc4172653c13dcf453862f251af2b depends: - __osx >=11.0 - ca-certificates license: Apache-2.0 license_family: Apache purls: [] size: 3108371 timestamp: 1762839712322 - conda: https://conda.anaconda.org/conda-forge/win-64/openssl-3.5.3-h725018a_1.conda sha256: 72dc204b0d59a7262bc77ca0e86cba11cbc6706cb9b4d6656fe7fab9593347c9 md5: c84884e2c1f899de9a895a1f0b7c9cd8 depends: - ca-certificates - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 license: Apache-2.0 license_family: Apache purls: [] size: 9276051 timestamp: 1758599639304 - conda: https://conda.anaconda.org/conda-forge/win-64/openssl-3.6.0-h725018a_0.conda sha256: 6d72d6f766293d4f2aa60c28c244c8efed6946c430814175f959ffe8cab899b3 md5: 84f8fb4afd1157f59098f618cd2437e4 depends: - ca-certificates - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 license: Apache-2.0 license_family: Apache purls: [] size: 9440812 timestamp: 1762841722179 - pypi: https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl name: packaging version: '25.0' sha256: 29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484 requires_python: '>=3.8' - conda: https://conda.anaconda.org/conda-forge/noarch/packaging-25.0-pyh29332c3_1.conda sha256: 289861ed0c13a15d7bbb408796af4de72c2fe67e2bcb0de98f4c3fce259d7991 md5: 58335b26c38bf4a20f399384c33cbcf9 depends: - python >=3.8 - python license: Apache-2.0 license_family: APACHE purls: - pkg:pypi/packaging?source=hash-mapping size: 62477 timestamp: 1745345660407 - conda: https://conda.anaconda.org/conda-forge/noarch/pluggy-1.5.0-pyhd8ed1ab_0.conda sha256: 33eaa3359948a260ebccf9cdc2fd862cea5a6029783289e13602d8e634cd9a26 md5: d3483c8fc2dc2cc3f5cf43e26d60cabf depends: - python >=3.8 license: MIT license_family: MIT purls: - pkg:pypi/pluggy?source=hash-mapping size: 23815 timestamp: 1713667175451 - conda: https://conda.anaconda.org/conda-forge/noarch/pluggy-1.6.0-pyhd8ed1ab_0.conda sha256: a8eb555eef5063bbb7ba06a379fa7ea714f57d9741fe0efdb9442dbbc2cccbcc md5: 7da7ccd349dbf6487a7778579d2bb971 depends: - python >=3.9 license: MIT license_family: MIT purls: - pkg:pypi/pluggy?source=hash-mapping size: 24246 timestamp: 1747339794916 - conda: https://conda.anaconda.org/conda-forge/noarch/pycodestyle-2.12.1-pyhd8ed1ab_0.conda sha256: 
ca548aa380edcc1a6e96893c0d870de9e22a7b0d4619ffa426875e6443a2044f md5: 72453e39709f38d0494d096bb5f678b7 depends: - python >=3.8 license: MIT license_family: MIT purls: - pkg:pypi/pycodestyle?source=hash-mapping size: 34215 timestamp: 1722846854518 - conda: https://conda.anaconda.org/conda-forge/noarch/pycodestyle-2.14.0-pyhd8ed1ab_0.conda sha256: 1950f71ff44e64163e176b1ca34812afc1a104075c3190de50597e1623eb7d53 md5: 85815c6a22905c080111ec8d56741454 depends: - python >=3.9 license: MIT license_family: MIT purls: - pkg:pypi/pycodestyle?source=hash-mapping size: 35182 timestamp: 1750616054854 - conda: https://conda.anaconda.org/conda-forge/noarch/pyflakes-3.2.0-pyhd8ed1ab_0.conda sha256: b1582410fcfa30b3597629e39b688ead87833c4a64f7c4637068f80aa1411d49 md5: 0cf7fef6aa123df28adb21a590065e3d depends: - python ==2.7.*|>=3.5 license: MIT license_family: MIT purls: - pkg:pypi/pyflakes?source=hash-mapping size: 58654 timestamp: 1704424729210 - conda: https://conda.anaconda.org/conda-forge/noarch/pyflakes-3.4.0-pyhd8ed1ab_0.conda sha256: 4b6fb3f7697b4e591c06149671699777c71ca215e9ec16d5bd0767425e630d65 md5: dba204e749e06890aeb3756ef2b1bf35 depends: - python >=3.9 license: MIT license_family: MIT purls: - pkg:pypi/pyflakes?source=hash-mapping size: 59592 timestamp: 1750492011671 - pypi: https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl name: pygments version: 2.19.2 sha256: 86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b requires_dist: - colorama>=0.4.6 ; extra == 'windows-terminal' requires_python: '>=3.8' - conda: https://conda.anaconda.org/conda-forge/noarch/pygments-2.19.2-pyhd8ed1ab_0.conda sha256: 5577623b9f6685ece2697c6eb7511b4c9ac5fb607c9babc2646c811b428fd46a md5: 6b6ece66ebcae2d5f326c77ef2c5a066 depends: - python >=3.9 license: BSD-2-Clause license_family: BSD purls: - pkg:pypi/pygments?source=hash-mapping size: 889287 timestamp: 1750615908735 - pypi: https://files.pythonhosted.org/packages/bd/24/12818598c362d7f300f18e74db45963dbcb85150324092410c8b49405e42/pyproject_hooks-1.2.0-py3-none-any.whl name: pyproject-hooks version: 1.2.0 sha256: 9e5c6bfa8dcc30091c74b0cf803c81fdd29d94f01992a7707bc97babb1141913 requires_python: '>=3.7' - conda: https://conda.anaconda.org/conda-forge/noarch/pytest-8.3.4-pyhd8ed1ab_0.conda sha256: 254256beab3dcf29907fbdccee6fbbb3371e9ac3782d2b1b5864596a0317818e md5: ff8f2ef7f2636906b3781d0cf92388d0 depends: - colorama - exceptiongroup >=1.0.0rc8 - iniconfig - packaging - pluggy <2,>=1.5 - python >=3.8 - tomli >=1 constrains: - pytest-faulthandler >=2 license: MIT license_family: MIT purls: - pkg:pypi/pytest?source=hash-mapping size: 259634 timestamp: 1733087755165 - conda: https://conda.anaconda.org/conda-forge/noarch/pytest-8.4.1-pyhd8ed1ab_0.conda sha256: 93e267e4ec35353e81df707938a6527d5eb55c97bf54c3b87229b69523afb59d md5: a49c2283f24696a7b30367b7346a0144 depends: - colorama >=0.4 - exceptiongroup >=1 - iniconfig >=1 - packaging >=20 - pluggy >=1.5,<2 - pygments >=2.7.2 - python >=3.9 - tomli >=1 constrains: - pytest-faulthandler >=2 license: MIT license_family: MIT purls: - pkg:pypi/pytest?source=hash-mapping size: 276562 timestamp: 1750239526127 - conda: https://conda.anaconda.org/conda-forge/noarch/pytest-8.4.2-pyhd8ed1ab_0.conda sha256: 41053d9893e379a3133bb9b557b98a3d2142fca474fb6b964ba5d97515f78e2d md5: 1f987505580cb972cf28dc5f74a0f81b depends: - colorama >=0.4 - exceptiongroup >=1 - iniconfig >=1 - packaging >=20 - pluggy >=1.5,<2 - pygments >=2.7.2 - python 
>=3.10 - tomli >=1 constrains: - pytest-faulthandler >=2 license: MIT license_family: MIT purls: - pkg:pypi/pytest?source=compressed-mapping size: 276734 timestamp: 1757011891753 - conda: https://conda.anaconda.org/conda-forge/noarch/pytest-9.0.1-pyhcf101f3_0.conda sha256: 7f25f71e4890fb60a4c4cb4563d10acf2d741804fec51e9b85a6fd97cd686f2f md5: fa7f71faa234947d9c520f89b4bda1a2 depends: - pygments >=2.7.2 - python >=3.10 - iniconfig >=1.0.1 - packaging >=22 - pluggy >=1.5,<2 - tomli >=1 - colorama >=0.4 - exceptiongroup >=1 - python constrains: - pytest-faulthandler >=2 license: MIT license_family: MIT purls: - pkg:pypi/pytest?source=compressed-mapping size: 299017 timestamp: 1763049198670 - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.10.18-hd6af730_0_cpython.conda sha256: 4111e5504fa4f4fb431d3a73fa606daccaf23a5a1da0f17a30db70ffad9336a7 md5: 4ea0c77cdcb0b81813a0436b162d7316 depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - libgcc >=13 - liblzma >=5.8.1,<6.0a0 - libnsl >=2.0.1,<2.1.0a0 - libsqlite >=3.50.0,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libxcrypt >=4.4.36 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.10.* *_cp310 license: Python-2.0 purls: [] size: 25042108 timestamp: 1749049293621 - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.11.13-h9e4cc4f_0_cpython.conda sha256: 9979a7d4621049388892489267139f1aa629b10c26601ba5dce96afc2b1551d4 md5: 8c399445b6dc73eab839659e6c7b5ad1 depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - libgcc >=13 - liblzma >=5.8.1,<6.0a0 - libnsl >=2.0.1,<2.1.0a0 - libsqlite >=3.50.0,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libxcrypt >=4.4.36 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.11.* *_cp311 license: Python-2.0 purls: [] size: 30629559 timestamp: 1749050021812 - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.11-h9e4cc4f_0_cpython.conda sha256: 6cca004806ceceea9585d4d655059e951152fc774a471593d4f5138e6a54c81d md5: 94206474a5608243a10c92cefbe0908f depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - libgcc >=13 - liblzma >=5.8.1,<6.0a0 - libnsl >=2.0.1,<2.1.0a0 - libsqlite >=3.50.0,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libxcrypt >=4.4.36 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.12.* *_cp312 license: Python-2.0 purls: [] size: 31445023 timestamp: 1749050216615 - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.13.7-h2b335a9_100_cp313.conda build_number: 100 sha256: 16cc30a5854f31ca6c3688337d34e37a79cdc518a06375fe3482ea8e2d6b34c8 md5: 724dcf9960e933838247971da07fe5cf depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.1,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - libgcc >=14 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.2,<4.0a0 - python_abi 3.13.* *_cp313 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata license: Python-2.0 purls: [] size: 33583088 
timestamp: 1756911465277 python_site_packages_path: lib/python3.13/site-packages - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.14.0-h32b2ec7_102_cp314.conda build_number: 102 sha256: 76d750045b94fded676323bfd01975a26a474023635735773d0e4d80aaa72518 md5: 0a19d2cc6eb15881889b0c6fa7d6a78d depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.1,<3.0a0 - libffi >=3.5.2,<3.6.0a0 - libgcc >=14 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libuuid >=2.41.2,<3.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.4,<4.0a0 - python_abi 3.14.* *_cp314 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - zstd >=1.5.7,<1.6.0a0 license: Python-2.0 purls: [] size: 36681389 timestamp: 1761176838143 python_site_packages_path: lib/python3.14/site-packages - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.8.20-h4a871b0_2_cpython.conda build_number: 2 sha256: 8043dcdb29e1e026d0def1056620d81b24c04f71fd98cc45888c58373b479845 md5: 05ffff2f44ad60b94ecb53d029c6bdf7 depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libffi >=3.4,<4.0a0 - libgcc >=13 - libnsl >=2.0.1,<2.1.0a0 - libsqlite >=3.46.1,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libxcrypt >=4.4.36 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.3.2,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - xz >=5.2.6,<6.0a0 constrains: - python_abi 3.8.* *_cp38 license: Python-2.0 purls: [] size: 22176012 timestamp: 1727719857908 - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.9.23-hc30ae73_0_cpython.conda sha256: dcfc417424b21ffca70dddf7a86ef69270b3e8d2040c748b7356a615470d5298 md5: 624ab0484356d86a54297919352d52b6 depends: - __glibc >=2.17,<3.0.a0 - bzip2 >=1.0.8,<2.0a0 - ld_impl_linux-64 >=2.36.1 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - libgcc >=13 - liblzma >=5.8.1,<6.0a0 - libnsl >=2.0.1,<2.1.0a0 - libsqlite >=3.50.0,<4.0a0 - libuuid >=2.38.1,<3.0a0 - libxcrypt >=4.4.36 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.9.* *_cp39 license: Python-2.0 purls: [] size: 23677900 timestamp: 1749060753022 - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.10.18-h93e8a92_0_cpython.conda sha256: 6a8d4122fa7406d31919eee6cf8e0185f4fb13596af8fdb7c7ac46d397b02de8 md5: 00299cefe3c38a8e200db754c4f025c4 depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.10.* *_cp310 license: Python-2.0 purls: [] size: 12921103 timestamp: 1749048830353 - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.11.13-h9ccd52b_0_cpython.conda sha256: d8e15db837c10242658979bc475298059bd6615524f2f71365ab8e54fbfea43c md5: 6e28c31688c6f1fdea3dc3d48d33e1c0 depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.11.* *_cp311 license: Python-2.0 purls: [] size: 15423460 timestamp: 1749049420299 - conda: 
https://conda.anaconda.org/conda-forge/osx-64/python-3.12.11-h9ccd52b_0_cpython.conda sha256: ebda5b5e8e25976013fdd81b5ba253705b076741d02bdc8ab32763f2afb2c81b md5: 06049132ecd09d0c1dc3d54d93cf1d5d depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.12.* *_cp312 license: Python-2.0 purls: [] size: 13571569 timestamp: 1749049058713 - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.13.7-h5eba815_100_cp313.conda build_number: 100 sha256: 581e4db7462c383fbb64d295a99a3db73217f8c24781cbe7ab583ff9d0305968 md5: 1759e1c9591755521bd50489756a599d depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.2,<4.0a0 - python_abi 3.13.* *_cp313 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata license: Python-2.0 purls: [] size: 12575616 timestamp: 1756911460182 python_site_packages_path: lib/python3.13/site-packages - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.14.0-hf88997e_102_cp314.conda build_number: 102 sha256: 2470866eee70e75d6be667aa537424b63f97c397a0a90f05f2bab347b9ed5a51 md5: 7917d1205eed3e72366a3397dca8a2af depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.5.2,<3.6.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.4,<4.0a0 - python_abi 3.14.* *_cp314 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - zstd >=1.5.7,<1.6.0a0 license: Python-2.0 purls: [] size: 14427639 timestamp: 1761177864469 python_site_packages_path: lib/python3.14/site-packages - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.8.20-h4f978b9_2_cpython.conda build_number: 2 sha256: 839c786f6f46eceb4b197d84ff96b134c273d60af4e55e9cbbdc08e489b6d78b md5: a6263abf89e3162d11e63141bf25d91f depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libffi >=3.4,<4.0a0 - libsqlite >=3.46.1,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.3.2,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - xz >=5.2.6,<6.0a0 constrains: - python_abi 3.8.* *_cp38 license: Python-2.0 purls: [] size: 11338027 timestamp: 1727718893331 - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.9.23-h8a7f3fd_0_cpython.conda sha256: ba02d0631c20870676c4757ad5dbf1f5820962e31fae63dccd5e570cb414be98 md5: 77a728b43b3d213da1566da0bd7b85e6 depends: - __osx >=10.13 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.9.* *_cp39 license: Python-2.0 purls: [] size: 11403008 timestamp: 1749060546150 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.10.18-h6cefb37_0_cpython.conda sha256: a9b9a74a98348019b28be674cc64c23d28297f3d0d9ebe079e81521b5ab5d853 md5: 2732121b53b3651565a84137c795605d depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - 
openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.10.* *_cp310 license: Python-2.0 purls: [] size: 12385306 timestamp: 1749048585934 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.11.13-hc22306f_0_cpython.conda sha256: 2c966293ef9e97e66b55747c7a97bc95ba0311ac1cf0d04be4a51aafac60dcb1 md5: 95facc4683b7b3b9cf8ae0ed10f30dce depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.11.* *_cp311 license: Python-2.0 purls: [] size: 14573820 timestamp: 1749048947732 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.12.11-hc22306f_0_cpython.conda sha256: cde8b944c2dc378a5afbc48028d0843583fd215493d5885a80f1b41de085552f md5: 9207ebad7cfbe2a4af0702c92fd031c4 depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.12.* *_cp312 license: Python-2.0 purls: [] size: 13009234 timestamp: 1749048134449 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.13.7-h5c937ed_100_cp313.conda build_number: 100 sha256: b9776cc330fa4836171a42e0e9d9d3da145d7702ba6ef9fad45e94f0f016eaef md5: 445d057271904b0e21e14b1fa1d07ba5 depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.2,<4.0a0 - python_abi 3.13.* *_cp313 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata license: Python-2.0 purls: [] size: 11926240 timestamp: 1756909724811 python_site_packages_path: lib/python3.13/site-packages - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.14.0-h40d2674_102_cp314.conda build_number: 102 sha256: 3ca1da026fe5df8a479d60e1d3ed02d9bc50fcbafd5f125d86abe70d21a34cc7 md5: a9ff09231c555da7e30777747318321b depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.5.2,<3.6.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.4,<4.0a0 - python_abi 3.14.* *_cp314 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - zstd >=1.5.7,<1.6.0a0 license: Python-2.0 purls: [] size: 13590581 timestamp: 1761177195716 python_site_packages_path: lib/python3.14/site-packages - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.8.20-h7d35d02_2_cpython.conda build_number: 2 sha256: cf8692e732697d47f0290ef83caa4b3115c7b277a3fb155b7de0f09fa1b5e27c md5: 29ed2994beffea2a256a7e14f9468df8 depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libffi >=3.4,<4.0a0 - libsqlite >=3.46.1,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.3.2,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - xz >=5.2.6,<6.0a0 constrains: - python_abi 3.8.* *_cp38 license: Python-2.0 purls: [] size: 11774160 timestamp: 1727718758277 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/python-3.9.23-h7139b31_0_cpython.conda sha256: f0ef9e79987c524b25cb5245770890b568db568ae66edc7fd65ec60bccf3e3df md5: 
6e3ac2810142219bd3dbf68ccf3d68cc depends: - __osx >=11.0 - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - ncurses >=6.5,<7.0a0 - openssl >=3.5.0,<4.0a0 - readline >=8.2,<9.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata constrains: - python_abi 3.9.* *_cp39 license: Python-2.0 purls: [] size: 10975082 timestamp: 1749060340280 - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.10.18-h8c5b53a_0_cpython.conda sha256: 548f9e542e72925d595c66191ffd17056f7c0029b7181e2d99dbef47e4f3f646 md5: f1775dab55c8a073ebd024bfb2f689c1 depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.0,<4.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - python_abi 3.10.* *_cp310 license: Python-2.0 purls: [] size: 15832933 timestamp: 1749048670944 - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.11.13-h3f84c4b_0_cpython.conda sha256: 723dbca1384f30bd2070f77dd83eefd0e8d7e4dda96ac3332fbf8fe5573a8abb md5: bedbb6f7bb654839719cd528f9b298ad depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.0,<4.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - python_abi 3.11.* *_cp311 license: Python-2.0 purls: [] size: 18242669 timestamp: 1749048351218 - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.12.11-h3f84c4b_0_cpython.conda sha256: b69412e64971b5da3ced0fc36f05d0eacc9393f2084c6f92b8f28ee068d83e2e md5: 6aa5e62df29efa6319542ae5025f4376 depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.0,<4.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - python_abi 3.12.* *_cp312 license: Python-2.0 purls: [] size: 15829289 timestamp: 1749047682640 - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.13.7-hdf00ec1_100_cp313.conda build_number: 100 sha256: b86b5b3a960de2fff0bb7e0932b50846b22b75659576a257b1872177aab444cd md5: 7cd6ebd1a32d4a5d99f8f8300c2029d5 depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.4.6,<3.5.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.2,<4.0a0 - python_abi 3.13.* *_cp313 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 license: Python-2.0 purls: [] size: 16386672 timestamp: 1756909324921 python_site_packages_path: Lib/site-packages - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.14.0-h4b44e0e_102_cp314.conda build_number: 102 sha256: 2b8c8fcafcc30690b4c5991ee28eb80c962e50e06ce7da03b2b302e2d39d6a81 md5: 3e1ce2fb0f277cebcae01a3c418eb5e2 depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.1,<3.0a0 - libffi >=3.5.2,<3.6.0a0 - liblzma >=5.8.1,<6.0a0 - libmpdec >=4.0.0,<5.0a0 - libsqlite >=3.50.4,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.4,<4.0a0 - python_abi 3.14.* *_cp314 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.3,<15 - vc14_runtime >=14.44.35208 - zstd >=1.5.7,<1.6.0a0 license: Python-2.0 purls: [] size: 16706286 timestamp: 1761175439068 
python_site_packages_path: Lib/site-packages - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.8.20-hfaddaf0_2_cpython.conda build_number: 2 sha256: 4cf5c93b625cc353b7bb20eb2f2840a2c24c76578ae425c017812d1b95c5225d md5: 4e181f484d292cb273fdf456e8dc7b4a depends: - bzip2 >=1.0.8,<2.0a0 - libffi >=3.4,<4.0a0 - libsqlite >=3.46.1,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.3.2,<4.0a0 - tk >=8.6.13,<8.7.0a0 - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 - xz >=5.2.6,<6.0a0 constrains: - python_abi 3.8.* *_cp38 license: Python-2.0 purls: [] size: 16152994 timestamp: 1727719830490 - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.9.23-h8c5b53a_0_cpython.conda sha256: 07b9b6dd5e0acee4d967e5263e01b76fae48596b6e0e6fb3733a587b5d0bcea5 md5: 2fd01874016cd5e3b9edccf52755082b depends: - bzip2 >=1.0.8,<2.0a0 - libexpat >=2.7.0,<3.0a0 - libffi >=3.4,<4.0a0 - liblzma >=5.8.1,<6.0a0 - libsqlite >=3.50.0,<4.0a0 - libzlib >=1.3.1,<2.0a0 - openssl >=3.5.0,<4.0a0 - tk >=8.6.13,<8.7.0a0 - tzdata - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - python_abi 3.9.* *_cp39 license: Python-2.0 purls: [] size: 16971365 timestamp: 1749059542957 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.10-8_cp310.conda build_number: 8 sha256: 7ad76fa396e4bde336872350124c0819032a9e8a0a40590744ff9527b54351c1 md5: 05e00f3b21e88bb3d658ac700b2ce58c constrains: - python 3.10.* *_cpython license: BSD-3-Clause license_family: BSD purls: [] size: 6999 timestamp: 1752805924192 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.11-8_cp311.conda build_number: 8 sha256: fddf123692aa4b1fc48f0471e346400d9852d96eeed77dbfdd746fa50a8ff894 md5: 8fcb6b0e2161850556231336dae58358 constrains: - python 3.11.* *_cpython license: BSD-3-Clause license_family: BSD purls: [] size: 7003 timestamp: 1752805919375 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.12-8_cp312.conda build_number: 8 sha256: 80677180dd3c22deb7426ca89d6203f1c7f1f256f2d5a94dc210f6e758229809 md5: c3efd25ac4d74b1584d2f7a57195ddf1 constrains: - python 3.12.* *_cpython license: BSD-3-Clause license_family: BSD purls: [] size: 6958 timestamp: 1752805918820 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.13-8_cp313.conda build_number: 8 sha256: 210bffe7b121e651419cb196a2a63687b087497595c9be9d20ebe97dd06060a7 md5: 94305520c52a4aa3f6c2b1ff6008d9f8 constrains: - python 3.13.* *_cp313 license: BSD-3-Clause license_family: BSD purls: [] size: 7002 timestamp: 1752805902938 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.14-8_cp314.conda build_number: 8 sha256: ad6d2e9ac39751cc0529dd1566a26751a0bf2542adb0c232533d32e176e21db5 md5: 0539938c55b6b1a59b560e843ad864a4 constrains: - python 3.14.* *_cp314 license: BSD-3-Clause license_family: BSD purls: [] size: 6989 timestamp: 1752805904792 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.8-8_cp38.conda build_number: 8 sha256: 83c22066a672ce0b16e693c84aa6d5efb68e02eff037a55e047d7095d0fdb5ca md5: 4f7b6e3de4f15cc44e0f93b39f07205d constrains: - python 3.8.* *_cpython license: BSD-3-Clause license_family: BSD purls: [] size: 6960 timestamp: 1752805923703 - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.9-8_cp39.conda build_number: 8 sha256: c3cffff954fea53c254f1a3aad1b1fccd4cc2a781efd383e6b09d1b06348c67b md5: c2f0c4bf417925c27b62ab50264baa98 constrains: - python 3.9.* *_cpython license: BSD-3-Clause license_family: BSD purls: [] size: 6999 timestamp: 
1752805917390 - conda: https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8c095d6_2.conda sha256: 2d6d0c026902561ed77cd646b5021aef2d4db22e57a5b0178dfc669231e06d2c md5: 283b96675859b20a825f8fa30f311446 depends: - libgcc >=13 - ncurses >=6.5,<7.0a0 license: GPL-3.0-only license_family: GPL purls: [] size: 282480 timestamp: 1740379431762 - conda: https://conda.anaconda.org/conda-forge/osx-64/readline-8.2-h7cca4af_2.conda sha256: 53017e80453c4c1d97aaf78369040418dea14cf8f46a2fa999f31bd70b36c877 md5: 342570f8e02f2f022147a7f841475784 depends: - ncurses >=6.5,<7.0a0 license: GPL-3.0-only license_family: GPL purls: [] size: 256712 timestamp: 1740379577668 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/readline-8.2-h1d1bf99_2.conda sha256: 7db04684d3904f6151eff8673270922d31da1eea7fa73254d01c437f49702e34 md5: 63ef3f6e6d6d5c589e64f11263dc5676 depends: - ncurses >=6.5,<7.0a0 license: GPL-3.0-only license_family: GPL purls: [] size: 252359 timestamp: 1740379663071 - pypi: https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl name: requests version: 2.32.5 sha256: 2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6 requires_dist: - charset-normalizer>=2,<4 - idna>=2.5,<4 - urllib3>=1.21.1,<3 - certifi>=2017.4.17 - pysocks>=1.5.6,!=1.5.7 ; extra == 'socks' - chardet>=3.0.2,<6 ; extra == 'use-chardet-on-py3' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/53/97/d2cbbaa10c9b826af0e10fdf836e1bf344d9f0abb873ebc34d1f49642d3f/roman_numerals_py-3.1.0-py3-none-any.whl name: roman-numerals-py version: 3.1.0 sha256: 9da2ad2fb670bcf24e81070ceb3be72f6c11c440d73bd579fbeca1e9f330954c requires_dist: - mypy==1.15.0 ; extra == 'lint' - ruff==0.9.7 ; extra == 'lint' - pyright==1.1.394 ; extra == 'lint' - pytest>=8 ; extra == 'test' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/c8/78/3565d011c61f5a43488987ee32b6f3f656e7f107ac2782dd57bdd7d91d9a/snowballstemmer-3.0.1-py3-none-any.whl name: snowballstemmer version: 3.0.1 sha256: 6cd7b3897da8d6c9ffb968a6781fa6532dce9c3618a4b127d920dab764a19064 requires_python: '!=3.0.*,!=3.1.*,!=3.2.*' - pypi: https://files.pythonhosted.org/packages/31/53/136e9eca6e0b9dc0e1962e2c908fbea2e5ac000c2a2fbd9a35797958c48b/sphinx-8.2.3-py3-none-any.whl name: sphinx version: 8.2.3 sha256: 4405915165f13521d875a8c29c8970800a0141c14cc5416a38feca4ea5d9b9c3 requires_dist: - sphinxcontrib-applehelp>=1.0.7 - sphinxcontrib-devhelp>=1.0.6 - sphinxcontrib-htmlhelp>=2.0.6 - sphinxcontrib-jsmath>=1.0.1 - sphinxcontrib-qthelp>=1.0.6 - sphinxcontrib-serializinghtml>=1.1.9 - jinja2>=3.1 - pygments>=2.17 - docutils>=0.20,<0.22 - snowballstemmer>=2.2 - babel>=2.13 - alabaster>=0.7.14 - imagesize>=1.3 - requests>=2.30.0 - roman-numerals-py>=1.0.0 - packaging>=23.0 - colorama>=0.4.6 ; sys_platform == 'win32' - sphinxcontrib-websupport ; extra == 'docs' - ruff==0.9.9 ; extra == 'lint' - mypy==1.15.0 ; extra == 'lint' - sphinx-lint>=0.9 ; extra == 'lint' - types-colorama==0.4.15.20240311 ; extra == 'lint' - types-defusedxml==0.7.0.20240218 ; extra == 'lint' - types-docutils==0.21.0.20241128 ; extra == 'lint' - types-pillow==10.2.0.20240822 ; extra == 'lint' - types-pygments==2.19.0.20250219 ; extra == 'lint' - types-requests==2.32.0.20241016 ; extra == 'lint' - types-urllib3==1.26.25.14 ; extra == 'lint' - pyright==1.1.395 ; extra == 'lint' - pytest>=8.0 ; extra == 'lint' - pypi-attestations==0.0.21 ; extra == 'lint' - betterproto==2.0.0b6 ; extra == 
'lint' - pytest>=8.0 ; extra == 'test' - pytest-xdist[psutil]>=3.4 ; extra == 'test' - defusedxml>=0.7.1 ; extra == 'test' - cython>=3.0 ; extra == 'test' - setuptools>=70.0 ; extra == 'test' - typing-extensions>=4.9 ; extra == 'test' requires_python: '>=3.11' - pypi: https://files.pythonhosted.org/packages/5d/85/9ebeae2f76e9e77b952f4b274c27238156eae7979c5421fba91a28f4970d/sphinxcontrib_applehelp-2.0.0-py3-none-any.whl name: sphinxcontrib-applehelp version: 2.0.0 sha256: 4cd3f0ec4ac5dd9c17ec65e9ab272c9b867ea77425228e68ecf08d6b28ddbdb5 requires_dist: - ruff==0.5.5 ; extra == 'lint' - mypy ; extra == 'lint' - types-docutils ; extra == 'lint' - sphinx>=5 ; extra == 'standalone' - pytest ; extra == 'test' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/35/7a/987e583882f985fe4d7323774889ec58049171828b58c2217e7f79cdf44e/sphinxcontrib_devhelp-2.0.0-py3-none-any.whl name: sphinxcontrib-devhelp version: 2.0.0 sha256: aefb8b83854e4b0998877524d1029fd3e6879210422ee3780459e28a1f03a8a2 requires_dist: - ruff==0.5.5 ; extra == 'lint' - mypy ; extra == 'lint' - types-docutils ; extra == 'lint' - sphinx>=5 ; extra == 'standalone' - pytest ; extra == 'test' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/0a/7b/18a8c0bcec9182c05a0b3ec2a776bba4ead82750a55ff798e8d406dae604/sphinxcontrib_htmlhelp-2.1.0-py3-none-any.whl name: sphinxcontrib-htmlhelp version: 2.1.0 sha256: 166759820b47002d22914d64a075ce08f4c46818e17cfc9470a9786b759b19f8 requires_dist: - ruff==0.5.5 ; extra == 'lint' - mypy ; extra == 'lint' - types-docutils ; extra == 'lint' - sphinx>=5 ; extra == 'standalone' - pytest ; extra == 'test' - html5lib ; extra == 'test' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/c2/42/4c8646762ee83602e3fb3fbe774c2fac12f317deb0b5dbeeedd2d3ba4b77/sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl name: sphinxcontrib-jsmath version: 1.0.1 sha256: 2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178 requires_dist: - pytest ; extra == 'test' - flake8 ; extra == 'test' - mypy ; extra == 'test' requires_python: '>=3.5' - pypi: https://files.pythonhosted.org/packages/27/83/859ecdd180cacc13b1f7e857abf8582a64552ea7a061057a6c716e790fce/sphinxcontrib_qthelp-2.0.0-py3-none-any.whl name: sphinxcontrib-qthelp version: 2.0.0 sha256: b18a828cdba941ccd6ee8445dbe72ffa3ef8cbe7505d8cd1fa0d42d3f2d5f3eb requires_dist: - ruff==0.5.5 ; extra == 'lint' - mypy ; extra == 'lint' - types-docutils ; extra == 'lint' - sphinx>=5 ; extra == 'standalone' - pytest ; extra == 'test' - defusedxml>=0.7.1 ; extra == 'test' requires_python: '>=3.9' - pypi: https://files.pythonhosted.org/packages/52/a7/d2782e4e3f77c8450f727ba74a8f12756d5ba823d81b941f1b04da9d033a/sphinxcontrib_serializinghtml-2.0.0-py3-none-any.whl name: sphinxcontrib-serializinghtml version: 2.0.0 sha256: 6e2cb0eef194e10c27ec0023bfeb25badbbb5868244cf5bc5bdc04e4464bf331 requires_dist: - ruff==0.5.5 ; extra == 'lint' - mypy ; extra == 'lint' - types-docutils ; extra == 'lint' - sphinx>=5 ; extra == 'standalone' - pytest ; extra == 'test' requires_python: '>=3.9' - pypi: ./ name: sqlparse version: 0.5.4.dev0 sha256: 110b003a0343a33422c279e622f160af5a8aa6fe79882cc7de91a438c6ad9603 requires_dist: - build ; extra == 'dev' - sphinx ; extra == 'doc' requires_python: '>=3.8' editable: true - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_ha0e22de_103.conda sha256: 1544760538a40bcd8ace2b1d8ebe3eb5807ac268641f8acdc18c69c5ebfeaf64 md5: 86bc20552bf46075e3d92b67f089172d depends: - __glibc 
>=2.17,<3.0.a0 - libgcc >=13 - libzlib >=1.3.1,<2.0a0 constrains: - xorg-libx11 >=1.8.12,<2.0a0 license: TCL license_family: BSD purls: [] size: 3284905 timestamp: 1763054914403 - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_hd72426e_102.conda sha256: a84ff687119e6d8752346d1d408d5cf360dee0badd487a472aa8ddedfdc219e1 md5: a0116df4f4ed05c303811a837d5b39d8 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 - libzlib >=1.3.1,<2.0a0 license: TCL license_family: BSD purls: [] size: 3285204 timestamp: 1748387766691 - conda: https://conda.anaconda.org/conda-forge/osx-64/tk-8.6.13-hf689a15_2.conda sha256: b24468006a96b71a5f4372205ea7ec4b399b0f2a543541e86f883de54cd623fc md5: 9864891a6946c2fe037c02fca7392ab4 depends: - __osx >=10.13 - libzlib >=1.3.1,<2.0a0 license: TCL license_family: BSD purls: [] size: 3259809 timestamp: 1748387843735 - conda: https://conda.anaconda.org/conda-forge/osx-64/tk-8.6.13-hf689a15_3.conda sha256: 0d0b6cef83fec41bc0eb4f3b761c4621b7adfb14378051a8177bd9bb73d26779 md5: bd9f1de651dbd80b51281c694827f78f depends: - __osx >=10.13 - libzlib >=1.3.1,<2.0a0 license: TCL license_family: BSD purls: [] size: 3262702 timestamp: 1763055085507 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/tk-8.6.13-h892fb3f_2.conda sha256: cb86c522576fa95c6db4c878849af0bccfd3264daf0cc40dd18e7f4a7bfced0e md5: 7362396c170252e7b7b0c8fb37fe9c78 depends: - __osx >=11.0 - libzlib >=1.3.1,<2.0a0 license: TCL license_family: BSD purls: [] size: 3125538 timestamp: 1748388189063 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/tk-8.6.13-h892fb3f_3.conda sha256: ad0c67cb03c163a109820dc9ecf77faf6ec7150e942d1e8bb13e5d39dc058ab7 md5: a73d54a5abba6543cb2f0af1bfbd6851 depends: - __osx >=11.0 - libzlib >=1.3.1,<2.0a0 license: TCL license_family: BSD purls: [] size: 3125484 timestamp: 1763055028377 - conda: https://conda.anaconda.org/conda-forge/win-64/tk-8.6.13-h2c6b04d_2.conda sha256: e3614b0eb4abcc70d98eae159db59d9b4059ed743ef402081151a948dce95896 md5: ebd0e761de9aa879a51d22cc721bd095 depends: - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 license: TCL license_family: BSD purls: [] size: 3466348 timestamp: 1748388121356 - conda: https://conda.anaconda.org/conda-forge/win-64/tk-8.6.13-h2c6b04d_3.conda sha256: 4581f4ffb432fefa1ac4f85c5682cc27014bcd66e7beaa0ee330e927a7858790 md5: 7cb36e506a7dba4817970f8adb6396f9 depends: - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 license: TCL license_family: BSD purls: [] size: 3472313 timestamp: 1763055164278 - conda: https://conda.anaconda.org/conda-forge/noarch/tomli-2.0.2-pyhd8ed1ab_0.conda sha256: 5e742ba856168b606ac3c814d247657b1c33b8042371f1a08000bdc5075bc0cc md5: e977934e00b355ff55ed154904044727 depends: - python >=3.7 license: MIT license_family: MIT purls: - pkg:pypi/tomli?source=hash-mapping size: 18203 timestamp: 1727974767524 - conda: https://conda.anaconda.org/conda-forge/noarch/tomli-2.2.1-pyhe01879c_2.conda sha256: 040a5a05c487647c089ad5e05ad5aff5942830db2a4e656f1e300d73436436f1 md5: 30a0a26c8abccf4b7991d590fe17c699 depends: - python >=3.9 - python license: MIT license_family: MIT purls: - pkg:pypi/tomli?source=compressed-mapping size: 21238 timestamp: 1753796677376 - conda: https://conda.anaconda.org/conda-forge/noarch/tomli-2.3.0-pyhcf101f3_0.conda sha256: cb77c660b646c00a48ef942a9e1721ee46e90230c7c570cdeb5a893b5cce9bff md5: d2732eb636c264dc9aa4cbee404b1a53 depends: - python >=3.10 - python license: MIT license_family: MIT purls: - pkg:pypi/tomli?source=compressed-mapping size: 20973 
timestamp: 1760014679845 - conda: https://conda.anaconda.org/conda-forge/noarch/typing_extensions-4.14.1-pyhe01879c_0.conda sha256: 4f52390e331ea8b9019b87effaebc4f80c6466d09f68453f52d5cdc2a3e1194f md5: e523f4f1e980ed7a4240d7e27e9ec81f depends: - python >=3.9 - python license: PSF-2.0 license_family: PSF purls: - pkg:pypi/typing-extensions?source=hash-mapping size: 51065 timestamp: 1751643513473 - conda: https://conda.anaconda.org/conda-forge/noarch/typing_extensions-4.15.0-pyhcf101f3_0.conda sha256: 032271135bca55aeb156cee361c81350c6f3fb203f57d024d7e5a1fc9ef18731 md5: 0caa1af407ecff61170c9437a808404d depends: - python >=3.10 - python license: PSF-2.0 license_family: PSF purls: - pkg:pypi/typing-extensions?source=compressed-mapping size: 51692 timestamp: 1756220668932 - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025b-h78e105d_0.conda sha256: 5aaa366385d716557e365f0a4e9c3fca43ba196872abbbe3d56bb610d131e192 md5: 4222072737ccff51314b5ece9c7d6f5a license: LicenseRef-Public-Domain purls: [] size: 122968 timestamp: 1742727099393 - conda: https://conda.anaconda.org/conda-forge/win-64/ucrt-10.0.26100.0-h57928b3_0.conda sha256: 3005729dce6f3d3f5ec91dfc49fc75a0095f9cd23bab49efb899657297ac91a5 md5: 71b24316859acd00bdb8b38f5e2ce328 constrains: - vc14_runtime >=14.29.30037 - vs2015_runtime >=14.29.30037 license: LicenseRef-MicrosoftWindowsSDK10 purls: [] size: 694692 timestamp: 1756385147981 - pypi: https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl name: urllib3 version: 2.5.0 sha256: e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc requires_dist: - brotli>=1.0.9 ; platform_python_implementation == 'CPython' and extra == 'brotli' - brotlicffi>=0.8.0 ; platform_python_implementation != 'CPython' and extra == 'brotli' - h2>=4,<5 ; extra == 'h2' - pysocks>=1.5.6,!=1.5.7,<2.0 ; extra == 'socks' - zstandard>=0.18.0 ; extra == 'zstd' requires_python: '>=3.9' - conda: https://conda.anaconda.org/conda-forge/win-64/vc-14.3-h2b53caa_32.conda sha256: 82250af59af9ff3c6a635dd4c4764c631d854feb334d6747d356d949af44d7cf md5: ef02bbe151253a72b8eda264a935db66 depends: - vc14_runtime >=14.42.34433 track_features: - vc14 license: BSD-3-Clause license_family: BSD purls: [] size: 18861 timestamp: 1760418772353 - conda: https://conda.anaconda.org/conda-forge/win-64/vc-14.3-h41ae7f8_31.conda sha256: cb357591d069a1e6cb74199a8a43a7e3611f72a6caed9faa49dbb3d7a0a98e0b md5: 28f4ca1e0337d0f27afb8602663c5723 depends: - vc14_runtime >=14.44.35208 track_features: - vc14 license: BSD-3-Clause license_family: BSD purls: [] size: 18249 timestamp: 1753739241465 - conda: https://conda.anaconda.org/conda-forge/win-64/vc14_runtime-14.44.35208-h818238b_31.conda sha256: af4b4b354b87a9a8d05b8064ff1ea0b47083274f7c30b4eb96bc2312c9b5f08f md5: 603e41da40a765fd47995faa021da946 depends: - ucrt >=10.0.20348.0 - vcomp14 14.44.35208 h818238b_31 constrains: - vs2015_runtime 14.44.35208.* *_31 license: LicenseRef-MicrosoftVisualCpp2015-2022Runtime license_family: Proprietary purls: [] size: 682424 timestamp: 1753739239305 - conda: https://conda.anaconda.org/conda-forge/win-64/vc14_runtime-14.44.35208-h818238b_32.conda sha256: e3a3656b70d1202e0d042811ceb743bd0d9f7e00e2acdf824d231b044ef6c0fd md5: 378d5dcec45eaea8d303da6f00447ac0 depends: - ucrt >=10.0.20348.0 - vcomp14 14.44.35208 h818238b_32 constrains: - vs2015_runtime 14.44.35208.* *_32 license: LicenseRef-MicrosoftVisualCpp2015-2022Runtime license_family: Proprietary purls: [] size: 
682706 timestamp: 1760418629729 - conda: https://conda.anaconda.org/conda-forge/win-64/vcomp14-14.44.35208-h818238b_31.conda sha256: 67b317b64f47635415776718d25170a9a6f9a1218c0f5a6202bfd687e07b6ea4 md5: a6b1d5c1fc3cb89f88f7179ee6a9afe3 depends: - ucrt >=10.0.20348.0 constrains: - vs2015_runtime 14.44.35208.* *_31 license: LicenseRef-MicrosoftVisualCpp2015-2022Runtime license_family: Proprietary purls: [] size: 113963 timestamp: 1753739198723 - conda: https://conda.anaconda.org/conda-forge/win-64/vcomp14-14.44.35208-h818238b_32.conda sha256: f3790c88fbbdc55874f41de81a4237b1b91eab75e05d0e58661518ff04d2a8a1 md5: 58f67b437acbf2764317ba273d731f1d depends: - ucrt >=10.0.20348.0 constrains: - vs2015_runtime 14.44.35208.* *_32 license: LicenseRef-MicrosoftVisualCpp2015-2022Runtime license_family: Proprietary purls: [] size: 114846 timestamp: 1760418593847 - conda: https://conda.anaconda.org/conda-forge/linux-64/xz-5.8.1-hbcc6ac9_2.conda sha256: 802725371682ea06053971db5b4fb7fbbcaee9cb1804ec688f55e51d74660617 md5: 68eae977d7d1196d32b636a026dc015d depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 - liblzma 5.8.1 hb9d3cd8_2 - liblzma-devel 5.8.1 hb9d3cd8_2 - xz-gpl-tools 5.8.1 hbcc6ac9_2 - xz-tools 5.8.1 hb9d3cd8_2 license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 23987 timestamp: 1749230104359 - conda: https://conda.anaconda.org/conda-forge/osx-64/xz-5.8.1-h357f2ed_2.conda sha256: 89248de6c9417522b6fec011dc26b81c25af731a31ba91e668f72f1b9aab05d7 md5: 7eee908c7df8478c1f35b28efa2e42b1 depends: - __osx >=10.13 - liblzma 5.8.1 hd471939_2 - liblzma-devel 5.8.1 hd471939_2 - xz-gpl-tools 5.8.1 h357f2ed_2 - xz-tools 5.8.1 hd471939_2 license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 24033 timestamp: 1749230223096 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/xz-5.8.1-h9a6d368_2.conda sha256: afb747cf017b67cc31d54c6e6c4bd1b1e179fe487a3d23a856232ed7fd0b099b md5: 39435c82e5a007ef64cbb153ecc40cfd depends: - __osx >=11.0 - liblzma 5.8.1 h39f12f2_2 - liblzma-devel 5.8.1 h39f12f2_2 - xz-gpl-tools 5.8.1 h9a6d368_2 - xz-tools 5.8.1 h39f12f2_2 license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 23995 timestamp: 1749230346887 - conda: https://conda.anaconda.org/conda-forge/win-64/xz-5.8.1-h208afaa_2.conda sha256: 22289a81da4698bb8d13ac032a88a4a1f49505b2303885e1add3d8bd1a7b56e6 md5: fb3fa84ea37de9f12cc8ba730cec0bdc depends: - liblzma 5.8.1 h2466b09_2 - liblzma-devel 5.8.1 h2466b09_2 - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 - xz-tools 5.8.1 h2466b09_2 license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 24430 timestamp: 1749230691276 - conda: https://conda.anaconda.org/conda-forge/linux-64/xz-gpl-tools-5.8.1-hbcc6ac9_2.conda sha256: 840838dca829ec53f1160f3fca6dbfc43f2388b85f15d3e867e69109b168b87b md5: bf627c16aa26231720af037a2709ab09 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 - liblzma 5.8.1 hb9d3cd8_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 33911 timestamp: 1749230090353 - conda: https://conda.anaconda.org/conda-forge/osx-64/xz-gpl-tools-5.8.1-h357f2ed_2.conda sha256: 5cdadfff31de7f50d1b2f919dd80697c0a08d90f8d6fb89f00c93751ec135c3c md5: d4044359fad6af47224e9ef483118378 depends: - __osx >=10.13 - liblzma 5.8.1 hd471939_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 33890 timestamp: 1749230206830 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/xz-gpl-tools-5.8.1-h9a6d368_2.conda 
sha256: a0790cfb48d240e7b655b0d797a00040219cf39e3ee38e2104e548515df4f9c2 md5: 09b1442c1d49ac7c5f758c44695e77d1 depends: - __osx >=11.0 - liblzma 5.8.1 h39f12f2_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later AND GPL-2.0-or-later purls: [] size: 34103 timestamp: 1749230329933 - conda: https://conda.anaconda.org/conda-forge/linux-64/xz-tools-5.8.1-hb9d3cd8_2.conda sha256: 58034f3fca491075c14e61568ad8b25de00cb3ae479de3e69be6d7ee5d3ace28 md5: 1bad2995c8f1c8075c6c331bf96e46fb depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 - liblzma 5.8.1 hb9d3cd8_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later purls: [] size: 96433 timestamp: 1749230076687 - conda: https://conda.anaconda.org/conda-forge/osx-64/xz-tools-5.8.1-hd471939_2.conda sha256: 3b1d8958f8dceaa4442100d5326b2ec9bcc2e8d7ee55345bf7101dc362fb9868 md5: 349148960ad74aece88028f2b5c62c51 depends: - __osx >=10.13 - liblzma 5.8.1 hd471939_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later purls: [] size: 85777 timestamp: 1749230191007 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/xz-tools-5.8.1-h39f12f2_2.conda sha256: 9d1232705e3d175f600dc8e344af9182d0341cdaa73d25330591a28532951063 md5: 37996935aa33138fca43e4b4563b6a28 depends: - __osx >=11.0 - liblzma 5.8.1 h39f12f2_2 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later purls: [] size: 86425 timestamp: 1749230316106 - conda: https://conda.anaconda.org/conda-forge/win-64/xz-tools-5.8.1-h2466b09_2.conda sha256: 38712f0e62f61741ab69d7551fa863099f5be769bdf9fdbc28542134874b4e88 md5: e1b62ec0457e6ba10287a49854108fdb depends: - liblzma 5.8.1 h2466b09_2 - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 constrains: - xz 5.8.1.* license: 0BSD AND LGPL-2.1-or-later purls: [] size: 67419 timestamp: 1749230666460 - conda: https://conda.anaconda.org/conda-forge/linux-64/zstd-1.5.7-hb8e6e7a_2.conda sha256: a4166e3d8ff4e35932510aaff7aa90772f84b4d07e9f6f83c614cba7ceefe0eb md5: 6432cb5d4ac0046c3ac0a8a0f95842f9 depends: - __glibc >=2.17,<3.0.a0 - libgcc >=13 - libstdcxx >=13 - libzlib >=1.3.1,<2.0a0 license: BSD-3-Clause license_family: BSD purls: [] size: 567578 timestamp: 1742433379869 - conda: https://conda.anaconda.org/conda-forge/osx-64/zstd-1.5.7-h8210216_2.conda sha256: c171c43d0c47eed45085112cb00c8c7d4f0caa5a32d47f2daca727e45fb98dca md5: cd60a4a5a8d6a476b30d8aa4bb49251a depends: - __osx >=10.13 - libzlib >=1.3.1,<2.0a0 license: BSD-3-Clause license_family: BSD purls: [] size: 485754 timestamp: 1742433356230 - conda: https://conda.anaconda.org/conda-forge/osx-arm64/zstd-1.5.7-h6491c7d_2.conda sha256: 0d02046f57f7a1a3feae3e9d1aa2113788311f3cf37a3244c71e61a93177ba67 md5: e6f69c7bcccdefa417f056fa593b40f0 depends: - __osx >=11.0 - libzlib >=1.3.1,<2.0a0 license: BSD-3-Clause license_family: BSD purls: [] size: 399979 timestamp: 1742433432699 - conda: https://conda.anaconda.org/conda-forge/win-64/zstd-1.5.7-hbeecb71_2.conda sha256: bc64864377d809b904e877a98d0584f43836c9f2ef27d3d2a1421fa6eae7ca04 md5: 21f56217d6125fb30c3c3f10c786d751 depends: - libzlib >=1.3.1,<2.0a0 - ucrt >=10.0.20348.0 - vc >=14.2,<15 - vc14_runtime >=14.29.30139 license: BSD-3-Clause license_family: BSD purls: [] size: 354697 timestamp: 1742433568506 sqlparse-0.5.4/.claude/settings.local.json0000644000000000000000000000041113615410400015425 0ustar00{ "permissions": { "allow": [ "mcp__github__issue_read", "Bash(find:*)", "mcp__github__pull_request_read", "Bash(pixi run:*)", "mcp__github__list_pull_requests", "Bash(python3:*)" ], "deny": [], "ask": [] } } 
sqlparse-0.5.4/.github/CODE_OF_CONDUCT.md0000644000000000000000000000051713615410400014434 0ustar00# Be nice to each other Everyone participating in the _sqlparse_ project and especially in the issue tracker, discussion forums, pull requests, is expected to treat other people with respect and more generally to follow the guidelines articulated in the [Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/).sqlparse-0.5.4/.github/PULL_REQUEST_TEMPLATE.md0000644000000000000000000000120713615410400015433 0ustar00# Thanks for contributing! Before submitting your pull request please have a look at the following checklist: - [ ] ran the tests (`pytest`) - [ ] all style issues addressed (`flake8`) - [ ] your changes are covered by tests - [ ] your changes are documented, if needed In addition, please take care to provide a proper description of what your change does, fixes or achieves when submitting the pull request. --- **Note:** This repository has automated AI code reviews enabled to help catch potential issues early and provide suggestions. This is an experimental feature to support maintainers and contributors - your feedback is welcome!sqlparse-0.5.4/.github/ISSUE_TEMPLATE/bug_report.md0000644000000000000000000000125413615410400016511 0ustar00--- name: Bug report about: Create a report to help us improve title: '' labels: 'bug,needs-triage' assignees: '' --- **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior. Please give code examples or concrete SQL statements. Take care not to post any sensitive information when pasting SQL statements! Include the concrete error / traceback. **Expected behavior** A clear and concise description of what you expected to happen. **Versions (please complete the following information):** - Python: [e.g. 3.11.2] - sqlparse: [e.g. 0.4.1] **Additional context** Add any other context about the problem here. sqlparse-0.5.4/.github/ISSUE_TEMPLATE/config.yml0000644000000000000000000000032113615410400016001 0ustar00blank_issues_enabled: true contact_links: - name: Discussions, Questions? url: https://github.com/andialbrecht/sqlparse/discussions about: Please ask questions and start more general discussions heresqlparse-0.5.4/.github/ISSUE_TEMPLATE/feature_request.md0000644000000000000000000000112313615410400017547 0ustar00--- name: Feature request about: Suggest an idea for this project title: '' labels: '' assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here. sqlparse-0.5.4/.github/workflows/codeql-analysis.yml0000644000000000000000000000461713615410400017510 0ustar00# For most projects, this workflow file will not need changing; you simply need # to commit it to your repository. # # You may wish to alter this file to override the set of languages analyzed, # or to provide custom queries or build logic. # # ******** NOTE ******** # We have attempted to detect the languages in your repository. Please check # the `language` matrix defined below to confirm you have the correct set of # supported CodeQL languages.
# name: "CodeQL" on: push: branches: [ master ] pull_request: # The branches below must be a subset of the branches above branches: [ master ] schedule: - cron: '25 5 * * 1' jobs: analyze: name: Analyze runs-on: ubuntu-latest permissions: actions: read contents: read security-events: write strategy: fail-fast: false matrix: language: [ 'python' ] # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ] # Learn more: # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed steps: - name: Checkout repository uses: actions/checkout@v4 # Initializes the CodeQL tools for scanning. - name: Initialize CodeQL uses: github/codeql-action/init@v3 with: languages: ${{ matrix.language }} # If you wish to specify custom queries, you can do so here or in a config file. # By default, queries listed here will override any specified in a config file. # Prefix the list here with "+" to use these queries and those in the config file. # queries: ./path/to/local/query, your-org/your-repo/queries@main # Autobuild attempts to build any compiled languages (C/C++, C#, or Java). # If this step fails, then you should remove it and run the build manually (see below) - name: Autobuild uses: github/codeql-action/autobuild@v3 # ℹ️ Command-line programs to run using the OS shell. # 📚 https://git.io/JvXDl # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines # and modify them (or add more) to build your code if your project # uses a compiled language #- run: | # make bootstrap # make release - name: Perform CodeQL Analysis uses: github/codeql-action/analyze@v3 sqlparse-0.5.4/.github/workflows/python-app.yml0000644000000000000000000000363013615410400016513 0ustar00# This workflow will install Python dependencies, run tests and lint using pixi # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions name: Python application on: push: branches: - master pull_request: schedule: - cron: '0 12 * * *' jobs: test: name: Run tests on Python ${{ matrix.python-version }} runs-on: ${{ matrix.os }} strategy: fail-fast: false matrix: include: - python-version: "py38" os: ubuntu-latest - python-version: "py39" os: ubuntu-latest - python-version: "py310" os: ubuntu-latest - python-version: "py311" os: ubuntu-latest - python-version: "py312" os: ubuntu-latest - python-version: "py313" os: ubuntu-latest - python-version: "py314" os: ubuntu-latest # Test on additional platforms for Python 3.11 - python-version: "py311" os: macos-latest - python-version: "py311" os: windows-latest steps: - uses: actions/checkout@v4 - name: Setup pixi uses: prefix-dev/setup-pixi@v0.8.1 with: pixi-version: v0.55.0 cache: true - name: Install dependencies and run tests run: pixi run test-${{ matrix.python-version }} - name: Run lint (Python 3.11 only) if: matrix.python-version == 'py311' && matrix.os == 'ubuntu-latest' run: pixi run lint - name: Generate coverage report (Python 3.11 only) if: matrix.python-version == 'py311' && matrix.os == 'ubuntu-latest' run: pixi run -e py311 coverage && pixi run -e py311 coverage-combine && pixi run -e py311 coverage-xml - name: Publish to codecov if: matrix.python-version == 'py311' && matrix.os == 'ubuntu-latest' uses: codecov/codecov-action@v4 sqlparse-0.5.4/.idea/.gitignore0000644000000000000000000000026013615410400013240 0ustar00# Default ignored files /shelf/ /workspace.xml #
Editor-based HTTP Client requests /httpRequests/ # Datasource local storage ignored files /dataSources/ /dataSources.local.xml sqlparse-0.5.4/.idea/copilot.data.migration.agent.xml0000644000000000000000000000027613615410400017447 0ustar00 sqlparse-0.5.4/.idea/copilot.data.migration.ask.xml0000644000000000000000000000027413615410400017125 0ustar00 sqlparse-0.5.4/.idea/copilot.data.migration.ask2agent.xml0000644000000000000000000000030213615410400020216 0ustar00 sqlparse-0.5.4/.idea/copilot.data.migration.edit.xml0000644000000000000000000000027513615410400017275 0ustar00 sqlparse-0.5.4/.idea/misc.xml0000644000000000000000000000043213615410400012726 0ustar00 sqlparse-0.5.4/.idea/modules.xml0000644000000000000000000000041413615410400013443 0ustar00 sqlparse-0.5.4/.idea/sqlparse.iml0000644000000000000000000000122213615410400013604 0ustar00 sqlparse-0.5.4/.idea/vcs.xml0000644000000000000000000000024713615410400012572 0ustar00 sqlparse-0.5.4/.idea/workspace.xml0000644000000000000000000002674613615410400014011 0ustar00 { "associatedIndex": 6 } { "keyToString": { "ModuleVcsDetector.initialDetectionPerformed": "true", "RunOnceActivity.ShowReadmeOnStart": "true", "RunOnceActivity.TerminalTabsStorage.copyFrom.TerminalArrangementManager.252": "true", "RunOnceActivity.git.unshallow": "true", "git-widget-placeholder": "master", "last_opened_file_path": "/Users/andi/devel/sqlparse", "node.js.detected.package.eslint": "true", "node.js.detected.package.tslint": "true", "node.js.selected.package.eslint": "(autodetect)", "node.js.selected.package.tslint": "(autodetect)", "nodejs_package_manager_path": "npm", "vue.rearranger.settings.migration": "true" } } 1758601787641 sqlparse-0.5.4/.idea/inspectionProfiles/profiles_settings.xml0000644000000000000000000000025613615410400021421 0ustar00 sqlparse-0.5.4/docs/Makefile0000644000000000000000000000566213615410400012673 0ustar00# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf build/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html @echo @echo "Build finished. The HTML pages are in build/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) build/dirhtml @echo @echo "Build finished. The HTML pages are in build/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) build/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in build/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in build/qthelp, like this:" @echo "# qcollectiongenerator build/qthelp/python-sqlparse.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile build/qthelp/python-sqlparse.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex @echo @echo "Build finished; the LaTeX files are in build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes @echo @echo "The overview file is in build/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in build/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) build/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in build/doctest/output.txt." sqlparse-0.5.4/docs/sqlformat.10000644000000000000000000000340013615410400013311 0ustar00.\" Based on template /usr/share/man-db/examples/manpage.example provided by .\" Tom Christiansen . .TH SQLFORMAT "1" "December 2010" "python-sqlparse version: 0.1.2" "User Commands" .SH NAME sqlformat \- reformat SQL .SH SYNOPSIS .PP .B sqlformat [ .I "OPTION" ] ... [ .I "FILE" ] ... .SH DESCRIPTION .\" Putting a newline after each sentence can generate better output. The `sqlformat' command-line tool can be used to reformat SQL files according to the specified options or to prepare a snippet in some programming language (only Python and PHP are currently supported). Use "-" for .I FILE to read from stdin. .SH OPTIONS .TP \fB\-i\fR \fICHOICE\fR|\fB\-\-identifiers\fR=\fIFORMAT\fR Change case of identifiers. .I FORMAT is one of "upper", "lower", "capitalize". .TP \fB\-k\fR \fICHOICE\fR|\fB\-\-keywords\fR=\fIFORMAT\fR Change case of keywords. .I FORMAT is one of "upper", "lower", "capitalize". .TP \fB\-l\fR \fICHOICE\fR|\fB\-\-language\fR=\fILANG\fR Output a snippet in programming language LANG. .I LANG can be "python", "php". .TP \fB\-o\fR \fIFILE\fR|\fB\-\-outfile\fR=\fIFILE\fR Write output to .I FILE (defaults to stdout). .TP .BR \-r | \-\-reindent Reindent statements. .TP \fB\-\-indent_width\fR=\fIINDENT_WIDTH\fR Set indent width to .IR INDENT_WIDTH . Default is 2 spaces. .TP \fB\-\-wrap_after\fR=\fIWRAP_AFTER\fR The column limit for wrapping comma-separated lists. If unspecified, it puts every item in the list on its own line. .TP \fB\-\-strip\-comments Remove comments. .TP .BR \-h | \-\-help Print a short help message and exit. All subsequent options are ignored. .TP .BR --verbose Verbose output. .TP .BR \-\-version Print program's version number and exit. .SH AUTHORS This man page was written by Andriy Senkovych sqlparse-0.5.4/docs/source/analyzing.rst0000644000000000000000000000252113615410400015250 0ustar00.. _analyze: Analyzing the Parsed Statement ============================== When the :meth:`~sqlparse.parse` function is called the returned value is a tree-ish representation of the analyzed statements. The returned objects can be used by applications to retrieve further information about the parsed SQL.
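A quick look at such a tree can be taken with standard Python introspection. A small sketch (the exact token types and output formatting may vary between versions):

.. code-block:: python

    >>> import sqlparse
    >>> stmt = sqlparse.parse('select foo from bar')[0]
    >>> stmt.get_type()
    'SELECT'
    >>> [(token.ttype, token.value)
    ...  for token in stmt.flatten() if not token.is_whitespace]
    [(Token.Keyword.DML, 'select'), (Token.Name, 'foo'),
     (Token.Keyword, 'from'), (Token.Name, 'bar')]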
Base Classes ------------ All returned objects inherit from these base classes. The :class:`~sqlparse.sql.Token` class represents a single token and :class:`~sqlparse.sql.TokenList` class is a group of tokens. The latter provides methods for inspecting its child tokens. .. autoclass:: sqlparse.sql.Token :members: .. autoclass:: sqlparse.sql.TokenList :members: SQL Representing Classes ------------------------ The following classes represent distinct parts of a SQL statement. .. autoclass:: sqlparse.sql.Statement :members: .. autoclass:: sqlparse.sql.Comment :members: .. autoclass:: sqlparse.sql.Identifier :members: .. autoclass:: sqlparse.sql.IdentifierList :members: .. autoclass:: sqlparse.sql.Where :members: .. autoclass:: sqlparse.sql.Case :members: .. autoclass:: sqlparse.sql.Parenthesis :members: .. autoclass:: sqlparse.sql.If :members: .. autoclass:: sqlparse.sql.For :members: .. autoclass:: sqlparse.sql.Assignment :members: .. autoclass:: sqlparse.sql.Comparison :members: sqlparse-0.5.4/docs/source/api.rst0000644000000000000000000000731513615410400014033 0ustar00:mod:`sqlparse` -- Parse SQL statements ======================================= .. module:: sqlparse :synopsis: Parse SQL statements. The :mod:`sqlparse` module provides the following functions on module-level. .. autofunction:: sqlparse.split .. autofunction:: sqlparse.format .. autofunction:: sqlparse.parse In most cases there's no need to set the `encoding` parameter. If `encoding` is not set, sqlparse assumes that the given SQL statement is encoded either in utf-8 or latin-1. .. _formatting: Formatting of SQL Statements ---------------------------- The :meth:`~sqlparse.format` function accepts the following keyword arguments. ``keyword_case`` Changes how keywords are formatted. Allowed values are "upper", "lower" and "capitalize". ``identifier_case`` Changes how identifiers are formatted. Allowed values are "upper", "lower", and "capitalize". ``strip_comments`` If ``True`` comments are removed from the statements. ``truncate_strings`` If ``truncate_strings`` is a positive integer, string literals longer than the given value will be truncated. ``truncate_char`` (default: "[...]") If long string literals are truncated (see above) this value will be appended to the truncated string. ``reindent`` If ``True`` the indentations of the statements are changed. ``reindent_aligned`` If ``True`` the indentations of the statements are changed, and statements are aligned by keywords. ``use_space_around_operators`` If ``True`` spaces are used around all operators. ``indent_tabs`` If ``True`` tabs instead of spaces are used for indentation. ``indent_width`` The width of the indentation, defaults to 2. ``wrap_after`` The column limit (in characters) for wrapping comma-separated lists. If unspecified, it puts every item in the list on its own line. ``compact`` If ``True`` the formatter tries to produce more compact output. ``output_format`` If given the output is additionally formatted to be used as a variable in a programming language. Allowed values are "python" and "php". ``comma_first`` If ``True`` comma-first notation for column names is used.
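Several of these options can be combined in a single call. A small sketch (the exact whitespace of the output may vary slightly between versions):

.. code-block:: python

    >>> import sqlparse
    >>> sql = 'select id, name from foo -- a comment'
    >>> print(sqlparse.format(
    ...     sql, reindent=True, keyword_case='upper', strip_comments=True))
    SELECT id,
           name
    FROM foo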
Security and Performance Considerations --------------------------------------- For developers working with very large SQL statements or in security-sensitive environments, sqlparse includes built-in protections against potential denial of service (DoS) attacks: **Grouping Limits** The parser includes configurable limits to prevent excessive resource consumption when processing very large or deeply nested SQL structures: - ``MAX_GROUPING_DEPTH`` (default: 100) - Limits recursion depth during token grouping - ``MAX_GROUPING_TOKENS`` (default: 10,000) - Limits the number of tokens processed in a single grouping operation These limits can be modified by changing the constants in ``sqlparse.engine.grouping`` if your application requires processing larger SQL statements. Set a limit to ``None`` to completely disable it. However, increasing these values or disabling limits may expose your application to DoS vulnerabilities when processing untrusted SQL input. Example of modifying limits:: import sqlparse.engine.grouping # Increase limits (use with caution) sqlparse.engine.grouping.MAX_GROUPING_DEPTH = 200 sqlparse.engine.grouping.MAX_GROUPING_TOKENS = 50000 # Disable limits completely (use with extreme caution) sqlparse.engine.grouping.MAX_GROUPING_DEPTH = None sqlparse.engine.grouping.MAX_GROUPING_TOKENS = None .. warning:: Increasing the grouping limits or disabling them completely may make your application vulnerable to DoS attacks when processing untrusted SQL input. Only modify these values if you are certain about the source and size of your SQL statements. sqlparse-0.5.4/docs/source/changes.rst0000644000000000000000000000046713615410400014673 0ustar00.. _changes: ============================ Changes in python-sqlparse ============================ Upcoming Deprecations ===================== * ``sqlparse.SQLParseError`` is deprecated (version 0.1.5), use ``sqlparse.exceptions.SQLParseError`` instead. Changelog ========= .. include:: ../../CHANGELOG sqlparse-0.5.4/docs/source/conf.py0000644000000000000000000001472013615410400014025 0ustar00# python-sqlparse documentation build configuration file, created by # sphinx-quickstart on Thu Feb 26 08:19:28 2009. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import datetime import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../')) import sqlparse # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.autosummary'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. 
project = 'python-sqlparse' copyright = '{:%Y}, Andi Albrecht'.format(datetime.date.today()) # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = sqlparse.__version__ # The full version, including alpha/beta/rc tags. release = sqlparse.__version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'tango' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. #html_theme = 'agogo' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [os.path.abspath('../')] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. 
#html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'python-sqlparsedoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'python-sqlparse.tex', 'python-sqlparse Documentation', 'Andi Albrecht', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True todo_include_todos = True sqlparse-0.5.4/docs/source/extending.rst0000644000000000000000000000572113615410400015246 0ustar00Extending :mod:`sqlparse` ========================= .. module:: sqlparse :synopsis: Extending parsing capability of sqlparse. The :mod:`sqlparse` module uses an SQL grammar that has been tuned through usage and numerous PRs to fit a broad range of SQL syntaxes, but it cannot cater to every given case since some SQL dialects have adopted conflicting meanings of certain keywords. Sqlparse therefore exposes a mechanism to configure the fundamental keywords and regular expressions that parse the language as described below. If you find an adaptation that works for your specific use-case, please consider contributing it back to the community by opening a PR on `GitHub <https://github.com/andialbrecht/sqlparse>`_. Configuring the Lexer --------------------- The lexer is a singleton class that breaks down the stream of characters into language tokens. It does this by using a sequence of regular expressions and keywords that are listed in the file ``sqlparse.keywords``. Instead of applying these fixed grammar definitions directly, the lexer is initialized by default in its ``default_initialization()`` method. As an API user, you can adapt the Lexer configuration by applying your own configuration logic. To do so, start out by clearing previous configurations with ``.clear()``, then apply the SQL list with ``.set_SQL_REGEX(SQL_REGEX)``, and apply keyword lists with ``.add_keywords(KEYWORDS)``. You can do so by re-using the expressions in ``sqlparse.keywords`` (see example below), leaving parts out, or by making up your own master list. See the expected types of the arguments by inspecting their structure in ``sqlparse.keywords``. The following example adds support for the expression ``ZORDER BY``, and adds ``BAR`` as a keyword to the lexer: ..
code-block:: python import re import sqlparse from sqlparse import keywords from sqlparse.lexer import Lexer # get the lexer singleton object to configure it lex = Lexer.get_default_instance() # Clear the default configurations. # After this call, reg-exps and keyword dictionaries need to be loaded # to make the lexer functional again. lex.clear() my_regex = (r"ZORDER\s+BY\b", sqlparse.tokens.Keyword) # slice the default SQL_REGEX to inject the custom object lex.set_SQL_REGEX( keywords.SQL_REGEX[:38] + [my_regex] + keywords.SQL_REGEX[38:] ) # add the default keyword dictionaries lex.add_keywords(keywords.KEYWORDS_COMMON) lex.add_keywords(keywords.KEYWORDS_ORACLE) lex.add_keywords(keywords.KEYWORDS_PLPGSQL) lex.add_keywords(keywords.KEYWORDS_HQL) lex.add_keywords(keywords.KEYWORDS_MSACCESS) lex.add_keywords(keywords.KEYWORDS) # add a custom keyword dictionary lex.add_keywords({'BAR': sqlparse.tokens.Keyword}) # no configuration is passed here. The lexer is used as a singleton. sqlparse.parse("select * from foo zorder by bar;") sqlparse-0.5.4/docs/source/index.rst0000644000000000000000000000130513615410400014362 0ustar00.. python-sqlparse documentation master file, created by sphinx-quickstart on Thu Feb 26 08:19:28 2009. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. python-sqlparse =============== .. include:: ../../README.rst :start-after: docincludebegin :end-before: Links Contents -------- .. toctree:: :maxdepth: 2 intro api analyzing ui extending changes license indices Resources --------- Project page https://github.com/andialbrecht/sqlparse Bug tracker https://github.com/andialbrecht/sqlparse/issues Documentation https://sqlparse.readthedocs.io/ Online Demo https://sqlformat.org/ sqlparse-0.5.4/docs/source/indices.rst0000644000000000000000000000013413615410400014670 0ustar00Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` sqlparse-0.5.4/docs/source/intro.rst0000644000000000000000000001030213615410400014403 0ustar00Introduction ============ Download & Installation ----------------------- The latest released version can be obtained from the `Python Package Index (PyPI) <https://pypi.org/project/sqlparse/>`_. To extract and install the module system-wide run .. code-block:: bash $ tar xvfz python-sqlparse-VERSION.tar.gz $ cd python-sqlparse/ $ sudo python setup.py install Alternatively you can install :mod:`sqlparse` using :command:`pip`: .. code-block:: bash $ pip install sqlparse Getting Started --------------- The :mod:`sqlparse` module provides three simple functions on module level to achieve some common tasks when working with SQL statements. This section shows some simple usage examples of these functions. Let's get started with splitting a string containing one or more SQL statements into a list of single statements using :meth:`~sqlparse.split`: .. code-block:: python >>> import sqlparse >>> sql = 'select * from foo; select * from bar;' >>> sqlparse.split(sql) ['select * from foo;', 'select * from bar;'] The end of a statement is identified by the occurrence of a semicolon. Semicolons within certain SQL constructs like ``BEGIN ... END`` blocks are handled correctly by the splitting mechanism. SQL statements can be beautified by using the :meth:`~sqlparse.format` function. ..
code-block:: python >>> sql = 'select * from foo where id in (select id from bar);' >>> print(sqlparse.format(sql, reindent=True, keyword_case='upper')) SELECT * FROM foo WHERE id IN (SELECT id FROM bar); In this case all keywords in the given SQL are uppercased and the indentation is changed to make it more readable. Read :ref:`formatting` for a full reference of supported options given as keyword arguments to that function. Before proceeding with a closer look at the internal representation of SQL statements, you should be aware that this SQL parser is intentionally non-validating. It assumes that the given input is at least some kind of SQL and then it tries to analyze as much as possible without making too many assumptions about the concrete dialect or the actual statement. In the end it's up to the user of this API to interpret the results correctly. When using the :meth:`~sqlparse.parse` function a tuple of :class:`~sqlparse.sql.Statement` instances is returned: .. code-block:: python >>> sql = 'select * from "someschema"."mytable" where id = 1' >>> parsed = sqlparse.parse(sql) >>> parsed (<Statement 'select...' at 0x...>,) Each item of the tuple is a single statement as identified by the above mentioned :meth:`~sqlparse.split` function. So let's grab the only element from that list and have a look at the ``tokens`` attribute. Sub-tokens are stored in this attribute. .. code-block:: python >>> stmt = parsed[0] # grab the Statement object >>> stmt.tokens (<DML 'select' at 0x...>, <Whitespace ' ' at 0x...>, <Wildcard '*' at 0x...>, <Whitespace ' ' at 0x...>, <Keyword 'from' at 0x...>, <Whitespace ' ' at 0x...>, <Identifier '"somes...' at 0x...>, <Whitespace ' ' at 0x...>, <Where 'where ...' at 0x...>) Each object can be converted back to a string at any time: .. code-block:: python >>> str(stmt) # convert the statement back to a string 'select * from "someschema"."mytable" where id = 1' >>> str(stmt.tokens[-1]) # or just the WHERE part 'where id = 1' Details of the returned objects are described in :ref:`analyze`. Development & Contributing -------------------------- To check out the latest sources of this module run .. code-block:: bash $ git clone https://github.com/andialbrecht/sqlparse.git :mod:`sqlparse` is currently tested under Python 3.8+ and PyPy. Tests are automatically run on each commit and for each pull request on GitHub Actions: https://github.com/andialbrecht/sqlparse/actions Make sure to run the test suite before sending a pull request by running .. code-block:: bash $ tox It's ok if :command:`tox` doesn't find all interpreters listed above. Please file bug reports and feature requests on the project site at https://github.com/andialbrecht/sqlparse/issues/new. sqlparse-0.5.4/docs/source/license.rst0000644000000000000000000000005313615410400014674 0ustar00License ======= .. include:: ../../LICENSEsqlparse-0.5.4/docs/source/ui.rst0000644000000000000000000000436313615410400013677 0ustar00User Interfaces =============== ``sqlformat`` The ``sqlformat`` command line script is distributed with the module. Run :command:`sqlformat --help` to list available options and for usage hints. Pre-commit Hook ^^^^^^^^^^^^^^^^ ``sqlparse`` can be integrated with `pre-commit <https://pre-commit.com>`_ to automatically format SQL files before committing them to version control. To use it, add the following to your ``.pre-commit-config.yaml``: .. code-block:: yaml repos: - repo: https://github.com/andialbrecht/sqlparse rev: 0.5.4 # Replace with the version you want to use hooks: - id: sqlformat The hook will format your SQL files with basic indentation (``--reindent``) by default. To customize formatting options, override the ``args`` parameter: ..
code-block:: yaml repos: - repo: https://github.com/andialbrecht/sqlparse rev: 0.5.4 hooks: - id: sqlformat args: [--in-place, --reindent, --keywords, upper, --identifiers, lower] .. important:: When overriding ``args``, you **must include** ``--in-place`` as the first argument, otherwise the hook will not modify your files. Common formatting options include: * ``--in-place``: Required - modify files in-place (always include this!) * ``--reindent`` or ``-r``: Reindent statements * ``--keywords upper`` or ``-k upper``: Convert keywords to uppercase * ``--identifiers lower`` or ``-i lower``: Convert identifiers to lowercase * ``--indent_width 4``: Set indentation width to 4 spaces * ``--strip-comments``: Remove comments from SQL Run ``sqlformat --help`` for a complete list of formatting options. After adding the configuration, install the pre-commit hooks: .. code-block:: bash pre-commit install The hook will now run automatically before each commit. You can also run it manually on all files: .. code-block:: bash pre-commit run sqlformat --all-files ``sqlformat.appspot.com`` An example `Google App Engine `_ application that exposes the formatting features using a web front-end. See https://sqlformat.org/ for details. The source for this application is available from a source code check out of the :mod:`sqlparse` module (see :file:`extras/appengine`). sqlparse-0.5.4/examples/column_defs_lowlevel.py0000644000000000000000000000316513615410400016676 0ustar00#!/usr/bin/env python # # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This example is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause # # Example for retrieving column definitions from a CREATE statement # using low-level functions. import sqlparse def extract_definitions(token_list): # assumes that token_list is a parenthesis definitions = [] tmp = [] par_level = 0 for token in token_list.flatten(): if token.is_whitespace: continue elif token.match(sqlparse.tokens.Punctuation, '('): par_level += 1 continue if token.match(sqlparse.tokens.Punctuation, ')'): if par_level == 0: break else: par_level -= 1 elif token.match(sqlparse.tokens.Punctuation, ','): if tmp: definitions.append(tmp) tmp = [] else: tmp.append(token) if tmp: definitions.append(tmp) return definitions if __name__ == '__main__': SQL = """CREATE TABLE foo ( id integer primary key, title varchar(200) not null, description text);""" parsed = sqlparse.parse(SQL)[0] # extract the parenthesis which holds column definitions _, par = parsed.token_next_by(i=sqlparse.sql.Parenthesis) columns = extract_definitions(par) for column in columns: print('NAME: {name!s:12} DEFINITION: {definition}'.format( name=column[0], definition=' '.join(str(t) for t in column[1:]))) sqlparse-0.5.4/examples/extract_table_names.py0000644000000000000000000000401113615410400016462 0ustar00#!/usr/bin/env python # # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This example is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause # # This example illustrates how to extract table names from nested # SELECT statements. 
# # See: # https://groups.google.com/forum/#!forum/sqlparse/browse_thread/thread/b0bd9a022e9d4895 import sqlparse from sqlparse.sql import IdentifierList, Identifier from sqlparse.tokens import Keyword, DML def is_subselect(parsed): if not parsed.is_group: return False for item in parsed.tokens: if item.ttype is DML and item.value.upper() == 'SELECT': return True return False def extract_from_part(parsed): from_seen = False for item in parsed.tokens: if from_seen: if is_subselect(item): yield from extract_from_part(item) elif item.ttype is Keyword: return else: yield item elif item.ttype is Keyword and item.value.upper() == 'FROM': from_seen = True def extract_table_identifiers(token_stream): for item in token_stream: if isinstance(item, IdentifierList): for identifier in item.get_identifiers(): yield identifier.get_name() elif isinstance(item, Identifier): yield item.get_name() # It's a bug to check for Keyword here, but in the example # above some tables names are identified as keywords... elif item.ttype is Keyword: yield item.value def extract_tables(sql): stream = extract_from_part(sqlparse.parse(sql)[0]) return list(extract_table_identifiers(stream)) if __name__ == '__main__': sql = """ select K.a,K.b from (select H.b from (select G.c from (select F.d from (select E.e from A, B, C, D, E), F), G), H), I, J, K order by 1,2; """ tables = ', '.join(extract_tables(sql)) print('Tables: {}'.format(tables)) sqlparse-0.5.4/sqlparse/__init__.py0000644000000000000000000000507113615410400014240 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """Parse SQL statements.""" # Setup namespace from typing import Any, Generator, IO, List, Optional, Tuple, Union from sqlparse import sql from sqlparse import cli from sqlparse import engine from sqlparse import tokens from sqlparse import filters from sqlparse import formatter __version__ = "0.5.4" __all__ = ["engine", "filters", "formatter", "sql", "tokens", "cli"] def parse( sql: str, encoding: Optional[str] = None ) -> Tuple[sql.Statement, ...]: """Parse sql and return a list of statements. :param sql: A string containing one or more SQL statements. :param encoding: The encoding of the statement (optional). :returns: A tuple of :class:`~sqlparse.sql.Statement` instances. """ return tuple(parsestream(sql, encoding)) def parsestream( stream: Union[str, IO[str]], encoding: Optional[str] = None ) -> Generator[sql.Statement, None, None]: """Parses sql statements from file-like object. :param stream: A file-like object. :param encoding: The encoding of the stream contents (optional). :returns: A generator of :class:`~sqlparse.sql.Statement` instances. """ stack = engine.FilterStack() stack.enable_grouping() return stack.run(stream, encoding) def format(sql: str, encoding: Optional[str] = None, **options: Any) -> str: """Format *sql* according to *options*. Available options are documented in :ref:`formatting`. In addition to the formatting options this function accepts the keyword "encoding" which determines the encoding of the statement. :returns: The formatted SQL statement as string. 
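    A minimal example (options as documented in :ref:`formatting`)::

        >>> import sqlparse
        >>> sqlparse.format('select * from foo', keyword_case='upper')
        'SELECT * FROM foo'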
""" stack = engine.FilterStack() options = formatter.validate_options(options) stack = formatter.build_filter_stack(stack, options) stack.postprocess.append(filters.SerializerUnicode()) return "".join(stack.run(sql, encoding)) def split( sql: str, encoding: Optional[str] = None, strip_semicolon: bool = False ) -> List[str]: """Split *sql* into single statements. :param sql: A string containing one or more SQL statements. :param encoding: The encoding of the statement (optional). :param strip_semicolon: If True, remove trailing semicolons (default: False). :returns: A list of strings. """ stack = engine.FilterStack(strip_semicolon=strip_semicolon) return [str(stmt).strip() for stmt in stack.run(sql, encoding)] sqlparse-0.5.4/sqlparse/__main__.py0000644000000000000000000000114213615410400014214 0ustar00#!/usr/bin/env python # # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """Entrypoint module for `python -m sqlparse`. Why does this file exist, and why __main__? For more info, read: - https://www.python.org/dev/peps/pep-0338/ - https://docs.python.org/2/using/cmdline.html#cmdoption-m - https://docs.python.org/3/using/cmdline.html#cmdoption-m """ import sys from sqlparse.cli import main if __name__ == '__main__': sys.exit(main()) sqlparse-0.5.4/sqlparse/cli.py0000644000000000000000000001602713615410400013253 0ustar00# Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """Module that contains the command line app. Why does this file exist, and why not put this in __main__? You might be tempted to import things from __main__ later, but that will cause problems: the code will get executed twice: - When you run `python -m sqlparse` python will execute ``__main__.py`` as a script. That means there won't be any ``sqlparse.__main__`` in ``sys.modules``. - When you import __main__ it will get executed again (as a module) because there's no ``sqlparse.__main__`` in ``sys.modules``. Also see (1) from http://click.pocoo.org/5/setuptools/#setuptools-integration """ import argparse import sys from io import TextIOWrapper import sqlparse from sqlparse.exceptions import SQLParseError # TODO: Add CLI Tests # TODO: Simplify formatter by using argparse `type` arguments def create_parser(): _CASE_CHOICES = ['upper', 'lower', 'capitalize'] parser = argparse.ArgumentParser( prog='sqlformat', description='Format FILE according to OPTIONS. 
Use "-" as FILE ' 'to read from stdin.', usage='%(prog)s [OPTIONS] FILE [FILE ...]', ) parser.add_argument( 'filename', nargs='+', help='file(s) to format (use "-" for stdin)') parser.add_argument( '-o', '--outfile', dest='outfile', metavar='FILE', help='write output to FILE (defaults to stdout)') parser.add_argument( '--in-place', dest='inplace', action='store_true', default=False, help='format files in-place (overwrite existing files)') parser.add_argument( '--version', action='version', version=sqlparse.__version__) group = parser.add_argument_group('Formatting Options') group.add_argument( '-k', '--keywords', metavar='CHOICE', dest='keyword_case', choices=_CASE_CHOICES, help='change case of keywords, CHOICE is one of {}'.format( ', '.join(f'"{x}"' for x in _CASE_CHOICES))) group.add_argument( '-i', '--identifiers', metavar='CHOICE', dest='identifier_case', choices=_CASE_CHOICES, help='change case of identifiers, CHOICE is one of {}'.format( ', '.join(f'"{x}"' for x in _CASE_CHOICES))) group.add_argument( '-l', '--language', metavar='LANG', dest='output_format', choices=['python', 'php'], help='output a snippet in programming language LANG, ' 'choices are "python", "php"') group.add_argument( '--strip-comments', dest='strip_comments', action='store_true', default=False, help='remove comments') group.add_argument( '-r', '--reindent', dest='reindent', action='store_true', default=False, help='reindent statements') group.add_argument( '--indent_width', dest='indent_width', default=2, type=int, help='indentation width (defaults to 2 spaces)') group.add_argument( '--indent_after_first', dest='indent_after_first', action='store_true', default=False, help='indent after first line of statement (e.g. SELECT)') group.add_argument( '--indent_columns', dest='indent_columns', action='store_true', default=False, help='indent all columns by indent_width instead of keyword length') group.add_argument( '-a', '--reindent_aligned', action='store_true', default=False, help='reindent statements to aligned format') group.add_argument( '-s', '--use_space_around_operators', action='store_true', default=False, help='place spaces around mathematical operators') group.add_argument( '--wrap_after', dest='wrap_after', default=0, type=int, help='Column after which lists should be wrapped') group.add_argument( '--comma_first', dest='comma_first', default=False, type=bool, help='Insert linebreak before comma (default False)') group.add_argument( '--compact', dest='compact', default=False, type=bool, help='Try to produce more compact output (default False)') group.add_argument( '--encoding', dest='encoding', default='utf-8', help='Specify the input encoding (default utf-8)') return parser def _error(msg): """Print msg and optionally exit with return code exit_.""" sys.stderr.write(f'[ERROR] {msg}\n') return 1 def _process_file(filename, args): """Process a single file with the given formatting options. Returns 0 on success, 1 on error. 
""" # Check for incompatible option combinations first if filename == '-' and args.inplace: return _error('Cannot use --in-place with stdin') # Read input if filename == '-': # read from stdin wrapper = TextIOWrapper(sys.stdin.buffer, encoding=args.encoding) try: data = wrapper.read() finally: wrapper.detach() else: try: with open(filename, encoding=args.encoding) as f: data = ''.join(f.readlines()) except OSError as e: return _error(f'Failed to read {filename}: {e}') # Determine output destination close_stream = False if args.inplace: try: stream = open(filename, 'w', encoding=args.encoding) close_stream = True except OSError as e: return _error(f'Failed to open {filename}: {e}') elif args.outfile: try: stream = open(args.outfile, 'w', encoding=args.encoding) close_stream = True except OSError as e: return _error(f'Failed to open {args.outfile}: {e}') else: stream = sys.stdout # Format the SQL formatter_opts = vars(args) try: formatter_opts = sqlparse.formatter.validate_options(formatter_opts) except SQLParseError as e: return _error(f'Invalid options: {e}') s = sqlparse.format(data, **formatter_opts) stream.write(s) stream.flush() if close_stream: stream.close() return 0 def main(args=None): parser = create_parser() args = parser.parse_args(args) # Validate argument combinations if len(args.filename) > 1: if args.outfile: return _error('Cannot use -o/--outfile with multiple files') if not args.inplace: return _error('Multiple files require --in-place flag') # Process all files exit_code = 0 for filename in args.filename: result = _process_file(filename, args) if result != 0: exit_code = result # Continue processing remaining files even if one fails return exit_code sqlparse-0.5.4/sqlparse/exceptions.py0000644000000000000000000000052613615410400014662 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """Exceptions used in this package.""" class SQLParseError(Exception): """Base class for exceptions in this module.""" sqlparse-0.5.4/sqlparse/formatter.py0000644000000000000000000001717413615410400014513 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """SQL formatter""" from sqlparse import filters from sqlparse.exceptions import SQLParseError def validate_options(options): # noqa: C901 """Validates options.""" kwcase = options.get('keyword_case') if kwcase not in [None, 'upper', 'lower', 'capitalize']: raise SQLParseError('Invalid value for keyword_case: ' '{!r}'.format(kwcase)) idcase = options.get('identifier_case') if idcase not in [None, 'upper', 'lower', 'capitalize']: raise SQLParseError('Invalid value for identifier_case: ' '{!r}'.format(idcase)) ofrmt = options.get('output_format') if ofrmt not in [None, 'sql', 'python', 'php']: raise SQLParseError('Unknown output format: ' '{!r}'.format(ofrmt)) strip_comments = options.get('strip_comments', False) if strip_comments not in [True, False]: raise SQLParseError('Invalid value for strip_comments: ' '{!r}'.format(strip_comments)) space_around_operators = options.get('use_space_around_operators', False) if space_around_operators not in [True, False]: raise SQLParseError('Invalid value for use_space_around_operators: ' '{!r}'.format(space_around_operators)) strip_ws = options.get('strip_whitespace', False) if strip_ws not in [True, 
False]: raise SQLParseError('Invalid value for strip_whitespace: ' '{!r}'.format(strip_ws)) truncate_strings = options.get('truncate_strings') if truncate_strings is not None: try: truncate_strings = int(truncate_strings) except (ValueError, TypeError): raise SQLParseError('Invalid value for truncate_strings: ' '{!r}'.format(truncate_strings)) if truncate_strings <= 1: raise SQLParseError('Invalid value for truncate_strings: ' '{!r}'.format(truncate_strings)) options['truncate_strings'] = truncate_strings options['truncate_char'] = options.get('truncate_char', '[...]') indent_columns = options.get('indent_columns', False) if indent_columns not in [True, False]: raise SQLParseError('Invalid value for indent_columns: ' '{!r}'.format(indent_columns)) elif indent_columns: options['reindent'] = True # enforce reindent options['indent_columns'] = indent_columns reindent = options.get('reindent', False) if reindent not in [True, False]: raise SQLParseError('Invalid value for reindent: ' '{!r}'.format(reindent)) elif reindent: options['strip_whitespace'] = True reindent_aligned = options.get('reindent_aligned', False) if reindent_aligned not in [True, False]: raise SQLParseError('Invalid value for reindent_aligned: ' '{!r}'.format(reindent_aligned)) elif reindent_aligned: options['strip_whitespace'] = True indent_after_first = options.get('indent_after_first', False) if indent_after_first not in [True, False]: raise SQLParseError('Invalid value for indent_after_first: ' '{!r}'.format(indent_after_first)) options['indent_after_first'] = indent_after_first indent_tabs = options.get('indent_tabs', False) if indent_tabs not in [True, False]: raise SQLParseError('Invalid value for indent_tabs: ' '{!r}'.format(indent_tabs)) elif indent_tabs: options['indent_char'] = '\t' else: options['indent_char'] = ' ' indent_width = options.get('indent_width', 2) try: indent_width = int(indent_width) except (TypeError, ValueError): raise SQLParseError('indent_width requires an integer') if indent_width < 1: raise SQLParseError('indent_width requires a positive integer') options['indent_width'] = indent_width wrap_after = options.get('wrap_after', 0) try: wrap_after = int(wrap_after) except (TypeError, ValueError): raise SQLParseError('wrap_after requires an integer') if wrap_after < 0: raise SQLParseError('wrap_after requires a non-negative integer') options['wrap_after'] = wrap_after comma_first = options.get('comma_first', False) if comma_first not in [True, False]: raise SQLParseError('comma_first requires a boolean value') options['comma_first'] = comma_first compact = options.get('compact', False) if compact not in [True, False]: raise SQLParseError('compact requires a boolean value') options['compact'] = compact right_margin = options.get('right_margin') if right_margin is not None: try: right_margin = int(right_margin) except (TypeError, ValueError): raise SQLParseError('right_margin requires an integer') if right_margin < 10: raise SQLParseError('right_margin requires an integer >= 10') options['right_margin'] = right_margin return options def build_filter_stack(stack, options): """Set up and return a filter stack. Args: stack: :class:`~sqlparse.engine.FilterStack` instance options: Dictionary with options validated by validate_options.
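Example (an illustrative sketch, not from the shipped docstring; the
    option values are arbitrary):

        >>> from sqlparse.engine import FilterStack
        >>> opts = validate_options({'keyword_case': 'upper', 'reindent': True})
        >>> stack = build_filter_stack(FilterStack(), opts)
        >>> for stmt in stack.run('select foo from bar'):
        ...     print(stmt)
        SELECT foo
        FROM bar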
""" # Token filter if options.get('keyword_case'): stack.preprocess.append( filters.KeywordCaseFilter(options['keyword_case'])) if options.get('identifier_case'): stack.preprocess.append( filters.IdentifierCaseFilter(options['identifier_case'])) if options.get('truncate_strings'): stack.preprocess.append(filters.TruncateStringFilter( width=options['truncate_strings'], char=options['truncate_char'])) if options.get('use_space_around_operators', False): stack.enable_grouping() stack.stmtprocess.append(filters.SpacesAroundOperatorsFilter()) # After grouping if options.get('strip_comments'): stack.enable_grouping() stack.stmtprocess.append(filters.StripCommentsFilter()) if options.get('strip_whitespace') or options.get('reindent'): stack.enable_grouping() stack.stmtprocess.append(filters.StripWhitespaceFilter()) if options.get('reindent'): stack.enable_grouping() stack.stmtprocess.append( filters.ReindentFilter( char=options['indent_char'], width=options['indent_width'], indent_after_first=options['indent_after_first'], indent_columns=options['indent_columns'], wrap_after=options['wrap_after'], comma_first=options['comma_first'], compact=options['compact'],)) if options.get('reindent_aligned', False): stack.enable_grouping() stack.stmtprocess.append( filters.AlignedIndentFilter(char=options['indent_char'])) if options.get('right_margin'): stack.enable_grouping() stack.stmtprocess.append( filters.RightMarginFilter(width=options['right_margin'])) # Serializer if options.get('output_format'): frmt = options['output_format'] if frmt.lower() == 'php': fltr = filters.OutputPHPFilter() elif frmt.lower() == 'python': fltr = filters.OutputPythonFilter() else: fltr = None if fltr is not None: stack.postprocess.append(fltr) return stack sqlparse-0.5.4/sqlparse/keywords.py0000644000000000000000000007364313615410400014362 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import tokens # object() only supports "is" and is useful as a marker # use this marker to specify that the given regex in SQL_REGEX # shall be processed further through a lookup in the KEYWORDS dictionaries PROCESS_AS_KEYWORD = object() SQL_REGEX = [ (r'(--|# )\+.*?(\r\n|\r|\n|$)', tokens.Comment.Single.Hint), (r'/\*\+[\s\S]*?\*/', tokens.Comment.Multiline.Hint), (r'(--|# ).*?(\r\n|\r|\n|$)', tokens.Comment.Single), (r'/\*[\s\S]*?\*/', tokens.Comment.Multiline), (r'(\r\n|\r|\n)', tokens.Newline), (r'\s+?', tokens.Whitespace), (r':=', tokens.Assignment), (r'::', tokens.Punctuation), (r'\*', tokens.Wildcard), (r"`(``|[^`])*`", tokens.Name), (r"麓(麓麓|[^麓])*麓", tokens.Name), (r'((?>?|#>>?|@>|<@|\?\|?|\?&|\-|#\-)', tokens.Operator), (r'[<>=~!]+', tokens.Operator.Comparison), (r'[+/@#%^&|^-]+', tokens.Operator), ] KEYWORDS = { 'ABORT': tokens.Keyword, 'ABS': tokens.Keyword, 'ABSOLUTE': tokens.Keyword, 'ACCESS': tokens.Keyword, 'ADA': tokens.Keyword, 'ADD': tokens.Keyword, 'ADMIN': tokens.Keyword, 'AFTER': tokens.Keyword, 'AGGREGATE': tokens.Keyword, 'ALIAS': tokens.Keyword, 'ALL': tokens.Keyword, 'ALLOCATE': tokens.Keyword, 'ANALYSE': tokens.Keyword, 'ANALYZE': tokens.Keyword, 'ANY': tokens.Keyword, 'ARRAYLEN': tokens.Keyword, 'ARE': tokens.Keyword, 'ASENSITIVE': tokens.Keyword, 'ASSERTION': tokens.Keyword, 'ASSIGNMENT': tokens.Keyword, 'ASYMMETRIC': tokens.Keyword, 'AT': tokens.Keyword, 'ATOMIC': tokens.Keyword, 'AUDIT': tokens.Keyword, 'AUTHORIZATION': tokens.Keyword, 
'AUTO_INCREMENT': tokens.Keyword, 'AVG': tokens.Keyword, 'BACKWARD': tokens.Keyword, 'BEFORE': tokens.Keyword, 'BEGIN': tokens.Keyword, 'BETWEEN': tokens.Keyword, 'BITVAR': tokens.Keyword, 'BIT_LENGTH': tokens.Keyword, 'BOTH': tokens.Keyword, 'BREADTH': tokens.Keyword, # 'C': tokens.Keyword, # most likely this is an alias 'CACHE': tokens.Keyword, 'CALL': tokens.Keyword, 'CALLED': tokens.Keyword, 'CARDINALITY': tokens.Keyword, 'CASCADE': tokens.Keyword, 'CASCADED': tokens.Keyword, 'CAST': tokens.Keyword, 'CATALOG': tokens.Keyword, 'CATALOG_NAME': tokens.Keyword, 'CHAIN': tokens.Keyword, 'CHARACTERISTICS': tokens.Keyword, 'CHARACTER_LENGTH': tokens.Keyword, 'CHARACTER_SET_CATALOG': tokens.Keyword, 'CHARACTER_SET_NAME': tokens.Keyword, 'CHARACTER_SET_SCHEMA': tokens.Keyword, 'CHAR_LENGTH': tokens.Keyword, 'CHARSET': tokens.Keyword, 'CHECK': tokens.Keyword, 'CHECKED': tokens.Keyword, 'CHECKPOINT': tokens.Keyword, 'CLASS': tokens.Keyword, 'CLASS_ORIGIN': tokens.Keyword, 'CLOB': tokens.Keyword, 'CLOSE': tokens.Keyword, 'CLUSTER': tokens.Keyword, 'COALESCE': tokens.Keyword, 'COBOL': tokens.Keyword, 'COLLATE': tokens.Keyword, 'COLLATION': tokens.Keyword, 'COLLATION_CATALOG': tokens.Keyword, 'COLLATION_NAME': tokens.Keyword, 'COLLATION_SCHEMA': tokens.Keyword, 'COLLECT': tokens.Keyword, 'COLUMN': tokens.Keyword, 'COLUMN_NAME': tokens.Keyword, 'COMPRESS': tokens.Keyword, 'COMMAND_FUNCTION': tokens.Keyword, 'COMMAND_FUNCTION_CODE': tokens.Keyword, 'COMMENT': tokens.Keyword, 'COMMIT': tokens.Keyword.DML, 'COMMITTED': tokens.Keyword, 'COMPLETION': tokens.Keyword, 'CONCURRENTLY': tokens.Keyword, 'CONDITION_NUMBER': tokens.Keyword, 'CONNECT': tokens.Keyword, 'CONNECTION': tokens.Keyword, 'CONNECTION_NAME': tokens.Keyword, 'CONSTRAINT': tokens.Keyword, 'CONSTRAINTS': tokens.Keyword, 'CONSTRAINT_CATALOG': tokens.Keyword, 'CONSTRAINT_NAME': tokens.Keyword, 'CONSTRAINT_SCHEMA': tokens.Keyword, 'CONSTRUCTOR': tokens.Keyword, 'CONTAINS': tokens.Keyword, 'CONTINUE': tokens.Keyword, 'CONVERSION': tokens.Keyword, 'CONVERT': tokens.Keyword, 'COPY': tokens.Keyword, 'CORRESPONDING': tokens.Keyword, 'COUNT': tokens.Keyword, 'CREATEDB': tokens.Keyword, 'CREATEUSER': tokens.Keyword, 'CROSS': tokens.Keyword, 'CUBE': tokens.Keyword, 'CURRENT': tokens.Keyword, 'CURRENT_DATE': tokens.Keyword, 'CURRENT_PATH': tokens.Keyword, 'CURRENT_ROLE': tokens.Keyword, 'CURRENT_TIME': tokens.Keyword, 'CURRENT_TIMESTAMP': tokens.Keyword, 'CURRENT_USER': tokens.Keyword, 'CURSOR': tokens.Keyword, 'CURSOR_NAME': tokens.Keyword, 'CYCLE': tokens.Keyword, 'DATA': tokens.Keyword, 'DATABASE': tokens.Keyword, 'DATETIME_INTERVAL_CODE': tokens.Keyword, 'DATETIME_INTERVAL_PRECISION': tokens.Keyword, 'DAY': tokens.Keyword, 'DEALLOCATE': tokens.Keyword, 'DECLARE': tokens.Keyword, 'DEFAULT': tokens.Keyword, 'DEFAULTS': tokens.Keyword, 'DEFERRABLE': tokens.Keyword, 'DEFERRED': tokens.Keyword, 'DEFINED': tokens.Keyword, 'DEFINER': tokens.Keyword, 'DELIMITER': tokens.Keyword, 'DELIMITERS': tokens.Keyword, 'DEREF': tokens.Keyword, 'DESCRIBE': tokens.Keyword, 'DESCRIPTOR': tokens.Keyword, 'DESTROY': tokens.Keyword, 'DESTRUCTOR': tokens.Keyword, 'DETERMINISTIC': tokens.Keyword, 'DIAGNOSTICS': tokens.Keyword, 'DICTIONARY': tokens.Keyword, 'DISABLE': tokens.Keyword, 'DISCONNECT': tokens.Keyword, 'DISPATCH': tokens.Keyword, 'DIV': tokens.Operator, 'DO': tokens.Keyword, 'DOMAIN': tokens.Keyword, 'DYNAMIC': tokens.Keyword, 'DYNAMIC_FUNCTION': tokens.Keyword, 'DYNAMIC_FUNCTION_CODE': tokens.Keyword, 'EACH': tokens.Keyword, 'ENABLE': tokens.Keyword, 'ENCODING': 
tokens.Keyword, 'ENCRYPTED': tokens.Keyword, 'END-EXEC': tokens.Keyword, 'ENGINE': tokens.Keyword, 'EQUALS': tokens.Keyword, 'ESCAPE': tokens.Keyword, 'EVERY': tokens.Keyword, 'EXCEPT': tokens.Keyword, 'EXCEPTION': tokens.Keyword, 'EXCLUDING': tokens.Keyword, 'EXCLUSIVE': tokens.Keyword, 'EXEC': tokens.Keyword, 'EXECUTE': tokens.Keyword, 'EXISTING': tokens.Keyword, 'EXISTS': tokens.Keyword, 'EXPLAIN': tokens.Keyword, 'EXTERNAL': tokens.Keyword, 'EXTRACT': tokens.Keyword, 'FALSE': tokens.Keyword, 'FETCH': tokens.Keyword, 'FILE': tokens.Keyword, 'FINAL': tokens.Keyword, 'FIRST': tokens.Keyword, 'FORCE': tokens.Keyword, 'FOREACH': tokens.Keyword, 'FOREIGN': tokens.Keyword, 'FORTRAN': tokens.Keyword, 'FORWARD': tokens.Keyword, 'FOUND': tokens.Keyword, 'FREE': tokens.Keyword, 'FREEZE': tokens.Keyword, 'FULL': tokens.Keyword, 'FUNCTION': tokens.Keyword, # 'G': tokens.Keyword, 'GENERAL': tokens.Keyword, 'GENERATED': tokens.Keyword, 'GET': tokens.Keyword, 'GLOBAL': tokens.Keyword, 'GO': tokens.Keyword, 'GOTO': tokens.Keyword, 'GRANTED': tokens.Keyword, 'GROUPING': tokens.Keyword, 'HAVING': tokens.Keyword, 'HIERARCHY': tokens.Keyword, 'HOLD': tokens.Keyword, 'HOUR': tokens.Keyword, 'HOST': tokens.Keyword, 'IDENTIFIED': tokens.Keyword, 'IDENTITY': tokens.Keyword, 'IGNORE': tokens.Keyword, 'ILIKE': tokens.Keyword, 'IMMEDIATE': tokens.Keyword, 'IMMUTABLE': tokens.Keyword, 'IMPLEMENTATION': tokens.Keyword, 'IMPLICIT': tokens.Keyword, 'INCLUDING': tokens.Keyword, 'INCREMENT': tokens.Keyword, 'INDEX': tokens.Keyword, 'INDICATOR': tokens.Keyword, 'INFIX': tokens.Keyword, 'INHERITS': tokens.Keyword, 'INITIAL': tokens.Keyword, 'INITIALIZE': tokens.Keyword, 'INITIALLY': tokens.Keyword, 'INOUT': tokens.Keyword, 'INPUT': tokens.Keyword, 'INSENSITIVE': tokens.Keyword, 'INSTANTIABLE': tokens.Keyword, 'INSTEAD': tokens.Keyword, 'INTERSECT': tokens.Keyword, 'INTO': tokens.Keyword, 'INVOKER': tokens.Keyword, 'IS': tokens.Keyword, 'ISNULL': tokens.Keyword, 'ISOLATION': tokens.Keyword, 'ITERATE': tokens.Keyword, # 'K': tokens.Keyword, 'KEY': tokens.Keyword, 'KEY_MEMBER': tokens.Keyword, 'KEY_TYPE': tokens.Keyword, 'LANCOMPILER': tokens.Keyword, 'LANGUAGE': tokens.Keyword, 'LARGE': tokens.Keyword, 'LAST': tokens.Keyword, 'LATERAL': tokens.Keyword, 'LEADING': tokens.Keyword, 'LENGTH': tokens.Keyword, 'LESS': tokens.Keyword, 'LEVEL': tokens.Keyword, 'LIMIT': tokens.Keyword, 'LISTEN': tokens.Keyword, 'LOAD': tokens.Keyword, 'LOCAL': tokens.Keyword, 'LOCALTIME': tokens.Keyword, 'LOCALTIMESTAMP': tokens.Keyword, 'LOCATION': tokens.Keyword, 'LOCATOR': tokens.Keyword, 'LOCK': tokens.Keyword, 'LOWER': tokens.Keyword, # 'M': tokens.Keyword, 'MAP': tokens.Keyword, 'MATCH': tokens.Keyword, 'MAXEXTENTS': tokens.Keyword, 'MAXVALUE': tokens.Keyword, 'MESSAGE_LENGTH': tokens.Keyword, 'MESSAGE_OCTET_LENGTH': tokens.Keyword, 'MESSAGE_TEXT': tokens.Keyword, 'METHOD': tokens.Keyword, 'MINUTE': tokens.Keyword, 'MINUS': tokens.Keyword, 'MINVALUE': tokens.Keyword, 'MOD': tokens.Keyword, 'MODE': tokens.Keyword, 'MODIFIES': tokens.Keyword, 'MODIFY': tokens.Keyword, 'MONTH': tokens.Keyword, 'MORE': tokens.Keyword, 'MOVE': tokens.Keyword, 'MUMPS': tokens.Keyword, 'NAMES': tokens.Keyword, 'NATIONAL': tokens.Keyword, 'NATURAL': tokens.Keyword, 'NCHAR': tokens.Keyword, 'NCLOB': tokens.Keyword, 'NEW': tokens.Keyword, 'NEXT': tokens.Keyword, 'NO': tokens.Keyword, 'NOAUDIT': tokens.Keyword, 'NOCOMPRESS': tokens.Keyword, 'NOCREATEDB': tokens.Keyword, 'NOCREATEUSER': tokens.Keyword, 'NONE': tokens.Keyword, 'NOT': tokens.Keyword, 'NOTFOUND': 
tokens.Keyword, 'NOTHING': tokens.Keyword, 'NOTIFY': tokens.Keyword, 'NOTNULL': tokens.Keyword, 'NOWAIT': tokens.Keyword, 'NULL': tokens.Keyword, 'NULLABLE': tokens.Keyword, 'NULLIF': tokens.Keyword, 'OBJECT': tokens.Keyword, 'OCTET_LENGTH': tokens.Keyword, 'OF': tokens.Keyword, 'OFF': tokens.Keyword, 'OFFLINE': tokens.Keyword, 'OFFSET': tokens.Keyword, 'OIDS': tokens.Keyword, 'OLD': tokens.Keyword, 'ONLINE': tokens.Keyword, 'ONLY': tokens.Keyword, 'OPEN': tokens.Keyword, 'OPERATION': tokens.Keyword, 'OPERATOR': tokens.Keyword, 'OPTION': tokens.Keyword, 'OPTIONS': tokens.Keyword, 'ORDINALITY': tokens.Keyword, 'OUT': tokens.Keyword, 'OUTPUT': tokens.Keyword, 'OVERLAPS': tokens.Keyword, 'OVERLAY': tokens.Keyword, 'OVERRIDING': tokens.Keyword, 'OWNER': tokens.Keyword, 'QUARTER': tokens.Keyword, 'PAD': tokens.Keyword, 'PARAMETER': tokens.Keyword, 'PARAMETERS': tokens.Keyword, 'PARAMETER_MODE': tokens.Keyword, 'PARAMETER_NAME': tokens.Keyword, 'PARAMETER_ORDINAL_POSITION': tokens.Keyword, 'PARAMETER_SPECIFIC_CATALOG': tokens.Keyword, 'PARAMETER_SPECIFIC_NAME': tokens.Keyword, 'PARAMETER_SPECIFIC_SCHEMA': tokens.Keyword, 'PARTIAL': tokens.Keyword, 'PASCAL': tokens.Keyword, 'PCTFREE': tokens.Keyword, 'PENDANT': tokens.Keyword, 'PLACING': tokens.Keyword, 'PLI': tokens.Keyword, 'POSITION': tokens.Keyword, 'POSTFIX': tokens.Keyword, 'PRECISION': tokens.Keyword, 'PREFIX': tokens.Keyword, 'PREORDER': tokens.Keyword, 'PREPARE': tokens.Keyword, 'PRESERVE': tokens.Keyword, 'PRIMARY': tokens.Keyword, 'PRIOR': tokens.Keyword, 'PRIVILEGES': tokens.Keyword, 'PROCEDURAL': tokens.Keyword, 'PROCEDURE': tokens.Keyword, 'PUBLIC': tokens.Keyword, 'RAISE': tokens.Keyword, 'RAW': tokens.Keyword, 'READ': tokens.Keyword, 'READS': tokens.Keyword, 'RECHECK': tokens.Keyword, 'RECURSIVE': tokens.Keyword, 'REF': tokens.Keyword, 'REFERENCES': tokens.Keyword, 'REFERENCING': tokens.Keyword, 'REINDEX': tokens.Keyword, 'RELATIVE': tokens.Keyword, 'RENAME': tokens.Keyword, 'REPEATABLE': tokens.Keyword, 'RESET': tokens.Keyword, 'RESOURCE': tokens.Keyword, 'RESTART': tokens.Keyword, 'RESTRICT': tokens.Keyword, 'RESULT': tokens.Keyword, 'RETURN': tokens.Keyword, 'RETURNED_LENGTH': tokens.Keyword, 'RETURNED_OCTET_LENGTH': tokens.Keyword, 'RETURNED_SQLSTATE': tokens.Keyword, 'RETURNING': tokens.Keyword, 'RETURNS': tokens.Keyword, 'RIGHT': tokens.Keyword, 'ROLE': tokens.Keyword, 'ROLLBACK': tokens.Keyword.DML, 'ROLLUP': tokens.Keyword, 'ROUTINE': tokens.Keyword, 'ROUTINE_CATALOG': tokens.Keyword, 'ROUTINE_NAME': tokens.Keyword, 'ROUTINE_SCHEMA': tokens.Keyword, 'ROWS': tokens.Keyword, 'ROW_COUNT': tokens.Keyword, 'RULE': tokens.Keyword, 'SAVE_POINT': tokens.Keyword, 'SCALE': tokens.Keyword, 'SCHEMA': tokens.Keyword, 'SCHEMA_NAME': tokens.Keyword, 'SCOPE': tokens.Keyword, 'SCROLL': tokens.Keyword, 'SEARCH': tokens.Keyword, 'SECOND': tokens.Keyword, 'SECURITY': tokens.Keyword, 'SELF': tokens.Keyword, 'SENSITIVE': tokens.Keyword, 'SEQUENCE': tokens.Keyword, 'SERIALIZABLE': tokens.Keyword, 'SERVER_NAME': tokens.Keyword, 'SESSION': tokens.Keyword, 'SESSION_USER': tokens.Keyword, 'SETOF': tokens.Keyword, 'SETS': tokens.Keyword, 'SHARE': tokens.Keyword, 'SHOW': tokens.Keyword, 'SIMILAR': tokens.Keyword, 'SIMPLE': tokens.Keyword, 'SIZE': tokens.Keyword, 'SOME': tokens.Keyword, 'SOURCE': tokens.Keyword, 'SPACE': tokens.Keyword, 'SPECIFIC': tokens.Keyword, 'SPECIFICTYPE': tokens.Keyword, 'SPECIFIC_NAME': tokens.Keyword, 'SQL': tokens.Keyword, 'SQLBUF': tokens.Keyword, 'SQLCODE': tokens.Keyword, 'SQLERROR': tokens.Keyword, 'SQLEXCEPTION': 
tokens.Keyword, 'SQLSTATE': tokens.Keyword, 'SQLWARNING': tokens.Keyword, 'STABLE': tokens.Keyword, 'START': tokens.Keyword.DML, # 'STATE': tokens.Keyword, 'STATEMENT': tokens.Keyword, 'STATIC': tokens.Keyword, 'STATISTICS': tokens.Keyword, 'STDIN': tokens.Keyword, 'STDOUT': tokens.Keyword, 'STORAGE': tokens.Keyword, 'STRICT': tokens.Keyword, 'STRUCTURE': tokens.Keyword, 'STYPE': tokens.Keyword, 'SUBCLASS_ORIGIN': tokens.Keyword, 'SUBLIST': tokens.Keyword, 'SUBSTRING': tokens.Keyword, 'SUCCESSFUL': tokens.Keyword, 'SUM': tokens.Keyword, 'SYMMETRIC': tokens.Keyword, 'SYNONYM': tokens.Keyword, 'SYSID': tokens.Keyword, 'SYSTEM': tokens.Keyword, 'SYSTEM_USER': tokens.Keyword, 'TABLE': tokens.Keyword, 'TABLE_NAME': tokens.Keyword, 'TEMP': tokens.Keyword, 'TEMPLATE': tokens.Keyword, 'TEMPORARY': tokens.Keyword, 'TERMINATE': tokens.Keyword, 'THAN': tokens.Keyword, 'TIMESTAMP': tokens.Keyword, 'TIMEZONE_HOUR': tokens.Keyword, 'TIMEZONE_MINUTE': tokens.Keyword, 'TO': tokens.Keyword, 'TOAST': tokens.Keyword, 'TRAILING': tokens.Keyword, 'TRANSATION': tokens.Keyword, 'TRANSACTIONS_COMMITTED': tokens.Keyword, 'TRANSACTIONS_ROLLED_BACK': tokens.Keyword, 'TRANSATION_ACTIVE': tokens.Keyword, 'TRANSFORM': tokens.Keyword, 'TRANSFORMS': tokens.Keyword, 'TRANSLATE': tokens.Keyword, 'TRANSLATION': tokens.Keyword, 'TREAT': tokens.Keyword, 'TRIGGER': tokens.Keyword, 'TRIGGER_CATALOG': tokens.Keyword, 'TRIGGER_NAME': tokens.Keyword, 'TRIGGER_SCHEMA': tokens.Keyword, 'TRIM': tokens.Keyword, 'TRUE': tokens.Keyword, 'TRUSTED': tokens.Keyword, 'TYPE': tokens.Keyword, 'UID': tokens.Keyword, 'UNCOMMITTED': tokens.Keyword, 'UNDER': tokens.Keyword, 'UNENCRYPTED': tokens.Keyword, 'UNION': tokens.Keyword, 'UNIQUE': tokens.Keyword, 'UNKNOWN': tokens.Keyword, 'UNLISTEN': tokens.Keyword, 'UNNAMED': tokens.Keyword, 'UNNEST': tokens.Keyword, 'UNTIL': tokens.Keyword, 'UPPER': tokens.Keyword, 'USAGE': tokens.Keyword, 'USE': tokens.Keyword, 'USER': tokens.Keyword, 'USER_DEFINED_TYPE_CATALOG': tokens.Keyword, 'USER_DEFINED_TYPE_NAME': tokens.Keyword, 'USER_DEFINED_TYPE_SCHEMA': tokens.Keyword, 'USING': tokens.Keyword, 'VACUUM': tokens.Keyword, 'VALID': tokens.Keyword, 'VALIDATE': tokens.Keyword, 'VALIDATOR': tokens.Keyword, 'VALUES': tokens.Keyword, 'VARIABLE': tokens.Keyword, 'VERBOSE': tokens.Keyword, 'VERSION': tokens.Keyword, 'VIEW': tokens.Keyword, 'VOLATILE': tokens.Keyword, 'WEEK': tokens.Keyword, 'WHENEVER': tokens.Keyword, 'WITH': tokens.Keyword.CTE, 'WITHOUT': tokens.Keyword, 'WORK': tokens.Keyword, 'WRITE': tokens.Keyword, 'YEAR': tokens.Keyword, 'ZONE': tokens.Keyword, # Name.Builtin 'ARRAY': tokens.Name.Builtin, 'BIGINT': tokens.Name.Builtin, 'BINARY': tokens.Name.Builtin, 'BIT': tokens.Name.Builtin, 'BLOB': tokens.Name.Builtin, 'BOOLEAN': tokens.Name.Builtin, 'CHAR': tokens.Name.Builtin, 'CHARACTER': tokens.Name.Builtin, 'DATE': tokens.Name.Builtin, 'DEC': tokens.Name.Builtin, 'DECIMAL': tokens.Name.Builtin, 'FILE_TYPE': tokens.Name.Builtin, 'FLOAT': tokens.Name.Builtin, 'INT': tokens.Name.Builtin, 'INT8': tokens.Name.Builtin, 'INTEGER': tokens.Name.Builtin, 'INTERVAL': tokens.Name.Builtin, 'LONG': tokens.Name.Builtin, 'NATURALN': tokens.Name.Builtin, 'NVARCHAR': tokens.Name.Builtin, 'NUMBER': tokens.Name.Builtin, 'NUMERIC': tokens.Name.Builtin, 'PLS_INTEGER': tokens.Name.Builtin, 'POSITIVE': tokens.Name.Builtin, 'POSITIVEN': tokens.Name.Builtin, 'REAL': tokens.Name.Builtin, 'ROWID': tokens.Name.Builtin, 'ROWLABEL': tokens.Name.Builtin, 'ROWNUM': tokens.Name.Builtin, 'SERIAL': tokens.Name.Builtin, 'SERIAL8': 
tokens.Name.Builtin, 'SIGNED': tokens.Name.Builtin, 'SIGNTYPE': tokens.Name.Builtin, 'SIMPLE_DOUBLE': tokens.Name.Builtin, 'SIMPLE_FLOAT': tokens.Name.Builtin, 'SIMPLE_INTEGER': tokens.Name.Builtin, 'SMALLINT': tokens.Name.Builtin, 'SYS_REFCURSOR': tokens.Name.Builtin, 'SYSDATE': tokens.Name, 'TEXT': tokens.Name.Builtin, 'TINYINT': tokens.Name.Builtin, 'UNSIGNED': tokens.Name.Builtin, 'UROWID': tokens.Name.Builtin, 'UTL_FILE': tokens.Name.Builtin, 'VARCHAR': tokens.Name.Builtin, 'VARCHAR2': tokens.Name.Builtin, 'VARYING': tokens.Name.Builtin, } KEYWORDS_COMMON = { 'SELECT': tokens.Keyword.DML, 'INSERT': tokens.Keyword.DML, 'DELETE': tokens.Keyword.DML, 'UPDATE': tokens.Keyword.DML, 'UPSERT': tokens.Keyword.DML, 'REPLACE': tokens.Keyword.DML, 'MERGE': tokens.Keyword.DML, 'DROP': tokens.Keyword.DDL, 'CREATE': tokens.Keyword.DDL, 'ALTER': tokens.Keyword.DDL, 'TRUNCATE': tokens.Keyword.DDL, 'GRANT': tokens.Keyword.DCL, 'REVOKE': tokens.Keyword.DCL, 'WHERE': tokens.Keyword, 'FROM': tokens.Keyword, 'INNER': tokens.Keyword, 'JOIN': tokens.Keyword, 'STRAIGHT_JOIN': tokens.Keyword, 'AND': tokens.Keyword, 'OR': tokens.Keyword, 'LIKE': tokens.Keyword, 'ON': tokens.Keyword, 'IN': tokens.Keyword, 'SET': tokens.Keyword, 'BY': tokens.Keyword, 'GROUP': tokens.Keyword, 'ORDER': tokens.Keyword, 'LEFT': tokens.Keyword, 'OUTER': tokens.Keyword, 'FULL': tokens.Keyword, 'IF': tokens.Keyword, 'END': tokens.Keyword, 'THEN': tokens.Keyword, 'LOOP': tokens.Keyword, 'AS': tokens.Keyword, 'ELSE': tokens.Keyword, 'FOR': tokens.Keyword, 'WHILE': tokens.Keyword, 'CASE': tokens.Keyword, 'WHEN': tokens.Keyword, 'MIN': tokens.Keyword, 'MAX': tokens.Keyword, 'DISTINCT': tokens.Keyword, } KEYWORDS_ORACLE = { 'ARCHIVE': tokens.Keyword, 'ARCHIVELOG': tokens.Keyword, 'BACKUP': tokens.Keyword, 'BECOME': tokens.Keyword, 'BLOCK': tokens.Keyword, 'BODY': tokens.Keyword, 'CANCEL': tokens.Keyword, 'CHANGE': tokens.Keyword, 'COMPILE': tokens.Keyword, 'CONTENTS': tokens.Keyword, 'CONTROLFILE': tokens.Keyword, 'DATAFILE': tokens.Keyword, 'DBA': tokens.Keyword, 'DISMOUNT': tokens.Keyword, 'DOUBLE': tokens.Keyword, 'DUMP': tokens.Keyword, 'ELSIF': tokens.Keyword, 'EVENTS': tokens.Keyword, 'EXCEPTIONS': tokens.Keyword, 'EXPLAIN': tokens.Keyword, 'EXTENT': tokens.Keyword, 'EXTERNALLY': tokens.Keyword, 'FLUSH': tokens.Keyword, 'FREELIST': tokens.Keyword, 'FREELISTS': tokens.Keyword, # groups seems too common as table name # 'GROUPS': tokens.Keyword, 'INDICATOR': tokens.Keyword, 'INITRANS': tokens.Keyword, 'INSTANCE': tokens.Keyword, 'LAYER': tokens.Keyword, 'LINK': tokens.Keyword, 'LISTS': tokens.Keyword, 'LOGFILE': tokens.Keyword, 'MANAGE': tokens.Keyword, 'MANUAL': tokens.Keyword, 'MAXDATAFILES': tokens.Keyword, 'MAXINSTANCES': tokens.Keyword, 'MAXLOGFILES': tokens.Keyword, 'MAXLOGHISTORY': tokens.Keyword, 'MAXLOGMEMBERS': tokens.Keyword, 'MAXTRANS': tokens.Keyword, 'MINEXTENTS': tokens.Keyword, 'MODULE': tokens.Keyword, 'MOUNT': tokens.Keyword, 'NOARCHIVELOG': tokens.Keyword, 'NOCACHE': tokens.Keyword, 'NOCYCLE': tokens.Keyword, 'NOMAXVALUE': tokens.Keyword, 'NOMINVALUE': tokens.Keyword, 'NOORDER': tokens.Keyword, 'NORESETLOGS': tokens.Keyword, 'NORMAL': tokens.Keyword, 'NOSORT': tokens.Keyword, 'OPTIMAL': tokens.Keyword, 'OWN': tokens.Keyword, 'PACKAGE': tokens.Keyword, 'PARALLEL': tokens.Keyword, 'PCTINCREASE': tokens.Keyword, 'PCTUSED': tokens.Keyword, 'PLAN': tokens.Keyword, 'PRIVATE': tokens.Keyword, 'PROFILE': tokens.Keyword, 'QUOTA': tokens.Keyword, 'RECOVER': tokens.Keyword, 'RESETLOGS': tokens.Keyword, 'RESTRICTED': 
tokens.Keyword, 'REUSE': tokens.Keyword, 'ROLES': tokens.Keyword, 'SAVEPOINT': tokens.Keyword, 'SCN': tokens.Keyword, 'SECTION': tokens.Keyword, 'SEGMENT': tokens.Keyword, 'SHARED': tokens.Keyword, 'SNAPSHOT': tokens.Keyword, 'SORT': tokens.Keyword, 'STATEMENT_ID': tokens.Keyword, 'STOP': tokens.Keyword, 'SWITCH': tokens.Keyword, 'TABLES': tokens.Keyword, 'TABLESPACE': tokens.Keyword, 'THREAD': tokens.Keyword, 'TIME': tokens.Keyword, 'TRACING': tokens.Keyword, 'TRANSACTION': tokens.Keyword, 'TRIGGERS': tokens.Keyword, 'UNLIMITED': tokens.Keyword, 'UNLOCK': tokens.Keyword, } # MySQL KEYWORDS_MYSQL = { 'ROW': tokens.Keyword, } # PostgreSQL Syntax KEYWORDS_PLPGSQL = { 'CONFLICT': tokens.Keyword, 'WINDOW': tokens.Keyword, 'PARTITION': tokens.Keyword, 'ATTACH': tokens.Keyword, 'DETACH': tokens.Keyword, 'OVER': tokens.Keyword, 'PERFORM': tokens.Keyword, 'NOTICE': tokens.Keyword, 'PLPGSQL': tokens.Keyword, 'INHERIT': tokens.Keyword, 'INDEXES': tokens.Keyword, 'ON_ERROR_STOP': tokens.Keyword, 'EXTENSION': tokens.Keyword, 'BYTEA': tokens.Keyword, 'BIGSERIAL': tokens.Keyword, 'BIT VARYING': tokens.Keyword, 'BOX': tokens.Keyword, 'CHARACTER': tokens.Keyword, 'CHARACTER VARYING': tokens.Keyword, 'CIDR': tokens.Keyword, 'CIRCLE': tokens.Keyword, 'DOUBLE PRECISION': tokens.Keyword, 'INET': tokens.Keyword, 'JSON': tokens.Keyword, 'JSONB': tokens.Keyword, 'LINE': tokens.Keyword, 'LSEG': tokens.Keyword, 'MACADDR': tokens.Keyword, 'MONEY': tokens.Keyword, 'PATH': tokens.Keyword, 'PG_LSN': tokens.Keyword, 'POINT': tokens.Keyword, 'POLYGON': tokens.Keyword, 'SMALLSERIAL': tokens.Keyword, 'TSQUERY': tokens.Keyword, 'TSVECTOR': tokens.Keyword, 'TXID_SNAPSHOT': tokens.Keyword, 'UUID': tokens.Keyword, 'XML': tokens.Keyword, 'FOR': tokens.Keyword, 'IN': tokens.Keyword, 'LOOP': tokens.Keyword, } # Hive Syntax KEYWORDS_HQL = { 'EXPLODE': tokens.Keyword, 'DIRECTORY': tokens.Keyword, 'DISTRIBUTE': tokens.Keyword, 'INCLUDE': tokens.Keyword, 'LOCATE': tokens.Keyword, 'OVERWRITE': tokens.Keyword, 'POSEXPLODE': tokens.Keyword, 'ARRAY_CONTAINS': tokens.Keyword, 'CMP': tokens.Keyword, 'COLLECT_LIST': tokens.Keyword, 'CONCAT': tokens.Keyword, 'CONDITION': tokens.Keyword, 'DATE_ADD': tokens.Keyword, 'DATE_SUB': tokens.Keyword, 'DECODE': tokens.Keyword, 'DBMS_OUTPUT': tokens.Keyword, 'ELEMENTS': tokens.Keyword, 'EXCHANGE': tokens.Keyword, 'EXTENDED': tokens.Keyword, 'FLOOR': tokens.Keyword, 'FOLLOWING': tokens.Keyword, 'FROM_UNIXTIME': tokens.Keyword, 'FTP': tokens.Keyword, 'HOUR': tokens.Keyword, 'INLINE': tokens.Keyword, 'INSTR': tokens.Keyword, 'LEN': tokens.Keyword, 'MAP': tokens.Name.Builtin, 'MAXELEMENT': tokens.Keyword, 'MAXINDEX': tokens.Keyword, 'MAX_PART_DATE': tokens.Keyword, 'MAX_PART_INT': tokens.Keyword, 'MAX_PART_STRING': tokens.Keyword, 'MINELEMENT': tokens.Keyword, 'MININDEX': tokens.Keyword, 'MIN_PART_DATE': tokens.Keyword, 'MIN_PART_INT': tokens.Keyword, 'MIN_PART_STRING': tokens.Keyword, 'NOW': tokens.Keyword, 'NVL': tokens.Keyword, 'NVL2': tokens.Keyword, 'PARSE_URL_TUPLE': tokens.Keyword, 'PART_LOC': tokens.Keyword, 'PART_COUNT': tokens.Keyword, 'PART_COUNT_BY': tokens.Keyword, 'PRINT': tokens.Keyword, 'PUT_LINE': tokens.Keyword, 'RANGE': tokens.Keyword, 'REDUCE': tokens.Keyword, 'REGEXP_REPLACE': tokens.Keyword, 'RESIGNAL': tokens.Keyword, 'RTRIM': tokens.Keyword, 'SIGN': tokens.Keyword, 'SIGNAL': tokens.Keyword, 'SIN': tokens.Keyword, 'SPLIT': tokens.Keyword, 'SQRT': tokens.Keyword, 'STACK': tokens.Keyword, 'STR': tokens.Keyword, 'STRING': tokens.Name.Builtin, 'STRUCT': tokens.Name.Builtin, 'SUBSTR': 
tokens.Keyword, 'SUMMARY': tokens.Keyword, 'TBLPROPERTIES': tokens.Keyword, 'TIMESTAMP': tokens.Name.Builtin, 'TIMESTAMP_ISO': tokens.Keyword, 'TO_CHAR': tokens.Keyword, 'TO_DATE': tokens.Keyword, 'TO_TIMESTAMP': tokens.Keyword, 'TRUNC': tokens.Keyword, 'UNBOUNDED': tokens.Keyword, 'UNIQUEJOIN': tokens.Keyword, 'UNIX_TIMESTAMP': tokens.Keyword, 'UTC_TIMESTAMP': tokens.Keyword, 'VIEWS': tokens.Keyword, 'EXIT': tokens.Keyword, 'BREAK': tokens.Keyword, 'LEAVE': tokens.Keyword, } KEYWORDS_MSACCESS = { 'DISTINCTROW': tokens.Keyword, } KEYWORDS_SNOWFLAKE = { 'ACCOUNT': tokens.Keyword, 'GSCLUSTER': tokens.Keyword, 'ISSUE': tokens.Keyword, 'ORGANIZATION': tokens.Keyword, 'PIVOT': tokens.Keyword, 'QUALIFY': tokens.Keyword, 'REGEXP': tokens.Keyword, 'RLIKE': tokens.Keyword, 'SAMPLE': tokens.Keyword, 'TRY_CAST': tokens.Keyword, 'UNPIVOT': tokens.Keyword, 'VARIANT': tokens.Name.Builtin, } KEYWORDS_BIGQUERY = { 'ASSERT_ROWS_MODIFIED': tokens.Keyword, 'DEFINE': tokens.Keyword, 'ENUM': tokens.Keyword, 'HASH': tokens.Keyword, 'LOOKUP': tokens.Keyword, 'PRECEDING': tokens.Keyword, 'PROTO': tokens.Keyword, 'RESPECT': tokens.Keyword, 'TABLESAMPLE': tokens.Keyword, 'BIGNUMERIC': tokens.Name.Builtin, } sqlparse-0.5.4/sqlparse/lexer.py0000644000000000000000000001354713615410400013627 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """SQL Lexer""" import re from threading import Lock # This code is based on the SqlLexer in pygments. # http://pygments.org/ # It's separated from the rest of pygments to increase performance # and to allow some customizations. from io import TextIOBase from sqlparse import tokens, keywords from sqlparse.utils import consume class Lexer: """The Lexer supports configurable syntax. To add support for additional keywords, use the `add_keywords` method.""" _default_instance = None _lock = Lock() # Development notes: # - This class is prepared to be able to support additional SQL dialects # in the future by adding additional functions that take the place of # the function default_initialization(). # - The lexer class uses an explicit singleton behavior with the # instance-getter method get_default_instance(). This mechanism has # the advantage that the call signature of the entry-points to the # sqlparse library are not affected. Also, usage of sqlparse in third # party code does not need to be adapted. On the other hand, the current # implementation does not easily allow for multiple SQL dialects to be # parsed in the same process. # Such behavior can be supported in the future by passing a # suitably initialized lexer object as an additional parameter to the # entry-point functions (such as `parse`). Code will need to be written # to pass down and utilize such an object. The current implementation # is prepared to support this thread safe approach without the # default_instance part needing to change interface. @classmethod def get_default_instance(cls): """Returns the lexer instance used internally by the sqlparse core functions.""" with cls._lock: if cls._default_instance is None: cls._default_instance = cls() cls._default_instance.default_initialization() return cls._default_instance def default_initialization(self): """Initialize the lexer with default dictionaries. 
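To load a reduced configuration instead, clear the lexer and add only
        the pieces you need (an illustrative sketch using only methods
        defined on this class):

            >>> from sqlparse import keywords
            >>> from sqlparse.lexer import Lexer
            >>> lexer = Lexer.get_default_instance()
            >>> lexer.clear()
            >>> lexer.set_SQL_REGEX(keywords.SQL_REGEX)
            >>> lexer.add_keywords(keywords.KEYWORDS_COMMON)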
Useful if you need to revert custom syntax settings.""" self.clear() self.set_SQL_REGEX(keywords.SQL_REGEX) self.add_keywords(keywords.KEYWORDS_COMMON) self.add_keywords(keywords.KEYWORDS_ORACLE) self.add_keywords(keywords.KEYWORDS_MYSQL) self.add_keywords(keywords.KEYWORDS_PLPGSQL) self.add_keywords(keywords.KEYWORDS_HQL) self.add_keywords(keywords.KEYWORDS_MSACCESS) self.add_keywords(keywords.KEYWORDS_SNOWFLAKE) self.add_keywords(keywords.KEYWORDS_BIGQUERY) self.add_keywords(keywords.KEYWORDS) def clear(self): """Clear all syntax configurations. Useful if you want to load a reduced set of syntax configurations. After this call, regexps and keyword dictionaries need to be loaded to make the lexer functional again.""" self._SQL_REGEX = [] self._keywords = [] def set_SQL_REGEX(self, SQL_REGEX): """Set the list of regex that will parse the SQL.""" FLAGS = re.IGNORECASE | re.UNICODE self._SQL_REGEX = [ (re.compile(rx, FLAGS).match, tt) for rx, tt in SQL_REGEX ] def add_keywords(self, keywords): """Add keyword dictionaries. Keywords are looked up in the same order that dictionaries were added.""" self._keywords.append(keywords) def is_keyword(self, value): """Checks for a keyword. If the given value is in one of the KEYWORDS_* dictionaries it's considered a keyword. Otherwise, tokens.Name is returned. """ val = value.upper() for kwdict in self._keywords: if val in kwdict: return kwdict[val], value else: return tokens.Name, value def get_tokens(self, text, encoding=None): """Return an iterable of (tokentype, value) pairs generated from ``text``. ``text`` may be a string, bytes, or a file-like object. Bytes are decoded using *encoding* if given; otherwise UTF-8 is tried first, falling back to unicode-escape. """ if isinstance(text, TextIOBase): text = text.read() if isinstance(text, str): pass elif isinstance(text, bytes): if encoding: text = text.decode(encoding) else: try: text = text.decode('utf-8') except UnicodeDecodeError: text = text.decode('unicode-escape') else: raise TypeError("Expected text or file-like object, got {!r}". format(type(text))) iterable = enumerate(text) for pos, char in iterable: for rexmatch, action in self._SQL_REGEX: m = rexmatch(text, pos) if not m: continue elif isinstance(action, tokens._TokenType): yield action, m.group() elif action is keywords.PROCESS_AS_KEYWORD: yield self.is_keyword(m.group()) consume(iterable, m.end() - pos - 1) break else: yield tokens.Error, char def tokenize(sql, encoding=None): """Tokenize sql. Tokenize *sql* using the :class:`Lexer` and return a 2-tuple stream of ``(token type, value)`` items.
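Example (illustrative; assumes the default keyword configuration):

        >>> from sqlparse import lexer
        >>> for ttype, value in lexer.tokenize('SELECT 1;'):
        ...     print(ttype, repr(value))
        Token.Keyword.DML 'SELECT'
        Token.Text.Whitespace ' '
        Token.Literal.Number.Integer '1'
        Token.Punctuation ';'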
""" return Lexer.get_default_instance().get_tokens(sql, encoding) sqlparse-0.5.4/sqlparse/py.typed0000644000000000000000000000000013615410400013611 0ustar00sqlparse-0.5.4/sqlparse/sql.py0000644000000000000000000005062513615410400013305 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """This module contains classes representing syntactical elements of SQL.""" import re from sqlparse import tokens as T from sqlparse.utils import imt, remove_quotes class NameAliasMixin: """Implements get_real_name and get_alias.""" def get_real_name(self): """Returns the real name (object name) of this identifier.""" # a.b dot_idx, _ = self.token_next_by(m=(T.Punctuation, '.')) return self._get_first_name(dot_idx, real_name=True) def get_alias(self): """Returns the alias for this identifier or ``None``.""" # "name AS alias" kw_idx, kw = self.token_next_by(m=(T.Keyword, 'AS')) if kw is not None: return self._get_first_name(kw_idx + 1, keywords=True) # "name alias" or "complicated column expression alias" _, ws = self.token_next_by(t=T.Whitespace) if len(self.tokens) > 2 and ws is not None: return self._get_first_name(reverse=True) class Token: """Base class for all other classes in this module. It represents a single token and has two instance attributes: ``value`` is the unchanged value of the token and ``ttype`` is the type of the token. """ __slots__ = ('value', 'ttype', 'parent', 'normalized', 'is_keyword', 'is_group', 'is_whitespace', 'is_newline') def __init__(self, ttype, value): value = str(value) self.value = value self.ttype = ttype self.parent = None self.is_group = False self.is_keyword = ttype in T.Keyword self.is_whitespace = self.ttype in T.Whitespace self.is_newline = self.ttype in T.Newline self.normalized = value.upper() if self.is_keyword else value def __str__(self): return self.value # Pending tokenlist __len__ bug fix # def __len__(self): # return len(self.value) def __repr__(self): cls = self._get_repr_name() value = self._get_repr_value() q = '"' if value.startswith("'") and value.endswith("'") else "'" return "<{cls} {q}{value}{q} at 0x{id:2X}>".format( id=id(self), **locals()) def _get_repr_name(self): return str(self.ttype).split('.')[-1] def _get_repr_value(self): raw = str(self) if len(raw) > 7: raw = raw[:6] + '...' return re.sub(r'\s+', ' ', raw) def flatten(self): """Resolve subgroups.""" yield self def match(self, ttype, values, regex=False): """Checks whether the token matches the given arguments. *ttype* is a token type as defined in `sqlparse.tokens`. If it does not match, ``False`` is returned. *values* is a list of possible values for this token. For match to be considered valid, the token value needs to be in this list. For tokens of type ``Keyword`` the comparison is case-insensitive. For convenience, a single value can be given passed as a string. If *regex* is ``True``, the given values are treated as regular expressions. Partial matches are allowed. Defaults to ``False``. 
""" type_matched = self.ttype is ttype if not type_matched or values is None: return type_matched if isinstance(values, str): values = (values,) if regex: # TODO: Add test for regex with is_keyword = false flag = re.IGNORECASE if self.is_keyword else 0 values = (re.compile(v, flag) for v in values) for pattern in values: if pattern.search(self.normalized): return True return False if self.is_keyword: values = (v.upper() for v in values) return self.normalized in values def within(self, group_cls): """Returns ``True`` if this token is within *group_cls*. Use this method for example to check if an identifier is within a function: ``t.within(sql.Function)``. """ parent = self.parent while parent: if isinstance(parent, group_cls): return True parent = parent.parent return False def is_child_of(self, other): """Returns ``True`` if this token is a direct child of *other*.""" return self.parent == other def has_ancestor(self, other): """Returns ``True`` if *other* is in this tokens ancestry.""" parent = self.parent while parent: if parent == other: return True parent = parent.parent return False class TokenList(Token): """A group of tokens. It has an additional instance attribute ``tokens`` which holds a list of child-tokens. """ __slots__ = 'tokens' def __init__(self, tokens=None): self.tokens = tokens or [] [setattr(token, 'parent', self) for token in self.tokens] super().__init__(None, str(self)) self.is_group = True def __str__(self): return ''.join(token.value for token in self.flatten()) # weird bug # def __len__(self): # return len(self.tokens) def __iter__(self): return iter(self.tokens) def __getitem__(self, item): return self.tokens[item] def _get_repr_name(self): return type(self).__name__ def _pprint_tree(self, max_depth=None, depth=0, f=None, _pre=''): """Pretty-print the object tree.""" token_count = len(self.tokens) for idx, token in enumerate(self.tokens): cls = token._get_repr_name() value = token._get_repr_value() last = idx == (token_count - 1) pre = '`- ' if last else '|- ' q = '"' if value.startswith("'") and value.endswith("'") else "'" print(f"{_pre}{pre}{idx} {cls} {q}{value}{q}", file=f) if token.is_group and (max_depth is None or depth < max_depth): parent_pre = ' ' if last else '| ' token._pprint_tree(max_depth, depth + 1, f, _pre + parent_pre) def get_token_at_offset(self, offset): """Returns the token that is on position offset.""" idx = 0 for token in self.flatten(): end = idx + len(token.value) if idx <= offset < end: return token idx = end def flatten(self): """Generator yielding ungrouped tokens. This method is recursively called for all child tokens. """ for token in self.tokens: if token.is_group: yield from token.flatten() else: yield token def get_sublists(self): for token in self.tokens: if token.is_group: yield token @property def _groupable_tokens(self): return self.tokens def _token_matching(self, funcs, start=0, end=None, reverse=False): """next token that match functions""" if start is None: return None if not isinstance(funcs, (list, tuple)): funcs = (funcs,) if reverse: assert end is None indexes = range(start - 2, -1, -1) else: if end is None: end = len(self.tokens) indexes = range(start, end) for idx in indexes: token = self.tokens[idx] for func in funcs: if func(token): return idx, token return None, None def token_first(self, skip_ws=True, skip_cm=False): """Returns the first child token. If *skip_ws* is ``True`` (the default), whitespace tokens are ignored. if *skip_cm* is ``True`` (default: ``False``), comments are ignored too. 
""" # this on is inconsistent, using Comment instead of T.Comment... def matcher(tk): return not ((skip_ws and tk.is_whitespace) or (skip_cm and imt(tk, t=T.Comment, i=Comment))) return self._token_matching(matcher)[1] def token_next_by(self, i=None, m=None, t=None, idx=-1, end=None): idx += 1 return self._token_matching(lambda tk: imt(tk, i, m, t), idx, end) def token_not_matching(self, funcs, idx): funcs = (funcs,) if not isinstance(funcs, (list, tuple)) else funcs funcs = [lambda tk: not func(tk) for func in funcs] return self._token_matching(funcs, idx) def token_matching(self, funcs, idx): return self._token_matching(funcs, idx)[1] def token_prev(self, idx, skip_ws=True, skip_cm=False): """Returns the previous token relative to *idx*. If *skip_ws* is ``True`` (the default) whitespace tokens are ignored. If *skip_cm* is ``True`` comments are ignored. ``None`` is returned if there's no previous token. """ return self.token_next(idx, skip_ws, skip_cm, _reverse=True) # TODO: May need to re-add default value to idx def token_next(self, idx, skip_ws=True, skip_cm=False, _reverse=False): """Returns the next token relative to *idx*. If *skip_ws* is ``True`` (the default) whitespace tokens are ignored. If *skip_cm* is ``True`` comments are ignored. ``None`` is returned if there's no next token. """ if idx is None: return None, None idx += 1 # alot of code usage current pre-compensates for this def matcher(tk): return not ((skip_ws and tk.is_whitespace) or (skip_cm and imt(tk, t=T.Comment, i=Comment))) return self._token_matching(matcher, idx, reverse=_reverse) def token_index(self, token, start=0): """Return list index of token.""" start = start if isinstance(start, int) else self.token_index(start) return start + self.tokens[start:].index(token) def group_tokens(self, grp_cls, start, end, include_end=True, extend=False): """Replace tokens by an instance of *grp_cls*.""" start_idx = start start = self.tokens[start_idx] end_idx = end + include_end # will be needed later for new group_clauses # while skip_ws and tokens and tokens[-1].is_whitespace: # tokens = tokens[:-1] if extend and isinstance(start, grp_cls): subtokens = self.tokens[start_idx + 1:end_idx] grp = start grp.tokens.extend(subtokens) del self.tokens[start_idx + 1:end_idx] grp.value = str(start) else: subtokens = self.tokens[start_idx:end_idx] grp = grp_cls(subtokens) self.tokens[start_idx:end_idx] = [grp] grp.parent = self for token in subtokens: token.parent = grp return grp def insert_before(self, where, token): """Inserts *token* before *where*.""" if not isinstance(where, int): where = self.token_index(where) token.parent = self self.tokens.insert(where, token) def insert_after(self, where, token, skip_ws=True): """Inserts *token* after *where*.""" if not isinstance(where, int): where = self.token_index(where) nidx, next_ = self.token_next(where, skip_ws=skip_ws) token.parent = self if next_ is None: self.tokens.append(token) else: self.tokens.insert(nidx, token) def has_alias(self): """Returns ``True`` if an alias is present.""" return self.get_alias() is not None def get_alias(self): """Returns the alias for this identifier or ``None``.""" return None def get_name(self): """Returns the name of this identifier. This is either it's alias or it's real name. The returned valued can be considered as the name under which the object corresponding to this identifier is known within the current statement. 
""" return self.get_alias() or self.get_real_name() def get_real_name(self): """Returns the real name (object name) of this identifier.""" return None def get_parent_name(self): """Return name of the parent object if any. A parent object is identified by the first occurring dot. """ dot_idx, _ = self.token_next_by(m=(T.Punctuation, '.')) _, prev_ = self.token_prev(dot_idx) return remove_quotes(prev_.value) if prev_ is not None else None def _get_first_name(self, idx=None, reverse=False, keywords=False, real_name=False): """Returns the name of the first token with a name""" tokens = self.tokens[idx:] if idx else self.tokens tokens = reversed(tokens) if reverse else tokens types = [T.Name, T.Wildcard, T.String.Symbol] if keywords: types.append(T.Keyword) for token in tokens: if token.ttype in types: return remove_quotes(token.value) elif isinstance(token, (Identifier, Function)): return token.get_real_name() if real_name else token.get_name() class Statement(TokenList): """Represents a SQL statement.""" def get_type(self): """Returns the type of a statement. The returned value is a string holding an upper-cased reprint of the first DML or DDL keyword. If the first token in this group isn't a DML or DDL keyword "UNKNOWN" is returned. Whitespaces and comments at the beginning of the statement are ignored. """ token = self.token_first(skip_cm=True) if token is None: # An "empty" statement that either has not tokens at all # or only whitespace tokens. return 'UNKNOWN' elif token.ttype in (T.Keyword.DML, T.Keyword.DDL): return token.normalized elif token.ttype == T.Keyword.CTE: # The WITH keyword should be followed by either an Identifier or # an IdentifierList containing the CTE definitions; the actual # DML keyword (e.g. SELECT, INSERT) will follow next. tidx = self.token_index(token) while tidx is not None: tidx, token = self.token_next(tidx, skip_ws=True) if isinstance(token, (Identifier, IdentifierList)): tidx, token = self.token_next(tidx, skip_ws=True) if token is not None \ and token.ttype == T.Keyword.DML: return token.normalized # Hmm, probably invalid syntax, so return unknown. return 'UNKNOWN' class Identifier(NameAliasMixin, TokenList): """Represents an identifier. Identifiers may have aliases or typecasts. """ def is_wildcard(self): """Return ``True`` if this identifier contains a wildcard.""" _, token = self.token_next_by(t=T.Wildcard) return token is not None def get_typecast(self): """Returns the typecast or ``None`` of this object as a string.""" midx, marker = self.token_next_by(m=(T.Punctuation, '::')) nidx, next_ = self.token_next(midx, skip_ws=False) return next_.value if next_ else None def get_ordering(self): """Returns the ordering or ``None`` as uppercase string.""" _, ordering = self.token_next_by(t=T.Keyword.Order) return ordering.normalized if ordering else None def get_array_indices(self): """Returns an iterator of index token lists""" for token in self.tokens: if isinstance(token, SquareBrackets): # Use [1:-1] index to discard the square brackets yield token.tokens[1:-1] class IdentifierList(TokenList): """A list of :class:`~sqlparse.sql.Identifier`\'s.""" def get_identifiers(self): """Returns the identifiers. Whitespaces and punctuations are not included in this generator. 
""" for token in self.tokens: if not (token.is_whitespace or token.match(T.Punctuation, ',')): yield token class TypedLiteral(TokenList): """A typed literal, such as "date '2001-09-28'" or "interval '2 hours'".""" M_OPEN = [(T.Name.Builtin, None), (T.Keyword, "TIMESTAMP")] M_CLOSE = T.String.Single, None M_EXTEND = T.Keyword, ("DAY", "HOUR", "MINUTE", "MONTH", "SECOND", "YEAR") class Parenthesis(TokenList): """Tokens between parenthesis.""" M_OPEN = T.Punctuation, '(' M_CLOSE = T.Punctuation, ')' @property def _groupable_tokens(self): return self.tokens[1:-1] class SquareBrackets(TokenList): """Tokens between square brackets""" M_OPEN = T.Punctuation, '[' M_CLOSE = T.Punctuation, ']' @property def _groupable_tokens(self): return self.tokens[1:-1] class Assignment(TokenList): """An assignment like 'var := val;'""" class If(TokenList): """An 'if' clause with possible 'else if' or 'else' parts.""" M_OPEN = T.Keyword, 'IF' M_CLOSE = T.Keyword, 'END IF' class For(TokenList): """A 'FOR' loop.""" M_OPEN = T.Keyword, ('FOR', 'FOREACH') M_CLOSE = T.Keyword, 'END LOOP' class Comparison(TokenList): """A comparison used for example in WHERE clauses.""" @property def left(self): return self.tokens[0] @property def right(self): return self.tokens[-1] class Comment(TokenList): """A comment.""" def is_multiline(self): return self.tokens and self.tokens[0].ttype == T.Comment.Multiline class Where(TokenList): """A WHERE clause.""" M_OPEN = T.Keyword, 'WHERE' M_CLOSE = T.Keyword, ( 'ORDER BY', 'GROUP BY', 'LIMIT', 'UNION', 'UNION ALL', 'EXCEPT', 'INTERSECT', 'HAVING', 'RETURNING', 'INTO') class Over(TokenList): """An OVER clause.""" M_OPEN = T.Keyword, 'OVER' class Having(TokenList): """A HAVING clause.""" M_OPEN = T.Keyword, 'HAVING' M_CLOSE = T.Keyword, ('ORDER BY', 'LIMIT') class Case(TokenList): """A CASE statement with one or more WHEN and possibly an ELSE part.""" M_OPEN = T.Keyword, 'CASE' M_CLOSE = T.Keyword, 'END' def get_cases(self, skip_ws=False): """Returns a list of 2-tuples (condition, value). If an ELSE exists condition is None. 
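Example (illustrative; one entry for the WHEN branch, one for ELSE):

            >>> import sqlparse
            >>> case = sqlparse.parse(
            ...     "CASE WHEN foo = 1 THEN 'one' ELSE 'many' END")[0].tokens[0]
            >>> len(case.get_cases(skip_ws=True))
            2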
""" CONDITION = 1 VALUE = 2 ret = [] mode = CONDITION for token in self.tokens: # Set mode from the current statement if token.match(T.Keyword, 'CASE'): continue elif skip_ws and token.ttype in T.Whitespace: continue elif token.match(T.Keyword, 'WHEN'): ret.append(([], [])) mode = CONDITION elif token.match(T.Keyword, 'THEN'): mode = VALUE elif token.match(T.Keyword, 'ELSE'): ret.append((None, [])) mode = VALUE elif token.match(T.Keyword, 'END'): mode = None # First condition without preceding WHEN if mode and not ret: ret.append(([], [])) # Append token depending of the current mode if mode == CONDITION: ret[-1][0].append(token) elif mode == VALUE: ret[-1][1].append(token) # Return cases list return ret class Function(NameAliasMixin, TokenList): """A function or procedure call.""" def get_parameters(self): """Return a list of parameters.""" parenthesis = self.token_next_by(i=Parenthesis)[1] result = [] for token in parenthesis.tokens: if isinstance(token, IdentifierList): return token.get_identifiers() elif imt(token, i=(Function, Identifier, TypedLiteral), t=T.Literal): result.append(token) return result def get_window(self): """Return the window if it exists.""" over_clause = self.token_next_by(i=Over) if not over_clause: return None return over_clause[1].tokens[-1] class Begin(TokenList): """A BEGIN/END block.""" M_OPEN = T.Keyword, 'BEGIN' M_CLOSE = T.Keyword, 'END' class Operation(TokenList): """Grouping of operations""" class Values(TokenList): """Grouping of values""" class Command(TokenList): """Grouping of CLI commands.""" sqlparse-0.5.4/sqlparse/tokens.py0000644000000000000000000000336313615410400014006 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause # # The Token implementation is based on pygment's token system written # by Georg Brandl. # http://pygments.org/ """Tokens""" class _TokenType(tuple): parent = None def __contains__(self, item): return item is not None and (self is item or item[:len(self)] == self) def __getattr__(self, name): # don't mess with dunder if name.startswith('__'): return super().__getattr__(self, name) new = _TokenType(self + (name,)) setattr(self, name, new) new.parent = self return new def __repr__(self): # self can be False only if its the `root` i.e. Token itself return 'Token' + ('.' if self else '') + '.'.join(self) Token = _TokenType() # Special token types Text = Token.Text Whitespace = Text.Whitespace Newline = Whitespace.Newline Error = Token.Error # Text that doesn't belong to this lexer (e.g. HTML in PHP) Other = Token.Other # Common token types for source code Keyword = Token.Keyword Name = Token.Name Literal = Token.Literal String = Literal.String Number = Literal.Number Punctuation = Token.Punctuation Operator = Token.Operator Comparison = Operator.Comparison Wildcard = Token.Wildcard Comment = Token.Comment Assignment = Token.Assignment # Generic types for non-source code Generic = Token.Generic Command = Generic.Command # String and some others are not direct children of Token. 
# alias them: Token.Token = Token Token.String = String Token.Number = Number # SQL specific tokens DML = Keyword.DML DDL = Keyword.DDL CTE = Keyword.CTE sqlparse-0.5.4/sqlparse/utils.py0000644000000000000000000000662313615410400013645 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause import itertools import re from collections import deque from contextlib import contextmanager # This regular expression replaces the home-cooked parser that was here before. # It is much faster, but requires an extra post-processing step to get the # desired results (that are compatible with what you would expect from the # str.splitlines() method). # # It matches groups of characters: newlines, quoted strings, or unquoted text, # and splits on that basis. The post-processing step puts those back together # into the actual lines of SQL. SPLIT_REGEX = re.compile(r""" ( (?: # Start of non-capturing group (?:\r\n|\r|\n) | # Match any single newline, or [^\r\n'"]+ | # Match any character series without quotes or # newlines, or "(?:[^"\\]|\\.)*" | # Match double-quoted strings, or '(?:[^'\\]|\\.)*' # Match single quoted strings ) ) """, re.VERBOSE) LINE_MATCH = re.compile(r'(\r\n|\r|\n)') def split_unquoted_newlines(stmt): """Split a string on all unquoted newlines. Unlike str.splitlines(), this will ignore CR/LF/CR+LF if the requisite character is inside of a string.""" text = str(stmt) lines = SPLIT_REGEX.split(text) outputlines = [''] for line in lines: if not line: continue elif LINE_MATCH.match(line): outputlines.append('') else: outputlines[-1] += line return outputlines def remove_quotes(val): """Helper that removes surrounding quotes from strings.""" if val is None: return if val[0] in ('"', "'", '`') and val[0] == val[-1]: val = val[1:-1] return val def recurse(*cls): """Function decorator to help with recursion :param cls: Classes to not recurse over :return: function """ def wrap(f): def wrapped_f(tlist): for sgroup in tlist.get_sublists(): if not isinstance(sgroup, cls): wrapped_f(sgroup) f(tlist) return wrapped_f return wrap def imt(token, i=None, m=None, t=None): """Helper function to simplify comparisons Instance, Match and TokenType :param token: :param i: Class or Tuple/List of Classes :param m: Tuple of TokenType & Value. Can be list of Tuple for multiple :param t: TokenType or Tuple/List of TokenTypes :return: bool """ if token is None: return False if i and isinstance(token, i): return True if m: if isinstance(m, list): if any(token.match(*pattern) for pattern in m): return True elif token.match(*m): return True if t: if isinstance(t, list): if any(token.ttype in ttype for ttype in t): return True elif token.ttype in t: return True return False def consume(iterator, n): """Advance the iterator n-steps ahead. 
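Example (illustrative):

        >>> it = iter('abcdef')
        >>> consume(it, 2)
        >>> next(it)
        'c'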
If n is none, consume entirely.""" deque(itertools.islice(iterator, n), maxlen=0) @contextmanager def offset(filter_, n=0): filter_.offset += n yield filter_.offset -= n @contextmanager def indent(filter_, n=1): filter_.indent += n yield filter_.indent -= n sqlparse-0.5.4/sqlparse/engine/__init__.py0000644000000000000000000000067713615410400015514 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse.engine import grouping from sqlparse.engine.filter_stack import FilterStack from sqlparse.engine.statement_splitter import StatementSplitter __all__ = [ 'grouping', 'FilterStack', 'StatementSplitter', ] sqlparse-0.5.4/sqlparse/engine/filter_stack.py0000644000000000000000000000310013615410400016407 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause """filter""" from sqlparse import lexer from sqlparse.engine import grouping from sqlparse.engine.statement_splitter import StatementSplitter from sqlparse.exceptions import SQLParseError from sqlparse.filters import StripTrailingSemicolonFilter class FilterStack: def __init__(self, strip_semicolon=False): self.preprocess = [] self.stmtprocess = [] self.postprocess = [] self._grouping = False if strip_semicolon: self.stmtprocess.append(StripTrailingSemicolonFilter()) def enable_grouping(self): self._grouping = True def run(self, sql, encoding=None): try: stream = lexer.tokenize(sql, encoding) # Process token stream for filter_ in self.preprocess: stream = filter_.process(stream) stream = StatementSplitter().process(stream) # Output: Stream processed Statements for stmt in stream: if self._grouping: stmt = grouping.group(stmt) for filter_ in self.stmtprocess: filter_.process(stmt) for filter_ in self.postprocess: stmt = filter_.process(stmt) yield stmt except RecursionError as err: raise SQLParseError('Maximum recursion depth exceeded') from err sqlparse-0.5.4/sqlparse/engine/grouping.py0000644000000000000000000003635113615410400015605 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import sql from sqlparse import tokens as T from sqlparse.utils import recurse, imt # Maximum recursion depth for grouping operations to prevent DoS attacks # Set to None to disable limit (not recommended for untrusted input) MAX_GROUPING_DEPTH = 100 # Maximum number of tokens to process in one grouping operation to prevent # DoS attacks. # Set to None to disable limit (not recommended for untrusted input) MAX_GROUPING_TOKENS = 10000 T_NUMERICAL = (T.Number, T.Number.Integer, T.Number.Float) T_STRING = (T.String, T.String.Single, T.String.Symbol) T_NAME = (T.Name, T.Name.Placeholder) def _group_matching(tlist, cls, depth=0): """Groups Tokens that have beginning and end.""" if MAX_GROUPING_DEPTH is not None and depth > MAX_GROUPING_DEPTH: return # Limit the number of tokens to prevent DoS attacks if MAX_GROUPING_TOKENS is not None \ and len(tlist.tokens) > MAX_GROUPING_TOKENS: return opens = [] tidx_offset = 0 token_list = list(tlist) for idx, token in enumerate(token_list): tidx = idx - tidx_offset if token.is_whitespace: # ~50% of tokens will be whitespace. 
Checking early # for them avoids 3 comparisons, but then adds 1 more comparison # for the other ~50% of tokens. continue if token.is_group and not isinstance(token, cls): # Check inside previously grouped (i.e. parenthesis) if group # of different type is inside (i.e., case). though ideally it # should check for all open/close tokens at once to avoid recursion _group_matching(token, cls, depth + 1) continue if token.match(*cls.M_OPEN): opens.append(tidx) elif token.match(*cls.M_CLOSE): try: open_idx = opens.pop() except IndexError: # this indicates invalid sql and unbalanced tokens. # instead of break, continue in case other "valid" groups exist continue close_idx = tidx tlist.group_tokens(cls, open_idx, close_idx) tidx_offset += close_idx - open_idx def group_brackets(tlist): _group_matching(tlist, sql.SquareBrackets) def group_parenthesis(tlist): _group_matching(tlist, sql.Parenthesis) def group_case(tlist): _group_matching(tlist, sql.Case) def group_if(tlist): _group_matching(tlist, sql.If) def group_for(tlist): _group_matching(tlist, sql.For) def group_begin(tlist): _group_matching(tlist, sql.Begin) def group_typecasts(tlist): def match(token): return token.match(T.Punctuation, '::') def valid(token): return token is not None def post(tlist, pidx, tidx, nidx): return pidx, nidx valid_prev = valid_next = valid _group(tlist, sql.Identifier, match, valid_prev, valid_next, post) def group_tzcasts(tlist): def match(token): return token.ttype == T.Keyword.TZCast def valid_prev(token): return token is not None def valid_next(token): return token is not None and ( token.is_whitespace or token.match(T.Keyword, 'AS') or token.match(*sql.TypedLiteral.M_CLOSE) ) def post(tlist, pidx, tidx, nidx): return pidx, nidx _group(tlist, sql.Identifier, match, valid_prev, valid_next, post) def group_typed_literal(tlist): # definitely not complete, see e.g.: # https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/interval-literal-syntax # https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/interval-literals # https://www.postgresql.org/docs/9.1/datatype-datetime.html # https://www.postgresql.org/docs/9.1/functions-datetime.html def match(token): return imt(token, m=sql.TypedLiteral.M_OPEN) def match_to_extend(token): return isinstance(token, sql.TypedLiteral) def valid_prev(token): return token is not None def valid_next(token): return token is not None and token.match(*sql.TypedLiteral.M_CLOSE) def valid_final(token): return token is not None and token.match(*sql.TypedLiteral.M_EXTEND) def post(tlist, pidx, tidx, nidx): return tidx, nidx _group(tlist, sql.TypedLiteral, match, valid_prev, valid_next, post, extend=False) _group(tlist, sql.TypedLiteral, match_to_extend, valid_prev, valid_final, post, extend=True) def group_period(tlist): def match(token): for ttype, value in ((T.Punctuation, '.'), (T.Operator, '->'), (T.Operator, '->>')): if token.match(ttype, value): return True return False def valid_prev(token): sqlcls = sql.SquareBrackets, sql.Identifier ttypes = T.Name, T.String.Symbol return imt(token, i=sqlcls, t=ttypes) def valid_next(token): # issue261, allow invalid next token return True def post(tlist, pidx, tidx, nidx): # next_ validation is being performed here.
issue261 sqlcls = sql.SquareBrackets, sql.Function ttypes = T.Name, T.String.Symbol, T.Wildcard, T.String.Single next_ = tlist[nidx] if nidx is not None else None valid_next = imt(next_, i=sqlcls, t=ttypes) return (pidx, nidx) if valid_next else (pidx, tidx) _group(tlist, sql.Identifier, match, valid_prev, valid_next, post) def group_as(tlist): def match(token): return token.is_keyword and token.normalized == 'AS' def valid_prev(token): return token.normalized == 'NULL' or not token.is_keyword def valid_next(token): ttypes = T.DML, T.DDL, T.CTE return not imt(token, t=ttypes) and token is not None def post(tlist, pidx, tidx, nidx): return pidx, nidx _group(tlist, sql.Identifier, match, valid_prev, valid_next, post) def group_assignment(tlist): def match(token): return token.match(T.Assignment, ':=') def valid(token): return token is not None and token.ttype not in (T.Keyword,) def post(tlist, pidx, tidx, nidx): m_semicolon = T.Punctuation, ';' snidx, _ = tlist.token_next_by(m=m_semicolon, idx=nidx) nidx = snidx or nidx return pidx, nidx valid_prev = valid_next = valid _group(tlist, sql.Assignment, match, valid_prev, valid_next, post) def group_comparison(tlist): sqlcls = (sql.Parenthesis, sql.Function, sql.Identifier, sql.Operation, sql.TypedLiteral) ttypes = T_NUMERICAL + T_STRING + T_NAME def match(token): return token.ttype == T.Operator.Comparison def valid(token): if imt(token, t=ttypes, i=sqlcls): return True elif token and token.is_keyword and token.normalized == 'NULL': return True else: return False def post(tlist, pidx, tidx, nidx): return pidx, nidx valid_prev = valid_next = valid _group(tlist, sql.Comparison, match, valid_prev, valid_next, post, extend=False) @recurse(sql.Identifier) def group_identifier(tlist): ttypes = (T.String.Symbol, T.Name) tidx, token = tlist.token_next_by(t=ttypes) while token: tlist.group_tokens(sql.Identifier, tidx, tidx) tidx, token = tlist.token_next_by(t=ttypes, idx=tidx) @recurse(sql.Over) def group_over(tlist): tidx, token = tlist.token_next_by(m=sql.Over.M_OPEN) while token: nidx, next_ = tlist.token_next(tidx) if imt(next_, i=sql.Parenthesis, t=T.Name): tlist.group_tokens(sql.Over, tidx, nidx) tidx, token = tlist.token_next_by(m=sql.Over.M_OPEN, idx=tidx) def group_arrays(tlist): sqlcls = sql.SquareBrackets, sql.Identifier, sql.Function ttypes = T.Name, T.String.Symbol def match(token): return isinstance(token, sql.SquareBrackets) def valid_prev(token): return imt(token, i=sqlcls, t=ttypes) def valid_next(token): return True def post(tlist, pidx, tidx, nidx): return pidx, tidx _group(tlist, sql.Identifier, match, valid_prev, valid_next, post, extend=True, recurse=False) def group_operator(tlist): ttypes = T_NUMERICAL + T_STRING + T_NAME sqlcls = (sql.SquareBrackets, sql.Parenthesis, sql.Function, sql.Identifier, sql.Operation, sql.TypedLiteral) def match(token): return imt(token, t=(T.Operator, T.Wildcard)) def valid(token): return imt(token, i=sqlcls, t=ttypes) \ or (token and token.match( T.Keyword, ('CURRENT_DATE', 'CURRENT_TIME', 'CURRENT_TIMESTAMP'))) def post(tlist, pidx, tidx, nidx): tlist[tidx].ttype = T.Operator return pidx, nidx valid_prev = valid_next = valid _group(tlist, sql.Operation, match, valid_prev, valid_next, post, extend=False) def group_identifier_list(tlist): m_role = T.Keyword, ('null', 'role') sqlcls = (sql.Function, sql.Case, sql.Identifier, sql.Comparison, sql.IdentifierList, sql.Operation) ttypes = (T_NUMERICAL + T_STRING + T_NAME + (T.Keyword, T.Comment, T.Wildcard)) def match(token): return 
token.match(T.Punctuation, ',') def valid(token): return imt(token, i=sqlcls, m=m_role, t=ttypes) def post(tlist, pidx, tidx, nidx): return pidx, nidx valid_prev = valid_next = valid _group(tlist, sql.IdentifierList, match, valid_prev, valid_next, post, extend=True) @recurse(sql.Comment) def group_comments(tlist): tidx, token = tlist.token_next_by(t=T.Comment) while token: eidx, end = tlist.token_not_matching( lambda tk: imt(tk, t=T.Comment) or tk.is_newline, idx=tidx) if end is not None: eidx, end = tlist.token_prev(eidx, skip_ws=False) tlist.group_tokens(sql.Comment, tidx, eidx) tidx, token = tlist.token_next_by(t=T.Comment, idx=tidx) @recurse(sql.Where) def group_where(tlist): tidx, token = tlist.token_next_by(m=sql.Where.M_OPEN) while token: eidx, end = tlist.token_next_by(m=sql.Where.M_CLOSE, idx=tidx) if end is None: end = tlist._groupable_tokens[-1] else: end = tlist.tokens[eidx - 1] # TODO: convert this to eidx instead of end token. # i think above values are len(tlist) and eidx-1 eidx = tlist.token_index(end) tlist.group_tokens(sql.Where, tidx, eidx) tidx, token = tlist.token_next_by(m=sql.Where.M_OPEN, idx=tidx) @recurse() def group_aliased(tlist): I_ALIAS = (sql.Parenthesis, sql.Function, sql.Case, sql.Identifier, sql.Operation, sql.Comparison) tidx, token = tlist.token_next_by(i=I_ALIAS, t=T.Number) while token: nidx, next_ = tlist.token_next(tidx) if isinstance(next_, sql.Identifier): tlist.group_tokens(sql.Identifier, tidx, nidx, extend=True) tidx, token = tlist.token_next_by(i=I_ALIAS, t=T.Number, idx=tidx) @recurse(sql.Function) def group_functions(tlist): has_create = False has_table = False has_as = False for tmp_token in tlist.tokens: if tmp_token.value.upper() == 'CREATE': has_create = True if tmp_token.value.upper() == 'TABLE': has_table = True if tmp_token.value == 'AS': has_as = True if has_create and has_table and not has_as: return tidx, token = tlist.token_next_by(t=T.Name) while token: nidx, next_ = tlist.token_next(tidx) if isinstance(next_, sql.Parenthesis): over_idx, over = tlist.token_next(nidx) if over and isinstance(over, sql.Over): eidx = over_idx else: eidx = nidx tlist.group_tokens(sql.Function, tidx, eidx) tidx, token = tlist.token_next_by(t=T.Name, idx=tidx) @recurse(sql.Identifier) def group_order(tlist): """Group together Identifier and Asc/Desc token""" tidx, token = tlist.token_next_by(t=T.Keyword.Order) while token: pidx, prev_ = tlist.token_prev(tidx) if imt(prev_, i=sql.Identifier, t=T.Number): tlist.group_tokens(sql.Identifier, pidx, tidx) tidx = pidx tidx, token = tlist.token_next_by(t=T.Keyword.Order, idx=tidx) @recurse() def align_comments(tlist): tidx, token = tlist.token_next_by(i=sql.Comment) while token: pidx, prev_ = tlist.token_prev(tidx) if isinstance(prev_, sql.TokenList): tlist.group_tokens(sql.TokenList, pidx, tidx, extend=True) tidx = pidx tidx, token = tlist.token_next_by(i=sql.Comment, idx=tidx) def group_values(tlist): tidx, token = tlist.token_next_by(m=(T.Keyword, 'VALUES')) start_idx = tidx end_idx = -1 while token: if isinstance(token, sql.Parenthesis): end_idx = tidx tidx, token = tlist.token_next(tidx) if end_idx != -1: tlist.group_tokens(sql.Values, start_idx, end_idx, extend=True) def group(stmt): for func in [ group_comments, # _group_matching group_brackets, group_parenthesis, group_case, group_if, group_for, group_begin, group_over, group_functions, group_where, group_period, group_arrays, group_identifier, group_order, group_typecasts, group_tzcasts, group_typed_literal, group_operator, group_comparison, group_as, 
group_aliased, group_assignment, align_comments, group_identifier_list, group_values, ]: func(stmt) return stmt def _group(tlist, cls, match, valid_prev=lambda t: True, valid_next=lambda t: True, post=None, extend=True, recurse=True, depth=0 ): """Groups together tokens that are joined by a middle token, e.g. x < y""" if MAX_GROUPING_DEPTH is not None and depth > MAX_GROUPING_DEPTH: return # Limit the number of tokens to prevent DoS attacks if MAX_GROUPING_TOKENS is not None \ and len(tlist.tokens) > MAX_GROUPING_TOKENS: return tidx_offset = 0 pidx, prev_ = None, None token_list = list(tlist) for idx, token in enumerate(token_list): tidx = idx - tidx_offset if tidx < 0: # tidx shouldn't get negative continue if token.is_whitespace: continue if recurse and token.is_group and not isinstance(token, cls): _group(token, cls, match, valid_prev, valid_next, post, extend, True, depth + 1) if match(token): nidx, next_ = tlist.token_next(tidx) if prev_ and valid_prev(prev_) and valid_next(next_): from_idx, to_idx = post(tlist, pidx, tidx, nidx) grp = tlist.group_tokens(cls, from_idx, to_idx, extend=extend) tidx_offset += to_idx - from_idx pidx, prev_ = from_idx, grp continue pidx, prev_ = tidx, token sqlparse-0.5.4/sqlparse/engine/statement_splitter.py0000644000000000000000000001177713615410400017702 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import sql, tokens as T class StatementSplitter: """Filter that splits a stream into individual statements""" def __init__(self): self._reset() def _reset(self): """Set the filter attributes to their default values""" self._in_declare = False self._in_case = False self._is_create = False self._begin_depth = 0 self._seen_begin = False self.consume_ws = False self.tokens = [] self.level = 0 def _change_splitlevel(self, ttype, value): """Get the new split level (increase, decrease or remain equal)""" # parenthesis increase/decrease a level if ttype is T.Punctuation and value == '(': return 1 elif ttype is T.Punctuation and value == ')': return -1 elif ttype not in T.Keyword: # if normal token return return 0 # Everything after here is ttype = T.Keyword # Also note: once an IF statement has been entered, we are done and # basically just returning unified = value.upper() # three keywords begin with CREATE, but only one of them is DDL # DDL Create though can contain more words such as "or replace" if ttype is T.Keyword.DDL and unified.startswith('CREATE'): self._is_create = True return 0 # can have nested declare inside of begin... if unified == 'DECLARE' and self._is_create and self._begin_depth == 0: self._in_declare = True return 1 if unified == 'BEGIN': self._begin_depth += 1 self._seen_begin = True if self._is_create: # FIXME(andi): This makes no sense.
## this comment neither return 1 return 0 # BEGIN and CASE/WHEN both end with END if unified == 'END': if not self._in_case: self._begin_depth = max(0, self._begin_depth - 1) else: self._in_case = False return -1 if (unified in ('IF', 'FOR', 'WHILE', 'CASE') and self._is_create and self._begin_depth > 0): if unified == 'CASE': self._in_case = True return 1 if unified in ('END IF', 'END FOR', 'END WHILE'): return -1 # Default return 0 def process(self, stream): """Process the stream""" EOS_TTYPE = T.Whitespace, T.Comment.Single # Run over all stream tokens for ttype, value in stream: # Yield token if we finished a statement and there's no whitespaces # It will count newline token as a non whitespace. In this context # whitespace ignores newlines. # why don't multi line comments also count? if self.consume_ws and ttype not in EOS_TTYPE: yield sql.Statement(self.tokens) # Reset filter and prepare to process next statement self._reset() # Change current split level (increase, decrease or remain equal) self.level += self._change_splitlevel(ttype, value) # Append the token to the current statement self.tokens.append(sql.Token(ttype, value)) # Check if we get the end of a statement # Issue762: Allow GO (or "GO 2") as statement splitter. # When implementing a language toggle, it's not only to add # keywords it's also to change some rules, like this splitting # rule. # Issue809: Ignore semicolons inside BEGIN...END blocks, but handle # standalone BEGIN; as a transaction statement if ttype is T.Punctuation and value == ';': # If we just saw BEGIN; then this is a transaction BEGIN, # not a BEGIN...END block, so decrement depth if self._seen_begin: self._begin_depth = max(0, self._begin_depth - 1) self._seen_begin = False # Split on semicolon if not inside a BEGIN...END block if self.level <= 0 and self._begin_depth == 0: self.consume_ws = True elif ttype is T.Keyword and value.split()[0] == 'GO': self.consume_ws = True elif (ttype not in (T.Whitespace, T.Comment.Single, T.Comment.Multiline) and not (ttype is T.Keyword and value.upper() == 'BEGIN')): # Reset _seen_begin if we see a non-whitespace, non-comment # token but not for BEGIN itself (which just set the flag) self._seen_begin = False # Yield pending statement (if any) if self.tokens and not all(t.is_whitespace for t in self.tokens): yield sql.Statement(self.tokens) sqlparse-0.5.4/sqlparse/filters/__init__.py0000644000000000000000000000247713615410400015717 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse.filters.others import SerializerUnicode from sqlparse.filters.others import StripCommentsFilter from sqlparse.filters.others import StripWhitespaceFilter from sqlparse.filters.others import StripTrailingSemicolonFilter from sqlparse.filters.others import SpacesAroundOperatorsFilter from sqlparse.filters.output import OutputPHPFilter from sqlparse.filters.output import OutputPythonFilter from sqlparse.filters.tokens import KeywordCaseFilter from sqlparse.filters.tokens import IdentifierCaseFilter from sqlparse.filters.tokens import TruncateStringFilter from sqlparse.filters.reindent import ReindentFilter from sqlparse.filters.right_margin import RightMarginFilter from sqlparse.filters.aligned_indent import AlignedIndentFilter __all__ = [ 'SerializerUnicode', 'StripCommentsFilter', 'StripWhitespaceFilter', 'StripTrailingSemicolonFilter', 'SpacesAroundOperatorsFilter', 
'OutputPHPFilter', 'OutputPythonFilter', 'KeywordCaseFilter', 'IdentifierCaseFilter', 'TruncateStringFilter', 'ReindentFilter', 'RightMarginFilter', 'AlignedIndentFilter', ] sqlparse-0.5.4/sqlparse/filters/aligned_indent.py0000644000000000000000000001174713615410400017124 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import sql, tokens as T from sqlparse.utils import offset, indent class AlignedIndentFilter: join_words = (r'((LEFT\s+|RIGHT\s+|FULL\s+)?' r'(INNER\s+|OUTER\s+|STRAIGHT\s+)?|' r'(CROSS\s+|NATURAL\s+)?)?JOIN\b') by_words = r'(GROUP|ORDER)\s+BY\b' split_words = ('FROM', join_words, 'ON', by_words, 'WHERE', 'AND', 'OR', 'HAVING', 'LIMIT', 'UNION', 'VALUES', 'SET', 'BETWEEN', 'EXCEPT') def __init__(self, char=' ', n='\n'): self.n = n self.offset = 0 self.indent = 0 self.char = char self._max_kwd_len = len('select') def nl(self, offset=1): # offset = 1 represent a single space after SELECT offset = -len(offset) if not isinstance(offset, int) else offset # add two for the space and parenthesis indent = self.indent * (2 + self._max_kwd_len) return sql.Token(T.Whitespace, self.n + self.char * ( self._max_kwd_len + offset + indent + self.offset)) def _process_statement(self, tlist): if len(tlist.tokens) > 0 and tlist.tokens[0].is_whitespace \ and self.indent == 0: tlist.tokens.pop(0) # process the main query body self._process(sql.TokenList(tlist.tokens)) def _process_parenthesis(self, tlist): # if this isn't a subquery, don't re-indent _, token = tlist.token_next_by(m=(T.DML, 'SELECT')) if token is not None: with indent(self): tlist.insert_after(tlist[0], self.nl('SELECT')) # process the inside of the parenthesis self._process_default(tlist) # de-indent last parenthesis tlist.insert_before(tlist[-1], self.nl()) def _process_identifierlist(self, tlist): # columns being selected identifiers = list(tlist.get_identifiers()) identifiers.pop(0) [tlist.insert_before(token, self.nl()) for token in identifiers] self._process_default(tlist) def _process_case(self, tlist): offset_ = len('case ') + len('when ') cases = tlist.get_cases(skip_ws=True) # align the end as well end_token = tlist.token_next_by(m=(T.Keyword, 'END'))[1] cases.append((None, [end_token])) condition_width = [len(' '.join(map(str, cond))) if cond else 0 for cond, _ in cases] max_cond_width = max(condition_width) for i, (cond, value) in enumerate(cases): # cond is None when 'else or end' stmt = cond[0] if cond else value[0] if i > 0: tlist.insert_before(stmt, self.nl(offset_ - len(str(stmt)))) if cond: ws = sql.Token(T.Whitespace, self.char * ( max_cond_width - condition_width[i])) tlist.insert_after(cond[-1], ws) def _next_token(self, tlist, idx=-1): split_words = T.Keyword, self.split_words, True tidx, token = tlist.token_next_by(m=split_words, idx=idx) # treat "BETWEEN x and y" as a single statement if token and token.normalized == 'BETWEEN': tidx, token = self._next_token(tlist, tidx) if token and token.normalized == 'AND': tidx, token = self._next_token(tlist, tidx) return tidx, token def _split_kwds(self, tlist): tidx, token = self._next_token(tlist) while token: # joins, group/order by are special case. 
only consider the first # word as aligner if ( token.match(T.Keyword, self.join_words, regex=True) or token.match(T.Keyword, self.by_words, regex=True) ): token_indent = token.value.split()[0] else: token_indent = str(token) tlist.insert_before(token, self.nl(token_indent)) tidx += 1 tidx, token = self._next_token(tlist, tidx) def _process_default(self, tlist): self._split_kwds(tlist) # process any sub-sub statements for sgroup in tlist.get_sublists(): idx = tlist.token_index(sgroup) pidx, prev_ = tlist.token_prev(idx) # HACK: make "group/order by" work. Longer than max_len. offset_ = 3 if ( prev_ and prev_.match(T.Keyword, self.by_words, regex=True) ) else 0 with offset(self, offset_): self._process(sgroup) def _process(self, tlist): func_name = f'_process_{type(tlist).__name__}' func = getattr(self, func_name.lower(), self._process_default) func(tlist) def process(self, stmt): self._process(stmt) return stmt sqlparse-0.5.4/sqlparse/filters/others.py0000644000000000000000000001501513615410400015454 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause import re from sqlparse import sql, tokens as T from sqlparse.utils import split_unquoted_newlines class StripCommentsFilter: @staticmethod def _process(tlist): def get_next_comment(idx=-1): # TODO(andi) Comment types should be unified, see related issue38 return tlist.token_next_by(i=sql.Comment, t=T.Comment, idx=idx) def _get_insert_token(token): """Returns either a whitespace or the line breaks from token.""" # See issue484 why line breaks should be preserved. # Note: The actual value for a line break is replaced by \n # in SerializerUnicode which will be executed in the # postprocessing state. m = re.search(r'([\r\n]+) *$', token.value) if m is not None: return sql.Token(T.Whitespace.Newline, m.groups()[0]) else: return sql.Token(T.Whitespace, ' ') sql_hints = (T.Comment.Multiline.Hint, T.Comment.Single.Hint) tidx, token = get_next_comment() while token: # skipping token remove if token is a SQL-Hint. issue262 is_sql_hint = False if token.ttype in sql_hints: is_sql_hint = True elif isinstance(token, sql.Comment): comment_tokens = token.tokens if len(comment_tokens) > 0: if comment_tokens[0].ttype in sql_hints: is_sql_hint = True if is_sql_hint: # using current index as start index to search next token for # preventing infinite loop in cases when token type is a # "SQL-Hint" and has to be skipped tidx, token = get_next_comment(idx=tidx) continue pidx, prev_ = tlist.token_prev(tidx, skip_ws=False) nidx, next_ = tlist.token_next(tidx, skip_ws=False) # Replace by whitespace if prev and next exist and if they're not # whitespaces. This doesn't apply if prev or next is a parenthesis. if ( prev_ is None or next_ is None or prev_.is_whitespace or prev_.match(T.Punctuation, '(') or next_.is_whitespace or next_.match(T.Punctuation, ')') ): # Insert a whitespace to ensure the following SQL produces # a valid SQL (see #425). 
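# Illustrative sketch (added for clarity; it mirrors
# tests/test_format.py::test_strip_comments_preserves_whitespace in this
# repository) of the behavior this branch guarantees:
#     sqlparse.format('SELECT 1/*bar*/ AS foo', strip_comments=True)
#     # -> 'SELECT 1 AS foo'
# The comment is replaced by a single space rather than dropped outright,
# which keeps "1" and "AS" from fusing into the invalid "1AS".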
if prev_ is not None and not prev_.match(T.Punctuation, '('): tlist.tokens.insert(tidx, _get_insert_token(token)) tlist.tokens.remove(token) tidx -= 1 else: tlist.tokens[tidx] = _get_insert_token(token) # using current index as start index to search next token for # preventing infinite loop in cases when token type is a # "SQL-Hint" and has to be skipped tidx, token = get_next_comment(idx=tidx) def process(self, stmt): [self.process(sgroup) for sgroup in stmt.get_sublists()] StripCommentsFilter._process(stmt) return stmt class StripWhitespaceFilter: def _stripws(self, tlist): func_name = f'_stripws_{type(tlist).__name__}' func = getattr(self, func_name.lower(), self._stripws_default) func(tlist) @staticmethod def _stripws_default(tlist): last_was_ws = False is_first_char = True for token in tlist.tokens: if token.is_whitespace: token.value = '' if last_was_ws or is_first_char else ' ' last_was_ws = token.is_whitespace is_first_char = False def _stripws_identifierlist(self, tlist): # Removes newlines before commas, see issue140 last_nl = None for token in list(tlist.tokens): if last_nl and token.ttype is T.Punctuation and token.value == ',': tlist.tokens.remove(last_nl) last_nl = token if token.is_whitespace else None # next_ = tlist.token_next(token, skip_ws=False) # if (next_ and not next_.is_whitespace and # token.ttype is T.Punctuation and token.value == ','): # tlist.insert_after(token, sql.Token(T.Whitespace, ' ')) return self._stripws_default(tlist) def _stripws_parenthesis(self, tlist): while tlist.tokens[1].is_whitespace: tlist.tokens.pop(1) while tlist.tokens[-2].is_whitespace: tlist.tokens.pop(-2) if tlist.tokens[-2].is_group: # save to remove the last whitespace while tlist.tokens[-2].tokens[-1].is_whitespace: tlist.tokens[-2].tokens.pop(-1) self._stripws_default(tlist) def process(self, stmt, depth=0): [self.process(sgroup, depth + 1) for sgroup in stmt.get_sublists()] self._stripws(stmt) if depth == 0 and stmt.tokens and stmt.tokens[-1].is_whitespace: stmt.tokens.pop(-1) return stmt class SpacesAroundOperatorsFilter: @staticmethod def _process(tlist): ttypes = (T.Operator, T.Comparison) tidx, token = tlist.token_next_by(t=ttypes) while token: nidx, next_ = tlist.token_next(tidx, skip_ws=False) if next_ and next_.ttype != T.Whitespace: tlist.insert_after(tidx, sql.Token(T.Whitespace, ' ')) pidx, prev_ = tlist.token_prev(tidx, skip_ws=False) if prev_ and prev_.ttype != T.Whitespace: tlist.insert_before(tidx, sql.Token(T.Whitespace, ' ')) tidx += 1 # has to shift since token inserted before it # assert tlist.token_index(token) == tidx tidx, token = tlist.token_next_by(t=ttypes, idx=tidx) def process(self, stmt): [self.process(sgroup) for sgroup in stmt.get_sublists()] SpacesAroundOperatorsFilter._process(stmt) return stmt class StripTrailingSemicolonFilter: def process(self, stmt): while stmt.tokens and (stmt.tokens[-1].is_whitespace or stmt.tokens[-1].value == ';'): stmt.tokens.pop() return stmt # --------------------------- # postprocess class SerializerUnicode: @staticmethod def process(stmt): lines = split_unquoted_newlines(stmt) return '\n'.join(line.rstrip() for line in lines) sqlparse-0.5.4/sqlparse/filters/output.py0000644000000000000000000000764113615410400015516 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import sql, tokens as T class OutputFilter: varname_prefix = '' def __init__(self, 
varname='sql'): self.varname = self.varname_prefix + varname self.count = 0 def _process(self, stream, varname, has_nl): raise NotImplementedError def process(self, stmt): self.count += 1 if self.count > 1: varname = '{f.varname}{f.count}'.format(f=self) else: varname = self.varname has_nl = len(str(stmt).strip().splitlines()) > 1 stmt.tokens = self._process(stmt.tokens, varname, has_nl) return stmt class OutputPythonFilter(OutputFilter): def _process(self, stream, varname, has_nl): # SQL query assignation to varname if self.count > 1: yield sql.Token(T.Whitespace, '\n') yield sql.Token(T.Name, varname) yield sql.Token(T.Whitespace, ' ') yield sql.Token(T.Operator, '=') yield sql.Token(T.Whitespace, ' ') if has_nl: yield sql.Token(T.Operator, '(') yield sql.Token(T.Text, "'") # Print the tokens on the quote for token in stream: # Token is a new line separator if token.is_whitespace and '\n' in token.value: # Close quote and add a new line yield sql.Token(T.Text, " '") yield sql.Token(T.Whitespace, '\n') # Quote header on secondary lines yield sql.Token(T.Whitespace, ' ' * (len(varname) + 4)) yield sql.Token(T.Text, "'") # Indentation after_lb = token.value.split('\n', 1)[1] if after_lb: yield sql.Token(T.Whitespace, after_lb) continue # Token has escape chars elif "'" in token.value: token.value = token.value.replace("'", "\\'") # Put the token yield sql.Token(T.Text, token.value) # Close quote yield sql.Token(T.Text, "'") if has_nl: yield sql.Token(T.Operator, ')') class OutputPHPFilter(OutputFilter): varname_prefix = '$' def _process(self, stream, varname, has_nl): # SQL query assignation to varname (quote header) if self.count > 1: yield sql.Token(T.Whitespace, '\n') yield sql.Token(T.Name, varname) yield sql.Token(T.Whitespace, ' ') if has_nl: yield sql.Token(T.Whitespace, ' ') yield sql.Token(T.Operator, '=') yield sql.Token(T.Whitespace, ' ') yield sql.Token(T.Text, '"') # Print the tokens on the quote for token in stream: # Token is a new line separator if token.is_whitespace and '\n' in token.value: # Close quote and add a new line yield sql.Token(T.Text, ' ";') yield sql.Token(T.Whitespace, '\n') # Quote header on secondary lines yield sql.Token(T.Name, varname) yield sql.Token(T.Whitespace, ' ') yield sql.Token(T.Operator, '.=') yield sql.Token(T.Whitespace, ' ') yield sql.Token(T.Text, '"') # Indentation after_lb = token.value.split('\n', 1)[1] if after_lb: yield sql.Token(T.Whitespace, after_lb) continue # Token has escape chars elif '"' in token.value: token.value = token.value.replace('"', '\\"') # Put the token yield sql.Token(T.Text, token.value) # Close quote yield sql.Token(T.Text, '"') yield sql.Token(T.Punctuation, ';') sqlparse-0.5.4/sqlparse/filters/reindent.py0000644000000000000000000002324313615410400015762 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import sql, tokens as T from sqlparse.utils import offset, indent class ReindentFilter: def __init__(self, width=2, char=' ', wrap_after=0, n='\n', comma_first=False, indent_after_first=False, indent_columns=False, compact=False): self.n = n self.width = width self.char = char self.indent = 1 if indent_after_first else 0 self.offset = 0 self.wrap_after = wrap_after self.comma_first = comma_first self.indent_columns = indent_columns self.compact = compact self._curr_stmt = None self._last_stmt = None self._last_func = None def 
_flatten_up_to_token(self, token): """Yields all tokens up to token but excluding current.""" if token.is_group: token = next(token.flatten()) for t in self._curr_stmt.flatten(): if t == token: break yield t @property def leading_ws(self): return self.offset + self.indent * self.width def _get_offset(self, token): raw = ''.join(map(str, self._flatten_up_to_token(token))) line = (raw or '\n').splitlines()[-1] # Now take current offset into account and return relative offset. return len(line) - len(self.char * self.leading_ws) def nl(self, offset=0): return sql.Token( T.Whitespace, self.n + self.char * max(0, self.leading_ws + offset)) def _next_token(self, tlist, idx=-1): split_words = ('FROM', 'STRAIGHT_JOIN$', 'JOIN$', 'AND', 'OR', 'GROUP BY', 'ORDER BY', 'UNION', 'VALUES', 'SET', 'BETWEEN', 'EXCEPT', 'HAVING', 'LIMIT') m_split = T.Keyword, split_words, True tidx, token = tlist.token_next_by(m=m_split, idx=idx) if token and token.normalized == 'BETWEEN': tidx, token = self._next_token(tlist, tidx) if token and token.normalized == 'AND': tidx, token = self._next_token(tlist, tidx) return tidx, token def _split_kwds(self, tlist): tidx, token = self._next_token(tlist) while token: pidx, prev_ = tlist.token_prev(tidx, skip_ws=False) uprev = str(prev_) if prev_ and prev_.is_whitespace: del tlist.tokens[pidx] tidx -= 1 if not (uprev.endswith('\n') or uprev.endswith('\r')): tlist.insert_before(tidx, self.nl()) tidx += 1 tidx, token = self._next_token(tlist, tidx) def _split_statements(self, tlist): ttypes = T.Keyword.DML, T.Keyword.DDL tidx, token = tlist.token_next_by(t=ttypes) while token: pidx, prev_ = tlist.token_prev(tidx, skip_ws=False) if prev_ and prev_.is_whitespace: del tlist.tokens[pidx] tidx -= 1 # only break if it's not the first token if prev_: tlist.insert_before(tidx, self.nl()) tidx += 1 tidx, token = tlist.token_next_by(t=ttypes, idx=tidx) def _process(self, tlist): func_name = f'_process_{type(tlist).__name__}' func = getattr(self, func_name.lower(), self._process_default) func(tlist) def _process_where(self, tlist): tidx, token = tlist.token_next_by(m=(T.Keyword, 'WHERE')) if not token: return # issue121, errors in statement fixed?? 
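# Example (taken verbatim from
# tests/test_format.py::TestFormatReindent.test_where) of what the
# newline inserted before WHERE plus the indent() block below produce:
#     sqlparse.format(
#         'select * from foo where bar = 1 and baz = 2 or bzz = 3;',
#         reindent=True)
#     # -> 'select *\nfrom foo\nwhere bar = 1\n  and baz = 2\n  or bzz = 3;'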
tlist.insert_before(tidx, self.nl()) with indent(self): self._process_default(tlist) def _process_parenthesis(self, tlist): ttypes = T.Keyword.DML, T.Keyword.DDL _, is_dml_dll = tlist.token_next_by(t=ttypes) fidx, first = tlist.token_next_by(m=sql.Parenthesis.M_OPEN) if first is None: return with indent(self, 1 if is_dml_dll else 0): tlist.tokens.insert(0, self.nl()) if is_dml_dll else None with offset(self, self._get_offset(first) + 1): self._process_default(tlist, not is_dml_dll) def _process_function(self, tlist): self._last_func = tlist[0] self._process_default(tlist) def _process_identifierlist(self, tlist): identifiers = list(tlist.get_identifiers()) if self.indent_columns: first = next(identifiers[0].flatten()) num_offset = 1 if self.char == '\t' else self.width else: first = next(identifiers.pop(0).flatten()) num_offset = 1 if self.char == '\t' else self._get_offset(first) if not tlist.within(sql.Function) and not tlist.within(sql.Values): with offset(self, num_offset): position = 0 for token in identifiers: # Add 1 for the "," separator position += len(token.value) + 1 if position > (self.wrap_after - self.offset): adjust = 0 if self.comma_first: adjust = -2 _, comma = tlist.token_prev( tlist.token_index(token)) if comma is None: continue token = comma tlist.insert_before(token, self.nl(offset=adjust)) if self.comma_first: _, ws = tlist.token_next( tlist.token_index(token), skip_ws=False) if (ws is not None and ws.ttype is not T.Text.Whitespace): tlist.insert_after( token, sql.Token(T.Whitespace, ' ')) position = 0 else: # ensure whitespace for token in tlist: _, next_ws = tlist.token_next( tlist.token_index(token), skip_ws=False) if token.value == ',' and not next_ws.is_whitespace: tlist.insert_after( token, sql.Token(T.Whitespace, ' ')) end_at = self.offset + sum(len(i.value) + 1 for i in identifiers) adjusted_offset = 0 if (self.wrap_after > 0 and end_at > (self.wrap_after - self.offset) and self._last_func): adjusted_offset = -len(self._last_func.value) - 1 with offset(self, adjusted_offset), indent(self): if adjusted_offset < 0: tlist.insert_before(identifiers[0], self.nl()) position = 0 for token in identifiers: # Add 1 for the "," separator position += len(token.value) + 1 if (self.wrap_after > 0 and position > (self.wrap_after - self.offset)): adjust = 0 tlist.insert_before(token, self.nl(offset=adjust)) position = 0 self._process_default(tlist) def _process_case(self, tlist): iterable = iter(tlist.get_cases()) cond, _ = next(iterable) first = next(cond[0].flatten()) with offset(self, self._get_offset(tlist[0])): with offset(self, self._get_offset(first)): for cond, value in iterable: str_cond = ''.join(str(x) for x in cond or []) str_value = ''.join(str(x) for x in value) end_pos = self.offset + 1 + len(str_cond) + len(str_value) if (not self.compact and end_pos > self.wrap_after): token = value[0] if cond is None else cond[0] tlist.insert_before(token, self.nl()) # Line breaks on group level are done. 
let's add an offset of # len "when ", "then ", "else " with offset(self, len("WHEN ")): self._process_default(tlist) end_idx, end = tlist.token_next_by(m=sql.Case.M_CLOSE) if end_idx is not None and not self.compact: tlist.insert_before(end_idx, self.nl()) def _process_values(self, tlist): tlist.insert_before(0, self.nl()) tidx, token = tlist.token_next_by(i=sql.Parenthesis) first_token = token while token: ptidx, ptoken = tlist.token_next_by(m=(T.Punctuation, ','), idx=tidx) if ptoken: if self.comma_first: adjust = -2 offset = self._get_offset(first_token) + adjust tlist.insert_before(ptoken, self.nl(offset)) else: tlist.insert_after(ptoken, self.nl(self._get_offset(token))) tidx, token = tlist.token_next_by(i=sql.Parenthesis, idx=tidx) def _process_default(self, tlist, stmts=True): self._split_statements(tlist) if stmts else None self._split_kwds(tlist) for sgroup in tlist.get_sublists(): self._process(sgroup) def process(self, stmt): self._curr_stmt = stmt self._process(stmt) if self._last_stmt is not None: nl = '\n' if str(self._last_stmt).endswith('\n') else '\n\n' stmt.tokens.insert(0, sql.Token(T.Whitespace, nl)) self._last_stmt = stmt return stmt sqlparse-0.5.4/sqlparse/filters/right_margin.py0000644000000000000000000000277713615410400016635 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause import re from sqlparse import sql, tokens as T # FIXME: Doesn't work class RightMarginFilter: keep_together = ( # sql.TypeCast, sql.Identifier, sql.Alias, ) def __init__(self, width=79): self.width = width self.line = '' def _process(self, group, stream): for token in stream: if token.is_whitespace and '\n' in token.value: if token.value.endswith('\n'): self.line = '' else: self.line = token.value.splitlines()[-1] elif token.is_group and type(token) not in self.keep_together: token.tokens = self._process(token, token.tokens) else: val = str(token) if len(self.line) + len(val) > self.width: match = re.search(r'^ +', self.line) if match is not None: indent = match.group() else: indent = '' yield sql.Token(T.Whitespace, f'\n{indent}') self.line = indent self.line += val yield token def process(self, group): # return # group.tokens = self._process(group, group.tokens) raise NotImplementedError sqlparse-0.5.4/sqlparse/filters/tokens.py0000644000000000000000000000302113615410400015445 0ustar00# # Copyright (C) 2009-2020 the sqlparse authors and contributors # # # This module is part of python-sqlparse and is released under # the BSD License: https://opensource.org/licenses/BSD-3-Clause from sqlparse import tokens as T class _CaseFilter: ttype = None def __init__(self, case=None): case = case or 'upper' self.convert = getattr(str, case) def process(self, stream): for ttype, value in stream: if ttype in self.ttype: value = self.convert(value) yield ttype, value class KeywordCaseFilter(_CaseFilter): ttype = T.Keyword class IdentifierCaseFilter(_CaseFilter): ttype = T.Name, T.String.Symbol def process(self, stream): for ttype, value in stream: if ttype in self.ttype and value.strip()[0] != '"': value = self.convert(value) yield ttype, value class TruncateStringFilter: def __init__(self, width, char): self.width = width self.char = char def process(self, stream): for ttype, value in stream: if ttype != T.Literal.String.Single: yield ttype, value continue if value[:2] == "''": inner = value[2:-2] quote = "''" else: inner = value[1:-1] quote = "'" if 
len(inner) > self.width: value = ''.join((quote, inner[:self.width], self.char, quote)) yield ttype, value sqlparse-0.5.4/tests/__init__.py0000644000000000000000000000000013615410400013533 0ustar00sqlparse-0.5.4/tests/conftest.py0000644000000000000000000000304313615410400013633 0ustar00"""Helpers for testing.""" import io import os import pytest DIR_PATH = os.path.dirname(__file__) FILES_DIR = os.path.join(DIR_PATH, 'files') @pytest.fixture() def filepath(): """Returns full file path for test files.""" def make_filepath(filename): # https://stackoverflow.com/questions/18011902/py-test-pass-a-parameter-to-a-fixture-function # Alternate solution is to use parametrization `indirect=True` # https://stackoverflow.com/questions/18011902/py-test-pass-a-parameter-to-a-fixture-function/33879151#33879151 # Syntax is noisy and requires specific variable names return os.path.join(FILES_DIR, filename) return make_filepath @pytest.fixture() def load_file(filepath): """Opens filename with encoding and return its contents.""" def make_load_file(filename, encoding='utf-8'): # https://stackoverflow.com/questions/18011902/py-test-pass-a-parameter-to-a-fixture-function # Alternate solution is to use parametrization `indirect=True` # https://stackoverflow.com/questions/18011902/py-test-pass-a-parameter-to-a-fixture-function/33879151#33879151 # Syntax is noisy and requires specific variable names # And seems to be limited to only 1 argument. with open(filepath(filename), encoding=encoding) as f: return f.read().strip() return make_load_file @pytest.fixture() def get_stream(filepath): def make_stream(filename, encoding='utf-8'): return open(filepath(filename), encoding=encoding) return make_stream sqlparse-0.5.4/tests/test_cli.py0000644000000000000000000001441613615410400013622 0ustar00import subprocess import sys import pytest import sqlparse def test_cli_main_empty(): with pytest.raises(SystemExit): sqlparse.cli.main([]) def test_parser_empty(): with pytest.raises(SystemExit): parser = sqlparse.cli.create_parser() parser.parse_args([]) def test_main_help(): # Call with the --help option as a basic sanity check. with pytest.raises(SystemExit) as exinfo: sqlparse.cli.main(["--help", ]) assert exinfo.value.code == 0 def test_valid_args(filepath): # test doesn't abort path = filepath('function.sql') assert sqlparse.cli.main([path, '-r']) is not None def test_invalid_choice(filepath): path = filepath('function.sql') with pytest.raises(SystemExit): sqlparse.cli.main([path, '-l', 'Spanish']) def test_invalid_args(filepath, capsys): path = filepath('function.sql') sqlparse.cli.main([path, '-r', '--indent_width', '0']) _, err = capsys.readouterr() assert err == ("[ERROR] Invalid options: indent_width requires " "a positive integer\n") def test_invalid_infile(filepath, capsys): path = filepath('missing.sql') sqlparse.cli.main([path, '-r']) _, err = capsys.readouterr() assert err[:22] == "[ERROR] Failed to read" def test_invalid_outfile(filepath, capsys): path = filepath('function.sql') outpath = filepath('/missing/function.sql') sqlparse.cli.main([path, '-r', '-o', outpath]) _, err = capsys.readouterr() assert err[:22] == "[ERROR] Failed to open" def test_stdout(filepath, load_file, capsys): path = filepath('begintag.sql') expected = load_file('begintag.sql') sqlparse.cli.main([path]) out, _ = capsys.readouterr() assert out == expected def test_script(): # Call with the --help option as a basic sanity check. 
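# Spawning a separate interpreter here (instead of calling
# sqlparse.cli.main in-process) additionally verifies that the package
# is importable and runnable as `python -m sqlparse.cli`.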
cmd = [sys.executable, '-m', 'sqlparse.cli', '--help'] assert subprocess.call(cmd) == 0 @pytest.mark.parametrize('fpath, encoding', ( ('encoding_utf8.sql', 'utf-8'), ('encoding_gbk.sql', 'gbk'), )) def test_encoding_stdout(fpath, encoding, filepath, load_file, capfd): path = filepath(fpath) expected = load_file(fpath, encoding) sqlparse.cli.main([path, '--encoding', encoding]) out, _ = capfd.readouterr() assert out == expected @pytest.mark.parametrize('fpath, encoding', ( ('encoding_utf8.sql', 'utf-8'), ('encoding_gbk.sql', 'gbk'), )) def test_encoding_output_file(fpath, encoding, filepath, load_file, tmpdir): in_path = filepath(fpath) expected = load_file(fpath, encoding) out_path = tmpdir.dirname + '/encoding_out.sql' sqlparse.cli.main([in_path, '--encoding', encoding, '-o', out_path]) out = load_file(out_path, encoding) assert out == expected @pytest.mark.parametrize('fpath, encoding', ( ('encoding_utf8.sql', 'utf-8'), ('encoding_gbk.sql', 'gbk'), )) def test_encoding_stdin(fpath, encoding, filepath, load_file, capfd): path = filepath(fpath) expected = load_file(fpath, encoding) old_stdin = sys.stdin with open(path) as f: sys.stdin = f sqlparse.cli.main(['-', '--encoding', encoding]) sys.stdin = old_stdin out, _ = capfd.readouterr() assert out == expected def test_encoding(filepath, capsys): path = filepath('test_cp1251.sql') expected = 'insert into foo values (1); -- 袩械褋薪褟 锌褉芯 薪邪写械卸写褍\n' sqlparse.cli.main([path, '--encoding=cp1251']) out, _ = capsys.readouterr() assert out == expected def test_cli_multiple_files_with_inplace(tmpdir): """Test CLI with multiple files and --in-place flag.""" # Create test files file1 = tmpdir.join("test1.sql") file1.write("select * from foo") file2 = tmpdir.join("test2.sql") file2.write("select * from bar") # Run sqlformat with --in-place result = sqlparse.cli.main([str(file1), str(file2), '--in-place', '--reindent']) assert result == 0 # Files should be modified in-place assert "select" in file1.read() assert "select" in file2.read() def test_cli_multiple_files_without_inplace_fails(tmpdir, capsys): """Test that multiple files require --in-place flag.""" file1 = tmpdir.join("test1.sql") file1.write("select * from foo") file2 = tmpdir.join("test2.sql") file2.write("select * from bar") result = sqlparse.cli.main([str(file1), str(file2)]) assert result != 0 # Should fail _, err = capsys.readouterr() assert "Multiple files require --in-place flag" in err def test_cli_inplace_with_stdin_fails(capsys): """Test that --in-place flag cannot be used with stdin.""" result = sqlparse.cli.main(['-', '--in-place']) assert result != 0 # Should fail _, err = capsys.readouterr() assert "Cannot use --in-place with stdin" in err def test_cli_outfile_with_multiple_files_fails(tmpdir, capsys): """Test that -o cannot be used with multiple files.""" file1 = tmpdir.join("test1.sql") file1.write("select * from foo") file2 = tmpdir.join("test2.sql") file2.write("select * from bar") out = tmpdir.join("out.sql") result = sqlparse.cli.main([str(file1), str(file2), '-o', str(out)]) assert result != 0 # Should fail _, err = capsys.readouterr() assert "Cannot use -o/--outfile with multiple files" in err def test_cli_single_file_inplace(tmpdir): """Test --in-place flag with a single file.""" test_file = tmpdir.join("test.sql") test_file.write("select * from foo") result = sqlparse.cli.main([str(test_file), '--in-place', '--keywords', 'upper']) assert result == 0 content = test_file.read() assert "SELECT" in content def test_cli_error_handling_continues(tmpdir, capsys): """Test that 
errors in one file don't stop processing of others.""" file1 = tmpdir.join("test1.sql") file1.write("select * from foo") # file2 doesn't exist - it will cause an error file3 = tmpdir.join("test3.sql") file3.write("select * from baz") result = sqlparse.cli.main([str(file1), str(tmpdir.join("nonexistent.sql")), str(file3), '--in-place']) # Should return error code but still process valid files assert result != 0 assert "select * from foo" in file1.read() assert "select * from baz" in file3.read() _, err = capsys.readouterr() assert "Failed to read" in err sqlparse-0.5.4/tests/test_dos_prevention.py0000644000000000000000000000706713615410400016115 0ustar00"""Tests for DoS prevention mechanisms in sqlparse.""" import pytest import sqlparse import time class TestDoSPrevention: """Test cases to ensure sqlparse is protected against DoS attacks.""" def test_large_tuple_list_performance(self): """Test that parsing a large list of tuples doesn't cause DoS.""" # Generate SQL with many tuples (like Django composite primary key queries) sql = ''' SELECT "composite_pk_comment"."tenant_id", "composite_pk_comment"."comment_id" FROM "composite_pk_comment" WHERE ("composite_pk_comment"."tenant_id", "composite_pk_comment"."comment_id") IN (''' # Generate 5000 tuples - this would previously cause a hang tuples = [] for i in range(1, 5001): tuples.append(f"(1, {i})") sql += ", ".join(tuples) + ")" # Test should complete quickly (under 5 seconds) start_time = time.time() result = sqlparse.format(sql, reindent=True, keyword_case="upper") execution_time = time.time() - start_time assert execution_time < 5.0, f"Parsing took too long: {execution_time:.2f}s" assert len(result) > 0, "Result should not be empty" assert "SELECT" in result.upper(), "SQL should be properly formatted" def test_deeply_nested_groups_limited(self): """Test that deeply nested groups don't cause stack overflow.""" # Create deeply nested parentheses sql = "SELECT " + "(" * 200 + "1" + ")" * 200 # Should not raise RecursionError result = sqlparse.format(sql, reindent=True) assert "SELECT" in result assert "1" in result def test_very_large_token_list_limited(self): """Test that very large token lists are handled gracefully.""" # Create a SQL with many identifiers identifiers = [] for i in range(15000): # More than MAX_GROUPING_TOKENS identifiers.append(f"col{i}") sql = f"SELECT {', '.join(identifiers)} FROM table1" # Should complete without hanging start_time = time.time() result = sqlparse.format(sql, reindent=True) execution_time = time.time() - start_time assert execution_time < 10.0, f"Parsing took too long: {execution_time:.2f}s" assert "SELECT" in result assert "FROM" in result def test_normal_sql_still_works(self): """Test that normal SQL still works correctly after DoS protections.""" sql = """ SELECT u.id, u.name, p.title FROM users u JOIN posts p ON u.id = p.user_id WHERE u.active = 1 AND p.published_at > '2023-01-01' ORDER BY p.published_at DESC """ result = sqlparse.format(sql, reindent=True, keyword_case="upper") assert "SELECT" in result assert "FROM" in result assert "JOIN" in result assert "WHERE" in result assert "ORDER BY" in result def test_reasonable_tuple_list_works(self): """Test that reasonable-sized tuple lists still work correctly.""" sql = ''' SELECT id FROM table1 WHERE (col1, col2) IN (''' # 100 tuples should work fine tuples = [] for i in range(1, 101): tuples.append(f"({i}, {i * 2})") sql += ", ".join(tuples) + ")" result = sqlparse.format(sql, reindent=True, keyword_case="upper") assert "SELECT" in result assert 
"WHERE" in result assert "IN" in result assert "1," in result # First tuple should be there assert "200" in result # Last tuple should be there sqlparse-0.5.4/tests/test_format.py0000644000000000000000000007040013615410400014336 0ustar00import pytest import sqlparse from sqlparse.exceptions import SQLParseError class TestFormat: def test_keywordcase(self): sql = 'select * from bar; -- select foo\n' res = sqlparse.format(sql, keyword_case='upper') assert res == 'SELECT * FROM bar; -- select foo\n' res = sqlparse.format(sql, keyword_case='capitalize') assert res == 'Select * From bar; -- select foo\n' res = sqlparse.format(sql.upper(), keyword_case='lower') assert res == 'select * from BAR; -- SELECT FOO\n' def test_keywordcase_invalid_option(self): sql = 'select * from bar; -- select foo\n' with pytest.raises(SQLParseError): sqlparse.format(sql, keyword_case='foo') def test_identifiercase(self): sql = 'select * from bar; -- select foo\n' res = sqlparse.format(sql, identifier_case='upper') assert res == 'select * from BAR; -- select foo\n' res = sqlparse.format(sql, identifier_case='capitalize') assert res == 'select * from Bar; -- select foo\n' res = sqlparse.format(sql.upper(), identifier_case='lower') assert res == 'SELECT * FROM bar; -- SELECT FOO\n' def test_identifiercase_invalid_option(self): sql = 'select * from bar; -- select foo\n' with pytest.raises(SQLParseError): sqlparse.format(sql, identifier_case='foo') def test_identifiercase_quotes(self): sql = 'select * from "foo"."bar"' res = sqlparse.format(sql, identifier_case="upper") assert res == 'select * from "foo"."bar"' def test_strip_comments_single(self): sql = 'select *-- statement starts here\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\nfrom foo' sql = 'select * -- statement starts here\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\nfrom foo' sql = 'select-- foo\nfrom -- bar\nwhere' res = sqlparse.format(sql, strip_comments=True) assert res == 'select\nfrom\nwhere' sql = 'select *-- statement starts here\n\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\n\nfrom foo' sql = 'select * from foo-- statement starts here\nwhere' res = sqlparse.format(sql, strip_comments=True) assert res == 'select * from foo\nwhere' sql = 'select a-- statement starts here\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select a\nfrom foo' sql = '--comment\nselect a-- statement starts here\n' \ 'from foo--comment\nf' res = sqlparse.format(sql, strip_comments=True) assert res == 'select a\nfrom foo\nf' sql = '--A;--B;' res = '' assert res == sqlparse.format(sql, strip_comments=True) sql = '--A;\n--B;' res = '' assert res == sqlparse.format(sql, strip_comments=True) def test_strip_comments_invalid_option(self): sql = 'select-- foo\nfrom -- bar\nwhere' with pytest.raises(SQLParseError): sqlparse.format(sql, strip_comments=None) def test_strip_comments_multi(self): sql = '/* sql starts here */\nselect' res = sqlparse.format(sql, strip_comments=True) assert res == 'select' sql = '/* sql starts here */ select' res = sqlparse.format(sql, strip_comments=True) assert res == ' select' # note whitespace is preserved, see issue 772 sql = '/*\n * sql starts here\n */\nselect' res = sqlparse.format(sql, strip_comments=True) assert res == 'select' sql = '/* sql starts here */' res = sqlparse.format(sql, strip_comments=True) assert res == '' sql = '/* sql starts here */\n/* or here */' res = sqlparse.format(sql, 
strip_comments=True, strip_whitespace=True) assert res == '' sql = 'select (/* sql starts here */ select 2)' res = sqlparse.format(sql, strip_comments=True, strip_whitespace=True) assert res == 'select (select 2)' sql = 'select (/* sql /* starts here */ select 2)' res = sqlparse.format(sql, strip_comments=True, strip_whitespace=True) assert res == 'select (select 2)' def test_strip_comments_preserves_linebreak(self): sql = 'select * -- a comment\r\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\nfrom foo' sql = 'select * -- a comment\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\nfrom foo' sql = 'select * -- a comment\rfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\nfrom foo' sql = 'select * -- a comment\r\n\r\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\n\nfrom foo' sql = 'select * -- a comment\n\nfrom foo' res = sqlparse.format(sql, strip_comments=True) assert res == 'select *\n\nfrom foo' def test_strip_comments_preserves_whitespace(self): sql = 'SELECT 1/*bar*/ AS foo' # see issue772 res = sqlparse.format(sql, strip_comments=True) assert res == 'SELECT 1 AS foo' def test_strip_comments_preserves_hint(self): sql = 'select --+full(u)' res = sqlparse.format(sql, strip_comments=True) assert res == sql sql = '#+ hint\nselect * from foo' res = sqlparse.format(sql, strip_comments=True) assert res == sql sql = 'select --+full(u)\n--comment simple' res = sqlparse.format(sql, strip_comments=True) assert res == 'select --+full(u)\n' sql = '#+ hint\nselect * from foo\n# comment simple' res = sqlparse.format(sql, strip_comments=True) assert res == '#+ hint\nselect * from foo\n' sql = 'SELECT /*+cluster(T)*/* FROM T_EEE T where A >:1' res = sqlparse.format(sql, strip_comments=True) assert res == sql sql = 'insert /*+ DIRECT */ into sch.table_name as select * from foo' res = sqlparse.format(sql, strip_comments=True) assert res == sql def test_strip_ws(self): f = lambda sql: sqlparse.format(sql, strip_whitespace=True) s = 'select\n* from foo\n\twhere ( 1 = 2 )\n' assert f(s) == 'select * from foo where (1 = 2)' s = 'select -- foo\nfrom bar\n' assert f(s) == 'select -- foo\nfrom bar' def test_strip_ws_invalid_option(self): s = 'select -- foo\nfrom bar\n' with pytest.raises(SQLParseError): sqlparse.format(s, strip_whitespace=None) def test_preserve_ws(self): # preserve at least one whitespace after subgroups f = lambda sql: sqlparse.format(sql, strip_whitespace=True) s = 'select\n* /* foo */ from bar ' assert f(s) == 'select * /* foo */ from bar' def test_notransform_of_quoted_crlf(self): # Make sure that CR/CR+LF characters inside string literals don't get # affected by the formatter. 
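# This relies on SerializerUnicode, which uses
# sqlparse.utils.split_unquoted_newlines() and therefore only splits on
# newlines that occur outside of quoted strings; quoted CR/LF sequences
# survive formatting untouched.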
s1 = "SELECT some_column LIKE 'value\r'" s2 = "SELECT some_column LIKE 'value\r'\r\nWHERE id = 1\n" s3 = "SELECT some_column LIKE 'value\\'\r' WHERE id = 1\r" s4 = "SELECT some_column LIKE 'value\\\\\\'\r' WHERE id = 1\r\n" f = lambda x: sqlparse.format(x) # Because of the use of assert f(s1) == "SELECT some_column LIKE 'value\r'" assert f(s2) == "SELECT some_column LIKE 'value\r'\nWHERE id = 1\n" assert f(s3) == "SELECT some_column LIKE 'value\\'\r' WHERE id = 1\n" assert (f(s4) == "SELECT some_column LIKE 'value\\\\\\'\r' WHERE id = 1\n") class TestFormatReindentAligned: @staticmethod def formatter(sql): return sqlparse.format(sql, reindent_aligned=True) def test_basic(self): sql = """ select a, b as bb,c from table join (select a * 2 as a from new_table) other on table.a = other.a where c is true and b between 3 and 4 or d is 'blue' limit 10 """ assert self.formatter(sql) == '\n'.join([ 'select a,', ' b as bb,', ' c', ' from table', ' join (', ' select a * 2 as a', ' from new_table', ' ) other', ' on table.a = other.a', ' where c is true', ' and b between 3 and 4', " or d is 'blue'", ' limit 10']) def test_joins(self): sql = """ select * from a join b on a.one = b.one left join c on c.two = a.two and c.three = a.three full outer join d on d.three = a.three cross join e on e.four = a.four join f using (one, two, three) """ assert self.formatter(sql) == '\n'.join([ 'select *', ' from a', ' join b', ' on a.one = b.one', ' left join c', ' on c.two = a.two', ' and c.three = a.three', ' full outer join d', ' on d.three = a.three', ' cross join e', ' on e.four = a.four', ' join f using (one, two, three)']) def test_case_statement(self): sql = """ select a, case when a = 0 then 1 when bb = 1 then 1 when c = 2 then 2 else 0 end as d, extra_col from table where c is true and b between 3 and 4 """ assert self.formatter(sql) == '\n'.join([ 'select a,', ' case when a = 0 then 1', ' when bb = 1 then 1', ' when c = 2 then 2', ' else 0', ' end as d,', ' extra_col', ' from table', ' where c is true', ' and b between 3 and 4']) def test_case_statement_with_between(self): sql = """ select a, case when a = 0 then 1 when bb = 1 then 1 when c = 2 then 2 when d between 3 and 5 then 3 else 0 end as d, extra_col from table where c is true and b between 3 and 4 """ assert self.formatter(sql) == '\n'.join([ 'select a,', ' case when a = 0 then 1', ' when bb = 1 then 1', ' when c = 2 then 2', ' when d between 3 and 5 then 3', ' else 0', ' end as d,', ' extra_col', ' from table', ' where c is true', ' and b between 3 and 4']) def test_group_by(self): sql = """ select a, b, c, sum(x) as sum_x, count(y) as cnt_y from table group by a,b,c having sum(x) > 1 and count(y) > 5 order by 3,2,1 """ assert self.formatter(sql) == '\n'.join([ 'select a,', ' b,', ' c,', ' sum(x) as sum_x,', ' count(y) as cnt_y', ' from table', ' group by a,', ' b,', ' c', 'having sum(x) > 1', ' and count(y) > 5', ' order by 3,', ' 2,', ' 1']) def test_group_by_subquery(self): # TODO: add subquery alias when test_identifier_list_subquery fixed sql = """ select *, sum_b + 2 as mod_sum from ( select a, sum(b) as sum_b from table group by a,z) order by 1,2 """ assert self.formatter(sql) == '\n'.join([ 'select *,', ' sum_b + 2 as mod_sum', ' from (', ' select a,', ' sum(b) as sum_b', ' from table', ' group by a,', ' z', ' )', ' order by 1,', ' 2']) def test_window_functions(self): sql = """ select a, SUM(a) OVER (PARTITION BY b ORDER BY c ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as sum_a, ROW_NUMBER() OVER (PARTITION BY b, c ORDER BY d DESC) 
as row_num from table""" assert self.formatter(sql) == '\n'.join([ 'select a,', ' SUM(a) OVER (PARTITION BY b ORDER BY c ROWS ' 'BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as sum_a,', ' ROW_NUMBER() OVER ' '(PARTITION BY b, c ORDER BY d DESC) as row_num', ' from table']) class TestSpacesAroundOperators: @staticmethod def formatter(sql): return sqlparse.format(sql, use_space_around_operators=True) def test_basic(self): sql = ('select a+b as d from table ' 'where (c-d)%2= 1 and e> 3.0/4 and z^2 <100') assert self.formatter(sql) == ( 'select a + b as d from table ' 'where (c - d) % 2 = 1 and e > 3.0 / 4 and z ^ 2 < 100') def test_bools(self): sql = 'select * from table where a &&b or c||d' assert self.formatter( sql) == 'select * from table where a && b or c || d' def test_nested(self): sql = 'select *, case when a-b then c end from table' assert self.formatter( sql) == 'select *, case when a - b then c end from table' def test_wildcard_vs_mult(self): sql = 'select a*b-c from table' assert self.formatter(sql) == 'select a * b - c from table' class TestFormatReindent: def test_option(self): with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=2) with pytest.raises(SQLParseError): sqlparse.format('foo', indent_tabs=2) with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=True, indent_width='foo') with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=True, indent_width=-12) with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=True, wrap_after='foo') with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=True, wrap_after=-12) with pytest.raises(SQLParseError): sqlparse.format('foo', reindent=True, comma_first='foo') def test_stmts(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select foo; select bar' assert f(s) == 'select foo;\n\nselect bar' s = 'select foo' assert f(s) == 'select foo' s = 'select foo; -- test\n select bar' assert f(s) == 'select foo; -- test\n\nselect bar' def test_keywords(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select * from foo union select * from bar;' assert f(s) == '\n'.join([ 'select *', 'from foo', 'union', 'select *', 'from bar;']) def test_keywords_between(self): # issue 14 # don't break AND after BETWEEN f = lambda sql: sqlparse.format(sql, reindent=True) s = 'and foo between 1 and 2 and bar = 3' assert f(s) == '\n'.join([ '', 'and foo between 1 and 2', 'and bar = 3']) def test_parenthesis(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select count(*) from (select * from foo);' assert f(s) == '\n'.join([ 'select count(*)', 'from', ' (select *', ' from foo);']) assert f("select f(1)") == 'select f(1)' assert f("select f( 1 )") == 'select f(1)' assert f("select f(\n\n\n1\n\n\n)") == 'select f(1)' assert f("select f(\n\n\n 1 \n\n\n)") == 'select f(1)' assert f("select f(\n\n\n 1 \n\n\n)") == 'select f(1)' def test_where(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select * from foo where bar = 1 and baz = 2 or bzz = 3;' assert f(s) == '\n'.join([ 'select *', 'from foo', 'where bar = 1', ' and baz = 2', ' or bzz = 3;']) s = 'select * from foo where bar = 1 and (baz = 2 or bzz = 3);' assert f(s) == '\n'.join([ 'select *', 'from foo', 'where bar = 1', ' and (baz = 2', ' or bzz = 3);']) def test_join(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select * from foo join bar on 1 = 2' assert f(s) == '\n'.join([ 'select *', 'from foo', 'join bar on 1 = 2']) s = 'select * from foo inner join bar on 1 = 2' assert f(s) 
== '\n'.join([ 'select *', 'from foo', 'inner join bar on 1 = 2']) s = 'select * from foo left outer join bar on 1 = 2' assert f(s) == '\n'.join([ 'select *', 'from foo', 'left outer join bar on 1 = 2']) s = 'select * from foo straight_join bar on 1 = 2' assert f(s) == '\n'.join([ 'select *', 'from foo', 'straight_join bar on 1 = 2']) def test_identifier_list(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select foo, bar, baz from table1, table2 where 1 = 2' assert f(s) == '\n'.join([ 'select foo,', ' bar,', ' baz', 'from table1,', ' table2', 'where 1 = 2']) s = 'select a.*, b.id from a, b' assert f(s) == '\n'.join([ 'select a.*,', ' b.id', 'from a,', ' b']) def test_identifier_list_with_wrap_after(self): f = lambda sql: sqlparse.format(sql, reindent=True, wrap_after=14) s = 'select foo, bar, baz from table1, table2 where 1 = 2' assert f(s) == '\n'.join([ 'select foo, bar,', ' baz', 'from table1, table2', 'where 1 = 2']) def test_identifier_list_comment_first(self): f = lambda sql: sqlparse.format(sql, reindent=True, comma_first=True) # not the 3: It cleans up whitespace too! s = 'select foo, bar, baz from table where foo in (1, 2,3)' assert f(s) == '\n'.join([ 'select foo', ' , bar', ' , baz', 'from table', 'where foo in (1', ' , 2', ' , 3)']) def test_identifier_list_with_functions(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = ("select 'abc' as foo, coalesce(col1, col2)||col3 as bar," "col3 from my_table") assert f(s) == '\n'.join([ "select 'abc' as foo,", " coalesce(col1, col2)||col3 as bar,", " col3", "from my_table"]) def test_long_identifier_list_with_functions(self): f = lambda sql: sqlparse.format(sql, reindent=True, wrap_after=30) s = ("select 'abc' as foo, json_build_object('a', a," "'b', b, 'c', c, 'd', d, 'e', e) as col2" "col3 from my_table") assert f(s) == '\n'.join([ "select 'abc' as foo,", " json_build_object('a',", " a, 'b', b, 'c', c, 'd', d,", " 'e', e) as col2col3", "from my_table"]) def test_case(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'case when foo = 1 then 2 when foo = 3 then 4 else 5 end' assert f(s) == '\n'.join([ 'case', ' when foo = 1 then 2', ' when foo = 3 then 4', ' else 5', 'end']) def test_case2(self): f = lambda sql: sqlparse.format(sql, reindent=True) s = 'case(foo) when bar = 1 then 2 else 3 end' assert f(s) == '\n'.join([ 'case(foo)', ' when bar = 1 then 2', ' else 3', 'end']) def test_nested_identifier_list(self): # issue4 f = lambda sql: sqlparse.format(sql, reindent=True) s = '(foo as bar, bar1, bar2 as bar3, b4 as b5)' assert f(s) == '\n'.join([ '(foo as bar,', ' bar1,', ' bar2 as bar3,', ' b4 as b5)']) def test_duplicate_linebreaks(self): # issue3 f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select c1 -- column1\nfrom foo' assert f(s) == '\n'.join([ 'select c1 -- column1', 'from foo']) s = 'select c1 -- column1\nfrom foo' r = sqlparse.format(s, reindent=True, strip_comments=True) assert r == '\n'.join([ 'select c1', 'from foo']) s = 'select c1\nfrom foo\norder by c1' assert f(s) == '\n'.join([ 'select c1', 'from foo', 'order by c1']) s = 'select c1 from t1 where (c1 = 1) order by c1' assert f(s) == '\n'.join([ 'select c1', 'from t1', 'where (c1 = 1)', 'order by c1']) def test_keywordfunctions(self): # issue36 f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select max(a) b, foo, bar' assert f(s) == '\n'.join([ 'select max(a) b,', ' foo,', ' bar']) def test_identifier_and_functions(self): # issue45 f = lambda sql: sqlparse.format(sql, reindent=True) s = 'select foo.bar, 
nvl(1) from dual' assert f(s) == '\n'.join([ 'select foo.bar,', ' nvl(1)', 'from dual']) def test_insert_values(self): # issue 329 f = lambda sql: sqlparse.format(sql, reindent=True) s = 'insert into foo values (1, 2)' assert f(s) == '\n'.join([ 'insert into foo', 'values (1, 2)']) s = 'insert into foo values (1, 2), (3, 4), (5, 6)' assert f(s) == '\n'.join([ 'insert into foo', 'values (1, 2),', ' (3, 4),', ' (5, 6)']) s = 'insert into foo(a, b) values (1, 2), (3, 4), (5, 6)' assert f(s) == '\n'.join([ 'insert into foo(a, b)', 'values (1, 2),', ' (3, 4),', ' (5, 6)']) f = lambda sql: sqlparse.format(sql, reindent=True, comma_first=True) s = 'insert into foo values (1, 2)' assert f(s) == '\n'.join([ 'insert into foo', 'values (1, 2)']) s = 'insert into foo values (1, 2), (3, 4), (5, 6)' assert f(s) == '\n'.join([ 'insert into foo', 'values (1, 2)', ' , (3, 4)', ' , (5, 6)']) s = 'insert into foo(a, b) values (1, 2), (3, 4), (5, 6)' assert f(s) == '\n'.join([ 'insert into foo(a, b)', 'values (1, 2)', ' , (3, 4)', ' , (5, 6)']) class TestOutputFormat: def test_python(self): sql = 'select * from foo;' f = lambda sql: sqlparse.format(sql, output_format='python') assert f(sql) == "sql = 'select * from foo;'" f = lambda sql: sqlparse.format(sql, output_format='python', reindent=True) assert f(sql) == '\n'.join([ "sql = ('select * '", " 'from foo;')"]) def test_python_multiple_statements(self): sql = 'select * from foo; select 1 from dual' f = lambda sql: sqlparse.format(sql, output_format='python') assert f(sql) == '\n'.join([ "sql = 'select * from foo; '", "sql2 = 'select 1 from dual'"]) @pytest.mark.xfail(reason="Needs fixing") def test_python_multiple_statements_with_formatting(self): sql = 'select * from foo; select 1 from dual' f = lambda sql: sqlparse.format(sql, output_format='python', reindent=True) assert f(sql) == '\n'.join([ "sql = ('select * '", " 'from foo;')", "sql2 = ('select 1 '", " 'from dual')"]) def test_php(self): sql = 'select * from foo;' f = lambda sql: sqlparse.format(sql, output_format='php') assert f(sql) == '$sql = "select * from foo;";' f = lambda sql: sqlparse.format(sql, output_format='php', reindent=True) assert f(sql) == '\n'.join([ '$sql = "select * ";', '$sql .= "from foo;";']) def test_sql(self): # "sql" is an allowed option but has no effect sql = 'select * from foo;' f = lambda sql: sqlparse.format(sql, output_format='sql') assert f(sql) == 'select * from foo;' def test_invalid_option(self): sql = 'select * from foo;' with pytest.raises(SQLParseError): sqlparse.format(sql, output_format='foo') def test_format_column_ordering(): # issue89 sql = 'select * from foo order by c1 desc, c2, c3;' formatted = sqlparse.format(sql, reindent=True) expected = '\n'.join([ 'select *', 'from foo', 'order by c1 desc,', ' c2,', ' c3;']) assert formatted == expected def test_truncate_strings(): sql = "update foo set value = '{}';".format('x' * 1000) formatted = sqlparse.format(sql, truncate_strings=10) assert formatted == "update foo set value = 'xxxxxxxxxx[...]';" formatted = sqlparse.format(sql, truncate_strings=3, truncate_char='YYY') assert formatted == "update foo set value = 'xxxYYY';" @pytest.mark.parametrize('option', ['bar', -1, 0]) def test_truncate_strings_invalid_option2(option): with pytest.raises(SQLParseError): sqlparse.format('foo', truncate_strings=option) @pytest.mark.parametrize('sql', [ 'select verrrylongcolumn from foo', 'select "verrrylongcolumn" from "foo"']) def test_truncate_strings_doesnt_truncate_identifiers(sql): formatted = sqlparse.format(sql, 
truncate_strings=2) assert formatted == sql def test_having_produces_newline(): sql = ('select * from foo, bar where bar.id = foo.bar_id ' 'having sum(bar.value) > 100') formatted = sqlparse.format(sql, reindent=True) expected = [ 'select *', 'from foo,', ' bar', 'where bar.id = foo.bar_id', 'having sum(bar.value) > 100'] assert formatted == '\n'.join(expected) @pytest.mark.parametrize('right_margin', ['ten', 2]) def test_format_right_margin_invalid_option(right_margin): with pytest.raises(SQLParseError): sqlparse.format('foo', right_margin=right_margin) @pytest.mark.xfail(reason="Needs fixing") def test_format_right_margin(): # TODO: Needs better test, only raises exception right now sqlparse.format('foo', right_margin="79") def test_format_json_ops(): # issue542 formatted = sqlparse.format( "select foo->'bar', foo->'bar';", reindent=True) expected = "select foo->'bar',\n foo->'bar';" assert formatted == expected @pytest.mark.parametrize('sql, expected_normal, expected_compact', [ ('case when foo then 1 else bar end', 'case\n when foo then 1\n else bar\nend', 'case when foo then 1 else bar end')]) def test_compact(sql, expected_normal, expected_compact): # issue783 formatted_normal = sqlparse.format(sql, reindent=True) formatted_compact = sqlparse.format(sql, reindent=True, compact=True) assert formatted_normal == expected_normal assert formatted_compact == expected_compact def test_strip_ws_removes_trailing_ws_in_groups(): # issue782 formatted = sqlparse.format('( where foo = bar ) from', strip_whitespace=True) expected = '(where foo = bar) from' assert formatted == expected sqlparse-0.5.4/tests/test_grouping.py0000644000000000000000000005662513615410400014715 0ustar00import pytest import sqlparse from sqlparse import sql, tokens as T def test_grouping_parenthesis(): s = 'select (select (x3) x2) and (y2) bar' parsed = sqlparse.parse(s)[0] assert str(parsed) == s assert len(parsed.tokens) == 7 assert isinstance(parsed.tokens[2], sql.Parenthesis) assert isinstance(parsed.tokens[-1], sql.Identifier) assert len(parsed.tokens[2].tokens) == 5 assert isinstance(parsed.tokens[2].tokens[3], sql.Identifier) assert isinstance(parsed.tokens[2].tokens[3].tokens[0], sql.Parenthesis) assert len(parsed.tokens[2].tokens[3].tokens) == 3 @pytest.mark.parametrize('s', ['foo := 1;', 'foo := 1']) def test_grouping_assignment(s): parsed = sqlparse.parse(s)[0] assert len(parsed.tokens) == 1 assert isinstance(parsed.tokens[0], sql.Assignment) @pytest.mark.parametrize('s', ["x > DATE '2020-01-01'", "x > TIMESTAMP '2020-01-01 00:00:00'"]) def test_grouping_typed_literal(s): parsed = sqlparse.parse(s)[0] assert isinstance(parsed[0][4], sql.TypedLiteral) @pytest.mark.parametrize('s, a, b', [ ('select a from b where c < d + e', sql.Identifier, sql.Identifier), ('select a from b where c < d + interval \'1 day\'', sql.Identifier, sql.TypedLiteral), ('select a from b where c < d + interval \'6\' month', sql.Identifier, sql.TypedLiteral), ('select a from b where c < current_timestamp - interval \'1 day\'', sql.Token, sql.TypedLiteral), ]) def test_compare_expr(s, a, b): parsed = sqlparse.parse(s)[0] assert str(parsed) == s assert isinstance(parsed.tokens[2], sql.Identifier) assert isinstance(parsed.tokens[6], sql.Identifier) assert isinstance(parsed.tokens[8], sql.Where) assert len(parsed.tokens) == 9 where = parsed.tokens[8] assert isinstance(where.tokens[2], sql.Comparison) assert len(where.tokens) == 3 comparison = where.tokens[2] assert isinstance(comparison.tokens[0], sql.Identifier) assert 
comparison.tokens[2].ttype is T.Operator.Comparison assert isinstance(comparison.tokens[4], sql.Operation) assert len(comparison.tokens) == 5 operation = comparison.tokens[4] assert isinstance(operation.tokens[0], a) assert operation.tokens[2].ttype is T.Operator assert isinstance(operation.tokens[4], b) assert len(operation.tokens) == 5 def test_grouping_identifiers(): s = 'select foo.bar from "myscheme"."table" where fail. order' parsed = sqlparse.parse(s)[0] assert str(parsed) == s assert isinstance(parsed.tokens[2], sql.Identifier) assert isinstance(parsed.tokens[6], sql.Identifier) assert isinstance(parsed.tokens[8], sql.Where) s = 'select * from foo where foo.id = 1' parsed = sqlparse.parse(s)[0] assert str(parsed) == s assert isinstance(parsed.tokens[-1].tokens[-1].tokens[0], sql.Identifier) s = 'select * from (select "foo"."id" from foo)' parsed = sqlparse.parse(s)[0] assert str(parsed) == s assert isinstance(parsed.tokens[-1].tokens[3], sql.Identifier) for s in ["INSERT INTO `test` VALUES('foo', 'bar');", "INSERT INTO `test` VALUES(1, 2), (3, 4), (5, 6);", "INSERT INTO `test(a, b)` VALUES(1, 2), (3, 4), (5, 6);"]: parsed = sqlparse.parse(s)[0] types = [l.ttype for l in parsed.tokens if not l.is_whitespace] assert types == [T.DML, T.Keyword, None, None, T.Punctuation] assert isinstance(parsed.tokens[6], sql.Values) s = "select 1.0*(a+b) as col, sum(c)/sum(d) from myschema.mytable" parsed = sqlparse.parse(s)[0] assert len(parsed.tokens) == 7 assert isinstance(parsed.tokens[2], sql.IdentifierList) assert len(parsed.tokens[2].tokens) == 4 identifiers = list(parsed.tokens[2].get_identifiers()) assert len(identifiers) == 2 assert identifiers[0].get_alias() == "col" @pytest.mark.parametrize('s', [ '1 as f', 'foo as f', 'foo f', '1/2 as f', '1/2 f', '1<2 as f', # issue327 '1<2 f', ]) def test_simple_identifiers(s): parsed = sqlparse.parse(s)[0] assert isinstance(parsed.tokens[0], sql.Identifier) @pytest.mark.parametrize('s', [ 'foo, bar', 'sum(a), sum(b)', 'sum(a) as x, b as y', 'sum(a)::integer, b', 'sum(a)/count(b) as x, y', 'sum(a)::integer as x, y', 'sum(a)::integer/count(b) as x, y', # issue297 ]) def test_group_identifier_list(s): parsed = sqlparse.parse(s)[0] assert isinstance(parsed.tokens[0], sql.IdentifierList) def test_grouping_identifier_wildcard(): p = sqlparse.parse('a.*, b.id')[0] assert isinstance(p.tokens[0], sql.IdentifierList) assert isinstance(p.tokens[0].tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[-1], sql.Identifier) def test_grouping_identifier_name_wildcard(): p = sqlparse.parse('a.*')[0] t = p.tokens[0] assert t.get_name() == '*' assert t.is_wildcard() is True def test_grouping_identifier_invalid(): p = sqlparse.parse('a.')[0] assert isinstance(p.tokens[0], sql.Identifier) assert p.tokens[0].has_alias() is False assert p.tokens[0].get_name() is None assert p.tokens[0].get_real_name() is None assert p.tokens[0].get_parent_name() == 'a' def test_grouping_identifier_invalid_in_middle(): # issue261 s = 'SELECT foo. FROM foo' p = sqlparse.parse(s)[0] assert isinstance(p[2], sql.Identifier) assert p[2][1].ttype == T.Punctuation assert p[3].ttype == T.Whitespace assert str(p[2]) == 'foo.' 
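

# A quick way to eyeball groupings like the ones asserted above is the
# (internal) _pprint_tree() helper that test_pprint in test_parse.py relies
# on, e.g. sqlparse.parse('SELECT foo. FROM foo')[0]._pprint_tree().
# The test below is an illustrative sketch, not part of the upstream suite:
# it assumes the dangling-dot behaviour checked in
# test_grouping_identifier_invalid also holds in the middle of a statement.
def test_grouping_identifier_invalid_in_middle_names():
    p = sqlparse.parse('SELECT foo. FROM foo')[0]
    assert p[2].get_parent_name() == 'foo'
    assert p[2].get_real_name() is None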
@pytest.mark.parametrize('s', ['foo as (select *)', 'foo as(select *)']) def test_grouping_identifer_as(s): # issue507 p = sqlparse.parse(s)[0] assert isinstance(p.tokens[0], sql.Identifier) token = p.tokens[0].tokens[2] assert token.ttype == T.Keyword assert token.normalized == 'AS' def test_grouping_identifier_as_invalid(): # issue8 p = sqlparse.parse('foo as select *')[0] assert len(p.tokens), 5 assert isinstance(p.tokens[0], sql.Identifier) assert len(p.tokens[0].tokens) == 1 assert p.tokens[2].ttype == T.Keyword def test_grouping_identifier_function(): p = sqlparse.parse('foo() as bar')[0] assert isinstance(p.tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[0], sql.Function) p = sqlparse.parse('foo()||col2 bar')[0] assert isinstance(p.tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[0], sql.Operation) assert isinstance(p.tokens[0].tokens[0].tokens[0], sql.Function) p = sqlparse.parse('foo(c1) over win1 as bar')[0] assert isinstance(p.tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[0], sql.Function) assert len(p.tokens[0].tokens[0].tokens) == 4 assert isinstance(p.tokens[0].tokens[0].tokens[3], sql.Over) assert isinstance(p.tokens[0].tokens[0].tokens[3].tokens[2], sql.Identifier) p = sqlparse.parse('foo(c1) over (partition by c2 order by c3) as bar')[0] assert isinstance(p.tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[0], sql.Function) assert len(p.tokens[0].tokens[0].tokens) == 4 assert isinstance(p.tokens[0].tokens[0].tokens[3], sql.Over) assert isinstance(p.tokens[0].tokens[0].tokens[3].tokens[2], sql.Parenthesis) @pytest.mark.parametrize('s', ['foo+100', 'foo + 100', 'foo*100']) def test_grouping_operation(s): p = sqlparse.parse(s)[0] assert isinstance(p.tokens[0], sql.Operation) def test_grouping_identifier_list(): p = sqlparse.parse('a, b, c')[0] assert isinstance(p.tokens[0], sql.IdentifierList) p = sqlparse.parse('(a, b, c)')[0] assert isinstance(p.tokens[0].tokens[1], sql.IdentifierList) def test_grouping_identifier_list_subquery(): """identifier lists should still work in subqueries with aliases""" p = sqlparse.parse("select * from (" "select a, b + c as d from table) sub")[0] subquery = p.tokens[-1].tokens[0] idx, iden_list = subquery.token_next_by(i=sql.IdentifierList) assert iden_list is not None # all the identifiers should be within the IdentifierList _, ilist = subquery.token_next_by(i=sql.Identifier, idx=idx) assert ilist is None def test_grouping_identifier_list_case(): p = sqlparse.parse('a, case when 1 then 2 else 3 end as b, c')[0] assert isinstance(p.tokens[0], sql.IdentifierList) p = sqlparse.parse('(a, case when 1 then 2 else 3 end as b, c)')[0] assert isinstance(p.tokens[0].tokens[1], sql.IdentifierList) def test_grouping_identifier_list_other(): # issue2 p = sqlparse.parse("select *, null, 1, 'foo', bar from mytable, x")[0] assert isinstance(p.tokens[2], sql.IdentifierList) assert len(p.tokens[2].tokens) == 13 def test_grouping_identifier_list_with_inline_comments(): # issue163 p = sqlparse.parse('foo /* a comment */, bar')[0] assert isinstance(p.tokens[0], sql.IdentifierList) assert isinstance(p.tokens[0].tokens[0], sql.Identifier) assert isinstance(p.tokens[0].tokens[3], sql.Identifier) def test_grouping_identifiers_with_operators(): p = sqlparse.parse('a+b as c from table where (d-e)%2= 1')[0] assert len([x for x in p.flatten() if x.ttype == T.Name]) == 5 def test_grouping_identifier_list_with_order(): # issue101 p = sqlparse.parse('1, 2 desc, 3')[0] assert isinstance(p.tokens[0], 
sql.IdentifierList)
    assert isinstance(p.tokens[0].tokens[3], sql.Identifier)
    assert str(p.tokens[0].tokens[3]) == '2 desc'


def test_grouping_nested_identifier_with_order():
    # issue745
    p = sqlparse.parse('(a desc)')[0]
    assert isinstance(p.tokens[0], sql.Parenthesis)
    assert isinstance(p.tokens[0].tokens[1], sql.Identifier)
    assert str(p.tokens[0].tokens[1]) == 'a desc'


def test_grouping_where():
    s = 'select * from foo where bar = 1 order by id desc'
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert len(p.tokens) == 12

    s = 'select x from (select y from foo where bar = 1) z'
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert isinstance(p.tokens[-1].tokens[0].tokens[-2], sql.Where)


@pytest.mark.parametrize('s', (
    'select 1 where 1 = 2 union select 2',
    'select 1 where 1 = 2 union all select 2',
))
def test_grouping_where_union(s):
    p = sqlparse.parse(s)[0]
    assert p.tokens[5].value.startswith('union')


def test_returning_kw_ends_where_clause():
    s = 'delete from foo where x > y returning z'
    p = sqlparse.parse(s)[0]
    assert isinstance(p.tokens[6], sql.Where)
    assert p.tokens[7].ttype == T.Keyword
    assert p.tokens[7].value == 'returning'


def test_into_kw_ends_where_clause():
    # issue324
    s = 'select * from foo where a = 1 into baz'
    p = sqlparse.parse(s)[0]
    assert isinstance(p.tokens[8], sql.Where)
    assert p.tokens[9].ttype == T.Keyword
    assert p.tokens[9].value == 'into'


@pytest.mark.parametrize('sql, expected', [
    # note: typecast needs to be 2nd token for this test
    ('select foo::integer from bar', 'integer'),
    ('select (current_database())::information_schema.sql_identifier',
     'information_schema.sql_identifier'),
])
def test_grouping_typecast(sql, expected):
    p = sqlparse.parse(sql)[0]
    assert p.tokens[2].get_typecast() == expected


def test_grouping_alias():
    s = 'select foo as bar from mytable'
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert p.tokens[2].get_real_name() == 'foo'
    assert p.tokens[2].get_alias() == 'bar'

    s = 'select foo from mytable t1'
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert p.tokens[6].get_real_name() == 'mytable'
    assert p.tokens[6].get_alias() == 't1'

    s = 'select foo::integer as bar from mytable'
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert p.tokens[2].get_alias() == 'bar'

    s = ('SELECT DISTINCT '
         '(current_database())::information_schema.sql_identifier AS view')
    p = sqlparse.parse(s)[0]
    assert str(p) == s
    assert p.tokens[4].get_alias() == 'view'


def test_grouping_alias_case():
    # see issue46
    p = sqlparse.parse('CASE WHEN 1 THEN 2 ELSE 3 END foo')[0]
    assert len(p.tokens) == 1
    assert p.tokens[0].get_alias() == 'foo'


def test_grouping_alias_ctas():
    p = sqlparse.parse('CREATE TABLE tbl1 AS SELECT coalesce(t1.col1, 0) AS col1 FROM t1')[0]
    assert p.tokens[10].get_alias() == 'col1'
    assert isinstance(p.tokens[10].tokens[0], sql.Function)


def test_grouping_subquery_no_parens():
    # Not totally sure if this is the right approach...
    # When a THEN clause contains a subquery w/o parenthesis around it *and*
    # a WHERE condition, the WHERE grouper consumes END too.
    # This test makes sure that it doesn't fail.
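    # For reference, the statement parsed below has the shape
    #     CASE WHEN <cond> THEN <subquery without parens> WHERE <cond> ... END
    # and the Where group has to stop before the END that closes the CASE
    # instead of swallowing it.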
p = sqlparse.parse('CASE WHEN 1 THEN select 2 where foo = 1 end')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Case) @pytest.mark.parametrize('s', ['foo.bar', 'x, y', 'x > y', 'x / y']) def test_grouping_alias_returns_none(s): # see issue185 and issue445 p = sqlparse.parse(s)[0] assert len(p.tokens) == 1 assert p.tokens[0].get_alias() is None def test_grouping_idlist_function(): # see issue10 too p = sqlparse.parse('foo(1) x, bar')[0] assert isinstance(p.tokens[0], sql.IdentifierList) def test_grouping_comparison_exclude(): # make sure operators are not handled too lazy p = sqlparse.parse('(=)')[0] assert isinstance(p.tokens[0], sql.Parenthesis) assert not isinstance(p.tokens[0].tokens[1], sql.Comparison) p = sqlparse.parse('(a=1)')[0] assert isinstance(p.tokens[0].tokens[1], sql.Comparison) p = sqlparse.parse('(a>=1)')[0] assert isinstance(p.tokens[0].tokens[1], sql.Comparison) def test_grouping_function(): p = sqlparse.parse('foo()')[0] assert isinstance(p.tokens[0], sql.Function) p = sqlparse.parse('foo(null, bar)')[0] assert isinstance(p.tokens[0], sql.Function) assert len(list(p.tokens[0].get_parameters())) == 2 p = sqlparse.parse('foo(5) over win1')[0] assert isinstance(p.tokens[0], sql.Function) assert len(list(p.tokens[0].get_parameters())) == 1 assert isinstance(p.tokens[0].get_window(), sql.Identifier) p = sqlparse.parse('foo(5) over (PARTITION BY c1)')[0] assert isinstance(p.tokens[0], sql.Function) assert len(list(p.tokens[0].get_parameters())) == 1 assert isinstance(p.tokens[0].get_window(), sql.Parenthesis) def test_grouping_function_not_in(): # issue183 p = sqlparse.parse('in(1, 2)')[0] assert len(p.tokens) == 2 assert p.tokens[0].ttype == T.Keyword assert isinstance(p.tokens[1], sql.Parenthesis) def test_grouping_varchar(): p = sqlparse.parse('"text" Varchar(50) NOT NULL')[0] assert isinstance(p.tokens[2], sql.Function) def test_statement_get_type(): def f(sql): return sqlparse.parse(sql)[0] assert f('select * from foo').get_type() == 'SELECT' assert f('update foo').get_type() == 'UPDATE' assert f(' update foo').get_type() == 'UPDATE' assert f('\nupdate foo').get_type() == 'UPDATE' assert f('foo').get_type() == 'UNKNOWN' def test_identifier_with_operators(): # issue 53 p = sqlparse.parse('foo||bar')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Operation) # again with whitespaces p = sqlparse.parse('foo || bar')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Operation) def test_identifier_with_op_trailing_ws(): # make sure trailing whitespace isn't grouped with identifier p = sqlparse.parse('foo || bar ')[0] assert len(p.tokens) == 2 assert isinstance(p.tokens[0], sql.Operation) assert p.tokens[1].ttype is T.Whitespace def test_identifier_with_string_literals(): p = sqlparse.parse("foo + 'bar'")[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Operation) # This test seems to be wrong. It was introduced when fixing #53, but #111 # showed that this shouldn't be an identifier at all. I'm leaving this # commented in the source for a while. 
# def test_identifier_string_concat(): # p = sqlparse.parse("'foo' || bar")[0] # assert len(p.tokens) == 1 # assert isinstance(p.tokens[0], sql.Identifier) def test_identifier_consumes_ordering(): # issue89 p = sqlparse.parse('select * from foo order by c1 desc, c2, c3')[0] assert isinstance(p.tokens[-1], sql.IdentifierList) ids = list(p.tokens[-1].get_identifiers()) assert len(ids) == 3 assert ids[0].get_name() == 'c1' assert ids[0].get_ordering() == 'DESC' assert ids[1].get_name() == 'c2' assert ids[1].get_ordering() is None def test_comparison_with_keywords(): # issue90 # in fact these are assignments, but for now we don't distinguish them p = sqlparse.parse('foo = NULL')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert len(p.tokens[0].tokens) == 5 assert p.tokens[0].left.value == 'foo' assert p.tokens[0].right.value == 'NULL' # make sure it's case-insensitive p = sqlparse.parse('foo = null')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) def test_comparison_with_floats(): # issue145 p = sqlparse.parse('foo = 25.5')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert len(p.tokens[0].tokens) == 5 assert p.tokens[0].left.value == 'foo' assert p.tokens[0].right.value == '25.5' def test_comparison_with_parenthesis(): # issue23 p = sqlparse.parse('(3 + 4) = 7')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) comp = p.tokens[0] assert isinstance(comp.left, sql.Parenthesis) assert comp.right.ttype is T.Number.Integer @pytest.mark.parametrize('operator', ( '=', '!=', '>', '<', '<=', '>=', '~', '~~', '!~~', 'LIKE', 'NOT LIKE', 'ILIKE', 'NOT ILIKE', )) def test_comparison_with_strings(operator): # issue148 p = sqlparse.parse(f"foo {operator} 'bar'")[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert p.tokens[0].right.value == "'bar'" assert p.tokens[0].right.ttype == T.String.Single def test_like_and_ilike_comparison(): def validate_where_clause(where_clause, expected_tokens): assert len(where_clause.tokens) == len(expected_tokens) for where_token, expected_token in zip(where_clause, expected_tokens): expected_ttype, expected_value = expected_token if where_token.ttype is not None: assert where_token.match(expected_ttype, expected_value, regex=True) else: # Certain tokens, such as comparison tokens, do not define a ttype that can be # matched against. 
For these tokens, we ensure that the token instance is of # the expected type and has a value conforming to specified regular expression import re assert (isinstance(where_token, expected_ttype) and re.match(expected_value, where_token.value)) [p1] = sqlparse.parse("select * from mytable where mytable.mycolumn LIKE 'expr%' limit 5;") [p1_where] = [token for token in p1 if isinstance(token, sql.Where)] validate_where_clause(p1_where, [ (T.Keyword, "where"), (T.Whitespace, None), (sql.Comparison, r"mytable.mycolumn LIKE.*"), (T.Whitespace, None), ]) [p2] = sqlparse.parse( "select * from mytable where mycolumn NOT ILIKE '-expr' group by othercolumn;") [p2_where] = [token for token in p2 if isinstance(token, sql.Where)] validate_where_clause(p2_where, [ (T.Keyword, "where"), (T.Whitespace, None), (sql.Comparison, r"mycolumn NOT ILIKE.*"), (T.Whitespace, None), ]) def test_comparison_with_functions(): # issue230 p = sqlparse.parse('foo = DATE(bar.baz)')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert len(p.tokens[0].tokens) == 5 assert p.tokens[0].left.value == 'foo' assert p.tokens[0].right.value == 'DATE(bar.baz)' p = sqlparse.parse('DATE(foo.bar) = DATE(bar.baz)')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert len(p.tokens[0].tokens) == 5 assert p.tokens[0].left.value == 'DATE(foo.bar)' assert p.tokens[0].right.value == 'DATE(bar.baz)' p = sqlparse.parse('DATE(foo.bar) = bar.baz')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Comparison) assert len(p.tokens[0].tokens) == 5 assert p.tokens[0].left.value == 'DATE(foo.bar)' assert p.tokens[0].right.value == 'bar.baz' def test_comparison_with_typed_literal(): p = sqlparse.parse("foo = DATE 'bar.baz'")[0] assert len(p.tokens) == 1 comp = p.tokens[0] assert isinstance(comp, sql.Comparison) assert len(comp.tokens) == 5 assert comp.left.value == 'foo' assert isinstance(comp.right, sql.TypedLiteral) assert comp.right.value == "DATE 'bar.baz'" @pytest.mark.parametrize('start', ['FOR', 'FOREACH']) def test_forloops(start): p = sqlparse.parse(f'{start} foo in bar LOOP foobar END LOOP')[0] assert (len(p.tokens)) == 1 assert isinstance(p.tokens[0], sql.For) def test_nested_for(): p = sqlparse.parse('FOR foo LOOP FOR bar LOOP END LOOP END LOOP')[0] assert len(p.tokens) == 1 for1 = p.tokens[0] assert for1.tokens[0].value == 'FOR' assert for1.tokens[-1].value == 'END LOOP' for2 = for1.tokens[6] assert isinstance(for2, sql.For) assert for2.tokens[0].value == 'FOR' assert for2.tokens[-1].value == 'END LOOP' def test_begin(): p = sqlparse.parse('BEGIN foo END')[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Begin) def test_keyword_followed_by_parenthesis(): p = sqlparse.parse('USING(somecol')[0] assert len(p.tokens) == 3 assert p.tokens[0].ttype == T.Keyword assert p.tokens[1].ttype == T.Punctuation def test_nested_begin(): p = sqlparse.parse('BEGIN foo BEGIN bar END END')[0] assert len(p.tokens) == 1 outer = p.tokens[0] assert outer.tokens[0].value == 'BEGIN' assert outer.tokens[-1].value == 'END' inner = outer.tokens[4] assert inner.tokens[0].value == 'BEGIN' assert inner.tokens[-1].value == 'END' assert isinstance(inner, sql.Begin) def test_aliased_column_without_as(): p = sqlparse.parse('foo bar')[0].tokens assert len(p) == 1 assert p[0].get_real_name() == 'foo' assert p[0].get_alias() == 'bar' p = sqlparse.parse('foo.bar baz')[0].tokens[0] assert p.get_parent_name() == 'foo' assert p.get_real_name() == 'bar' assert p.get_alias() == 'baz' def 
test_qualified_function(): p = sqlparse.parse('foo()')[0].tokens[0] assert p.get_parent_name() is None assert p.get_real_name() == 'foo' p = sqlparse.parse('foo.bar()')[0].tokens[0] assert p.get_parent_name() == 'foo' assert p.get_real_name() == 'bar' def test_aliased_function_without_as(): p = sqlparse.parse('foo() bar')[0].tokens[0] assert p.get_parent_name() is None assert p.get_real_name() == 'foo' assert p.get_alias() == 'bar' p = sqlparse.parse('foo.bar() baz')[0].tokens[0] assert p.get_parent_name() == 'foo' assert p.get_real_name() == 'bar' assert p.get_alias() == 'baz' def test_aliased_literal_without_as(): p = sqlparse.parse('1 foo')[0].tokens assert len(p) == 1 assert p[0].get_alias() == 'foo' def test_grouping_as_cte(): p = sqlparse.parse('foo AS WITH apple AS 1, banana AS 2')[0].tokens assert len(p) > 4 assert p[0].get_alias() is None assert p[2].value == 'AS' assert p[4].value == 'WITH' def test_grouping_create_table(): p = sqlparse.parse("create table db.tbl (a string)")[0].tokens assert p[4].value == "db.tbl" sqlparse-0.5.4/tests/test_keywords.py0000644000000000000000000000067713615410400014726 0ustar00import pytest from sqlparse import tokens from sqlparse.lexer import Lexer class TestSQLREGEX: @pytest.mark.parametrize('number', ['1.0', '-1.0', '1.', '-1.', '.1', '-.1']) def test_float_numbers(self, number): ttype = next(tt for action, tt in Lexer.get_default_instance()._SQL_REGEX if action(number)) assert tokens.Number.Float == ttype sqlparse-0.5.4/tests/test_parse.py0000644000000000000000000004543413615410400014171 0ustar00"""Tests sqlparse.parse().""" from io import StringIO import pytest import sqlparse from sqlparse import sql, tokens as T, keywords from sqlparse.lexer import Lexer def test_parse_tokenize(): s = 'select * from foo;' stmts = sqlparse.parse(s) assert len(stmts) == 1 assert str(stmts[0]) == s def test_parse_multistatement(): sql1 = 'select * from foo;' sql2 = 'select * from bar;' stmts = sqlparse.parse(sql1 + sql2) assert len(stmts) == 2 assert str(stmts[0]) == sql1 assert str(stmts[1]) == sql2 @pytest.mark.parametrize('s', ['select\n*from foo;', 'select\r\n*from foo', 'select\r*from foo', 'select\r\n*from foo\n']) def test_parse_newlines(s): p = sqlparse.parse(s)[0] assert str(p) == s def test_parse_within(): s = 'foo(col1, col2)' p = sqlparse.parse(s)[0] col1 = p.tokens[0].tokens[1].tokens[1].tokens[0] assert col1.within(sql.Function) def test_parse_child_of(): s = '(col1, col2)' p = sqlparse.parse(s)[0] assert p.tokens[0].tokens[1].is_child_of(p.tokens[0]) s = 'select foo' p = sqlparse.parse(s)[0] assert not p.tokens[2].is_child_of(p.tokens[0]) assert p.tokens[2].is_child_of(p) def test_parse_has_ancestor(): s = 'foo or (bar, baz)' p = sqlparse.parse(s)[0] baz = p.tokens[-1].tokens[1].tokens[-1] assert baz.has_ancestor(p.tokens[-1].tokens[1]) assert baz.has_ancestor(p.tokens[-1]) assert baz.has_ancestor(p) @pytest.mark.parametrize('s', ['.5', '.51', '1.5', '12.5']) def test_parse_float(s): t = sqlparse.parse(s)[0].tokens assert len(t) == 1 assert t[0].ttype is sqlparse.tokens.Number.Float @pytest.mark.parametrize('s, holder', [ ('select * from foo where user = ?', '?'), ('select * from foo where user = :1', ':1'), ('select * from foo where user = :name', ':name'), ('select * from foo where user = %s', '%s'), ('select * from foo where user = $a', '$a')]) def test_parse_placeholder(s, holder): t = sqlparse.parse(s)[0].tokens[-1].tokens assert t[-1].ttype is sqlparse.tokens.Name.Placeholder assert t[-1].value == holder def 
test_parse_modulo_not_placeholder(): tokens = list(sqlparse.lexer.tokenize('x %3')) assert tokens[2][0] == sqlparse.tokens.Operator def test_parse_access_symbol(): # see issue27 t = sqlparse.parse('select a.[foo bar] as foo')[0].tokens assert isinstance(t[-1], sql.Identifier) assert t[-1].get_name() == 'foo' assert t[-1].get_real_name() == '[foo bar]' assert t[-1].get_parent_name() == 'a' def test_parse_square_brackets_notation_isnt_too_greedy(): # see issue153 t = sqlparse.parse('[foo], [bar]')[0].tokens assert isinstance(t[0], sql.IdentifierList) assert len(t[0].tokens) == 4 assert t[0].tokens[0].get_real_name() == '[foo]' assert t[0].tokens[-1].get_real_name() == '[bar]' def test_parse_square_brackets_notation_isnt_too_greedy2(): # see issue583 t = sqlparse.parse('[(foo[i])]')[0].tokens assert isinstance(t[0], sql.SquareBrackets) # not Identifier! def test_parse_keyword_like_identifier(): # see issue47 t = sqlparse.parse('foo.key')[0].tokens assert len(t) == 1 assert isinstance(t[0], sql.Identifier) def test_parse_function_parameter(): # see issue94 t = sqlparse.parse('abs(some_col)')[0].tokens[0].get_parameters() assert len(t) == 1 assert isinstance(t[0], sql.Identifier) def test_parse_function_param_single_literal(): t = sqlparse.parse('foo(5)')[0].tokens[0].get_parameters() assert len(t) == 1 assert t[0].ttype is T.Number.Integer def test_parse_nested_function(): t = sqlparse.parse('foo(bar(5))')[0].tokens[0].get_parameters() assert len(t) == 1 assert type(t[0]) is sql.Function def test_parse_casted_params(): t = sqlparse.parse("foo(DATE '2023-11-14', TIMESTAMP '2023-11-15')")[0].tokens[0].get_parameters() assert len(t) == 2 def test_parse_div_operator(): p = sqlparse.parse('col1 DIV 5 AS div_col1')[0].tokens assert p[0].tokens[0].tokens[2].ttype is T.Operator assert p[0].get_alias() == 'div_col1' def test_quoted_identifier(): t = sqlparse.parse('select x.y as "z" from foo')[0].tokens assert isinstance(t[2], sql.Identifier) assert t[2].get_name() == 'z' assert t[2].get_real_name() == 'y' @pytest.mark.parametrize('name', [ 'foo', '_foo', # issue175 '1_data', # valid MySQL table name, see issue337 '妤呭悕绋', # valid at least for SQLite3, see issue641 ]) def test_valid_identifier_names(name): t = sqlparse.parse(name)[0].tokens assert isinstance(t[0], sql.Identifier) assert t[0].get_name() == name def test_psql_quotation_marks(): # issue83 # regression: make sure plain $$ work t = sqlparse.split(""" CREATE OR REPLACE FUNCTION testfunc1(integer) RETURNS integer AS $$ .... $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION testfunc2(integer) RETURNS integer AS $$ .... $$ LANGUAGE plpgsql;""") assert len(t) == 2 # make sure $SOMETHING$ works too t = sqlparse.split(""" CREATE OR REPLACE FUNCTION testfunc1(integer) RETURNS integer AS $PROC_1$ .... $PROC_1$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION testfunc2(integer) RETURNS integer AS $PROC_2$ .... 
$PROC_2$ LANGUAGE plpgsql;""") assert len(t) == 2 # operators are valid infront of dollar quoted strings t = sqlparse.split("""UPDATE SET foo =$$bar;SELECT bar$$""") assert len(t) == 1 # identifiers must be separated by whitespace t = sqlparse.split("""UPDATE SET foo TO$$bar;SELECT bar$$""") assert len(t) == 2 def test_double_precision_is_builtin(): s = 'DOUBLE PRECISION' t = sqlparse.parse(s)[0].tokens assert len(t) == 1 assert t[0].ttype == sqlparse.tokens.Name.Builtin assert t[0].value == 'DOUBLE PRECISION' @pytest.mark.parametrize('ph', ['?', ':1', ':foo', '%s', '%(foo)s']) def test_placeholder(ph): p = sqlparse.parse(ph)[0].tokens assert len(p) == 1 assert p[0].ttype is T.Name.Placeholder @pytest.mark.parametrize('num, expected', [ ('6.67428E-8', T.Number.Float), ('1.988e33', T.Number.Float), ('1e-12', T.Number.Float), ('e1', None), ]) def test_scientific_numbers(num, expected): p = sqlparse.parse(num)[0].tokens assert len(p) == 1 assert p[0].ttype is expected def test_single_quotes_are_strings(): p = sqlparse.parse("'foo'")[0].tokens assert len(p) == 1 assert p[0].ttype is T.String.Single def test_double_quotes_are_identifiers(): p = sqlparse.parse('"foo"')[0].tokens assert len(p) == 1 assert isinstance(p[0], sql.Identifier) def test_single_quotes_with_linebreaks(): # issue118 p = sqlparse.parse("'f\nf'")[0].tokens assert len(p) == 1 assert p[0].ttype is T.String.Single def test_sqlite_identifiers(): # Make sure we still parse sqlite style escapes p = sqlparse.parse('[col1],[col2]')[0].tokens id_names = [id_.get_name() for id_ in p[0].get_identifiers()] assert len(p) == 1 assert isinstance(p[0], sql.IdentifierList) assert id_names == ['[col1]', '[col2]'] p = sqlparse.parse('[col1]+[col2]')[0] types = [tok.ttype for tok in p.flatten()] assert types == [T.Name, T.Operator, T.Name] def test_simple_1d_array_index(): p = sqlparse.parse('col[1]')[0].tokens assert len(p) == 1 assert p[0].get_name() == 'col' indices = list(p[0].get_array_indices()) assert len(indices) == 1 # 1-dimensional index assert len(indices[0]) == 1 # index is single token assert indices[0][0].value == '1' def test_2d_array_index(): p = sqlparse.parse('col[x][(y+1)*2]')[0].tokens assert len(p) == 1 assert p[0].get_name() == 'col' assert len(list(p[0].get_array_indices())) == 2 # 2-dimensional index def test_array_index_function_result(): p = sqlparse.parse('somefunc()[1]')[0].tokens assert len(p) == 1 assert len(list(p[0].get_array_indices())) == 1 def test_schema_qualified_array_index(): p = sqlparse.parse('schem.col[1]')[0].tokens assert len(p) == 1 assert p[0].get_parent_name() == 'schem' assert p[0].get_name() == 'col' assert list(p[0].get_array_indices())[0][0].value == '1' def test_aliased_array_index(): p = sqlparse.parse('col[1] x')[0].tokens assert len(p) == 1 assert p[0].get_alias() == 'x' assert p[0].get_real_name() == 'col' assert list(p[0].get_array_indices())[0][0].value == '1' def test_array_literal(): # See issue #176 p = sqlparse.parse('ARRAY[%s, %s]')[0] assert len(p.tokens) == 2 assert len(list(p.flatten())) == 7 def test_typed_array_definition(): # array indices aren't grouped with built-ins, but make sure we can extract # identifier names p = sqlparse.parse('x int, y int[], z int')[0] names = [x.get_name() for x in p.get_sublists() if isinstance(x, sql.Identifier)] assert names == ['x', 'y', 'z'] @pytest.mark.parametrize('s', ['select 1 -- foo', 'select 1 # foo']) def test_single_line_comments(s): # see issue178 p = sqlparse.parse(s)[0] assert len(p.tokens) == 5 assert p.tokens[-1].ttype == 
T.Comment.Single @pytest.mark.parametrize('s', ['foo', '@foo', '#foo', '##foo']) def test_names_and_special_names(s): # see issue192 p = sqlparse.parse(s)[0] assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Identifier) def test_get_token_at_offset(): p = sqlparse.parse('select * from dual')[0] # 0123456789 assert p.get_token_at_offset(0) == p.tokens[0] assert p.get_token_at_offset(1) == p.tokens[0] assert p.get_token_at_offset(6) == p.tokens[1] assert p.get_token_at_offset(7) == p.tokens[2] assert p.get_token_at_offset(8) == p.tokens[3] assert p.get_token_at_offset(9) == p.tokens[4] assert p.get_token_at_offset(10) == p.tokens[4] def test_pprint(): p = sqlparse.parse('select a0, b0, c0, d0, e0 from ' '(select * from dual) q0 where 1=1 and 2=2')[0] output = StringIO() p._pprint_tree(f=output) pprint = '\n'.join([ "|- 0 DML 'select'", "|- 1 Whitespace ' '", "|- 2 IdentifierList 'a0, b0...'", "| |- 0 Identifier 'a0'", "| | `- 0 Name 'a0'", "| |- 1 Punctuation ','", "| |- 2 Whitespace ' '", "| |- 3 Identifier 'b0'", "| | `- 0 Name 'b0'", "| |- 4 Punctuation ','", "| |- 5 Whitespace ' '", "| |- 6 Identifier 'c0'", "| | `- 0 Name 'c0'", "| |- 7 Punctuation ','", "| |- 8 Whitespace ' '", "| |- 9 Identifier 'd0'", "| | `- 0 Name 'd0'", "| |- 10 Punctuation ','", "| |- 11 Whitespace ' '", "| `- 12 Identifier 'e0'", "| `- 0 Name 'e0'", "|- 3 Whitespace ' '", "|- 4 Keyword 'from'", "|- 5 Whitespace ' '", "|- 6 Identifier '(selec...'", "| |- 0 Parenthesis '(selec...'", "| | |- 0 Punctuation '('", "| | |- 1 DML 'select'", "| | |- 2 Whitespace ' '", "| | |- 3 Wildcard '*'", "| | |- 4 Whitespace ' '", "| | |- 5 Keyword 'from'", "| | |- 6 Whitespace ' '", "| | |- 7 Identifier 'dual'", "| | | `- 0 Name 'dual'", "| | `- 8 Punctuation ')'", "| |- 1 Whitespace ' '", "| `- 2 Identifier 'q0'", "| `- 0 Name 'q0'", "|- 7 Whitespace ' '", "`- 8 Where 'where ...'", " |- 0 Keyword 'where'", " |- 1 Whitespace ' '", " |- 2 Comparison '1=1'", " | |- 0 Integer '1'", " | |- 1 Comparison '='", " | `- 2 Integer '1'", " |- 3 Whitespace ' '", " |- 4 Keyword 'and'", " |- 5 Whitespace ' '", " `- 6 Comparison '2=2'", " |- 0 Integer '2'", " |- 1 Comparison '='", " `- 2 Integer '2'", ""]) assert output.getvalue() == pprint def test_wildcard_multiplication(): p = sqlparse.parse('select * from dual')[0] assert p.tokens[2].ttype == T.Wildcard p = sqlparse.parse('select a0.* from dual a0')[0] assert p.tokens[2][2].ttype == T.Wildcard p = sqlparse.parse('select 1 * 2 from dual')[0] assert p.tokens[2][2].ttype == T.Operator def test_stmt_tokens_parents(): # see issue 226 s = "CREATE TABLE test();" stmt = sqlparse.parse(s)[0] for token in stmt.tokens: assert token.has_ancestor(stmt) @pytest.mark.parametrize('sql, is_literal', [ ('$$foo$$', True), ('$_$foo$_$', True), ('$token$ foo $token$', True), # don't parse inner tokens ('$_$ foo $token$bar$token$ baz$_$', True), ('$A$ foo $B$', False) # tokens don't match ]) def test_dbldollar_as_literal(sql, is_literal): # see issue 277 p = sqlparse.parse(sql)[0] if is_literal: assert len(p.tokens) == 1 assert p.tokens[0].ttype == T.Literal else: for token in p.tokens: assert token.ttype != T.Literal def test_non_ascii(): _test_non_ascii = "insert into test (id, name) values (1, '褌械褋褌');" s = _test_non_ascii stmts = sqlparse.parse(s) assert len(stmts) == 1 statement = stmts[0] assert str(statement) == s assert statement._pprint_tree() is None s = _test_non_ascii.encode('utf-8') stmts = sqlparse.parse(s, 'utf-8') assert len(stmts) == 1 statement = stmts[0] assert str(statement) == 
_test_non_ascii assert statement._pprint_tree() is None def test_get_real_name(): # issue 369 s = "update a t set t.b=1" stmts = sqlparse.parse(s) assert len(stmts) == 1 assert 'a' == stmts[0].tokens[2].get_real_name() assert 't' == stmts[0].tokens[2].get_alias() def test_from_subquery(): # issue 446 s = 'from(select 1)' stmts = sqlparse.parse(s) assert len(stmts) == 1 assert len(stmts[0].tokens) == 2 assert stmts[0].tokens[0].value == 'from' assert stmts[0].tokens[0].ttype == T.Keyword s = 'from (select 1)' stmts = sqlparse.parse(s) assert len(stmts) == 1 assert len(stmts[0].tokens) == 3 assert stmts[0].tokens[0].value == 'from' assert stmts[0].tokens[0].ttype == T.Keyword assert stmts[0].tokens[1].ttype == T.Whitespace def test_parenthesis(): tokens = sqlparse.parse("(\n\n1\n\n)")[0].tokens[0].tokens assert list(map(lambda t: t.ttype, tokens)) == [T.Punctuation, T.Newline, T.Newline, T.Number.Integer, T.Newline, T.Newline, T.Punctuation] tokens = sqlparse.parse("(\n\n 1 \n\n)")[0].tokens[0].tokens assert list(map(lambda t: t.ttype, tokens)) == [T.Punctuation, T.Newline, T.Newline, T.Whitespace, T.Number.Integer, T.Whitespace, T.Newline, T.Newline, T.Punctuation] def test_configurable_keywords(): sql = """select * from foo BACON SPAM EGGS;""" tokens = sqlparse.parse(sql)[0] assert list( (t.ttype, t.value) for t in tokens if t.ttype not in sqlparse.tokens.Whitespace ) == [ (sqlparse.tokens.Keyword.DML, "select"), (sqlparse.tokens.Wildcard, "*"), (sqlparse.tokens.Keyword, "from"), (None, "foo BACON"), (None, "SPAM EGGS"), (sqlparse.tokens.Punctuation, ";"), ] Lexer.get_default_instance().add_keywords( { "BACON": sqlparse.tokens.Name.Builtin, "SPAM": sqlparse.tokens.Keyword, "EGGS": sqlparse.tokens.Keyword, } ) tokens = sqlparse.parse(sql)[0] # reset the syntax for later tests. 
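    # The lexer is a process-wide singleton (see the
    # Lexer.get_default_instance() calls above), so keywords added for this
    # test would otherwise leak into every test that runs after it.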
Lexer.get_default_instance().default_initialization() assert list( (t.ttype, t.value) for t in tokens if t.ttype not in sqlparse.tokens.Whitespace ) == [ (sqlparse.tokens.Keyword.DML, "select"), (sqlparse.tokens.Wildcard, "*"), (sqlparse.tokens.Keyword, "from"), (None, "foo"), (sqlparse.tokens.Name.Builtin, "BACON"), (sqlparse.tokens.Keyword, "SPAM"), (sqlparse.tokens.Keyword, "EGGS"), (sqlparse.tokens.Punctuation, ";"), ] def test_regexp(): s = "column REGEXP '.+static.+'" stmts = sqlparse.parse(s) assert len(stmts) == 1 assert stmts[0].tokens[0].ttype == T.Keyword assert stmts[0].tokens[0].value == "column" assert stmts[0].tokens[2].ttype == T.Comparison assert stmts[0].tokens[2].value == "REGEXP" assert stmts[0].tokens[4].ttype == T.Literal.String.Single assert stmts[0].tokens[4].value == "'.+static.+'" def test_regexp_binary(): s = "column REGEXP BINARY '.+static.+'" stmts = sqlparse.parse(s) assert len(stmts) == 1 assert stmts[0].tokens[0].ttype == T.Keyword assert stmts[0].tokens[0].value == "column" assert stmts[0].tokens[2].ttype == T.Comparison assert stmts[0].tokens[2].value == "REGEXP BINARY" assert stmts[0].tokens[4].ttype == T.Literal.String.Single assert stmts[0].tokens[4].value == "'.+static.+'" def test_configurable_regex(): lex = Lexer.get_default_instance() lex.clear() my_regex = (r"ZORDER\s+BY\b", sqlparse.tokens.Keyword) lex.set_SQL_REGEX( keywords.SQL_REGEX[:38] + [my_regex] + keywords.SQL_REGEX[38:] ) lex.add_keywords(keywords.KEYWORDS_COMMON) lex.add_keywords(keywords.KEYWORDS_ORACLE) lex.add_keywords(keywords.KEYWORDS_PLPGSQL) lex.add_keywords(keywords.KEYWORDS_HQL) lex.add_keywords(keywords.KEYWORDS_MSACCESS) lex.add_keywords(keywords.KEYWORDS) tokens = sqlparse.parse("select * from foo zorder by bar;")[0] # reset the syntax for later tests. Lexer.get_default_instance().default_initialization() assert list( (t.ttype, t.value) for t in tokens if t.ttype not in sqlparse.tokens.Whitespace )[4] == (sqlparse.tokens.Keyword, "zorder by") @pytest.mark.parametrize('sql', [ '->', '->>', '#>', '#>>', '@>', '<@', # leaving ? out for now, they're somehow ambiguous as placeholders # '?', '?|', '?&', '||', '-', '#-' ]) def test_json_operators(sql): p = sqlparse.parse(sql) assert len(p) == 1 assert len(p[0].tokens) == 1 assert p[0].tokens[0].ttype == sqlparse.tokens.Operator sqlparse-0.5.4/tests/test_regressions.py0000644000000000000000000003502013615410400015410 0ustar00import copy import sys import pytest import sqlparse from sqlparse import sql, tokens as T from sqlparse.exceptions import SQLParseError def test_issue9(): # make sure where doesn't consume parenthesis p = sqlparse.parse('(where 1)')[0] assert isinstance(p, sql.Statement) assert len(p.tokens) == 1 assert isinstance(p.tokens[0], sql.Parenthesis) prt = p.tokens[0] assert len(prt.tokens) == 3 assert prt.tokens[0].ttype == T.Punctuation assert prt.tokens[-1].ttype == T.Punctuation def test_issue13(): parsed = sqlparse.parse("select 'one';\n" "select 'two\\'';\n" "select 'three';") assert len(parsed) == 3 assert str(parsed[1]).strip() == "select 'two\\'';" @pytest.mark.parametrize('s', ['--hello', '-- hello', '--hello\n', '--', '--\n']) def test_issue26(s): # parse stand-alone comments p = sqlparse.parse(s)[0] assert len(p.tokens) == 1 assert p.tokens[0].ttype is T.Comment.Single @pytest.mark.parametrize('value', ['create', 'CREATE']) def test_issue34(value): t = sqlparse.parse("create")[0].token_first() assert t.match(T.Keyword.DDL, value) is True def test_issue35(): # missing space before LIMIT. 
Updated for #321 sql = sqlparse.format("select * from foo where bar = 1 limit 1", reindent=True) assert sql == "\n".join([ "select *", "from foo", "where bar = 1", "limit 1"]) def test_issue38(): sql = sqlparse.format("SELECT foo; -- comment", strip_comments=True) assert sql == "SELECT foo;" sql = sqlparse.format("/* foo */", strip_comments=True) assert sql == "" def test_issue39(): p = sqlparse.parse('select user.id from user')[0] assert len(p.tokens) == 7 idt = p.tokens[2] assert idt.__class__ == sql.Identifier assert len(idt.tokens) == 3 assert idt.tokens[0].match(T.Name, 'user') is True assert idt.tokens[1].match(T.Punctuation, '.') is True assert idt.tokens[2].match(T.Name, 'id') is True def test_issue40(): # make sure identifier lists in subselects are grouped p = sqlparse.parse('SELECT id, name FROM ' '(SELECT id, name FROM bar) as foo')[0] assert len(p.tokens) == 7 assert p.tokens[2].__class__ == sql.IdentifierList assert p.tokens[-1].__class__ == sql.Identifier assert p.tokens[-1].get_name() == 'foo' sp = p.tokens[-1].tokens[0] assert sp.tokens[3].__class__ == sql.IdentifierList # make sure that formatting works as expected s = sqlparse.format('SELECT id == name FROM ' '(SELECT id, name FROM bar)', reindent=True) assert s == '\n'.join([ 'SELECT id == name', 'FROM', ' (SELECT id,', ' name', ' FROM bar)']) s = sqlparse.format('SELECT id == name FROM ' '(SELECT id, name FROM bar) as foo', reindent=True) assert s == '\n'.join([ 'SELECT id == name', 'FROM', ' (SELECT id,', ' name', ' FROM bar) as foo']) @pytest.mark.parametrize('s', ['select x.y::text as z from foo', 'select x.y::text as "z" from foo', 'select x."y"::text as z from foo', 'select x."y"::text as "z" from foo', 'select "x".y::text as z from foo', 'select "x".y::text as "z" from foo', 'select "x"."y"::text as z from foo', 'select "x"."y"::text as "z" from foo']) @pytest.mark.parametrize('func_name, result', [('get_name', 'z'), ('get_real_name', 'y'), ('get_parent_name', 'x'), ('get_alias', 'z'), ('get_typecast', 'text')]) def test_issue78(s, func_name, result): # the bug author provided this nice examples, let's use them! p = sqlparse.parse(s)[0] i = p.tokens[2] assert isinstance(i, sql.Identifier) func = getattr(i, func_name) assert func() == result def test_issue83(): sql = """ CREATE OR REPLACE FUNCTION func_a(text) RETURNS boolean LANGUAGE plpgsql STRICT IMMUTABLE AS $_$ BEGIN ... END; $_$; CREATE OR REPLACE FUNCTION func_b(text) RETURNS boolean LANGUAGE plpgsql STRICT IMMUTABLE AS $_$ BEGIN ... END; $_$; ALTER TABLE..... ;""" t = sqlparse.split(sql) assert len(t) == 3 def test_comment_encoding_when_reindent(): # There was an UnicodeEncodeError in the reindent filter that # casted every comment followed by a keyword to str. sql = 'select foo -- Comment containing 脺ml盲uts\nfrom bar' formatted = sqlparse.format(sql, reindent=True) assert formatted == sql def test_parse_sql_with_binary(): # See https://github.com/andialbrecht/sqlparse/pull/88 # digest = '聜|脣锚聤plL4隆h聭酶N{' digest = '\x82|\xcb\x0e\xea\x8aplL4\xa1h\x91\xf8N{' sql = f"select * from foo where bar = '{digest}'" formatted = sqlparse.format(sql, reindent=True) tformatted = f"select *\nfrom foo\nwhere bar = '{digest}'" assert formatted == tformatted def test_dont_alias_keywords(): # The _group_left_right function had a bug where the check for the # left side wasn't handled correctly. In one case this resulted in # a keyword turning into an identifier. 
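    # 'FROM AS foo' is deliberately invalid SQL: neither FROM nor AS may be
    # swallowed into an Identifier group, so both must keep ttype T.Keyword.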
p = sqlparse.parse('FROM AS foo')[0] assert len(p.tokens) == 5 assert p.tokens[0].ttype is T.Keyword assert p.tokens[2].ttype is T.Keyword def test_format_accepts_encoding(load_file): # issue20 sql = load_file('test_cp1251.sql', 'cp1251') formatted = sqlparse.format(sql, reindent=True, encoding='cp1251') tformatted = 'insert into foo\nvalues (1); -- 袩械褋薪褟 锌褉芯 薪邪写械卸写褍' assert formatted == tformatted def test_stream(get_stream): with get_stream("stream.sql") as stream: p = sqlparse.parse(stream)[0] assert p.get_type() == 'INSERT' def test_issue90(): sql = ('UPDATE "gallery_photo" SET "owner_id" = 4018, "deleted_at" = NULL,' ' "width" = NULL, "height" = NULL, "rating_votes" = 0,' ' "rating_score" = 0, "thumbnail_width" = NULL,' ' "thumbnail_height" = NULL, "price" = 1, "description" = NULL') formatted = sqlparse.format(sql, reindent=True) tformatted = '\n'.join([ 'UPDATE "gallery_photo"', 'SET "owner_id" = 4018,', ' "deleted_at" = NULL,', ' "width" = NULL,', ' "height" = NULL,', ' "rating_votes" = 0,', ' "rating_score" = 0,', ' "thumbnail_width" = NULL,', ' "thumbnail_height" = NULL,', ' "price" = 1,', ' "description" = NULL']) assert formatted == tformatted def test_except_formatting(): sql = 'SELECT 1 FROM foo WHERE 2 = 3 EXCEPT SELECT 2 FROM bar WHERE 1 = 2' formatted = sqlparse.format(sql, reindent=True) tformatted = '\n'.join([ 'SELECT 1', 'FROM foo', 'WHERE 2 = 3', 'EXCEPT', 'SELECT 2', 'FROM bar', 'WHERE 1 = 2']) assert formatted == tformatted def test_null_with_as(): sql = 'SELECT NULL AS c1, NULL AS c2 FROM t1' formatted = sqlparse.format(sql, reindent=True) tformatted = '\n'.join([ 'SELECT NULL AS c1,', ' NULL AS c2', 'FROM t1']) assert formatted == tformatted def test_issue190_open_file(filepath): path = filepath('stream.sql') with open(path) as stream: p = sqlparse.parse(stream)[0] assert p.get_type() == 'INSERT' def test_issue193_splitting_function(): sql = """ CREATE FUNCTION a(x VARCHAR(20)) RETURNS VARCHAR(20) BEGIN DECLARE y VARCHAR(20); RETURN x; END; SELECT * FROM a.b;""" statements = sqlparse.split(sql) assert len(statements) == 2 def test_issue194_splitting_function(): sql = """ CREATE FUNCTION a(x VARCHAR(20)) RETURNS VARCHAR(20) BEGIN DECLARE y VARCHAR(20); IF (1 = 1) THEN SET x = y; END IF; RETURN x; END; SELECT * FROM a.b;""" statements = sqlparse.split(sql) assert len(statements) == 2 def test_issue186_get_type(): sql = "-- comment\ninsert into foo" p = sqlparse.parse(sql)[0] assert p.get_type() == 'INSERT' def test_issue212_py2unicode(): t1 = sql.Token(T.String, 'sch枚ner ') t2 = sql.Token(T.String, 'bug') token_list = sql.TokenList([t1, t2]) assert str(token_list) == 'sch枚ner bug' def test_issue213_leadingws(): sql = " select * from foo" assert sqlparse.format(sql, strip_whitespace=True) == "select * from foo" def test_issue227_gettype_cte(): select_stmt = sqlparse.parse('SELECT 1, 2, 3 FROM foo;') assert select_stmt[0].get_type() == 'SELECT' with_stmt = sqlparse.parse('WITH foo AS (SELECT 1, 2, 3)' 'SELECT * FROM foo;') assert with_stmt[0].get_type() == 'SELECT' with2_stmt = sqlparse.parse(""" WITH foo AS (SELECT 1 AS abc, 2 AS def), bar AS (SELECT * FROM something WHERE x > 1) INSERT INTO elsewhere SELECT * FROM foo JOIN bar;""") assert with2_stmt[0].get_type() == 'INSERT' def test_issue207_runaway_format(): sql = 'select 1 from (select 1 as one, 2 as two, 3 from dual) t0' p = sqlparse.format(sql, reindent=True) assert p == '\n'.join([ "select 1", "from", " (select 1 as one,", " 2 as two,", " 3", " from dual) t0"]) def test_token_next_doesnt_ignore_skip_cm(): sql 
= '--comment\nselect 1' tok = sqlparse.parse(sql)[0].token_next(-1, skip_cm=True)[1] assert tok.value == 'select' @pytest.mark.parametrize('s', [ 'SELECT x AS', 'AS' ]) def test_issue284_as_grouping(s): p = sqlparse.parse(s)[0] assert s == str(p) def test_issue315_utf8_by_default(): # Make sure the lexer can handle utf-8 string by default correctly # digest = '齐天大圣.カラフルな雲.사랑해요' # The digest contains Chinese, Japanese and Korean characters # All in 'utf-8' encoding. digest = ( '\xe9\xbd\x90\xe5\xa4\xa9\xe5\xa4\xa7\xe5\x9c\xa3.' '\xe3\x82\xab\xe3\x83\xa9\xe3\x83\x95\xe3\x83\xab\xe3\x81\xaa\xe9' '\x9b\xb2.' '\xec\x82\xac\xeb\x9e\x91\xed\x95\xb4\xec\x9a\x94' ) sql = f"select * from foo where bar = '{digest}'" formatted = sqlparse.format(sql, reindent=True) tformatted = f"select *\nfrom foo\nwhere bar = '{digest}'" assert formatted == tformatted def test_issue322_concurrently_is_keyword(): s = 'CREATE INDEX CONCURRENTLY myindex ON mytable(col1);' p = sqlparse.parse(s)[0] assert len(p.tokens) == 12 assert p.tokens[0].ttype is T.Keyword.DDL # CREATE assert p.tokens[2].ttype is T.Keyword # INDEX assert p.tokens[4].ttype is T.Keyword # CONCURRENTLY assert p.tokens[4].value == 'CONCURRENTLY' assert isinstance(p.tokens[6], sql.Identifier) assert p.tokens[6].value == 'myindex' @pytest.mark.parametrize('s', [ 'SELECT @min_price:=MIN(price), @max_price:=MAX(price) FROM shop;', 'SELECT @min_price:=MIN(price), @max_price:=MAX(price) FROM shop', ]) def test_issue359_index_error_assignments(s): sqlparse.parse(s) sqlparse.format(s, strip_comments=True) def test_issue469_copy_as_psql_command(): formatted = sqlparse.format( '\\copy select * from foo', keyword_case='upper', identifier_case='capitalize') assert formatted == '\\copy SELECT * FROM Foo' @pytest.mark.xfail(reason='Needs to be fixed') def test_issue484_comments_and_newlines(): formatted = sqlparse.format('\n'.join([ 'Create table myTable', '(', ' myId TINYINT NOT NULL, --my special comment', ' myName VARCHAR2(100) NOT NULL', ')']), strip_comments=True) assert formatted == ('\n'.join([ 'Create table myTable', '(', ' myId TINYINT NOT NULL,', ' myName VARCHAR2(100) NOT NULL', ')'])) def test_issue485_split_multi(): p_sql = '''CREATE OR REPLACE RULE ruled_tab_2rules AS ON INSERT TO public.ruled_tab DO instead ( select 1; select 2; );''' assert len(sqlparse.split(p_sql)) == 1 def test_issue489_tzcasts(): p = sqlparse.parse('select bar at time zone \'UTC\' as foo')[0] assert p.tokens[-1].has_alias() is True assert p.tokens[-1].get_alias() == 'foo' def test_issue562_tzcasts(): # Test that whitespace between 'from' and 'bar' is retained formatted = sqlparse.format( 'SELECT f(HOUR from bar AT TIME ZONE \'UTC\') from foo', reindent=True ) assert formatted == \ 'SELECT f(HOUR\n from bar AT TIME ZONE \'UTC\')\nfrom foo' def test_as_in_parentheses_indents(): # did raise NoneType has no attribute is_group in _process_parentheses formatted = sqlparse.format('(as foo)', reindent=True) assert formatted == '(as foo)' def test_format_invalid_where_clause(): # did raise ValueError formatted = sqlparse.format('where, foo', reindent=True) assert formatted == 'where, foo' def test_splitting_at_and_backticks_issue588(): splitted = sqlparse.split( 'grant foo to user1@`myhost`; grant bar to user1@`myhost`;') assert len(splitted) == 2 assert splitted[-1] == 'grant bar to user1@`myhost`;' def test_comment_between_cte_clauses_issue632(): p, = sqlparse.parse(""" WITH foo AS (), -- A comment before baz subquery baz AS () SELECT * FROM baz;""") assert p.get_type() ==
"SELECT" def test_copy_issue672(): p = sqlparse.parse('select * from foo')[0] copied = copy.deepcopy(p) assert str(p) == str(copied) def test_primary_key_issue740(): p = sqlparse.parse('PRIMARY KEY')[0] assert len(p.tokens) == 1 assert p.tokens[0].ttype == T.Keyword @pytest.fixture def limit_recursion(): curr_limit = sys.getrecursionlimit() sys.setrecursionlimit(100) yield sys.setrecursionlimit(curr_limit) def test_max_recursion(limit_recursion): with pytest.raises(SQLParseError): sqlparse.parse('[' * 1000 + ']' * 1000) sqlparse-0.5.4/tests/test_split.py0000644000000000000000000001645213615410400014210 0ustar00# Tests splitting functions. import types from io import StringIO import pytest import sqlparse def test_split_semicolon(): sql1 = 'select * from foo;' sql2 = "select * from foo where bar = 'foo;bar';" stmts = sqlparse.parse(''.join([sql1, sql2])) assert len(stmts) == 2 assert str(stmts[0]) == sql1 assert str(stmts[1]) == sql2 def test_split_backslash(): stmts = sqlparse.parse("select '\'; select '\'';") assert len(stmts) == 2 @pytest.mark.parametrize('fn', ['function.sql', 'function_psql.sql', 'function_psql2.sql', 'function_psql3.sql', 'function_psql4.sql']) def test_split_create_function(load_file, fn): sql = load_file(fn) stmts = sqlparse.parse(sql) assert len(stmts) == 1 assert str(stmts[0]) == sql def test_split_dashcomments(load_file): sql = load_file('dashcomment.sql') stmts = sqlparse.parse(sql) assert len(stmts) == 3 assert ''.join(str(q) for q in stmts) == sql @pytest.mark.parametrize('s', ['select foo; -- comment\n', 'select foo; -- comment\r', 'select foo; -- comment\r\n', 'select foo; -- comment']) def test_split_dashcomments_eol(s): stmts = sqlparse.parse(s) assert len(stmts) == 1 def test_split_begintag(load_file): sql = load_file('begintag.sql') stmts = sqlparse.parse(sql) assert len(stmts) == 3 assert ''.join(str(q) for q in stmts) == sql def test_split_begintag_2(load_file): sql = load_file('begintag_2.sql') stmts = sqlparse.parse(sql) assert len(stmts) == 1 assert ''.join(str(q) for q in stmts) == sql def test_split_dropif(): sql = 'DROP TABLE IF EXISTS FOO;\n\nSELECT * FROM BAR;' stmts = sqlparse.parse(sql) assert len(stmts) == 2 assert ''.join(str(q) for q in stmts) == sql def test_split_comment_with_umlaut(): sql = ('select * from foo;\n' '-- Testing an umlaut: 盲\n' 'select * from bar;') stmts = sqlparse.parse(sql) assert len(stmts) == 2 assert ''.join(str(q) for q in stmts) == sql def test_split_comment_end_of_line(): sql = ('select * from foo; -- foo\n' 'select * from bar;') stmts = sqlparse.parse(sql) assert len(stmts) == 2 assert ''.join(str(q) for q in stmts) == sql # make sure the comment belongs to first query assert str(stmts[0]) == 'select * from foo; -- foo\n' def test_split_casewhen(): sql = ("SELECT case when val = 1 then 2 else null end as foo;\n" "comment on table actor is 'The actor table.';") stmts = sqlparse.split(sql) assert len(stmts) == 2 def test_split_casewhen_procedure(load_file): # see issue580 stmts = sqlparse.split(load_file('casewhen_procedure.sql')) assert len(stmts) == 2 def test_split_cursor_declare(): sql = ('DECLARE CURSOR "foo" AS SELECT 1;\n' 'SELECT 2;') stmts = sqlparse.split(sql) assert len(stmts) == 2 def test_split_if_function(): # see issue 33 # don't let IF as a function confuse the splitter sql = ('CREATE TEMPORARY TABLE tmp ' 'SELECT IF(a=1, a, b) AS o FROM one; ' 'SELECT t FROM two') stmts = sqlparse.split(sql) assert len(stmts) == 2 def test_split_stream(): stream = StringIO("SELECT 1; SELECT 2;") stmts = 
sqlparse.parsestream(stream) assert isinstance(stmts, types.GeneratorType) assert len(list(stmts)) == 2 def test_split_encoding_parsestream(): stream = StringIO("SELECT 1; SELECT 2;") stmts = list(sqlparse.parsestream(stream)) assert isinstance(stmts[0].tokens[0].value, str) def test_split_unicode_parsestream(): stream = StringIO('SELECT ö') stmts = list(sqlparse.parsestream(stream)) assert str(stmts[0]) == 'SELECT ö' def test_split_simple(): stmts = sqlparse.split('select * from foo; select * from bar;') assert len(stmts) == 2 assert stmts[0] == 'select * from foo;' assert stmts[1] == 'select * from bar;' def test_split_ignores_empty_newlines(): stmts = sqlparse.split('select foo;\nselect bar;\n') assert len(stmts) == 2 assert stmts[0] == 'select foo;' assert stmts[1] == 'select bar;' def test_split_quotes_with_new_line(): stmts = sqlparse.split('select "foo\nbar"') assert len(stmts) == 1 assert stmts[0] == 'select "foo\nbar"' stmts = sqlparse.split("select 'foo\n\bar'") assert len(stmts) == 1 assert stmts[0] == "select 'foo\n\bar'" def test_split_mysql_handler_for(load_file): # see issue581 stmts = sqlparse.split(load_file('mysql_handler.sql')) assert len(stmts) == 2 @pytest.mark.parametrize('sql, expected', [ ('select * from foo;', ['select * from foo']), ('select * from foo', ['select * from foo']), ('select * from foo; select * from bar;', [ 'select * from foo', 'select * from bar', ]), (' select * from foo;\n\nselect * from bar;\n\n\n\n', [ 'select * from foo', 'select * from bar', ]), ('select * from foo\n\n; bar', ['select * from foo', 'bar']), ]) def test_split_strip_semicolon(sql, expected): stmts = sqlparse.split(sql, strip_semicolon=True) assert len(stmts) == len(expected) for idx, expectation in enumerate(expected): assert stmts[idx] == expectation def test_split_strip_semicolon_procedure(load_file): stmts = sqlparse.split(load_file('mysql_handler.sql'), strip_semicolon=True) assert len(stmts) == 2 assert stmts[0].endswith('end') assert stmts[1].endswith('end') @pytest.mark.parametrize('sql, num', [ ('USE foo;\nGO\nSELECT 1;\nGO', 4), ('SELECT * FROM foo;\nGO', 2), ('USE foo;\nGO 2\nSELECT 1;', 3) ]) def test_split_go(sql, num): # issue762 stmts = sqlparse.split(sql) assert len(stmts) == num def test_split_multiple_case_in_begin(load_file): # issue784 stmts = sqlparse.split(load_file('multiple_case_in_begin.sql')) assert len(stmts) == 1 def test_split_if_exists_in_begin_end(): # issue812 # IF EXISTS should not be confused with control flow IF sql = """CREATE TASK t1 AS BEGIN CREATE OR REPLACE TABLE temp1; DROP TABLE IF EXISTS temp1; END; EXECUTE TASK t1;""" stmts = sqlparse.split(sql) assert len(stmts) == 2 assert 'CREATE TASK' in stmts[0] assert 'EXECUTE TASK' in stmts[1] def test_split_begin_end_semicolons(): # issue809 # Semicolons inside BEGIN...END blocks should not split statements sql = """WITH FUNCTION meaning_of_life() RETURNS tinyint BEGIN DECLARE a tinyint DEFAULT CAST(6 as tinyint); DECLARE b tinyint DEFAULT CAST(7 as tinyint); RETURN a * b; END SELECT meaning_of_life();""" stmts = sqlparse.split(sql) assert len(stmts) == 1 assert 'WITH' in stmts[0] assert 'SELECT meaning_of_life()' in stmts[0] def test_split_begin_end_procedure(): # issue809 # Test with CREATE PROCEDURE (BigQuery style) sql = """CREATE OR REPLACE PROCEDURE mydataset.create_customer() BEGIN DECLARE id STRING; SET id = GENERATE_UUID(); INSERT INTO mydataset.customers (customer_id) VALUES(id); SELECT FORMAT("Created customer %s", id); END;""" stmts = sqlparse.split(sql) assert len(stmts) == 1 assert
'CREATE OR REPLACE PROCEDURE' in stmts[0] sqlparse-0.5.4/tests/test_tokenize.py0000644000000000000000000001432413615410400014701 0ustar00import types from io import StringIO import pytest import sqlparse from sqlparse import lexer from sqlparse import sql, tokens as T def test_tokenize_simple(): s = 'select * from foo;' stream = lexer.tokenize(s) assert isinstance(stream, types.GeneratorType) tokens = list(stream) assert len(tokens) == 8 assert len(tokens[0]) == 2 assert tokens[0] == (T.Keyword.DML, 'select') assert tokens[-1] == (T.Punctuation, ';') def test_tokenize_backticks(): s = '`foo`.`bar`' tokens = list(lexer.tokenize(s)) assert len(tokens) == 3 assert tokens[0] == (T.Name, '`foo`') @pytest.mark.parametrize('s', ['foo\nbar\n', 'foo\rbar\r', 'foo\r\nbar\r\n', 'foo\r\nbar\n']) def test_tokenize_linebreaks(s): # issue1 tokens = lexer.tokenize(s) assert ''.join(str(x[1]) for x in tokens) == s def test_tokenize_inline_keywords(): # issue 7 s = "create created_foo" tokens = list(lexer.tokenize(s)) assert len(tokens) == 3 assert tokens[0][0] == T.Keyword.DDL assert tokens[2][0] == T.Name assert tokens[2][1] == 'created_foo' s = "enddate" tokens = list(lexer.tokenize(s)) assert len(tokens) == 1 assert tokens[0][0] == T.Name s = "join_col" tokens = list(lexer.tokenize(s)) assert len(tokens) == 1 assert tokens[0][0] == T.Name s = "left join_col" tokens = list(lexer.tokenize(s)) assert len(tokens) == 3 assert tokens[2][0] == T.Name assert tokens[2][1] == 'join_col' def test_tokenize_negative_numbers(): s = "values(-1)" tokens = list(lexer.tokenize(s)) assert len(tokens) == 4 assert tokens[2][0] == T.Number.Integer assert tokens[2][1] == '-1' def test_token_str(): token = sql.Token(None, 'FoO') assert str(token) == 'FoO' def test_token_repr(): token = sql.Token(T.Keyword, 'foo') tst = " 0 AND typ.oid = att.atttypid AND att.attisdropped = false AND rel.relname = p_tabelle AND con.conkey[1] = 1 AND ns.oid = rel.relnamespace AND ns.nspname = 'public' ORDER BY att.attnum; v_sql := 'SELECT ' || v_key || ' AS id FROM ' || p_tabelle || ' WHERE ' || p_key || ' = ' || p_value; FOR v_data IN EXECUTE v_sql LOOP --RAISE NOTICE ' -> % %', p_tabelle, v_data.id; FOR v_constraint IN SELECT t.constraint_name , t.constraint_type , t.table_name , c.column_name FROM public.v_table_constraints t , public.v_constraint_columns c WHERE t.constraint_name = c.constraint_name AND t.constraint_type = 'FOREIGN KEY' AND c.table_name = p_tabelle AND t.table_schema = 'public' AND c.table_schema = 'public' LOOP v_fieldname := substring(v_constraint.constraint_name from 1 for length(v_constraint.constraint_name) - length(v_constraint.column_name) - 1); IF (v_constraint.table_name = p_tabelle) AND (p_value = v_data.id) THEN --RAISE NOTICE 'Skip (Selbstverweis)'; CONTINUE; ELSE PERFORM delete_data(v_constraint.table_name::varchar, v_fieldname::varchar, v_data.id::integer); END IF; END LOOP; END LOOP; v_sql := 'DELETE FROM ' || p_tabelle || ' WHERE ' || p_key || ' = ' || p_value; --RAISE NOTICE '%', v_sql; EXECUTE v_sql; p_retval := 1; ELSE --RAISE NOTICE ' -> Keine Sätze gefunden'; p_retval := 0; END IF; RETURN p_retval; END; $$ LANGUAGE plpgsql;sqlparse-0.5.4/tests/files/function_psql2.sql0000644000000000000000000000026113615410400016224 0ustar00CREATE OR REPLACE FUNCTION update_something() RETURNS void AS $body$ BEGIN raise notice 'foo'; END; $body$ LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;sqlparse-0.5.4/tests/files/function_psql3.sql0000644000000000000000000000025313615410400016226 0ustar00CREATE OR
REPLACE FUNCTION foo() RETURNS integer AS $body$ DECLARE BEGIN select * from foo; END; $body$ LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;sqlparse-0.5.4/tests/files/function_psql4.sql0000644000000000000000000000033513615410400016230 0ustar00CREATE FUNCTION doubledollarinbody(var1 text) RETURNS text /* see issue277 */ LANGUAGE plpgsql AS $_$ DECLARE str text; BEGIN str = $$'foo'$$||var1; execute 'select '||str into str; return str; END $_$; sqlparse-0.5.4/tests/files/huge_select.sql0000644000000000000000000002233213615410400015550 0ustar00select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as 
col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 
else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foo UNION select case when i = 0 then 1 else 0 end as col0, case when i = 1 then 1 else 1 end as col1, case when i = 2 then 1 else 2 end as col2, case when i = 3 then 1 else 3 end as col3, case when i = 4 then 1 else 4 end as col4, case when i = 5 then 1 else 5 end as col5, case when i = 6 then 1 else 6 end as col6, case when i = 7 then 1 else 7 end as col7, case when i = 8 then 1 else 8 end as col8, case when i = 9 then 1 else 9 end as col9, case when i = 10 then 1 else 10 end as col10, case when i = 11 then 1 else 11 end as col11, case when i = 12 then 1 else 12 end as col12, case when i = 13 then 1 else 13 end as col13, case when i = 14 then 1 else 14 end as col14, case when i = 15 then 1 else 15 end as col15, case when i = 16 then 1 else 16 end as col16, case when i = 17 then 1 else 17 end as col17, case when i = 18 then 1 else 18 end as col18, case when i = 19 then 1 else 19 end as col19, case when i = 20 then 1 else 20 end as col20, case when i = 21 then 1 else 21 end as col21, case when i = 22 then 1 else 22 end as col22 from foosqlparse-0.5.4/tests/files/multiple_case_in_begin.sql0000644000000000000000000000030113615410400017731 0ustar00CREATE TRIGGER mytrig AFTER UPDATE OF vvv ON mytable BEGIN UPDATE aa SET mycola = (CASE WHEN (A=1) THEN 2 END); UPDATE bb SET mycolb = (CASE WHEN (B=1) THEN 5 END); END;sqlparse-0.5.4/tests/files/mysql_handler.sql0000644000000000000000000000020613615410400016117 0ustar00create procedure proc1() begin declare handler for foo begin end; select 1; end; create procedure proc2() begin select 1; end; sqlparse-0.5.4/tests/files/stream.sql0000644000000000000000000000005413615410400014551 0ustar00-- this file is streamed in insert into foo 
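The fixture files above (stream.sql in particular) are consumed by the stream and splitter tests earlier in this archive. A minimal sketch, not part of the package, of how such a fixture is read, assuming this archive's tests/files layout; parsestream() is the documented generator variant of parse() for file-like objects:

import sqlparse

# stream.sql holds a single INSERT statement; parsestream() lazily
# yields one sqlparse.sql.Statement per statement in the stream.
with open('tests/files/stream.sql') as f:
    for stmt in sqlparse.parsestream(f):
        print(stmt.get_type())  # prints 'INSERT' once for this fixture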
sqlparse-0.5.4/tests/files/test_cp1251.sql0000644000000000000000000000006113615410400015226 0ustar00insert into foo values (1); -- Песня про надежду sqlparse-0.5.4/.gitignore0000644000000000000000000000021513615410400012260 0ustar00*.py[co] docs/build dist/ build/ MANIFEST .coverage .cache/ *.egg-info/ htmlcov/ .pytest_cache# pixi environments .pixi/* !.pixi/config.toml sqlparse-0.5.4/AUTHORS0000644000000000000000000000660613615410400011352 0ustar00python-sqlparse is written and maintained by Andi Albrecht <albrecht.andi@gmail.com>. This module contains code (namely the lexer and filter mechanism) from the pygments project that was written by Georg Brandl. This module contains code (Python 2/3 compatibility) from the six project: https://bitbucket.org/gutworth/six. Alphabetical list of contributors: * Adam Greenhall * Adam Johnson * Aki Ariga * Alexander Beedie * Alexey Malyshev * ali-tny * andrew deryabin * Andrew Tipton * atronah * casey * Cauê Beloni * Christian Clauss * circld * Corey Zumar * Cristian Orellana * Dag Wieers * Daniel Harding * Darik Gamble * Demetrio92 * Dennis Taylor * Dvořák Václav * Erik Cederstrand * Florian Bauer * Fredy Wijaya * Gavin Wahl * Georg Traar * griff <70294474+griffatrasgo@users.noreply.github.com> * Hugo van Kemenade * hurcy * Ian Robertson * Igor Khrol * JacekPliszka * JavierPan * Jean-Martin Archer * Jesús Leganés Combarro "Piranna" * Johannes Hoff * John Bodley * Jon Dufresne * Josh Soref * Kevin Jing Qiu * koljonen * Likai Liu * Long Le Xich * mathilde.oustlant * Michael Schuller * Mike Amy * mulos * Oleg Broytman * osmnv <80402144+osmnv@users.noreply.github.com> * Patrick Schemitz * Pi Delport * Prudhvi Vatala * quest * Robert Nix * Rocky Meza * Romain Rigaux * Rowan Seymour * Ryan Wooden * saaj * Sergei Stropysh * Shen Longxing * Simon Heisterkamp * Sjoerd Job Postmus * skryzh * Soloman Weng * spigwitmer * Stefan Warnat * Tao Wang * Tenghuan * Tim Graham * Victor Hahn * Victor Uriarte * Ville Skyttä * vthriller * wayne.wuw * Will Jones * William Ivanski * Yago Riveiro * Zi-Xuan Fu sqlparse-0.5.4/LICENSE0000644000000000000000000000300113615410400011273 0ustar00Copyright (c) 2016, Andi Albrecht All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the authors nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. sqlparse-0.5.4/README.rst0000644000000000000000000000606113615410400011764 0ustar00python-sqlparse - Parse SQL statements ====================================== |buildstatus|_ |coverage|_ |docs|_ |packageversion|_ .. docincludebegin sqlparse is a non-validating SQL parser for Python. It provides support for parsing, splitting and formatting SQL statements. The module is compatible with Python 3.8+ and released under the terms of the `New BSD license `_. Visit the project page at https://github.com/andialbrecht/sqlparse for further information about this project. Quick Start ----------- .. code-block:: sh $ pip install sqlparse .. code-block:: python >>> import sqlparse >>> # Split a string containing two SQL statements: >>> raw = 'select * from foo; select * from bar;' >>> statements = sqlparse.split(raw) >>> statements ['select * from foo;', 'select * from bar;'] >>> # Format the first statement and print it out: >>> first = statements[0] >>> print(sqlparse.format(first, reindent=True, keyword_case='upper')) SELECT * FROM foo; >>> # Parsing a SQL statement: >>> parsed = sqlparse.parse('select * from foo')[0] >>> parsed.tokens [<DML 'select' at 0x...>, <Whitespace ' ' at 0x...>, ...] >>> Pre-commit Hook --------------- sqlparse can be used as a `pre-commit `_ hook to automatically format SQL files before committing: .. code-block:: yaml repos: - repo: https://github.com/andialbrecht/sqlparse rev: 0.5.4 # Use the latest version hooks: - id: sqlformat # Optional: Add more formatting options # IMPORTANT: --in-place is required, already included by default args: [--in-place, --reindent, --keywords, upper] Then install the hook: .. code-block:: sh $ pre-commit install Your SQL files will now be automatically formatted on each commit. **Note**: The hook uses ``--in-place --reindent`` by default. If you override the ``args``, you **must** include ``--in-place`` for the hook to work. Links ----- Project page https://github.com/andialbrecht/sqlparse Bug tracker https://github.com/andialbrecht/sqlparse/issues Documentation https://sqlparse.readthedocs.io/ Online Demo https://sqlformat.org/ sqlparse is licensed under the BSD license. Parts of the code are based on pygments written by Georg Brandl and others. pygments-Homepage: http://pygments.org/ .. |buildstatus| image:: https://github.com/andialbrecht/sqlparse/actions/workflows/python-app.yml/badge.svg .. _buildstatus: https://github.com/andialbrecht/sqlparse/actions/workflows/python-app.yml .. |coverage| image:: https://codecov.io/gh/andialbrecht/sqlparse/branch/master/graph/badge.svg .. _coverage: https://codecov.io/gh/andialbrecht/sqlparse .. |docs| image:: https://readthedocs.org/projects/sqlparse/badge/?version=latest .. _docs: https://sqlparse.readthedocs.io/en/latest/?badge=latest .. |packageversion| image:: https://img.shields.io/pypi/v/sqlparse?color=%2334D058&label=pypi%20package ..
_packageversion: https://pypi.org/project/sqlparse sqlparse-0.5.4/pyproject.toml0000644000000000000000000000756513615410400013223 0ustar00[build-system] requires = ["hatchling"] build-backend = "hatchling.build" [project] name = "sqlparse" description = "A non-validating SQL parser." authors = [{name = "Andi Albrecht", email = "albrecht.andi@gmail.com"}] readme = "README.rst" dynamic = ["version"] classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Topic :: Database", "Topic :: Software Development", ] requires-python = ">=3.8" [project.urls] Home = "https://github.com/andialbrecht/sqlparse" Documentation = "https://sqlparse.readthedocs.io/" "Release Notes" = "https://sqlparse.readthedocs.io/en/latest/changes.html" Source = "https://github.com/andialbrecht/sqlparse" Tracker = "https://github.com/andialbrecht/sqlparse/issues" [project.scripts] sqlformat = "sqlparse.__main__:main" [project.optional-dependencies] dev = [ "build", ] doc = [ "sphinx", ] [tool.hatch.version] path = "sqlparse/__init__.py" [tool.coverage.run] source_pkgs = ["sqlparse", "tests"] branch = true parallel = true omit = [ "sqlparse/__main__.py", ] [tool.coverage.paths] sqlparse = ["sqlparse"] tests = ["tests"] [tool.coverage.report] exclude_lines = [ "no cov", "if __name__ == .__main__.:", "if TYPE_CHECKING:", ] [tool.pixi.workspace] channels = ["conda-forge"] platforms = ["linux-64", "osx-64", "osx-arm64", "win-64"] [tool.pixi.pypi-dependencies] sqlparse = { path = ".", editable = true } [tool.pixi.feature.test.dependencies] pytest = "*" coverage = "*" flake8 = "*" [tool.pixi.feature.test.pypi-dependencies] sqlparse = { path = ".", editable = true } [tool.pixi.feature.py38.dependencies] python = "3.8.*" [tool.pixi.feature.py39.dependencies] python = "3.9.*" [tool.pixi.feature.py310.dependencies] python = "3.10.*" [tool.pixi.feature.py311.dependencies] python = "3.11.*" [tool.pixi.feature.py312.dependencies] python = "3.12.*" [tool.pixi.feature.py313.dependencies] python = "3.13.*" [tool.pixi.feature.py314.dependencies] python = "3.14.*" [tool.pixi.environments] default = { solve-group = "default" } dev = { features = ["dev"], solve-group = "default" } doc = { features = ["doc"], solve-group = "default" } py38 = { features = ["test", "py38"], solve-group = "py38" } py39 = { features = ["test", "py39"], solve-group = "py39" } py310 = { features = ["test", "py310"], solve-group = "py310" } py311 = { features = ["test", "py311"], solve-group = "py311" } py312 = { features = ["test", "py312"], solve-group = "py312" } py313 = { features = ["test", "py313"], solve-group = "py313" } py314 = { features = ["test", "py314"], solve-group = "py314" } [tool.pixi.tasks] test-py38 = "pixi run -e py38 pytest tests/" test-py39 = "pixi run -e py39 pytest tests/" test-py310 = "pixi run -e py310 pytest tests/" test-py311 = "pixi run -e py311 pytest tests/" test-py312 = "pixi run -e py312 
pytest tests/" test-py313 = "pixi run -e py313 pytest tests/" test-py314 = "pixi run -e py314 pytest tests/" test-all = { depends-on = ["test-py38", "test-py39", "test-py310", "test-py311", "test-py312", "test-py313", "test-py314"] } lint = "pixi run -e py311 flake8 sqlparse/" coverage = "coverage run -m pytest tests/" coverage-combine = "coverage combine" coverage-report = "coverage report" coverage-xml = "coverage xml" sqlparse-0.5.4/PKG-INFO0000644000000000000000000001117613615410400011375 0ustar00Metadata-Version: 2.4 Name: sqlparse Version: 0.5.4 Summary: A non-validating SQL parser. Project-URL: Home, https://github.com/andialbrecht/sqlparse Project-URL: Documentation, https://sqlparse.readthedocs.io/ Project-URL: Release Notes, https://sqlparse.readthedocs.io/en/latest/changes.html Project-URL: Source, https://github.com/andialbrecht/sqlparse Project-URL: Tracker, https://github.com/andialbrecht/sqlparse/issues Author-email: Andi Albrecht License-File: AUTHORS License-File: LICENSE Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: 3.14 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: Topic :: Database Classifier: Topic :: Software Development Requires-Python: >=3.8 Provides-Extra: dev Requires-Dist: build; extra == 'dev' Provides-Extra: doc Requires-Dist: sphinx; extra == 'doc' Description-Content-Type: text/x-rst python-sqlparse - Parse SQL statements ====================================== |buildstatus|_ |coverage|_ |docs|_ |packageversion|_ .. docincludebegin sqlparse is a non-validating SQL parser for Python. It provides support for parsing, splitting and formatting SQL statements. The module is compatible with Python 3.8+ and released under the terms of the `New BSD license `_. Visit the project page at https://github.com/andialbrecht/sqlparse for further information about this project. Quick Start ----------- .. code-block:: sh $ pip install sqlparse .. code-block:: python >>> import sqlparse >>> # Split a string containing two SQL statements: >>> raw = 'select * from foo; select * from bar;' >>> statements = sqlparse.split(raw) >>> statements ['select * from foo;', 'select * from bar;'] >>> # Format the first statement and print it out: >>> first = statements[0] >>> print(sqlparse.format(first, reindent=True, keyword_case='upper')) SELECT * FROM foo; >>> # Parsing a SQL statement: >>> parsed = sqlparse.parse('select * from foo')[0] >>> parsed.tokens [, , >> Pre-commit Hook --------------- sqlparse can be used as a `pre-commit `_ hook to automatically format SQL files before committing: .. 
code-block:: yaml repos: - repo: https://github.com/andialbrecht/sqlparse rev: 0.5.4 # Use the latest version hooks: - id: sqlformat # Optional: Add more formatting options # IMPORTANT: --in-place is required, already included by default args: [--in-place, --reindent, --keywords, upper] Then install the hook: .. code-block:: sh $ pre-commit install Your SQL files will now be automatically formatted on each commit. **Note**: The hook uses ``--in-place --reindent`` by default. If you override the ``args``, you **must** include ``--in-place`` for the hook to work. Links ----- Project page https://github.com/andialbrecht/sqlparse Bug tracker https://github.com/andialbrecht/sqlparse/issues Documentation https://sqlparse.readthedocs.io/ Online Demo https://sqlformat.org/ sqlparse is licensed under the BSD license. Parts of the code are based on pygments written by Georg Brandl and others. pygments-Homepage: http://pygments.org/ .. |buildstatus| image:: https://github.com/andialbrecht/sqlparse/actions/workflows/python-app.yml/badge.svg .. _buildstatus: https://github.com/andialbrecht/sqlparse/actions/workflows/python-app.yml .. |coverage| image:: https://codecov.io/gh/andialbrecht/sqlparse/branch/master/graph/badge.svg .. _coverage: https://codecov.io/gh/andialbrecht/sqlparse .. |docs| image:: https://readthedocs.org/projects/sqlparse/badge/?version=latest .. _docs: https://sqlparse.readthedocs.io/en/latest/?badge=latest .. |packageversion| image:: https://img.shields.io/pypi/v/sqlparse?color=%2334D058&label=pypi%20package .. _packageversion: https://pypi.org/project/sqlparse
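The pre-commit hook described above rewrites each staged SQL file in place. A hedged sketch, not part of the package, of the equivalent programmatic call using only the documented format() options; 'query.sql' is a hypothetical example path:

import sqlparse

path = 'query.sql'  # hypothetical input file
with open(path) as f:
    # reindent/keyword_case mirror the hook's --reindent and --keywords upper flags
    formatted = sqlparse.format(f.read(), reindent=True, keyword_case='upper')
with open(path, 'w') as f:
    f.write(formatted)  # write the formatted SQL back in place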