--- isort-5.13.2/CHANGELOG.md ---

Changelog
=========

NOTE: isort follows the [semver](https://semver.org/) versioning standard. Find out more about isort's release policy [here](https://pycqa.github.io/isort/docs/major_releases/release_policy).

### 5.13.2 December 13 2023

- Apply the bracket fix from issue #471 only for use_parentheses=True (#2184) @bp72
- Confine pre-commit to stages (#2213) @davidculley
- Fixed colors extras (#2212) @staticdev

### 5.13.1 December 11 2023

- Fixed integration tests (#2208) @bp72
- Fixed normalizing imports from more than one level of parent modules (issue/2152) (#2191) @bp72
- Remove optional dependencies without extras (#2207) @staticdev

### 5.13.0 December 9 2023

- Cleanup deprecated extras (#2089) @staticdev
- Fixed #1989: settings lookup when working in stream based mode
- Fixed 80 line length for wemake linter (#2183) @skatromb
- Add support for Python 3.12 (#2175) @hugovk
- Fixed: add newest version to pre-commit docs (#2190) @AzulGarza
- Fixed assertions in test_git_hook (#2196) @mgorny
- Removed check for include_trailing_comma for the Hanging Indent wrap mode (#2192) @bp72
- Use the standard library tomllib on sufficiently new python (#2202) @eli-schwartz
- Update pre-commit.md version number (#2197) @nicobako
- doc: Update black_compatibility.md (#2177) @JSS95
- Fixed safety sept 2023 (#2178) @staticdev
- docs: fix black profile documentation (#2163) @nijel
- Fixed typo: indended -> indented (#2161) @vadimkerr
- Docs(configuration/options.md): fix missing trailing spaces for hard linebreak (#2157) @JoeyTeng
- Update pre-commit.md (#2148) @godiard
- chore: move configurations to pyproject.toml (#2115) @SauravMaheshkar
- Fixed typo in README (#2112) @stefmolin
- Update version in pre-commit setup to avoid installation issue with poetry (#2103) @stefmolin
- Skip .pytype directory by default.
  (#2098) @manueljacob
- Fixed a tip block styling in the Config Files section (#2097) @Klavionik
- Do not cache configuration files (#1995) @kaste
- Derive settings_path from --filename (#1992) @kaste
- Fixed year of version 5.12.0 in CHANGELOG.md (#2082) @DjLegolas

### 5.12.0 January 28 2023

- Removed support for Python 3.7
- Fixed incompatibility with latest poetry version
- Added support for directory limitations within built in git hook

### 5.11.5 January 30 2023 [hotfix]

- Fixed incompatibility with latest poetry version

### 5.11.4 December 21 2022

- Fixed #2038 (again): stop installing documentation files to top-level site-packages (#2057) @mgorny
- CI: only run release workflows for upstream (#2052) @hugovk
- Tests: remove obsolete toml import from the test suite (#1978) @mgorny
- CI: bump Poetry 1.3.1 (#2058) @staticdev

### 5.11.3 December 16 2022

- Fixed #2007: settings for py3.11 (#2040) @staticdev
- Fixed #2038: packaging pypoetry (#2042) @staticdev
- Docs: re-enable portray (#2043) @timothycrosley
- Ci: add minimum GitHub token permissions for workflows (#1969) @varunsh-coder
- Ci: general CI improvements (#2041) @staticdev
- Ci: add release workflow (#2026) @staticdev

### 5.11.2 December 12 2022

- Hotfix #2034: isort --version is not accurate on 5.11.x releases (#2034) @gschaffner

### 5.11.1 December 12 2022

- Hotfix #2031: only call `colorama.init` if `colorama` is available (#2032) @tomaarsen

### 5.11.0 December 12 2022

- Added official support for Python 3.11 (#1996, #2008, #2011) @staticdev
- Dropped support for Python 3.6 (#2019) @barrelful
- Fixed problematic tests (#2021, #2022) @staticdev
- Fixed #1960: Rich compatibility (#1961) @ofek
- Fixed #1945, #1986: Python 4.0 upper bound dependency resolving issues @staticdev
- Fixed Pyodide CDN URL (#1991) @andersk
- Docs: clarify description of use_parentheses (#1941) @mgedmin
- Fixed #1976: `black` compatibility for `.pyi` files @XuehaiPan
- Implemented #1683: magic trailing comma option (#1876) @legau
- Add missing space in unrecoverable exception message (#1933) @andersk
- Fixed #1895: skip-gitignore: use allow list, not deny list @bmalehorn
- Fixed #1917: infinite loop for unmatched parenthesis (#1919) @anirudnits
- Docs: shared profiles (#1896) @matthewhughes934
- Fixed build-backend values in the example plugins (#1892) @mgorny
- Remove reference to jamescurtin/isort-action (#1885) @AndrewLane
- Split long cython import lines (#1931) @davidcollins001
- Update plone profile: copy of `black`, plus three settings. (#1926) @mauritsvanrees
- Fixed #1815, #1862: Add a command-line flag to sort all re-exports (#1863) @parafoxia
- Fixed #1854: `lines_before_imports` appending lines after comments (#1861) @legau
- Remove redundant `multi_line_output = 3` from "Compatibility with black" (#1858) @jdufresne
- Add tox config example (#1856) @umonaca
- Docs: add examples for frozenset and tuple settings (#1822) @sgaist
- Docs: add multiple config documentation (#1850) @anirudnits

### 5.10.1 November 8 2021

- Fixed #1819: Occasional inconsistency with multiple src paths.
- Fixed #1840: skip_file ignored when on the first docstring line

### 5.10.0 November 3 2021

- Implemented #1796: Switch to `tomli` for pyproject.toml configuration loader.
- Fixed #1801: CLI bug (--extend-skip-glob, overrides instead of extending).
- Fixed #1802: respect PATH customization in nested calls to git.
- Fixed #1838: Append only with certain code snippets incorrectly adds imports.
- Added official support for Python 3.10

#### Potentially breaking changes:

- Fixed #1785: `_ast` module incorrectly excluded from stdlib definition.

### 5.9.3 July 28 2021

- Improved text of skipped file message to mention gitignore feature.
- Made all exceptions pickleable.
- Fixed #1779: Pylama integration ignores pylama specific isort config overrides.
- Fixed #1781: `--from-first` CLI flag shouldn't take any arguments.
- Fixed #1792: Sorting literals sometimes ignored when placed on first few lines of file.
- Fixed #1777: extend_skip is not honored with a git submodule when skip_gitignore=true.

### 5.9.2 July 8th 2021

- Improved behavior of `isort --check --atomic` against Cython files.
- Fixed #1769: Future imports added below assignments when no other imports present.
- Fixed #1772: skip-gitignore will check files not in the git repository.
- Fixed #1762: in some cases when skip-gitignore is set, isort fails to skip any files.
- Fixed #1767: Encoding issues surfacing when invalid characters set in `__init__.py` files during placement.
- Fixed #1771: Improved handling of skips against named streamed-in content.

### 5.9.1 June 21st 2021 [hotfix]

- Fixed #1758: projects with many files and skip_ignore set can lead to a command-line overload.

### 5.9.0 June 21st 2021

- Improved CLI startup time.
- Implemented #1697: Provisional support for PEP 582: skip `__pypackages__` directories by default.
- Implemented #1705: More intuitive handling of isort:skip_file comments on streams.
- Implemented #1737: Support for using action comments to avoid adding imports to individual files.
- Implemented #1750: Ability to customize output format lines.
- Implemented #1732: Support for custom sort functions.
- Implemented #1722: Improved behavior for running isort in atomic mode over Cython source files.
- Fixed (https://github.com/PyCQA/isort/pull/1695): added imports being added to doc string in some cases.
- Fixed (https://github.com/PyCQA/isort/pull/1714): in rare cases line continuation combined with tabs can output invalid code.
- Fixed (https://github.com/PyCQA/isort/pull/1726): isort ignores reverse_sort when force_sort_within_sections is true.
- Fixed #1741: comments in hanging indent modes can lead to invalid code.
- Fixed #1744: repeat noqa comments dropped when * import and non * imports exist from the same package.
- Fixed #1721: repeat noqa comments on separate from lines with force-single-line set, sometimes get dropped.

#### Goal Zero (Tickets related to aspirational goal of achieving 0 regressions for remaining 5.0.0 lifespan):

- Implemented #1394: 100% branch coverage (in addition to line coverage) enforced.
- Implemented #1751: Strict typing enforcement (turned on mypy strict mode).

### 5.8.0 March 20th 2021

- Fixed #1631: as import comments can in some cases be duplicated.
- Fixed #1667: extra newline added with float-to-top, after skip, in some cases.
- Fixed #1594: incorrect placement of noqa comments with multiple from imports.
- Fixed #1566: in some cases different length limits for dos based line endings.
- Implemented #1648: Export MyPY type hints.
- Implemented #1641: Identified import statements now return runnable code.
- Implemented #1661: Added "wemake" profile.
- Implemented #1669: Parallel (`-j`) now defaults to number of CPU cores if no value is provided.
- Implemented #1668: Added a safeguard against accidental usage against /.
- Implemented #1638 / #1644: Provide a flag `--overwrite-in-place` to ensure same file handle is used after sorting.
- Implemented #1684: Added support for extending skips with `--extend-skip` and `--extend-skip-glob`.
- Implemented #1688: Auto identification and skipping of some invalid import statements.
- Implemented #1645: Ability to reverse the import sorting order.
- Implemented #1504: Added ability to push star imports to the top to avoid overriding explicitly defined imports.
- Documented #1685: Skip doesn't support plain directory names, but skip_glob does.

### 5.7.0 December 30th 2020

- Fixed #1612: In rare circumstances an extra comma is added after import and before comment.
- Fixed #1593: isort encounters bug in Python 3.6.0.
- Implemented #1596: Provide ways for extension formatting and file paths to be specified when using streaming input from CLI.
- Implemented #1583: Ability to output and diff within a single API call to `isort.file`.
- Implemented #1562, #1592 & #1593: Better, more useful fatal error messages.
- Implemented #1575: Support for automatically fixing mixed indentation of import sections.
- Implemented #1582: Added a CLI option for skipping symlinks.
- Implemented #1603: Support for disabling float_to_top from the command line.
- Implemented #1604: Allow toggling section comments on and off for indented import sections.

### 5.6.4 October 12, 2020

- Fixed #1556: Empty line added between imports that should be skipped.

### 5.6.3 October 11, 2020

- Improved packaging of test files alongside source distribution (see: https://github.com/PyCQA/isort/pull/1555).

### 5.6.2 October 10, 2020

- Fixed #1548: On rare occasions an unnecessary empty line can be added when an import is marked as skipped.
- Fixed #1542: Bug in VERTICAL_PREFIX_FROM_MODULE_IMPORT wrap mode.
- Fixed #1552: Pylama test dependent on source layout.

#### Goal Zero: (Tickets related to aspirational goal of achieving 0 regressions for remaining 5.0.0 lifespan):

- Zope added to integration test suite
- Additional testing of CLI (simulate unseekable streams)

### 5.6.1 [Hotfix] October 8, 2020

- Fixed #1546: Unstable (non-idempotent) behavior with certain src trees.

### 5.6.0 October 7, 2020

- Implemented #1433: Provide helpful feedback in case a custom config file is specified without a configuration.
- Implemented #1494: Default to sorting imports within `.pxd` files.
- Implemented #1502: Improved float-to-top behavior when there is an existing import section present at top-of-file.
- Implemented #1511: Support for easily seeing all files isort will be run against using `isort . --show-files`.
- Implemented #1487: Improved handling of encoding errors.
- Improved handling of unsupported configuration option errors (see #1475).
- Fixed #1463: Better interactive documentation for future option.
- Fixed #1461: Quiet config option not respected by file API in some circumstances.
- Fixed #1482: pylama integration is not working correctly out-of-the-box.
- Fixed #1492: --check does not work with stdin source.
- Fixed #1499: isort gets confused by single line, multi-line style comments when using float-to-top.
- Fixed #1525: Some warnings can't be disabled with --quiet.
- Fixed #1523: in rare cases isort can ignore direct from import if as import is also on same line.

#### Potentially breaking changes:

- Implemented #1540: Officially support Python 3.9 stdlib imports by default.
- Fixed #1443: Incorrect third vs first party categorization - namespace packages.
- Fixed #1486: "Google" profile is not quite Google style.
- Fixed "PyCharm" profile to always add 2 lines to be consistent with what PyCharm "Optimize Imports" does.
#### Goal Zero: (Tickets related to aspirational goal of achieving 0 regressions for remaining 5.0.0 lifespan):

- Implemented #1472: Full testing of stdin CLI Options
- Added additional branch coverage.
- More projects added to integration test suite.

### 5.5.5 [Hotfix] October 7, 2020

- Fixed #1539: in extremely rare cases isort 5.5.4 introduces syntax error by removing closing paren.

### 5.5.4 [Hotfix] September 29, 2020

- Fixed #1507: in rare cases isort changes the content of multiline strings after a yield statement.
- Fixed #1505: Support case where known_SECTION points to a section not listed in sections.

### 5.5.3 [Hotfix] September 20, 2020

- Fixed #1488: in rare cases isort can mangle `yield from` or `raise from` statements.

### 5.5.2 [Hotfix] September 9, 2020

- Fixed #1469: --diff option is ignored when input is from stdin.

### 5.5.1 September 4, 2020

- Fixed #1454: Ensure indented import sections with import heading and a preceding comment don't cause import sorting loops.
- Fixed #1453: isort error when float to top on almost empty file.
- Fixed #1456 and #1415: noqa comment moved to where flake8 can't see it.
- Fixed #1460: .svn missing from default ignore list.

### 5.5.0 September 3, 2020

- Fixed #1398: isort: off comment doesn't work, if it's the top comment in the file.
- Fixed #1395: reverse_relative setting doesn't have any effect when combined with force_sort_within_sections.
- Fixed #1399: --skip can error in the case of projects that contain recursive symlinks.
- Fixed #1389: ensure_newline_before_comments doesn't work if comment is at top of section and sections don't have lines between them.
- Fixed #1396: comments in imports with ";" can keep isort from recognizing import line.
- Fixed #1380: As imports removed when `combine_star` is set.
- Fixed #1382: --float-to-top has no effect if no import is already at the top.
- Fixed #1420: isort never settles on module docstring + add import.
- Fixed #1421: Error raised when repo contains circular symlinks.
- Fixed #1427: noqa comment is moved from star import to constant import.
- Fixed #1444 & #1445: Incorrect placement of import additions.
- Fixed #1447: isort5 throws error when stdin used on Windows with deprecated args.
- Implemented #1397: Added support for specifying config file when using git hook (thanks @diseraluca!).
- Implemented #1405: Added support for coloring diff output.
- Implemented #1434: New multi-line grid mode without parentheses.

#### Goal Zero (Tickets related to aspirational goal of achieving 0 regressions for remaining 5.0.0 lifespan):

- Implemented #1392: Extensive profile testing.
- Implemented #1393: Property-based testing applied to code snippets.
- Implemented #1391: Create automated integration test that includes full code base of largest OpenSource isort users.

#### Potentially breaking changes:

- Fixed #1429: --check doesn't print to stderr as the documentation says. This means if you were looking for `ERROR:` messages for files that contain incorrect imports within stdout you will now need to look in stderr.

### 5.4.2 Aug 14, 2020

- Fixed #1383: Known other does not work anymore with .editorconfig.
- Fixed: Regression in first known party path expansion.

### 5.4.1 [Hotfix] Aug 13, 2020

- Fixed #1381: --combine-as loses # noqa in different circumstances.

### 5.4.0 Aug 12, 2020

- Implemented #1373: support for length sort only of direct (AKA straight) imports.
- Fixed #1321: --combine-as loses # noqa.
- Fixed #1375: --dont-order-by-type CLI broken.
### 5.3.2 [Hotfix] Aug 7, 2020

- Fixed incorrect warning code (W503->W0503).

### 5.3.1 Aug 7, 2020

- Improve upgrade warnings to be less noisy and point to error codes for easy interoperability with Visual Studio Code (see: #1363).

### 5.3.0 Aug 4, 2020

- Implemented ability to treat all or select comments as code (issue #1357)
- Implemented ability to use different configs for different file extensions (issue #1162)
- Implemented ability to specify the types of imports (issue #1181)
- Implemented ability to dedup import headings (issue #953)
- Added experimental support for sorting literals (issue #1358)
- Added experimental support for sorting and deduping groupings of assignments.
- Improved handling of deprecated single line variables for usage with Visual Studio Code (issue #1363)
- Improved handling of mixed newline forms within same source file.
- Improved error handling for known sections.
- Improved API consistency, returning a boolean value for all modification API calls to indicate if changes were made.
- Fixed #1366: spurious errors when combining skip with --gitignore.
- Fixed #1359: --skip-gitignore does not honor ignored symlink

#### Internal Development:

- Initial hypothesmith powered test to help catch unexpected syntax parsing and output errors (thanks @Zac-HD!)

### 5.2.2 July 30, 2020

- Fixed #1356: return status when arguments are passed in without files or a content stream.

### 5.2.1 July 28, 2020

- Update precommit to default to filtering files that are defined in skip.
- Improved relative path detection for `skip` config usage.
- Added recursive symbolic link protection.
- Implemented #1177: Support for color output using `--color`.
- Implemented recursive symlink detection support.

### 5.2.0 July 27, 2020

- Implemented #1335: Official API for diff capturing.
- Implemented #1331: Warn when sections don't match up.
- Implemented #1261: By popular demand, `filter_files` can now be set in the config option.
- Implemented #960: Support for respecting git ignore via "--gitignore" or "skip_gitignore=True".
- Implemented #727: Ability to only add imports if existing imports exist.
- Implemented #970: Support for custom sharable isort profiles.
- Implemented #1214: Added support for git_hook lazy option (Thanks @sztamas!)
- Implemented #941: Added an additional `multi_line_output` mode for more compact formatting (Thanks @sztamas!)
- Implemented #1020: Option for LOCALFOLDER.
- Implemented #1353: Added support for output formatting plugins.
- `# isort: split` can now be used at the end of an import line.
- Fixed #1339: Extra indent is not preserved when isort:skip is used in nested imports.
- Fixed #1348: `--diff` works incorrectly with files that have CRLF line endings.
- Improved code repositories usage of pylint tags (#1350).

### 5.1.4 July 19, 2020

- Fixed issue #1333: Use of wrap_length raises an exception about it not being lower than or equal to line_length.
- Fixed issue #1330: Ensure stdout can be stubbed dynamically for `show_unified_diff` function.

### 5.1.3 July 18, 2020

- Fixed issue #1329: Fix comments duplicated when --fass option is set.

### 5.1.2 July 17, 2020

- Fixed issue #1219 / #1326: Comments not wrapped for long lines
- Fixed issue #1156: Bug related to isort:skip usage followed by a multiline comment block

### 5.1.1 July 15, 2020

- Fixed issue #1322: Occasionally two extra newlines before comment with `-n` & `--fss`.
- Fixed issue #1189: `--diff` broken when reading from standard input.
### 5.1.0 July 14, 2020

- isort now throws an exception if an invalid settings path is given (issue #1174).
- Implemented support for automatic redundant alias removal (issue #1281).
- Implemented experimental support for floating all imports to the top of a file (issue #1228)
- Fixed #1178: support for semicolons in decorators.
- Fixed #1315: Extra newline before comment with -n + --fss.
- Fixed #1192: `-k` or `--keep-direct-and-as-imports` option has been deprecated as it is now always on.

#### Formatting changes implied:

- Fixed #1280: rewrite of as imports changes the behavior of the imports.

### 5.0.9 July 11, 2020

- Fixed #1301: Import headings in nested sections leads to check errors

### 5.0.8 July 11, 2020

- Fixed #1277 & #1278: New line detection issues on Windows.
- Fixed #1294: Fix bundled git hook.

### 5.0.7 July 9, 2020

- Fixed #1306: unexpected --diff behavior.
- Fixed #1279: Fixed NOQA comment regression.

### 5.0.6 July 8, 2020

- Fixed #1302: comments and --trailing-comma can generate invalid code.
- Fixed #1293: extra new line in indented imports, when immediately followed by a comment.
- Fixed #1304: isort 5 no longer recognises `sre_parse` as a stdlib module.
- Fixed #1300: add_imports moves comments following import section.
- Fixed #1276: Fix a bug that creates only one line after triple quotes.

### 5.0.5 July 7, 2020

- Fixed #1285: packaging issue with bundling tests via poetry.
- Fixed #1284: Regression when sorting `.pyi` files from CLI using black profile.
- Fixed #1275 & #1283: Blank line after docstring removed.
- Fixed #1298: CLI Help out of date with isort 5.
- Fixed #1290: Unnecessary blank lines above nested imports when import comments turned on.
- Fixed #1297: Usage of `--add-imports` alongside `--check` is broken.
- Fixed #1289: Stream usage no longer auto picking up config file from current working directory.
- Fixed #1296: Force_single_line setting removes immediately following comment line.
- Fixed #1295: `ensure_newline_before_comments` doesn't work with `force_sort_within_sections`.
- Setting not_skip will no longer immediately fail but instead give user a warning and direct to upgrade docs.

### 5.0.4 July 6, 2020

- Fixed #1264: a regression with comment handling and `force_sort_within_sections` config option
- Added warning for deprecated CLI flags and linked to upgrade guide.

### 5.0.3 - July 4, 2020

- Fixed setup.py command incorrectly passing check=True as a configuration parameter (see: https://github.com/pycqa/isort/issues/1258)
- Fixed missing patch version
- Fixed issue #1253: Atomic fails when passed a non-readable output stream

### 5.0.2 - July 4, 2020

- Ensured black profile was complete, adding missing line_length definition.

### 5.0.1 - July 4, 2020

- Fixed a runtime error in a vendored dependency (toml).

### 5.0.0 Penny - July 4, 2020

**Breaking changes:**

- isort now requires Python 3.6+ to run but continues to support formatting on ALL versions of python including Python 2 code.
- isort deprecates official support for Python 3.4, removing modules only in this release from known_standard_library:
    - user
- Config files are no longer composed on top of each other. Instead, the first config file found is used.
- Since there is no longer composition, negative form settings (such as `--dont-skip` or its config file variant `not_skip`) are no longer required and have been removed.
- Two-letter shortened setting names (like `ac` for `atomic`) now require two dashes to avoid ambiguity: `--ac`.
- For consistency with other tools `-v` now is shorthand for verbose and `-V` is shorthand for version. See Issue: #1067.
- `length_sort_{section_name}` config usage has been deprecated. Instead `length_sort_sections` list can be used to specify a list of sections that need to be length sorted.
- `safety_excludes` and `unsafe` have been deprecated
    - Config now includes as default full set of safety directories defined by safety excludes.
- `--recursive` option has been removed. Directories passed in are now automatically sorted recursively.
- `--apply` option has been removed as it is the default behaviour.
- isort now does nothing, beyond giving instructions and exiting status code 0, when run with no arguments.
    - a new `--interactive` flag has been added to enable the old style behaviour.
- isort now works on contiguous sections of imports, instead of one whole file at a time.
- ~~isort now formats all nested "as" imports in the "from" form. `import x.y as a` becomes `from x import y as a`.~~ NOTE: This was undone in version 5.1.0 due to feedback it caused issues with some project conventions.
- `keep_direct_and_as_imports` option now defaults to `True`.
- `appdirs` is no longer supported. Unless manually specified, config should be project config only.
- `toml` is now installed as a vendorized module, meaning pyproject.toml based config is always supported.
- Completely new Python API, old version is removed and no longer accessible.
- New module placement logic and module fully replace the old finders. Old approach is still available via `--old-finders`.

Internal:

- isort now utilizes mypy and typing to filter out typing related issues before deployment.
- isort now utilizes black internally to ensure more consistent formatting.
- profile support for common project types (black, django, google, etc)
- Much much more. There is some difficulty in fully capturing the extent of changes in this release - just because of how all encompassing the release is. See: [Github Issues](https://github.com/pycqa/isort/issues?q=is%3Aissue+is%3Aclosed) for more.

### 4.3.21 - June 25, 2019 - hot fix release

- Fixed issue #957 - Long aliases and use_parentheses generates invalid syntax

### 4.3.20 - May 14, 2019 - hot fix release

- Fixed issue #948 - Pipe redirection broken on Python2.7

### 4.3.19 - May 12, 2019 - hot fix release

- Fixed issue #942 - correctly handle pyi (Python Template Files) to match `black` output

### 4.3.18 - May 1, 2019 - hot fix release

- Fixed an issue with parsing files that contain unicode characters in Python 2
- Fixed issue #924 - Pulling in pip internals causes deprecation warning
- Fixed issue #938 - Providing a way to filter explicitly passed in files via configuration settings (`--filter-files`)
- Improved interoperability with toml configuration files

### 4.3.17 - April 7, 2019 - hot fix release

- Fixed issue #905 & #919: Import section headers behaving strangely

### 4.3.16 - March 23, 2019 - hot fix release

- Fixed issue #909 - skip and skip-glob are not enforced when using settings-path.
- Fixed issue #907 - appdirs optional requirement does not correctly specify version
- Fixed issue #902 - Too broad warning about missing toml package
- Fixed issue #778 - remove `user` from known standard library as it's no longer in any supported Python version.
### 4.3.15 - March 10, 2019 - hot fix release

- Fixed a regression with handling streaming input from pipes (Issue #895)
- Fixed handling of `\x0c` whitespace character (Issue #811)
- Improved CLI documentation

### 4.3.14 - March 9, 2019 - hot fix release

- Fixed a regression with `*/directory/*.py` style patterns

### 4.3.13 - March 8, 2019 - hot fix release

- Fixed the inability to accurately determine import section when a mix of conda and virtual environments are used.
- Fixed some output being printed even when --quiet mode is enabled.
- Fixed issue #890 interoperability with PyCharm by allowing case sensitive non type grouped sorting.
- Fixed issue #889 under some circumstances isort will incorrectly add a new line at the beginning of a file.
- Fixed issue #885 many files not being skipped according to set skip settings.
- Fixed issue #842 streaming encoding improvements.

### 4.3.12 - March 6, 2019 - hot fix release

- Fix error caused when virtual environment not detected

### 4.3.11 - March 6, 2019 - hot fix release

- Fixed issue #876: confused by symlinks pointing to virtualenv gives FIRSTPARTY not THIRDPARTY
- Fixed issue #873: current version skips every file on travis
- Additional caching to reduce performance regression introduced in 4.3.5

### 4.3.10 - March 2, 2019 - hot fix release

- Fixed Windows incompatibilities (Issue #835)
- Fixed relative import sorting bug (Issue #417)
- Fixed "no_lines_before" to also be respected from previous empty sections.
- Fixed slow-down introduced by finders mechanism by adding a LRU cache (issue #848)
- Fixed issue #842 default encoding not-set in Python2
- Restored Windows automated testing
- Added Mac automated testing

### 4.3.9 - February 25, 2019 - hot fix release

- Fixed a bug that led to an incompatibility with black: #831

### 4.3.8 - February 25, 2019 - hot fix release

- Fixed a bug that led to the recursive option not always being available from the command line.

### 4.3.7 - February 25, 2019 - hot fix release

- Expands the finder failsafe to occur on the creation of the finder objects.

### 4.3.6 - February 24, 2019 - hot fix release

- Fixes a fatal error that occurs if a single finder throws an exception. Important as we add more finders that utilize third party libraries.

### 4.3.5 - February 24, 2019 - last Python 2.7 Maintenance Release

This is the final Python 2.x release of isort, and includes the following major changes:

Potentially Interface Breaking:

- The `-r` option for removing imports has been renamed `-rm` to avoid accidental deletions and confusion with the `-rc` recursive option.
- `__init__.py` has been removed from the default ignore list. The default ignore list is now empty - with all items needing to be explicitly ignored.
- Isort will now by default ignore .tox / venv folders in an effort to be "safe". You can disable this behaviour by setting the "--unsafe" flag; this is separate from any skip or not skip rules you may have in place.
- Isort now allows for files missing closing newlines in whitespace check
- `distutils` support has been removed to simplify setup.py

New:

- Official Python 3.7 Compatibility.
- Support for using requirements files to auto determine third-party section if pipreqs & requirementslib are installed.
- Added support for using pyproject.toml if toml is installed.
- Added support for XDG_HOME if appdirs is installed.
- An option has been added to enable ignoring trailing comments ('ignore_comments') defaulting to False.
- Added support to enable line length sorting for only specific sections
- Added a `correctly_sorted` property on the SortsImport to enable more intuitive programmatic checking.

Fixes:

- Improved black compatibility.
- Isort will now detect files in the CWD as first-party.
- Fixed several cases where '-ns' or 'not_skip' was being incorrectly ignored.
- Fixed sorting of relative path imports ('.', '..', '...', etc).
- Fixed bugs caused by a failure to maintain order when loading iterables from config files.
- Correctly handle CPython compiled imports and others that need EXT_SUFFIX to correctly identify.
- Fixed handling of Symbolic Links to follow them when walking the path.
- Fixed handling of relative known_paths.
- Fixed lack of access to all wrap modes from the CLI.
- Fixed handling of FIFO files.
- Fixed a bug that could result in multiple imports being inserted on the same line.

### 4.3.4 - February 12, 2018 - hotfix release

- Fixed issue #671: isort is corrupting CRLF files

### 4.3.3 - February 5, 2018 - hotfix release

- Fixed issue #665: Tabs turned into single spaces

### 4.3.2 - February 4, 2018 - hotfix release

- Fixed issue #651: Add imports option is broken
- Fixed issue #662: An error generated by rewriting `.imports` to `. imoprts`

### 4.3.1 - February 2, 2018 - hotfix release

- Fixed setup.py errors
- Fixed issue #654: Trailing comma count error
- Fixed issue #650: Wrong error message displayed

### 4.3.0 - January 31, 2018

- Fixed #557: `force_alphabetical_sort` and `force_sort_within_sections` can now be utilized together without extra new lines
- Fix case-sensitive path existence check in Mac OS X
- Added `--no-lines-before` for more granular control over section output
- Fixed #493: Unwanted conversion to Windows line endings
- Fixed #590: Import `as` mucks with alphabetical sorting
- Implemented `--version-number` to retrieve just the version number without the isort logo
- Breaking changes
    - Python 2.7+ only (dropped 2.6) allowing various code simplifications and improvements.
### 4.2.15 - June 6, 2017 - hotfix release

IMPORTANT NOTE: This will be the last release with Python 2.6 support, subsequent releases will be 2.7+ only

- Fixed certain one line imports not being successfully wrapped

### 4.2.14 - June 5, 2017 - hotfix release

- Fixed #559 & #565: Added missing standard library imports

### 4.2.13 - June 2, 2017 - hotfix release

- Fixed #553: Check only and --diff now work together again

### 4.2.12 - June 1, 2017 - hotfix release

- Fixed wheel distribution bug

### 4.2.11 - June 1, 2017 - hotfix release

- Fixed #546: Can't select y/n/c after latest update
- Fixed #545: Incorrectly moves `__future__` imports above encoding comments

### 4.2.9 - June 1, 2017 - hotfix release

- Fixed #428: Check only modifies sorting
- Fixed #540: Not correctly identifying stdlib modules

### 4.2.8 - May 31, 2017

- Added `--virtual-env` switch command line option
- Added --enforce-whitespace option to go along with --check-only for more exact checks (issue #423)
- Fixed imports with a trailing '\' and no space in-between getting removed (issue #425)
- Fixed issue #299: long lines occasionally not wrapped
- Fixed issue #432: No longer add import inside class when class starts at top of file after encoding comment
- Fixed issue #440: Added missing `--use-parentheses` option to command line tool and documentation
- Fixed issue #496: import* imports now get successfully identified and reformatted instead of deleted
- Fixed issue #491: Non ending parentheses within single line comments no longer cause formatting issues
- Fixed issue #471: Imports that wrap the maximum line length and contain comments on the last line are no longer rendered incorrectly
- Fixed issue #436: Force sort within section no longer rearranges comments
- Fixed issue #473: Force_to_top and force_sort_within_sections now work together
- Fixed issue #484 & #472: Consistent output with imports of same spelling but different case
- Fixed issue #433: No longer incorrectly add an extra new-line when comment between imports and function definition
- Fixed issue #419: Path specification for skipped paths is not Unix/Windows inter-operable.

Breaking Changes:

- Fixed issue #511: All command line options with an underscore have had the underscore replaced with a dash for consistency. This affects: multi-line, add-import, remove-import, force-adds, --force-single-line-imports, and length-sort.
- Replaced the `--enforce-whitespace` option with `--ignore-whitespace` to restore original behavior of strict whitespace by default

### 4.2.5

- Fixed an issue that caused modules to incorrectly be matched as thirdparty when they simply had `src` in the leading path, even if they weren't within `$VIRTUALENV/src` #414

### 4.2.4

- Fixed an issue that caused modules that contain functions before doc strings to incorrectly place imports
- Fixed regression in how `force_alphabetical_sort` was being interpreted (issue #409)
- Fixed stray print statement printing skipped files (issue #411)
- Added option for forcing imports into a single bucket: `no_sections`
- Added option for new lines between import types (from, straight): `lines_between_sections`

### 4.2.3

- Fixed a large number of priority bugs - bug fix only release

### 4.2.2

- Give an error message when isort is unable to determine where to place a module
- Allow imports to be sorted by module, independent of import_type, when `force_sort_within_sections` option is set
- Fixed an issue that caused Python files with 2 top comments not to be sorted

### 4.2.1

- Hot fix release to fix code error when skipping globs

### 4.2.0

- Added option "NOQA" Do not wrap lines, but add a noqa statement at the end
- Added support for running isort recursively, simply with a standalone `isort` command
- Added support to run isort library as a module
- Added compatibility for Python 3.5
- Fixed performance issue (#338) when running on project with lots of skipped directories
- Fixed issue #328: an extra newline can occasionally occur when using alphabetical-only sort
- Fixed custom sections parsing from config file (unicode string -> list)
- Updated pylama extension to the correct entry point
- Skip files even when file_contents is provided if they are explicitly in skip list
- Removed always showing isort banner, keeping it for when the version is requested, verbose is used, or show_logo setting is set.
### 4.1.2

- Fixed issue #323: Accidental default configuration change introduced

### 4.1.1

- Added support for partial file match skips (thanks to @Amwam)
- Added support for --quiet option to only show errors when running isort
- Fixed issue #316: isort added new lines incorrectly when a top-of line comment is present

### 4.1.0

- Started keeping a log of all changes between releases
- Added the isort logo to the command line interface
- Added example usage gif to README
- Implemented issue #292: skip setting now supports glob patterns
- Implemented issue #271: Add option to sort imports purely alphabetically
- Implemented issue #301: Readme is now natively in RST format, making it easier for Python tooling to pick up
- Implemented pylama isort extension
- Fixed issue #260: # encoding lines at the top of the file are now correctly supported
- Fixed issue #284: Sticky comments above first import are now supported
- Fixed issue #310: Ensure comments don't get duplicated when reformatting imports
- Fixed issue #289: Sections order not being respected
- Fixed issue #296: Made it more clear how to set arguments more than once

### 4.0.0

- Removed all external dependencies

--- isort-5.13.2/LICENSE ---

The MIT License (MIT)

Copyright (c) 2013 Timothy Edmund Crosley

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--- isort-5.13.2/README.md ---

[![isort - isort your imports, so you don't have to.](https://raw.githubusercontent.com/pycqa/isort/main/art/logo_large.png)](https://pycqa.github.io/isort/)

------------------------------------------------------------------------

[![PyPI version](https://badge.fury.io/py/isort.svg)](https://badge.fury.io/py/isort)
[![Test Status](https://github.com/pycqa/isort/workflows/Test/badge.svg?branch=develop)](https://github.com/pycqa/isort/actions?query=workflow%3ATest)
[![Lint Status](https://github.com/pycqa/isort/workflows/Lint/badge.svg?branch=develop)](https://github.com/pycqa/isort/actions?query=workflow%3ALint)
[![Code coverage Status](https://codecov.io/gh/pycqa/isort/branch/main/graph/badge.svg)](https://codecov.io/gh/pycqa/isort)
[![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.org/project/isort/)
[![Join the chat at https://gitter.im/timothycrosley/isort](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/timothycrosley/isort?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Downloads](https://pepy.tech/badge/isort)](https://pepy.tech/project/isort)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
[![DeepSource](https://static.deepsource.io/deepsource-badge-light-mini.svg)](https://deepsource.io/gh/pycqa/isort/?ref=repository-badge)
_________________

[Read Latest Documentation](https://pycqa.github.io/isort/) - [Browse GitHub Code Repository](https://github.com/pycqa/isort/)
_________________

isort your imports, so you don't have to.

isort is a Python utility / library to sort imports alphabetically and automatically separate into sections and by type. It provides a command line utility, Python library and [plugins for various editors](https://github.com/pycqa/isort/wiki/isort-Plugins) to quickly sort all your imports. It requires Python 3.8+ to run but supports formatting Python 2 code too.

- [Try isort now from your browser!](https://pycqa.github.io/isort/docs/quick_start/0.-try.html)
- [Using black?
  See the isort and black compatibility guide.](https://pycqa.github.io/isort/docs/configuration/black_compatibility.html)
- [isort has official support for pre-commit!](https://pycqa.github.io/isort/docs/configuration/pre-commit.html)

![Example Usage](https://raw.github.com/pycqa/isort/main/example.gif)

Before isort:

```python
from my_lib import Object

import os

from my_lib import Object3

from my_lib import Object2

import sys

from third_party import lib15, lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, lib10, lib11, lib12, lib13, lib14

import sys

from __future__ import absolute_import

from third_party import lib3

print("Hey")
print("yo")
```

After isort:

```python
from __future__ import absolute_import

import os
import sys

from third_party import (lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8,
                         lib9, lib10, lib11, lib12, lib13, lib14, lib15)

from my_lib import Object, Object2, Object3

print("Hey")
print("yo")
```

## Installing isort

Installing isort is as simple as:

```bash
pip install isort
```

## Using isort

**From the command line**:

To run on specific files:

```bash
isort mypythonfile.py mypythonfile2.py
```

To apply recursively:

```bash
isort .
```

If [globstar](https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html) is enabled, `isort .` is equivalent to:

```bash
isort **/*.py
```

To view proposed changes without applying them:

```bash
isort mypythonfile.py --diff
```

Finally, to atomically run isort against a project, only applying changes if they don't introduce syntax errors:

```bash
isort --atomic .
```

(Note: this is disabled by default, as it prevents isort from running against code written using a different version of Python.)

**From within Python**:

```python
import isort

isort.file("pythonfile.py")
```

or:

```python
import isort

sorted_code = isort.code("import b\nimport a\n")
```

## Installing isort's for your preferred text editor

Several plugins have been written that enable you to use isort from within a variety of text editors. You can find a full list of them [on the isort wiki](https://github.com/pycqa/isort/wiki/isort-Plugins). Additionally, I will enthusiastically accept pull requests that include plugins for other text editors and add documentation for them as I am notified.

## Multi line output modes

You will notice above the "multi_line_output" setting. This setting defines how from imports wrap when they extend past the line_length limit and has [12 possible settings](https://pycqa.github.io/isort/docs/configuration/multi_line_output_modes.html).

## Indentation

To change how constant indents appear, simply change the indent property with the following accepted formats:

- Number of spaces you would like. For example: 4 would cause standard 4 space indentation.
- Tab
- A verbatim string with quotes around it. For example:

```python
"    "
```

is equivalent to 4.

For the import styles that use parentheses, you can control whether or not to include a trailing comma after the last import with the `include_trailing_comma` option (defaults to `False`).

## Intelligently Balanced Multi-line Imports

As of isort 3.1.0 support for balanced multi-line imports has been added. With this enabled isort will dynamically change the import length to the one that produces the most balanced grid, while staying below the maximum import length defined.
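As a rough sketch, the same behavior can be exercised from the Python API shown earlier in this README. `isort.code` and the `balanced_wrapping` setting are documented features; passing the setting directly as a keyword argument (rather than through a config file) is an assumption here, and the exact wrap points you get back depend on your other options.

```python
import isort

# One long __future__ import that exceeds the default line length.
messy = (
    "from __future__ import absolute_import, division, "
    "print_function, unicode_literals\n"
)

# balanced_wrapping (the `-e` CLI option) asks isort to choose wrap points
# that produce the most evenly filled lines when the import is split.
print(isort.code(messy, balanced_wrapping=True))
```

With balancing enabled, output like the first block below is preferred over the second.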
Example:

```python
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
```

Will be produced instead of:

```python
from __future__ import (absolute_import, division, print_function,
                        unicode_literals)
```

To enable this set `balanced_wrapping` to `True` in your config or pass the `-e` option into the command line utility.

## Custom Sections and Ordering

isort provides configuration options to change almost every aspect of how imports are organized, ordered, or grouped together in sections.

[Click here](https://pycqa.github.io/isort/docs/configuration/custom_sections_and_ordering.html) for an overview of all these options.

## Skip processing of imports (outside of configuration)

To make isort ignore a single import, simply add a comment at the end of the import line containing the text `isort:skip`:

```python
import module # isort:skip
```

or:

```python
from xyz import (abc, # isort:skip
                 yo,
                 hey)
```

To make isort skip an entire file, simply add `isort:skip_file` to the module's doc string:

```python
""" my_module.py
    Best module ever

    isort:skip_file
"""

import b
import a
```

## Adding or removing an import from multiple files

isort can be run or configured to add / remove imports automatically.

[See a complete guide here.](https://pycqa.github.io/isort/docs/configuration/add_or_remove_imports.html)

## Using isort to verify code

The `--check-only` option
-------------------------

isort can also be used to verify that code is correctly formatted by running it with `-c`. Any files that contain incorrectly sorted and/or formatted imports will be outputted to `stderr`.

```bash
isort **/*.py -c -v

SUCCESS: /home/timothy/Projects/Open_Source/isort/isort_kate_plugin.py Everything Looks Good!
ERROR: /home/timothy/Projects/Open_Source/isort/isort/isort.py Imports are incorrectly sorted.
```

One great place this can be used is with a pre-commit git hook, such as this one by @acdha:

This can help to ensure a certain level of code quality throughout a project.

## Git hook

isort provides a hook function that can be integrated into your Git pre-commit script to check Python code before committing.

[More info here.](https://pycqa.github.io/isort/docs/configuration/git_hook.html)

## Setuptools integration

Upon installation, isort enables a `setuptools` command that checks Python files declared by your project.

[More info here.](https://pycqa.github.io/isort/docs/configuration/setuptools_integration.html)

## Spread the word

[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)

Place this badge at the top of your repository to let others know your project uses isort.

For README.md:

```markdown
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
```

Or README.rst:

```rst
.. image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
    :target: https://pycqa.github.io/isort/
```

## Security contact information

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the fix and disclosure.

## Why isort?

isort simply stands for import sort. It was originally called "sortImports" however I got tired of typing the extra characters and came to the realization camelCase is not pythonic.
I wrote isort because in an organization I used to work in the manager came in one day and decided all code must have alphabetically sorted imports. The code base was huge - and he meant for us to do it by hand. However, being a programmer - I'm too lazy to spend 8 hours mindlessly performing a function, but not too lazy to spend 16 hours automating it. I was given permission to open source sortImports and here we are :)

------------------------------------------------------------------------

[Get professionally supported isort with the Tidelift Subscription](https://tidelift.com/subscription/pkg/pypi-isort?utm_source=pypi-isort&utm_medium=referral&utm_campaign=readme)

Professional support for isort is available as part of the [Tidelift Subscription](https://tidelift.com/subscription/pkg/pypi-isort?utm_source=pypi-isort&utm_medium=referral&utm_campaign=readme). Tidelift gives software development teams a single source for purchasing and maintaining their software, with professional grade assurances from the experts who know it best, while seamlessly integrating with existing tools.

------------------------------------------------------------------------

Thanks and I hope you find isort useful!

~Timothy Crosley

--- isort-5.13.2/isort/__init__.py ---

"""Defines the public isort interface"""

__all__ = (
    "Config",
    "ImportKey",
    "__version__",
    "check_code",
    "check_file",
    "check_stream",
    "code",
    "file",
    "find_imports_in_code",
    "find_imports_in_file",
    "find_imports_in_paths",
    "find_imports_in_stream",
    "place_module",
    "place_module_with_reason",
    "settings",
    "stream",
)

from . import settings
from ._version import __version__
from .api import ImportKey
from .api import check_code_string as check_code
from .api import (
    check_file,
    check_stream,
    find_imports_in_code,
    find_imports_in_file,
    find_imports_in_paths,
    find_imports_in_stream,
    place_module,
    place_module_with_reason,
)
from .api import sort_code_string as code
from .api import sort_file as file
from .api import sort_stream as stream
from .settings import Config

--- isort-5.13.2/isort/__main__.py ---

from isort.main import main

main()

--- isort-5.13.2/isort/_vendored/tomli/LICENSE ---

MIT License

Copyright (c) 2021 Taneli Hukkinen

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/_vendored/tomli/__init__.py0000644000000000000000000000032514536412763016245 0ustar00"""A lil' TOML parser.""" __all__ = ("loads", "load", "TOMLDecodeError") __version__ = "1.2.0" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT from ._parser import TOMLDecodeError, load, loads ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/_vendored/tomli/_parser.py0000644000000000000000000005162514536412763016152 0ustar00import string import warnings from types import MappingProxyType from typing import IO, Any, Callable, Dict, FrozenSet, Iterable, NamedTuple, Optional, Tuple from ._re import ( RE_DATETIME, RE_LOCALTIME, RE_NUMBER, match_to_datetime, match_to_localtime, match_to_number, ) ASCII_CTRL = frozenset(chr(i) for i in range(32)) | frozenset(chr(127)) # Neither of these sets include quotation mark or backslash. They are # currently handled as separate cases in the parser functions. ILLEGAL_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t") ILLEGAL_MULTILINE_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t\n\r") ILLEGAL_LITERAL_STR_CHARS = ILLEGAL_BASIC_STR_CHARS ILLEGAL_MULTILINE_LITERAL_STR_CHARS = ASCII_CTRL - frozenset("\t\n") ILLEGAL_COMMENT_CHARS = ILLEGAL_BASIC_STR_CHARS TOML_WS = frozenset(" \t") TOML_WS_AND_NEWLINE = TOML_WS | frozenset("\n") BARE_KEY_CHARS = frozenset(string.ascii_letters + string.digits + "-_") KEY_INITIAL_CHARS = BARE_KEY_CHARS | frozenset("\"'") HEXDIGIT_CHARS = frozenset(string.hexdigits) BASIC_STR_ESCAPE_REPLACEMENTS = MappingProxyType( { "\\b": "\u0008", # backspace "\\t": "\u0009", # tab "\\n": "\u000A", # linefeed "\\f": "\u000C", # form feed "\\r": "\u000D", # carriage return '\\"': "\u0022", # quote "\\\\": "\u005C", # backslash } ) # Type annotations ParseFloat = Callable[[str], Any] Key = Tuple[str, ...] Pos = int class TOMLDecodeError(ValueError): """An error raised if a document is not valid TOML.""" def load(fp: IO, *, parse_float: ParseFloat = float) -> Dict[str, Any]: """Parse TOML from a file object.""" s = fp.read() if isinstance(s, bytes): s = s.decode() else: warnings.warn( "Text file object support is deprecated in favor of binary file objects." ' Use `open("foo.toml", "rb")` to open the file in binary mode.', DeprecationWarning, ) return loads(s, parse_float=parse_float) def loads(s: str, *, parse_float: ParseFloat = float) -> Dict[str, Any]: # noqa: C901 """Parse TOML from a string.""" # The spec allows converting "\r\n" to "\n", even in string # literals. Let's do so to simplify parsing. src = s.replace("\r\n", "\n") pos = 0 out = Output(NestedDict(), Flags()) header: Key = () # Parse one statement at a time # (typically means one line in TOML source) while True: # 1. Skip line leading whitespace pos = skip_chars(src, pos, TOML_WS) # 2. Parse rules. Expect one of the following: # - end of file # - end of line # - comment # - key/value pair # - append dict to list (and move to its namespace) # - create dict (and move to its namespace) # Skip trailing whitespace when applicable. 
try: char = src[pos] except IndexError: break if char == "\n": pos += 1 continue if char in KEY_INITIAL_CHARS: pos = key_value_rule(src, pos, out, header, parse_float) pos = skip_chars(src, pos, TOML_WS) elif char == "[": try: second_char: Optional[str] = src[pos + 1] except IndexError: second_char = None if second_char == "[": pos, header = create_list_rule(src, pos, out) else: pos, header = create_dict_rule(src, pos, out) pos = skip_chars(src, pos, TOML_WS) elif char != "#": raise suffixed_err(src, pos, "Invalid statement") # 3. Skip comment pos = skip_comment(src, pos) # 4. Expect end of line or end of file try: char = src[pos] except IndexError: break if char != "\n": raise suffixed_err(src, pos, "Expected newline or end of document after a statement") pos += 1 return out.data.dict class Flags: """Flags that map to parsed keys/namespaces.""" # Marks an immutable namespace (inline array or inline table). FROZEN = 0 # Marks a nest that has been explicitly created and can no longer # be opened using the "[table]" syntax. EXPLICIT_NEST = 1 def __init__(self) -> None: self._flags: Dict[str, dict] = {} def unset_all(self, key: Key) -> None: cont = self._flags for k in key[:-1]: if k not in cont: return cont = cont[k]["nested"] cont.pop(key[-1], None) def set_for_relative_key(self, head_key: Key, rel_key: Key, flag: int) -> None: cont = self._flags for k in head_key: if k not in cont: cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}} cont = cont[k]["nested"] for k in rel_key: if k in cont: cont[k]["flags"].add(flag) else: cont[k] = {"flags": {flag}, "recursive_flags": set(), "nested": {}} cont = cont[k]["nested"] def set(self, key: Key, flag: int, *, recursive: bool) -> None: # noqa: A003 cont = self._flags key_parent, key_stem = key[:-1], key[-1] for k in key_parent: if k not in cont: cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}} cont = cont[k]["nested"] if key_stem not in cont: cont[key_stem] = {"flags": set(), "recursive_flags": set(), "nested": {}} cont[key_stem]["recursive_flags" if recursive else "flags"].add(flag) def is_(self, key: Key, flag: int) -> bool: if not key: return False # document root has no flags cont = self._flags for k in key[:-1]: if k not in cont: return False inner_cont = cont[k] if flag in inner_cont["recursive_flags"]: return True cont = inner_cont["nested"] key_stem = key[-1] if key_stem in cont: cont = cont[key_stem] return flag in cont["flags"] or flag in cont["recursive_flags"] return False class NestedDict: def __init__(self) -> None: # The parsed content of the TOML document self.dict: Dict[str, Any] = {} def get_or_create_nest( self, key: Key, *, access_lists: bool = True, ) -> dict: cont: Any = self.dict for k in key: if k not in cont: cont[k] = {} cont = cont[k] if access_lists and isinstance(cont, list): cont = cont[-1] if not isinstance(cont, dict): raise KeyError("There is no nest behind this key") return cont def append_nest_to_list(self, key: Key) -> None: cont = self.get_or_create_nest(key[:-1]) last_key = key[-1] if last_key in cont: list_ = cont[last_key] if not isinstance(list_, list): raise KeyError("An object other than list found behind this key") list_.append({}) else: cont[last_key] = [{}] class Output(NamedTuple): data: NestedDict flags: Flags def skip_chars(src: str, pos: Pos, chars: Iterable[str]) -> Pos: try: while src[pos] in chars: pos += 1 except IndexError: pass return pos def skip_until( src: str, pos: Pos, expect: str, *, error_on: FrozenSet[str], error_on_eof: bool, ) -> Pos: try: new_pos = 
src.index(expect, pos) except ValueError: new_pos = len(src) if error_on_eof: raise suffixed_err(src, new_pos, f'Expected "{expect!r}"') if not error_on.isdisjoint(src[pos:new_pos]): while src[pos] not in error_on: pos += 1 raise suffixed_err(src, pos, f'Found invalid character "{src[pos]!r}"') return new_pos def skip_comment(src: str, pos: Pos) -> Pos: try: char: Optional[str] = src[pos] except IndexError: char = None if char == "#": return skip_until(src, pos + 1, "\n", error_on=ILLEGAL_COMMENT_CHARS, error_on_eof=False) return pos def skip_comments_and_array_ws(src: str, pos: Pos) -> Pos: while True: pos_before_skip = pos pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) pos = skip_comment(src, pos) if pos == pos_before_skip: return pos def create_dict_rule(src: str, pos: Pos, out: Output) -> Tuple[Pos, Key]: pos += 1 # Skip "[" pos = skip_chars(src, pos, TOML_WS) pos, key = parse_key(src, pos) if out.flags.is_(key, Flags.EXPLICIT_NEST) or out.flags.is_(key, Flags.FROZEN): raise suffixed_err(src, pos, f"Can not declare {key} twice") out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) try: out.data.get_or_create_nest(key) except KeyError: raise suffixed_err(src, pos, "Can not overwrite a value") if not src.startswith("]", pos): raise suffixed_err(src, pos, 'Expected "]" at the end of a table declaration') return pos + 1, key def create_list_rule(src: str, pos: Pos, out: Output) -> Tuple[Pos, Key]: pos += 2 # Skip "[[" pos = skip_chars(src, pos, TOML_WS) pos, key = parse_key(src, pos) if out.flags.is_(key, Flags.FROZEN): raise suffixed_err(src, pos, f"Can not mutate immutable namespace {key}") # Free the namespace now that it points to another empty list item... out.flags.unset_all(key) # ...but this key precisely is still prohibited from table declaration out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) try: out.data.append_nest_to_list(key) except KeyError: raise suffixed_err(src, pos, "Can not overwrite a value") if not src.startswith("]]", pos): raise suffixed_err(src, pos, 'Expected "]]" at the end of an array declaration') return pos + 2, key def key_value_rule(src: str, pos: Pos, out: Output, header: Key, parse_float: ParseFloat) -> Pos: pos, key, value = parse_key_value_pair(src, pos, parse_float) key_parent, key_stem = key[:-1], key[-1] abs_key_parent = header + key_parent if out.flags.is_(abs_key_parent, Flags.FROZEN): raise suffixed_err(src, pos, f"Can not mutate immutable namespace {abs_key_parent}") # Containers in the relative path can't be opened with the table syntax after this out.flags.set_for_relative_key(header, key, Flags.EXPLICIT_NEST) try: nest = out.data.get_or_create_nest(abs_key_parent) except KeyError: raise suffixed_err(src, pos, "Can not overwrite a value") if key_stem in nest: raise suffixed_err(src, pos, "Can not overwrite a value") # Mark inline table and array namespaces recursively immutable if isinstance(value, (dict, list)): out.flags.set(header + key, Flags.FROZEN, recursive=True) nest[key_stem] = value return pos def parse_key_value_pair(src: str, pos: Pos, parse_float: ParseFloat) -> Tuple[Pos, Key, Any]: pos, key = parse_key(src, pos) try: char: Optional[str] = src[pos] except IndexError: char = None if char != "=": raise suffixed_err(src, pos, 'Expected "=" after a key in a key/value pair') pos += 1 pos = skip_chars(src, pos, TOML_WS) pos, value = parse_value(src, pos, parse_float) return pos, key, value def parse_key(src: str, pos: Pos) -> Tuple[Pos, Key]: pos, key_part = parse_key_part(src, pos) key: Key = (key_part,) pos = 
skip_chars(src, pos, TOML_WS) while True: try: char: Optional[str] = src[pos] except IndexError: char = None if char != ".": return pos, key pos += 1 pos = skip_chars(src, pos, TOML_WS) pos, key_part = parse_key_part(src, pos) key += (key_part,) pos = skip_chars(src, pos, TOML_WS) def parse_key_part(src: str, pos: Pos) -> Tuple[Pos, str]: try: char: Optional[str] = src[pos] except IndexError: char = None if char in BARE_KEY_CHARS: start_pos = pos pos = skip_chars(src, pos, BARE_KEY_CHARS) return pos, src[start_pos:pos] if char == "'": return parse_literal_str(src, pos) if char == '"': return parse_one_line_basic_str(src, pos) raise suffixed_err(src, pos, "Invalid initial character for a key part") def parse_one_line_basic_str(src: str, pos: Pos) -> Tuple[Pos, str]: pos += 1 return parse_basic_str(src, pos, multiline=False) def parse_array(src: str, pos: Pos, parse_float: ParseFloat) -> Tuple[Pos, list]: pos += 1 array: list = [] pos = skip_comments_and_array_ws(src, pos) if src.startswith("]", pos): return pos + 1, array while True: pos, val = parse_value(src, pos, parse_float) array.append(val) pos = skip_comments_and_array_ws(src, pos) c = src[pos : pos + 1] if c == "]": return pos + 1, array if c != ",": raise suffixed_err(src, pos, "Unclosed array") pos += 1 pos = skip_comments_and_array_ws(src, pos) if src.startswith("]", pos): return pos + 1, array def parse_inline_table(src: str, pos: Pos, parse_float: ParseFloat) -> Tuple[Pos, dict]: pos += 1 nested_dict = NestedDict() flags = Flags() pos = skip_chars(src, pos, TOML_WS) if src.startswith("}", pos): return pos + 1, nested_dict.dict while True: pos, key, value = parse_key_value_pair(src, pos, parse_float) key_parent, key_stem = key[:-1], key[-1] if flags.is_(key, Flags.FROZEN): raise suffixed_err(src, pos, f"Can not mutate immutable namespace {key}") try: nest = nested_dict.get_or_create_nest(key_parent, access_lists=False) except KeyError: raise suffixed_err(src, pos, "Can not overwrite a value") if key_stem in nest: raise suffixed_err(src, pos, f'Duplicate inline table key "{key_stem}"') nest[key_stem] = value pos = skip_chars(src, pos, TOML_WS) c = src[pos : pos + 1] if c == "}": return pos + 1, nested_dict.dict if c != ",": raise suffixed_err(src, pos, "Unclosed inline table") if isinstance(value, (dict, list)): flags.set(key, Flags.FROZEN, recursive=True) pos += 1 pos = skip_chars(src, pos, TOML_WS) def parse_basic_str_escape( # noqa: C901 src: str, pos: Pos, *, multiline: bool = False ) -> Tuple[Pos, str]: escape_id = src[pos : pos + 2] pos += 2 if multiline and escape_id in {"\\ ", "\\\t", "\\\n"}: # Skip whitespace until next non-whitespace character or end of # the doc. Error if non-whitespace is found before newline. 
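        # Sketch of the "line ending backslash" rule handled here: inside a
        # multiline basic string a trailing backslash swallows the newline and
        # any following whitespace, e.g.
        #
        #     >>> loads('s = """a \\\n    b"""')["s"]
        #     'a b'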
if escape_id != "\\\n": pos = skip_chars(src, pos, TOML_WS) try: char = src[pos] except IndexError: return pos, "" if char != "\n": raise suffixed_err(src, pos, 'Unescaped "\\" in a string') pos += 1 pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) return pos, "" if escape_id == "\\u": return parse_hex_char(src, pos, 4) if escape_id == "\\U": return parse_hex_char(src, pos, 8) try: return pos, BASIC_STR_ESCAPE_REPLACEMENTS[escape_id] except KeyError: if len(escape_id) != 2: raise suffixed_err(src, pos, "Unterminated string") raise suffixed_err(src, pos, 'Unescaped "\\" in a string') def parse_basic_str_escape_multiline(src: str, pos: Pos) -> Tuple[Pos, str]: return parse_basic_str_escape(src, pos, multiline=True) def parse_hex_char(src: str, pos: Pos, hex_len: int) -> Tuple[Pos, str]: hex_str = src[pos : pos + hex_len] if len(hex_str) != hex_len or not HEXDIGIT_CHARS.issuperset(hex_str): raise suffixed_err(src, pos, "Invalid hex value") pos += hex_len hex_int = int(hex_str, 16) if not is_unicode_scalar_value(hex_int): raise suffixed_err(src, pos, "Escaped character is not a Unicode scalar value") return pos, chr(hex_int) def parse_literal_str(src: str, pos: Pos) -> Tuple[Pos, str]: pos += 1 # Skip starting apostrophe start_pos = pos pos = skip_until(src, pos, "'", error_on=ILLEGAL_LITERAL_STR_CHARS, error_on_eof=True) return pos + 1, src[start_pos:pos] # Skip ending apostrophe def parse_multiline_str(src: str, pos: Pos, *, literal: bool) -> Tuple[Pos, str]: pos += 3 if src.startswith("\n", pos): pos += 1 if literal: delim = "'" end_pos = skip_until( src, pos, "'''", error_on=ILLEGAL_MULTILINE_LITERAL_STR_CHARS, error_on_eof=True, ) result = src[pos:end_pos] pos = end_pos + 3 else: delim = '"' pos, result = parse_basic_str(src, pos, multiline=True) # Add at maximum two extra apostrophes/quotes if the end sequence # is 4 or 5 chars long instead of just 3. 
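    # For instance (illustrative sketch), a literal string closed by four
    # apostrophes keeps one of them as content:
    #
    #     >>> loads("s = '''ab''''")["s"]
    #     "ab'"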
if not src.startswith(delim, pos): return pos, result pos += 1 if not src.startswith(delim, pos): return pos, result + delim pos += 1 return pos, result + (delim * 2) def parse_basic_str(src: str, pos: Pos, *, multiline: bool) -> Tuple[Pos, str]: if multiline: error_on = ILLEGAL_MULTILINE_BASIC_STR_CHARS parse_escapes = parse_basic_str_escape_multiline else: error_on = ILLEGAL_BASIC_STR_CHARS parse_escapes = parse_basic_str_escape result = "" start_pos = pos while True: try: char = src[pos] except IndexError: raise suffixed_err(src, pos, "Unterminated string") if char == '"': if not multiline: return pos + 1, result + src[start_pos:pos] if src.startswith('"""', pos): return pos + 3, result + src[start_pos:pos] pos += 1 continue if char == "\\": result += src[start_pos:pos] pos, parsed_escape = parse_escapes(src, pos) result += parsed_escape start_pos = pos continue if char in error_on: raise suffixed_err(src, pos, f'Illegal character "{char!r}"') pos += 1 def parse_value(src: str, pos: Pos, parse_float: ParseFloat) -> Tuple[Pos, Any]: # noqa: C901 try: char: Optional[str] = src[pos] except IndexError: char = None # Basic strings if char == '"': if src.startswith('"""', pos): return parse_multiline_str(src, pos, literal=False) return parse_one_line_basic_str(src, pos) # Literal strings if char == "'": if src.startswith("'''", pos): return parse_multiline_str(src, pos, literal=True) return parse_literal_str(src, pos) # Booleans if char == "t": if src.startswith("true", pos): return pos + 4, True if char == "f": if src.startswith("false", pos): return pos + 5, False # Dates and times datetime_match = RE_DATETIME.match(src, pos) if datetime_match: try: datetime_obj = match_to_datetime(datetime_match) except ValueError: raise suffixed_err(src, pos, "Invalid date or datetime") return datetime_match.end(), datetime_obj localtime_match = RE_LOCALTIME.match(src, pos) if localtime_match: return localtime_match.end(), match_to_localtime(localtime_match) # Integers and "normal" floats. # The regex will greedily match any type starting with a decimal # char, so needs to be located after handling of dates and times. 
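    # Sketch of why this ordering matters: RE_NUMBER would happily match the
    # leading "1979" of a date, so dates and times must be tried first, e.g.
    #
    #     >>> loads("d = 1979-05-27")["d"]
    #     datetime.date(1979, 5, 27)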
    number_match = RE_NUMBER.match(src, pos)
    if number_match:
        return number_match.end(), match_to_number(number_match, parse_float)

    # Arrays
    if char == "[":
        return parse_array(src, pos, parse_float)

    # Inline tables
    if char == "{":
        return parse_inline_table(src, pos, parse_float)

    # Special floats
    first_three = src[pos : pos + 3]
    if first_three in {"inf", "nan"}:
        return pos + 3, parse_float(first_three)
    first_four = src[pos : pos + 4]
    if first_four in {"-inf", "+inf", "-nan", "+nan"}:
        return pos + 4, parse_float(first_four)

    raise suffixed_err(src, pos, "Invalid value")


def suffixed_err(src: str, pos: Pos, msg: str) -> TOMLDecodeError:
    """Return a `TOMLDecodeError` where error message is suffixed with coordinates in source."""

    def coord_repr(src: str, pos: Pos) -> str:
        if pos >= len(src):
            return "end of document"
        line = src.count("\n", 0, pos) + 1
        if line == 1:
            column = pos + 1
        else:
            column = pos - src.rindex("\n", 0, pos)
        return f"line {line}, column {column}"

    return TOMLDecodeError(f"{msg} (at {coord_repr(src, pos)})")


def is_unicode_scalar_value(codepoint: int) -> bool:
    return (0 <= codepoint <= 55295) or (57344 <= codepoint <= 1114111)


# File: isort-5.13.2/isort/_vendored/tomli/_re.py

import re
from datetime import date, datetime, time, timedelta, timezone, tzinfo
from functools import lru_cache
from typing import TYPE_CHECKING, Any, Optional, Union

if TYPE_CHECKING:
    from tomli._parser import ParseFloat

# E.g.
# - 00:32:00.999999
# - 00:32:00
_TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?"

RE_NUMBER = re.compile(
    r"""
0
(?:
    x[0-9A-Fa-f](?:_?[0-9A-Fa-f])*  # hex
    |
    b[01](?:_?[01])*  # bin
    |
    o[0-7](?:_?[0-7])*  # oct
)
|
[+-]?(?:0|[1-9](?:_?[0-9])*)  # dec, integer part
(?P<floatpart>
    (?:\.[0-9](?:_?[0-9])*)?  # optional fractional part
    (?:[eE][+-]?[0-9](?:_?[0-9])*)?  # optional exponent part
)
""",
    flags=re.VERBOSE,
)
RE_LOCALTIME = re.compile(_TIME_RE_STR)
RE_DATETIME = re.compile(
    rf"""
([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])  # date, e.g. 1988-10-27
(?:
    [T ]
    {_TIME_RE_STR}
    (?:(Z)|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))?  # optional time offset
)?
""",
    flags=re.VERBOSE,
)


def match_to_datetime(match: "re.Match") -> Union[datetime, date]:
    """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`.

    Raises ValueError if the match does not correspond to a valid date or datetime.
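
    A minimal example (sketch):

        >>> match_to_datetime(RE_DATETIME.match("1979-05-27T07:32:00Z"))
        datetime.datetime(1979, 5, 27, 7, 32, tzinfo=datetime.timezone.utc)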
""" ( year_str, month_str, day_str, hour_str, minute_str, sec_str, micros_str, zulu_time, offset_sign_str, offset_hour_str, offset_minute_str, ) = match.groups() year, month, day = int(year_str), int(month_str), int(day_str) if hour_str is None: return date(year, month, day) hour, minute, sec = int(hour_str), int(minute_str), int(sec_str) micros = int(micros_str.ljust(6, "0")) if micros_str else 0 if offset_sign_str: tz: Optional[tzinfo] = cached_tz(offset_hour_str, offset_minute_str, offset_sign_str) elif zulu_time: tz = timezone.utc else: # local date-time tz = None return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz) @lru_cache(maxsize=None) def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone: sign = 1 if sign_str == "+" else -1 return timezone( timedelta( hours=sign * int(hour_str), minutes=sign * int(minute_str), ) ) def match_to_localtime(match: "re.Match") -> time: hour_str, minute_str, sec_str, micros_str = match.groups() micros = int(micros_str.ljust(6, "0")) if micros_str else 0 return time(int(hour_str), int(minute_str), int(sec_str), micros) def match_to_number(match: "re.Match", parse_float: "ParseFloat") -> Any: if match.group("floatpart"): return parse_float(match.group()) return int(match.group(), 0) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/_vendored/tomli/py.typed0000644000000000000000000000003214536412763015626 0ustar00# Marker file for PEP 561 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/_version.py0000644000000000000000000000011014536412763013231 0ustar00from importlib import metadata __version__ = metadata.version("isort") ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/api.py0000644000000000000000000006301014536412763012166 0ustar00__all__ = ( "ImportKey", "check_code_string", "check_file", "check_stream", "find_imports_in_code", "find_imports_in_file", "find_imports_in_paths", "find_imports_in_stream", "place_module", "place_module_with_reason", "sort_code_string", "sort_file", "sort_stream", ) import contextlib import shutil import sys from enum import Enum from io import StringIO from itertools import chain from pathlib import Path from typing import Any, Iterator, Optional, Set, TextIO, Union, cast from warnings import warn from isort import core from . import files, identify, io from .exceptions import ( ExistingSyntaxErrors, FileSkipComment, FileSkipSetting, IntroducedSyntaxErrors, ) from .format import ask_whether_to_apply_changes_to_file, create_terminal_printer, show_unified_diff from .io import Empty, File from .place import module as place_module # noqa: F401 from .place import module_with_reason as place_module_with_reason # noqa: F401 from .settings import CYTHON_EXTENSIONS, DEFAULT_CONFIG, Config class ImportKey(Enum): """Defines how to key an individual import, generally for deduping. 
Import keys are defined from less to more specific: from x.y import z as a ______| | | | | | | | PACKAGE | | | ________| | | | | | MODULE | | _________________| | | | ATTRIBUTE | ______________________| | ALIAS """ PACKAGE = 1 MODULE = 2 ATTRIBUTE = 3 ALIAS = 4 def sort_code_string( code: str, extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = False, show_diff: Union[bool, TextIO] = False, **config_kwargs: Any, ) -> str: """Sorts any imports within the provided code string, returning a new string with them sorted. - **code**: The string of code with imports that need to be sorted. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - ****config_kwargs**: Any config modifications. """ input_stream = StringIO(code) output_stream = StringIO() config = _config(path=file_path, config=config, **config_kwargs) sort_stream( input_stream, output_stream, extension=extension, config=config, file_path=file_path, disregard_skip=disregard_skip, show_diff=show_diff, ) output_stream.seek(0) return output_stream.read() def check_code_string( code: str, show_diff: Union[bool, TextIO] = False, extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = False, **config_kwargs: Any, ) -> bool: """Checks the order, format, and categorization of imports within the provided code string. Returns `True` if everything is correct, otherwise `False`. - **code**: The string of code with imports that need to be sorted. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - ****config_kwargs**: Any config modifications. """ config = _config(path=file_path, config=config, **config_kwargs) return check_stream( StringIO(code), show_diff=show_diff, extension=extension, config=config, file_path=file_path, disregard_skip=disregard_skip, ) def sort_stream( input_stream: TextIO, output_stream: TextIO, extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = False, show_diff: Union[bool, TextIO] = False, raise_on_skip: bool = True, **config_kwargs: Any, ) -> bool: """Sorts any imports within the provided code stream, outputs to the provided output stream. Returns `True` if anything is modified from the original input stream, otherwise `False`. - **input_stream**: The stream of code with imports that need to be sorted. - **output_stream**: The stream where sorted imports should be written to. - **extension**: The file extension that contains imports. Defaults to filename extension or py. 
- **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - ****config_kwargs**: Any config modifications. """ extension = extension or (file_path and file_path.suffix.lstrip(".")) or "py" if show_diff: _output_stream = StringIO() _input_stream = StringIO(input_stream.read()) changed = sort_stream( input_stream=_input_stream, output_stream=_output_stream, extension=extension, config=config, file_path=file_path, disregard_skip=disregard_skip, raise_on_skip=raise_on_skip, **config_kwargs, ) _output_stream.seek(0) _input_stream.seek(0) show_unified_diff( file_input=_input_stream.read(), file_output=_output_stream.read(), file_path=file_path, output=output_stream if show_diff is True else show_diff, color_output=config.color_output, ) return changed config = _config(path=file_path, config=config, **config_kwargs) content_source = str(file_path or "Passed in content") if not disregard_skip and file_path and config.is_skipped(file_path): raise FileSkipSetting(content_source) _internal_output = output_stream if config.atomic: try: file_content = input_stream.read() compile(file_content, content_source, "exec", 0, 1) except SyntaxError: if extension not in CYTHON_EXTENSIONS: raise ExistingSyntaxErrors(content_source) if config.verbose: warn( f"{content_source} Python AST errors found but ignored due to Cython extension" ) input_stream = StringIO(file_content) if not output_stream.readable(): _internal_output = StringIO() try: changed = core.process( input_stream, _internal_output, extension=extension, config=config, raise_on_skip=raise_on_skip, ) except FileSkipComment: raise FileSkipComment(content_source) if config.atomic: _internal_output.seek(0) try: compile(_internal_output.read(), content_source, "exec", 0, 1) _internal_output.seek(0) except SyntaxError: # pragma: no cover if extension not in CYTHON_EXTENSIONS: raise IntroducedSyntaxErrors(content_source) if config.verbose: warn( f"{content_source} Python AST errors found but ignored due to Cython extension" ) if _internal_output != output_stream: output_stream.write(_internal_output.read()) return changed def check_stream( input_stream: TextIO, show_diff: Union[bool, TextIO] = False, extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = False, **config_kwargs: Any, ) -> bool: """Checks any imports within the provided code stream, returning `False` if any unsorted or incorrectly imports are found or `True` if no problems are identified. - **input_stream**: The stream of code with imports that need to be sorted. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - ****config_kwargs**: Any config modifications. 
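
    A minimal usage sketch (default configuration assumed):

        from io import StringIO

        ok = check_stream(StringIO("import a\nimport b\n"))  # True: already sorted
        fixed = sort_code_string("import b\nimport a\n")     # -> "import a\nimport b\n"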
""" config = _config(path=file_path, config=config, **config_kwargs) if show_diff: input_stream = StringIO(input_stream.read()) changed: bool = sort_stream( input_stream=input_stream, output_stream=Empty, extension=extension, config=config, file_path=file_path, disregard_skip=disregard_skip, ) printer = create_terminal_printer( color=config.color_output, error=config.format_error, success=config.format_success ) if not changed: if config.verbose and not config.only_modified: printer.success(f"{file_path or ''} Everything Looks Good!") return True printer.error(f"{file_path or ''} Imports are incorrectly sorted and/or formatted.") if show_diff: output_stream = StringIO() input_stream.seek(0) file_contents = input_stream.read() sort_stream( input_stream=StringIO(file_contents), output_stream=output_stream, extension=extension, config=config, file_path=file_path, disregard_skip=disregard_skip, ) output_stream.seek(0) show_unified_diff( file_input=file_contents, file_output=output_stream.read(), file_path=file_path, output=None if show_diff is True else show_diff, color_output=config.color_output, ) return False def check_file( filename: Union[str, Path], show_diff: Union[bool, TextIO] = False, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = True, extension: Optional[str] = None, **config_kwargs: Any, ) -> bool: """Checks any imports within the provided file, returning `False` if any unsorted or incorrectly imports are found or `True` if no problems are identified. - **filename**: The name or Path of the file to check. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - ****config_kwargs**: Any config modifications. 
""" file_config: Config = config if "config_trie" in config_kwargs: config_trie = config_kwargs.pop("config_trie", None) if config_trie: config_info = config_trie.search(filename) if config.verbose: print(f"{config_info[0]} used for file {filename}") file_config = Config(**config_info[1]) with io.File.read(filename) as source_file: return check_stream( source_file.stream, show_diff=show_diff, extension=extension, config=file_config, file_path=file_path or source_file.path, disregard_skip=disregard_skip, **config_kwargs, ) def _tmp_file(source_file: File) -> Path: return source_file.path.with_suffix(source_file.path.suffix + ".isorted") @contextlib.contextmanager def _in_memory_output_stream_context() -> Iterator[TextIO]: yield StringIO(newline=None) @contextlib.contextmanager def _file_output_stream_context(filename: Union[str, Path], source_file: File) -> Iterator[TextIO]: tmp_file = _tmp_file(source_file) with tmp_file.open("w+", encoding=source_file.encoding, newline="") as output_stream: shutil.copymode(filename, tmp_file) yield output_stream def sort_file( filename: Union[str, Path], extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = True, ask_to_apply: bool = False, show_diff: Union[bool, TextIO] = False, write_to_stdout: bool = False, output: Optional[TextIO] = None, **config_kwargs: Any, ) -> bool: """Sorts and formats any groups of imports imports within the provided file or Path. Returns `True` if the file has been changed, otherwise `False`. - **filename**: The name or Path of the file to format. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **disregard_skip**: set to `True` if you want to ignore a skip set in config for this file. - **ask_to_apply**: If `True`, prompt before applying any changes. - **show_diff**: If `True` the changes that need to be done will be printed to stdout, if a TextIO stream is provided results will be written to it, otherwise no diff will be computed. - **write_to_stdout**: If `True`, write to stdout instead of the input file. - **output**: If a TextIO is provided, results will be written there rather than replacing the original file content. - ****config_kwargs**: Any config modifications. 
""" file_config: Config = config if "config_trie" in config_kwargs: config_trie = config_kwargs.pop("config_trie", None) if config_trie: config_info = config_trie.search(filename) if config.verbose: print(f"{config_info[0]} used for file {filename}") file_config = Config(**config_info[1]) with io.File.read(filename) as source_file: actual_file_path = file_path or source_file.path config = _config(path=actual_file_path, config=file_config, **config_kwargs) changed: bool = False try: if write_to_stdout: changed = sort_stream( input_stream=source_file.stream, output_stream=sys.stdout, config=config, file_path=actual_file_path, disregard_skip=disregard_skip, extension=extension, ) else: if output is None: try: if config.overwrite_in_place: output_stream_context = _in_memory_output_stream_context() else: output_stream_context = _file_output_stream_context( filename, source_file ) with output_stream_context as output_stream: changed = sort_stream( input_stream=source_file.stream, output_stream=output_stream, config=config, file_path=actual_file_path, disregard_skip=disregard_skip, extension=extension, ) output_stream.seek(0) if changed: if show_diff or ask_to_apply: source_file.stream.seek(0) show_unified_diff( file_input=source_file.stream.read(), file_output=output_stream.read(), file_path=actual_file_path, output=None if show_diff is True else cast(TextIO, show_diff), color_output=config.color_output, ) if show_diff or ( ask_to_apply and not ask_whether_to_apply_changes_to_file( str(source_file.path) ) ): return False source_file.stream.close() if config.overwrite_in_place: output_stream.seek(0) with source_file.path.open("w") as fs: shutil.copyfileobj(output_stream, fs) if changed: if not config.overwrite_in_place: tmp_file = _tmp_file(source_file) tmp_file.replace(source_file.path) if not config.quiet: print(f"Fixing {source_file.path}") finally: try: # Python 3.8+: use `missing_ok=True` instead of try except. if not config.overwrite_in_place: # pragma: no branch tmp_file = _tmp_file(source_file) tmp_file.unlink() except FileNotFoundError: pass # pragma: no cover else: changed = sort_stream( input_stream=source_file.stream, output_stream=output, config=config, file_path=actual_file_path, disregard_skip=disregard_skip, extension=extension, ) if changed and show_diff: source_file.stream.seek(0) output.seek(0) show_unified_diff( file_input=source_file.stream.read(), file_output=output.read(), file_path=actual_file_path, output=None if show_diff is True else show_diff, color_output=config.color_output, ) source_file.stream.close() except ExistingSyntaxErrors: warn(f"{actual_file_path} unable to sort due to existing syntax errors") except IntroducedSyntaxErrors: # pragma: no cover warn(f"{actual_file_path} unable to sort as isort introduces new syntax errors") return changed def find_imports_in_code( code: str, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, unique: Union[bool, ImportKey] = False, top_only: bool = False, **config_kwargs: Any, ) -> Iterator[identify.Import]: """Finds and returns all imports within the provided code string. - **code**: The string of code with imports that need to be sorted. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **unique**: If True, only the first instance of an import is returned. - **top_only**: If True, only return imports that occur before the first function or class. - ****config_kwargs**: Any config modifications. 
""" yield from find_imports_in_stream( input_stream=StringIO(code), config=config, file_path=file_path, unique=unique, top_only=top_only, **config_kwargs, ) def find_imports_in_stream( input_stream: TextIO, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, unique: Union[bool, ImportKey] = False, top_only: bool = False, _seen: Optional[Set[str]] = None, **config_kwargs: Any, ) -> Iterator[identify.Import]: """Finds and returns all imports within the provided code stream. - **input_stream**: The stream of code with imports that need to be sorted. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **unique**: If True, only the first instance of an import is returned. - **top_only**: If True, only return imports that occur before the first function or class. - **_seen**: An optional set of imports already seen. Generally meant only for internal use. - ****config_kwargs**: Any config modifications. """ config = _config(config=config, **config_kwargs) identified_imports = identify.imports( input_stream, config=config, file_path=file_path, top_only=top_only ) if not unique: yield from identified_imports seen: Set[str] = set() if _seen is None else _seen for identified_import in identified_imports: if unique in (True, ImportKey.ALIAS): key = identified_import.statement() elif unique == ImportKey.ATTRIBUTE: key = f"{identified_import.module}.{identified_import.attribute}" elif unique == ImportKey.MODULE: key = identified_import.module elif unique == ImportKey.PACKAGE: # pragma: no branch # type checking ensures this key = identified_import.module.split(".")[0] if key and key not in seen: seen.add(key) yield identified_import def find_imports_in_file( filename: Union[str, Path], config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, unique: Union[bool, ImportKey] = False, top_only: bool = False, **config_kwargs: Any, ) -> Iterator[identify.Import]: """Finds and returns all imports within the provided source file. - **filename**: The name or Path of the file to look for imports in. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **unique**: If True, only the first instance of an import is returned. - **top_only**: If True, only return imports that occur before the first function or class. - ****config_kwargs**: Any config modifications. """ with io.File.read(filename) as source_file: yield from find_imports_in_stream( input_stream=source_file.stream, config=config, file_path=file_path or source_file.path, unique=unique, top_only=top_only, **config_kwargs, ) def find_imports_in_paths( paths: Iterator[Union[str, Path]], config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, unique: Union[bool, ImportKey] = False, top_only: bool = False, **config_kwargs: Any, ) -> Iterator[identify.Import]: """Finds and returns all imports within the provided source paths. - **paths**: A collection of paths to recursively look for imports within. - **extension**: The file extension that contains imports. Defaults to filename extension or py. - **config**: The config object to use when sorting imports. - **file_path**: The disk location where the code string was pulled from. - **unique**: If True, only the first instance of an import is returned. 
- **top_only**: If True, only return imports that occur before the first function or class. - ****config_kwargs**: Any config modifications. """ config = _config(config=config, **config_kwargs) seen: Optional[Set[str]] = set() if unique else None yield from chain( *( find_imports_in_file( file_name, unique=unique, config=config, top_only=top_only, _seen=seen ) for file_name in files.find(map(str, paths), config, [], []) ) ) def _config( path: Optional[Path] = None, config: Config = DEFAULT_CONFIG, **config_kwargs: Any ) -> Config: if path and ( config is DEFAULT_CONFIG and "settings_path" not in config_kwargs and "settings_file" not in config_kwargs ): config_kwargs["settings_path"] = path if config_kwargs: if config is not DEFAULT_CONFIG: raise ValueError( "You can either specify custom configuration options using kwargs or " "passing in a Config object. Not Both!" ) config = Config(**config_kwargs) return config ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/comments.py0000644000000000000000000000164514536412763013250 0ustar00from typing import List, Optional, Tuple def parse(line: str) -> Tuple[str, str]: """Parses import lines for comments and returns back the import statement and the associated comment. """ comment_start = line.find("#") if comment_start != -1: return (line[:comment_start], line[comment_start + 1 :].strip()) return (line, "") def add_to_line( comments: Optional[List[str]], original_string: str = "", removed: bool = False, comment_prefix: str = "", ) -> str: """Returns a string with comments added if removed is not set.""" if removed: return parse(original_string)[0] if not comments: return original_string unique_comments: List[str] = [] for comment in comments: if comment not in unique_comments: unique_comments.append(comment) return f"{parse(original_string)[0]}{comment_prefix} {'; '.join(unique_comments)}" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/core.py0000644000000000000000000005377414536412763012365 0ustar00import textwrap from io import StringIO from itertools import chain from typing import List, TextIO, Union import isort.literal from isort.settings import DEFAULT_CONFIG, Config from . import output, parse from .exceptions import ExistingSyntaxErrors, FileSkipComment from .format import format_natural, remove_whitespace from .settings import FILE_SKIP_COMMENTS CIMPORT_IDENTIFIERS = ("cimport ", "cimport*", "from.cimport") IMPORT_START_IDENTIFIERS = ("from ", "from.import", "import ", "import*") + CIMPORT_IDENTIFIERS DOCSTRING_INDICATORS = ('"""', "'''") COMMENT_INDICATORS = DOCSTRING_INDICATORS + ("'", '"', "#") CODE_SORT_COMMENTS = ( "# isort: list", "# isort: dict", "# isort: set", "# isort: unique-list", "# isort: tuple", "# isort: unique-tuple", "# isort: assignments", ) LITERAL_TYPE_MAPPING = {"(": "tuple", "[": "list", "{": "set"} def process( input_stream: TextIO, output_stream: TextIO, extension: str = "py", raise_on_skip: bool = True, config: Config = DEFAULT_CONFIG, ) -> bool: """Parses stream identifying sections of contiguous imports and sorting them Code with unsorted imports is read from the provided `input_stream`, sorted and then outputted to the specified `output_stream`. - `input_stream`: Text stream with unsorted import sections. - `output_stream`: Text stream to output sorted inputs into. - `config`: Config settings to use when sorting imports. Defaults settings. 
- *Default*: `isort.settings.DEFAULT_CONFIG`. - `extension`: The file extension or file extension rules that should be used. - *Default*: `"py"`. - *Choices*: `["py", "pyi", "pyx"]`. Returns `True` if there were changes that needed to be made (errors present) from what was provided in the input_stream, otherwise `False`. """ line_separator: str = config.line_ending add_imports: List[str] = [format_natural(addition) for addition in config.add_imports] import_section: str = "" next_import_section: str = "" next_cimports: bool = False in_quote: str = "" was_in_quote: bool = False first_comment_index_start: int = -1 first_comment_index_end: int = -1 contains_imports: bool = False in_top_comment: bool = False first_import_section: bool = True indent: str = "" isort_off: bool = False skip_file: bool = False code_sorting: Union[bool, str] = False code_sorting_section: str = "" code_sorting_indent: str = "" cimports: bool = False made_changes: bool = False stripped_line: str = "" end_of_file: bool = False verbose_output: List[str] = [] lines_before: List[str] = [] is_reexport: bool = False sort_section_pointer: int = 0 if config.float_to_top: new_input = "" current = "" isort_off = False for line in chain(input_stream, (None,)): if isort_off and line is not None: if line == "# isort: on\n": isort_off = False new_input += line elif line in ("# isort: split\n", "# isort: off\n", None) or str(line).endswith( "# isort: split\n" ): if line == "# isort: off\n": isort_off = True if current: if add_imports: add_line_separator = line_separator or "\n" current += add_line_separator + add_line_separator.join(add_imports) add_imports = [] parsed = parse.file_contents(current, config=config) verbose_output += parsed.verbose_output extra_space = "" while current and current[-1] == "\n": extra_space += "\n" current = current[:-1] extra_space = extra_space.replace("\n", "", 1) sorted_output = output.sorted_imports( parsed, config, extension, import_type="import" ) made_changes = made_changes or _has_changed( before=current, after=sorted_output, line_separator=parsed.line_separator, ignore_whitespace=config.ignore_whitespace, ) new_input += sorted_output new_input += extra_space current = "" new_input += line or "" else: current += line or "" input_stream = StringIO(new_input) for index, line in enumerate(chain(input_stream, (None,))): if line is None: if index == 0 and not config.force_adds: return False not_imports = True end_of_file = True line = "" if not line_separator: line_separator = "\n" if code_sorting and code_sorting_section: if is_reexport: output_stream.seek(sort_section_pointer, 0) sorted_code = textwrap.indent( isort.literal.assignment( code_sorting_section, str(code_sorting), extension, config=_indented_config(config, indent), ), code_sorting_indent, ) made_changes = made_changes or _has_changed( before=code_sorting_section, after=sorted_code, line_separator=line_separator, ignore_whitespace=config.ignore_whitespace, ) sort_section_pointer += output_stream.write(sorted_code) else: stripped_line = line.strip() if stripped_line and not line_separator: line_separator = line[len(line.rstrip()) :].replace(" ", "").replace("\t", "") for file_skip_comment in FILE_SKIP_COMMENTS: if file_skip_comment in line: if raise_on_skip: raise FileSkipComment("Passed in content") isort_off = True skip_file = True if not in_quote: if stripped_line == "# isort: off": isort_off = True elif stripped_line.startswith("# isort: dont-add-imports"): add_imports = [] elif stripped_line.startswith("# isort: dont-add-import:"): 
import_not_to_add = stripped_line.split("# isort: dont-add-import:", 1)[ 1 ].strip() add_imports = [ import_to_add for import_to_add in add_imports if not import_to_add == import_not_to_add ] if ( (index == 0 or (index in (1, 2) and not contains_imports)) and stripped_line.startswith("#") and stripped_line not in config.section_comments and stripped_line not in CODE_SORT_COMMENTS ): in_top_comment = True elif in_top_comment and ( not line.startswith("#") or stripped_line in config.section_comments or stripped_line in CODE_SORT_COMMENTS ): in_top_comment = False first_comment_index_end = index - 1 was_in_quote = bool(in_quote) if (not stripped_line.startswith("#") or in_quote) and '"' in line or "'" in line: char_index = 0 if first_comment_index_start == -1 and ( line.startswith('"') or line.startswith("'") ): first_comment_index_start = index while char_index < len(line): if line[char_index] == "\\": char_index += 1 elif in_quote: if line[char_index : char_index + len(in_quote)] == in_quote: in_quote = "" if first_comment_index_end < first_comment_index_start: first_comment_index_end = index elif line[char_index] in ("'", '"'): long_quote = line[char_index : char_index + 3] if long_quote in ('"""', "'''"): in_quote = long_quote char_index += 2 else: in_quote = line[char_index] elif line[char_index] == "#": break char_index += 1 not_imports = bool(in_quote) or was_in_quote or in_top_comment or isort_off if not (in_quote or was_in_quote or in_top_comment): if isort_off: if not skip_file and stripped_line == "# isort: on": isort_off = False elif stripped_line.endswith("# isort: split"): not_imports = True elif stripped_line in CODE_SORT_COMMENTS: code_sorting = stripped_line.split("isort: ")[1].strip() code_sorting_indent = line[: -len(line.lstrip())] not_imports = True elif config.sort_reexports and stripped_line.startswith("__all__"): _, rhs = stripped_line.split("=") code_sorting = LITERAL_TYPE_MAPPING.get(rhs.lstrip()[0], "tuple") code_sorting_indent = line[: -len(line.lstrip())] not_imports = True code_sorting_section += line is_reexport = True sort_section_pointer -= len(line) elif code_sorting: if not stripped_line: sorted_code = textwrap.indent( isort.literal.assignment( code_sorting_section, str(code_sorting), extension, config=_indented_config(config, indent), ), code_sorting_indent, ) made_changes = made_changes or _has_changed( before=code_sorting_section, after=sorted_code, line_separator=line_separator, ignore_whitespace=config.ignore_whitespace, ) if is_reexport: output_stream.seek(sort_section_pointer, 0) sort_section_pointer += output_stream.write(sorted_code) not_imports = True code_sorting = False code_sorting_section = "" code_sorting_indent = "" is_reexport = False else: code_sorting_section += line line = "" elif ( stripped_line in config.section_comments or stripped_line in config.section_comments_end ): if import_section and not contains_imports: output_stream.write(import_section) import_section = line not_imports = False else: import_section += line indent = line[: -len(line.lstrip())] elif not (stripped_line or contains_imports): not_imports = True elif ( not stripped_line or stripped_line.startswith("#") and (not indent or indent + line.lstrip() == line) and not config.treat_all_comments_as_code and stripped_line not in config.treat_comments_as_code ): import_section += line elif stripped_line.startswith(IMPORT_START_IDENTIFIERS): new_indent = line[: -len(line.lstrip())] import_statement = line stripped_line = line.strip().split("#")[0] while 
stripped_line.endswith("\\") or ( "(" in stripped_line and ")" not in stripped_line ): if stripped_line.endswith("\\"): while stripped_line and stripped_line.endswith("\\"): line = input_stream.readline() stripped_line = line.strip().split("#")[0] import_statement += line else: while ")" not in stripped_line: line = input_stream.readline() if not line: # end of file without closing parenthesis raise ExistingSyntaxErrors("Parenthesis is not closed") stripped_line = line.strip().split("#")[0] import_statement += line if ( import_statement.lstrip().startswith("from") and "import" not in import_statement ): line = import_statement not_imports = True else: did_contain_imports = contains_imports contains_imports = True cimport_statement: bool = False if ( import_statement.lstrip().startswith(CIMPORT_IDENTIFIERS) or " cimport " in import_statement or " cimport*" in import_statement or " cimport(" in import_statement or ( ".cimport" in import_statement and "cython.cimports" not in import_statement ) # Allow pure python imports. See #2062 ): cimport_statement = True if cimport_statement != cimports or ( new_indent != indent and import_section and (not did_contain_imports or len(new_indent) < len(indent)) ): indent = new_indent if import_section: next_cimports = cimport_statement next_import_section = import_statement import_statement = "" not_imports = True line = "" else: cimports = cimport_statement else: if new_indent != indent: if import_section and did_contain_imports: import_statement = indent + import_statement.lstrip() else: indent = new_indent import_section += import_statement else: not_imports = True sort_section_pointer += len(line) if not_imports: if not was_in_quote and config.lines_before_imports > -1: if line.strip() == "": lines_before += line continue if not import_section: output_stream.write("".join(lines_before)) lines_before = [] raw_import_section: str = import_section if ( add_imports and (stripped_line or end_of_file) and not config.append_only and not in_top_comment and not was_in_quote and not import_section and not line.lstrip().startswith(COMMENT_INDICATORS) and not (line.rstrip().endswith(DOCSTRING_INDICATORS) and "=" not in line) ): add_line_separator = line_separator or "\n" import_section = add_line_separator.join(add_imports) + add_line_separator if end_of_file and index != 0: output_stream.write(add_line_separator) contains_imports = True add_imports = [] if next_import_section and not import_section: # pragma: no cover raw_import_section = import_section = next_import_section next_import_section = "" if import_section: if add_imports and (contains_imports or not config.append_only) and not indent: import_section = ( line_separator.join(add_imports) + line_separator + import_section ) contains_imports = True add_imports = [] if not indent: import_section += line raw_import_section += line if not contains_imports: output_stream.write(import_section) else: leading_whitespace = import_section[: -len(import_section.lstrip())] trailing_whitespace = import_section[len(import_section.rstrip()) :] if first_import_section and not import_section.lstrip( line_separator ).startswith(COMMENT_INDICATORS): import_section = import_section.lstrip(line_separator) raw_import_section = raw_import_section.lstrip(line_separator) first_import_section = False if indent: import_section = "".join( line[len(indent) :] for line in import_section.splitlines(keepends=True) ) parsed_content = parse.file_contents(import_section, config=config) verbose_output += parsed_content.verbose_output 
sorted_import_section = output.sorted_imports( parsed_content, _indented_config(config, indent), extension, import_type="cimport" if cimports else "import", ) if not (import_section.strip() and not sorted_import_section): if indent: sorted_import_section = ( leading_whitespace + textwrap.indent(sorted_import_section, indent).strip() + trailing_whitespace ) made_changes = made_changes or _has_changed( before=raw_import_section, after=sorted_import_section, line_separator=line_separator, ignore_whitespace=config.ignore_whitespace, ) output_stream.write(sorted_import_section) if not line and not indent and next_import_section: output_stream.write(line_separator) if indent: output_stream.write(line) if not next_import_section: indent = "" if next_import_section: cimports = next_cimports contains_imports = True else: contains_imports = False import_section = next_import_section next_import_section = "" else: output_stream.write(line) not_imports = False if stripped_line and not in_quote and not import_section and not next_import_section: if stripped_line == "yield": while not stripped_line or stripped_line == "yield": new_line = input_stream.readline() if not new_line: break output_stream.write(new_line) stripped_line = new_line.strip().split("#")[0] if stripped_line.startswith("raise") or stripped_line.startswith("yield"): while stripped_line.endswith("\\"): new_line = input_stream.readline() if not new_line: break output_stream.write(new_line) stripped_line = new_line.strip().split("#")[0] if made_changes and config.only_modified: for output_str in verbose_output: print(output_str) return made_changes def _indented_config(config: Config, indent: str) -> Config: if not indent: return config return Config( config=config, line_length=max(config.line_length - len(indent), 0), wrap_length=max(config.wrap_length - len(indent), 0), lines_after_imports=1, import_headings=config.import_headings if config.indented_import_headings else {}, import_footers=config.import_footers if config.indented_import_headings else {}, ) def _has_changed(before: str, after: str, line_separator: str, ignore_whitespace: bool) -> bool: if ignore_whitespace: return ( remove_whitespace(before, line_separator=line_separator).strip() != remove_whitespace(after, line_separator=line_separator).strip() ) return before.strip() != after.strip() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/deprecated/__init__.py0000644000000000000000000000000014536412763015242 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/deprecated/finders.py0000644000000000000000000003374514536412763015163 0ustar00"""Finders try to find right section for passed module name""" import importlib.machinery import inspect import os import os.path import re import sys import sysconfig from abc import ABCMeta, abstractmethod from contextlib import contextmanager from fnmatch import fnmatch from functools import lru_cache from glob import glob from pathlib import Path from typing import Dict, Iterable, Iterator, List, Optional, Pattern, Sequence, Tuple, Type from isort import sections from isort.settings import KNOWN_SECTION_MAPPING, Config from isort.utils import exists_case_sensitive try: from pipreqs import pipreqs # type: ignore except ImportError: pipreqs = None try: from pip_api import parse_requirements # type: ignore except ImportError: parse_requirements = None @contextmanager def chdir(path: str) -> 
Iterator[None]: """Context manager for changing dir and restoring previous workdir after exit.""" curdir = os.getcwd() os.chdir(path) try: yield finally: os.chdir(curdir) class BaseFinder(metaclass=ABCMeta): def __init__(self, config: Config) -> None: self.config = config @abstractmethod def find(self, module_name: str) -> Optional[str]: raise NotImplementedError class ForcedSeparateFinder(BaseFinder): def find(self, module_name: str) -> Optional[str]: for forced_separate in self.config.forced_separate: # Ensure all forced_separate patterns will match to end of string path_glob = forced_separate if not forced_separate.endswith("*"): path_glob = f"{forced_separate}*" if fnmatch(module_name, path_glob) or fnmatch(module_name, "." + path_glob): return forced_separate return None class LocalFinder(BaseFinder): def find(self, module_name: str) -> Optional[str]: if module_name.startswith("."): return "LOCALFOLDER" return None class KnownPatternFinder(BaseFinder): def __init__(self, config: Config) -> None: super().__init__(config) self.known_patterns: List[Tuple[Pattern[str], str]] = [] for placement in reversed(config.sections): known_placement = KNOWN_SECTION_MAPPING.get(placement, placement).lower() config_key = f"known_{known_placement}" known_patterns = list( getattr(self.config, config_key, self.config.known_other.get(known_placement, [])) ) known_patterns = [ pattern for known_pattern in known_patterns for pattern in self._parse_known_pattern(known_pattern) ] for known_pattern in known_patterns: regexp = "^" + known_pattern.replace("*", ".*").replace("?", ".?") + "$" self.known_patterns.append((re.compile(regexp), placement)) def _parse_known_pattern(self, pattern: str) -> List[str]: """Expand pattern if identified as a directory and return found sub packages""" if pattern.endswith(os.path.sep): patterns = [ filename for filename in os.listdir(os.path.join(self.config.directory, pattern)) if os.path.isdir(os.path.join(self.config.directory, pattern, filename)) ] else: patterns = [pattern] return patterns def find(self, module_name: str) -> Optional[str]: # Try to find most specific placement instruction match (if any) parts = module_name.split(".") module_names_to_check = (".".join(parts[:first_k]) for first_k in range(len(parts), 0, -1)) for module_name_to_check in module_names_to_check: for pattern, placement in self.known_patterns: if pattern.match(module_name_to_check): return placement return None class PathFinder(BaseFinder): def __init__(self, config: Config, path: str = ".") -> None: super().__init__(config) # restore the original import path (i.e. 
not the path to bin/isort) root_dir = os.path.abspath(path) src_dir = f"{root_dir}/src" self.paths = [root_dir, src_dir] # virtual env self.virtual_env = self.config.virtual_env or os.environ.get("VIRTUAL_ENV") if self.virtual_env: self.virtual_env = os.path.realpath(self.virtual_env) self.virtual_env_src = "" if self.virtual_env: self.virtual_env_src = f"{self.virtual_env}/src/" for venv_path in glob(f"{self.virtual_env}/lib/python*/site-packages"): if venv_path not in self.paths: self.paths.append(venv_path) for nested_venv_path in glob(f"{self.virtual_env}/lib/python*/*/site-packages"): if nested_venv_path not in self.paths: self.paths.append(nested_venv_path) for venv_src_path in glob(f"{self.virtual_env}/src/*"): if os.path.isdir(venv_src_path): self.paths.append(venv_src_path) # conda self.conda_env = self.config.conda_env or os.environ.get("CONDA_PREFIX") or "" if self.conda_env: self.conda_env = os.path.realpath(self.conda_env) for conda_path in glob(f"{self.conda_env}/lib/python*/site-packages"): if conda_path not in self.paths: self.paths.append(conda_path) for nested_conda_path in glob(f"{self.conda_env}/lib/python*/*/site-packages"): if nested_conda_path not in self.paths: self.paths.append(nested_conda_path) # handle case-insensitive paths on windows self.stdlib_lib_prefix = os.path.normcase(sysconfig.get_paths()["stdlib"]) if self.stdlib_lib_prefix not in self.paths: self.paths.append(self.stdlib_lib_prefix) # add system paths for system_path in sys.path[1:]: if system_path not in self.paths: self.paths.append(system_path) def find(self, module_name: str) -> Optional[str]: for prefix in self.paths: package_path = "/".join((prefix, module_name.split(".")[0])) path_obj = Path(package_path).resolve() is_module = ( exists_case_sensitive(package_path + ".py") or any( exists_case_sensitive(package_path + ext_suffix) for ext_suffix in importlib.machinery.EXTENSION_SUFFIXES ) or exists_case_sensitive(package_path + "/__init__.py") ) is_package = exists_case_sensitive(package_path) and os.path.isdir(package_path) if is_module or is_package: if ( "site-packages" in prefix or "dist-packages" in prefix or (self.virtual_env and self.virtual_env_src in prefix) ): return sections.THIRDPARTY if os.path.normcase(prefix) == self.stdlib_lib_prefix: return sections.STDLIB if self.conda_env and self.conda_env in prefix: return sections.THIRDPARTY for src_path in self.config.src_paths: if src_path in path_obj.parents and not self.config.is_skipped(path_obj): return sections.FIRSTPARTY if os.path.normcase(prefix).startswith(self.stdlib_lib_prefix): return sections.STDLIB # pragma: no cover - edge case for one OS. Hard to test. 
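                # The module was found on this prefix but matched none of the
                # site-packages / stdlib / src_paths buckets, so fall back to
                # the configured default section (THIRDPARTY unless overridden).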
return self.config.default_section return None class ReqsBaseFinder(BaseFinder): enabled = False def __init__(self, config: Config, path: str = ".") -> None: super().__init__(config) self.path = path if self.enabled: self.mapping = self._load_mapping() self.names = self._load_names() @abstractmethod def _get_names(self, path: str) -> Iterator[str]: raise NotImplementedError @abstractmethod def _get_files_from_dir(self, path: str) -> Iterator[str]: raise NotImplementedError @staticmethod def _load_mapping() -> Optional[Dict[str, str]]: """Return list of mappings `package_name -> module_name` Example: django-haystack -> haystack """ if not pipreqs: return None path = os.path.dirname(inspect.getfile(pipreqs)) path = os.path.join(path, "mapping") with open(path) as f: mappings: Dict[str, str] = {} # pypi_name: import_name for line in f: import_name, _, pypi_name = line.strip().partition(":") mappings[pypi_name] = import_name return mappings # return dict(tuple(line.strip().split(":")[::-1]) for line in f) def _load_names(self) -> List[str]: """Return list of thirdparty modules from requirements""" names = [] for path in self._get_files(): for name in self._get_names(path): names.append(self._normalize_name(name)) return names @staticmethod def _get_parents(path: str) -> Iterator[str]: prev = "" while path != prev: prev = path yield path path = os.path.dirname(path) def _get_files(self) -> Iterator[str]: """Return paths to all requirements files""" path = os.path.abspath(self.path) if os.path.isfile(path): path = os.path.dirname(path) for path in self._get_parents(path): # noqa yield from self._get_files_from_dir(path) def _normalize_name(self, name: str) -> str: """Convert package name to module name Examples: Django -> django django-haystack -> django_haystack Flask-RESTFul -> flask_restful """ if self.mapping: name = self.mapping.get(name.replace("-", "_"), name) return name.lower().replace("-", "_") def find(self, module_name: str) -> Optional[str]: # required lib not installed yet if not self.enabled: return None module_name, _sep, _submodules = module_name.partition(".") module_name = module_name.lower() if not module_name: return None for name in self.names: if module_name == name: return sections.THIRDPARTY return None class RequirementsFinder(ReqsBaseFinder): exts = (".txt", ".in") enabled = bool(parse_requirements) def _get_files_from_dir(self, path: str) -> Iterator[str]: """Return paths to requirements files from passed dir.""" yield from self._get_files_from_dir_cached(path) @classmethod @lru_cache(maxsize=16) def _get_files_from_dir_cached(cls, path: str) -> List[str]: results = [] for fname in os.listdir(path): if "requirements" not in fname: continue full_path = os.path.join(path, fname) # *requirements*/*.{txt,in} if os.path.isdir(full_path): for subfile_name in os.listdir(full_path): for ext in cls.exts: if subfile_name.endswith(ext): results.append(os.path.join(full_path, subfile_name)) continue # *requirements*.{txt,in} if os.path.isfile(full_path): for ext in cls.exts: if fname.endswith(ext): results.append(full_path) break return results def _get_names(self, path: str) -> Iterator[str]: """Load required packages from path to requirements file""" yield from self._get_names_cached(path) @classmethod @lru_cache(maxsize=16) def _get_names_cached(cls, path: str) -> List[str]: result = [] with chdir(os.path.dirname(path)): requirements = parse_requirements(Path(path)) for req in requirements.values(): if req.name: result.append(req.name) return result class 
DefaultFinder(BaseFinder): def find(self, module_name: str) -> Optional[str]: return self.config.default_section class FindersManager: _default_finders_classes: Sequence[Type[BaseFinder]] = ( ForcedSeparateFinder, LocalFinder, KnownPatternFinder, PathFinder, RequirementsFinder, DefaultFinder, ) def __init__( self, config: Config, finder_classes: Optional[Iterable[Type[BaseFinder]]] = None ) -> None: self.verbose: bool = config.verbose if finder_classes is None: finder_classes = self._default_finders_classes finders: List[BaseFinder] = [] for finder_cls in finder_classes: try: finders.append(finder_cls(config)) except Exception as exception: # if one finder fails to instantiate isort can continue using the rest if self.verbose: print( ( f"{finder_cls.__name__} encountered an error ({exception}) during " "instantiation and cannot be used" ) ) self.finders: Tuple[BaseFinder, ...] = tuple(finders) def find(self, module_name: str) -> Optional[str]: for finder in self.finders: try: section = finder.find(module_name) if section is not None: return section except Exception as exception: # isort has to be able to keep trying to identify the correct # import section even if one approach fails if self.verbose: print( f"{finder.__class__.__name__} encountered an error ({exception}) while " f"trying to identify the {module_name} module" ) return None ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/exceptions.py0000644000000000000000000001562414536412763013606 0ustar00"""All isort specific exception classes should be defined here""" from functools import partial from pathlib import Path from typing import Any, Dict, List, Type, Union from .profiles import profiles class ISortError(Exception): """Base isort exception object from which all isort sourced exceptions should inherit""" def __reduce__(self): # type: ignore return (partial(type(self), **self.__dict__), ()) class InvalidSettingsPath(ISortError): """Raised when a settings path is provided that is neither a valid file or directory""" def __init__(self, settings_path: str): super().__init__( f"isort was told to use the settings_path: {settings_path} as the base directory or " "file that represents the starting point of config file discovery, but it does not " "exist." ) self.settings_path = settings_path class ExistingSyntaxErrors(ISortError): """Raised when isort is told to sort imports within code that has existing syntax errors""" def __init__(self, file_path: str): super().__init__( f"isort was told to sort imports within code that contains syntax errors: " f"{file_path}." ) self.file_path = file_path class IntroducedSyntaxErrors(ISortError): """Raised when isort has introduced a syntax error in the process of sorting imports""" def __init__(self, file_path: str): super().__init__( f"isort introduced syntax errors when attempting to sort the imports contained within " f"{file_path}." 
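ISortError's __reduce__ above rebuilds an instance from functools.partial over its __dict__, which keeps these exceptions picklable even though their constructors take custom arguments, presumably so errors can cross process boundaries (for example when isort runs files in parallel). A quick sanity-check sketch, assuming isort is installed so isort.exceptions is importable:

import pickle

from isort.exceptions import InvalidSettingsPath

# Round-trip an isort exception through pickle; __reduce__ rebuilds the instance
# from the keyword arguments stored in __dict__, so attributes survive intact.
error = InvalidSettingsPath("/no/such/settings/dir")
restored = pickle.loads(pickle.dumps(error))

assert isinstance(restored, InvalidSettingsPath)
assert restored.settings_path == "/no/such/settings/dir"
print(restored)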
) self.file_path = file_path class FileSkipped(ISortError): """Should be raised when a file is skipped for any reason""" def __init__(self, message: str, file_path: str): super().__init__(message) self.message = message self.file_path = file_path class FileSkipComment(FileSkipped): """Raised when an entire file is skipped due to a isort skip file comment""" def __init__(self, file_path: str, **kwargs: str): super().__init__( f"{file_path} contains a file skip comment and was skipped.", file_path=file_path ) class FileSkipSetting(FileSkipped): """Raised when an entire file is skipped due to provided isort settings""" def __init__(self, file_path: str, **kwargs: str): super().__init__( f"{file_path} was skipped as it's listed in 'skip' setting" " or matches a glob in 'skip_glob' setting", file_path=file_path, ) class ProfileDoesNotExist(ISortError): """Raised when a profile is set by the user that doesn't exist""" def __init__(self, profile: str): super().__init__( f"Specified profile of {profile} does not exist. " f"Available profiles: {','.join(profiles)}." ) self.profile = profile class SortingFunctionDoesNotExist(ISortError): """Raised when the specified sorting function isn't available""" def __init__(self, sort_order: str, available_sort_orders: List[str]): super().__init__( f"Specified sort_order of {sort_order} does not exist. " f"Available sort_orders: {','.join(available_sort_orders)}." ) self.sort_order = sort_order self.available_sort_orders = available_sort_orders class FormattingPluginDoesNotExist(ISortError): """Raised when a formatting plugin is set by the user that doesn't exist""" def __init__(self, formatter: str): super().__init__(f"Specified formatting plugin of {formatter} does not exist. ") self.formatter = formatter class LiteralParsingFailure(ISortError): """Raised when one of isorts literal sorting comments is used but isort can't parse the the given data structure. """ def __init__(self, code: str, original_error: Union[Exception, Type[Exception]]): super().__init__( f"isort failed to parse the given literal {code}. It's important to note " "that isort literal sorting only supports simple literals parsable by " f"ast.literal_eval which gave the exception of {original_error}." ) self.code = code self.original_error = original_error class LiteralSortTypeMismatch(ISortError): """Raised when an isort literal sorting comment is used, with a type that doesn't match the supplied data structure's type. """ def __init__(self, kind: type, expected_kind: type): super().__init__( f"isort was told to sort a literal of type {expected_kind} but was given " f"a literal of type {kind}." ) self.kind = kind self.expected_kind = expected_kind class AssignmentsFormatMismatch(ISortError): """Raised when isort is told to sort assignments but the format of the assignment section doesn't match isort's expectation. """ def __init__(self, code: str): super().__init__( "isort was told to sort a section of assignments, however the given code:\n\n" f"{code}\n\n" "Does not match isort's strict single line formatting requirement for assignment " "sorting:\n\n" "{variable_name} = {value}\n" "{variable_name2} = {value2}\n" "...\n\n" ) self.code = code class UnsupportedSettings(ISortError): """Raised when settings are passed into isort (either from config, CLI, or runtime) that it doesn't support. 
""" @staticmethod def _format_option(name: str, value: Any, source: str) -> str: return f"\t- {name} = {value} (source: '{source}')" def __init__(self, unsupported_settings: Dict[str, Dict[str, str]]): errors = "\n".join( self._format_option(name, **option) for name, option in unsupported_settings.items() ) super().__init__( "isort was provided settings that it doesn't support:\n\n" f"{errors}\n\n" "For a complete and up-to-date listing of supported settings see: " "https://pycqa.github.io/isort/docs/configuration/options.\n" ) self.unsupported_settings = unsupported_settings class UnsupportedEncoding(ISortError): """Raised when isort encounters an encoding error while trying to read a file""" def __init__(self, filename: Union[str, Path]): super().__init__(f"Unknown or unsupported encoding in {filename}") self.filename = filename class MissingSection(ISortError): """Raised when isort encounters an import that matches a section that is not defined""" def __init__(self, import_module: str, section: str): super().__init__( f"Found {import_module} import while parsing, but {section} was not included " "in the `sections` setting of your config. Please add it before continuing\n" "See https://pycqa.github.io/isort/#custom-sections-and-ordering " "for more info." ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/files.py0000644000000000000000000000306514536412763012523 0ustar00import os from pathlib import Path from typing import Iterable, Iterator, List, Set from isort.settings import Config def find( paths: Iterable[str], config: Config, skipped: List[str], broken: List[str] ) -> Iterator[str]: """Fines and provides an iterator for all Python source files defined in paths.""" visited_dirs: Set[Path] = set() for path in paths: if os.path.isdir(path): for dirpath, dirnames, filenames in os.walk( path, topdown=True, followlinks=config.follow_links ): base_path = Path(dirpath) for dirname in list(dirnames): full_path = base_path / dirname resolved_path = full_path.resolve() if config.is_skipped(full_path): skipped.append(dirname) dirnames.remove(dirname) else: if resolved_path in visited_dirs: # pragma: no cover dirnames.remove(dirname) visited_dirs.add(resolved_path) for filename in filenames: filepath = os.path.join(dirpath, filename) if config.is_supported_filetype(filepath): if config.is_skipped(Path(os.path.abspath(filepath))): skipped.append(filename) else: yield filepath elif not os.path.exists(path): broken.append(path) else: yield path ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/format.py0000644000000000000000000001255314536412763012713 0ustar00import re import sys from datetime import datetime from difflib import unified_diff from pathlib import Path from typing import Optional, TextIO try: import colorama except ImportError: colorama_unavailable = True else: colorama_unavailable = False ADDED_LINE_PATTERN = re.compile(r"\+[^+]") REMOVED_LINE_PATTERN = re.compile(r"-[^-]") def format_simplified(import_line: str) -> str: import_line = import_line.strip() if import_line.startswith("from "): import_line = import_line.replace("from ", "") import_line = import_line.replace(" import ", ".") elif import_line.startswith("import "): import_line = import_line.replace("import ", "") return import_line def format_natural(import_line: str) -> str: import_line = import_line.strip() if not import_line.startswith("from ") and not import_line.startswith("import 
"): if "." not in import_line: return f"import {import_line}" parts = import_line.split(".") end = parts.pop(-1) return f"from {'.'.join(parts)} import {end}" return import_line def show_unified_diff( *, file_input: str, file_output: str, file_path: Optional[Path], output: Optional[TextIO] = None, color_output: bool = False, ) -> None: """Shows a unified_diff for the provided input and output against the provided file path. - **file_input**: A string that represents the contents of a file before changes. - **file_output**: A string that represents the contents of a file after changes. - **file_path**: A Path object that represents the file path of the file being changed. - **output**: A stream to output the diff to. If non is provided uses sys.stdout. - **color_output**: Use color in output if True. """ printer = create_terminal_printer(color_output, output) file_name = "" if file_path is None else str(file_path) file_mtime = str( datetime.now() if file_path is None else datetime.fromtimestamp(file_path.stat().st_mtime) ) unified_diff_lines = unified_diff( file_input.splitlines(keepends=True), file_output.splitlines(keepends=True), fromfile=file_name + ":before", tofile=file_name + ":after", fromfiledate=file_mtime, tofiledate=str(datetime.now()), ) for line in unified_diff_lines: printer.diff_line(line) def ask_whether_to_apply_changes_to_file(file_path: str) -> bool: answer = None while answer not in ("yes", "y", "no", "n", "quit", "q"): answer = input(f"Apply suggested changes to '{file_path}' [y/n/q]? ") # nosec answer = answer.lower() if answer in ("no", "n"): return False if answer in ("quit", "q"): sys.exit(1) return True def remove_whitespace(content: str, line_separator: str = "\n") -> str: content = content.replace(line_separator, "").replace(" ", "").replace("\x0c", "") return content class BasicPrinter: ERROR = "ERROR" SUCCESS = "SUCCESS" def __init__(self, error: str, success: str, output: Optional[TextIO] = None): self.output = output or sys.stdout self.success_message = success self.error_message = error def success(self, message: str) -> None: print(self.success_message.format(success=self.SUCCESS, message=message), file=self.output) def error(self, message: str) -> None: print(self.error_message.format(error=self.ERROR, message=message), file=sys.stderr) def diff_line(self, line: str) -> None: self.output.write(line) class ColoramaPrinter(BasicPrinter): def __init__(self, error: str, success: str, output: Optional[TextIO]): super().__init__(error, success, output=output) # Note: this constants are instance variables instead ofs class variables # because they refer to colorama which might not be installed. 
self.ERROR = self.style_text("ERROR", colorama.Fore.RED) self.SUCCESS = self.style_text("SUCCESS", colorama.Fore.GREEN) self.ADDED_LINE = colorama.Fore.GREEN self.REMOVED_LINE = colorama.Fore.RED @staticmethod def style_text(text: str, style: Optional[str] = None) -> str: if style is None: return text return style + text + str(colorama.Style.RESET_ALL) def diff_line(self, line: str) -> None: style = None if re.match(ADDED_LINE_PATTERN, line): style = self.ADDED_LINE elif re.match(REMOVED_LINE_PATTERN, line): style = self.REMOVED_LINE self.output.write(self.style_text(line, style)) def create_terminal_printer( color: bool, output: Optional[TextIO] = None, error: str = "", success: str = "" ) -> BasicPrinter: if color and colorama_unavailable: no_colorama_message = ( "\n" "Sorry, but to use --color (color_output) the colorama python package is required.\n\n" "Reference: https://pypi.org/project/colorama/\n\n" "You can either install it separately on your system or as the colors extra " "for isort. Ex: \n\n" "$ pip install isort[colors]\n" ) print(no_colorama_message, file=sys.stderr) sys.exit(1) if not colorama_unavailable: colorama.init(strip=False) return ( ColoramaPrinter(error, success, output) if color else BasicPrinter(error, success, output) ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/hooks.py0000644000000000000000000000641214536412763012543 0ustar00"""Defines a git hook to allow pre-commit warnings and errors about import order. usage: exit_code = git_hook(strict=True|False, modify=True|False) """ import os import subprocess # nosec - Needed for hook from pathlib import Path from typing import List, Optional from isort import Config, api, exceptions def get_output(command: List[str]) -> str: """Run a command and return raw output :param str command: the command to run :returns: the stdout output of the command """ result = subprocess.run(command, stdout=subprocess.PIPE, check=True) # nosec - trusted input return result.stdout.decode() def get_lines(command: List[str]) -> List[str]: """Run a command and return lines of output :param str command: the command to run :returns: list of whitespace-stripped lines output by command """ stdout = get_output(command) return [line.strip() for line in stdout.splitlines()] def git_hook( strict: bool = False, modify: bool = False, lazy: bool = False, settings_file: str = "", directories: Optional[List[str]] = None, ) -> int: """Git pre-commit hook to check staged files for isort errors :param bool strict - if True, return number of errors on exit, causing the hook to fail. If False, return zero so it will just act as a warning. :param bool modify - if True, fix the sources if they are not sorted properly. If False, only report result without modifying anything. :param bool lazy - if True, also check/fix unstaged files. This is useful if you frequently use ``git commit -a`` for example. If False, only check/fix the staged files for isort errors. :param str settings_file - A path to a file to be used as the configuration file for this run. When settings_file is the empty string, the configuration file will be searched starting at the directory containing the first staged file, if any, and going upward in the directory structure. :param list[str] directories - A list of directories to restrict the hook to. :return number of errors if in strict mode, 0 otherwise. 
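The git_hook function described above is meant to be invoked from a repository's pre-commit hook. A minimal sketch of such a hook script follows, assuming isort is importable in the environment the hook runs in; the strict/modify choices are arbitrary examples, not a recommendation.

#!/usr/bin/env python
"""Minimal .git/hooks/pre-commit sketch that delegates to isort's git_hook."""
import sys

from isort.hooks import git_hook

# strict=True makes the commit fail (non-zero exit) when staged imports are
# unsorted; modify=False only reports instead of rewriting the working tree.
sys.exit(git_hook(strict=True, modify=False))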
""" # Get list of files modified and staged diff_cmd = ["git", "diff-index", "--cached", "--name-only", "--diff-filter=ACMRTUXB", "HEAD"] if lazy: diff_cmd.remove("--cached") if directories: diff_cmd.extend(directories) files_modified = get_lines(diff_cmd) if not files_modified: return 0 errors = 0 config = Config( settings_file=settings_file, settings_path=os.path.dirname(os.path.abspath(files_modified[0])), ) for filename in files_modified: if filename.endswith(".py"): # Get the staged contents of the file staged_cmd = ["git", "show", f":{filename}"] staged_contents = get_output(staged_cmd) try: if not api.check_code_string( staged_contents, file_path=Path(filename), config=config ): errors += 1 if modify: api.sort_file(filename, config=config) except exceptions.FileSkipped: # pragma: no cover pass return errors if strict else 0 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/identify.py0000644000000000000000000002026114536412763013231 0ustar00"""Fast stream based import identification. Eventually this will likely replace parse.py """ from functools import partial from pathlib import Path from typing import Iterator, NamedTuple, Optional, TextIO, Tuple from isort.parse import normalize_line, skip_line, strip_syntax from .comments import parse as parse_comments from .settings import DEFAULT_CONFIG, Config STATEMENT_DECLARATIONS: Tuple[str, ...] = ("def ", "cdef ", "cpdef ", "class ", "@", "async def") class Import(NamedTuple): line_number: int indented: bool module: str attribute: Optional[str] = None alias: Optional[str] = None cimport: bool = False file_path: Optional[Path] = None def statement(self) -> str: import_cmd = "cimport" if self.cimport else "import" if self.attribute: import_string = f"from {self.module} {import_cmd} {self.attribute}" else: import_string = f"{import_cmd} {self.module}" if self.alias: import_string += f" as {self.alias}" return import_string def __str__(self) -> str: return ( f"{self.file_path or ''}:{self.line_number} " f"{'indented ' if self.indented else ''}{self.statement()}" ) def imports( input_stream: TextIO, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, top_only: bool = False, ) -> Iterator[Import]: """Parses a python file taking out and categorizing imports.""" in_quote = "" indexed_input = enumerate(input_stream) for index, raw_line in indexed_input: (skipping_line, in_quote) = skip_line( raw_line, in_quote=in_quote, index=index, section_comments=config.section_comments ) if top_only and not in_quote and raw_line.startswith(STATEMENT_DECLARATIONS): break if skipping_line: continue stripped_line = raw_line.strip().split("#")[0] if stripped_line.startswith("raise") or stripped_line.startswith("yield"): if stripped_line == "yield": while not stripped_line or stripped_line == "yield": try: index, next_line = next(indexed_input) except StopIteration: break stripped_line = next_line.strip().split("#")[0] while stripped_line.endswith("\\"): try: index, next_line = next(indexed_input) except StopIteration: break stripped_line = next_line.strip().split("#")[0] continue # pragma: no cover line, *end_of_line_comment = raw_line.split("#", 1) statements = [line.strip() for line in line.split(";")] if end_of_line_comment: statements[-1] = f"{statements[-1]}#{end_of_line_comment[0]}" for statement in statements: line, _raw_line = normalize_line(statement) if line.startswith(("import ", "cimport ")): type_of_import = "straight" elif line.startswith("from "): type_of_import = 
"from" else: continue # pragma: no cover import_string, _ = parse_comments(line) normalized_import_string = ( import_string.replace("import(", "import (").replace("\\", " ").replace("\n", " ") ) cimports: bool = ( " cimport " in normalized_import_string or normalized_import_string.startswith("cimport") ) identified_import = partial( Import, index + 1, # line numbers use 1 based indexing raw_line.startswith((" ", "\t")), cimport=cimports, file_path=file_path, ) if "(" in line.split("#", 1)[0]: while not line.split("#")[0].strip().endswith(")"): try: index, next_line = next(indexed_input) except StopIteration: break line, _ = parse_comments(next_line) import_string += "\n" + line else: while line.strip().endswith("\\"): try: index, next_line = next(indexed_input) except StopIteration: break line, _ = parse_comments(next_line) # Still need to check for parentheses after an escaped line if "(" in line.split("#")[0] and ")" not in line.split("#")[0]: import_string += "\n" + line while not line.split("#")[0].strip().endswith(")"): try: index, next_line = next(indexed_input) except StopIteration: break line, _ = parse_comments(next_line) import_string += "\n" + line else: if import_string.strip().endswith( (" import", " cimport") ) or line.strip().startswith(("import ", "cimport ")): import_string += "\n" + line else: import_string = ( import_string.rstrip().rstrip("\\") + " " + line.lstrip() ) if type_of_import == "from": import_string = ( import_string.replace("import(", "import (") .replace("\\", " ") .replace("\n", " ") ) parts = import_string.split(" cimport " if cimports else " import ") from_import = parts[0].split(" ") import_string = (" cimport " if cimports else " import ").join( [from_import[0] + " " + "".join(from_import[1:])] + parts[1:] ) just_imports = [ item.replace("{|", "{ ").replace("|}", " }") for item in strip_syntax(import_string).split() ] direct_imports = just_imports[1:] top_level_module = "" if "as" in just_imports and (just_imports.index("as") + 1) < len(just_imports): while "as" in just_imports: attribute = None as_index = just_imports.index("as") if type_of_import == "from": attribute = just_imports[as_index - 1] top_level_module = just_imports[0] module = top_level_module + "." 
+ attribute alias = just_imports[as_index + 1] direct_imports.remove(attribute) direct_imports.remove(alias) direct_imports.remove("as") just_imports[1:] = direct_imports if attribute == alias and config.remove_redundant_aliases: yield identified_import(top_level_module, attribute) else: yield identified_import(top_level_module, attribute, alias=alias) else: module = just_imports[as_index - 1] alias = just_imports[as_index + 1] just_imports.remove(alias) just_imports.remove("as") just_imports.remove(module) if module == alias and config.remove_redundant_aliases: yield identified_import(module) else: yield identified_import(module, alias=alias) if just_imports: if type_of_import == "from": module = just_imports.pop(0) for attribute in just_imports: yield identified_import(module, attribute) else: for module in just_imports: yield identified_import(module) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/io.py0000644000000000000000000000425014536412763012025 0ustar00"""Defines any IO utilities used by isort""" import dataclasses import re import tokenize from contextlib import contextmanager from io import BytesIO, StringIO, TextIOWrapper from pathlib import Path from typing import Any, Callable, Iterator, TextIO, Union from isort.exceptions import UnsupportedEncoding _ENCODING_PATTERN = re.compile(rb"^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)") @dataclasses.dataclass(frozen=True) class File: stream: TextIO path: Path encoding: str @staticmethod def detect_encoding(filename: Union[str, Path], readline: Callable[[], bytes]) -> str: try: return tokenize.detect_encoding(readline)[0] except Exception: raise UnsupportedEncoding(filename) @staticmethod def from_contents(contents: str, filename: str) -> "File": encoding = File.detect_encoding(filename, BytesIO(contents.encode("utf-8")).readline) return File(stream=StringIO(contents), path=Path(filename).resolve(), encoding=encoding) @property def extension(self) -> str: return self.path.suffix.lstrip(".") @staticmethod def _open(filename: Union[str, Path]) -> TextIOWrapper: """Open a file in read only mode using the encoding detected by detect_encoding(). 
""" buffer = open(filename, "rb") try: encoding = File.detect_encoding(filename, buffer.readline) buffer.seek(0) text = TextIOWrapper(buffer, encoding, line_buffering=True, newline="") text.mode = "r" # type: ignore return text except Exception: buffer.close() raise @staticmethod @contextmanager def read(filename: Union[str, Path]) -> Iterator["File"]: file_path = Path(filename).resolve() stream = None try: stream = File._open(file_path) yield File(stream=stream, path=file_path, encoding=stream.encoding) finally: if stream is not None: stream.close() class _EmptyIO(StringIO): def write(self, *args: Any, **kwargs: Any) -> None: # type: ignore # skipcq: PTC-W0049 pass Empty = _EmptyIO() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/literal.py0000644000000000000000000000720114536412763013051 0ustar00import ast from pprint import PrettyPrinter from typing import Any, Callable, Dict, List, Set, Tuple from isort.exceptions import ( AssignmentsFormatMismatch, LiteralParsingFailure, LiteralSortTypeMismatch, ) from isort.settings import DEFAULT_CONFIG, Config class ISortPrettyPrinter(PrettyPrinter): """an isort customized pretty printer for sorted literals""" def __init__(self, config: Config): super().__init__(width=config.line_length, compact=True) type_mapping: Dict[str, Tuple[type, Callable[[Any, ISortPrettyPrinter], str]]] = {} def assignments(code: str) -> str: values = {} for line in code.splitlines(keepends=True): if not line.strip(): continue if " = " not in line: raise AssignmentsFormatMismatch(code) variable_name, value = line.split(" = ", 1) values[variable_name] = value return "".join( f"{variable_name} = {values[variable_name]}" for variable_name in sorted(values.keys()) ) def assignment(code: str, sort_type: str, extension: str, config: Config = DEFAULT_CONFIG) -> str: """Sorts the literal present within the provided code against the provided sort type, returning the sorted representation of the source code. """ if sort_type == "assignments": return assignments(code) if sort_type not in type_mapping: raise ValueError( "Trying to sort using an undefined sort_type. " f"Defined sort types are {', '.join(type_mapping.keys())}." 
) variable_name, literal = code.split("=") variable_name = variable_name.strip() literal = literal.lstrip() try: value = ast.literal_eval(literal) except Exception as error: raise LiteralParsingFailure(code, error) expected_type, sort_function = type_mapping[sort_type] if type(value) != expected_type: raise LiteralSortTypeMismatch(type(value), expected_type) printer = ISortPrettyPrinter(config) sorted_value_code = f"{variable_name} = {sort_function(value, printer)}" if config.formatting_function: sorted_value_code = config.formatting_function( sorted_value_code, extension, config ).rstrip() sorted_value_code += code[len(code.rstrip()) :] return sorted_value_code def register_type( name: str, kind: type ) -> Callable[[Callable[[Any, ISortPrettyPrinter], str]], Callable[[Any, ISortPrettyPrinter], str]]: """Registers a new literal sort type.""" def wrap( function: Callable[[Any, ISortPrettyPrinter], str] ) -> Callable[[Any, ISortPrettyPrinter], str]: type_mapping[name] = (kind, function) return function return wrap @register_type("dict", dict) def _dict(value: Dict[Any, Any], printer: ISortPrettyPrinter) -> str: return printer.pformat(dict(sorted(value.items(), key=lambda item: item[1]))) # type: ignore @register_type("list", list) def _list(value: List[Any], printer: ISortPrettyPrinter) -> str: return printer.pformat(sorted(value)) @register_type("unique-list", list) def _unique_list(value: List[Any], printer: ISortPrettyPrinter) -> str: return printer.pformat(list(sorted(set(value)))) @register_type("set", set) def _set(value: Set[Any], printer: ISortPrettyPrinter) -> str: return "{" + printer.pformat(tuple(sorted(value)))[1:-1] + "}" @register_type("tuple", tuple) def _tuple(value: Tuple[Any, ...], printer: ISortPrettyPrinter) -> str: return printer.pformat(tuple(sorted(value))) @register_type("unique-tuple", tuple) def _unique_tuple(value: Tuple[Any, ...], printer: ISortPrettyPrinter) -> str: return printer.pformat(tuple(sorted(set(value)))) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/logo.py0000644000000000000000000000060414536412763012355 0ustar00from ._version import __version__ ASCII_ART = rf""" _ _ (_) ___ ___ _ __| |_ | |/ _/ / _ \/ '__ _/ | |\__ \/\_\/| | | |_ |_|\___/\___/\_/ \_/ isort your imports, so you don't have to. VERSION {__version__} """ __doc__ = f""" ```python {ASCII_ART} ``` """ ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/main.py0000644000000000000000000013347314536412763012354 0ustar00"""Tool for sorting imports alphabetically, and automatically separated into sections.""" import argparse import functools import json import os import sys from gettext import gettext as _ from io import TextIOWrapper from pathlib import Path from typing import Any, Dict, List, Optional, Sequence, Union from warnings import warn from . 
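The literal sort helpers registered above back isort's literal sorting support. A small usage sketch calling isort.literal.assignment directly is below; the input values are made up and the exact pretty-printed spacing may differ from what is shown in the comments.

from isort.literal import assignment

# "list": the right-hand side is literal_eval'd, sorted, and pretty-printed back.
print(assignment("books = ['b', 'c', 'a']", "list", "py"))
# roughly: books = ['a', 'b', 'c']

# "assignments": whole single-line assignments are sorted by variable name instead.
print(assignment("z = 2\na = 1\n", "assignments", "py"))
# roughly: a = 1
#          z = 2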
import __version__, api, files, sections from .exceptions import FileSkipped, ISortError, UnsupportedEncoding from .format import create_terminal_printer from .logo import ASCII_ART from .profiles import profiles from .settings import VALID_PY_TARGETS, Config, find_all_configs from .utils import Trie from .wrap_modes import WrapModes DEPRECATED_SINGLE_DASH_ARGS = { "-ac", "-af", "-ca", "-cs", "-df", "-ds", "-dt", "-fas", "-fass", "-ff", "-fgw", "-fss", "-lai", "-lbt", "-le", "-ls", "-nis", "-nlb", "-ot", "-rr", "-sd", "-sg", "-sl", "-sp", "-tc", "-wl", "-ws", } QUICK_GUIDE = f""" {ASCII_ART} Nothing to do: no files or paths have have been passed in! Try one of the following: `isort .` - sort all Python files, starting from the current directory, recursively. `isort . --interactive` - Do the same, but ask before making any changes. `isort . --check --diff` - Check to see if imports are correctly sorted within this project. `isort --help` - In-depth information about isort's available command-line options. Visit https://pycqa.github.io/isort/ for complete information about how to use isort. """ class SortAttempt: def __init__(self, incorrectly_sorted: bool, skipped: bool, supported_encoding: bool) -> None: self.incorrectly_sorted = incorrectly_sorted self.skipped = skipped self.supported_encoding = supported_encoding def sort_imports( file_name: str, config: Config, check: bool = False, ask_to_apply: bool = False, write_to_stdout: bool = False, **kwargs: Any, ) -> Optional[SortAttempt]: incorrectly_sorted: bool = False skipped: bool = False try: if check: try: incorrectly_sorted = not api.check_file(file_name, config=config, **kwargs) except FileSkipped: skipped = True return SortAttempt(incorrectly_sorted, skipped, True) try: incorrectly_sorted = not api.sort_file( file_name, config=config, ask_to_apply=ask_to_apply, write_to_stdout=write_to_stdout, **kwargs, ) except FileSkipped: skipped = True return SortAttempt(incorrectly_sorted, skipped, True) except (OSError, ValueError) as error: warn(f"Unable to parse file {file_name} due to {error}") return None except UnsupportedEncoding: if config.verbose: warn(f"Encoding not supported for {file_name}") return SortAttempt(incorrectly_sorted, skipped, False) except ISortError as error: _print_hard_fail(config, message=str(error)) sys.exit(1) except Exception: _print_hard_fail(config, offending_file=file_name) raise def _print_hard_fail( config: Config, offending_file: Optional[str] = None, message: Optional[str] = None ) -> None: """Fail on unrecoverable exception with custom message.""" message = message or ( f"Unrecoverable exception thrown when parsing {offending_file or ''}! " "This should NEVER happen.\n" "If encountered, please open an issue: https://github.com/PyCQA/isort/issues/new" ) printer = create_terminal_printer( color=config.color_output, error=config.format_error, success=config.format_success ) printer.error(message) def _build_arg_parser() -> argparse.ArgumentParser: parser = argparse.ArgumentParser( description="Sort Python import definitions alphabetically " "within logical sections. Run with no arguments to see a quick " "start guide, otherwise, one or more files/directories/stdin must be provided. " "Use `-` as the first argument to represent stdin. Use --interactive to use the pre 5.0.0 " "interactive behavior." 
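sort_imports above is essentially a thin wrapper that chooses between api.check_file and api.sort_file and folds the outcome into a SortAttempt. A minimal sketch of the same check-then-fix flow using the public API directly; the target file name is hypothetical and the profile choice is arbitrary.

from isort import Config, api

config = Config(profile="black")   # settings mirror the CLI options
target = "example_module.py"       # hypothetical file

# check_file returns True when the file's imports are already sorted/formatted;
# sort_file rewrites the file in place when they are not.
if not api.check_file(target, config=config):
    api.sort_file(target, config=config)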
" " "If you've used isort 4 but are new to isort 5, see the upgrading guide: " "https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html", add_help=False, # prevent help option from appearing in "optional arguments" group ) general_group = parser.add_argument_group("general options") target_group = parser.add_argument_group("target options") output_group = parser.add_argument_group("general output options") inline_args_group = output_group.add_mutually_exclusive_group() section_group = parser.add_argument_group("section output options") deprecated_group = parser.add_argument_group("deprecated options") general_group.add_argument( "-h", "--help", action="help", default=argparse.SUPPRESS, help=_("show this help message and exit"), ) general_group.add_argument( "-V", "--version", action="store_true", dest="show_version", help="Displays the currently installed version of isort.", ) general_group.add_argument( "--vn", "--version-number", action="version", version=__version__, help="Returns just the current version number without the logo", ) general_group.add_argument( "-v", "--verbose", action="store_true", dest="verbose", help="Shows verbose output, such as when files are skipped or when a check is successful.", ) general_group.add_argument( "--only-modified", "--om", dest="only_modified", action="store_true", help="Suppresses verbose output for non-modified files.", ) general_group.add_argument( "--dedup-headings", dest="dedup_headings", action="store_true", help="Tells isort to only show an identical custom import heading comment once, even if" " there are multiple sections with the comment set.", ) general_group.add_argument( "-q", "--quiet", action="store_true", dest="quiet", help="Shows extra quiet output, only errors are outputted.", ) general_group.add_argument( "-d", "--stdout", help="Force resulting output to stdout, instead of in-place.", dest="write_to_stdout", action="store_true", ) general_group.add_argument( "--overwrite-in-place", help="Tells isort to overwrite in place using the same file handle. " "Comes at a performance and memory usage penalty over its standard " "approach but ensures all file flags and modes stay unchanged.", dest="overwrite_in_place", action="store_true", ) general_group.add_argument( "--show-config", dest="show_config", action="store_true", help="See isort's determined config, as well as sources of config options.", ) general_group.add_argument( "--show-files", dest="show_files", action="store_true", help="See the files isort will be run against with the current config options.", ) general_group.add_argument( "--df", "--diff", dest="show_diff", action="store_true", help="Prints a diff of all the changes isort would make to a file, instead of " "changing it in place", ) general_group.add_argument( "-c", "--check-only", "--check", action="store_true", dest="check", help="Checks the file for unsorted / unformatted imports and prints them to the " "command line without modifying the file. 
Returns 0 when nothing would change and " "returns 1 when the file would be reformatted.", ) general_group.add_argument( "--ws", "--ignore-whitespace", action="store_true", dest="ignore_whitespace", help="Tells isort to ignore whitespace differences when --check-only is being used.", ) general_group.add_argument( "--sp", "--settings-path", "--settings-file", "--settings", dest="settings_path", help="Explicitly set the settings path or file instead of auto determining " "based on file location.", ) general_group.add_argument( "--cr", "--config-root", dest="config_root", help="Explicitly set the config root for resolving all configs. When used " "with the --resolve-all-configs flag, isort will look at all sub-folders " "in this config root to resolve config files and sort files based on the " "closest available config(if any)", ) general_group.add_argument( "--resolve-all-configs", dest="resolve_all_configs", action="store_true", help="Tells isort to resolve the configs for all sub-directories " "and sort files in terms of its closest config files.", ) general_group.add_argument( "--profile", dest="profile", type=str, help="Base profile type to use for configuration. " f"Profiles include: {', '.join(profiles.keys())}. As well as any shared profiles.", ) general_group.add_argument( "--old-finders", "--magic-placement", dest="old_finders", action="store_true", help="Use the old deprecated finder logic that relies on environment introspection magic.", ) general_group.add_argument( "-j", "--jobs", help="Number of files to process in parallel. Negative value means use number of CPUs.", dest="jobs", type=int, nargs="?", const=-1, ) general_group.add_argument( "--ac", "--atomic", dest="atomic", action="store_true", help="Ensures the output doesn't save if the resulting file contains syntax errors.", ) general_group.add_argument( "--interactive", dest="ask_to_apply", action="store_true", help="Tells isort to apply changes interactively.", ) general_group.add_argument( "--format-error", dest="format_error", help="Override the format used to print errors.", ) general_group.add_argument( "--format-success", dest="format_success", help="Override the format used to print success.", ) general_group.add_argument( "--srx", "--sort-reexports", dest="sort_reexports", action="store_true", help="Automatically sort all re-exports (module level __all__ collections)", ) target_group.add_argument( "files", nargs="*", help="One or more Python source files that need their imports sorted." ) target_group.add_argument( "--filter-files", dest="filter_files", action="store_true", help="Tells isort to filter files even when they are explicitly passed in as " "part of the CLI command.", ) target_group.add_argument( "-s", "--skip", help="Files that isort should skip over. If you want to skip multiple " "files you should specify twice: --skip file1 --skip file2. Values can be " "file names, directory names or file paths. To skip all files in a nested path " "use --skip-glob.", dest="skip", action="append", ) target_group.add_argument( "--extend-skip", help="Extends --skip to add additional files that isort should skip over. " "If you want to skip multiple " "files you should specify twice: --skip file1 --skip file2. Values can be " "file names, directory names or file paths. 
To skip all files in a nested path " "use --skip-glob.", dest="extend_skip", action="append", ) target_group.add_argument( "--sg", "--skip-glob", help="Files that isort should skip over.", dest="skip_glob", action="append", ) target_group.add_argument( "--extend-skip-glob", help="Additional files that isort should skip over (extending --skip-glob).", dest="extend_skip_glob", action="append", ) target_group.add_argument( "--gitignore", "--skip-gitignore", action="store_true", dest="skip_gitignore", help="Treat project as a git repository and ignore files listed in .gitignore." "\nNOTE: This requires git to be installed and accessible from the same shell as isort.", ) target_group.add_argument( "--ext", "--extension", "--supported-extension", dest="supported_extensions", action="append", help="Specifies what extensions isort can be run against.", ) target_group.add_argument( "--blocked-extension", dest="blocked_extensions", action="append", help="Specifies what extensions isort can never be run against.", ) target_group.add_argument( "--dont-follow-links", dest="dont_follow_links", action="store_true", help="Tells isort not to follow symlinks that are encountered when running recursively.", ) target_group.add_argument( "--filename", dest="filename", help="Provide the filename associated with a stream.", ) target_group.add_argument( "--allow-root", action="store_true", default=False, help="Tells isort not to treat / specially, allowing it to be run against the root dir.", ) output_group.add_argument( "-a", "--add-import", dest="add_imports", action="append", help="Adds the specified import line to all files, " "automatically determining correct placement.", ) output_group.add_argument( "--append", "--append-only", dest="append_only", action="store_true", help="Only adds the imports specified in --add-import if the file" " contains existing imports.", ) output_group.add_argument( "--af", "--force-adds", dest="force_adds", action="store_true", help="Forces import adds even if the original file is empty.", ) output_group.add_argument( "--rm", "--remove-import", dest="remove_imports", action="append", help="Removes the specified import from all files.", ) output_group.add_argument( "--float-to-top", dest="float_to_top", action="store_true", help="Causes all non-indented imports to float to the top of the file having its imports " "sorted (immediately below the top of file comment).\n" "This can be an excellent shortcut for collecting imports every once in a while " "when you place them in the middle of a file to avoid context switching.\n\n" "*NOTE*: It currently doesn't work with cimports and introduces some extra over-head " "and a performance penalty.", ) output_group.add_argument( "--dont-float-to-top", dest="dont_float_to_top", action="store_true", help="Forces --float-to-top setting off. 
See --float-to-top for more information.", ) output_group.add_argument( "--ca", "--combine-as", dest="combine_as_imports", action="store_true", help="Combines as imports on the same line.", ) output_group.add_argument( "--cs", "--combine-star", dest="combine_star", action="store_true", help="Ensures that if a star import is present, " "nothing else is imported from that namespace.", ) output_group.add_argument( "-e", "--balanced", dest="balanced_wrapping", action="store_true", help="Balances wrapping to produce the most consistent line length possible", ) output_group.add_argument( "--ff", "--from-first", dest="from_first", action="store_true", help="Switches the typical ordering preference, " "showing from imports first then straight ones.", ) output_group.add_argument( "--fgw", "--force-grid-wrap", nargs="?", const=2, type=int, dest="force_grid_wrap", help="Force number of from imports (defaults to 2 when passed as CLI flag without value) " "to be grid wrapped regardless of line " "length. If 0 is passed in (the global default) only line length is considered.", ) output_group.add_argument( "-i", "--indent", help='String to place for indents defaults to " " (4 spaces).', dest="indent", type=str, ) output_group.add_argument( "--lbi", "--lines-before-imports", dest="lines_before_imports", type=int ) output_group.add_argument( "--lai", "--lines-after-imports", dest="lines_after_imports", type=int ) output_group.add_argument( "--lbt", "--lines-between-types", dest="lines_between_types", type=int ) output_group.add_argument( "--le", "--line-ending", dest="line_ending", help="Forces line endings to the specified value. " "If not set, values will be guessed per-file.", ) output_group.add_argument( "--ls", "--length-sort", help="Sort imports by their string length.", dest="length_sort", action="store_true", ) output_group.add_argument( "--lss", "--length-sort-straight", help="Sort straight imports by their string length. Similar to `length_sort` " "but applies only to straight imports and doesn't affect from imports.", dest="length_sort_straight", action="store_true", ) output_group.add_argument( "-m", "--multi-line", dest="multi_line_output", choices=list(WrapModes.__members__.keys()) + [str(mode.value) for mode in WrapModes.__members__.values()], type=str, help="Multi line output (0-grid, 1-vertical, 2-hanging, 3-vert-hanging, 4-vert-grid, " "5-vert-grid-grouped, 6-deprecated-alias-for-5, 7-noqa, " "8-vertical-hanging-indent-bracket, 9-vertical-prefix-from-module-import, " "10-hanging-indent-with-parentheses).", ) output_group.add_argument( "-n", "--ensure-newline-before-comments", dest="ensure_newline_before_comments", action="store_true", help="Inserts a blank line before a comment following an import.", ) inline_args_group.add_argument( "--nis", "--no-inline-sort", dest="no_inline_sort", action="store_true", help="Leaves `from` imports with multiple imports 'as-is' " "(e.g. `from foo import a, c ,b`).", ) output_group.add_argument( "--ot", "--order-by-type", dest="order_by_type", action="store_true", help="Order imports by type, which is determined by case, in addition to alphabetically.\n" "\n**NOTE**: type here refers to the implied type from the import name capitalization.\n" ' isort does not do type introspection for the imports. These "types" are simply: ' "CONSTANT_VARIABLE, CamelCaseClass, variable_or_function. If your project follows PEP8" " or a related coding standard and has many imports this is a good default, otherwise you " "likely will want to turn it off. 
From the CLI the `--dont-order-by-type` option will turn " "this off.", ) output_group.add_argument( "--dt", "--dont-order-by-type", dest="dont_order_by_type", action="store_true", help="Don't order imports by type, which is determined by case, in addition to " "alphabetically.\n\n" "**NOTE**: type here refers to the implied type from the import name capitalization.\n" ' isort does not do type introspection for the imports. These "types" are simply: ' "CONSTANT_VARIABLE, CamelCaseClass, variable_or_function. If your project follows PEP8" " or a related coding standard and has many imports this is a good default. You can turn " "this on from the CLI using `--order-by-type`.", ) output_group.add_argument( "--rr", "--reverse-relative", dest="reverse_relative", action="store_true", help="Reverse order of relative imports.", ) output_group.add_argument( "--reverse-sort", dest="reverse_sort", action="store_true", help="Reverses the ordering of imports.", ) output_group.add_argument( "--sort-order", dest="sort_order", help="Specify sorting function. Can be built in (natural[default] = force numbers " "to be sequential, native = Python's built-in sorted function) or an installable plugin.", ) inline_args_group.add_argument( "--sl", "--force-single-line-imports", dest="force_single_line", action="store_true", help="Forces all from imports to appear on their own line", ) output_group.add_argument( "--nsl", "--single-line-exclusions", help="One or more modules to exclude from the single line rule.", dest="single_line_exclusions", action="append", ) output_group.add_argument( "--tc", "--trailing-comma", dest="include_trailing_comma", action="store_true", help="Includes a trailing comma on multi line imports that include parentheses.", ) output_group.add_argument( "--up", "--use-parentheses", dest="use_parentheses", action="store_true", help="Use parentheses for line continuation on length limit instead of slashes." " **NOTE**: This is separate from wrap modes, and only affects how individual lines that " " are too long get continued, not sections of multiple imports.", ) output_group.add_argument( "-l", "-w", "--line-length", "--line-width", help="The max length of an import line (used for wrapping long imports).", dest="line_length", type=int, ) output_group.add_argument( "--wl", "--wrap-length", dest="wrap_length", type=int, help="Specifies how long lines that are wrapped should be, if not set line_length is used." "\nNOTE: wrap_length must be LOWER than or equal to line_length.", ) output_group.add_argument( "--case-sensitive", dest="case_sensitive", action="store_true", help="Tells isort to include casing when sorting module names", ) output_group.add_argument( "--remove-redundant-aliases", dest="remove_redundant_aliases", action="store_true", help=( "Tells isort to remove redundant aliases from imports, such as `import os as os`." " This defaults to `False` simply because some projects use these seemingly useless " " aliases to signify intent and change behaviour." 
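Most of the output flags above are thin wrappers over settings of the same name (their argparse dest values), so the equivalent behaviour is available programmatically. A hedged sketch using isort.code with a few of the settings named above; the input snippet is made up.

import isort

messy = "from foo import b, a\nimport os as os\n"

# Keyword arguments mirror the argparse dest names shown above.
print(
    isort.code(
        messy,
        force_single_line=True,          # --sl / --force-single-line-imports
        remove_redundant_aliases=True,   # --remove-redundant-aliases
        line_length=100,                 # -l / --line-length
    )
)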
), ) output_group.add_argument( "--honor-noqa", dest="honor_noqa", action="store_true", help="Tells isort to honor noqa comments to enforce skipping those comments.", ) output_group.add_argument( "--treat-comment-as-code", dest="treat_comments_as_code", action="append", help="Tells isort to treat the specified single line comment(s) as if they are code.", ) output_group.add_argument( "--treat-all-comment-as-code", dest="treat_all_comments_as_code", action="store_true", help="Tells isort to treat all single line comments as if they are code.", ) output_group.add_argument( "--formatter", dest="formatter", type=str, help="Specifies the name of a formatting plugin to use when producing output.", ) output_group.add_argument( "--color", dest="color_output", action="store_true", help="Tells isort to use color in terminal output.", ) output_group.add_argument( "--ext-format", dest="ext_format", help="Tells isort to format the given files according to an extensions formatting rules.", ) output_group.add_argument( "--star-first", help="Forces star imports above others to avoid overriding directly imported variables.", dest="star_first", action="store_true", ) output_group.add_argument( "--split-on-trailing-comma", help="Split imports list followed by a trailing comma into VERTICAL_HANGING_INDENT mode", dest="split_on_trailing_comma", action="store_true", ) section_group.add_argument( "--sd", "--section-default", dest="default_section", help="Sets the default section for import options: " + str(sections.DEFAULT), ) section_group.add_argument( "--only-sections", "--os", dest="only_sections", action="store_true", help="Causes imports to be sorted based on their sections like STDLIB, THIRDPARTY, etc. " "Within sections, the imports are ordered by their import style and the imports with " "the same style maintain their relative positions.", ) section_group.add_argument( "--ds", "--no-sections", help="Put all imports into the same section bucket", dest="no_sections", action="store_true", ) section_group.add_argument( "--fas", "--force-alphabetical-sort", action="store_true", dest="force_alphabetical_sort", help="Force all imports to be sorted as a single section", ) section_group.add_argument( "--fss", "--force-sort-within-sections", action="store_true", dest="force_sort_within_sections", help="Don't sort straight-style imports (like import sys) before from-style imports " "(like from itertools import groupby). Instead, sort the imports by module, " "independent of import style.", ) section_group.add_argument( "--hcss", "--honor-case-in-force-sorted-sections", action="store_true", dest="honor_case_in_force_sorted_sections", help="Honor `--case-sensitive` when `--force-sort-within-sections` is being used. 
" "Without this option set, `--order-by-type` decides module name ordering too.", ) section_group.add_argument( "--srss", "--sort-relative-in-force-sorted-sections", action="store_true", dest="sort_relative_in_force_sorted_sections", help="When using `--force-sort-within-sections`, sort relative imports the same " "way as they are sorted when not using that setting.", ) section_group.add_argument( "--fass", "--force-alphabetical-sort-within-sections", action="store_true", dest="force_alphabetical_sort_within_sections", help="Force all imports to be sorted alphabetically within a section", ) section_group.add_argument( "-t", "--top", help="Force specific imports to the top of their appropriate section.", dest="force_to_top", action="append", ) section_group.add_argument( "--combine-straight-imports", "--csi", dest="combine_straight_imports", action="store_true", help="Combines all the bare straight imports of the same section in a single line. " "Won't work with sections which have 'as' imports", ) section_group.add_argument( "--nlb", "--no-lines-before", help="Sections which should not be split with previous by empty lines", dest="no_lines_before", action="append", ) section_group.add_argument( "--src", "--src-path", dest="src_paths", action="append", help="Add an explicitly defined source path " "(modules within src paths have their imports automatically categorized as first_party)." " Glob expansion (`*` and `**`) is supported for this option.", ) section_group.add_argument( "-b", "--builtin", dest="known_standard_library", action="append", help="Force isort to recognize a module as part of Python's standard library.", ) section_group.add_argument( "--extra-builtin", dest="extra_standard_library", action="append", help="Extra modules to be included in the list of ones in Python's standard library.", ) section_group.add_argument( "-f", "--future", dest="known_future_library", action="append", help="Force isort to recognize a module as part of Python's internal future compatibility " "libraries. WARNING: this overrides the behavior of __future__ handling and therefore" " can result in code that can't execute. If you're looking to add dependencies such " "as six, a better option is to create another section below --future using custom " "sections. See: https://github.com/PyCQA/isort#custom-sections-and-ordering and the " "discussion here: https://github.com/PyCQA/isort/issues/1463.", ) section_group.add_argument( "-o", "--thirdparty", dest="known_third_party", action="append", help="Force isort to recognize a module as being part of a third party library.", ) section_group.add_argument( "-p", "--project", dest="known_first_party", action="append", help="Force isort to recognize a module as being part of the current python project.", ) section_group.add_argument( "--known-local-folder", dest="known_local_folder", action="append", help="Force isort to recognize a module as being a local folder. " "Generally, this is reserved for relative imports (from . import module).", ) section_group.add_argument( "--virtual-env", dest="virtual_env", help="Virtual environment to use for determining whether a package is third-party", ) section_group.add_argument( "--conda-env", dest="conda_env", help="Conda environment to use for determining whether a package is third-party", ) section_group.add_argument( "--py", "--python-version", action="store", dest="py_version", choices=tuple(VALID_PY_TARGETS) + ("auto",), help="Tells isort to set the known standard library based on the specified Python " "version. 
Default is to assume any Python 3 version could be the target, and use a union " "of all stdlib modules across versions. If auto is specified, the version of the " "interpreter used to run isort " f"(currently: {sys.version_info.major}{sys.version_info.minor}) will be used.", ) # deprecated options deprecated_group.add_argument( "--recursive", dest="deprecated_flags", action="append_const", const="--recursive", help=argparse.SUPPRESS, ) deprecated_group.add_argument( "-rc", dest="deprecated_flags", action="append_const", const="-rc", help=argparse.SUPPRESS ) deprecated_group.add_argument( "--dont-skip", dest="deprecated_flags", action="append_const", const="--dont-skip", help=argparse.SUPPRESS, ) deprecated_group.add_argument( "-ns", dest="deprecated_flags", action="append_const", const="-ns", help=argparse.SUPPRESS ) deprecated_group.add_argument( "--apply", dest="deprecated_flags", action="append_const", const="--apply", help=argparse.SUPPRESS, ) deprecated_group.add_argument( "-k", "--keep-direct-and-as", dest="deprecated_flags", action="append_const", const="--keep-direct-and-as", help=argparse.SUPPRESS, ) return parser def parse_args(argv: Optional[Sequence[str]] = None) -> Dict[str, Any]: argv = sys.argv[1:] if argv is None else list(argv) remapped_deprecated_args = [] for index, arg in enumerate(argv): if arg in DEPRECATED_SINGLE_DASH_ARGS: remapped_deprecated_args.append(arg) argv[index] = f"-{arg}" parser = _build_arg_parser() arguments = {key: value for key, value in vars(parser.parse_args(argv)).items() if value} if remapped_deprecated_args: arguments["remapped_deprecated_args"] = remapped_deprecated_args if "dont_order_by_type" in arguments: arguments["order_by_type"] = False del arguments["dont_order_by_type"] if "dont_follow_links" in arguments: arguments["follow_links"] = False del arguments["dont_follow_links"] if "dont_float_to_top" in arguments: del arguments["dont_float_to_top"] if arguments.get("float_to_top", False): sys.exit("Can't set both --float-to-top and --dont-float-to-top.") else: arguments["float_to_top"] = False multi_line_output = arguments.get("multi_line_output", None) if multi_line_output: if multi_line_output.isdigit(): arguments["multi_line_output"] = WrapModes(int(multi_line_output)) else: arguments["multi_line_output"] = WrapModes[multi_line_output] return arguments def _preconvert(item: Any) -> Union[str, List[Any]]: """Preconverts objects from native types into JSONifyiable types""" if isinstance(item, (set, frozenset)): return list(item) if isinstance(item, WrapModes): return str(item.name) if isinstance(item, Path): return str(item) if callable(item) and hasattr(item, "__name__"): return str(item.__name__) raise TypeError(f"Unserializable object {item} of type {type(item)}") def identify_imports_main( argv: Optional[Sequence[str]] = None, stdin: Optional[TextIOWrapper] = None ) -> None: parser = argparse.ArgumentParser( description="Get all import definitions from a given file." "Use `-` as the first argument to represent stdin." ) parser.add_argument( "files", nargs="+", help="One or more Python source files that need their imports sorted." 
) parser.add_argument( "--top-only", action="store_true", default=False, help="Only identify imports that occur in before functions or classes.", ) target_group = parser.add_argument_group("target options") target_group.add_argument( "--follow-links", action="store_true", default=False, help="Tells isort to follow symlinks that are encountered when running recursively.", ) uniqueness = parser.add_mutually_exclusive_group() uniqueness.add_argument( "--unique", action="store_true", default=False, help="If true, isort will only identify unique imports.", ) uniqueness.add_argument( "--packages", dest="unique", action="store_const", const=api.ImportKey.PACKAGE, default=False, help="If true, isort will only identify the unique top level modules imported.", ) uniqueness.add_argument( "--modules", dest="unique", action="store_const", const=api.ImportKey.MODULE, default=False, help="If true, isort will only identify the unique modules imported.", ) uniqueness.add_argument( "--attributes", dest="unique", action="store_const", const=api.ImportKey.ATTRIBUTE, default=False, help="If true, isort will only identify the unique attributes imported.", ) arguments = parser.parse_args(argv) file_names = arguments.files if file_names == ["-"]: identified_imports = api.find_imports_in_stream( sys.stdin if stdin is None else stdin, unique=arguments.unique, top_only=arguments.top_only, follow_links=arguments.follow_links, ) else: identified_imports = api.find_imports_in_paths( file_names, unique=arguments.unique, top_only=arguments.top_only, follow_links=arguments.follow_links, ) for identified_import in identified_imports: if arguments.unique == api.ImportKey.PACKAGE: print(identified_import.module.split(".")[0]) elif arguments.unique == api.ImportKey.MODULE: print(identified_import.module) elif arguments.unique == api.ImportKey.ATTRIBUTE: print(f"{identified_import.module}.{identified_import.attribute}") else: print(str(identified_import)) def main(argv: Optional[Sequence[str]] = None, stdin: Optional[TextIOWrapper] = None) -> None: arguments = parse_args(argv) if arguments.get("show_version"): print(ASCII_ART) return show_config: bool = arguments.pop("show_config", False) show_files: bool = arguments.pop("show_files", False) if show_config and show_files: sys.exit("Error: either specify show-config or show-files not both.") if "settings_path" in arguments: if os.path.isfile(arguments["settings_path"]): arguments["settings_file"] = os.path.abspath(arguments["settings_path"]) arguments["settings_path"] = os.path.dirname(arguments["settings_file"]) else: arguments["settings_path"] = os.path.abspath(arguments["settings_path"]) if "virtual_env" in arguments: venv = arguments["virtual_env"] arguments["virtual_env"] = os.path.abspath(venv) if not os.path.isdir(arguments["virtual_env"]): warn(f"virtual_env dir does not exist: {arguments['virtual_env']}") file_names = arguments.pop("files", []) if not file_names and not show_config: print(QUICK_GUIDE) if arguments: sys.exit("Error: arguments passed in without any paths or content.") return if "settings_path" not in arguments: arguments["settings_path"] = ( arguments.get("filename", None) or os.getcwd() if file_names == ["-"] else os.path.abspath(file_names[0] if file_names else ".") ) if not os.path.isdir(arguments["settings_path"]): arguments["settings_path"] = os.path.dirname(arguments["settings_path"]) config_dict = arguments.copy() ask_to_apply = config_dict.pop("ask_to_apply", False) jobs = config_dict.pop("jobs", None) check = config_dict.pop("check", False) 
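# Descriptive note (editorial comment, not in the original source): the pops that
# follow strip CLI-only concerns out of the argument dict -- diff display, stdout
# streaming, deprecated-flag bookkeeping, the stream filename/extension overrides,
# --allow-root and --resolve-all-configs -- so that they are handled directly in
# main() and only the remaining recognized settings are forwarded to Config().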
show_diff = config_dict.pop("show_diff", False) write_to_stdout = config_dict.pop("write_to_stdout", False) deprecated_flags = config_dict.pop("deprecated_flags", False) remapped_deprecated_args = config_dict.pop("remapped_deprecated_args", False) stream_filename = config_dict.pop("filename", None) ext_format = config_dict.pop("ext_format", None) allow_root = config_dict.pop("allow_root", None) resolve_all_configs = config_dict.pop("resolve_all_configs", False) wrong_sorted_files = False all_attempt_broken = False no_valid_encodings = False config_trie: Optional[Trie] = None if resolve_all_configs: config_trie = find_all_configs(config_dict.pop("config_root", ".")) if "src_paths" in config_dict: config_dict["src_paths"] = { Path(src_path).resolve() for src_path in config_dict.get("src_paths", ()) } config = Config(**config_dict) if show_config: print(json.dumps(config.__dict__, indent=4, separators=(",", ": "), default=_preconvert)) return if file_names == ["-"]: file_path = Path(stream_filename) if stream_filename else None if show_files: sys.exit("Error: can't show files for streaming input.") input_stream = sys.stdin if stdin is None else stdin if check: incorrectly_sorted = not api.check_stream( input_stream=input_stream, config=config, show_diff=show_diff, file_path=file_path, extension=ext_format, ) wrong_sorted_files = incorrectly_sorted else: try: api.sort_stream( input_stream=input_stream, output_stream=sys.stdout, config=config, show_diff=show_diff, file_path=file_path, extension=ext_format, raise_on_skip=False, ) except FileSkipped: sys.stdout.write(input_stream.read()) elif "/" in file_names and not allow_root: printer = create_terminal_printer( color=config.color_output, error=config.format_error, success=config.format_success ) printer.error("it is dangerous to operate recursively on '/'") printer.error("use --allow-root to override this failsafe") sys.exit(1) else: if stream_filename: printer = create_terminal_printer( color=config.color_output, error=config.format_error, success=config.format_success ) printer.error("Filename override is intended only for stream (-) sorting.") sys.exit(1) skipped: List[str] = [] broken: List[str] = [] if config.filter_files: filtered_files = [] for file_name in file_names: if config.is_skipped(Path(file_name)): skipped.append(file_name) else: filtered_files.append(file_name) file_names = filtered_files file_names = files.find(file_names, config, skipped, broken) if show_files: for file_name in file_names: print(file_name) return num_skipped = 0 num_broken = 0 num_invalid_encoding = 0 if config.verbose: print(ASCII_ART) if jobs: import multiprocessing executor = multiprocessing.Pool(jobs if jobs > 0 else multiprocessing.cpu_count()) attempt_iterator = executor.imap( functools.partial( sort_imports, config=config, check=check, ask_to_apply=ask_to_apply, write_to_stdout=write_to_stdout, extension=ext_format, config_trie=config_trie, ), file_names, ) else: # https://github.com/python/typeshed/pull/2814 attempt_iterator = ( sort_imports( # type: ignore file_name, config=config, check=check, ask_to_apply=ask_to_apply, show_diff=show_diff, write_to_stdout=write_to_stdout, extension=ext_format, config_trie=config_trie, ) for file_name in file_names ) # If any files passed in are missing considered as error, should be removed is_no_attempt = True any_encoding_valid = False for sort_attempt in attempt_iterator: if not sort_attempt: continue # pragma: no cover - shouldn't happen, satisfies type constraint incorrectly_sorted = 
sort_attempt.incorrectly_sorted if arguments.get("check", False) and incorrectly_sorted: wrong_sorted_files = True if sort_attempt.skipped: num_skipped += ( 1 # pragma: no cover - shouldn't happen, due to skip in iter_source_code ) if not sort_attempt.supported_encoding: num_invalid_encoding += 1 else: any_encoding_valid = True is_no_attempt = False num_skipped += len(skipped) if num_skipped and not config.quiet: if config.verbose: for was_skipped in skipped: print( f"{was_skipped} was skipped as it's listed in 'skip' setting, " "matches a glob in 'skip_glob' setting, or is in a .gitignore file with " "--skip-gitignore enabled." ) print(f"Skipped {num_skipped} files") num_broken += len(broken) if num_broken and not config.quiet: if config.verbose: for was_broken in broken: warn(f"{was_broken} was broken path, make sure it exists correctly") print(f"Broken {num_broken} paths") if num_broken > 0 and is_no_attempt: all_attempt_broken = True if num_invalid_encoding > 0 and not any_encoding_valid: no_valid_encodings = True if not config.quiet and (remapped_deprecated_args or deprecated_flags): if remapped_deprecated_args: warn( "W0502: The following deprecated single dash CLI flags were used and translated: " f"{', '.join(remapped_deprecated_args)}!" ) if deprecated_flags: warn( "W0501: The following deprecated CLI flags were used and ignored: " f"{', '.join(deprecated_flags)}!" ) warn( "W0500: Please see the 5.0.0 Upgrade guide: " "https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html" ) if wrong_sorted_files: sys.exit(1) if all_attempt_broken: sys.exit(1) if no_valid_encodings: printer = create_terminal_printer( color=config.color_output, error=config.format_error, success=config.format_success ) printer.error("No valid encodings.") sys.exit(1) if __name__ == "__main__": main() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/output.py0000644000000000000000000006623414536412763012770 0ustar00import copy import itertools from functools import partial from typing import Any, Iterable, List, Optional, Set, Tuple, Type from isort.format import format_simplified from . import parse, sorting, wrap from .comments import add_to_line as with_comments from .identify import STATEMENT_DECLARATIONS from .settings import DEFAULT_CONFIG, Config def sorted_imports( parsed: parse.ParsedContent, config: Config = DEFAULT_CONFIG, extension: str = "py", import_type: str = "import", ) -> str: """Adds the imports back to the file. (at the index of the first import) sorted alphabetically and split between groups """ if parsed.import_index == -1: return _output_as_string(parsed.lines_without_imports, parsed.line_separator) formatted_output: List[str] = parsed.lines_without_imports.copy() remove_imports = [format_simplified(removal) for removal in config.remove_imports] sections: Iterable[str] = itertools.chain(parsed.sections, config.forced_separate) if config.no_sections: parsed.imports["no_sections"] = {"straight": {}, "from": {}} base_sections: Tuple[str, ...] 
= () for section in sections: if section == "FUTURE": base_sections = ("FUTURE",) continue parsed.imports["no_sections"]["straight"].update( parsed.imports[section].get("straight", {}) ) parsed.imports["no_sections"]["from"].update(parsed.imports[section].get("from", {})) sections = base_sections + ("no_sections",) output: List[str] = [] seen_headings: Set[str] = set() pending_lines_before = False for section in sections: straight_modules = parsed.imports[section]["straight"] if not config.only_sections: straight_modules = sorting.sort( config, straight_modules, key=lambda key: sorting.module_key( key, config, section_name=section, straight_import=True ), reverse=config.reverse_sort, ) from_modules = parsed.imports[section]["from"] if not config.only_sections: from_modules = sorting.sort( config, from_modules, key=lambda key: sorting.module_key(key, config, section_name=section), reverse=config.reverse_sort, ) if config.star_first: star_modules = [] other_modules = [] for module in from_modules: if "*" in parsed.imports[section]["from"][module]: star_modules.append(module) else: other_modules.append(module) from_modules = star_modules + other_modules straight_imports = _with_straight_imports( parsed, config, straight_modules, section, remove_imports, import_type ) from_imports = _with_from_imports( parsed, config, from_modules, section, remove_imports, import_type ) lines_between = [""] * ( config.lines_between_types if from_modules and straight_modules else 0 ) if config.from_first: section_output = from_imports + lines_between + straight_imports else: section_output = straight_imports + lines_between + from_imports if config.force_sort_within_sections: # collapse comments comments_above = [] new_section_output: List[str] = [] for line in section_output: if not line: continue if line.startswith("#"): comments_above.append(line) elif comments_above: new_section_output.append(_LineWithComments(line, comments_above)) comments_above = [] else: new_section_output.append(line) # only_sections options is not imposed if force_sort_within_sections is True new_section_output = sorting.sort( config, new_section_output, key=partial(sorting.section_key, config=config), reverse=config.reverse_sort, ) # uncollapse comments section_output = [] for line in new_section_output: comments = getattr(line, "comments", ()) if comments: section_output.extend(comments) section_output.append(str(line)) section_name = section no_lines_before = section_name in config.no_lines_before if section_output: if section_name in parsed.place_imports: parsed.place_imports[section_name] = section_output continue section_title = config.import_headings.get(section_name.lower(), "") if section_title and section_title not in seen_headings: if config.dedup_headings: seen_headings.add(section_title) section_comment = f"# {section_title}" if section_comment not in parsed.lines_without_imports[0:1]: # pragma: no branch section_output.insert(0, section_comment) section_footer = config.import_footers.get(section_name.lower(), "") if section_footer and section_footer not in seen_headings: if config.dedup_headings: seen_headings.add(section_footer) section_comment_end = f"# {section_footer}" if ( section_comment_end not in parsed.lines_without_imports[-1:] ): # pragma: no branch section_output.append("") # Empty line for black compatibility section_output.append(section_comment_end) if pending_lines_before or not no_lines_before: output += [""] * config.lines_between_sections output += section_output pending_lines_before = False else: 
pending_lines_before = pending_lines_before or not no_lines_before if config.ensure_newline_before_comments: output = _ensure_newline_before_comment(output) while output and output[-1].strip() == "": output.pop() # pragma: no cover while output and output[0].strip() == "": output.pop(0) if config.formatting_function: output = config.formatting_function( parsed.line_separator.join(output), extension, config ).splitlines() output_at = 0 if parsed.import_index < parsed.original_line_count: output_at = parsed.import_index formatted_output[output_at:0] = output if output: imports_tail = output_at + len(output) while [ character.strip() for character in formatted_output[imports_tail : imports_tail + 1] ] == [""]: formatted_output.pop(imports_tail) if len(formatted_output) > imports_tail: next_construct = "" tail = formatted_output[imports_tail:] for index, line in enumerate(tail): # pragma: no branch should_skip, in_quote, *_ = parse.skip_line( line, in_quote="", index=len(formatted_output), section_comments=config.section_comments, needs_import=False, ) if not should_skip and line.strip(): if ( line.strip().startswith("#") and len(tail) > (index + 1) and tail[index + 1].strip() ): continue next_construct = line break if in_quote: # pragma: no branch next_construct = line break if config.lines_after_imports != -1: lines_after_imports = config.lines_after_imports if config.profile == "black" and extension == "pyi": # special case for black lines_after_imports = 1 formatted_output[imports_tail:0] = ["" for line in range(lines_after_imports)] elif extension != "pyi" and next_construct.startswith(STATEMENT_DECLARATIONS): formatted_output[imports_tail:0] = ["", ""] else: formatted_output[imports_tail:0] = [""] if config.lines_before_imports != -1: lines_before_imports = config.lines_before_imports if config.profile == "black" and extension == "pyi": # special case for black lines_before_imports = 1 formatted_output[:0] = ["" for line in range(lines_before_imports)] if parsed.place_imports: new_out_lines = [] for index, line in enumerate(formatted_output): new_out_lines.append(line) if line in parsed.import_placements: new_out_lines.extend(parsed.place_imports[parsed.import_placements[line]]) if ( len(formatted_output) <= (index + 1) or formatted_output[index + 1].strip() != "" ): new_out_lines.append("") formatted_output = new_out_lines return _output_as_string(formatted_output, parsed.line_separator) def _with_from_imports( parsed: parse.ParsedContent, config: Config, from_modules: Iterable[str], section: str, remove_imports: List[str], import_type: str, ) -> List[str]: output: List[str] = [] for module in from_modules: if module in remove_imports: continue import_start = f"from {module} {import_type} " from_imports = list(parsed.imports[section]["from"][module]) if ( not config.no_inline_sort or (config.force_single_line and module not in config.single_line_exclusions) ) and not config.only_sections: from_imports = sorting.sort( config, from_imports, key=lambda key: sorting.module_key( key, config, True, config.force_alphabetical_sort_within_sections, section_name=section, ), reverse=config.reverse_sort, ) if remove_imports: from_imports = [ line for line in from_imports if f"{module}.{line}" not in remove_imports ] sub_modules = [f"{module}.{from_import}" for from_import in from_imports] as_imports = { from_import: [ f"{from_import} as {as_module}" for as_module in parsed.as_map["from"][sub_module] ] for from_import, sub_module in zip(from_imports, sub_modules) if sub_module in 
parsed.as_map["from"] } if config.combine_as_imports and not ("*" in from_imports and config.combine_star): if not config.no_inline_sort: for as_import in as_imports: if not config.only_sections: as_imports[as_import] = sorting.sort(config, as_imports[as_import]) for from_import in copy.copy(from_imports): if from_import in as_imports: idx = from_imports.index(from_import) if parsed.imports[section]["from"][module][from_import]: from_imports[(idx + 1) : (idx + 1)] = as_imports.pop(from_import) else: from_imports[idx : (idx + 1)] = as_imports.pop(from_import) only_show_as_imports = False comments = parsed.categorized_comments["from"].pop(module, ()) above_comments = parsed.categorized_comments["above"]["from"].pop(module, None) while from_imports: if above_comments: output.extend(above_comments) above_comments = None if "*" in from_imports and config.combine_star: import_statement = wrap.line( with_comments( _with_star_comments(parsed, module, list(comments or ())), f"{import_start}*", removed=config.ignore_comments, comment_prefix=config.comment_prefix, ), parsed.line_separator, config, ) from_imports = [ from_import for from_import in from_imports if from_import in as_imports ] only_show_as_imports = True elif config.force_single_line and module not in config.single_line_exclusions: import_statement = "" while from_imports: from_import = from_imports.pop(0) single_import_line = with_comments( comments, import_start + from_import, removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) comment = ( parsed.categorized_comments["nested"].get(module, {}).pop(from_import, None) ) if comment: single_import_line += ( f"{comments and ';' or config.comment_prefix} " f"{comment}" ) if from_import in as_imports: if ( parsed.imports[section]["from"][module][from_import] and not only_show_as_imports ): output.append( wrap.line(single_import_line, parsed.line_separator, config) ) from_comments = parsed.categorized_comments["straight"].get( f"{module}.{from_import}" ) if not config.only_sections: output.extend( with_comments( from_comments, wrap.line( import_start + as_import, parsed.line_separator, config ), removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) for as_import in sorting.sort(config, as_imports[from_import]) ) else: output.extend( with_comments( from_comments, wrap.line( import_start + as_import, parsed.line_separator, config ), removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) for as_import in as_imports[from_import] ) else: output.append(wrap.line(single_import_line, parsed.line_separator, config)) comments = None else: while from_imports and from_imports[0] in as_imports: from_import = from_imports.pop(0) if not config.only_sections: as_imports[from_import] = sorting.sort(config, as_imports[from_import]) from_comments = ( parsed.categorized_comments["straight"].get(f"{module}.{from_import}") or [] ) if ( parsed.imports[section]["from"][module][from_import] and not only_show_as_imports ): specific_comment = ( parsed.categorized_comments["nested"] .get(module, {}) .pop(from_import, None) ) if specific_comment: from_comments.append(specific_comment) output.append( wrap.line( with_comments( from_comments, import_start + from_import, removed=config.ignore_comments, comment_prefix=config.comment_prefix, ), parsed.line_separator, config, ) ) from_comments = [] for as_import in as_imports[from_import]: specific_comment = ( parsed.categorized_comments["nested"] .get(module, {}) .pop(as_import, None) ) if specific_comment: 
from_comments.append(specific_comment) output.append( wrap.line( with_comments( from_comments, import_start + as_import, removed=config.ignore_comments, comment_prefix=config.comment_prefix, ), parsed.line_separator, config, ) ) from_comments = [] if "*" in from_imports: output.append( with_comments( _with_star_comments(parsed, module, []), f"{import_start}*", removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) ) from_imports.remove("*") for from_import in copy.copy(from_imports): comment = ( parsed.categorized_comments["nested"].get(module, {}).pop(from_import, None) ) if comment: from_imports.remove(from_import) if from_imports: use_comments = [] else: use_comments = comments comments = None single_import_line = with_comments( use_comments, import_start + from_import, removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) single_import_line += ( f"{use_comments and ';' or config.comment_prefix} " f"{comment}" ) output.append(wrap.line(single_import_line, parsed.line_separator, config)) from_import_section = [] while from_imports and ( from_imports[0] not in as_imports or ( config.combine_as_imports and parsed.imports[section]["from"][module][from_import] ) ): from_import_section.append(from_imports.pop(0)) if config.combine_as_imports: comments = (comments or []) + list( parsed.categorized_comments["from"].pop(f"{module}.__combined_as__", ()) ) import_statement = with_comments( comments, import_start + (", ").join(from_import_section), removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) if not from_import_section: import_statement = "" do_multiline_reformat = False force_grid_wrap = config.force_grid_wrap if force_grid_wrap and len(from_import_section) >= force_grid_wrap: do_multiline_reformat = True if len(import_statement) > config.line_length and len(from_import_section) > 1: do_multiline_reformat = True # If line too long AND have imports AND we are # NOT using GRID or VERTICAL wrap modes if ( len(import_statement) > config.line_length and len(from_import_section) > 0 and config.multi_line_output not in (wrap.Modes.GRID, wrap.Modes.VERTICAL) # type: ignore ): do_multiline_reformat = True if config.split_on_trailing_comma and module in parsed.trailing_commas: import_statement = wrap.import_statement( import_start=import_start, from_imports=from_import_section, comments=comments, line_separator=parsed.line_separator, config=config, explode=True, ) elif do_multiline_reformat: import_statement = wrap.import_statement( import_start=import_start, from_imports=from_import_section, comments=comments, line_separator=parsed.line_separator, config=config, ) if config.multi_line_output == wrap.Modes.GRID: # type: ignore other_import_statement = wrap.import_statement( import_start=import_start, from_imports=from_import_section, comments=comments, line_separator=parsed.line_separator, config=config, multi_line_output=wrap.Modes.VERTICAL_GRID, # type: ignore ) if ( max( len(import_line) for import_line in import_statement.split(parsed.line_separator) ) > config.line_length ): import_statement = other_import_statement elif len(import_statement) > config.line_length: import_statement = wrap.line(import_statement, parsed.line_separator, config) if import_statement: output.append(import_statement) return output def _with_straight_imports( parsed: parse.ParsedContent, config: Config, straight_modules: Iterable[str], section: str, remove_imports: List[str], import_type: str, ) -> List[str]: output: List[str] = [] as_imports = any((module in 
parsed.as_map["straight"] for module in straight_modules)) # combine_straight_imports only works for bare imports, 'as' imports not included if config.combine_straight_imports and not as_imports: if not straight_modules: return [] above_comments: List[str] = [] inline_comments: List[str] = [] for module in straight_modules: if module in parsed.categorized_comments["above"]["straight"]: above_comments.extend(parsed.categorized_comments["above"]["straight"].pop(module)) if module in parsed.categorized_comments["straight"]: inline_comments.extend(parsed.categorized_comments["straight"][module]) combined_straight_imports = ", ".join(straight_modules) if inline_comments: combined_inline_comments = " ".join(inline_comments) else: combined_inline_comments = "" output.extend(above_comments) if combined_inline_comments: output.append( f"{import_type} {combined_straight_imports} # {combined_inline_comments}" ) else: output.append(f"{import_type} {combined_straight_imports}") return output for module in straight_modules: if module in remove_imports: continue import_definition = [] if module in parsed.as_map["straight"]: if parsed.imports[section]["straight"][module]: import_definition.append((f"{import_type} {module}", module)) import_definition.extend( (f"{import_type} {module} as {as_import}", f"{module} as {as_import}") for as_import in parsed.as_map["straight"][module] ) else: import_definition.append((f"{import_type} {module}", module)) comments_above = parsed.categorized_comments["above"]["straight"].pop(module, None) if comments_above: output.extend(comments_above) output.extend( with_comments( parsed.categorized_comments["straight"].get(imodule), idef, removed=config.ignore_comments, comment_prefix=config.comment_prefix, ) for idef, imodule in import_definition ) return output def _output_as_string(lines: List[str], line_separator: str) -> str: return line_separator.join(_normalize_empty_lines(lines)) def _normalize_empty_lines(lines: List[str]) -> List[str]: while lines and lines[-1].strip() == "": lines.pop(-1) lines.append("") return lines class _LineWithComments(str): comments: List[str] def __new__( cls: Type["_LineWithComments"], value: Any, comments: List[str] ) -> "_LineWithComments": instance = super().__new__(cls, value) instance.comments = comments return instance def _ensure_newline_before_comment(output: List[str]) -> List[str]: new_output: List[str] = [] def is_comment(line: Optional[str]) -> bool: return line.startswith("#") if line else False for line, prev_line in zip(output, [None] + output): # type: ignore if is_comment(line) and prev_line != "" and not is_comment(prev_line): new_output.append("") new_output.append(line) return new_output def _with_star_comments(parsed: parse.ParsedContent, module: str, comments: List[str]) -> List[str]: star_comment = parsed.categorized_comments["nested"].get(module, {}).pop("*", None) if star_comment: return comments + [star_comment] return comments ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/parse.py0000644000000000000000000006161714536412763012542 0ustar00"""Defines parsing functions used by isort for parsing import definitions""" import re from collections import OrderedDict, defaultdict from functools import partial from itertools import chain from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Set, Tuple from warnings import warn from . 
import place from .comments import parse as parse_comments from .exceptions import MissingSection from .settings import DEFAULT_CONFIG, Config if TYPE_CHECKING: from mypy_extensions import TypedDict CommentsAboveDict = TypedDict( "CommentsAboveDict", {"straight": Dict[str, Any], "from": Dict[str, Any]} ) CommentsDict = TypedDict( "CommentsDict", { "from": Dict[str, Any], "straight": Dict[str, Any], "nested": Dict[str, Any], "above": CommentsAboveDict, }, ) def _infer_line_separator(contents: str) -> str: if "\r\n" in contents: return "\r\n" if "\r" in contents: return "\r" return "\n" def normalize_line(raw_line: str) -> Tuple[str, str]: """Normalizes import related statements in the provided line. Returns (normalized_line: str, raw_line: str) """ line = re.sub(r"from(\.+)cimport ", r"from \g<1> cimport ", raw_line) line = re.sub(r"from(\.+)import ", r"from \g<1> import ", line) line = line.replace("import*", "import *") line = re.sub(r" (\.+)import ", r" \g<1> import ", line) line = re.sub(r" (\.+)cimport ", r" \g<1> cimport ", line) line = line.replace("\t", " ") return line, raw_line def import_type(line: str, config: Config = DEFAULT_CONFIG) -> Optional[str]: """If the current line is an import line it will return its type (from or straight)""" if config.honor_noqa and line.lower().rstrip().endswith("noqa"): return None if "isort:skip" in line or "isort: skip" in line or "isort: split" in line: return None if line.startswith(("import ", "cimport ")): return "straight" if line.startswith("from "): return "from" return None def strip_syntax(import_string: str) -> str: import_string = import_string.replace("_import", "[[i]]") import_string = import_string.replace("_cimport", "[[ci]]") for remove_syntax in ["\\", "(", ")", ","]: import_string = import_string.replace(remove_syntax, " ") import_list = import_string.split() for key in ("from", "import", "cimport"): if key in import_list: import_list.remove(key) import_string = " ".join(import_list) import_string = import_string.replace("[[i]]", "_import") import_string = import_string.replace("[[ci]]", "_cimport") return import_string.replace("{ ", "{|").replace(" }", "|}") def skip_line( line: str, in_quote: str, index: int, section_comments: Tuple[str, ...], needs_import: bool = True, ) -> Tuple[bool, str]: """Determine if a given line should be skipped. 
Returns back a tuple containing: (skip_line: bool, in_quote: str,) """ should_skip = bool(in_quote) if '"' in line or "'" in line: char_index = 0 while char_index < len(line): if line[char_index] == "\\": char_index += 1 elif in_quote: if line[char_index : char_index + len(in_quote)] == in_quote: in_quote = "" elif line[char_index] in ("'", '"'): long_quote = line[char_index : char_index + 3] if long_quote in ('"""', "'''"): in_quote = long_quote char_index += 2 else: in_quote = line[char_index] elif line[char_index] == "#": break char_index += 1 if ";" in line.split("#")[0] and needs_import: for part in (part.strip() for part in line.split(";")): if ( part and not part.startswith("from ") and not part.startswith(("import ", "cimport ")) ): should_skip = True return (bool(should_skip or in_quote), in_quote) class ParsedContent(NamedTuple): in_lines: List[str] lines_without_imports: List[str] import_index: int place_imports: Dict[str, List[str]] import_placements: Dict[str, str] as_map: Dict[str, Dict[str, List[str]]] imports: Dict[str, Dict[str, Any]] categorized_comments: "CommentsDict" change_count: int original_line_count: int line_separator: str sections: Any verbose_output: List[str] trailing_commas: Set[str] def file_contents(contents: str, config: Config = DEFAULT_CONFIG) -> ParsedContent: """Parses a python file taking out and categorizing imports.""" line_separator: str = config.line_ending or _infer_line_separator(contents) in_lines = contents.splitlines() if contents and contents[-1] in ("\n", "\r"): in_lines.append("") out_lines = [] original_line_count = len(in_lines) if config.old_finders: from .deprecated.finders import FindersManager finder = FindersManager(config=config).find else: finder = partial(place.module, config=config) line_count = len(in_lines) place_imports: Dict[str, List[str]] = {} import_placements: Dict[str, str] = {} as_map: Dict[str, Dict[str, List[str]]] = { "straight": defaultdict(list), "from": defaultdict(list), } imports: OrderedDict[str, Dict[str, Any]] = OrderedDict() verbose_output: List[str] = [] for section in chain(config.sections, config.forced_separate): imports[section] = {"straight": OrderedDict(), "from": OrderedDict()} categorized_comments: CommentsDict = { "from": {}, "straight": {}, "nested": {}, "above": {"straight": {}, "from": {}}, } trailing_commas: Set[str] = set() index = 0 import_index = -1 in_quote = "" while index < line_count: line = in_lines[index] index += 1 statement_index = index (skipping_line, in_quote) = skip_line( line, in_quote=in_quote, index=index, section_comments=config.section_comments ) if ( line in config.section_comments or line in config.section_comments_end ) and not skipping_line: if import_index == -1: # pragma: no branch import_index = index - 1 continue if "isort:imports-" in line and line.startswith("#"): section = line.split("isort:imports-")[-1].split()[0].upper() place_imports[section] = [] import_placements[line] = section elif "isort: imports-" in line and line.startswith("#"): section = line.split("isort: imports-")[-1].split()[0].upper() place_imports[section] = [] import_placements[line] = section if skipping_line: out_lines.append(line) continue lstripped_line = line.lstrip() if ( config.float_to_top and import_index == -1 and line and not in_quote and not lstripped_line.startswith("#") and not lstripped_line.startswith("'''") and not lstripped_line.startswith('"""') ): if not lstripped_line.startswith("import") and not lstripped_line.startswith("from"): import_index = index - 1 while 
import_index and not in_lines[import_index - 1]: import_index -= 1 else: commentless = line.split("#", 1)[0].strip() if ( ("isort:skip" in line or "isort: skip" in line) and "(" in commentless and ")" not in commentless ): import_index = index starting_line = line while "isort:skip" in starting_line or "isort: skip" in starting_line: commentless = starting_line.split("#", 1)[0] if ( "(" in commentless and not commentless.rstrip().endswith(")") and import_index < line_count ): while import_index < line_count and not commentless.rstrip().endswith( ")" ): commentless = in_lines[import_index].split("#", 1)[0] import_index += 1 else: import_index += 1 if import_index >= line_count: break starting_line = in_lines[import_index] line, *end_of_line_comment = line.split("#", 1) if ";" in line: statements = [line.strip() for line in line.split(";")] else: statements = [line] if end_of_line_comment: statements[-1] = f"{statements[-1]}#{end_of_line_comment[0]}" for statement in statements: line, raw_line = normalize_line(statement) type_of_import = import_type(line, config) or "" raw_lines = [raw_line] if not type_of_import: out_lines.append(raw_line) continue if import_index == -1: import_index = index - 1 nested_comments = {} import_string, comment = parse_comments(line) comments = [comment] if comment else [] line_parts = [part for part in strip_syntax(import_string).strip().split(" ") if part] if type_of_import == "from" and len(line_parts) == 2 and comments: nested_comments[line_parts[-1]] = comments[0] if "(" in line.split("#", 1)[0] and index < line_count: while not line.split("#")[0].strip().endswith(")") and index < line_count: line, new_comment = parse_comments(in_lines[index]) index += 1 if new_comment: comments.append(new_comment) stripped_line = strip_syntax(line).strip() if ( type_of_import == "from" and stripped_line and " " not in stripped_line.replace(" as ", "") and new_comment ): nested_comments[stripped_line] = comments[-1] import_string += line_separator + line raw_lines.append(line) else: while line.strip().endswith("\\"): line, new_comment = parse_comments(in_lines[index]) line = line.lstrip() index += 1 if new_comment: comments.append(new_comment) # Still need to check for parentheses after an escaped line if ( "(" in line.split("#")[0] and ")" not in line.split("#")[0] and index < line_count ): stripped_line = strip_syntax(line).strip() if ( type_of_import == "from" and stripped_line and " " not in stripped_line.replace(" as ", "") and new_comment ): nested_comments[stripped_line] = comments[-1] import_string += line_separator + line raw_lines.append(line) while not line.split("#")[0].strip().endswith(")") and index < line_count: line, new_comment = parse_comments(in_lines[index]) index += 1 if new_comment: comments.append(new_comment) stripped_line = strip_syntax(line).strip() if ( type_of_import == "from" and stripped_line and " " not in stripped_line.replace(" as ", "") and new_comment ): nested_comments[stripped_line] = comments[-1] import_string += line_separator + line raw_lines.append(line) stripped_line = strip_syntax(line).strip() if ( type_of_import == "from" and stripped_line and " " not in stripped_line.replace(" as ", "") and new_comment ): nested_comments[stripped_line] = comments[-1] if import_string.strip().endswith( (" import", " cimport") ) or line.strip().startswith(("import ", "cimport ")): import_string += line_separator + line else: import_string = import_string.rstrip().rstrip("\\") + " " + line.lstrip() if type_of_import == "from": cimports: bool 
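# Descriptive note (editorial comment, not in the original source): at this point the
# full, possibly multi-line from-import statement has been accumulated into
# import_string. The block below flattens parentheses and backslash continuations,
# then splits on " cimport " vs " import " so that Cython cimports and regular
# imports are normalized into the same "from X import a, b, c" shape before the
# individual imported names are extracted and categorized.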
import_string = ( import_string.replace("import(", "import (") .replace("\\", " ") .replace("\n", " ") ) if "import " not in import_string: out_lines.extend(raw_lines) continue if " cimport " in import_string: parts = import_string.split(" cimport ") cimports = True else: parts = import_string.split(" import ") cimports = False from_import = parts[0].split(" ") import_string = (" cimport " if cimports else " import ").join( [from_import[0] + " " + "".join(from_import[1:])] + parts[1:] ) just_imports = [ item.replace("{|", "{ ").replace("|}", " }") for item in strip_syntax(import_string).split() ] attach_comments_to: Optional[List[Any]] = None direct_imports = just_imports[1:] straight_import = True top_level_module = "" if "as" in just_imports and (just_imports.index("as") + 1) < len(just_imports): straight_import = False while "as" in just_imports: nested_module = None as_index = just_imports.index("as") if type_of_import == "from": nested_module = just_imports[as_index - 1] top_level_module = just_imports[0] module = top_level_module + "." + nested_module as_name = just_imports[as_index + 1] direct_imports.remove(nested_module) direct_imports.remove(as_name) direct_imports.remove("as") if nested_module == as_name and config.remove_redundant_aliases: pass elif as_name not in as_map["from"][module]: # pragma: no branch as_map["from"][module].append(as_name) full_name = f"{nested_module} as {as_name}" associated_comment = nested_comments.get(full_name) if associated_comment: categorized_comments["nested"].setdefault(top_level_module, {})[ full_name ] = associated_comment if associated_comment in comments: # pragma: no branch comments.pop(comments.index(associated_comment)) else: module = just_imports[as_index - 1] as_name = just_imports[as_index + 1] if module == as_name and config.remove_redundant_aliases: pass elif as_name not in as_map["straight"][module]: as_map["straight"][module].append(as_name) if comments and attach_comments_to is None: if nested_module and config.combine_as_imports: attach_comments_to = categorized_comments["from"].setdefault( f"{top_level_module}.__combined_as__", [] ) else: if type_of_import == "from" or ( config.remove_redundant_aliases and as_name == module.split(".")[-1] ): attach_comments_to = categorized_comments["straight"].setdefault( module, [] ) else: attach_comments_to = categorized_comments["straight"].setdefault( f"{module} as {as_name}", [] ) del just_imports[as_index : as_index + 2] if type_of_import == "from": import_from = just_imports.pop(0) placed_module = finder(import_from) if config.verbose and not config.only_modified: print(f"from-type place_module for {import_from} returned {placed_module}") elif config.verbose: verbose_output.append( f"from-type place_module for {import_from} returned {placed_module}" ) if placed_module == "": warn( f"could not place module {import_from} of line {line} --" " Do you need to define a default section?" 
) if placed_module and placed_module not in imports: raise MissingSection(import_module=import_from, section=placed_module) root = imports[placed_module][type_of_import] # type: ignore for import_name in just_imports: associated_comment = nested_comments.get(import_name) if associated_comment: categorized_comments["nested"].setdefault(import_from, {})[ import_name ] = associated_comment if associated_comment in comments: # pragma: no branch comments.pop(comments.index(associated_comment)) if ( config.force_single_line and comments and attach_comments_to is None and len(just_imports) == 1 ): nested_from_comments = categorized_comments["nested"].setdefault( import_from, {} ) existing_comment = nested_from_comments.get(just_imports[0], "") nested_from_comments[ just_imports[0] ] = f"{existing_comment}{'; ' if existing_comment else ''}{'; '.join(comments)}" comments = [] if comments and attach_comments_to is None: attach_comments_to = categorized_comments["from"].setdefault(import_from, []) if len(out_lines) > max(import_index, 1) - 1: last = out_lines[-1].rstrip() if out_lines else "" while ( last.startswith("#") and not last.endswith('"""') and not last.endswith("'''") and "isort:imports-" not in last and "isort: imports-" not in last and not config.treat_all_comments_as_code and not last.strip() in config.treat_comments_as_code ): categorized_comments["above"]["from"].setdefault(import_from, []).insert( 0, out_lines.pop(-1) ) if out_lines: last = out_lines[-1].rstrip() else: last = "" if statement_index - 1 == import_index: # pragma: no cover import_index -= len( categorized_comments["above"]["from"].get(import_from, []) ) if import_from not in root: root[import_from] = OrderedDict( (module, module in direct_imports) for module in just_imports ) else: root[import_from].update( (module, root[import_from].get(module, False) or module in direct_imports) for module in just_imports ) if comments and attach_comments_to is not None: attach_comments_to.extend(comments) if ( just_imports and just_imports[-1] and "," in import_string.split(just_imports[-1])[-1] ): trailing_commas.add(import_from) else: if comments and attach_comments_to is not None: attach_comments_to.extend(comments) comments = [] for module in just_imports: if comments: categorized_comments["straight"][module] = comments comments = [] if len(out_lines) > max(import_index, +1, 1) - 1: last = out_lines[-1].rstrip() if out_lines else "" while ( last.startswith("#") and not last.endswith('"""') and not last.endswith("'''") and "isort:imports-" not in last and "isort: imports-" not in last and not config.treat_all_comments_as_code and not last.strip() in config.treat_comments_as_code ): categorized_comments["above"]["straight"].setdefault(module, []).insert( 0, out_lines.pop(-1) ) if out_lines: last = out_lines[-1].rstrip() else: last = "" if index - 1 == import_index: import_index -= len( categorized_comments["above"]["straight"].get(module, []) ) placed_module = finder(module) if config.verbose and not config.only_modified: print(f"else-type place_module for {module} returned {placed_module}") elif config.verbose: verbose_output.append( f"else-type place_module for {module} returned {placed_module}" ) if placed_module == "": warn( f"could not place module {module} of line {line} --" " Do you need to define a default section?" 
) imports.setdefault("", {"straight": OrderedDict(), "from": OrderedDict()}) if placed_module and placed_module not in imports: raise MissingSection(import_module=module, section=placed_module) straight_import |= imports[placed_module][type_of_import].get( # type: ignore module, False ) imports[placed_module][type_of_import][module] = straight_import # type: ignore change_count = len(out_lines) - original_line_count return ParsedContent( in_lines=in_lines, lines_without_imports=out_lines, import_index=import_index, place_imports=place_imports, import_placements=import_placements, as_map=as_map, imports=imports, categorized_comments=categorized_comments, change_count=change_count, original_line_count=original_line_count, line_separator=line_separator, sections=config.sections, verbose_output=verbose_output, trailing_commas=trailing_commas, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/place.py0000644000000000000000000001206314536412763012503 0ustar00"""Contains all logic related to placing an import within a certain section.""" import importlib from fnmatch import fnmatch from functools import lru_cache from pathlib import Path from typing import FrozenSet, Iterable, Optional, Tuple from isort import sections from isort.settings import DEFAULT_CONFIG, Config from isort.utils import exists_case_sensitive LOCAL = "LOCALFOLDER" def module(name: str, config: Config = DEFAULT_CONFIG) -> str: """Returns the section placement for the given module name.""" return module_with_reason(name, config)[0] @lru_cache(maxsize=1000) def module_with_reason(name: str, config: Config = DEFAULT_CONFIG) -> Tuple[str, str]: """Returns the section placement for the given module name alongside the reasoning.""" return ( _forced_separate(name, config) or _local(name, config) or _known_pattern(name, config) or _src_path(name, config) or (config.default_section, "Default option in Config or universal default.") ) def _forced_separate(name: str, config: Config) -> Optional[Tuple[str, str]]: for forced_separate in config.forced_separate: # Ensure all forced_separate patterns will match to end of string path_glob = forced_separate if not forced_separate.endswith("*"): path_glob = f"{forced_separate}*" if fnmatch(name, path_glob) or fnmatch(name, "." + path_glob): return (forced_separate, f"Matched forced_separate ({forced_separate}) config value.") return None def _local(name: str, config: Config) -> Optional[Tuple[str, str]]: if name.startswith("."): return (LOCAL, "Module name started with a dot.") return None def _known_pattern(name: str, config: Config) -> Optional[Tuple[str, str]]: parts = name.split(".") module_names_to_check = (".".join(parts[:first_k]) for first_k in range(len(parts), 0, -1)) for module_name_to_check in module_names_to_check: for pattern, placement in config.known_patterns: if placement in config.sections and pattern.match(module_name_to_check): return (placement, f"Matched configured known pattern {pattern}") return None def _src_path( name: str, config: Config, src_paths: Optional[Iterable[Path]] = None, prefix: Tuple[str, ...] 
= (), ) -> Optional[Tuple[str, str]]: if src_paths is None: src_paths = config.src_paths root_module_name, *nested_module = name.split(".", 1) new_prefix = prefix + (root_module_name,) namespace = ".".join(new_prefix) for src_path in src_paths: module_path = (src_path / root_module_name).resolve() if not prefix and not module_path.is_dir() and src_path.name == root_module_name: module_path = src_path.resolve() if nested_module and ( namespace in config.namespace_packages or ( config.auto_identify_namespace_packages and _is_namespace_package(module_path, config.supported_extensions) ) ): return _src_path(nested_module[0], config, (module_path,), new_prefix) if ( _is_module(module_path) or _is_package(module_path) or _src_path_is_module(src_path, root_module_name) ): return (sections.FIRSTPARTY, f"Found in one of the configured src_paths: {src_path}.") return None def _is_module(path: Path) -> bool: return ( exists_case_sensitive(str(path.with_suffix(".py"))) or any( exists_case_sensitive(str(path.with_suffix(ext_suffix))) for ext_suffix in importlib.machinery.EXTENSION_SUFFIXES ) or exists_case_sensitive(str(path / "__init__.py")) ) def _is_package(path: Path) -> bool: return exists_case_sensitive(str(path)) and path.is_dir() def _is_namespace_package(path: Path, src_extensions: FrozenSet[str]) -> bool: if not _is_package(path): return False init_file = path / "__init__.py" if not init_file.exists(): filenames = [ filepath for filepath in path.iterdir() if filepath.suffix.lstrip(".") in src_extensions or filepath.name.lower() in ("setup.cfg", "pyproject.toml") ] if filenames: return False else: with init_file.open("rb") as open_init_file: file_start = open_init_file.read(4096) if ( b"__import__('pkg_resources').declare_namespace(__name__)" not in file_start and b'__import__("pkg_resources").declare_namespace(__name__)' not in file_start and b"__path__ = __import__('pkgutil').extend_path(__path__, __name__)" not in file_start and b'__path__ = __import__("pkgutil").extend_path(__path__, __name__)' not in file_start ): return False return True def _src_path_is_module(src_path: Path, module_name: str) -> bool: return ( module_name == src_path.name and src_path.is_dir() and exists_case_sensitive(str(src_path)) ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/profiles.py0000644000000000000000000000414014536412763013237 0ustar00"""Common profiles are defined here to be easily used within a project using --profile {name}""" from typing import Any, Dict black = { "multi_line_output": 3, "include_trailing_comma": True, "force_grid_wrap": 0, "use_parentheses": True, "ensure_newline_before_comments": True, "line_length": 88, } django = { "combine_as_imports": True, "include_trailing_comma": True, "multi_line_output": 5, "line_length": 79, } pycharm = { "multi_line_output": 3, "force_grid_wrap": 2, "lines_after_imports": 2, } google = { "force_single_line": True, "force_sort_within_sections": True, "lexicographical": True, "single_line_exclusions": ("typing",), "order_by_type": False, "group_by_package": True, } open_stack = { "force_single_line": True, "force_sort_within_sections": True, "lexicographical": True, } plone = black.copy() plone.update( { "force_alphabetical_sort": True, "force_single_line": True, "lines_after_imports": 2, } ) attrs = { "atomic": True, "force_grid_wrap": 0, "include_trailing_comma": True, "lines_after_imports": 2, "lines_between_types": 1, "multi_line_output": 3, "use_parentheses": True, } hug = { 
"multi_line_output": 3, "include_trailing_comma": True, "force_grid_wrap": 0, "use_parentheses": True, "line_length": 100, } wemake = { "multi_line_output": 3, "include_trailing_comma": True, "use_parentheses": True, "line_length": 79, } appnexus = { **black, "force_sort_within_sections": True, "order_by_type": False, "case_sensitive": False, "reverse_relative": True, "sort_relative_in_force_sorted_sections": True, "sections": ["FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "APPLICATION", "LOCALFOLDER"], "no_lines_before": "LOCALFOLDER", } profiles: Dict[str, Dict[str, Any]] = { "black": black, "django": django, "pycharm": pycharm, "google": google, "open_stack": open_stack, "plone": plone, "attrs": attrs, "hug": hug, "wemake": wemake, "appnexus": appnexus, } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/py.typed0000644000000000000000000000000014536412763012530 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/pylama_isort.py0000644000000000000000000000243414536412763014123 0ustar00import os import sys from contextlib import contextmanager from typing import Any, Dict, Iterator, List, Optional from pylama.lint import Linter as BaseLinter # type: ignore from isort.exceptions import FileSkipped from . import api @contextmanager def suppress_stdout() -> Iterator[None]: stdout = sys.stdout with open(os.devnull, "w") as devnull: sys.stdout = devnull yield sys.stdout = stdout class Linter(BaseLinter): # type: ignore def allow(self, path: str) -> bool: """Determine if this path should be linted.""" return path.endswith(".py") def run( self, path: str, params: Optional[Dict[str, Any]] = None, **meta: Any ) -> List[Dict[str, Any]]: """Lint the file. Return an array of error dicts if appropriate.""" with suppress_stdout(): try: if not api.check_file(path, disregard_skip=False, **params or {}): return [ { "lnum": 0, "col": 0, "text": "Incorrectly sorted imports.", "type": "ISORT", } ] except FileSkipped: pass return [] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/sections.py0000644000000000000000000000045114536412763013244 0ustar00"""Defines all sections isort uses by default""" from typing import Tuple FUTURE: str = "FUTURE" STDLIB: str = "STDLIB" THIRDPARTY: str = "THIRDPARTY" FIRSTPARTY: str = "FIRSTPARTY" LOCALFOLDER: str = "LOCALFOLDER" DEFAULT: Tuple[str, ...] = (FUTURE, STDLIB, THIRDPARTY, FIRSTPARTY, LOCALFOLDER) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/settings.py0000644000000000000000000010544714536412763013270 0ustar00"""isort/settings.py. Defines how the default settings for isort should be loaded """ import configparser import fnmatch import os import posixpath import re import stat import subprocess # nosec: Needed for gitignore support. import sys from dataclasses import dataclass, field from pathlib import Path from typing import ( TYPE_CHECKING, Any, Callable, Dict, FrozenSet, Iterable, List, Optional, Pattern, Set, Tuple, Type, Union, ) from warnings import warn from . 
import sorting, stdlibs from .exceptions import ( FormattingPluginDoesNotExist, InvalidSettingsPath, ProfileDoesNotExist, SortingFunctionDoesNotExist, UnsupportedSettings, ) from .profiles import profiles as profiles from .sections import DEFAULT as SECTION_DEFAULTS from .sections import FIRSTPARTY, FUTURE, LOCALFOLDER, STDLIB, THIRDPARTY from .utils import Trie from .wrap_modes import WrapModes from .wrap_modes import from_string as wrap_mode_from_string if TYPE_CHECKING: tomllib: Any else: if sys.version_info >= (3, 11): import tomllib else: from ._vendored import tomli as tomllib _SHEBANG_RE = re.compile(rb"^#!.*\bpython[23w]?\b") CYTHON_EXTENSIONS = frozenset({"pyx", "pxd"}) SUPPORTED_EXTENSIONS = frozenset({"py", "pyi", *CYTHON_EXTENSIONS}) BLOCKED_EXTENSIONS = frozenset({"pex"}) FILE_SKIP_COMMENTS: Tuple[str, ...] = ( "isort:" + "skip_file", "isort: " + "skip_file", ) # Concatenated to avoid this file being skipped MAX_CONFIG_SEARCH_DEPTH: int = 25 # The number of parent directories to for a config file within STOP_CONFIG_SEARCH_ON_DIRS: Tuple[str, ...] = (".git", ".hg") VALID_PY_TARGETS: Tuple[str, ...] = tuple( target.replace("py", "") for target in dir(stdlibs) if not target.startswith("_") ) CONFIG_SOURCES: Tuple[str, ...] = ( ".isort.cfg", "pyproject.toml", "setup.cfg", "tox.ini", ".editorconfig", ) DEFAULT_SKIP: FrozenSet[str] = frozenset( { ".venv", "venv", ".tox", ".eggs", ".git", ".hg", ".mypy_cache", ".nox", ".svn", ".bzr", "_build", "buck-out", "build", "dist", ".pants.d", ".direnv", "node_modules", "__pypackages__", ".pytype", } ) CONFIG_SECTIONS: Dict[str, Tuple[str, ...]] = { ".isort.cfg": ("settings", "isort"), "pyproject.toml": ("tool.isort",), "setup.cfg": ("isort", "tool:isort"), "tox.ini": ("isort", "tool:isort"), ".editorconfig": ("*", "*.py", "**.py", "*.{py}"), } FALLBACK_CONFIG_SECTIONS: Tuple[str, ...] = ("isort", "tool:isort", "tool.isort") IMPORT_HEADING_PREFIX = "import_heading_" IMPORT_FOOTER_PREFIX = "import_footer_" KNOWN_PREFIX = "known_" KNOWN_SECTION_MAPPING: Dict[str, str] = { STDLIB: "STANDARD_LIBRARY", FUTURE: "FUTURE_LIBRARY", FIRSTPARTY: "FIRST_PARTY", THIRDPARTY: "THIRD_PARTY", LOCALFOLDER: "LOCAL_FOLDER", } RUNTIME_SOURCE = "runtime" DEPRECATED_SETTINGS = ("not_skip", "keep_direct_and_as_imports") _STR_BOOLEAN_MAPPING = { "y": True, "yes": True, "t": True, "on": True, "1": True, "true": True, "n": False, "no": False, "f": False, "off": False, "0": False, "false": False, } @dataclass(frozen=True) class _Config: """Defines the data schema and defaults used for isort configuration. NOTE: known lists, such as known_standard_library, are intentionally not complete as they are dynamically determined later on. """ py_version: str = "3" force_to_top: FrozenSet[str] = frozenset() skip: FrozenSet[str] = DEFAULT_SKIP extend_skip: FrozenSet[str] = frozenset() skip_glob: FrozenSet[str] = frozenset() extend_skip_glob: FrozenSet[str] = frozenset() skip_gitignore: bool = False line_length: int = 79 wrap_length: int = 0 line_ending: str = "" sections: Tuple[str, ...] 
= SECTION_DEFAULTS no_sections: bool = False known_future_library: FrozenSet[str] = frozenset(("__future__",)) known_third_party: FrozenSet[str] = frozenset() known_first_party: FrozenSet[str] = frozenset() known_local_folder: FrozenSet[str] = frozenset() known_standard_library: FrozenSet[str] = frozenset() extra_standard_library: FrozenSet[str] = frozenset() known_other: Dict[str, FrozenSet[str]] = field(default_factory=dict) multi_line_output: WrapModes = WrapModes.GRID # type: ignore forced_separate: Tuple[str, ...] = () indent: str = " " * 4 comment_prefix: str = " #" length_sort: bool = False length_sort_straight: bool = False length_sort_sections: FrozenSet[str] = frozenset() add_imports: FrozenSet[str] = frozenset() remove_imports: FrozenSet[str] = frozenset() append_only: bool = False reverse_relative: bool = False force_single_line: bool = False single_line_exclusions: Tuple[str, ...] = () default_section: str = THIRDPARTY import_headings: Dict[str, str] = field(default_factory=dict) import_footers: Dict[str, str] = field(default_factory=dict) balanced_wrapping: bool = False use_parentheses: bool = False order_by_type: bool = True atomic: bool = False lines_before_imports: int = -1 lines_after_imports: int = -1 lines_between_sections: int = 1 lines_between_types: int = 0 combine_as_imports: bool = False combine_star: bool = False include_trailing_comma: bool = False from_first: bool = False verbose: bool = False quiet: bool = False force_adds: bool = False force_alphabetical_sort_within_sections: bool = False force_alphabetical_sort: bool = False force_grid_wrap: int = 0 force_sort_within_sections: bool = False lexicographical: bool = False group_by_package: bool = False ignore_whitespace: bool = False no_lines_before: FrozenSet[str] = frozenset() no_inline_sort: bool = False ignore_comments: bool = False case_sensitive: bool = False sources: Tuple[Dict[str, Any], ...] = () virtual_env: str = "" conda_env: str = "" ensure_newline_before_comments: bool = False directory: str = "" profile: str = "" honor_noqa: bool = False src_paths: Tuple[Path, ...] 
= () old_finders: bool = False remove_redundant_aliases: bool = False float_to_top: bool = False filter_files: bool = False formatter: str = "" formatting_function: Optional[Callable[[str, str, object], str]] = None color_output: bool = False treat_comments_as_code: FrozenSet[str] = frozenset() treat_all_comments_as_code: bool = False supported_extensions: FrozenSet[str] = SUPPORTED_EXTENSIONS blocked_extensions: FrozenSet[str] = BLOCKED_EXTENSIONS constants: FrozenSet[str] = frozenset() classes: FrozenSet[str] = frozenset() variables: FrozenSet[str] = frozenset() dedup_headings: bool = False only_sections: bool = False only_modified: bool = False combine_straight_imports: bool = False auto_identify_namespace_packages: bool = True namespace_packages: FrozenSet[str] = frozenset() follow_links: bool = True indented_import_headings: bool = True honor_case_in_force_sorted_sections: bool = False sort_relative_in_force_sorted_sections: bool = False overwrite_in_place: bool = False reverse_sort: bool = False star_first: bool = False import_dependencies = Dict[str, str] git_ls_files: Dict[Path, Set[str]] = field(default_factory=dict) format_error: str = "{error}: {message}" format_success: str = "{success}: {message}" sort_order: str = "natural" sort_reexports: bool = False split_on_trailing_comma: bool = False def __post_init__(self) -> None: py_version = self.py_version if py_version == "auto": # pragma: no cover if sys.version_info.major == 2 and sys.version_info.minor <= 6: py_version = "2" elif sys.version_info.major == 3 and ( sys.version_info.minor <= 5 or sys.version_info.minor >= 12 ): py_version = "3" else: py_version = f"{sys.version_info.major}{sys.version_info.minor}" if py_version not in VALID_PY_TARGETS: raise ValueError( f"The python version {py_version} is not supported. " "You can set a python version with the -py or --python-version flag. " f"The following versions are supported: {VALID_PY_TARGETS}" ) if py_version != "all": object.__setattr__(self, "py_version", f"py{py_version}") if not self.known_standard_library: object.__setattr__( self, "known_standard_library", frozenset(getattr(stdlibs, self.py_version).stdlib) ) if self.multi_line_output == WrapModes.VERTICAL_GRID_GROUPED_NO_COMMA: # type: ignore vertical_grid_grouped = WrapModes.VERTICAL_GRID_GROUPED # type: ignore object.__setattr__(self, "multi_line_output", vertical_grid_grouped) if self.force_alphabetical_sort: object.__setattr__(self, "force_alphabetical_sort_within_sections", True) object.__setattr__(self, "no_sections", True) object.__setattr__(self, "lines_between_types", 1) object.__setattr__(self, "from_first", True) if self.wrap_length > self.line_length: raise ValueError( "wrap_length must be set lower than or equal to line_length: " f"{self.wrap_length} > {self.line_length}." 
) def __hash__(self) -> int: return id(self) _DEFAULT_SETTINGS = {**vars(_Config()), "source": "defaults"} class Config(_Config): def __init__( self, settings_file: str = "", settings_path: str = "", config: Optional[_Config] = None, **config_overrides: Any, ): self._known_patterns: Optional[List[Tuple[Pattern[str], str]]] = None self._section_comments: Optional[Tuple[str, ...]] = None self._section_comments_end: Optional[Tuple[str, ...]] = None self._skips: Optional[FrozenSet[str]] = None self._skip_globs: Optional[FrozenSet[str]] = None self._sorting_function: Optional[Callable[..., List[str]]] = None if config: config_vars = vars(config).copy() config_vars.update(config_overrides) config_vars["py_version"] = config_vars["py_version"].replace("py", "") config_vars.pop("_known_patterns") config_vars.pop("_section_comments") config_vars.pop("_section_comments_end") config_vars.pop("_skips") config_vars.pop("_skip_globs") config_vars.pop("_sorting_function") super().__init__(**config_vars) return # We can't use self.quiet to conditionally show warnings before super.__init__() is called # at the end of this method. _Config is also frozen so setting self.quiet isn't possible. # Therefore we extract quiet early here in a variable and use that in warning conditions. quiet = config_overrides.get("quiet", False) sources: List[Dict[str, Any]] = [_DEFAULT_SETTINGS] config_settings: Dict[str, Any] project_root: str if settings_file: config_settings = _get_config_data( settings_file, CONFIG_SECTIONS.get(os.path.basename(settings_file), FALLBACK_CONFIG_SECTIONS), ) project_root = os.path.dirname(settings_file) if not config_settings and not quiet: warn( f"A custom settings file was specified: {settings_file} but no configuration " "was found inside. This can happen when [settings] is used as the config " "header instead of [isort]. " "See: https://pycqa.github.io/isort/docs/configuration/config_files" "/#custom_config_files for more information." 
) elif settings_path: if not os.path.exists(settings_path): raise InvalidSettingsPath(settings_path) settings_path = os.path.abspath(settings_path) project_root, config_settings = _find_config(settings_path) else: config_settings = {} project_root = os.getcwd() profile_name = config_overrides.get("profile", config_settings.get("profile", "")) profile: Dict[str, Any] = {} if profile_name: if profile_name not in profiles: import pkg_resources for plugin in pkg_resources.iter_entry_points("isort.profiles"): profiles.setdefault(plugin.name, plugin.load()) if profile_name not in profiles: raise ProfileDoesNotExist(profile_name) profile = profiles[profile_name].copy() profile["source"] = f"{profile_name} profile" sources.append(profile) if config_settings: sources.append(config_settings) if config_overrides: config_overrides["source"] = RUNTIME_SOURCE sources.append(config_overrides) combined_config = {**profile, **config_settings, **config_overrides} if "indent" in combined_config: indent = str(combined_config["indent"]) if indent.isdigit(): indent = " " * int(indent) else: indent = indent.strip("'").strip('"') if indent.lower() == "tab": indent = "\t" combined_config["indent"] = indent known_other = {} import_headings = {} import_footers = {} for key, value in tuple(combined_config.items()): # Collect all known sections beyond those that have direct entries if key.startswith(KNOWN_PREFIX) and key not in ( "known_standard_library", "known_future_library", "known_third_party", "known_first_party", "known_local_folder", ): import_heading = key[len(KNOWN_PREFIX) :].lower() maps_to_section = import_heading.upper() combined_config.pop(key) if maps_to_section in KNOWN_SECTION_MAPPING: section_name = f"known_{KNOWN_SECTION_MAPPING[maps_to_section].lower()}" if section_name in combined_config and not quiet: warn( f"Can't set both {key} and {section_name} in the same config file.\n" f"Default to {section_name} if unsure." "\n\n" "See: https://pycqa.github.io/isort/" "#custom-sections-and-ordering." ) else: combined_config[section_name] = frozenset(value) else: known_other[import_heading] = frozenset(value) if maps_to_section not in combined_config.get("sections", ()) and not quiet: warn( f"`{key}` setting is defined, but {maps_to_section} is not" " included in `sections` config option:" f" {combined_config.get('sections', SECTION_DEFAULTS)}.\n\n" "See: https://pycqa.github.io/isort/" "#custom-sections-and-ordering." ) if key.startswith(IMPORT_HEADING_PREFIX): import_headings[key[len(IMPORT_HEADING_PREFIX) :].lower()] = str(value) if key.startswith(IMPORT_FOOTER_PREFIX): import_footers[key[len(IMPORT_FOOTER_PREFIX) :].lower()] = str(value) # Coerce all provided config values into their correct type default_value = _DEFAULT_SETTINGS.get(key, None) if default_value is None: continue combined_config[key] = type(default_value)(value) for section in combined_config.get("sections", ()): if section in SECTION_DEFAULTS: continue if not section.lower() in known_other: config_keys = ", ".join(known_other.keys()) warn( f"`sections` setting includes {section}, but no known_{section.lower()} " "is defined. " f"The following known_SECTION config options are defined: {config_keys}." 
) if "directory" not in combined_config: combined_config["directory"] = ( os.path.dirname(config_settings["source"]) if config_settings.get("source", None) else os.getcwd() ) path_root = Path(combined_config.get("directory", project_root)).resolve() path_root = path_root if path_root.is_dir() else path_root.parent if "src_paths" not in combined_config: combined_config["src_paths"] = (path_root / "src", path_root) else: src_paths: List[Path] = [] for src_path in combined_config.get("src_paths", ()): full_paths = ( path_root.glob(src_path) if "*" in str(src_path) else [path_root / src_path] ) for path in full_paths: if path not in src_paths: src_paths.append(path) combined_config["src_paths"] = tuple(src_paths) if "formatter" in combined_config: import pkg_resources for plugin in pkg_resources.iter_entry_points("isort.formatters"): if plugin.name == combined_config["formatter"]: combined_config["formatting_function"] = plugin.load() break else: raise FormattingPluginDoesNotExist(combined_config["formatter"]) # Remove any config values that are used for creating config object but # aren't defined in dataclass combined_config.pop("source", None) combined_config.pop("sources", None) combined_config.pop("runtime_src_paths", None) deprecated_options_used = [ option for option in combined_config if option in DEPRECATED_SETTINGS ] if deprecated_options_used: for deprecated_option in deprecated_options_used: combined_config.pop(deprecated_option) if not quiet: warn( "W0503: Deprecated config options were used: " f"{', '.join(deprecated_options_used)}." "Please see the 5.0.0 upgrade guide: " "https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html" ) if known_other: combined_config["known_other"] = known_other if import_headings: for import_heading_key in import_headings: combined_config.pop(f"{IMPORT_HEADING_PREFIX}{import_heading_key}") combined_config["import_headings"] = import_headings if import_footers: for import_footer_key in import_footers: combined_config.pop(f"{IMPORT_FOOTER_PREFIX}{import_footer_key}") combined_config["import_footers"] = import_footers unsupported_config_errors = {} for option in set(combined_config.keys()).difference( getattr(_Config, "__dataclass_fields__", {}).keys() ): for source in reversed(sources): if option in source: unsupported_config_errors[option] = { "value": source[option], "source": source["source"], } if unsupported_config_errors: raise UnsupportedSettings(unsupported_config_errors) super().__init__(sources=tuple(sources), **combined_config) def is_supported_filetype(self, file_name: str) -> bool: _root, ext = os.path.splitext(file_name) ext = ext.lstrip(".") if ext in self.supported_extensions: return True if ext in self.blocked_extensions: return False # Skip editor backup files. 
if file_name.endswith("~"): return False try: if stat.S_ISFIFO(os.stat(file_name).st_mode): return False except OSError: pass try: with open(file_name, "rb") as fp: line = fp.readline(100) except OSError: return False else: return bool(_SHEBANG_RE.match(line)) def _check_folder_git_ls_files(self, folder: str) -> Optional[Path]: env = {**os.environ, "LANG": "C.UTF-8"} try: topfolder_result = subprocess.check_output( # nosec # skipcq: PYL-W1510 ["git", "-C", folder, "rev-parse", "--show-toplevel"], encoding="utf-8", env=env ) except subprocess.CalledProcessError: return None git_folder = Path(topfolder_result.rstrip()).resolve() # files committed to git tracked_files = ( subprocess.check_output( # nosec # skipcq: PYL-W1510 ["git", "-C", str(git_folder), "ls-files", "-z"], encoding="utf-8", env=env, ) .rstrip("\0") .split("\0") ) # files that haven't been committed yet, but aren't ignored tracked_files_others = ( subprocess.check_output( # nosec # skipcq: PYL-W1510 ["git", "-C", str(git_folder), "ls-files", "-z", "--others", "--exclude-standard"], encoding="utf-8", env=env, ) .rstrip("\0") .split("\0") ) self.git_ls_files[git_folder] = { str(git_folder / Path(f)) for f in tracked_files + tracked_files_others } return git_folder def is_skipped(self, file_path: Path) -> bool: """Returns True if the file and/or folder should be skipped based on current settings.""" if self.directory and Path(self.directory) in file_path.resolve().parents: file_name = os.path.relpath(file_path.resolve(), self.directory) else: file_name = str(file_path) os_path = str(file_path) normalized_path = os_path.replace("\\", "/") if normalized_path[1:2] == ":": normalized_path = normalized_path[2:] for skip_path in self.skips: if posixpath.abspath(normalized_path) == posixpath.abspath( skip_path.replace("\\", "/") ): return True position = os.path.split(file_name) while position[1]: if position[1] in self.skips: return True position = os.path.split(position[0]) for sglob in self.skip_globs: if fnmatch.fnmatch(file_name, sglob) or fnmatch.fnmatch("/" + file_name, sglob): return True if not (os.path.isfile(os_path) or os.path.isdir(os_path) or os.path.islink(os_path)): return True if self.skip_gitignore: if file_path.name == ".git": # pragma: no cover return True git_folder = None file_paths = [file_path, file_path.resolve()] for folder in self.git_ls_files: if any(folder in path.parents for path in file_paths): git_folder = folder break else: git_folder = self._check_folder_git_ls_files(str(file_path.parent)) # git_ls_files are good files you should parse. If you're not in the allow list, skip. 
if ( git_folder and not file_path.is_dir() and str(file_path.resolve()) not in self.git_ls_files[git_folder] ): return True return False @property def known_patterns(self) -> List[Tuple[Pattern[str], str]]: if self._known_patterns is not None: return self._known_patterns self._known_patterns = [] pattern_sections = [STDLIB] + [section for section in self.sections if section != STDLIB] for placement in reversed(pattern_sections): known_placement = KNOWN_SECTION_MAPPING.get(placement, placement).lower() config_key = f"{KNOWN_PREFIX}{known_placement}" known_modules = getattr(self, config_key, self.known_other.get(known_placement, ())) extra_modules = getattr(self, f"extra_{known_placement}", ()) all_modules = set(extra_modules).union(known_modules) known_patterns = [ pattern for known_pattern in all_modules for pattern in self._parse_known_pattern(known_pattern) ] for known_pattern in known_patterns: regexp = "^" + known_pattern.replace("*", ".*").replace("?", ".?") + "$" self._known_patterns.append((re.compile(regexp), placement)) return self._known_patterns @property def section_comments(self) -> Tuple[str, ...]: if self._section_comments is not None: return self._section_comments self._section_comments = tuple(f"# {heading}" for heading in self.import_headings.values()) return self._section_comments @property def section_comments_end(self) -> Tuple[str, ...]: if self._section_comments_end is not None: return self._section_comments_end self._section_comments_end = tuple(f"# {footer}" for footer in self.import_footers.values()) return self._section_comments_end @property def skips(self) -> FrozenSet[str]: if self._skips is not None: return self._skips self._skips = self.skip.union(self.extend_skip) return self._skips @property def skip_globs(self) -> FrozenSet[str]: if self._skip_globs is not None: return self._skip_globs self._skip_globs = self.skip_glob.union(self.extend_skip_glob) return self._skip_globs @property def sorting_function(self) -> Callable[..., List[str]]: if self._sorting_function is not None: return self._sorting_function if self.sort_order == "natural": self._sorting_function = sorting.naturally elif self.sort_order == "native": self._sorting_function = sorted else: available_sort_orders = ["natural", "native"] import pkg_resources for sort_plugin in pkg_resources.iter_entry_points("isort.sort_function"): available_sort_orders.append(sort_plugin.name) if sort_plugin.name == self.sort_order: self._sorting_function = sort_plugin.load() break else: raise SortingFunctionDoesNotExist(self.sort_order, available_sort_orders) return self._sorting_function def _parse_known_pattern(self, pattern: str) -> List[str]: """Expand pattern if identified as a directory and return found sub packages""" if pattern.endswith(os.path.sep): patterns = [ filename for filename in os.listdir(os.path.join(self.directory, pattern)) if os.path.isdir(os.path.join(self.directory, pattern, filename)) ] else: patterns = [pattern] return patterns def _get_str_to_type_converter(setting_name: str) -> Union[Callable[[str], Any], Type[Any]]: type_converter: Union[Callable[[str], Any], Type[Any]] = type( _DEFAULT_SETTINGS.get(setting_name, "") ) if type_converter == WrapModes: type_converter = wrap_mode_from_string return type_converter def _as_list(value: str) -> List[str]: if isinstance(value, list): return [item.strip() for item in value] filtered = [item.strip() for item in value.replace("\n", ",").split(",") if item.strip()] return filtered def _abspaths(cwd: str, values: Iterable[str]) -> Set[str]: paths = 
{ os.path.join(cwd, value) if not value.startswith(os.path.sep) and value.endswith(os.path.sep) else value for value in values } return paths def _find_config(path: str) -> Tuple[str, Dict[str, Any]]: current_directory = path tries = 0 while current_directory and tries < MAX_CONFIG_SEARCH_DEPTH: for config_file_name in CONFIG_SOURCES: potential_config_file = os.path.join(current_directory, config_file_name) if os.path.isfile(potential_config_file): config_data: Dict[str, Any] try: config_data = _get_config_data( potential_config_file, CONFIG_SECTIONS[config_file_name] ) except Exception: warn(f"Failed to pull configuration information from {potential_config_file}") config_data = {} if config_data: return (current_directory, config_data) for stop_dir in STOP_CONFIG_SEARCH_ON_DIRS: if os.path.isdir(os.path.join(current_directory, stop_dir)): return (current_directory, {}) new_directory = os.path.split(current_directory)[0] if new_directory == current_directory: break current_directory = new_directory tries += 1 return (path, {}) def find_all_configs(path: str) -> Trie: """ Looks for config files in the path provided and in all of its sub-directories. Parses and stores any config file encountered in a trie and returns the root of the trie """ trie_root = Trie("default", {}) for dirpath, _, _ in os.walk(path): for config_file_name in CONFIG_SOURCES: potential_config_file = os.path.join(dirpath, config_file_name) if os.path.isfile(potential_config_file): config_data: Dict[str, Any] try: config_data = _get_config_data( potential_config_file, CONFIG_SECTIONS[config_file_name] ) except Exception: warn(f"Failed to pull configuration information from {potential_config_file}") config_data = {} if config_data: trie_root.insert(potential_config_file, config_data) break return trie_root def _get_config_data(file_path: str, sections: Tuple[str, ...]) -> Dict[str, Any]: settings: Dict[str, Any] = {} if file_path.endswith(".toml"): with open(file_path, "rb") as bin_config_file: config = tomllib.load(bin_config_file) for section in sections: config_section = config for key in section.split("."): config_section = config_section.get(key, {}) settings.update(config_section) else: with open(file_path, encoding="utf-8") as config_file: if file_path.endswith(".editorconfig"): line = "\n" last_position = config_file.tell() while line: line = config_file.readline() if "[" in line: config_file.seek(last_position) break last_position = config_file.tell() config = configparser.ConfigParser(strict=False) config.read_file(config_file) for section in sections: if section.startswith("*.{") and section.endswith("}"): extension = section[len("*.{") : -1] for config_key in config.keys(): if ( config_key.startswith("*.{") and config_key.endswith("}") and extension in map( lambda text: text.strip(), config_key[len("*.{") : -1].split(",") # type: ignore # noqa ) ): settings.update(config.items(config_key)) elif config.has_section(section): settings.update(config.items(section)) if settings: settings["source"] = file_path if file_path.endswith(".editorconfig"): indent_style = settings.pop("indent_style", "").strip() indent_size = settings.pop("indent_size", "").strip() if indent_size == "tab": indent_size = settings.pop("tab_width", "").strip() if indent_style == "space": settings["indent"] = " " * (indent_size and int(indent_size) or 4) elif indent_style == "tab": settings["indent"] = "\t" * (indent_size and int(indent_size) or 1) max_line_length = settings.pop("max_line_length", "").strip() if max_line_length and 
(max_line_length == "off" or max_line_length.isdigit()): settings["line_length"] = ( float("inf") if max_line_length == "off" else int(max_line_length) ) settings = { key: value for key, value in settings.items() if key in _DEFAULT_SETTINGS.keys() or key.startswith(KNOWN_PREFIX) } for key, value in settings.items(): existing_value_type = _get_str_to_type_converter(key) if existing_value_type == tuple: settings[key] = tuple(_as_list(value)) elif existing_value_type == frozenset: settings[key] = frozenset(_as_list(settings.get(key))) # type: ignore elif existing_value_type == bool: # Only some configuration formats support native boolean values. if not isinstance(value, bool): value = _as_bool(value) settings[key] = value elif key.startswith(KNOWN_PREFIX): settings[key] = _abspaths(os.path.dirname(file_path), _as_list(value)) elif key == "force_grid_wrap": try: result = existing_value_type(value) except ValueError: # backwards compatibility for true / false force grid wrap result = 0 if value.lower().strip() == "false" else 2 settings[key] = result elif key == "comment_prefix": settings[key] = str(value).strip("'").strip('"') else: settings[key] = existing_value_type(value) return settings def _as_bool(value: str) -> bool: """Given a string value that represents True or False, returns the Boolean equivalent. Heavily inspired from distutils strtobool. """ try: return _STR_BOOLEAN_MAPPING[value.lower()] except KeyError: raise ValueError(f"invalid truth value {value}") DEFAULT_CONFIG = Config() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/setuptools_commands.py0000644000000000000000000000437114536412763015524 0ustar00import glob import os import sys from typing import Any, Dict, Iterator, List from warnings import warn import setuptools # type: ignore from . import api from .settings import DEFAULT_CONFIG class ISortCommand(setuptools.Command): # type: ignore """The :class:`ISortCommand` class is used by setuptools to perform imports checks on registered modules. 
""" description = "Run isort on modules registered in setuptools" user_options: List[Any] = [] def initialize_options(self) -> None: default_settings = vars(DEFAULT_CONFIG).copy() for key, value in default_settings.items(): setattr(self, key, value) def finalize_options(self) -> None: """Get options from config files.""" self.arguments: Dict[str, Any] = {} # skipcq: PYL-W0201 self.arguments["settings_path"] = os.getcwd() def distribution_files(self) -> Iterator[str]: """Find distribution packages.""" # This is verbatim from flake8 if self.distribution.packages: # pragma: no cover package_dirs = self.distribution.package_dir or {} for package in self.distribution.packages: pkg_dir = package if package in package_dirs: pkg_dir = package_dirs[package] elif "" in package_dirs: # pragma: no cover pkg_dir = package_dirs[""] + os.path.sep + pkg_dir yield pkg_dir.replace(".", os.path.sep) if self.distribution.py_modules: for filename in self.distribution.py_modules: yield f"{filename}.py" # Don't miss the setup.py file itself yield "setup.py" def run(self) -> None: arguments = self.arguments wrong_sorted_files = False for path in self.distribution_files(): for python_file in glob.iglob(os.path.join(path, "*.py")): try: if not api.check_file(python_file, **arguments): wrong_sorted_files = True # pragma: no cover except OSError as error: # pragma: no cover warn(f"Unable to parse file {python_file} due to {error}") if wrong_sorted_files: sys.exit(1) # pragma: no cover ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/sorting.py0000644000000000000000000001064314536412763013106 0ustar00import re from typing import TYPE_CHECKING, Any, Callable, Iterable, List, Optional if TYPE_CHECKING: from .settings import Config else: Config = Any _import_line_intro_re = re.compile("^(?:from|import) ") _import_line_midline_import_re = re.compile(" import ") def module_key( module_name: str, config: Config, sub_imports: bool = False, ignore_case: bool = False, section_name: Optional[Any] = None, straight_import: Optional[bool] = False, ) -> str: match = re.match(r"^(\.+)\s*(.*)", module_name) if match: sep = " " if config.reverse_relative else "_" module_name = sep.join(match.groups()) prefix = "" if ignore_case: module_name = str(module_name).lower() else: module_name = str(module_name) if sub_imports and config.order_by_type: if module_name in config.constants: prefix = "A" elif module_name in config.classes: prefix = "B" elif module_name in config.variables: prefix = "C" elif module_name.isupper() and len(module_name) > 1: # see issue #376 prefix = "A" elif module_name in config.classes or module_name[0:1].isupper(): prefix = "B" else: prefix = "C" if not config.case_sensitive: module_name = module_name.lower() length_sort = ( config.length_sort or (config.length_sort_straight and straight_import) or str(section_name).lower() in config.length_sort_sections ) _length_sort_maybe = (str(len(module_name)) + ":" + module_name) if length_sort else module_name return f"{module_name in config.force_to_top and 'A' or 'B'}{prefix}{_length_sort_maybe}" def section_key(line: str, config: Config) -> str: section = "B" if ( not config.sort_relative_in_force_sorted_sections and config.reverse_relative and line.startswith("from .") ): match = re.match(r"^from (\.+)\s*(.*)", line) if match: # pragma: no cover - regex always matches if line starts with "from ." 
line = f"from {' '.join(match.groups())}" if config.group_by_package and line.strip().startswith("from"): line = line.split(" import", 1)[0] if config.lexicographical: line = _import_line_intro_re.sub("", _import_line_midline_import_re.sub(".", line)) else: line = re.sub("^from ", "", line) line = re.sub("^import ", "", line) if config.sort_relative_in_force_sorted_sections: sep = " " if config.reverse_relative else "_" line = re.sub(r"^(\.+)", rf"\1{sep}", line) if line.split(" ")[0] in config.force_to_top: section = "A" # * If honor_case_in_force_sorted_sections is true, and case_sensitive and # order_by_type are different, only ignore case in part of the line. # * Otherwise, let order_by_type decide the sorting of the whole line. This # is only "correct" if case_sensitive and order_by_type have the same value. if config.honor_case_in_force_sorted_sections and config.case_sensitive != config.order_by_type: split_module = line.split(" import ", 1) if len(split_module) > 1: module_name, names = split_module if not config.case_sensitive: module_name = module_name.lower() if not config.order_by_type: names = names.lower() line = " import ".join([module_name, names]) elif not config.case_sensitive: line = line.lower() elif not config.order_by_type: line = line.lower() return f"{section}{len(line) if config.length_sort else ''}{line}" def sort( config: Config, to_sort: Iterable[str], key: Optional[Callable[[str], Any]] = None, reverse: bool = False, ) -> List[str]: return config.sorting_function(to_sort, key=key, reverse=reverse) def naturally( to_sort: Iterable[str], key: Optional[Callable[[str], Any]] = None, reverse: bool = False ) -> List[str]: """Returns a naturally sorted list""" if key is None: key_callback = _natural_keys else: def key_callback(text: str) -> List[Any]: return _natural_keys(key(text)) # type: ignore return sorted(to_sort, key=key_callback, reverse=reverse) def _atoi(text: str) -> Any: return int(text) if text.isdigit() else text def _natural_keys(text: str) -> List[Any]: return [_atoi(c) for c in re.split(r"(\d+)", text)] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/__init__.py0000644000000000000000000000014414536412763014617 0ustar00from . import all as _all from . import py2, py3, py27, py36, py37, py38, py39, py310, py311, py312 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/all.py0000644000000000000000000000007114536412763013627 0ustar00from . import py2, py3 stdlib = py2.stdlib | py3.stdlib ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py2.py0000644000000000000000000000005114536412763013567 0ustar00from . import py27 stdlib = py27.stdlib ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py27.py0000644000000000000000000001063014536412763013662 0ustar00""" File contains the standard library of Python 2.7. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "AL", "BaseHTTPServer", "Bastion", "CGIHTTPServer", "Carbon", "ColorPicker", "ConfigParser", "Cookie", "DEVICE", "DocXMLRPCServer", "EasyDialogs", "FL", "FrameWork", "GL", "HTMLParser", "MacOS", "MimeWriter", "MiniAEFrame", "Nav", "PixMapWrapper", "Queue", "SUNAUDIODEV", "ScrolledText", "SimpleHTTPServer", "SimpleXMLRPCServer", "SocketServer", "StringIO", "Tix", "Tkinter", "UserDict", "UserList", "UserString", "W", "__builtin__", "_ast", "_winreg", "abc", "aepack", "aetools", "aetypes", "aifc", "al", "anydbm", "applesingle", "argparse", "array", "ast", "asynchat", "asyncore", "atexit", "audioop", "autoGIL", "base64", "bdb", "binascii", "binhex", "bisect", "bsddb", "buildtools", "bz2", "cPickle", "cProfile", "cStringIO", "calendar", "cd", "cfmfile", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "commands", "compileall", "compiler", "contextlib", "cookielib", "copy", "copy_reg", "crypt", "csv", "ctypes", "curses", "datetime", "dbhash", "dbm", "decimal", "difflib", "dircache", "dis", "distutils", "dl", "doctest", "dumbdbm", "dummy_thread", "dummy_threading", "email", "encodings", "ensurepip", "errno", "exceptions", "fcntl", "filecmp", "fileinput", "findertools", "fl", "flp", "fm", "fnmatch", "formatter", "fpectl", "fpformat", "fractions", "ftplib", "functools", "future_builtins", "gc", "gdbm", "gensuitemodule", "getopt", "getpass", "gettext", "gl", "glob", "grp", "gzip", "hashlib", "heapq", "hmac", "hotshot", "htmlentitydefs", "htmllib", "httplib", "ic", "icopen", "imageop", "imaplib", "imgfile", "imghdr", "imp", "importlib", "imputil", "inspect", "io", "itertools", "jpeg", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "macerrors", "macostools", "macpath", "macresource", "mailbox", "mailcap", "marshal", "math", "md5", "mhlib", "mimetools", "mimetypes", "mimify", "mmap", "modulefinder", "msilib", "msvcrt", "multifile", "multiprocessing", "mutex", "netrc", "new", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "parser", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "popen2", "poplib", "posix", "posixfile", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "quopri", "random", "re", "readline", "resource", "rexec", "rfc822", "rlcompleter", "robotparser", "runpy", "sched", "select", "sets", "sgmllib", "sha", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statvfs", "string", "stringprep", "struct", "subprocess", "sunau", "sunaudiodev", "symbol", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "thread", "threading", "time", "timeit", "token", "tokenize", "trace", "traceback", "ttk", "tty", "turtle", "types", "unicodedata", "unittest", "urllib", "urllib2", "urlparse", "user", "uu", "uuid", "videoreader", "warnings", "wave", "weakref", "webbrowser", "whichdb", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpclib", "zipfile", "zipimport", "zlib", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py3.py0000644000000000000000000000030714536412763013574 0ustar00from . 
import py36, py37, py38, py39, py310, py311, py312 stdlib = ( py36.stdlib | py37.stdlib | py38.stdlib | py39.stdlib | py310.stdlib | py311.stdlib | py312.stdlib ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py310.py0000644000000000000000000000631614536412763013743 0ustar00""" File contains the standard library of Python 3.10. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. """ stdlib = { "_ast", "_thread", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "binhex", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "graphlib", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "idlelib", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", "zoneinfo", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py311.py0000644000000000000000000000641114536412763013740 0ustar00""" File contains the standard library of Python 3.11. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_thread", "_tkinter", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "graphlib", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "idlelib", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "sitecustomize", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "tomllib", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "usercustomize", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", "zoneinfo", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py312.py0000644000000000000000000000630014536412763013736 0ustar00""" File contains the standard library of Python 3.12. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_thread", "_tkinter", "abc", "aifc", "argparse", "array", "ast", "asyncio", "atexit", "audioop", "base64", "bdb", "binascii", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "doctest", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "graphlib", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "idlelib", "imaplib", "imghdr", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "sitecustomize", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "tomllib", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "usercustomize", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", "zoneinfo", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py36.py0000644000000000000000000000635614536412763013674 0ustar00""" File contains the standard library of Python 3.6. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_dummy_thread", "_thread", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "binhex", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "dummy_threading", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "formatter", "fpectl", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "macpath", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "parser", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symbol", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py37.py0000644000000000000000000000640614536412763013671 0ustar00""" File contains the standard library of Python 3.7. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_dummy_thread", "_thread", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "binhex", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "dummy_threading", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "formatter", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "macpath", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "parser", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symbol", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py38.py0000644000000000000000000000636714536412763013700 0ustar00""" File contains the standard library of Python 3.8. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_dummy_thread", "_thread", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "binhex", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "dummy_threading", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "formatter", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "parser", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symbol", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/stdlibs/py39.py0000644000000000000000000000635314536412763013674 0ustar00""" File contains the standard library of Python 3.9. DO NOT EDIT. If the standard library changes, a new list should be created using the mkstdlibs.py script. 
""" stdlib = { "_ast", "_thread", "abc", "aifc", "argparse", "array", "ast", "asynchat", "asyncio", "asyncore", "atexit", "audioop", "base64", "bdb", "binascii", "binhex", "bisect", "builtins", "bz2", "cProfile", "calendar", "cgi", "cgitb", "chunk", "cmath", "cmd", "code", "codecs", "codeop", "collections", "colorsys", "compileall", "concurrent", "configparser", "contextlib", "contextvars", "copy", "copyreg", "crypt", "csv", "ctypes", "curses", "dataclasses", "datetime", "dbm", "decimal", "difflib", "dis", "distutils", "doctest", "email", "encodings", "ensurepip", "enum", "errno", "faulthandler", "fcntl", "filecmp", "fileinput", "fnmatch", "formatter", "fractions", "ftplib", "functools", "gc", "getopt", "getpass", "gettext", "glob", "graphlib", "grp", "gzip", "hashlib", "heapq", "hmac", "html", "http", "imaplib", "imghdr", "imp", "importlib", "inspect", "io", "ipaddress", "itertools", "json", "keyword", "lib2to3", "linecache", "locale", "logging", "lzma", "mailbox", "mailcap", "marshal", "math", "mimetypes", "mmap", "modulefinder", "msilib", "msvcrt", "multiprocessing", "netrc", "nis", "nntplib", "ntpath", "numbers", "operator", "optparse", "os", "ossaudiodev", "parser", "pathlib", "pdb", "pickle", "pickletools", "pipes", "pkgutil", "platform", "plistlib", "poplib", "posix", "posixpath", "pprint", "profile", "pstats", "pty", "pwd", "py_compile", "pyclbr", "pydoc", "queue", "quopri", "random", "re", "readline", "reprlib", "resource", "rlcompleter", "runpy", "sched", "secrets", "select", "selectors", "shelve", "shlex", "shutil", "signal", "site", "smtpd", "smtplib", "sndhdr", "socket", "socketserver", "spwd", "sqlite3", "sre", "sre_compile", "sre_constants", "sre_parse", "ssl", "stat", "statistics", "string", "stringprep", "struct", "subprocess", "sunau", "symbol", "symtable", "sys", "sysconfig", "syslog", "tabnanny", "tarfile", "telnetlib", "tempfile", "termios", "test", "textwrap", "threading", "time", "timeit", "tkinter", "token", "tokenize", "trace", "traceback", "tracemalloc", "tty", "turtle", "turtledemo", "types", "typing", "unicodedata", "unittest", "urllib", "uu", "uuid", "venv", "warnings", "wave", "weakref", "webbrowser", "winreg", "winsound", "wsgiref", "xdrlib", "xml", "xmlrpc", "zipapp", "zipfile", "zipimport", "zlib", "zoneinfo", } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/utils.py0000644000000000000000000000455514536412763012566 0ustar00import os import sys from pathlib import Path from typing import Any, Dict, Optional, Tuple class TrieNode: def __init__(self, config_file: str = "", config_data: Optional[Dict[str, Any]] = None) -> None: if not config_data: config_data = {} self.nodes: Dict[str, TrieNode] = {} self.config_info: Tuple[str, Dict[str, Any]] = (config_file, config_data) class Trie: """ A prefix tree to store the paths of all config files and to search the nearest config associated with each file """ def __init__(self, config_file: str = "", config_data: Optional[Dict[str, Any]] = None) -> None: self.root: TrieNode = TrieNode(config_file, config_data) def insert(self, config_file: str, config_data: Dict[str, Any]) -> None: resolved_config_path_as_tuple = Path(config_file).parent.resolve().parts temp = self.root for path in resolved_config_path_as_tuple: if path not in temp.nodes: temp.nodes[path] = TrieNode() temp = temp.nodes[path] temp.config_info = (config_file, config_data) def search(self, filename: str) -> Tuple[str, Dict[str, Any]]: """ Returns the closest config relative to 
filename by doing a depth first search on the prefix tree. """ resolved_file_path_as_tuple = Path(filename).resolve().parts temp = self.root last_stored_config: Tuple[str, Dict[str, Any]] = ("", {}) for path in resolved_file_path_as_tuple: if temp.config_info[0]: last_stored_config = temp.config_info if path not in temp.nodes: break temp = temp.nodes[path] return last_stored_config def exists_case_sensitive(path: str) -> bool: """Returns if the given path exists and also matches the case on Windows. When finding files that can be imported, it is important for the cases to match because while file os.path.exists("module.py") and os.path.exists("MODULE.py") both return True on Windows, Python can only import using the case of the real file. """ result = os.path.exists(path) if (sys.platform.startswith("win") or sys.platform == "darwin") and result: # pragma: no cover directory, basename = os.path.split(path) result = basename in os.listdir(directory) return result ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6281853 isort-5.13.2/isort/wrap.py0000644000000000000000000001436514536412763012377 0ustar00import copy import re from typing import List, Optional, Sequence from .settings import DEFAULT_CONFIG, Config from .wrap_modes import WrapModes as Modes from .wrap_modes import formatter_from_string, vertical_hanging_indent def import_statement( import_start: str, from_imports: List[str], comments: Sequence[str] = (), line_separator: str = "\n", config: Config = DEFAULT_CONFIG, multi_line_output: Optional[Modes] = None, explode: bool = False, ) -> str: """Returns a multi-line wrapped form of the provided from import statement.""" if explode: formatter = vertical_hanging_indent line_length = 1 include_trailing_comma = True else: formatter = formatter_from_string((multi_line_output or config.multi_line_output).name) line_length = config.wrap_length or config.line_length include_trailing_comma = config.include_trailing_comma dynamic_indent = " " * (len(import_start) + 1) indent = config.indent statement = formatter( statement=import_start, imports=copy.copy(from_imports), white_space=dynamic_indent, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=config.comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=config.ignore_comments, ) if config.balanced_wrapping: lines = statement.split(line_separator) line_count = len(lines) if len(lines) > 1: minimum_length = min(len(line) for line in lines[:-1]) else: minimum_length = 0 new_import_statement = statement while len(lines[-1]) < minimum_length and len(lines) == line_count and line_length > 10: statement = new_import_statement line_length -= 1 new_import_statement = formatter( statement=import_start, imports=copy.copy(from_imports), white_space=dynamic_indent, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=config.comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=config.ignore_comments, ) lines = new_import_statement.split(line_separator) if statement.count(line_separator) == 0: return _wrap_line(statement, line_separator, config) return statement def line(content: str, line_separator: str, config: Config = DEFAULT_CONFIG) -> str: """Returns a line wrapped to the specified line-length, if possible.""" wrap_mode = config.multi_line_output if len(content) > config.line_length and wrap_mode != Modes.NOQA: # type: ignore line_without_comment = 
content comment = None if "#" in content: line_without_comment, comment = content.split("#", 1) for splitter in ("import ", "cimport ", ".", "as "): exp = r"\b" + re.escape(splitter) + r"\b" if re.search(exp, line_without_comment) and not line_without_comment.strip().startswith( splitter ): line_parts = re.split(exp, line_without_comment) if comment and not (config.use_parentheses and "noqa" in comment): _comma_maybe = ( "," if ( config.include_trailing_comma and config.use_parentheses and not line_without_comment.rstrip().endswith(",") ) else "" ) line_parts[ -1 ] = f"{line_parts[-1].strip()}{_comma_maybe}{config.comment_prefix}{comment}" next_line = [] while (len(content) + 2) > ( config.wrap_length or config.line_length ) and line_parts: next_line.append(line_parts.pop()) content = splitter.join(line_parts) if not content: content = next_line.pop() cont_line = _wrap_line( config.indent + splitter.join(next_line).lstrip(), line_separator, config, ) if config.use_parentheses: if splitter == "as ": output = f"{content}{splitter}{cont_line.lstrip()}" else: _comma = "," if config.include_trailing_comma and not comment else "" if wrap_mode in ( Modes.VERTICAL_HANGING_INDENT, # type: ignore Modes.VERTICAL_GRID_GROUPED, # type: ignore ): _separator = line_separator else: _separator = "" noqa_comment = "" if comment and "noqa" in comment: noqa_comment = f"{config.comment_prefix}{comment}" cont_line = cont_line.rstrip() _comma = "," if config.include_trailing_comma else "" output = ( f"{content}{splitter}({noqa_comment}" f"{line_separator}{cont_line}{_comma}{_separator})" ) lines = output.split(line_separator) if config.comment_prefix in lines[-1] and lines[-1].endswith(")"): content, comment = lines[-1].split(config.comment_prefix, 1) lines[-1] = content + ")" + config.comment_prefix + comment[:-1] output = line_separator.join(lines) return output return f"{content}{splitter}\\{line_separator}{cont_line}" elif len(content) > config.line_length and wrap_mode == Modes.NOQA and "# NOQA" not in content: # type: ignore return f"{content}{config.comment_prefix} NOQA" return content _wrap_line = line ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/isort/wrap_modes.py0000644000000000000000000003220614536412763013560 0ustar00"""Defines all wrap modes that can be used when outputting formatted imports""" import enum from inspect import signature from typing import Any, Callable, Dict, List import isort.comments _wrap_modes: Dict[str, Callable[..., str]] = {} def from_string(value: str) -> "WrapModes": return getattr(WrapModes, str(value), None) or WrapModes(int(value)) def formatter_from_string(name: str) -> Callable[..., str]: return _wrap_modes.get(name.upper(), grid) def _wrap_mode_interface( statement: str, imports: List[str], white_space: str, indent: str, line_length: int, comments: List[str], line_separator: str, comment_prefix: str, include_trailing_comma: bool, remove_comments: bool, ) -> str: """Defines the common interface used by all wrap mode functions""" return "" def _wrap_mode(function: Callable[..., str]) -> Callable[..., str]: """Registers an individual wrap mode. Function name and order are significant and used for creating enum. 
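Each decorated function is stored in the module-level _wrap_modes registry under its upper-cased name, and the WrapModes enum built at the bottom of this module creates its members from that registry in definition order.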
""" _wrap_modes[function.__name__.upper()] = function function.__signature__ = signature(_wrap_mode_interface) # type: ignore function.__annotations__ = _wrap_mode_interface.__annotations__ return function @_wrap_mode def grid(**interface: Any) -> str: if not interface["imports"]: return "" interface["statement"] += "(" + interface["imports"].pop(0) while interface["imports"]: next_import = interface["imports"].pop(0) next_statement = isort.comments.add_to_line( interface["comments"], interface["statement"] + ", " + next_import, removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) if ( len(next_statement.split(interface["line_separator"])[-1]) + 1 > interface["line_length"] ): lines = [f"{interface['white_space']}{next_import.split(' ')[0]}"] for part in next_import.split(" ")[1:]: new_line = f"{lines[-1]} {part}" if len(new_line) + 1 > interface["line_length"]: lines.append(f"{interface['white_space']}{part}") else: lines[-1] = new_line next_import = interface["line_separator"].join(lines) interface["statement"] = ( isort.comments.add_to_line( interface["comments"], f"{interface['statement']},", removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + f"{interface['line_separator']}{next_import}" ) interface["comments"] = [] else: interface["statement"] += ", " + next_import return f"{interface['statement']}{',' if interface['include_trailing_comma'] else ''})" @_wrap_mode def vertical(**interface: Any) -> str: if not interface["imports"]: return "" first_import = ( isort.comments.add_to_line( interface["comments"], interface["imports"].pop(0) + ",", removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + interface["line_separator"] + interface["white_space"] ) _imports = ("," + interface["line_separator"] + interface["white_space"]).join( interface["imports"] ) _comma_maybe = "," if interface["include_trailing_comma"] else "" return f"{interface['statement']}({first_import}{_imports}{_comma_maybe})" def _hanging_indent_end_line(line: str) -> str: if not line.endswith(" "): line += " " return line + "\\" @_wrap_mode def hanging_indent(**interface: Any) -> str: if not interface["imports"]: return "" line_length_limit = interface["line_length"] - 3 next_import = interface["imports"].pop(0) next_statement = interface["statement"] + next_import # Check for first import if len(next_statement) > line_length_limit: next_statement = ( _hanging_indent_end_line(interface["statement"]) + interface["line_separator"] + interface["indent"] + next_import ) interface["statement"] = next_statement while interface["imports"]: next_import = interface["imports"].pop(0) next_statement = interface["statement"] + ", " + next_import if len(next_statement.split(interface["line_separator"])[-1]) > line_length_limit: next_statement = ( _hanging_indent_end_line(interface["statement"] + ",") + f"{interface['line_separator']}{interface['indent']}{next_import}" ) interface["statement"] = next_statement if interface["comments"]: statement_with_comments = isort.comments.add_to_line( interface["comments"], interface["statement"], removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) if len(statement_with_comments.split(interface["line_separator"])[-1]) <= ( line_length_limit + 2 ): return statement_with_comments return ( _hanging_indent_end_line(interface["statement"]) + str(interface["line_separator"]) + isort.comments.add_to_line( interface["comments"], interface["indent"], 
removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"].lstrip(), ) ) return str(interface["statement"]) @_wrap_mode def vertical_hanging_indent(**interface: Any) -> str: _line_with_comments = isort.comments.add_to_line( interface["comments"], "", removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) _imports = ("," + interface["line_separator"] + interface["indent"]).join(interface["imports"]) _comma_maybe = "," if interface["include_trailing_comma"] else "" return ( f"{interface['statement']}({_line_with_comments}{interface['line_separator']}" f"{interface['indent']}{_imports}{_comma_maybe}{interface['line_separator']})" ) def _vertical_grid_common(need_trailing_char: bool, **interface: Any) -> str: if not interface["imports"]: return "" interface["statement"] += ( isort.comments.add_to_line( interface["comments"], "(", removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + interface["line_separator"] + interface["indent"] + interface["imports"].pop(0) ) while interface["imports"]: next_import = interface["imports"].pop(0) next_statement = f"{interface['statement']}, {next_import}" current_line_length = len(next_statement.split(interface["line_separator"])[-1]) if interface["imports"] or interface["include_trailing_comma"]: # We need to account for a comma after this import. current_line_length += 1 if not interface["imports"] and need_trailing_char: # We need to account for a closing ) we're going to add. current_line_length += 1 if current_line_length > interface["line_length"]: next_statement = ( f"{interface['statement']},{interface['line_separator']}" f"{interface['indent']}{next_import}" ) interface["statement"] = next_statement if interface["include_trailing_comma"]: interface["statement"] += "," return str(interface["statement"]) @_wrap_mode def vertical_grid(**interface: Any) -> str: return _vertical_grid_common(need_trailing_char=True, **interface) + ")" @_wrap_mode def vertical_grid_grouped(**interface: Any) -> str: return ( _vertical_grid_common(need_trailing_char=False, **interface) + str(interface["line_separator"]) + ")" ) @_wrap_mode def vertical_grid_grouped_no_comma(**interface: Any) -> str: # This is a deprecated alias for vertical_grid_grouped above. This function # needs to exist for backwards compatibility but should never get called. 
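# (The coverage report settings in pyproject.toml exclude "raise NotImplementedError"
# lines, so the unreachable guard below does not count against the fail_under threshold.)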
raise NotImplementedError @_wrap_mode def noqa(**interface: Any) -> str: _imports = ", ".join(interface["imports"]) retval = f"{interface['statement']}{_imports}" comment_str = " ".join(interface["comments"]) if interface["comments"]: if ( len(retval) + len(interface["comment_prefix"]) + 1 + len(comment_str) <= interface["line_length"] ): return f"{retval}{interface['comment_prefix']} {comment_str}" if "NOQA" in interface["comments"]: return f"{retval}{interface['comment_prefix']} {comment_str}" return f"{retval}{interface['comment_prefix']} NOQA {comment_str}" if len(retval) <= interface["line_length"]: return retval return f"{retval}{interface['comment_prefix']} NOQA" @_wrap_mode def vertical_hanging_indent_bracket(**interface: Any) -> str: if not interface["imports"]: return "" statement = vertical_hanging_indent(**interface) return f'{statement[:-1]}{interface["indent"]})' @_wrap_mode def vertical_prefix_from_module_import(**interface: Any) -> str: if not interface["imports"]: return "" prefix_statement = interface["statement"] output_statement = prefix_statement + interface["imports"].pop(0) comments = interface["comments"] statement = output_statement statement_with_comments = "" for next_import in interface["imports"]: statement = statement + ", " + next_import statement_with_comments = isort.comments.add_to_line( comments, statement, removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) if ( len(statement_with_comments.split(interface["line_separator"])[-1]) + 1 > interface["line_length"] ): statement = ( isort.comments.add_to_line( comments, output_statement, removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + f"{interface['line_separator']}{prefix_statement}{next_import}" ) comments = [] output_statement = statement if comments and statement_with_comments: output_statement = statement_with_comments return str(output_statement) @_wrap_mode def hanging_indent_with_parentheses(**interface: Any) -> str: if not interface["imports"]: return "" line_length_limit = interface["line_length"] - 1 interface["statement"] += "(" next_import = interface["imports"].pop(0) next_statement = interface["statement"] + next_import # Check for first import if len(next_statement) > line_length_limit: next_statement = ( isort.comments.add_to_line( interface["comments"], interface["statement"], removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + f"{interface['line_separator']}{interface['indent']}{next_import}" ) interface["comments"] = [] interface["statement"] = next_statement while interface["imports"]: next_import = interface["imports"].pop(0) if ( not interface["line_separator"] in interface["statement"] and "#" in interface["statement"] ): # pragma: no cover # TODO: fix, this is because of test run inconsistency. 
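# The statement is still on a single line and already carries an inline comment:
# split the comment off, append the next import to the code portion, then
# re-attach the comment so it stays at the end of the line.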
line, comments = interface["statement"].split("#", 1) next_statement = ( f"{line.rstrip()}, {next_import}{interface['comment_prefix']}{comments}" ) else: next_statement = isort.comments.add_to_line( interface["comments"], interface["statement"] + ", " + next_import, removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) current_line = next_statement.split(interface["line_separator"])[-1] if len(current_line) > line_length_limit: next_statement = ( isort.comments.add_to_line( interface["comments"], interface["statement"] + ",", removed=interface["remove_comments"], comment_prefix=interface["comment_prefix"], ) + f"{interface['line_separator']}{interface['indent']}{next_import}" ) interface["comments"] = [] interface["statement"] = next_statement return f"{interface['statement']}{',' if interface['include_trailing_comma'] else ''})" @_wrap_mode def backslash_grid(**interface: Any) -> str: interface["indent"] = interface["white_space"][:-1] return hanging_indent(**interface) WrapModes = enum.Enum( # type: ignore "WrapModes", {wrap_mode: index for index, wrap_mode in enumerate(_wrap_modes.keys())} ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499839.3962374 isort-5.13.2/pyproject.toml0000644000000000000000000001002314536412777012620 0ustar00[tool.black] line-length = 100 [tool.poetry] name = "isort" version = "5.13.2" description = "A Python utility / library to sort Python imports." authors = ["Timothy Crosley "] license = "MIT" readme = "README.md" repository = "https://github.com/pycqa/isort" homepage = "https://pycqa.github.io/isort/" documentation = "https://pycqa.github.io/isort/" keywords = ["Refactor", "Lint", "Imports", "Sort", "Clean"] classifiers = [ "Development Status :: 6 - Mature", "Intended Audience :: Developers", "Natural Language :: English", "Environment :: Console", "License :: OSI Approved :: MIT License", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Topic :: Software Development :: Libraries", "Topic :: Utilities", ] urls = { Changelog = "https://github.com/pycqa/isort/blob/main/CHANGELOG.md" } include = [ { path = "tests", format = "sdist" }, { path = "ACKNOWLEDGEMENTS.md", format = "sdist" }, { path = "CHANGELOG.md", format = "sdist" }, { path = "LICENSE", format = "sdist" }, ] [tool.poetry.dependencies] python = ">=3.8.0" colorama = {version = ">=0.4.6", optional = true} [tool.poetry.extras] colors = ["colorama"] plugins = ["setuptools"] [tool.poetry.dev-dependencies] bandit = ">=1.6" black = ">=22.6.0" colorama = ">=0.4.6" coverage = {version = ">=6.5.0", extras = ["toml"]} cruft = ">=2.12.0" example-isort-sorting-plugin = ">=0.1.0" example-shared-isort-profile = ">=0.1.0" flake8 = ">=3.8.4" flake8-bugbear = ">=22.12.6,<23.0.0" httpx = ">=0.13.3" hypothesmith = ">=0.1.3" #hypothesis-auto = { version = ">=1.0.0" } hypothesis = ">=6.10.1" ipython = ">=7.16" mypy = ">=0.902,<1.0.0" pytest = ">=7.4.2" pytest-mock = ">=1.10" pep8-naming = ">=0.8.2" portray = ">=1.8.0" requirementslib = ">=1.5" pipreqs = ">=0.4.9" pip_api = ">=0.0.12" pylama = ">=7.7" pip = ">=21.1.1" py = ">=1.11.0" safety = ">=2.2.0" 
smmap2 = ">=3.0.1" libcst = ">=0.3.18" mypy-extensions = ">=0.4.3" pre-commit = ">=2.13.0" pytest-benchmark = ">=3.4.1" toml = ">=0.10.2" types-pkg-resources = ">=0.1.2" types-colorama = ">=0.4.2" types-toml = ">=0.1.3" vulture = ">=1.0" [tool.coverage.paths] source = [ "isort/", ".tox/*/lib/python*/site-packages/isort/", ".tox/*/lib/site-packages/isort/" ] tests = ["tests", "*/tests"] [tool.coverage.run] branch = true source = ["isort", "tests"] omit = [ "isort/_vendored/*", "tests/*", "isort/deprecated/*", ] [tool.coverage.report] show_missing = true fail_under = 99 exclude_lines = [ "pragma: no cover", "except ImportError:", "if TYPE_CHECKING:", "if __name__ == .__main__.:", "raise NotImplementedError", ] [tool.poetry.scripts] isort = "isort.main:main" isort-identify-imports = "isort.main:identify_imports_main" [tool.poetry.plugins."distutils.commands"] isort = "isort.setuptools_commands:ISortCommand" [tool.poetry.plugins."pylama.linter"] isort = "isort.pylama_isort:Linter" [tool.portray.mkdocs] edit_uri = "https://github.com/pycqa/isort/edit/main/" extra_css = ["art/stylesheets/extra.css"] [tool.portray.mkdocs.theme] name = "material" favicon = "art/logo.png" logo = "art/logo.png" palette = {scheme = "isort"} [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.mypy] python_version = 3.8 strict = true follow_imports = "silent" exclude = "isort/_vendored|tests/unit/example_projects|tests/unit/example_crlf_file.py" [[tool.mypy.overrides]] module = "tests.*" allow_untyped_defs = true allow_incomplete_defs = true allow_untyped_calls = true [tool.isort] profile = "hug" src_paths = ["isort", "tests"] skip = [ "tests/unit/example_crlf_file.py" ] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/__init__.py0000644000000000000000000000000014536412763013144 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/benchmark/test_api.py0000644000000000000000000000143114536412763015160 0ustar00from typing import Any import pytest from isort import api imperfect_content = "import b\nimport a\n" fixed_content = "import a\nimport b\n" @pytest.fixture def imperfect(tmpdir) -> Any: imperfect_file = tmpdir.join("test_needs_changes.py") imperfect_file.write_text(imperfect_content, "utf8") return imperfect_file def test_sort_file(benchmark, imperfect) -> None: def sort_file(): api.sort_file(imperfect) benchmark.pedantic(sort_file, iterations=10, rounds=100) assert imperfect.read() == fixed_content def test_sort_file_in_place(benchmark, imperfect) -> None: def sort_file(): api.sort_file(imperfect, overwrite_in_place=True) benchmark.pedantic(sort_file, iterations=10, rounds=100) assert imperfect.read() == fixed_content ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/integration/test_hypothesmith.py0000644000000000000000000000676314536412763017542 0ustar00import ast from typing import get_type_hints import hypothesis import libcst from hypothesis import strategies as st from hypothesmith import from_grammar, from_node import isort def _as_config(kw) -> isort.Config: if "wrap_length" in kw and "line_length" in kw: kw["wrap_length"], kw["line_length"] = sorted([kw["wrap_length"], kw["line_length"]]) try: return isort.Config(**kw) except ValueError: kw["wrap_length"] = 0 return isort.Config(**kw) def _record_targets(code: str, prefix: str = "") -> str: # target 
larger inputs - the Hypothesis engine will do a multi-objective # hill-climbing search using these scores to generate 'better' examples. nodes = list(ast.walk(ast.parse(code))) import_nodes = [n for n in nodes if isinstance(n, (ast.Import, ast.ImportFrom))] uniq_nodes = {type(n) for n in nodes} for value, label in [ (len(import_nodes), "total number of import nodes"), (len(uniq_nodes), "number of unique ast node types"), ]: hypothesis.target(float(value), label=prefix + label) return code def configs(**force_strategies: st.SearchStrategy[isort.Config]) -> st.SearchStrategy[isort.Config]: """Generate arbitrary Config objects.""" skip = { "line_ending", "sections", "known_future_library", "forced_separate", "lines_before_imports", "lines_after_imports", "lines_between_sections", "lines_between_types", "sources", "virtual_env", "conda_env", "directory", "formatter", "formatting_function", } inferred_kwargs = { k: st.from_type(v) for k, v in get_type_hints(isort.settings._Config).items() if k not in skip } specific = { "line_length": st.integers(0, 200), "wrap_length": st.integers(0, 200), "indent": st.integers(0, 20).map(lambda n: n * " "), "default_section": st.sampled_from(sorted(isort.settings.KNOWN_SECTION_MAPPING)), "force_grid_wrap": st.integers(0, 20), "profile": st.sampled_from(sorted(isort.settings.profiles)), "py_version": st.sampled_from(("auto",) + isort.settings.VALID_PY_TARGETS), } kwargs = {**inferred_kwargs, **specific, **force_strategies} return st.fixed_dictionaries({}, optional=kwargs).map(_as_config) # type: ignore st.register_type_strategy(isort.Config, configs()) @hypothesis.example("import A\nimportA\r\n\n", isort.Config(), False) @hypothesis.given( source_code=st.lists( from_grammar(auto_target=False) | from_node(auto_target=False) | from_node(libcst.Import, auto_target=False) | from_node(libcst.ImportFrom, auto_target=False), min_size=1, max_size=10, ).map("\n".join), config=st.builds(isort.Config), disregard_skip=st.booleans(), ) @hypothesis.seed(235738473415671197623909623354096762459) @hypothesis.settings( suppress_health_check=[hypothesis.HealthCheck.too_slow, hypothesis.HealthCheck.filter_too_much] ) def test_isort_is_idempotent(source_code: str, config: isort.Config, disregard_skip: bool) -> None: # NOTE: if this test finds a bug, please notify @Zac-HD so that it can be added to the # Hypothesmith trophy case. This really helps with research impact evaluations! 
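# Idempotency means a second pass over already-sorted code is a no-op. As an
# illustrative example (not one of the generated inputs): with the default config,
# isort.code("import b\nimport a\n") returns "import a\nimport b\n", and sorting
# that output again leaves it unchanged, which is the property asserted below.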
_record_targets(source_code) result = isort.code(source_code, config=config, disregard_skip=disregard_skip) assert result == isort.code(result, config=config, disregard_skip=disregard_skip) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/integration/test_literal.py0000644000000000000000000000072314536412763016437 0ustar00"""Tests that need installation of other packages.""" # TODO: find a way to install example-isort-formatting-plugin to pass tests # import isort.literal # from isort.settings import Config # def test_value_assignment_list(): # assert isort.literal.assignment("x = ['b', 'a']", "list", "py") == "x = ['a', 'b']" # assert ( # isort.literal.assignment("x = ['b', 'a']", "list", "py", Config(formatter="example")) # == 'x = ["a", "b"]' # ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/integration/test_projects_using_isort.py0000644000000000000000000000556614536412763021273 0ustar00"""Tests projects that use isort to see if any differences are found between their current imports and what isort suggest on the develop branch. This is an important early warning signal of regressions. NOTE: If you use isort within a public repository, please feel empowered to add your project here! It is important to isort that as few regressions as possible are experienced by our users. Having your project tested here is the most sure way to keep those regressions form ever happening. """ from __future__ import annotations from pathlib import Path from subprocess import check_call from typing import Generator, Sequence from isort.main import main def git_clone(repository_url: str, directory: Path): """Clones the given repository into the given directory path""" check_call(["git", "clone", "--depth", "1", repository_url, str(directory)]) def run_isort(arguments: Generator[str, None, None] | Sequence[str]): """Runs isort in diff and check mode with the given arguments""" main(["--check-only", "--diff", *arguments]) def test_django(tmpdir): git_clone("https://github.com/django/django.git", tmpdir) run_isort( str(target_dir) for target_dir in (tmpdir / "django", tmpdir / "tests", tmpdir / "scripts") ) def test_plone(tmpdir): git_clone("https://github.com/plone/plone.app.multilingualindexes.git", tmpdir) run_isort([str(tmpdir / "src"), "--skip", "languagefallback.py"]) def test_pandas(tmpdir): git_clone("https://github.com/pandas-dev/pandas.git", tmpdir) run_isort((str(tmpdir / "pandas"), "--skip", "__init__.py")) def test_habitat_lab(tmpdir): git_clone("https://github.com/facebookresearch/habitat-lab.git", tmpdir) run_isort([str(tmpdir)]) def test_pylint(tmpdir): git_clone("https://github.com/PyCQA/pylint.git", tmpdir) run_isort([str(tmpdir), "--skip", "bad.py"]) def test_hypothesis(tmpdir): git_clone("https://github.com/HypothesisWorks/hypothesis.git", tmpdir) run_isort( ( str(tmpdir), "--skip", "tests", "--profile", "black", "--ca", "--project", "hypothesis", "--project", "hypothesistooling", ) ) def test_pyramid(tmpdir): git_clone("https://github.com/Pylons/pyramid.git", tmpdir) run_isort( str(target_dir) for target_dir in (tmpdir / "src" / "pyramid", tmpdir / "tests", tmpdir / "setup.py") ) def test_products_zopetree(tmpdir): git_clone("https://github.com/jugmac00/Products.ZopeTree.git", tmpdir) run_isort([str(tmpdir)]) def test_dobby(tmpdir): git_clone("https://github.com/rocketDuck/dobby.git", tmpdir) run_isort([str(tmpdir / "tests"), str(tmpdir / "src")]) 
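# A minimal sketch of the pattern used by the checks above for vetting one more
# project; "your-org/your-project" and the "src" target are placeholders rather
# than a repository exercised by this suite:
#
# def test_your_project(tmpdir):
#     git_clone("https://github.com/your-org/your-project.git", tmpdir)
#     run_isort([str(tmpdir / "src")])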
def test_zope(tmpdir): git_clone("https://github.com/zopefoundation/Zope.git", tmpdir) run_isort([str(tmpdir), "--skip", "util.py"]) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/integration/test_setting_combinations.py0000644000000000000000000016540114536412763021232 0ustar00from typing import get_type_hints import hypothesis from hypothesis import strategies as st import isort def _as_config(kw) -> isort.Config: kw["atomic"] = False if "wrap_length" in kw and "line_length" in kw: kw["wrap_length"], kw["line_length"] = sorted([kw["wrap_length"], kw["line_length"]]) try: return isort.Config(**kw) except ValueError: kw["wrap_length"] = 0 return isort.Config(**kw) def configs() -> st.SearchStrategy[isort.Config]: """Generate arbitrary Config objects.""" skip = { "line_ending", "sections", "known_standard_library", "known_future_library", "known_third_party", "known_first_party", "known_local_folder", "extra_standard_library", "forced_separate", "lines_before_imports", "lines_after_imports", "add_imports", "lines_between_sections", "lines_between_types", "sources", "virtual_env", "conda_env", "directory", "formatter", "formatting_function", "comment_prefix", "atomic", "skip", "src_paths", } inferred_kwargs = { k: st.from_type(v) for k, v in get_type_hints(isort.settings._Config).items() if k not in skip } specific = { "line_length": st.integers(0, 200), "wrap_length": st.integers(0, 200), "indent": st.integers(0, 20).map(lambda n: n * " "), "default_section": st.sampled_from(sorted(isort.settings.KNOWN_SECTION_MAPPING)), "force_grid_wrap": st.integers(0, 20), "profile": st.sampled_from(sorted(isort.settings.profiles)), "sort_order": st.sampled_from(sorted(("native", "natural", "natural_plus"))), "py_version": st.sampled_from(("auto",) + isort.settings.VALID_PY_TARGETS), } kwargs = {**inferred_kwargs, **specific} return st.fixed_dictionaries({}, optional=kwargs).map(_as_config) # type:ignore st.register_type_strategy(isort.Config, configs()) CODE_SNIPPET = """ '''Taken from bottle.py Copyright (c) 2009-2018, Marcel Hellkamp. License: MIT (see LICENSE for details) ''' # Lots of stdlib and builtin differences. if py3k: import http.client as httplib import _thread as thread from urllib.parse import urljoin, SplitResult as UrlSplitResult from urllib.parse import urlencode, quote as urlquote, unquote as urlunquote urlunquote = functools.partial(urlunquote, encoding='latin1') from http.cookies import SimpleCookie, Morsel, CookieError from collections.abc import MutableMapping as DictMixin import pickle # comment number 2 from io import BytesIO import configparser basestring = str unicode = str json_loads = lambda s: json_lds(touni(s)) callable = lambda x: hasattr(x, '__call__') imap = map def _raise(*a): raise a[0](a[1]).with_traceback(a[2]) else: # 2.x import httplib import thread from urlparse import urljoin, SplitResult as UrlSplitResult from urllib import urlencode, quote as urlquote, unquote as urlunquote from Cookie import SimpleCookie, Morsel, CookieError from itertools import imap import cPickle as pickle from StringIO import StringIO as BytesIO import ConfigParser as configparser # commentnumberone from collections import MutableMapping as DictMixin unicode = unicode json_loads = json_lds exec(compile('def _raise(*a): raise a[0], a[1], a[2]', '', 'exec')) """ SHOULD_RETAIN = [ """'''Taken from bottle.py Copyright (c) 2009-2018, Marcel Hellkamp. 
License: MIT (see LICENSE for details) '''""", "# Lots of stdlib and builtin differences.", "if py3k:", "http.client", "_thread", "urllib.parse", "urlencode", "urlunquote = functools.partial(urlunquote, encoding='latin1')", "http.cookies", "SimpleCookie", "collections.abc", "pickle", "comment number 2", "io", "configparser", """basestring = str unicode = str json_loads = lambda s: json_lds(touni(s)) callable = lambda x: hasattr(x, '__call__') imap = map def _raise(*a): raise a[0](a[1]).with_traceback(a[2]) else: # 2.x """, "httplib", "thread", "urlparse", "urllib", "Cookie", "itertools", "cPickle", "StringIO", "ConfigParser", "commentnumberone", "collections", """unicode = unicode json_loads = json_lds exec(compile('def _raise(*a): raise a[0], a[1], a[2]', '', 'exec'))""", ] @hypothesis.example( config=isort.Config( py_version="all", force_to_top=frozenset(), skip=frozenset( { ".svn", ".venv", "build", "dist", ".bzr", ".tox", ".hg", ".mypy_cache", ".nox", "_build", "buck-out", "node_modules", ".git", ".eggs", ".pants.d", "venv", ".direnv", } ), skip_glob=frozenset(), skip_gitignore=True, line_length=79, wrap_length=0, line_ending="", sections=("FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"), no_sections=False, known_future_library=frozenset({"__future__"}), known_third_party=frozenset(), known_first_party=frozenset(), known_local_folder=frozenset(), known_standard_library=frozenset( { "pwd", "types", "nntplib", "jpeg", "pyclbr", "encodings", "ctypes", "macerrors", "filecmp", "dbm", "mimetypes", "statvfs", "msvcrt", "spwd", "codecs", "SimpleHTTPServer", "compiler", "pickletools", "tkinter", "pickle", "fm", "bsddb", "contextvars", "dummy_thread", "pipes", "heapq", "dircache", "commands", "unicodedata", "ntpath", "marshal", "fpformat", "linecache", "calendar", "pty", "MimeWriter", "inspect", "mmap", "ic", "tty", "nis", "new", "wave", "HTMLParser", "anydbm", "tracemalloc", "pdb", "sunau", "GL", "parser", "winsound", "dbhash", "zlib", "MacOS", "pprint", "crypt", "aetools", "DEVICE", "fl", "gettext", "asyncore", "copyreg", "queue", "resource", "turtledemo", "fnmatch", "hotshot", "trace", "string", "plistlib", "gzip", "functools", "aepack", "hashlib", "imp", "MiniAEFrame", "getpass", "shutil", "ttk", "multifile", "operator", "reprlib", "subprocess", "cgi", "select", "SimpleXMLRPCServer", "audioop", "macresource", "stringprep", "wsgiref", "SUNAUDIODEV", "atexit", "lzma", "asyncio", "datetime", "binhex", "autoGIL", "doctest", "thread", "enum", "tempfile", "posixfile", "mhlib", "html", "itertools", "exceptions", "sgmllib", "array", "test", "imputil", "shlex", "flp", "uu", "gdbm", "urlparse", "msilib", "termios", "modulefinder", "ossaudiodev", "timeit", "binascii", "popen2", "ConfigParser", "poplib", "zipfile", "cfmfile", "pstats", "AL", "contextlib", "code", "zipimport", "base64", "platform", "ast", "fileinput", "locale", "buildtools", "stat", "quopri", "readline", "collections", "aetypes", "concurrent", "runpy", "copy_reg", "rexec", "cmath", "optparse", "dummy_threading", "ColorPicker", "sched", "netrc", "sunaudiodev", "socketserver", "logging", "PixMapWrapper", "sysconfig", "Nav", "copy", "cmd", "csv", "chunk", "multiprocessing", "warnings", "weakref", "py_compile", "sre", "sre_parse", "curses", "threading", "re", "FrameWork", "_thread", "imgfile", "cd", "sre_constants", "xdrlib", "dataclasses", "urllib2", "StringIO", "configparser", "importlib", "UserList", "posixpath", "mailbox", "rfc822", "grp", "pydoc", "sets", "textwrap", "numbers", "W", "gl", "htmllib", "macostools", 
"tarfile", "ipaddress", "xmlrpc", "icopen", "traceback", "_winreg", "random", "CGIHTTPServer", "dis", "sha", "selectors", "statistics", "DocXMLRPCServer", "imghdr", "venv", "keyword", "xmlrpclib", "ftplib", "getopt", "posix", "smtpd", "profile", "sndhdr", "signal", "EasyDialogs", "dumbdbm", "fcntl", "SocketServer", "distutils", "symbol", "pathlib", "cStringIO", "imaplib", "unittest", "al", "cProfile", "robotparser", "BaseHTTPServer", "os", "pkgutil", "socket", "fractions", "shelve", "aifc", "cgitb", "xml", "decimal", "sre_compile", "ssl", "user", "Bastion", "formatter", "time", "abc", "winreg", "difflib", "FL", "bz2", "asynchat", "gc", "gensuitemodule", "symtable", "secrets", "Carbon", "mailcap", "sys", "bdb", "fpectl", "httplib", "webbrowser", "smtplib", "Cookie", "whichdb", "turtle", "tokenize", "UserString", "tabnanny", "site", "struct", "codeop", "email", "typing", "cookielib", "Queue", "rlcompleter", "errno", "macpath", "videoreader", "md5", "cPickle", "Tix", "io", "faulthandler", "Tkinter", "glob", "syslog", "telnetlib", "_dummy_thread", "hmac", "uuid", "imageop", "future_builtins", "json", "htmlentitydefs", "lib2to3", "UserDict", "mutex", "sqlite3", "findertools", "bisect", "builtins", "urllib", "http", "compileall", "argparse", "ScrolledText", "token", "dl", "applesingle", "math", "ensurepip", "mimify", "mimetools", "colorsys", "zipapp", "__builtin__", } ), extra_standard_library=frozenset(), known_other={"other": frozenset({"", "\x10\x1bm"})}, multi_line_output=0, forced_separate=(), indent=" ", comment_prefix=" #", length_sort=True, length_sort_straight=False, length_sort_sections=frozenset(), add_imports=frozenset(), remove_imports=frozenset( { "", "\U00076fe7รพs\x0c\U000c8b75v\U00106541", "๐ฅ’’>\U0001960euj๐’Ž•\x9e", "\x15\x9b", "\x02l", "\U000b71ef.\x1c", "\x7f?\U000ec91c", "\x7f,รžoร€P8\x1b\x1eยป3\x86\x94ยครร“~\U00066b1a,O\U0010ab28\x90ยซ", "Y\x06ยบZ\x04รรฌ\U00078ce1.\U0010c1f9[EK\x83Eร–รธ", ";ร€ยจ|\x1bร‚ ๐‘’๐ŸธV", } ), append_only=False, reverse_relative=True, force_single_line=False, single_line_exclusions=( "Y\U000347d9g\x957K", "", "รŠ\U000e8ad2\U0008fa72รนร\x19รง\U000eaecc๐คŽช.", "ยทo\U000d00e5\U000b36de+\x8f\U000b5953ยด\x08oรœ", "", ":sIยถ", "", ), default_section="THIRDPARTY", import_headings={}, import_footers={}, balanced_wrapping=False, use_parentheses=True, order_by_type=True, atomic=False, lines_before_imports=-1, lines_after_imports=-1, lines_between_sections=1, lines_between_types=0, combine_as_imports=True, combine_star=False, include_trailing_comma=False, from_first=False, verbose=False, quiet=False, force_adds=False, force_alphabetical_sort_within_sections=False, force_alphabetical_sort=False, force_grid_wrap=0, force_sort_within_sections=False, lexicographical=False, ignore_whitespace=False, no_lines_before=frozenset( { "uรธรธ", "ยข", "&\x8c5ร\U000e5f01ร˜", "\U0005d415\U000a3df2h\U000f24e5\U00104d7b34ยนร’ร€", "\U000e374c8", "w", } ), no_inline_sort=False, ignore_comments=False, case_sensitive=False, sources=( { "py_version": "py3", "force_to_top": frozenset(), "skip": frozenset( { ".svn", ".venv", "build", "dist", ".bzr", ".tox", ".hg", ".mypy_cache", ".nox", "_build", "buck-out", "node_modules", ".git", ".eggs", ".pants.d", "venv", ".direnv", } ), "skip_glob": frozenset(), "skip_gitignore": False, "line_length": 79, "wrap_length": 0, "line_ending": "", "sections": ("FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"), "no_sections": False, "known_future_library": frozenset({"__future__"}), "known_third_party": frozenset(), 
"known_first_party": frozenset(), "known_local_folder": frozenset(), "known_standard_library": frozenset( { "pwd", "copy", "cmd", "csv", "chunk", "multiprocessing", "warnings", "types", "weakref", "nntplib", "pyclbr", "encodings", "py_compile", "sre", "ctypes", "sre_parse", "filecmp", "curses", "threading", "dbm", "re", "_thread", "sre_constants", "xdrlib", "dataclasses", "mimetypes", "configparser", "importlib", "msvcrt", "spwd", "posixpath", "mailbox", "codecs", "grp", "pickletools", "tkinter", "pickle", "contextvars", "pydoc", "textwrap", "numbers", "pipes", "heapq", "tarfile", "unicodedata", "ntpath", "ipaddress", "marshal", "xmlrpc", "traceback", "linecache", "calendar", "pty", "random", "dis", "selectors", "statistics", "imghdr", "venv", "inspect", "mmap", "keyword", "ftplib", "tty", "nis", "getopt", "posix", "smtpd", "wave", "profile", "sndhdr", "signal", "tracemalloc", "pdb", "sunau", "winsound", "parser", "zlib", "fcntl", "pprint", "distutils", "crypt", "symbol", "gettext", "pathlib", "asyncore", "copyreg", "imaplib", "unittest", "queue", "resource", "turtledemo", "fnmatch", "cProfile", "os", "pkgutil", "socket", "trace", "fractions", "string", "shelve", "plistlib", "aifc", "gzip", "functools", "cgitb", "xml", "hashlib", "decimal", "imp", "sre_compile", "ssl", "formatter", "winreg", "time", "getpass", "shutil", "abc", "difflib", "bz2", "operator", "reprlib", "subprocess", "cgi", "select", "asynchat", "audioop", "gc", "secrets", "symtable", "mailcap", "sys", "bdb", "fpectl", "stringprep", "webbrowser", "smtplib", "wsgiref", "atexit", "lzma", "asyncio", "datetime", "binhex", "doctest", "turtle", "enum", "tempfile", "tokenize", "tabnanny", "site", "html", "struct", "itertools", "codeop", "email", "array", "test", "typing", "shlex", "uu", "msilib", "termios", "rlcompleter", "modulefinder", "ossaudiodev", "timeit", "binascii", "poplib", "errno", "macpath", "zipfile", "io", "faulthandler", "pstats", "contextlib", "code", "glob", "zipimport", "base64", "syslog", "platform", "ast", "fileinput", "telnetlib", "locale", "_dummy_thread", "hmac", "stat", "uuid", "quopri", "readline", "collections", "json", "concurrent", "lib2to3", "sqlite3", "runpy", "cmath", "optparse", "bisect", "builtins", "urllib", "dummy_threading", "http", "compileall", "argparse", "token", "sched", "netrc", "math", "ensurepip", "socketserver", "colorsys", "zipapp", "logging", "sysconfig", } ), "extra_standard_library": frozenset(), "known_other": {}, "multi_line_output": 0, "forced_separate": (), "indent": " ", "comment_prefix": " #", "length_sort": False, "length_sort_straight": False, "length_sort_sections": frozenset(), "add_imports": frozenset(), "remove_imports": frozenset(), "append_only": False, "reverse_relative": False, "force_single_line": False, "single_line_exclusions": (), "default_section": "THIRDPARTY", "import_headings": {}, "import_footers": {}, "balanced_wrapping": False, "use_parentheses": False, "order_by_type": True, "atomic": False, "lines_before_imports": -1, "lines_after_imports": -1, "lines_between_sections": 1, "lines_between_types": 0, "combine_as_imports": False, "combine_star": False, "include_trailing_comma": False, "from_first": False, "verbose": False, "quiet": False, "force_adds": False, "force_alphabetical_sort_within_sections": False, "force_alphabetical_sort": False, "force_grid_wrap": 0, "force_sort_within_sections": False, "lexicographical": False, "ignore_whitespace": False, "no_lines_before": frozenset(), "no_inline_sort": False, "ignore_comments": False, "case_sensitive": False, 
"sources": (), "virtual_env": "", "conda_env": "", "ensure_newline_before_comments": False, "directory": "", "profile": "", "honor_noqa": False, "src_paths": frozenset(), "old_finders": False, "remove_redundant_aliases": False, "float_to_top": False, "filter_files": False, "formatter": "", "formatting_function": None, "color_output": False, "treat_comments_as_code": frozenset(), "treat_all_comments_as_code": False, "supported_extensions": frozenset({"py", "pyx", "pyi"}), "blocked_extensions": frozenset({"pex"}), "constants": frozenset(), "classes": frozenset(), "variables": frozenset(), "dedup_headings": False, "source": "defaults", }, { "classes": frozenset( { "\U000eb6c6\x9eร‘\U0008297dรขhรฏ\x8eร†", "C", "\x8e\U000422acยฑ\U000b5a1f\U000c4166", "รนรš", } ), "single_line_exclusions": ( "Y\U000347d9g\x957K", "", "รŠ\U000e8ad2\U0008fa72รนร\x19รง\U000eaecc๐คŽช.", "ยทo\U000d00e5\U000b36de+\x8f\U000b5953ยด\x08oรœ", "", ":sIยถ", "", ), "indent": " ", "no_lines_before": frozenset( { "uรธรธ", "ยข", "&\x8c5ร\U000e5f01ร˜", "\U0005d415\U000a3df2h\U000f24e5\U00104d7b34ยนร’ร€", "\U000e374c8", "w", } ), "quiet": False, "honor_noqa": False, "dedup_headings": True, "known_other": { "\x10\x1bm": frozenset({"\U000682a49\U000e1a63ยฒKร‡ยถ4", "", "\x1a", "ยฉ"}), "": frozenset({"รญรฅ\x94รŒ", "\U000cf258"}), }, "treat_comments_as_code": frozenset({""}), "length_sort": True, "reverse_relative": True, "combine_as_imports": True, "py_version": "all", "use_parentheses": True, "skip_gitignore": True, "remove_imports": frozenset( { "", "\U00076fe7รพs\x0c\U000c8b75v\U00106541", "๐ฅ’’>\U0001960euj๐’Ž•\x9e", "\x15\x9b", "\x02l", "\U000b71ef.\x1c", "\x7f?\U000ec91c", "\x7f,รžoร€P8\x1b\x1eยป3\x86\x94ยครร“~\U00066b1a,O\U0010ab28\x90ยซ", "Y\x06ยบZ\x04รรฌ\U00078ce1.\U0010c1f9[EK\x83Eร–รธ", ";ร€ยจ|\x1bร‚ ๐‘’๐ŸธV", } ), "atomic": False, "source": "runtime", }, ), virtual_env="", conda_env="", ensure_newline_before_comments=False, directory="/home/abuild/rpmbuild/BUILD/isort-5.11.0", profile="", honor_noqa=False, old_finders=False, remove_redundant_aliases=False, float_to_top=False, filter_files=False, formatting_function=None, color_output=False, treat_comments_as_code=frozenset({""}), treat_all_comments_as_code=False, supported_extensions=frozenset({"py", "pyx", "pyi"}), blocked_extensions=frozenset({"pex"}), constants=frozenset(), classes=frozenset( {"\U000eb6c6\x9eร‘\U0008297dรขhรฏ\x8eร†", "C", "\x8e\U000422acยฑ\U000b5a1f\U000c4166", "รนรš"} ), variables=frozenset(), dedup_headings=True, ), disregard_skip=True, ) @hypothesis.example( config=isort.Config( py_version="2", combine_straight_imports=True, ), disregard_skip=True, ) @hypothesis.given( config=st.from_type(isort.Config), disregard_skip=st.booleans(), ) @hypothesis.settings(deadline=None) def test_isort_is_idempotent(config: isort.Config, disregard_skip: bool) -> None: try: result = isort.code(CODE_SNIPPET, config=config, disregard_skip=disregard_skip) result = isort.code(result, config=config, disregard_skip=disregard_skip) assert result == isort.code(result, config=config, disregard_skip=disregard_skip) except ValueError: pass @hypothesis.example( config=isort.Config( py_version="all", force_to_top=frozenset(), skip=frozenset( { ".svn", ".venv", "build", "dist", ".bzr", ".tox", ".hg", ".mypy_cache", ".nox", "_build", "buck-out", "node_modules", ".git", ".eggs", ".pants.d", "venv", ".direnv", } ), skip_glob=frozenset(), skip_gitignore=True, line_length=79, wrap_length=0, line_ending="", sections=("FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"), 
no_sections=False, known_future_library=frozenset({"__future__"}), known_third_party=frozenset(), known_first_party=frozenset(), known_local_folder=frozenset(), known_standard_library=frozenset( { "pwd", "types", "nntplib", "jpeg", "pyclbr", "encodings", "ctypes", "macerrors", "filecmp", "dbm", "mimetypes", "statvfs", "msvcrt", "spwd", "codecs", "SimpleHTTPServer", "compiler", "pickletools", "tkinter", "pickle", "fm", "bsddb", "contextvars", "dummy_thread", "pipes", "heapq", "dircache", "commands", "unicodedata", "ntpath", "marshal", "fpformat", "linecache", "calendar", "pty", "MimeWriter", "inspect", "mmap", "ic", "tty", "nis", "new", "wave", "HTMLParser", "anydbm", "tracemalloc", "pdb", "sunau", "GL", "parser", "winsound", "dbhash", "zlib", "MacOS", "pprint", "crypt", "aetools", "DEVICE", "fl", "gettext", "asyncore", "copyreg", "queue", "resource", "turtledemo", "fnmatch", "hotshot", "trace", "string", "plistlib", "gzip", "functools", "aepack", "hashlib", "imp", "MiniAEFrame", "getpass", "shutil", "ttk", "multifile", "operator", "reprlib", "subprocess", "cgi", "select", "SimpleXMLRPCServer", "audioop", "macresource", "stringprep", "wsgiref", "SUNAUDIODEV", "atexit", "lzma", "asyncio", "datetime", "binhex", "autoGIL", "doctest", "thread", "enum", "tempfile", "posixfile", "mhlib", "html", "itertools", "exceptions", "sgmllib", "array", "test", "imputil", "shlex", "flp", "uu", "gdbm", "urlparse", "msilib", "termios", "modulefinder", "ossaudiodev", "timeit", "binascii", "popen2", "ConfigParser", "poplib", "zipfile", "cfmfile", "pstats", "AL", "contextlib", "code", "zipimport", "base64", "platform", "ast", "fileinput", "locale", "buildtools", "stat", "quopri", "readline", "collections", "aetypes", "concurrent", "runpy", "copy_reg", "rexec", "cmath", "optparse", "dummy_threading", "ColorPicker", "sched", "netrc", "sunaudiodev", "socketserver", "logging", "PixMapWrapper", "sysconfig", "Nav", "copy", "cmd", "csv", "chunk", "multiprocessing", "warnings", "weakref", "py_compile", "sre", "sre_parse", "curses", "threading", "re", "FrameWork", "_thread", "imgfile", "cd", "sre_constants", "xdrlib", "dataclasses", "urllib2", "StringIO", "configparser", "importlib", "UserList", "posixpath", "mailbox", "rfc822", "grp", "pydoc", "sets", "textwrap", "numbers", "W", "gl", "htmllib", "macostools", "tarfile", "ipaddress", "xmlrpc", "icopen", "traceback", "_winreg", "random", "CGIHTTPServer", "dis", "sha", "selectors", "statistics", "DocXMLRPCServer", "imghdr", "venv", "keyword", "xmlrpclib", "ftplib", "getopt", "posix", "smtpd", "profile", "sndhdr", "signal", "EasyDialogs", "dumbdbm", "fcntl", "SocketServer", "distutils", "symbol", "pathlib", "cStringIO", "imaplib", "unittest", "al", "cProfile", "robotparser", "BaseHTTPServer", "os", "pkgutil", "socket", "fractions", "shelve", "aifc", "cgitb", "xml", "decimal", "sre_compile", "ssl", "user", "Bastion", "formatter", "time", "abc", "winreg", "difflib", "FL", "bz2", "asynchat", "gc", "gensuitemodule", "symtable", "secrets", "Carbon", "mailcap", "sys", "bdb", "fpectl", "httplib", "webbrowser", "smtplib", "Cookie", "whichdb", "turtle", "tokenize", "UserString", "tabnanny", "site", "struct", "codeop", "email", "typing", "cookielib", "Queue", "rlcompleter", "errno", "macpath", "videoreader", "md5", "cPickle", "Tix", "io", "faulthandler", "Tkinter", "glob", "syslog", "telnetlib", "_dummy_thread", "hmac", "uuid", "imageop", "future_builtins", "json", "htmlentitydefs", "lib2to3", "UserDict", "mutex", "sqlite3", "findertools", "bisect", "builtins", "urllib", "http", 
"compileall", "argparse", "ScrolledText", "token", "dl", "applesingle", "math", "ensurepip", "mimify", "mimetools", "colorsys", "zipapp", "__builtin__", } ), extra_standard_library=frozenset(), known_other={"other": frozenset({"", "\x10\x1bm"})}, multi_line_output=0, forced_separate=(), indent=" ", comment_prefix=" #", length_sort=True, length_sort_straight=False, length_sort_sections=frozenset(), add_imports=frozenset(), remove_imports=frozenset( { "", "\U00076fe7รพs\x0c\U000c8b75v\U00106541", "๐ฅ’’>\U0001960euj๐’Ž•\x9e", "\x15\x9b", "\x02l", "\U000b71ef.\x1c", "\x7f?\U000ec91c", "\x7f,รžoร€P8\x1b\x1eยป3\x86\x94ยครร“~\U00066b1a,O\U0010ab28\x90ยซ", "Y\x06ยบZ\x04รรฌ\U00078ce1.\U0010c1f9[EK\x83Eร–รธ", ";ร€ยจ|\x1bร‚ ๐‘’๐ŸธV", } ), append_only=False, reverse_relative=True, force_single_line=False, single_line_exclusions=( "Y\U000347d9g\x957K", "", "รŠ\U000e8ad2\U0008fa72รนร\x19รง\U000eaecc๐คŽช.", "ยทo\U000d00e5\U000b36de+\x8f\U000b5953ยด\x08oรœ", "", ":sIยถ", "", ), default_section="THIRDPARTY", import_headings={}, import_footers={}, balanced_wrapping=False, use_parentheses=True, order_by_type=True, atomic=False, lines_before_imports=-1, lines_after_imports=-1, lines_between_sections=1, lines_between_types=0, combine_as_imports=True, combine_star=False, include_trailing_comma=False, from_first=False, verbose=False, quiet=False, force_adds=False, force_alphabetical_sort_within_sections=False, force_alphabetical_sort=False, force_grid_wrap=0, force_sort_within_sections=False, lexicographical=False, ignore_whitespace=False, no_lines_before=frozenset( { "uรธรธ", "ยข", "&\x8c5ร\U000e5f01ร˜", "\U0005d415\U000a3df2h\U000f24e5\U00104d7b34ยนร’ร€", "\U000e374c8", "w", } ), no_inline_sort=False, ignore_comments=False, case_sensitive=False, sources=( { "py_version": "py3", "force_to_top": frozenset(), "skip": frozenset( { ".svn", ".venv", "build", "dist", ".bzr", ".tox", ".hg", ".mypy_cache", ".nox", "_build", "buck-out", "node_modules", ".git", ".eggs", ".pants.d", "venv", ".direnv", } ), "skip_glob": frozenset(), "skip_gitignore": False, "line_length": 79, "wrap_length": 0, "line_ending": "", "sections": ("FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"), "no_sections": False, "known_future_library": frozenset({"__future__"}), "known_third_party": frozenset(), "known_first_party": frozenset(), "known_local_folder": frozenset(), "known_standard_library": frozenset( { "pwd", "copy", "cmd", "csv", "chunk", "multiprocessing", "warnings", "types", "weakref", "nntplib", "pyclbr", "encodings", "py_compile", "sre", "ctypes", "sre_parse", "filecmp", "curses", "threading", "dbm", "re", "_thread", "sre_constants", "xdrlib", "dataclasses", "mimetypes", "configparser", "importlib", "msvcrt", "spwd", "posixpath", "mailbox", "codecs", "grp", "pickletools", "tkinter", "pickle", "contextvars", "pydoc", "textwrap", "numbers", "pipes", "heapq", "tarfile", "unicodedata", "ntpath", "ipaddress", "marshal", "xmlrpc", "traceback", "linecache", "calendar", "pty", "random", "dis", "selectors", "statistics", "imghdr", "venv", "inspect", "mmap", "keyword", "ftplib", "tty", "nis", "getopt", "posix", "smtpd", "wave", "profile", "sndhdr", "signal", "tracemalloc", "pdb", "sunau", "winsound", "parser", "zlib", "fcntl", "pprint", "distutils", "crypt", "symbol", "gettext", "pathlib", "asyncore", "copyreg", "imaplib", "unittest", "queue", "resource", "turtledemo", "fnmatch", "cProfile", "os", "pkgutil", "socket", "trace", "fractions", "string", "shelve", "plistlib", "aifc", "gzip", "functools", "cgitb", "xml", 
"hashlib", "decimal", "imp", "sre_compile", "ssl", "formatter", "winreg", "time", "getpass", "shutil", "abc", "difflib", "bz2", "operator", "reprlib", "subprocess", "cgi", "select", "asynchat", "audioop", "gc", "secrets", "symtable", "mailcap", "sys", "bdb", "fpectl", "stringprep", "webbrowser", "smtplib", "wsgiref", "atexit", "lzma", "asyncio", "datetime", "binhex", "doctest", "turtle", "enum", "tempfile", "tokenize", "tabnanny", "site", "html", "struct", "itertools", "codeop", "email", "array", "test", "typing", "shlex", "uu", "msilib", "termios", "rlcompleter", "modulefinder", "ossaudiodev", "timeit", "binascii", "poplib", "errno", "macpath", "zipfile", "io", "faulthandler", "pstats", "contextlib", "code", "glob", "zipimport", "base64", "syslog", "platform", "ast", "fileinput", "telnetlib", "locale", "_dummy_thread", "hmac", "stat", "uuid", "quopri", "readline", "collections", "json", "concurrent", "lib2to3", "sqlite3", "runpy", "cmath", "optparse", "bisect", "builtins", "urllib", "dummy_threading", "http", "compileall", "argparse", "token", "sched", "netrc", "math", "ensurepip", "socketserver", "colorsys", "zipapp", "logging", "sysconfig", } ), "extra_standard_library": frozenset(), "known_other": {}, "multi_line_output": 0, "forced_separate": (), "indent": " ", "comment_prefix": " #", "length_sort": False, "length_sort_straight": False, "length_sort_sections": frozenset(), "add_imports": frozenset(), "remove_imports": frozenset(), "append_only": False, "reverse_relative": False, "force_single_line": False, "single_line_exclusions": (), "default_section": "THIRDPARTY", "import_headings": {}, "import_footers": {}, "balanced_wrapping": False, "use_parentheses": False, "order_by_type": True, "atomic": False, "lines_before_imports": -1, "lines_after_imports": -1, "lines_between_sections": 1, "lines_between_types": 0, "combine_as_imports": False, "combine_star": False, "include_trailing_comma": False, "from_first": False, "verbose": False, "quiet": False, "force_adds": False, "force_alphabetical_sort_within_sections": False, "force_alphabetical_sort": False, "force_grid_wrap": 0, "force_sort_within_sections": False, "lexicographical": False, "ignore_whitespace": False, "no_lines_before": frozenset(), "no_inline_sort": False, "ignore_comments": False, "case_sensitive": False, "sources": (), "virtual_env": "", "conda_env": "", "ensure_newline_before_comments": False, "directory": "", "profile": "", "honor_noqa": False, "src_paths": frozenset(), "old_finders": False, "remove_redundant_aliases": False, "float_to_top": False, "filter_files": False, "formatter": "", "formatting_function": None, "color_output": False, "treat_comments_as_code": frozenset(), "treat_all_comments_as_code": False, "supported_extensions": frozenset({"py", "pyx", "pyi"}), "blocked_extensions": frozenset({"pex"}), "constants": frozenset(), "classes": frozenset(), "variables": frozenset(), "dedup_headings": False, "source": "defaults", }, { "classes": frozenset( { "\U000eb6c6\x9eร‘\U0008297dรขhรฏ\x8eร†", "C", "\x8e\U000422acยฑ\U000b5a1f\U000c4166", "รนรš", } ), "single_line_exclusions": ( "Y\U000347d9g\x957K", "", "รŠ\U000e8ad2\U0008fa72รนร\x19รง\U000eaecc๐คŽช.", "ยทo\U000d00e5\U000b36de+\x8f\U000b5953ยด\x08oรœ", "", ":sIยถ", "", ), "indent": " ", "no_lines_before": frozenset( { "uรธรธ", "ยข", "&\x8c5ร\U000e5f01ร˜", "\U0005d415\U000a3df2h\U000f24e5\U00104d7b34ยนร’ร€", "\U000e374c8", "w", } ), "quiet": False, "honor_noqa": False, "dedup_headings": True, "known_other": { "\x10\x1bm": 
frozenset({"\U000682a49\U000e1a63ยฒKร‡ยถ4", "", "\x1a", "ยฉ"}), "": frozenset({"รญรฅ\x94รŒ", "\U000cf258"}), }, "treat_comments_as_code": frozenset({""}), "length_sort": True, "reverse_relative": True, "combine_as_imports": True, "py_version": "all", "use_parentheses": True, "skip_gitignore": True, "remove_imports": frozenset( { "", "\U00076fe7รพs\x0c\U000c8b75v\U00106541", "๐ฅ’’>\U0001960euj๐’Ž•\x9e", "\x15\x9b", "\x02l", "\U000b71ef.\x1c", "\x7f?\U000ec91c", "\x7f,รžoร€P8\x1b\x1eยป3\x86\x94ยครร“~\U00066b1a,O\U0010ab28\x90ยซ", "Y\x06ยบZ\x04รรฌ\U00078ce1.\U0010c1f9[EK\x83Eร–รธ", ";ร€ยจ|\x1bร‚ ๐‘’๐ŸธV", } ), "atomic": False, "source": "runtime", }, ), virtual_env="", conda_env="", ensure_newline_before_comments=False, directory="/home/abuild/rpmbuild/BUILD/isort-5.11.0", profile="", honor_noqa=False, old_finders=False, remove_redundant_aliases=False, float_to_top=False, filter_files=False, formatting_function=None, color_output=False, treat_comments_as_code=frozenset({""}), treat_all_comments_as_code=False, supported_extensions=frozenset({"py", "pyx", "pyi"}), blocked_extensions=frozenset({"pex"}), constants=frozenset(), classes=frozenset( {"\U000eb6c6\x9eร‘\U0008297dรขhรฏ\x8eร†", "C", "\x8e\U000422acยฑ\U000b5a1f\U000c4166", "รนรš"} ), variables=frozenset(), dedup_headings=True, ), disregard_skip=True, ) @hypothesis.given( config=st.from_type(isort.Config), disregard_skip=st.booleans(), ) @hypothesis.settings(deadline=None) def test_isort_doesnt_lose_imports_or_comments(config: isort.Config, disregard_skip: bool) -> None: result = isort.code(CODE_SNIPPET, config=config, disregard_skip=disregard_skip) for should_be_retained in SHOULD_RETAIN: if should_be_retained not in result: if config.ignore_comments and should_be_retained.startswith("comment"): continue assert should_be_retained in result ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/integration/test_ticketed_features.py0000644000000000000000000000602314536412763020474 0ustar00"""Tests that need installation of other packages.""" # TODO: find a way to install example-isort-formatting-plugin to pass tests # from io import StringIO # import pytest # import isort # from isort import api, exceptions # def test_isort_supports_formatting_plugins(): # """Test to ensure isort provides a way to create and share formatting plugins. # See: https://github.com/pycqa/isort/issues/1353. 
# """ # # formatting plugin # assert isort.code("import a", formatter="example") == "import a\n" # # non-existent plugin # with pytest.raises(exceptions.FormattingPluginDoesNotExist): # assert isort.code("import a", formatter="madeupfake") == "import a\n" # def test_isort_literals_issue_1358(): # assert ( # isort.code( # """ # import x # import a # # isort: list # __all__ = ["b", "a", "b"] # # isort: unique-list # __all__ = ["b", "a", "b"] # # isort: tuple # __all__ = ("b", "a", "b") # # isort: unique-tuple # __all__ = ("b", "a", "b") # # isort: set # __all__ = {"b", "a", "b"} # def method(): # # isort: list # x = ["b", "a"] # # isort: dict # y = {"z": "z", "b": "b", "b": "c"}""" # ) # == """ # import a # import x # # isort: list # __all__ = ['a', 'b', 'b'] # # isort: unique-list # __all__ = ['a', 'b'] # # isort: tuple # __all__ = ('a', 'b', 'b') # # isort: unique-tuple # __all__ = ('a', 'b') # # isort: set # __all__ = {'a', 'b'} # def method(): # # isort: list # x = ['a', 'b'] # # isort: dict # y = {'b': 'c', 'z': 'z'}""" # ) # assert ( # isort.code( # """ # import x # import a # # isort: list # __all__ = ["b", "a", "b"] # # isort: unique-list # __all__ = ["b", "a", "b"] # # isort: tuple # __all__ = ("b", "a", "b") # # isort: unique-tuple # __all__ = ("b", "a", "b") # # isort: set # __all__ = {"b", "a", "b"} # def method(): # # isort: list # x = ["b", "a"] # # isort: assignments # d = 1 # b = 2 # a = 3 # # isort: dict # y = {"z": "z", "b": "b", "b": "c"}""", # formatter="example", # ) # == """ # import a # import x # # isort: list # __all__ = ["a", "b", "b"] # # isort: unique-list # __all__ = ["a", "b"] # # isort: tuple # __all__ = ("a", "b", "b") # # isort: unique-tuple # __all__ = ("a", "b") # # isort: set # __all__ = {"a", "b"} # def method(): # # isort: list # x = ["a", "b"] # # isort: assignments # a = 3 # b = 2 # d = 1 # # isort: dict # y = {"b": "c", "z": "z"}""" # ) # assert api.sort_stream( # input_stream=StringIO( # """ # import a # import x # # isort: list # __all__ = ["b", "a", "b"] # # isort: unique-list # __all__ = ["b", "a", "b"] # # isort: tuple # __all__ = ("b", "a", "b") # # isort: unique-tuple # __all__ = ("b", "a", "b") # # isort: set # __all__ = {"b", "a", "b"} # def method(): # # isort: list # x = ["b", "a"] # # isort: assignments # d = 1 # b = 2 # a = 3 # # isort: dict # y = {"z": "z", "b": "b", "b": "c"}""", # ), # output_stream=StringIO(), # ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/__init__.py0000644000000000000000000000000014536412763014123 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/conftest.py0000644000000000000000000000106214536412763014222 0ustar00"""isort test wide fixtures and configuration""" import os from pathlib import Path import pytest TEST_DIR = os.path.dirname(os.path.abspath(__file__)) SRC_DIR = os.path.abspath(os.path.join(TEST_DIR, "../../isort/")) @pytest.fixture def test_dir(): return TEST_DIR @pytest.fixture def src_dir(): return SRC_DIR @pytest.fixture def test_path(): return Path(TEST_DIR).resolve() @pytest.fixture def src_path(): return Path(SRC_DIR).resolve() @pytest.fixture def examples_path(): return Path(TEST_DIR).resolve() / "example_projects" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_crlf_file.py0000644000000000000000000000015314536412763016035 0ustar00import b import a def 
func(): x = 1 y = 2 z = 3 c = 4 return x + y + z + c ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/almost-implicit/.isort.cfg0000644000000000000000000000003214536412763024510 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/almost-implicit/root/nested/__init__.py0000644000000000000000000000000014536412763027162 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/almost-implicit/root/nested/x.py0000644000000000000000000000000014536412763025672 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/almost-implicit/root/y.py0000644000000000000000000000000014536412763024411 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/implicit/.isort.cfg0000644000000000000000000000003214536412763023213 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/implicit/root/nested/__init__.py0000644000000000000000000000000014536412763025665 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/implicit/root/nested/x.py0000644000000000000000000000000014536412763024375 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/none/.isort.cfg0000644000000000000000000000003214536412763022340 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/none/root/__init__.py0000644000000000000000000000000014536412763023530 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/none/root/nested/__init__.py0000644000000000000000000000000014536412763025012 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkg_resource/.isort.cfg0000644000000000000000000000003214536412763024071 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkg_resource/root/__init__.py0000644000000000000000000000007014536412763025270 0ustar00__import__("pkg_resources").declare_namespace(__name__) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkg_resource/root/nested/__init__.py0000644000000000000000000000000014536412763026543 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkg_resource/root/nested/x.py0000644000000000000000000000000014536412763025253 
0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkgutil/.isort.cfg0000644000000000000000000000003214536412763023060 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkgutil/root/__init__.py0000644000000000000000000000010114536412763024252 0ustar00__path__ = __import__("pkgutil").extend_path(__path__, __name__) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkgutil/root/nested/__init__.py0000644000000000000000000000000014536412763025532 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/pkgutil/root/nested/x.py0000644000000000000000000000000014536412763024242 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/weird_encoding/.isort.cfg0000644000000000000000000000003214536412763024361 0ustar00[settings] src_paths=root ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/weird_encoding/root/__init__.py0000644000000000000000000000012314536412763025557 0ustar00description = "ๅŸบไบŽFastAPI + Mysql็š„ TodoList" # Exception: UnicodeDecodeError ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/example_projects/namespaces/weird_encoding/root/nested/__init__.py0000644000000000000000000000000014536412763027033 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/__init__.py0000644000000000000000000000000014536412763015746 0ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_attrs.py0000644000000000000000000000346514536412763016425 0ustar00from functools import partial from ..utils import isort_test attrs_isort_test = partial(isort_test, profile="attrs") def test_attrs_code_snippet_one(): attrs_isort_test( """from __future__ import absolute_import, division, print_function import sys from functools import partial from . import converters, exceptions, filters, setters, validators from ._config import get_run_validators, set_run_validators from ._funcs import asdict, assoc, astuple, evolve, has, resolve_types from ._make import ( NOTHING, Attribute, Factory, attrib, attrs, fields, fields_dict, make_class, validate, ) from ._version_info import VersionInfo __version__ = "20.2.0.dev0" """ ) def test_attrs_code_snippet_two(): attrs_isort_test( """from __future__ import absolute_import, division, print_function import copy import linecache import sys import threading import uuid import warnings from operator import itemgetter from . import _config, setters from ._compat import ( PY2, isclass, iteritems, metadata_proxy, ordered_dict, set_closure_cell, ) from .exceptions import ( DefaultAlreadySetError, FrozenInstanceError, NotAnAttrsClassError, PythonTooOldError, UnannotatedAttributeError, ) # This is used at least twice, so cache it here. 
_obj_setattr = object.__setattr__ """ ) def test_attrs_code_snippet_three(): attrs_isort_test( ''' """ Commonly useful validators. """ from __future__ import absolute_import, division, print_function import re from ._make import _AndValidator, and_, attrib, attrs from .exceptions import NotCallableError __all__ = [ "and_", "deep_iterable", "deep_mapping", "in_", "instance_of", "is_callable", "matches_re", "optional", "provides", ] ''' ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_black.py0000644000000000000000000002356114536412763016343 0ustar00import black from black.report import NothingChanged import isort def black_format(code: str, is_pyi: bool = False, line_length: int = 88) -> str: """Formats the provided code snippet using black""" try: return black.format_file_contents( code, fast=True, mode=black.FileMode( is_pyi=is_pyi, line_length=line_length, ), ) except NothingChanged: return code def black_test(code: str, expected_output: str = "", *, is_pyi: bool = False, **config_kwargs): """Tests that the given code: - Behaves the same when formatted multiple times with isort. - Agrees with black formatting. - Matches the desired output or itself if none is provided. """ expected_output = expected_output or code config_kwargs = { "extension": "pyi" if is_pyi else None, "profile": "black", **config_kwargs, } # output should stay consistent over multiple runs output = isort.code(code, **config_kwargs) assert output == isort.code(code, **config_kwargs) # output should agree with black black_output = black_format(output, is_pyi=is_pyi) assert output == black_output # output should match expected output assert output == expected_output def test_black_snippet_one(): """Test consistent code formatting between isort and black for code snippet from black repository. 
See: https://github.com/psf/black/blob/master/tests/test_black.py """ black_test( """#!/usr/bin/env python3 import asyncio import logging from concurrent.futures import ThreadPoolExecutor from contextlib import contextmanager from dataclasses import replace from functools import partial import inspect from io import BytesIO, TextIOWrapper import os from pathlib import Path from platform import system import regex as re import sys from tempfile import TemporaryDirectory import types from typing import ( Any, BinaryIO, Callable, Dict, Generator, List, Tuple, Iterator, TypeVar, ) import unittest from unittest.mock import patch, MagicMock import click from click import unstyle from click.testing import CliRunner import black from black import Feature, TargetVersion try: import blackd from aiohttp.test_utils import AioHTTPTestCase, unittest_run_loop from aiohttp import web except ImportError: has_blackd_deps = False else: has_blackd_deps = True from pathspec import PathSpec # Import other test classes from .test_primer import PrimerCLITests # noqa: F401 DEFAULT_MODE = black.FileMode(experimental_string_processing=True) """, """#!/usr/bin/env python3 import asyncio import inspect import logging import os import sys import types import unittest from concurrent.futures import ThreadPoolExecutor from contextlib import contextmanager from dataclasses import replace from functools import partial from io import BytesIO, TextIOWrapper from pathlib import Path from platform import system from tempfile import TemporaryDirectory from typing import ( Any, BinaryIO, Callable, Dict, Generator, Iterator, List, Tuple, TypeVar, ) from unittest.mock import MagicMock, patch import black import click import regex as re from black import Feature, TargetVersion from click import unstyle from click.testing import CliRunner try: import blackd from aiohttp import web from aiohttp.test_utils import AioHTTPTestCase, unittest_run_loop except ImportError: has_blackd_deps = False else: has_blackd_deps = True from pathspec import PathSpec # Import other test classes from .test_primer import PrimerCLITests # noqa: F401 DEFAULT_MODE = black.FileMode(experimental_string_processing=True) """, ) def test_black_snippet_two(): """Test consistent code formatting between isort and black for code snippet from black repository. 
See: https://github.com/psf/black/blob/master/tests/test_primer.py """ black_test( '''#!/usr/bin/env python3 import asyncio import sys import unittest from contextlib import contextmanager from copy import deepcopy from io import StringIO from os import getpid from pathlib import Path from platform import system from subprocess import CalledProcessError from tempfile import TemporaryDirectory, gettempdir from typing import Any, Callable, Generator, Iterator, Tuple from unittest.mock import Mock, patch from click.testing import CliRunner from black_primer import cli, lib EXPECTED_ANALYSIS_OUTPUT = """\ -- primer results ๐Ÿ“Š -- 68 / 69 succeeded (98.55%) โœ… 1 / 69 FAILED (1.45%) ๐Ÿ’ฉ - 0 projects disabled by config - 0 projects skipped due to Python version - 0 skipped due to long checkout Failed projects: ## black: - Returned 69 - stdout: Black didn't work """ ''', '''#!/usr/bin/env python3 import asyncio import sys import unittest from contextlib import contextmanager from copy import deepcopy from io import StringIO from os import getpid from pathlib import Path from platform import system from subprocess import CalledProcessError from tempfile import TemporaryDirectory, gettempdir from typing import Any, Callable, Generator, Iterator, Tuple from unittest.mock import Mock, patch from black_primer import cli, lib from click.testing import CliRunner EXPECTED_ANALYSIS_OUTPUT = """-- primer results ๐Ÿ“Š -- 68 / 69 succeeded (98.55%) โœ… 1 / 69 FAILED (1.45%) ๐Ÿ’ฉ - 0 projects disabled by config - 0 projects skipped due to Python version - 0 skipped due to long checkout Failed projects: ## black: - Returned 69 - stdout: Black didn't work """ ''', ) def test_black_snippet_three(): """Test consistent code formatting between isort and black for code snippet from black repository. 
See: https://github.com/psf/black/blob/master/src/black/__init__.py """ black_test( """import ast import asyncio from abc import ABC, abstractmethod from collections import defaultdict from concurrent.futures import Executor, ThreadPoolExecutor, ProcessPoolExecutor from contextlib import contextmanager from datetime import datetime from enum import Enum from functools import lru_cache, partial, wraps import io import itertools import logging from multiprocessing import Manager, freeze_support import os from pathlib import Path import pickle import regex as re import signal import sys import tempfile import tokenize import traceback from typing import ( Any, Callable, Collection, Dict, Generator, Generic, Iterable, Iterator, List, Optional, Pattern, Sequence, Set, Sized, Tuple, Type, TypeVar, Union, cast, TYPE_CHECKING, ) from typing_extensions import Final from mypy_extensions import mypyc_attr from appdirs import user_cache_dir from dataclasses import dataclass, field, replace import click import toml from typed_ast import ast3, ast27 from pathspec import PathSpec # lib2to3 fork from blib2to3.pytree import Node, Leaf, type_repr from blib2to3 import pygram, pytree from blib2to3.pgen2 import driver, token from blib2to3.pgen2.grammar import Grammar from blib2to3.pgen2.parse import ParseError from _black_version import version as __version__ if TYPE_CHECKING: import colorama # noqa: F401 DEFAULT_LINE_LENGTH = 88 """, """import ast import asyncio import io import itertools import logging import os import pickle import signal import sys import tempfile import tokenize import traceback from abc import ABC, abstractmethod from collections import defaultdict from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor from contextlib import contextmanager from dataclasses import dataclass, field, replace from datetime import datetime from enum import Enum from functools import lru_cache, partial, wraps from multiprocessing import Manager, freeze_support from pathlib import Path from typing import ( TYPE_CHECKING, Any, Callable, Collection, Dict, Generator, Generic, Iterable, Iterator, List, Optional, Pattern, Sequence, Set, Sized, Tuple, Type, TypeVar, Union, cast, ) import click import regex as re import toml from _black_version import version as __version__ from appdirs import user_cache_dir from blib2to3 import pygram, pytree from blib2to3.pgen2 import driver, token from blib2to3.pgen2.grammar import Grammar from blib2to3.pgen2.parse import ParseError # lib2to3 fork from blib2to3.pytree import Leaf, Node, type_repr from mypy_extensions import mypyc_attr from pathspec import PathSpec from typed_ast import ast3, ast27 from typing_extensions import Final if TYPE_CHECKING: import colorama # noqa: F401 DEFAULT_LINE_LENGTH = 88 """, ) def test_black_pyi_file(): """Test consistent code formatting between isort and black for `.pyi` files. black only allows no more than two consecutive blank lines in a `.pyi` file. """ black_test( """# comment import math from typing import Sequence import numpy as np def add(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... def sub(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... """, """# comment import math from typing import Sequence import numpy as np def add(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... def sub(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... 
""", is_pyi=False, lines_before_imports=2, lines_after_imports=2, ) black_test( """# comment import math from typing import Sequence import numpy as np def add(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... def sub(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... """, """# comment import math from typing import Sequence import numpy as np def add(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... def sub(a: np.ndarray, b: np.ndarray) -> np.ndarray: ... """, is_pyi=True, lines_before_imports=2, # will be ignored lines_after_imports=2, # will be ignored ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_django.py0000644000000000000000000000723514536412763016531 0ustar00from functools import partial from ..utils import isort_test django_isort_test = partial(isort_test, profile="django", known_first_party=["django"]) def test_django_snippet_one(): django_isort_test( """import copy import inspect import warnings from functools import partialmethod from itertools import chain from django.apps import apps from django.conf import settings from django.core import checks from django.core.exceptions import ( NON_FIELD_ERRORS, FieldDoesNotExist, FieldError, MultipleObjectsReturned, ObjectDoesNotExist, ValidationError, ) from django.db import ( DEFAULT_DB_ALIAS, DJANGO_VERSION_PICKLE_KEY, DatabaseError, connection, connections, router, transaction, ) from django.db.models import ( NOT_PROVIDED, ExpressionWrapper, IntegerField, Max, Value, ) from django.db.models.constants import LOOKUP_SEP from django.db.models.constraints import CheckConstraint from django.db.models.deletion import CASCADE, Collector from django.db.models.fields.related import ( ForeignObjectRel, OneToOneField, lazy_related_operation, resolve_relation, ) from django.db.models.functions import Coalesce from django.db.models.manager import Manager from django.db.models.options import Options from django.db.models.query import Q from django.db.models.signals import ( class_prepared, post_init, post_save, pre_init, pre_save, ) from django.db.models.utils import make_model_tuple from django.utils.encoding import force_str from django.utils.hashable import make_hashable from django.utils.text import capfirst, get_text_list from django.utils.translation import gettext_lazy as _ from django.utils.version import get_version class Deferred: def __repr__(self): return '' def __str__(self): return ''""" ) def test_django_snippet_two(): django_isort_test( '''from django.utils.version import get_version VERSION = (3, 2, 0, 'alpha', 0) __version__ = get_version(VERSION) def setup(set_prefix=True): """ Configure the settings (this happens as a side effect of accessing the first setting), configure logging and populate the app registry. Set the thread-local urlresolvers script prefix if `set_prefix` is True. 
""" from django.apps import apps from django.conf import settings from django.urls import set_script_prefix from django.utils.log import configure_logging configure_logging(settings.LOGGING_CONFIG, settings.LOGGING) if set_prefix: set_script_prefix( '/' if settings.FORCE_SCRIPT_NAME is None else settings.FORCE_SCRIPT_NAME ) apps.populate(settings.INSTALLED_APPS)''' ) def test_django_snippet_three(): django_isort_test( """import cgi import codecs import copy import warnings from io import BytesIO from itertools import chain from urllib.parse import quote, urlencode, urljoin, urlsplit from django.conf import settings from django.core import signing from django.core.exceptions import ( DisallowedHost, ImproperlyConfigured, RequestDataTooBig, ) from django.core.files import uploadhandler from django.http.multipartparser import MultiPartParser, MultiPartParserError from django.utils.datastructures import ( CaseInsensitiveMapping, ImmutableList, MultiValueDict, ) from django.utils.deprecation import RemovedInDjango40Warning from django.utils.encoding import escape_uri_path, iri_to_uri from django.utils.functional import cached_property from django.utils.http import is_same_domain, limited_parse_qsl from django.utils.regex_helper import _lazy_re_compile from .multipartparser import parse_header RAISE_ERROR = object() class UnreadablePostError(OSError): pass""" ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_google.py0000644000000000000000000003210314536412763016533 0ustar00from functools import partial from ..utils import isort_test google_isort_test = partial(isort_test, profile="google") def test_google_code_snippet_shared_example(): """Tests snippet examples directly shared with the isort project. See: https://github.com/PyCQA/isort/issues/1486. """ google_isort_test( """import collections import cProfile """ ) google_isort_test( """from a import z from a.b import c """ ) def test_google_code_snippet_one(): google_isort_test( '''# coding=utf-8 # Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """JAX user-facing transformations and utilities. The transformations here mostly wrap internal transformations, providing convenience flags to control behavior and handling Python containers of arguments and outputs. The Python containers handled are pytrees (see tree_util.py), which include nested tuples/lists/dicts, where the leaves are arrays. """ # flake8: noqa: F401 import collections import functools import inspect import itertools as it import threading import weakref from typing import Any, Callable, Iterable, List, NamedTuple, Optional, Sequence, Tuple, TypeVar, Union from warnings import warn import numpy as np from contextlib import contextmanager, ExitStack from . import core from . import linear_util as lu from . import ad_util from . 
import dtypes from .core import eval_jaxpr from .api_util import (wraps, flatten_fun, apply_flat_fun, flatten_fun_nokwargs, flatten_fun_nokwargs2, argnums_partial, flatten_axes, donation_vector, rebase_donate_argnums) from .traceback_util import api_boundary from .tree_util import (tree_map, tree_flatten, tree_unflatten, tree_structure, tree_transpose, tree_leaves, tree_multimap, treedef_is_leaf, Partial) from .util import (unzip2, curry, partial, safe_map, safe_zip, prod, split_list, extend_name_stack, wrap_name, cache) from .lib import xla_bridge as xb from .lib import xla_client as xc # Unused imports to be exported from .lib.xla_bridge import (device_count, local_device_count, devices, local_devices, host_id, host_ids, host_count) from .abstract_arrays import ConcreteArray, ShapedArray, raise_to_shaped from .interpreters import partial_eval as pe from .interpreters import xla from .interpreters import pxla from .interpreters import ad from .interpreters import batching from .interpreters import masking from .interpreters import invertible_ad as iad from .interpreters.invertible_ad import custom_ivjp from .custom_derivatives import custom_jvp, custom_vjp from .config import flags, config, bool_env AxisName = Any # This TypeVar is used below to express the fact that function call signatures # are invariant under the jit, vmap, and pmap transformations. # Specifically, we statically assert that the return type is invariant. # Until PEP-612 is implemented, we cannot express the same invariance for # function arguments. # Note that the return type annotations will generally not strictly hold # in JIT internals, as Tracer values are passed through the function. # Should this raise any type errors for the tracing code in future, we can disable # type checking in parts of the tracing code, or remove these annotations. T = TypeVar("T") map = safe_map zip = safe_zip FLAGS = flags.FLAGS flags.DEFINE_bool("jax_disable_jit", bool_env("JAX_DISABLE_JIT", False), "Disable JIT compilation and just call original Python.") ''', '''# coding=utf-8 # Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """JAX user-facing transformations and utilities. The transformations here mostly wrap internal transformations, providing convenience flags to control behavior and handling Python containers of arguments and outputs. The Python containers handled are pytrees (see tree_util.py), which include nested tuples/lists/dicts, where the leaves are arrays. """ # flake8: noqa: F401 import collections from contextlib import contextmanager from contextlib import ExitStack import functools import inspect import itertools as it import threading from typing import (Any, Callable, Iterable, List, NamedTuple, Optional, Sequence, Tuple, TypeVar, Union) from warnings import warn import weakref import numpy as np from . import ad_util from . import core from . import dtypes from . 
import linear_util as lu from .abstract_arrays import ConcreteArray from .abstract_arrays import raise_to_shaped from .abstract_arrays import ShapedArray from .api_util import apply_flat_fun from .api_util import argnums_partial from .api_util import donation_vector from .api_util import flatten_axes from .api_util import flatten_fun from .api_util import flatten_fun_nokwargs from .api_util import flatten_fun_nokwargs2 from .api_util import rebase_donate_argnums from .api_util import wraps from .config import bool_env from .config import config from .config import flags from .core import eval_jaxpr from .custom_derivatives import custom_jvp from .custom_derivatives import custom_vjp from .interpreters import ad from .interpreters import batching from .interpreters import invertible_ad as iad from .interpreters import masking from .interpreters import partial_eval as pe from .interpreters import pxla from .interpreters import xla from .interpreters.invertible_ad import custom_ivjp from .lib import xla_bridge as xb from .lib import xla_client as xc # Unused imports to be exported from .lib.xla_bridge import device_count from .lib.xla_bridge import devices from .lib.xla_bridge import host_count from .lib.xla_bridge import host_id from .lib.xla_bridge import host_ids from .lib.xla_bridge import local_device_count from .lib.xla_bridge import local_devices from .traceback_util import api_boundary from .tree_util import Partial from .tree_util import tree_flatten from .tree_util import tree_leaves from .tree_util import tree_map from .tree_util import tree_multimap from .tree_util import tree_structure from .tree_util import tree_transpose from .tree_util import tree_unflatten from .tree_util import treedef_is_leaf from .util import cache from .util import curry from .util import extend_name_stack from .util import partial from .util import prod from .util import safe_map from .util import safe_zip from .util import split_list from .util import unzip2 from .util import wrap_name AxisName = Any # This TypeVar is used below to express the fact that function call signatures # are invariant under the jit, vmap, and pmap transformations. # Specifically, we statically assert that the return type is invariant. # Until PEP-612 is implemented, we cannot express the same invariance for # function arguments. # Note that the return type annotations will generally not strictly hold # in JIT internals, as Tracer values are passed through the function. # Should this raise any type errors for the tracing code in future, we can disable # type checking in parts of the tracing code, or remove these annotations. 
T = TypeVar("T") map = safe_map zip = safe_zip FLAGS = flags.FLAGS flags.DEFINE_bool("jax_disable_jit", bool_env("JAX_DISABLE_JIT", False), "Disable JIT compilation and just call original Python.") ''', ) def test_google_code_snippet_two(): google_isort_test( """#!/usr/bin/env python # In[ ]: # coding: utf-8 ###### Searching and Downloading Google Images to the local disk ###### # Import Libraries import sys version = (3, 0) cur_version = sys.version_info if cur_version >= version: # If the Current Version of Python is 3.0 or above import urllib.request from urllib.request import Request, urlopen from urllib.request import URLError, HTTPError from urllib.parse import quote import http.client from http.client import IncompleteRead, BadStatusLine http.client._MAXHEADERS = 1000 else: # If the Current Version of Python is 2.x import urllib2 from urllib2 import Request, urlopen from urllib2 import URLError, HTTPError from urllib import quote import httplib from httplib import IncompleteRead, BadStatusLine httplib._MAXHEADERS = 1000 import time # Importing the time library to check the time of code execution import os import argparse import ssl import datetime import json import re import codecs import socket""", """#!/usr/bin/env python # In[ ]: # coding: utf-8 ###### Searching and Downloading Google Images to the local disk ###### # Import Libraries import sys version = (3, 0) cur_version = sys.version_info if cur_version >= version: # If the Current Version of Python is 3.0 or above import http.client from http.client import BadStatusLine from http.client import IncompleteRead from urllib.parse import quote import urllib.request from urllib.request import HTTPError from urllib.request import Request from urllib.request import URLError from urllib.request import urlopen http.client._MAXHEADERS = 1000 else: # If the Current Version of Python is 2.x from urllib import quote import httplib from httplib import BadStatusLine from httplib import IncompleteRead import urllib2 from urllib2 import HTTPError from urllib2 import Request from urllib2 import URLError from urllib2 import urlopen httplib._MAXHEADERS = 1000 import argparse import codecs import datetime import json import os import re import socket import ssl import time # Importing the time library to check the time of code execution """, ) def test_code_snippet_three(): google_isort_test( '''# Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Monitoring.""" # pylint: disable=invalid-name # TODO(ochang): Remove V3 from names once all metrics are migrated to # stackdriver. 
from builtins import object from builtins import range from builtins import str import bisect import collections import functools import itertools import re import six import threading import time try: from google.cloud import monitoring_v3 except (ImportError, RuntimeError): monitoring_v3 = None from google.api_core import exceptions from google.api_core import retry from base import errors from base import utils from config import local_config from google_cloud_utils import compute_metadata from google_cloud_utils import credentials from metrics import logs from system import environment''', '''# Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Monitoring.""" # pylint: disable=invalid-name # TODO(ochang): Remove V3 from names once all metrics are migrated to # stackdriver. import bisect from builtins import object from builtins import range from builtins import str import collections import functools import itertools import re import threading import time import six try: from google.cloud import monitoring_v3 except (ImportError, RuntimeError): monitoring_v3 = None from base import errors from base import utils from config import local_config from google.api_core import exceptions from google.api_core import retry from google_cloud_utils import compute_metadata from google_cloud_utils import credentials from metrics import logs from system import environment ''', ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_hug.py0000644000000000000000000000561114536412763016046 0ustar00from functools import partial from ..utils import isort_test hug_isort_test = partial(isort_test, profile="hug", known_first_party=["hug"]) def test_hug_code_snippet_one(): hug_isort_test( ''' from __future__ import absolute_import import asyncio import sys from collections import OrderedDict, namedtuple from distutils.util import strtobool from functools import partial from itertools import chain from types import ModuleType from wsgiref.simple_server import make_server import falcon from falcon import HTTP_METHODS import hug.defaults import hug.output_format from hug import introspect from hug._version import current INTRO = """ /#######################################################################\\ `.----``..-------..``.----. :/:::::--:---------:--::::://. .+::::----##/-/oo+:-##----::::// `//::-------/oosoo-------::://. ## ## ## ## ##### .-:------./++o/o-.------::-` ``` ## ## ## ## ## `----.-./+o+:..----. `.:///. ######## ## ## ## ``` `----.-::::::------ `.-:::://. ## ## ## ## ## #### ://::--.``` -:``...-----...` `:--::::::-.` ## ## ## ## ## ## :/:::::::::-:- ````` .:::::-.` ## ## #### ###### ``.--:::::::. .:::.` ``..::. 
.:: EMBRACE THE APIs OF THE FUTURE ::- .:- -::` ::- VERSION {0} `::- -::` -::-` -::- \\########################################################################/ Copyright (C) 2016 Timothy Edmund Crosley Under the MIT License """.format( current )''' ) def test_hug_code_snippet_two(): hug_isort_test( """from __future__ import absolute_import import functools from collections import namedtuple from falcon import HTTP_METHODS import hug.api import hug.defaults import hug.output_format from hug import introspect from hug.format import underscore def default_output_format( content_type="application/json", apply_globally=False, api=None, cli=False, http=True ): """ ) def test_hug_code_snippet_three(): hug_isort_test( """from __future__ import absolute_import import argparse import asyncio import os import sys from collections import OrderedDict from functools import lru_cache, partial, wraps import falcon from falcon import HTTP_BAD_REQUEST import hug._empty as empty import hug.api import hug.output_format import hug.types as types from hug import introspect from hug.exceptions import InvalidTypeData from hug.format import parse_content_type from hug.types import ( MarshmallowInputSchema, MarshmallowReturnSchema, Multiple, OneOf, SmartBoolean, Text, text, ) DOC_TYPE_MAP = {str: "String", bool: "Boolean", list: "Multiple", int: "Integer", float: "Float"} """ ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_open_stack.py0000644000000000000000000000710314536412763017407 0ustar00from functools import partial from ..utils import isort_test open_stack_isort_test = partial(isort_test, profile="open_stack") def test_open_stack_code_snippet_one(): open_stack_isort_test( """import httplib import logging import random import StringIO import time import unittest import eventlet import webob.exc import nova.api.ec2 from nova.api import manager from nova.api import openstack from nova.auth import users from nova.endpoint import cloud import nova.flags from nova.i18n import _ from nova.i18n import _LC from nova import test """, known_first_party=["nova"], py_version="2", order_by_type=False, ) def test_open_stack_code_snippet_two(): open_stack_isort_test( """# Copyright 2011 VMware, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import inspect import os import random from neutron_lib.callbacks import events from neutron_lib.callbacks import registry from neutron_lib.callbacks import resources from neutron_lib import context from neutron_lib.db import api as session from neutron_lib.plugins import directory from neutron_lib import rpc as n_rpc from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_messaging import server as rpc_server from oslo_service import loopingcall from oslo_service import service as common_service from oslo_utils import excutils from oslo_utils import importutils import psutil from neutron.common import config from neutron.common import profiler from neutron.conf import service from neutron import worker as neutron_worker from neutron import wsgi service.register_service_opts(service.SERVICE_OPTS) """, known_first_party=["neutron"], ) def test_open_stack_code_snippet_three(): open_stack_isort_test( """ # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from oslo_log import log as logging import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils from oslo_service import periodic_task from oslo_utils import importutils import six import nova.conf import nova.context import nova.exception from nova.i18n import _ __all__ = [ 'init', 'cleanup', 'set_defaults', 'add_extra_exmods', 'clear_extra_exmods', 'get_allowed_exmods', 'RequestContextSerializer', 'get_client', 'get_server', 'get_notifier', ] profiler = importutils.try_import("osprofiler.profiler") """, known_first_party=["nova"], ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_plone.py0000644000000000000000000000435014536412763016377 0ustar00from functools import partial from ..utils import isort_test plone_isort_test = partial(isort_test, profile="plone") def test_plone_code_snippet_one(): plone_isort_test( """# -*- coding: utf-8 -*- from plone.app.multilingual.testing import PLONE_APP_MULTILINGUAL_PRESET_FIXTURE # noqa from plone.app.robotframework.testing import REMOTE_LIBRARY_BUNDLE_FIXTURE from plone.app.testing import FunctionalTesting from plone.app.testing import IntegrationTesting from plone.app.testing import PloneWithPackageLayer from plone.testing import z2 import plone.app.multilingualindexes PAMI_FIXTURE = PloneWithPackageLayer( bases=(PLONE_APP_MULTILINGUAL_PRESET_FIXTURE,), name="PAMILayer:Fixture", gs_profile_id="plone.app.multilingualindexes:default", zcml_package=plone.app.multilingualindexes, zcml_filename="configure.zcml", additional_z2_products=["plone.app.multilingualindexes"], ) """ ) def test_plone_code_snippet_two(): plone_isort_test( """# -*- coding: utf-8 -*- from Acquisition import aq_base from App.class_init import InitializeClass from App.special_dtml import DTMLFile from BTrees.OOBTree import OOTreeSet from logging import getLogger from plone import api from 
plone.app.multilingual.events import ITranslationRegisteredEvent from plone.app.multilingual.interfaces import ITG from plone.app.multilingual.interfaces import ITranslatable from plone.app.multilingual.interfaces import ITranslationManager from plone.app.multilingualindexes.utils import get_configuration from plone.indexer.interfaces import IIndexableObject from Products.CMFPlone.utils import safe_hasattr from Products.DateRecurringIndex.index import DateRecurringIndex from Products.PluginIndexes.common.UnIndex import UnIndex from Products.ZCatalog.Catalog import Catalog from ZODB.POSException import ConflictError from zope.component import getMultiAdapter from zope.component import queryAdapter from zope.globalrequest import getRequest logger = getLogger(__name__) """ ) def test_plone_code_snippet_three(): plone_isort_test( """# -*- coding: utf-8 -*- from plone.app.querystring.interfaces import IQueryModifier from zope.interface import provider import logging logger = logging.getLogger(__name__) """ ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_pycharm.py0000644000000000000000000000214114536412763016721 0ustar00from functools import partial from ..utils import isort_test pycharm_isort_test = partial(isort_test, profile="pycharm") def test_pycharm_snippet_one(): pycharm_isort_test( """import shutil import sys from io import StringIO from pathlib import Path from typing import ( Optional, TextIO, Union, cast ) from warnings import warn from isort import core from . import io from .exceptions import ( ExistingSyntaxErrors, FileSkipComment, FileSkipSetting, IntroducedSyntaxErrors ) from .format import ( ask_whether_to_apply_changes_to_file, create_terminal_printer, show_unified_diff ) from .io import Empty from .place import module as place_module # noqa: F401 from .place import module_with_reason as place_module_with_reason # noqa: F401 from .settings import ( DEFAULT_CONFIG, Config ) def sort_code_string( code: str, extension: Optional[str] = None, config: Config = DEFAULT_CONFIG, file_path: Optional[Path] = None, disregard_skip: bool = False, show_diff: Union[bool, TextIO] = False, **config_kwargs, ): """ ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/profiles/test_wemake.py0000644000000000000000000000532314536412763016534 0ustar00"""A set of test cases for the wemake isort profile. 
Snippets are taken directly from the wemake-python-styleguide project here: https://github.com/wemake-services/wemake-python-styleguide """ from functools import partial from ..utils import isort_test wemake_isort_test = partial( isort_test, profile="wemake", known_first_party=["wemake_python_styleguide"] ) def test_wemake_snippet_one(): wemake_isort_test( """ import ast import tokenize import traceback from typing import ClassVar, Iterator, Sequence, Type from flake8.options.manager import OptionManager from typing_extensions import final from wemake_python_styleguide import constants, types from wemake_python_styleguide import version as pkg_version from wemake_python_styleguide.options.config import Configuration from wemake_python_styleguide.options.validation import validate_options from wemake_python_styleguide.presets.types import file_tokens as tokens_preset from wemake_python_styleguide.presets.types import filename as filename_preset from wemake_python_styleguide.presets.types import tree as tree_preset from wemake_python_styleguide.transformations.ast_tree import transform from wemake_python_styleguide.violations import system from wemake_python_styleguide.visitors import base VisitorClass = Type[base.BaseVisitor] """ ) def test_wemake_snippet_two(): wemake_isort_test( """ from collections import defaultdict from typing import ClassVar, DefaultDict, List from flake8.formatting.base import BaseFormatter from flake8.statistics import Statistics from flake8.style_guide import Violation from pygments import highlight from pygments.formatters import TerminalFormatter from pygments.lexers import PythonLexer from typing_extensions import Final from wemake_python_styleguide.version import pkg_version #: That url is generated and hosted by Sphinx. DOCS_URL_TEMPLATE: Final = ( 'https://wemake-python-stylegui.de/en/{0}/pages/usage/violations/' ) """ ) def test_wemake_snippet_three(): wemake_isort_test( """ import ast from pep8ext_naming import NamingChecker from typing_extensions import final from wemake_python_styleguide.transformations.ast.bugfixes import ( fix_async_offset, fix_line_number, ) from wemake_python_styleguide.transformations.ast.enhancements import ( set_if_chain, set_node_context, ) @final class _ClassVisitor(ast.NodeVisitor): ... 
""" ) def test_wemake_snippet_four(): """80 line length should be fixed""" wemake_isort_test( """ from typing import Iterable, Iterator, Optional, Sequence, Tuple, TypeVar, Union """, """ from typing import ( Iterable, Iterator, Optional, Sequence, Tuple, TypeVar, Union, ) """, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_action_comments.py0000644000000000000000000000117314536412763016621 0ustar00"""Tests for isort action comments, such as isort: skip""" import isort def test_isort_off_and_on(): """Test so ensure isort: off action comment and associated on action comment work together""" # as top of file comment assert ( isort.code( """# isort: off import a import a # isort: on import a import a """ ) == """# isort: off import a import a # isort: on import a """ ) # as middle comment assert ( isort.code( """ import a import a # isort: off import a import a """ ) == """ import a # isort: off import a import a """ ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_api.py0000644000000000000000000000704314536412763014212 0ustar00"""Tests the isort API module""" import os from io import StringIO from unittest.mock import MagicMock, patch import pytest from isort import ImportKey, api from isort.settings import Config imperfect_content = "import b\nimport a\n" fixed_content = "import a\nimport b\n" fixed_diff = "+import a\n import b\n-import a\n" @pytest.fixture def imperfect(tmpdir): imperfect_file = tmpdir.join("test_needs_changes.py") imperfect_file.write_text(imperfect_content, "utf8") return imperfect_file def test_sort_file_with_bad_syntax(tmpdir) -> None: tmp_file = tmpdir.join("test_bad_syntax.py") tmp_file.write_text("""print('mismatching quotes")""", "utf8") with pytest.warns(UserWarning): api.sort_file(tmp_file, atomic=True) with pytest.warns(UserWarning): api.sort_file(tmp_file, atomic=True, write_to_stdout=True) def test_sort_file(imperfect) -> None: assert api.sort_file(imperfect) assert imperfect.read() == fixed_content def test_sort_file_in_place(imperfect) -> None: assert api.sort_file(imperfect, overwrite_in_place=True) assert imperfect.read() == fixed_content def test_sort_file_to_stdout(capsys, imperfect) -> None: assert api.sort_file(imperfect, write_to_stdout=True) out, _ = capsys.readouterr() assert out == fixed_content.replace("\n", os.linesep) def test_other_ask_to_apply(imperfect) -> None: # First show diff, but ensure change wont get written by asking to apply # and ensuring answer is no. 
with patch("isort.format.input", MagicMock(return_value="n")): assert not api.sort_file(imperfect, ask_to_apply=True) assert imperfect.read() == imperfect_content # Then run again, but apply the change (answer is yes) with patch("isort.format.input", MagicMock(return_value="y")): assert api.sort_file(imperfect, ask_to_apply=True) assert imperfect.read() == fixed_content def test_check_file_no_changes(capsys, tmpdir) -> None: perfect = tmpdir.join("test_no_changes.py") perfect.write_text("import a\nimport b\n", "utf8") assert api.check_file(perfect, show_diff=True) out, _ = capsys.readouterr() assert not out def test_check_file_with_changes(capsys, imperfect) -> None: assert not api.check_file(imperfect, show_diff=True) out, _ = capsys.readouterr() assert fixed_diff.replace("\n", os.linesep) in out def test_sorted_imports_multiple_configs() -> None: with pytest.raises(ValueError): api.sort_code_string("import os", config=Config(line_length=80), line_length=80) def test_diff_stream() -> None: output = StringIO() assert api.sort_stream(StringIO("import b\nimport a\n"), output, show_diff=True) output.seek(0) assert fixed_diff in output.read() def test_sort_code_string_mixed_newlines(): assert api.sort_code_string("import A\n\r\nimportA\n\n") == "import A\r\n\r\nimportA\r\n\n" def test_find_imports_in_file(imperfect): found_imports = list(api.find_imports_in_file(imperfect)) assert "b" in [found_import.module for found_import in found_imports] def test_find_imports_in_code(): code = """ from x.y import z as a from x.y import z as a from x.y import z import x.y import x """ assert len(list(api.find_imports_in_code(code))) == 5 assert len(list(api.find_imports_in_code(code, unique=True))) == 4 assert len(list(api.find_imports_in_code(code, unique=ImportKey.ATTRIBUTE))) == 3 assert len(list(api.find_imports_in_code(code, unique=ImportKey.MODULE))) == 2 assert len(list(api.find_imports_in_code(code, unique=ImportKey.PACKAGE))) == 1 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_comments.py0000644000000000000000000000156014536412763015264 0ustar00from hypothesis import given from hypothesis import strategies as st import isort.comments def test_add_to_line(): assert ( isort.comments.add_to_line([], "import os # comment", removed=True).strip() == "import os" ) # These tests were written by the `hypothesis.extra.ghostwriter` module # and is provided under the Creative Commons Zero public domain dedication. 
@given( comments=st.one_of(st.none(), st.lists(st.text())), original_string=st.text(), removed=st.booleans(), comment_prefix=st.text(), ) def test_fuzz_add_to_line(comments, original_string, removed, comment_prefix): isort.comments.add_to_line( comments=comments, original_string=original_string, removed=removed, comment_prefix=comment_prefix, ) @given(line=st.text()) def test_fuzz_parse(line): isort.comments.parse(line=line) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_deprecated_finders.py0000644000000000000000000001415514536412763017255 0ustar00import importlib.machinery import os import posixpath from pathlib import Path from unittest.mock import patch from isort import sections, settings from isort.deprecated import finders from isort.deprecated.finders import FindersManager from isort.settings import Config class TestFindersManager: def test_init(self): assert FindersManager(settings.DEFAULT_CONFIG) class ExceptionOnInit(finders.BaseFinder): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) raise ValueError("test") with patch( "isort.deprecated.finders.FindersManager._default_finders_classes", FindersManager._default_finders_classes + (ExceptionOnInit,), # type: ignore ): assert FindersManager(settings.Config(verbose=True)) def test_no_finders(self): assert FindersManager(settings.DEFAULT_CONFIG, []).find("isort") is None def test_find_broken_finder(self): class ExceptionOnFind(finders.BaseFinder): def find(*args, **kwargs): raise ValueError("test") assert ( FindersManager(settings.Config(verbose=True), [ExceptionOnFind]).find("isort") is None ) class AbstractTestFinder: kind = finders.BaseFinder @classmethod def setup_class(cls): cls.instance = cls.kind(settings.DEFAULT_CONFIG) # type: ignore def test_create(self): assert self.kind(settings.DEFAULT_CONFIG) # type: ignore def test_find(self): self.instance.find("isort") # type: ignore self.instance.find("") # type: ignore class TestForcedSeparateFinder(AbstractTestFinder): kind = finders.ForcedSeparateFinder class TestDefaultFinder(AbstractTestFinder): kind = finders.DefaultFinder class TestKnownPatternFinder(AbstractTestFinder): kind = finders.KnownPatternFinder class TestLocalFinder(AbstractTestFinder): kind = finders.LocalFinder class TestPathFinder(AbstractTestFinder): kind = finders.PathFinder def test_conda_and_virtual_env(self, tmpdir): python3lib = tmpdir.mkdir("lib").mkdir("python3") python3lib.mkdir("site-packages").mkdir("y") python3lib.mkdir("n").mkdir("site-packages").mkdir("x") tmpdir.mkdir("z").join("__init__.py").write("__version__ = '1.0.0'") tmpdir.chdir() conda = self.kind(settings.Config(conda_env=str(tmpdir)), str(tmpdir)) venv = self.kind(settings.Config(virtual_env=str(tmpdir)), str(tmpdir)) assert conda.find("y") == venv.find("y") == "THIRDPARTY" assert conda.find("x") == venv.find("x") == "THIRDPARTY" assert conda.find("z") == "THIRDPARTY" assert conda.find("os") == venv.find("os") == "STDLIB" def test_default_section(self, tmpdir): tmpdir.join("file.py").write("import b\nimport a\n") assert self.kind(settings.Config(default_section="CUSTOM"), tmpdir).find("file") == "CUSTOM" def test_src_paths(self, tmpdir): tmpdir.join("file.py").write("import b\nimport a\n") assert ( self.kind(settings.Config(src_paths=[Path(str(tmpdir))]), tmpdir).find("file") == settings.DEFAULT_CONFIG.default_section ) class TestRequirementsFinder(AbstractTestFinder): kind = finders.RequirementsFinder def test_no_pipreqs(self): with 
patch("isort.deprecated.finders.pipreqs", None): assert not self.kind(settings.DEFAULT_CONFIG).find("isort") def test_not_enabled(self): test_finder = self.kind(settings.DEFAULT_CONFIG) test_finder.enabled = False assert not test_finder.find("isort") def test_requirements_dir(self, tmpdir): tmpdir.mkdir("requirements").join("development.txt").write("x==1.00") test_finder = self.kind(settings.DEFAULT_CONFIG, str(tmpdir)) assert test_finder.find("x") def test_requirements_finder(tmpdir) -> None: subdir = tmpdir.mkdir("subdir").join("lol.txt") subdir.write("flask") req_file = tmpdir.join("requirements.txt") req_file.write("Django==1.11\n-e git+https://github.com/orsinium/deal.git#egg=deal\n") for path in (str(tmpdir), str(subdir)): finder = finders.RequirementsFinder(config=Config(), path=path) files = list(finder._get_files()) assert len(files) == 1 # file finding assert files[0].endswith("requirements.txt") # file finding assert set(finder._get_names(str(req_file))) == {"Django", "deal"} # file parsing assert finder.find("django") == sections.THIRDPARTY # package in reqs assert finder.find("flask") is None # package not in reqs assert finder.find("deal") == sections.THIRDPARTY # vcs assert len(finder.mapping) > 100 # type: ignore assert finder._normalize_name("deal") == "deal" assert finder._normalize_name("Django") == "django" # lowercase assert finder._normalize_name("django_haystack") == "haystack" # mapping assert finder._normalize_name("Flask-RESTful") == "flask_restful" # convert `-`to `_` req_file.remove() def test_path_finder(monkeypatch) -> None: config = config = Config() finder = finders.PathFinder(config=config) third_party_prefix = next(path for path in finder.paths if "site-packages" in path) ext_suffixes = importlib.machinery.EXTENSION_SUFFIXES imaginary_paths = { posixpath.join(finder.stdlib_lib_prefix, "example_1.py"), posixpath.join(third_party_prefix, "example_2.py"), posixpath.join(os.getcwd(), "example_3.py"), } imaginary_paths.update( { posixpath.join(third_party_prefix, "example_" + str(i) + ext_suffix) for i, ext_suffix in enumerate(ext_suffixes, 4) } ) monkeypatch.setattr( "isort.deprecated.finders.exists_case_sensitive", lambda p: p in imaginary_paths ) assert finder.find("example_1") == sections.STDLIB assert finder.find("example_2") == sections.THIRDPARTY assert finder.find("example_3") == settings.DEFAULT_CONFIG.default_section for i, _ in enumerate(ext_suffixes, 4): assert finder.find("example_" + str(i)) == sections.THIRDPARTY ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_exceptions.py0000644000000000000000000000757514536412763015634 0ustar00import pickle from isort import exceptions class TestISortError: def setup_class(self): self.instance = exceptions.ISortError() def test_init(self): assert isinstance(self.instance, exceptions.ISortError) def test_pickleable(self): assert isinstance(pickle.loads(pickle.dumps(self.instance)), exceptions.ISortError) class TestExistingSyntaxErrors(TestISortError): def setup_class(self): self.instance: exceptions.ExistingSyntaxErrors = exceptions.ExistingSyntaxErrors( "file_path" ) def test_variables(self): assert self.instance.file_path == "file_path" class TestIntroducedSyntaxErrors(TestISortError): def setup_class(self): self.instance: exceptions.IntroducedSyntaxErrors = exceptions.IntroducedSyntaxErrors( "file_path" ) def test_variables(self): assert self.instance.file_path == "file_path" class TestFileSkipped(TestISortError): def 
setup_class(self): self.instance: exceptions.FileSkipped = exceptions.FileSkipped("message", "file_path") def test_variables(self): assert self.instance.file_path == "file_path" assert str(self.instance) == "message" class TestFileSkipComment(TestISortError): def setup_class(self): self.instance: exceptions.FileSkipComment = exceptions.FileSkipComment("file_path") def test_variables(self): assert self.instance.file_path == "file_path" class TestFileSkipSetting(TestISortError): def setup_class(self): self.instance: exceptions.FileSkipSetting = exceptions.FileSkipSetting("file_path") def test_variables(self): assert self.instance.file_path == "file_path" class TestProfileDoesNotExist(TestISortError): def setup_class(self): self.instance: exceptions.ProfileDoesNotExist = exceptions.ProfileDoesNotExist("profile") def test_variables(self): assert self.instance.profile == "profile" class TestSortingFunctionDoesNotExist(TestISortError): def setup_class(self): self.instance: exceptions.SortingFunctionDoesNotExist = ( exceptions.SortingFunctionDoesNotExist("round", ["square", "peg"]) ) def test_variables(self): assert self.instance.sort_order == "round" assert self.instance.available_sort_orders == ["square", "peg"] class TestLiteralParsingFailure(TestISortError): def setup_class(self): self.instance: exceptions.LiteralParsingFailure = exceptions.LiteralParsingFailure( "x = [", SyntaxError ) def test_variables(self): assert self.instance.code == "x = [" assert self.instance.original_error == SyntaxError class TestLiteralSortTypeMismatch(TestISortError): def setup_class(self): self.instance: exceptions.LiteralSortTypeMismatch = exceptions.LiteralSortTypeMismatch( tuple, list ) def test_variables(self): assert self.instance.kind == tuple assert self.instance.expected_kind == list class TestAssignmentsFormatMismatch(TestISortError): def setup_class(self): self.instance: exceptions.AssignmentsFormatMismatch = exceptions.AssignmentsFormatMismatch( "print x" ) def test_variables(self): assert self.instance.code == "print x" class TestUnsupportedSettings(TestISortError): def setup_class(self): self.instance: exceptions.UnsupportedSettings = exceptions.UnsupportedSettings( {"apply": {"value": "true", "source": "/"}} ) def test_variables(self): assert self.instance.unsupported_settings == {"apply": {"value": "true", "source": "/"}} class TestUnsupportedEncoding(TestISortError): def setup_class(self): self.instance: exceptions.UnsupportedEncoding = exceptions.UnsupportedEncoding("file.py") def test_variables(self): assert self.instance.filename == "file.py" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_files.py0000644000000000000000000000037114536412763014540 0ustar00from isort import files from isort.settings import DEFAULT_CONFIG def test_find(tmpdir): tmp_file = tmpdir.join("file.py") tmp_file.write("import os, sys\n") assert tuple(files.find((tmp_file,), DEFAULT_CONFIG, [], [])) == (tmp_file,) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_format.py0000644000000000000000000001053114536412763014725 0ustar00from io import StringIO from pathlib import Path from unittest.mock import MagicMock, patch import colorama import pytest from hypothesis import given, reject from hypothesis import strategies as st import isort.format def test_ask_whether_to_apply_changes_to_file(): with patch("isort.format.input", MagicMock(return_value="y")): assert 
isort.format.ask_whether_to_apply_changes_to_file("") with patch("isort.format.input", MagicMock(return_value="n")): assert not isort.format.ask_whether_to_apply_changes_to_file("") with patch("isort.format.input", MagicMock(return_value="q")): with pytest.raises(SystemExit): assert isort.format.ask_whether_to_apply_changes_to_file("") def test_basic_printer(capsys): printer = isort.format.create_terminal_printer( color=False, success="{success}: {message}", error="{error}: {message}" ) printer.success("All good!") out, _ = capsys.readouterr() assert out == "SUCCESS: All good!\n" printer.error("Some error") _, err = capsys.readouterr() assert err == "ERROR: Some error\n" printer = isort.format.create_terminal_printer( color=False, success="success: {message}: {success}", error="error: {message}: {error}" ) printer.success("All good!") out, _ = capsys.readouterr() assert out == "success: All good!: SUCCESS\n" printer.error("Some error") _, err = capsys.readouterr() assert err == "error: Some error: ERROR\n" def test_basic_printer_diff(capsys): printer = isort.format.create_terminal_printer(color=False) printer.diff_line("+ added line\n") printer.diff_line("- removed line\n") out, _ = capsys.readouterr() assert out == "+ added line\n- removed line\n" def test_colored_printer_success(capsys): printer = isort.format.create_terminal_printer(color=True, success="{success}: {message}") printer.success("All good!") out, _ = capsys.readouterr() assert "SUCCESS" in out assert "All good!" in out assert colorama.Fore.GREEN in out def test_colored_printer_error(capsys): printer = isort.format.create_terminal_printer(color=True, error="{error}: {message}") printer.error("Some error") _, err = capsys.readouterr() assert "ERROR" in err assert "Some error" in err assert colorama.Fore.RED in err def test_colored_printer_diff(capsys): printer = isort.format.create_terminal_printer(color=True) printer.diff_line("+++ file1\n") printer.diff_line("--- file2\n") printer.diff_line("+ added line\n") printer.diff_line("normal line\n") printer.diff_line("- removed line\n") printer.diff_line("normal line\n") out, _ = capsys.readouterr() # No color added to lines with multiple + and -'s assert out.startswith("+++ file1\n--- file2\n") # Added lines are green assert colorama.Fore.GREEN + "+ added line" in out # Removed lines are red assert colorama.Fore.RED + "- removed line" in out # Normal lines are reset back assert colorama.Style.RESET_ALL + "normal line" in out def test_colored_printer_diff_output(capsys): output = StringIO() printer = isort.format.create_terminal_printer(color=True, output=output) printer.diff_line("a line\n") out, _ = capsys.readouterr() assert out == "" output.seek(0) assert output.read().startswith("a line\n") @patch("isort.format.colorama_unavailable", True) def test_colorama_not_available_handled_gracefully(capsys): with pytest.raises(SystemExit) as system_exit: _ = isort.format.create_terminal_printer(color=True) assert system_exit.value.code and int(system_exit.value.code) > 0 _, err = capsys.readouterr() assert "colorama" in err assert "colors extra" in err # This test code was written by the `hypothesis.extra.ghostwriter` module # and is provided under the Creative Commons Zero public domain dedication. 
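# A minimal, non-fuzzed sketch of the isort.format.show_unified_diff call that the
# ghostwritten property-based test below exercises. The test name and assertions are
# illustrative additions, not part of the original suite; they assume output=None
# falls back to stdout (as the printer tests above suggest) and that the unified diff
# of a re-sorted two-line import block marks the relocated "import a" line.
def test_show_unified_diff_minimal_sketch(capsys):
    # Diff a two-line import block against its sorted form and capture stdout.
    isort.format.show_unified_diff(
        file_input="import b\nimport a\n",
        file_output="import a\nimport b\n",
        file_path=None,
        output=None,
    )
    out, _ = capsys.readouterr()
    assert "+import a" in out
    assert "-import a" in out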
@given( file_input=st.text(), file_output=st.text(), file_path=st.one_of(st.none(), st.builds(Path)), output=st.one_of(st.none(), st.builds(StringIO, st.text())), ) def test_fuzz_show_unified_diff(file_input, file_output, file_path, output): try: isort.format.show_unified_diff( file_input=file_input, file_output=file_output, file_path=file_path, output=output, ) except UnicodeEncodeError: reject() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_hooks.py0000644000000000000000000000754214536412763014570 0ustar00import os from pathlib import Path from unittest.mock import MagicMock, patch from isort import exceptions, hooks def test_git_hook(src_dir): """Simple smoke level testing of git hooks""" # Ensure correct subprocess command is called with patch("subprocess.run", MagicMock()) as run_mock: hooks.git_hook() run_mock.assert_called_once() assert run_mock.call_args[0][0] == [ "git", "diff-index", "--cached", "--name-only", "--diff-filter=ACMRTUXB", "HEAD", ] with patch("subprocess.run", MagicMock()) as run_mock: hooks.git_hook(lazy=True) run_mock.assert_called_once() assert run_mock.call_args[0][0] == [ "git", "diff-index", "--name-only", "--diff-filter=ACMRTUXB", "HEAD", ] # Test that non python files aren't processed with patch( "isort.hooks.get_lines", MagicMock(return_value=["README.md", "setup.cfg", "LICDENSE", "mkdocs.yml", "test"]), ): with patch("subprocess.run", MagicMock()) as run_mock: hooks.git_hook(modify=True) run_mock.assert_not_called() mock_main_py = MagicMock(return_value=[os.path.join(src_dir, "main.py")]) mock_imperfect = MagicMock() mock_imperfect.return_value.stdout = b"import b\nimport a" # Test with incorrectly sorted file returned from git with patch("isort.hooks.get_lines", mock_main_py): with patch("subprocess.run", mock_imperfect): with patch("isort.api.sort_file", MagicMock(return_value=False)) as api_mock: hooks.git_hook(modify=True) api_mock.assert_called_once() assert api_mock.call_args[0][0] == mock_main_py.return_value[0] # Test with sorted file returned from git and modify=False with patch("isort.hooks.get_lines", mock_main_py): with patch("subprocess.run", mock_imperfect): with patch("isort.api.sort_file", MagicMock(return_value=False)) as api_mock: hooks.git_hook(modify=False) api_mock.assert_not_called() # Test with skipped file returned from git with patch( "isort.hooks.get_lines", MagicMock(return_value=[os.path.join(src_dir, "main.py")]) ) as run_mock: class FakeProcessResponse(object): stdout = b"# isort: skip-file\nimport b\nimport a\n" with patch("subprocess.run", MagicMock(return_value=FakeProcessResponse())) as run_mock: with patch("isort.api", MagicMock(side_effect=exceptions.FileSkipped("", ""))): hooks.git_hook(modify=True) def test_git_hook_uses_the_configuration_file_specified_in_settings_path(tmp_path: Path) -> None: subdirectory_path = tmp_path / "subdirectory" configuration_file_path = subdirectory_path / ".isort.cfg" # Inserting the modified file in the parent directory of the configuration file ensures that it # will not be found by the normal search routine modified_file_path = configuration_file_path.parent.parent / "somefile.py" # This section will be used to check that the configuration file was indeed loaded section = "testsection" os.mkdir(subdirectory_path) with open(configuration_file_path, "w") as fd: fd.write("[isort]\n") fd.write(f"sections={section}") with open(modified_file_path, "w") as fd: pass files_modified = 
[str(modified_file_path.absolute())] with patch("isort.hooks.get_lines", MagicMock(return_value=files_modified)): with patch("isort.hooks.get_output", MagicMock(return_value="")): with patch("isort.api.check_code_string", MagicMock()) as run_mock: hooks.git_hook(settings_file=str(configuration_file_path)) assert run_mock.call_args[1]["config"].sections == (section,) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6321852 isort-5.13.2/tests/unit/test_identify.py0000644000000000000000000001023014536412763015244 0ustar00from io import StringIO from typing import List from isort import Config, identify from isort.identify import Import def imports_in_code(code: str, **kwargs) -> List[identify.Import]: return list(identify.imports(StringIO(code), **kwargs)) def test_top_only(): imports_in_function = """ import abc def xyz(): import defg """ assert len(imports_in_code(imports_in_function)) == 2 assert len(imports_in_code(imports_in_function, top_only=True)) == 1 imports_after_class = """ import abc class MyObject: pass import defg """ assert len(imports_in_code(imports_after_class)) == 2 assert len(imports_in_code(imports_after_class, top_only=True)) == 1 def test_top_doc_string(): assert ( len( imports_in_code( ''' #! /bin/bash import x """import abc from y import z """ import abc ''' ) ) == 1 ) def test_yield_and_raise_edge_cases(): assert not imports_in_code( """ raise SomeException("Blah") \\ from exceptionsInfo.popitem()[1] """ ) assert not imports_in_code( """ def generator_function(): yield \\ from other_function()[1] """ ) assert ( len( imports_in_code( """ # one # two def function(): # three \\ import b import a """ ) ) == 2 ) assert ( len( imports_in_code( """ # one # two def function(): raise \\ import b import a """ ) ) == 1 ) assert not imports_in_code( """ def generator_function(): ( yield from other_function()[1] ) """ ) assert not imports_in_code( """ def generator_function(): ( ( (((( ((((( (( ((( yield from other_function()[1] ))))))))))))) ))) """ ) assert ( len( imports_in_code( """ def generator_function(): import os yield \\ from other_function()[1] """ ) ) == 1 ) assert not imports_in_code( """ def generator_function(): ( ( (((( ((((( (( ((( yield """ ) assert not imports_in_code( """ def generator_function(): ( ( (((( ((((( (( ((( raise ( """ ) assert not imports_in_code( """ def generator_function(): ( ( (((( ((((( (( ((( raise \\ from \\ """ ) assert ( len( imports_in_code( """ def generator_function(): ( ( (((( ((((( (( ((( raise \\ from \\ import c import abc import xyz """ ) ) == 2 ) def test_complex_examples(): assert ( len( imports_in_code( """ import a, b, c; import n x = ( 1, 2, 3 ) import x from os \\ import path from os ( import path ) from os import \\ path from os \\ import ( path ) from os import ( \\""" ) ) == 9 ) assert not imports_in_code("from os import \\") assert ( imports_in_code( """ from os \\ import ( system""" ) == [ Import( line_number=2, indented=False, module="os", attribute="system", alias=None, cimport=False, file_path=None, ) ] ) def test_aliases(): assert imports_in_code("import os as os")[0].alias == "os" assert not imports_in_code( "import os as os", config=Config( remove_redundant_aliases=True, ), )[0].alias assert imports_in_code("from os import path as path")[0].alias == "path" assert not imports_in_code( "from os import path as path", config=Config(remove_redundant_aliases=True) )[0].alias def test_indented(): assert not imports_in_code("import os")[0].indented assert imports_in_code(" 
import os")[0].indented assert imports_in_code("\timport os")[0].indented ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_importable.py0000644000000000000000000000222214536412763015571 0ustar00"""Basic set of tests to ensure entire code base is importable""" import pytest def test_importable(): """Simple smoketest to ensure all isort modules are importable""" import isort import isort._version import isort.api import isort.comments import isort.deprecated.finders import isort.exceptions import isort.format import isort.hooks import isort.logo import isort.main import isort.output import isort.parse import isort.place import isort.profiles import isort.pylama_isort import isort.sections import isort.settings import isort.setuptools_commands import isort.sorting import isort.stdlibs import isort.stdlibs.all import isort.stdlibs.py2 import isort.stdlibs.py3 import isort.stdlibs.py27 import isort.stdlibs.py36 import isort.stdlibs.py37 import isort.stdlibs.py38 import isort.stdlibs.py39 import isort.stdlibs.py310 import isort.stdlibs.py311 import isort.stdlibs.py312 import isort.utils import isort.wrap import isort.wrap_modes with pytest.raises(SystemExit): import isort.__main__ # noqa: F401 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_io.py0000644000000000000000000000263314536412763014050 0ustar00import sys from unittest.mock import patch import pytest from isort import io class TestFile: @pytest.mark.skipif(sys.platform == "win32", reason="Can't run file encoding test in AppVeyor") def test_read(self, tmpdir): test_file_content = """# -*- encoding: ascii -*- import แฝฉ """ test_file = tmpdir.join("file.py") test_file.write(test_file_content) with pytest.raises(Exception): with io.File.read(str(test_file)) as file_handler: file_handler.stream.read() def test_from_content(self, tmpdir): test_file = tmpdir.join("file.py") test_file.write_text("import os", "utf8") file_obj = io.File.from_contents("import os", filename=str(test_file)) assert file_obj assert file_obj.extension == "py" def test_open(self, tmpdir): with pytest.raises(Exception): io.File._open("THISCANTBEAREALFILEแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉแฝฉ.แฝฉแฝฉแฝฉแฝฉแฝฉ") def raise_arbitrary_exception(*args, **kwargs): raise RuntimeError("test") test_file = tmpdir.join("file.py") test_file.write("import os") assert io.File._open(str(test_file)) # correctly responds to error determining encoding with patch("tokenize.detect_encoding", raise_arbitrary_exception): with pytest.raises(Exception): io.File._open(str(test_file)) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_isort.py0000644000000000000000000055125114536412763014606 0ustar00"""Tests all major functionality of the isort library Should be ran using py.test by simply running py.test in the isort project directory """ import os import os.path import subprocess import sys from io import StringIO from pathlib import Path from tempfile import NamedTemporaryFile from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Set, Tuple import py import pytest import isort from isort import api, files, sections from isort.exceptions import ExistingSyntaxErrors, FileSkipped, MissingSection from isort.settings import Config from isort.utils import exists_case_sensitive from .utils import UnreadableStream, as_stream if TYPE_CHECKING: WrapModes: Any else: 
from isort.wrap_modes import WrapModes TEST_DEFAULT_CONFIG = """ [*.{py,pyi}] max_line_length = 120 indent_style = space indent_size = 4 known_first_party = isort known_third_party = kate known_something_else = something_entirely_different sections = FUTURE, STDLIB, THIRDPARTY, FIRSTPARTY, LOCALFOLDER, SOMETHING_ELSE ignore_frosted_errors = E103 skip = build,.tox,venv balanced_wrapping = true """ SHORT_IMPORT = "from third_party import lib1, lib2, lib3, lib4" SINGLE_FROM_IMPORT = "from third_party import lib1" SINGLE_LINE_LONG_IMPORT = "from third_party import lib1, lib2, lib3, lib4, lib5, lib5ab" REALLY_LONG_IMPORT = ( "from third_party import lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, lib10, lib11," "lib12, lib13, lib14, lib15, lib16, lib17, lib18, lib20, lib21, lib22" ) REALLY_LONG_IMPORT_WITH_COMMENT = ( "from third_party import lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, " "lib10, lib11, lib12, lib13, lib14, lib15, lib16, lib17, lib18, lib20, lib21, lib22" " # comment" ) @pytest.fixture(scope="session", autouse=True) def default_settings_path(tmpdir_factory) -> Iterator[str]: config_dir = tmpdir_factory.mktemp("config") config_file = config_dir.join(".editorconfig").strpath with open(config_file, "w") as editorconfig: editorconfig.write(TEST_DEFAULT_CONFIG) assert Config(config_file).known_other with config_dir.as_cwd(): yield config_dir.strpath def test_happy_path() -> None: """Test the most basic use case, straight imports no code, simply not organized by category.""" test_input = "import sys\nimport os\nimport myproject.test\nimport django.settings" test_output = isort.code(test_input, known_first_party=["myproject"]) assert test_output == ( "import os\n" "import sys\n" "\n" "import django.settings\n" "\n" "import myproject.test\n" ) def test_code_intermixed() -> None: """Defines what should happen when isort encounters imports intermixed with code. (it should pull them all to the top) """ test_input = ( "import sys\n" "print('yo')\n" "print('I like to put code between imports cause I want stuff to break')\n" "import myproject.test\n" ) test_output = isort.code(test_input) assert test_output == ( "import sys\n" "\n" "print('yo')\n" "print('I like to put code between imports cause I want stuff to break')\n" "import myproject.test\n" ) def test_correct_space_between_imports() -> None: """Ensure after imports a correct amount of space (in newlines) is enforced. 
(2 for method, class, or decorator definitions 1 for anything else) """ test_input_method = "import sys\ndef my_method():\n print('hello world')\n" test_output_method = isort.code(test_input_method) assert test_output_method == ("import sys\n\n\ndef my_method():\n print('hello world')\n") test_input_decorator = ( "import sys\n" "@my_decorator\n" "def my_method():\n" " print('hello world')\n" ) test_output_decorator = isort.code(test_input_decorator) assert test_output_decorator == ( "import sys\n" "\n" "\n" "@my_decorator\n" "def my_method():\n" " print('hello world')\n" ) test_input_class = "import sys\nclass MyClass(object):\n pass\n" test_output_class = isort.code(test_input_class) assert test_output_class == "import sys\n\n\nclass MyClass(object):\n pass\n" test_input_other = "import sys\nprint('yo')\n" test_output_other = isort.code(test_input_other) assert test_output_other == "import sys\n\nprint('yo')\n" test_input_inquotes = ( "import sys\n" "@my_decorator('''hello\nworld''')\n" "def my_method():\n" " print('hello world')\n" ) test_output_inquotes = api.sort_code_string(test_input_inquotes) assert ( test_output_inquotes == "import sys\n" "\n\n" "@my_decorator('''hello\nworld''')\n" "def my_method():\n" " print('hello world')\n" ) test_input_assign = "import sys\nVAR = 1\n" test_output_assign = api.sort_code_string(test_input_assign) assert test_output_assign == "import sys\n\nVAR = 1\n" test_input_assign = "import sys\nVAR = 1\ndef y():\n" test_output_assign = api.sort_code_string(test_input_assign) assert test_output_assign == "import sys\n\nVAR = 1\ndef y():\n" test_input = """ import os x = "hi" def x(): pass """ assert isort.code(test_input) == test_input def test_sort_on_number() -> None: """Ensure numbers get sorted logically (10 > 9 not the other way around)""" test_input = "import lib10\nimport lib9\n" test_output = isort.code(test_input) assert test_output == "import lib9\nimport lib10\n" def test_line_length() -> None: """Ensure isort enforces the set line_length.""" assert len(isort.code(REALLY_LONG_IMPORT, line_length=80).split("\n")[0]) <= 80 assert len(isort.code(REALLY_LONG_IMPORT, line_length=120).split("\n")[0]) <= 120 test_output = isort.code(REALLY_LONG_IMPORT, line_length=42) assert test_output == ( "from third_party import (lib1, lib2, lib3,\n" " lib4, lib5, lib6,\n" " lib7, lib8, lib9,\n" " lib10, lib11,\n" " lib12, lib13,\n" " lib14, lib15,\n" " lib16, lib17,\n" " lib18, lib20,\n" " lib21, lib22)\n" ) test_input = ( "from django.contrib.gis.gdal.field import (\n" " OFTDate, OFTDateTime, OFTInteger, OFTInteger64, OFTReal, OFTString,\n" " OFTTime,\n" ")\n" ) # Test case described in issue #654 assert ( isort.code( code=test_input, include_trailing_comma=True, line_length=79, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, balanced_wrapping=False, ) == test_input ) test_output = isort.code(code=REALLY_LONG_IMPORT, line_length=42, wrap_length=32) assert test_output == ( "from third_party import (lib1,\n" " lib2,\n" " lib3,\n" " lib4,\n" " lib5,\n" " lib6,\n" " lib7,\n" " lib8,\n" " lib9,\n" " lib10,\n" " lib11,\n" " lib12,\n" " lib13,\n" " lib14,\n" " lib15,\n" " lib16,\n" " lib17,\n" " lib18,\n" " lib20,\n" " lib21,\n" " lib22)\n" ) test_input = ( "from .test import a_very_long_function_name_that_exceeds_the_normal_pep8_line_length\n" ) with pytest.raises(ValueError): test_output = isort.code(code=REALLY_LONG_IMPORT, line_length=80, wrap_length=99) assert ( isort.code(REALLY_LONG_IMPORT, line_length=100, wrap_length=99) == """ from third_party import (lib1, 
lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, lib10, lib11, lib12, lib13, lib14, lib15, lib16, lib17, lib18, lib20, lib21, lib22) """.lstrip() ) # Test Case described in issue #1015 test_output = isort.code( REALLY_LONG_IMPORT, line_length=25, multi_line_output=WrapModes.HANGING_INDENT ) assert test_output == ( "from third_party import \\\n" " lib1, lib2, lib3, \\\n" " lib4, lib5, lib6, \\\n" " lib7, lib8, lib9, \\\n" " lib10, lib11, \\\n" " lib12, lib13, \\\n" " lib14, lib15, \\\n" " lib16, lib17, \\\n" " lib18, lib20, \\\n" " lib21, lib22\n" ) def test_output_modes() -> None: """Test setting isort to use various output modes works as expected""" test_output_grid = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.GRID, line_length=40 ) assert test_output_grid == ( "from third_party import (lib1, lib2,\n" " lib3, lib4,\n" " lib5, lib6,\n" " lib7, lib8,\n" " lib9, lib10,\n" " lib11, lib12,\n" " lib13, lib14,\n" " lib15, lib16,\n" " lib17, lib18,\n" " lib20, lib21,\n" " lib22)\n" ) test_output_vertical = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL, line_length=40 ) assert test_output_vertical == ( "from third_party import (lib1,\n" " lib2,\n" " lib3,\n" " lib4,\n" " lib5,\n" " lib6,\n" " lib7,\n" " lib8,\n" " lib9,\n" " lib10,\n" " lib11,\n" " lib12,\n" " lib13,\n" " lib14,\n" " lib15,\n" " lib16,\n" " lib17,\n" " lib18,\n" " lib20,\n" " lib21,\n" " lib22)\n" ) comment_output_vertical = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.VERTICAL, line_length=40 ) assert comment_output_vertical == ( "from third_party import (lib1, # comment\n" " lib2,\n" " lib3,\n" " lib4,\n" " lib5,\n" " lib6,\n" " lib7,\n" " lib8,\n" " lib9,\n" " lib10,\n" " lib11,\n" " lib12,\n" " lib13,\n" " lib14,\n" " lib15,\n" " lib16,\n" " lib17,\n" " lib18,\n" " lib20,\n" " lib21,\n" " lib22)\n" ) test_output_hanging_indent = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent=" ", ) assert test_output_hanging_indent == ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, \\\n" " lib8, lib9, lib10, lib11, lib12, \\\n" " lib13, lib14, lib15, lib16, lib17, \\\n" " lib18, lib20, lib21, lib22\n" ) comment_output_hanging_indent = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent=" ", ) assert comment_output_hanging_indent == ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, \\\n" " lib8, lib9, lib10, lib11, lib12, \\\n" " lib13, lib14, lib15, lib16, lib17, \\\n" " lib18, lib20, lib21, lib22 \\\n" " # comment\n" ) test_output_vertical_indent = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=40, indent=" ", ) assert test_output_vertical_indent == ( "from third_party import (\n" " lib1,\n" " lib2,\n" " lib3,\n" " lib4,\n" " lib5,\n" " lib6,\n" " lib7,\n" " lib8,\n" " lib9,\n" " lib10,\n" " lib11,\n" " lib12,\n" " lib13,\n" " lib14,\n" " lib15,\n" " lib16,\n" " lib17,\n" " lib18,\n" " lib20,\n" " lib21,\n" " lib22\n" ")\n" ) comment_output_vertical_indent = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=40, indent=" ", ) assert comment_output_vertical_indent == ( "from third_party import ( # comment\n" " lib1,\n" " lib2,\n" " lib3,\n" " lib4,\n" " lib5,\n" " lib6,\n" " lib7,\n" " lib8,\n" " lib9,\n" " lib10,\n" " lib11,\n" " lib12,\n" " lib13,\n" " lib14,\n" " 
lib15,\n" " lib16,\n" " lib17,\n" " lib18,\n" " lib20,\n" " lib21,\n" " lib22\n" ")\n" ) test_output_vertical_grid = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL_GRID, line_length=40, indent=" ", ) assert test_output_vertical_grid == ( "from third_party import (\n" " lib1, lib2, lib3, lib4, lib5, lib6,\n" " lib7, lib8, lib9, lib10, lib11,\n" " lib12, lib13, lib14, lib15, lib16,\n" " lib17, lib18, lib20, lib21, lib22)\n" ) comment_output_vertical_grid = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.VERTICAL_GRID, line_length=40, indent=" ", ) assert comment_output_vertical_grid == ( "from third_party import ( # comment\n" " lib1, lib2, lib3, lib4, lib5, lib6,\n" " lib7, lib8, lib9, lib10, lib11,\n" " lib12, lib13, lib14, lib15, lib16,\n" " lib17, lib18, lib20, lib21, lib22)\n" ) test_output_vertical_grid_grouped = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, line_length=40, indent=" ", ) assert test_output_vertical_grid_grouped == ( "from third_party import (\n" " lib1, lib2, lib3, lib4, lib5, lib6,\n" " lib7, lib8, lib9, lib10, lib11,\n" " lib12, lib13, lib14, lib15, lib16,\n" " lib17, lib18, lib20, lib21, lib22\n" ")\n" ) comment_output_vertical_grid_grouped = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, line_length=40, indent=" ", ) assert comment_output_vertical_grid_grouped == ( "from third_party import ( # comment\n" " lib1, lib2, lib3, lib4, lib5, lib6,\n" " lib7, lib8, lib9, lib10, lib11,\n" " lib12, lib13, lib14, lib15, lib16,\n" " lib17, lib18, lib20, lib21, lib22\n" ")\n" ) output_noqa = isort.code(code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.NOQA) assert output_noqa == ( "from third_party import lib1, lib2, lib3, lib4, lib5, lib6, lib7," " lib8, lib9, lib10, lib11," " lib12, lib13, lib14, lib15, lib16, lib17, lib18, lib20, lib21, lib22 " "# NOQA comment\n" ) test_case = isort.code( code=SINGLE_LINE_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, line_length=40, indent=" ", ) test_output_vertical_grid_grouped_doesnt_wrap_early = test_case assert test_output_vertical_grid_grouped_doesnt_wrap_early == ( "from third_party import (\n lib1, lib2, lib3, lib4, lib5, lib5ab\n)\n" ) test_output_prefix_from_module = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.VERTICAL_PREFIX_FROM_MODULE_IMPORT, line_length=40, ) assert test_output_prefix_from_module == ( "from third_party import lib1, lib2\n" "from third_party import lib3, lib4\n" "from third_party import lib5, lib6\n" "from third_party import lib7, lib8\n" "from third_party import lib9, lib10\n" "from third_party import lib11, lib12\n" "from third_party import lib13, lib14\n" "from third_party import lib15, lib16\n" "from third_party import lib17, lib18\n" "from third_party import lib20, lib21\n" "from third_party import lib22\n" ) test_output_prefix_from_module_with_comment = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.VERTICAL_PREFIX_FROM_MODULE_IMPORT, line_length=40, indent=" ", ) assert test_output_prefix_from_module_with_comment == ( "from third_party import lib1 # comment\n" "from third_party import lib2, lib3\n" "from third_party import lib4, lib5\n" "from third_party import lib6, lib7\n" "from third_party import lib8, lib9\n" "from third_party import lib10, lib11\n" "from third_party import lib12, lib13\n" "from third_party import lib14, lib15\n" "from third_party import lib16, 
lib17\n" "from third_party import lib18, lib20\n" "from third_party import lib21, lib22\n" ) test_output_hanging_indent_with_parentheses = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT_WITH_PARENTHESES, line_length=40, indent=" ", ) assert test_output_hanging_indent_with_parentheses == ( "from third_party import (lib1, lib2,\n" " lib3, lib4, lib5, lib6, lib7, lib8,\n" " lib9, lib10, lib11, lib12, lib13,\n" " lib14, lib15, lib16, lib17, lib18,\n" " lib20, lib21, lib22)\n" ) comment_output_hanging_indent_with_parentheses = isort.code( code=REALLY_LONG_IMPORT_WITH_COMMENT, multi_line_output=WrapModes.HANGING_INDENT_WITH_PARENTHESES, line_length=40, indent=" ", ) assert comment_output_hanging_indent_with_parentheses == ( "from third_party import (lib1, # comment\n" " lib2, lib3, lib4, lib5, lib6, lib7,\n" " lib8, lib9, lib10, lib11, lib12,\n" " lib13, lib14, lib15, lib16, lib17,\n" " lib18, lib20, lib21, lib22)\n" ) test_input = ( "def a():\n" " from allennlp.modules.text_field_embedders.basic_text_field_embedder" " import BasicTextFieldEmbedder" ) test_output = isort.code(test_input, line_length=100) assert test_output == ( "def a():\n" " from allennlp.modules.text_field_embedders.basic_text_field_embedder import \\\n" " BasicTextFieldEmbedder" ) test_input = ( "class A:\n" " def a():\n" " from allennlp.common.registrable import Registrable" " # import here to avoid circular imports\n" "\n\n" "class B:\n" " def b():\n" " from allennlp.common.registrable import Registrable" " # import here to avoid circular imports\n" ) test_output = isort.code(test_input, line_length=100) assert test_output == test_input def test_qa_comment_case() -> None: test_input = "from veryveryveryveryveryveryveryveryveryveryvery import X # NOQA" test_output = isort.code(code=test_input, line_length=40, multi_line_output=WrapModes.NOQA) assert test_output == "from veryveryveryveryveryveryveryveryveryveryvery import X # NOQA\n" test_input = "import veryveryveryveryveryveryveryveryveryveryvery # NOQA" test_output = isort.code(code=test_input, line_length=40, multi_line_output=WrapModes.NOQA) assert test_output == "import veryveryveryveryveryveryveryveryveryveryvery # NOQA\n" def test_length_sort() -> None: """Test setting isort to sort on length instead of alphabetically.""" test_input = ( "import medium_sizeeeeeeeeeeeeee\n" "import shortie\n" "import looooooooooooooooooooooooooooooooooooooong\n" "import medium_sizeeeeeeeeeeeeea\n" ) test_output = isort.code(test_input, length_sort=True) assert test_output == ( "import shortie\n" "import medium_sizeeeeeeeeeeeeea\n" "import medium_sizeeeeeeeeeeeeee\n" "import looooooooooooooooooooooooooooooooooooooong\n" ) def test_length_sort_straight() -> None: """Test setting isort to sort straight imports on length instead of alphabetically.""" test_input = ( "import medium_sizeeeeeeeeeeeeee\n" "import shortie\n" "import looooooooooooooooooooooooooooooooooooooong\n" "from medium_sizeeeeeeeeeeeeee import b\n" "from shortie import c\n" "from looooooooooooooooooooooooooooooooooooooong import a\n" ) test_output = isort.code(test_input, length_sort_straight=True) assert test_output == ( "import shortie\n" "import medium_sizeeeeeeeeeeeeee\n" "import looooooooooooooooooooooooooooooooooooooong\n" "from looooooooooooooooooooooooooooooooooooooong import a\n" "from medium_sizeeeeeeeeeeeeee import b\n" "from shortie import c\n" ) def test_length_sort_section() -> None: """Test setting isort to sort on length instead of alphabetically for a specific section.""" 
test_input = ( "import medium_sizeeeeeeeeeeeeee\n" "import shortie\n" "import datetime\n" "import sys\n" "import os\n" "import looooooooooooooooooooooooooooooooooooooong\n" "import medium_sizeeeeeeeeeeeeea\n" ) test_output = isort.code(test_input, length_sort_sections=("stdlib",)) assert test_output == ( "import os\n" "import sys\n" "import datetime\n" "\n" "import looooooooooooooooooooooooooooooooooooooong\n" "import medium_sizeeeeeeeeeeeeea\n" "import medium_sizeeeeeeeeeeeeee\n" "import shortie\n" ) def test_convert_hanging() -> None: """Ensure that isort will convert hanging indents to correct indent method.""" test_input = ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, \\\n" " lib8, lib9, lib10, lib11, lib12, \\\n" " lib13, lib14, lib15, lib16, lib17, \\\n" " lib18, lib20, lib21, lib22\n" ) test_output = isort.code(code=test_input, multi_line_output=WrapModes.GRID, line_length=40) assert test_output == ( "from third_party import (lib1, lib2,\n" " lib3, lib4,\n" " lib5, lib6,\n" " lib7, lib8,\n" " lib9, lib10,\n" " lib11, lib12,\n" " lib13, lib14,\n" " lib15, lib16,\n" " lib17, lib18,\n" " lib20, lib21,\n" " lib22)\n" ) def test_custom_indent() -> None: """Ensure setting a custom indent will work as expected.""" test_output = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent=" ", balanced_wrapping=False, ) assert test_output == ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, lib8, \\\n" " lib9, lib10, lib11, lib12, lib13, \\\n" " lib14, lib15, lib16, lib17, lib18, \\\n" " lib20, lib21, lib22\n" ) test_output = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent="' '", balanced_wrapping=False, ) assert test_output == ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, lib8, \\\n" " lib9, lib10, lib11, lib12, lib13, \\\n" " lib14, lib15, lib16, lib17, lib18, \\\n" " lib20, lib21, lib22\n" ) test_output = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent="tab", balanced_wrapping=False, ) assert test_output == ( "from third_party import lib1, lib2, \\\n" "\tlib3, lib4, lib5, lib6, lib7, lib8, \\\n" "\tlib9, lib10, lib11, lib12, lib13, \\\n" "\tlib14, lib15, lib16, lib17, lib18, \\\n" "\tlib20, lib21, lib22\n" ) test_output = isort.code( code=REALLY_LONG_IMPORT, multi_line_output=WrapModes.HANGING_INDENT, line_length=40, indent=2, balanced_wrapping=False, ) assert test_output == ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, lib8, \\\n" " lib9, lib10, lib11, lib12, lib13, \\\n" " lib14, lib15, lib16, lib17, lib18, \\\n" " lib20, lib21, lib22\n" ) def test_use_parentheses() -> None: test_input = ( "from fooooooooooooooooooooooooo.baaaaaaaaaaaaaaaaaaarrrrrrr import " " my_custom_function as my_special_function" ) test_output = isort.code(test_input, line_length=79, use_parentheses=True) assert test_output == ( "from fooooooooooooooooooooooooo.baaaaaaaaaaaaaaaaaaarrrrrrr import (\n" " my_custom_function as my_special_function)\n" ) test_output = isort.code( code=test_input, line_length=79, use_parentheses=True, include_trailing_comma=True ) assert test_output == ( "from fooooooooooooooooooooooooo.baaaaaaaaaaaaaaaaaaarrrrrrr import (\n" " my_custom_function as my_special_function,)\n" ) test_output = isort.code( code=test_input, line_length=79, use_parentheses=True, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, ) assert 
test_output == ( "from fooooooooooooooooooooooooo.baaaaaaaaaaaaaaaaaaarrrrrrr import (\n" " my_custom_function as my_special_function\n)\n" ) test_output = isort.code( code=test_input, line_length=79, use_parentheses=True, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, include_trailing_comma=True, ) assert test_output == ( "from fooooooooooooooooooooooooo.baaaaaaaaaaaaaaaaaaarrrrrrr import (\n" " my_custom_function as my_special_function,\n)\n" ) def test_skip() -> None: """Ensure skipping a single import will work as expected.""" test_input = ( "import myproject\n" "import django\n" "print('hey')\n" "import sys # isort: skip this import needs to be placed here\n\n\n\n\n\n\n" ) test_output = isort.code(test_input, known_first_party=["myproject"]) assert test_output == ( "import django\n" "\n" "import myproject\n" "\n" "print('hey')\n" "import sys # isort: skip this import needs to be placed here\n" ) def test_skip_with_file_name() -> None: """Ensure skipping a file works even when file_contents is provided.""" test_input = "import django\nimport myproject\n" with pytest.raises(FileSkipped): isort.code( file_path=Path("/baz.py"), code=test_input, settings_path=os.getcwd(), skip=["baz.py"] ) def test_skip_within_file() -> None: """Ensure skipping a whole file works.""" test_input = "# isort: skip_file\nimport django\nimport myproject\n" with pytest.raises(FileSkipped): isort.code(test_input, known_third_party=["django"]) def test_force_to_top() -> None: """Ensure forcing a single import to the top of its category works as expected.""" test_input = "import lib6\nimport lib2\nimport lib5\nimport lib1\n" test_output = isort.code(test_input, force_to_top=["lib5"]) assert test_output == "import lib5\nimport lib1\nimport lib2\nimport lib6\n" def test_add_imports() -> None: """Ensures adding imports works as expected.""" test_input = "import lib6\nimport lib2\nimport lib5\nimport lib1\n\n" test_output = isort.code(code=test_input, add_imports=["import lib4", "import lib7"]) assert test_output == ( "import lib1\n" "import lib2\n" "import lib4\n" "import lib5\n" "import lib6\n" "import lib7\n" ) # Using simplified syntax test_input = "import lib6\nimport lib2\nimport lib5\nimport lib1\n\n" test_output = isort.code(code=test_input, add_imports=["lib4", "lib7", "lib8.a"]) assert test_output == ( "import lib1\n" "import lib2\n" "import lib4\n" "import lib5\n" "import lib6\n" "import lib7\n" "from lib8 import a\n" ) # On a file that has no pre-existing imports test_input = '"""Module docstring"""\n' "class MyClass(object):\n pass\n" test_output = isort.code(code=test_input, add_imports=["from __future__ import print_function"]) assert test_output == ( '"""Module docstring"""\n' "from __future__ import print_function\n" "\n" "\n" "class MyClass(object):\n" " pass\n" ) # On a file that has no pre-existing imports and a multiline docstring test_input = ( '"""Module docstring\n\nWith a second line\n"""\n' "class MyClass(object):\n pass\n" ) test_output = isort.code(code=test_input, add_imports=["from __future__ import print_function"]) assert test_output == ( '"""Module docstring\n' "\n" "With a second line\n" '"""\n' "from __future__ import print_function\n" "\n" "\n" "class MyClass(object):\n" " pass\n" ) # On a file that has no pre-existing imports and a multiline docstring. # In this example, the closing quotes for the docstring are on the final # line rather than a separate one. 
test_input = ( '"""Module docstring\n\nWith a second line"""\n' "class MyClass(object):\n pass\n" ) test_output = isort.code(code=test_input, add_imports=["from __future__ import print_function"]) assert test_output == ( '"""Module docstring\n' "\n" 'With a second line"""\n' "from __future__ import print_function\n" "\n" "\n" "class MyClass(object):\n" " pass\n" ) # On a file that has no pre-existing imports, and no doc-string test_input = "class MyClass(object):\n pass\n" test_output = isort.code(code=test_input, add_imports=["from __future__ import print_function"]) assert test_output == ( "from __future__ import print_function\n" "\n" "\n" "class MyClass(object):\n" " pass\n" ) # On a file with no content what so ever test_input = "" test_output = isort.code(test_input, add_imports=["lib4"]) assert test_output == ("") # On a file with no content what so ever, after force_adds is set to True test_input = "" test_output = isort.code(code=test_input, add_imports=["lib4"], force_adds=True) assert test_output == ("import lib4\n") def test_remove_imports() -> None: """Ensures removing imports works as expected.""" test_input = "import lib6\nimport lib2\nimport lib5\nimport lib1" test_output = isort.code(test_input, remove_imports=["lib2", "lib6"]) assert test_output == "import lib1\nimport lib5\n" # Using natural syntax test_input = ( "import lib6\n" "import lib2\n" "import lib5\n" "import lib1\n" "from lib8 import a" ) test_output = isort.code( code=test_input, remove_imports=["import lib2", "import lib6", "from lib8 import a"] ) assert test_output == "import lib1\nimport lib5\n" # From imports test_input = "from x import y" test_output = isort.code(test_input, remove_imports=["x"]) assert test_output == "" test_input = "from x import y" test_output = isort.code(test_input, remove_imports=["x.y"]) assert test_output == "" def test_comments_above(): """Test to ensure comments above an import will stay in place""" test_input = "import os\n\nfrom x import y\n\n# comment\nfrom z import __version__, api\n" assert isort.code(test_input, ensure_newline_before_comments=True) == test_input def test_explicitly_local_import() -> None: """Ensure that explicitly local imports are separated.""" test_input = "import lib1\nimport lib2\nimport .lib6\nfrom . import lib7" assert isort.code(test_input) == ( "import lib1\nimport lib2\n\nimport .lib6\nfrom . import lib7\n" ) assert isort.code(test_input, old_finders=True) == ( "import lib1\nimport lib2\n\nimport .lib6\nfrom . 
import lib7\n" ) def test_quotes_in_file() -> None: """Ensure imports within triple quotes don't get imported.""" test_input = "import os\n\n" '"""\n' "Let us\nimport foo\nokay?\n" '"""\n' assert isort.code(test_input) == test_input test_input = "import os\n\n" '\'"""\'\n' "import foo\n" assert isort.code(test_input) == test_input test_input = "import os\n\n" '"""Let us"""\n' "import foo\n\n" '"""okay?"""\n' assert isort.code(test_input) == test_input test_input = "import os\n\n" '#"""\n' "import foo\n" '#"""' assert isort.code(test_input) == ('import os\n\nimport foo\n\n#"""\n#"""\n') test_input = "import os\n\n'\\\nimport foo'\n" assert isort.code(test_input) == test_input test_input = "import os\n\n'''\n\\'''\nimport junk\n'''\n" assert isort.code(test_input) == test_input def test_check_newline_in_imports(capsys) -> None: """Ensure tests works correctly when new lines are in imports.""" test_input = "from lib1 import (\n sub1,\n sub2,\n sub3\n)\n" assert api.check_code_string( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20, verbose=True, ) out, _ = capsys.readouterr() assert "SUCCESS" in out # if the verbose is only on modified outputs no output will be given assert api.check_code_string( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20, verbose=True, only_modified=True, ) out, _ = capsys.readouterr() assert not out # we can make the input invalid to again see output test_input = "from lib1 import (\n sub2,\n sub1,\n sub3\n)\n" assert not api.check_code_string( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20, verbose=True, only_modified=True, ) out, _ = capsys.readouterr() assert out def test_forced_separate() -> None: """Ensure that forcing certain sub modules to show separately works as expected.""" test_input = ( "import sys\n" "import warnings\n" "from collections import OrderedDict\n" "\n" "from django.core.exceptions import ImproperlyConfigured, SuspiciousOperation\n" "from django.core.paginator import InvalidPage\n" "from django.core.urlresolvers import reverse\n" "from django.db import models\n" "from django.db.models.fields import FieldDoesNotExist\n" "from django.utils import six\n" "\n" "from django.utils.deprecation import RenameMethodsBase\n" "from django.utils.encoding import force_str, force_text\n" "from django.utils.http import urlencode\n" "from django.utils.translation import ugettext, ugettext_lazy\n" "\n" "from django.contrib.admin import FieldListFilter\n" "from django.contrib.admin.exceptions import DisallowedModelAdminLookup\n" "from django.contrib.admin.options import IncorrectLookupParameters, IS_POPUP_VAR, " "TO_FIELD_VAR\n" ) assert ( isort.code( code=test_input, forced_separate=["django.utils.*", "django.contrib"], known_third_party=["django"], line_length=120, order_by_type=False, ) == test_input ) assert ( isort.code( code=test_input, forced_separate=["django.utils.*", "django.contrib"], known_third_party=["django"], line_length=120, order_by_type=False, old_finders=True, ) == test_input ) test_input = "from .foo import bar\n\nfrom .y import ca\n" assert ( isort.code(code=test_input, forced_separate=[".y"], line_length=120, order_by_type=False) == test_input ) assert ( isort.code( code=test_input, forced_separate=[".y"], line_length=120, order_by_type=False, old_finders=True, ) == test_input ) def test_default_section() -> None: """Test to ensure changing the default section works as expected.""" test_input = "import sys\nimport os\nimport 
myproject.test\nimport django.settings" test_output = isort.code( code=test_input, known_third_party=["django"], default_section="FIRSTPARTY" ) assert test_output == ( "import os\n" "import sys\n" "\n" "import django.settings\n" "\n" "import myproject.test\n" ) test_output_custom = isort.code( code=test_input, known_third_party=["django"], default_section="STDLIB" ) assert test_output_custom == ( "import myproject.test\n" "import os\n" "import sys\n" "\n" "import django.settings\n" ) def test_first_party_overrides_standard_section() -> None: """Test to ensure changing the default section works as expected.""" test_input = ( "from HTMLParser import HTMLParseError, HTMLParser\n" "import sys\n" "import os\n" "import profile.test\n" ) test_output = isort.code(code=test_input, known_first_party=["profile"], py_version="27") assert test_output == ( "import os\n" "import sys\n" "from HTMLParser import HTMLParseError, HTMLParser\n" "\n" "import profile.test\n" ) def test_thirdy_party_overrides_standard_section() -> None: """Test to ensure changing the default section works as expected.""" test_input = "import sys\nimport os\nimport profile.test\n" test_output = isort.code(test_input, known_third_party=["profile"]) assert test_output == "import os\nimport sys\n\nimport profile.test\n" def test_known_pattern_path_expansion(tmpdir) -> None: """Test to ensure patterns ending with path sep gets expanded and nested packages treated as known patterns. """ src_dir = tmpdir.mkdir("src") src_dir.mkdir("foo") src_dir.mkdir("bar") test_input = ( "from kate_plugin import isort_plugin\n" "import sys\n" "from foo import settings\n" "import bar\n" "import this\n" "import os\n" ) test_output = isort.code( code=test_input, default_section="THIRDPARTY", known_first_party=["src/", "this", "kate_plugin"], directory=str(tmpdir), ) test_output_old_finder = isort.code( code=test_input, default_section="FIRSTPARTY", old_finders=True, known_first_party=["src/", "this", "kate_plugin"], directory=str(tmpdir), ) assert ( test_output_old_finder == test_output == ( "import os\n" "import sys\n" "\n" "import bar\n" "import this\n" "from foo import settings\n" "from kate_plugin import isort_plugin\n" ) ) def test_force_single_line_imports() -> None: """Test to ensure forcing imports to each have their own line works as expected.""" test_input = ( "from third_party import lib1, lib2, \\\n" " lib3, lib4, lib5, lib6, lib7, \\\n" " lib8, lib9, lib10, lib11, lib12, \\\n" " lib13, lib14, lib15, lib16, lib17, \\\n" " lib18, lib20, lib21, lib22\n" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.GRID, line_length=40, force_single_line=True ) assert test_output == ( "from third_party import lib1\n" "from third_party import lib2\n" "from third_party import lib3\n" "from third_party import lib4\n" "from third_party import lib5\n" "from third_party import lib6\n" "from third_party import lib7\n" "from third_party import lib8\n" "from third_party import lib9\n" "from third_party import lib10\n" "from third_party import lib11\n" "from third_party import lib12\n" "from third_party import lib13\n" "from third_party import lib14\n" "from third_party import lib15\n" "from third_party import lib16\n" "from third_party import lib17\n" "from third_party import lib18\n" "from third_party import lib20\n" "from third_party import lib21\n" "from third_party import lib22\n" ) test_input = ( "from third_party import lib_a, lib_b, lib_d\n" "from third_party.lib_c import lib1\n" ) test_output = isort.code( code=test_input, 
multi_line_output=WrapModes.GRID, line_length=40, force_single_line=True ) assert test_output == ( "from third_party import lib_a\n" "from third_party import lib_b\n" "from third_party import lib_d\n" "from third_party.lib_c import lib1\n" ) def test_force_single_line_long_imports() -> None: test_input = "from veryveryveryveryveryvery import small, big\n" test_output = isort.code( code=test_input, multi_line_output=WrapModes.NOQA, line_length=40, force_single_line=True ) assert test_output == ( "from veryveryveryveryveryvery import big\n" "from veryveryveryveryveryvery import small # NOQA\n" ) def test_force_single_line_imports_and_sort_within_sections() -> None: test_input = ( "from third_party import lib_a, lib_b, lib_d\n" "from third_party.lib_c import lib1\n" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.GRID, line_length=40, force_single_line=True, force_sort_within_sections=True, ) assert test_output == ( "from third_party import lib_a\n" "from third_party import lib_b\n" "from third_party import lib_d\n" "from third_party.lib_c import lib1\n" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.GRID, line_length=40, force_single_line=True, force_sort_within_sections=True, lexicographical=True, ) assert test_output == ( "from third_party import lib_a\n" "from third_party import lib_b\n" "from third_party.lib_c import lib1\n" "from third_party import lib_d\n" ) test_input = """import sympy import numpy as np import pandas as pd from matplotlib import pyplot as plt """ assert ( isort.code(code=test_input, force_sort_within_sections=True, length_sort=True) == test_input ) def test_titled_imports() -> None: """Tests setting custom titled/commented import sections.""" test_input = ( "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "import django.settings" ) test_output = isort.code( code=test_input, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", ) assert test_output == ( "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" ) test_second_run = isort.code( code=test_output, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", ) assert test_second_run == test_output test_input_lines_down = ( "# comment 1\n" "import django.settings\n" "\n" "# Standard Library\n" "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" ) test_output_lines_down = isort.code( code=test_input_lines_down, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", ) assert test_output_lines_down == ( "# comment 1\n" "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" ) def test_footered_imports() -> None: """Tests setting both custom titles and footers to import sections.""" test_input = ( "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "import django.settings" ) test_output = isort.code( code=test_input, known_first_party=["myproject"], import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_output == ( "import os\n" "import statistics\n" "import sys\n" "import 
unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) test_second_run = isort.code( code=test_output, known_first_party=["myproject"], import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_second_run == test_output test_input_lines_down = ( "# comment 1\n" "import django.settings\n" "\n" "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "\n" "# Standard Library End\n" ) test_output_lines_down = isort.code( code=test_input_lines_down, known_first_party=["myproject"], import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_output_lines_down == ( "# comment 1\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) def test_titled_and_footered_imports() -> None: """Tests setting custom footers to import sections.""" test_input = ( "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "import django.settings" ) test_output = isort.code( code=test_input, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_output == ( "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) test_second_run = isort.code( code=test_output, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_second_run == test_output test_input_lines_down = ( "# comment 1\n" "import django.settings\n" "\n" "# Standard Library\n" "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "\n" "# Standard Library End\n" ) test_output_lines_down = isort.code( code=test_input_lines_down, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", ) assert test_output_lines_down == ( "# comment 1\n" "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) test_input_lines_down = ( "# comment 1\n" "import django.settings\n" "\n" "# Standard Library\n" "import sys\n" "import unicodedata\n" "import statistics\n" "import os\n" "import myproject.test\n" "\n" "# Standard Library End\n" "# Standard Library End\n" ) test_output_lines_down = isort.code( code=test_input_lines_down, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", dedup_headings=True, ) assert test_output_lines_down == ( "# comment 1\n" "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" 
"\n" "# My Stuff\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) test_input_lines_down = ( "# comment 1\n" "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" ) test_output_lines_down = isort.code( code=test_input_lines_down, known_first_party=["myproject"], import_heading_stdlib="Standard Library", import_heading_firstparty="My Stuff", import_footer_stdlib="Standard Library End", import_footer_firstparty="My Stuff End", dedup_headings=True, ) assert test_output_lines_down == ( "# comment 1\n" "# Standard Library\n" "import os\n" "import statistics\n" "import sys\n" "import unicodedata\n" "\n" "# Standard Library End\n" "\n" "import django.settings\n" "\n" "# My Stuff\n" "import myproject.test\n" "\n" "# My Stuff End\n" ) def test_balanced_wrapping() -> None: """Tests balanced wrapping mode, where the length of individual lines maintain width.""" test_input = ( "from __future__ import (absolute_import, division, print_function,\n" " unicode_literals)" ) test_output = isort.code(code=test_input, line_length=70, balanced_wrapping=True) assert test_output == ( "from __future__ import (absolute_import, division,\n" " print_function, unicode_literals)\n" ) def test_relative_import_with_space() -> None: """Tests the case where the relation and the module that is being imported from is separated with a space. """ test_input = "from ... fields.sproqet import SproqetCollection" assert isort.code(test_input) == ("from ...fields.sproqet import SproqetCollection\n") test_input = "from .import foo" test_output = "from . import foo\n" assert isort.code(test_input) == test_output test_input = "from.import foo" test_output = "from . 
import foo\n" assert isort.code(test_input) == test_output def test_multiline_import() -> None: """Test the case where import spawns multiple lines with inconsistent indentation.""" test_input = "from pkg \\\n import stuff, other_suff \\\n more_stuff" assert isort.code(test_input) == ("from pkg import more_stuff, other_suff, stuff\n") # test again with a custom configuration custom_configuration = { "force_single_line": True, "line_length": 120, "known_first_party": ["asdf", "qwer"], "default_section": "THIRDPARTY", "forced_separate": "asdf", } # type: Dict[str, Any] expected_output = ( "from pkg import more_stuff\n" "from pkg import other_suff\n" "from pkg import stuff\n" ) assert isort.code(test_input, **custom_configuration) == expected_output def test_single_multiline() -> None: """Test the case where a single import spawns multiple lines.""" test_input = "from os import\\\n getuid\n\nprint getuid()\n" output = isort.code(test_input) assert output == ("from os import getuid\n\nprint getuid()\n") def test_atomic_mode() -> None: """With atomic mode isort should be able to automatically detect and stop syntax errors""" # without syntax error, everything works OK test_input = "from b import d, c\nfrom a import f, e\n" assert isort.code(test_input, atomic=True) == ("from a import e, f\nfrom b import c, d\n") # with syntax error content is not changed test_input += "while True print 'Hello world'" # blatant syntax error with pytest.raises(ExistingSyntaxErrors): isort.code(test_input, atomic=True) # unless file is for Cython which doesn't yet provide a public AST parsing API assert ( isort.code(test_input, extension="pyx", atomic=True, verbose=True) == isort.code(test_input, extension="pyx", atomic=True) == """from a import e, f from b import c, d while True print 'Hello world' """ ) # ensure atomic works with streams test_stream_input = as_stream("from b import d, c\nfrom a import f, e\n") test_output = UnreadableStream() isort.stream(test_stream_input, test_output, atomic=True) test_output.seek(0) assert test_output.read() == "from a import e, f\nfrom b import c, d\n" def test_order_by_type() -> None: test_input = "from module import Class, CONSTANT, function" assert isort.code(test_input, order_by_type=True) == ( "from module import CONSTANT, Class, function\n" ) # More complex sample data test_input = "from module import Class, CONSTANT, function, BASIC, Apple" assert isort.code(test_input, order_by_type=True) == ( "from module import BASIC, CONSTANT, Apple, Class, function\n" ) # Really complex sample data, to verify we don't mess with top level imports, only nested ones test_input = ( "import StringIO\n" "import glob\n" "import os\n" "import shutil\n" "import tempfile\n" "import time\n" "from subprocess import PIPE, Popen, STDOUT\n" ) assert isort.code(test_input, order_by_type=True, py_version="27") == ( "import glob\n" "import os\n" "import shutil\n" "import StringIO\n" "import tempfile\n" "import time\n" "from subprocess import PIPE, STDOUT, Popen\n" ) def test_custom_lines_before_import_section() -> None: """Test the case where the number of lines to output after imports has been explicitly set.""" test_input = """from a import b foo = 'bar' """ ln = "\n" # default case is no line added before the import assert isort.code(test_input) == (test_input) # test again with a custom number of lines before the import section assert isort.code(test_input, lines_before_imports=2) == 2 * ln + test_input comment = "# Comment\n" # test with a comment above assert isort.code(comment + ln + 
    # test with comments with empty lines
    assert (
        isort.code(comment + ln + comment + 3 * ln + test_input, lines_before_imports=1)
        == comment + ln + comment + 1 * ln + test_input
    )


def test_custom_lines_after_import_section() -> None:
    """Test the case where the number of lines to output after imports has been explicitly set."""
    test_input = "from a import b\nfoo = 'bar'\n"

    # default case is one blank line if no method or class follows the imports
    assert isort.code(test_input) == ("from a import b\n\nfoo = 'bar'\n")

    # test again with a custom number of lines after the import section
    assert isort.code(test_input, lines_after_imports=2) == ("from a import b\n\n\nfoo = 'bar'\n")


def test_smart_lines_after_import_section() -> None:
    """Tests the default 'smart' behavior for dealing with lines after the import section"""
    # one blank line if no method or class follows the imports
    test_input = "from a import b\nfoo = 'bar'\n"
    assert isort.code(test_input) == ("from a import b\n\nfoo = 'bar'\n")

    # two blank lines if a method or class follows the imports
    test_input = "from a import b\ndef my_function():\n pass\n"
    assert isort.code(test_input) == ("from a import b\n\n\ndef my_function():\n pass\n")

    # two blank lines if an async method follows the imports
    test_input = "from a import b\nasync def my_function():\n pass\n"
    assert isort.code(test_input) == ("from a import b\n\n\nasync def my_function():\n pass\n")

    # two blank lines if a method or class follows the imports - even with a comment before it
    test_input = (
        "from a import b\n" "# comment should be ignored\n" "def my_function():\n" " pass\n"
    )
    assert isort.code(test_input) == (
        "from a import b\n" "\n" "\n" "# comment should be ignored\n" "def my_function():\n"
        " pass\n"
    )

    # the same logic does not apply to doc strings
    test_input = (
        "from a import b\n" '"""\n' " comment should be ignored\n" '"""\n'
        "def my_function():\n" " pass\n"
    )
    assert isort.code(test_input) == (
        "from a import b\n" "\n" '"""\n' " comment should be ignored\n" '"""\n'
        "def my_function():\n" " pass\n"
    )

    # Ensure logic doesn't incorrectly skip over assignments to multi-line strings
    test_input = 'from a import b\nX = """test\n"""\ndef my_function():\n pass\n'
    assert isort.code(test_input) == (
        "from a import b\n" "\n" 'X = """test\n' '"""\n' "def my_function():\n" " pass\n"
    )


def test_settings_overwrite() -> None:
    """Test to ensure settings overwrite instead of trying to combine."""
    assert Config(known_standard_library=["not_std_library"]).known_standard_library == frozenset(
        {"not_std_library"}
    )
    assert Config(known_first_party=["thread"]).known_first_party == frozenset({"thread"})


def test_combined_from_and_as_imports() -> None:
    """Test to ensure it's possible to combine from and as imports."""
    test_input = (
        "from translate.misc.multistring import multistring\n"
        "from translate.storage import base, factory\n"
        "from translate.storage.placeables import general, parse as rich_parse\n"
    )
    assert isort.code(test_input, combine_as_imports=True) == test_input
    assert isort.code(test_input, combine_as_imports=True, only_sections=True) == test_input
    test_input = "import os \nimport os as _os"
    test_output = "import os\nimport os as _os\n"
    assert isort.code(test_input) == test_output


def test_as_imports_with_line_length() -> None:
    """Test to ensure 'as' imports are wrapped correctly when they exceed the line length."""
    test_input = (
        "from translate.storage import base as storage_base\n"
        "from translate.storage.placeables import general, parse as rich_parse\n"
    )
    assert isort.code(
        code=test_input,
        combine_as_imports=False,
        line_length=40,
    ) == (
        "from translate.storage import \\\n base as storage_base\n"
        "from translate.storage.placeables import \\\n general\n"
        "from translate.storage.placeables import \\\n parse as rich_parse\n"
    )


def test_keep_comments() -> None:
    """Test to ensure isort properly keeps comments intact after sorting."""
    # Straight Import
    test_input = "import foo # bar\n"
    assert isort.code(test_input) == test_input

    # Star import
    test_input_star = "from foo import * # bar\n"
    assert isort.code(test_input_star) == test_input_star

    # Force Single Line From Import
    test_input = "from foo import bar # comment\n"
    assert isort.code(test_input, force_single_line=True) == test_input

    # From import
    test_input = "from foo import bar # My Comment\n"
    assert isort.code(test_input) == test_input

    # More complicated case
    test_input = "from a import b # My Comment1\nfrom a import c # My Comment2\n"
    assert isort.code(test_input) == (
        "from a import b # My Comment1\nfrom a import c # My Comment2\n"
    )

    # Test case where import comments make imports extend past the line length
    test_input = (
        "from a import b # My Comment1\n" "from a import c # My Comment2\n" "from a import d\n"
    )
    assert isort.code(test_input, line_length=45) == (
        "from a import b # My Comment1\n" "from a import c # My Comment2\n" "from a import d\n"
    )

    # Test case where imports with comments will be beyond line length limit
    test_input = (
        "from a import b, c # My Comment1\n"
        "from a import c, d # My Comment2 is really really really really long\n"
    )
    assert isort.code(test_input, line_length=45) == (
        "from a import ( # My Comment1; My Comment2 is really really really really long\n"
        " b, c, d)\n"
    )

    # Test that comments are not stripped from 'import ... as ...' by default
    test_input = "from a import b as bb # b comment\nfrom a import c as cc # c comment\n"
    assert isort.code(test_input) == test_input

    # Test that 'import ... as ...' comments are not collected inappropriately
    test_input = (
        "from a import b as bb # b comment\n" "from a import c as cc # c comment\n"
        "from a import d\n"
    )
    assert isort.code(test_input) == test_input
    assert isort.code(test_input, combine_as_imports=True) == (
        "from a import b as bb, c as cc, d # b comment; c comment\n"
    )


def test_multiline_split_on_dot() -> None:
    """Test to ensure isort correctly handles multiline imports, even when split right after a '.'
""" test_input = ( "from my_lib.my_package.test.level_1.level_2.level_3.level_4.level_5.\\\n" " my_module import my_function" ) assert isort.code(test_input, line_length=70) == ( "from my_lib.my_package.test.level_1.level_2.level_3.level_4.level_5.my_module import \\\n" " my_function\n" ) def test_import_star() -> None: """Test to ensure isort handles star imports correctly""" test_input = "from blah import *\nfrom blah import _potato\n" assert isort.code(test_input) == ("from blah import *\nfrom blah import _potato\n") assert isort.code(test_input, combine_star=True) == ("from blah import *\n") def test_include_trailing_comma() -> None: """Test for the include_trailing_comma option""" test_output_grid = isort.code( code=SHORT_IMPORT, multi_line_output=WrapModes.GRID, line_length=40, include_trailing_comma=True, ) assert test_output_grid == ( "from third_party import (lib1, lib2,\n" " lib3, lib4,)\n" ) test_output_vertical = isort.code( code=SHORT_IMPORT, multi_line_output=WrapModes.VERTICAL, line_length=40, include_trailing_comma=True, ) assert test_output_vertical == ( "from third_party import (lib1,\n" " lib2,\n" " lib3,\n" " lib4,)\n" ) test_output_vertical_indent = isort.code( code=SHORT_IMPORT, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=40, include_trailing_comma=True, ) assert test_output_vertical_indent == ( "from third_party import (\n" " lib1,\n" " lib2,\n" " lib3,\n" " lib4,\n" ")\n" ) test_output_vertical_grid = isort.code( code=SHORT_IMPORT, multi_line_output=WrapModes.VERTICAL_GRID, line_length=40, include_trailing_comma=True, ) assert test_output_vertical_grid == ( "from third_party import (\n lib1, lib2, lib3, lib4,)\n" ) test_output_vertical_grid_grouped = isort.code( code=SHORT_IMPORT, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, line_length=40, include_trailing_comma=True, ) assert test_output_vertical_grid_grouped == ( "from third_party import (\n lib1, lib2, lib3, lib4,\n)\n" ) test_output_wrap_single_import_with_use_parentheses = isort.code( code=SINGLE_FROM_IMPORT, line_length=25, include_trailing_comma=True, use_parentheses=True ) assert test_output_wrap_single_import_with_use_parentheses == ( "from third_party import (\n lib1,)\n" ) test_output_wrap_single_import_vertical_indent = isort.code( code=SINGLE_FROM_IMPORT, line_length=25, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, include_trailing_comma=True, use_parentheses=True, ) assert test_output_wrap_single_import_vertical_indent == ( "from third_party import (\n lib1,\n)\n" ) trailing_comma_with_comment = ( "from six.moves.urllib.parse import urlencode " "# pylint: disable=no-name-in-module,import-error" ) expected_trailing_comma_with_comment = ( "from six.moves.urllib.parse import (\n" " urlencode, # pylint: disable=no-n" "ame-in-module,import-error\n)\n" ) trailing_comma_with_comment = isort.code( code=trailing_comma_with_comment, line_length=80, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, include_trailing_comma=True, use_parentheses=True, ) assert trailing_comma_with_comment == expected_trailing_comma_with_comment # The next time around, it should be equal trailing_comma_with_comment = isort.code( code=trailing_comma_with_comment, line_length=80, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, include_trailing_comma=True, use_parentheses=True, ) assert trailing_comma_with_comment == expected_trailing_comma_with_comment def test_similar_to_std_library() -> None: """Test to ensure modules that are named similarly to a standard library import don't end up 
clobbered """ test_input = "import datetime\n\nimport requests\nimport times\n" assert isort.code(test_input, known_third_party=["requests", "times"]) == test_input def test_correctly_placed_imports() -> None: """Test to ensure comments stay on correct placement after being sorted""" test_input = "from a import b # comment for b\nfrom a import c # comment for c\n" assert isort.code(test_input, force_single_line=True) == ( "from a import b # comment for b\nfrom a import c # comment for c\n" ) assert isort.code(test_input) == ( "from a import b # comment for b\nfrom a import c # comment for c\n" ) # Full example test from issue #143 test_input = ( "from itertools import chain\n" "\n" "from django.test import TestCase\n" "from model_mommy import mommy\n" "\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_item_product\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_item_product_d" "efinition\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_item_product_d" "efinition_platform\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_item_product_p" "latform\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_territory_reta" "il_model\n" "from apps.clientman.commands.download_usage_rights import " "associate_right_for_territory_reta" "il_model_definition_platform_provider # noqa\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_item_product\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_item_product_defini" "tion\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_item_product_defini" "tion_platform\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_item_product_platfo" "rm\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_territory_retail_mo" "del\n" "from apps.clientman.commands.download_usage_rights import " "clear_right_for_territory_retail_mo" "del_definition_platform_provider # noqa\n" "from apps.clientman.commands.download_usage_rights import " "create_download_usage_right\n" "from apps.clientman.commands.download_usage_rights import " "delete_download_usage_right\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_item_product\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_item_product_d" "efinition\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_item_product_d" "efinition_platform\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_item_product_p" "latform\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_territory_reta" "il_model\n" "from apps.clientman.commands.download_usage_rights import " "disable_download_for_territory_reta" "il_model_definition_platform_provider # noqa\n" "from apps.clientman.commands.download_usage_rights import " "get_download_rights_for_item\n" "from apps.clientman.commands.download_usage_rights import " "get_right\n" ) assert ( isort.code( code=test_input, force_single_line=True, line_length=140, known_third_party=["django", "model_mommy"], default_section=sections.FIRSTPARTY, ) == test_input ) def test_auto_detection() -> None: """Initial test to ensure isort auto-detection works correctly - will grow over time as new issues are raised. 
""" # Issue 157 test_input = "import binascii\nimport os\n\nimport cv2\nimport requests\n" assert isort.code(test_input, known_third_party=["cv2", "requests"]) == test_input # alternative solution assert isort.code(test_input, default_section="THIRDPARTY") == test_input def test_same_line_statements() -> None: """Ensure isort correctly handles the case where a single line contains multiple statements including an import """ test_input = "import pdb; import nose\n" assert isort.code(test_input) == ("import pdb\n\nimport nose\n") test_input = "import pdb; pdb.set_trace()\nimport nose; nose.run()\n" assert isort.code(test_input) == test_input def test_long_line_comments() -> None: """Ensure isort correctly handles comments at the end of extremely long lines""" test_input = ( "from foo.utils.fabric_stuff.live import check_clean_live, deploy_live, " "sync_live_envdir, " "update_live_app, update_live_cron # noqa\n" "from foo.utils.fabric_stuff.stage import check_clean_stage, deploy_stage, " "sync_stage_envdir, " "update_stage_app, update_stage_cron # noqa\n" ) assert isort.code(code=test_input, line_length=100, balanced_wrapping=True) == ( "from foo.utils.fabric_stuff.live import (check_clean_live, deploy_live, # noqa\n" " sync_live_envdir, update_live_app, " "update_live_cron)\n" "from foo.utils.fabric_stuff.stage import (check_clean_stage, deploy_stage, # noqa\n" " sync_stage_envdir, update_stage_app, " "update_stage_cron)\n" ) def test_tab_character_in_import() -> None: """Ensure isort correctly handles import statements that contain a tab character""" test_input = ( "from __future__ import print_function\n" "from __future__ import\tprint_function\n" ) assert isort.code(test_input) == "from __future__ import print_function\n" def test_split_position() -> None: """Ensure isort splits on import instead of . 
when possible""" test_input = ( "from p24.shared.exceptions.master.host_state_flag_unchanged " "import HostStateUnchangedException\n" ) assert isort.code(test_input, line_length=80) == ( "from p24.shared.exceptions.master.host_state_flag_unchanged import \\\n" " HostStateUnchangedException\n" ) def test_place_comments() -> None: """Ensure manually placing imports works as expected""" test_input = ( "import sys\n" "import os\n" "import myproject.test\n" "import django.settings\n" "\n" "# isort: imports-thirdparty\n" "# isort: imports-firstparty\n" "# isort:imports-stdlib\n" "\n" ) expected_output = ( "\n# isort: imports-thirdparty\n" "import django.settings\n" "\n" "# isort: imports-firstparty\n" "import myproject.test\n" "\n" "# isort:imports-stdlib\n" "import os\n" "import sys\n" ) test_output = isort.code(test_input, known_first_party=["myproject"]) assert test_output == expected_output test_output = isort.code(test_output, known_first_party=["myproject"]) assert test_output == expected_output def test_placement_control() -> None: """Ensure that most specific placement control match wins""" test_input = ( "import os\n" "import sys\n" "from bottle import Bottle, redirect, response, run\n" "import p24.imports._argparse as argparse\n" "import p24.imports._subprocess as subprocess\n" "import p24.imports._VERSION as VERSION\n" "import p24.shared.media_wiki_syntax as syntax\n" ) test_output = isort.code( code=test_input, known_first_party=["p24", "p24.imports._VERSION"], known_standard_library=["p24.imports", "os", "sys"], known_third_party=["bottle"], default_section="THIRDPARTY", ) assert test_output == ( "import os\n" "import p24.imports._argparse as argparse\n" "import p24.imports._subprocess as subprocess\n" "import sys\n" "\n" "from bottle import Bottle, redirect, response, run\n" "\n" "import p24.imports._VERSION as VERSION\n" "import p24.shared.media_wiki_syntax as syntax\n" ) def test_custom_sections() -> None: """Ensure that most specific placement control match wins""" test_input = ( "import os\n" "import sys\n" "from django.conf import settings\n" "from bottle import Bottle, redirect, response, run\n" "import p24.imports._argparse as argparse\n" "from django.db import models\n" "import p24.imports._subprocess as subprocess\n" "import pandas as pd\n" "import p24.imports._VERSION as VERSION\n" "import numpy as np\n" "import p24.shared.media_wiki_syntax as syntax\n" ) test_output = isort.code( code=test_input, known_first_party=["p24", "p24.imports._VERSION"], import_heading_stdlib="Standard Library", import_heading_thirdparty="Third Party", import_heading_firstparty="First Party", import_heading_django="Django", import_heading_pandas="Pandas", known_standard_library=["p24.imports", "os", "sys"], known_third_party=["bottle"], known_django=["django"], known_pandas=["pandas", "numpy"], default_section="THIRDPARTY", sections=[ "FUTURE", "STDLIB", "DJANGO", "THIRDPARTY", "PANDAS", "FIRSTPARTY", "LOCALFOLDER", ], ) assert test_output == ( "# Standard Library\n" "import os\n" "import p24.imports._argparse as argparse\n" "import p24.imports._subprocess as subprocess\n" "import sys\n" "\n" "# Django\n" "from django.conf import settings\n" "from django.db import models\n" "\n" "# Third Party\n" "from bottle import Bottle, redirect, response, run\n" "\n" "# Pandas\n" "import numpy as np\n" "import pandas as pd\n" "\n" "# First Party\n" "import p24.imports._VERSION as VERSION\n" "import p24.shared.media_wiki_syntax as syntax\n" ) def test_custom_sections_exception_handling() -> None: """Ensure 
that appropriate exception is raised for missing sections""" test_input = "import requests\n" with pytest.raises(MissingSection): isort.code( code=test_input, default_section="THIRDPARTY", sections=[ "FUTURE", "STDLIB", "DJANGO", "PANDAS", "FIRSTPARTY", "LOCALFOLDER", ], ) test_input = "from requests import get, post\n" with pytest.raises(MissingSection): isort.code( code=test_input, default_section="THIRDPARTY", sections=[ "FUTURE", "STDLIB", "DJANGO", "PANDAS", "FIRSTPARTY", "LOCALFOLDER", ], ) def test_glob_known() -> None: """Ensure that most specific placement control match wins""" test_input = ( "import os\n" "from django_whatever import whatever\n" "import sys\n" "from django.conf import settings\n" "from . import another\n" ) test_output = isort.code( code=test_input, import_heading_stdlib="Standard Library", import_heading_thirdparty="Third Party", import_heading_firstparty="First Party", import_heading_django="Django", import_heading_djangoplugins="Django Plugins", import_heading_localfolder="Local", known_django=["django"], known_djangoplugins=["django_*"], default_section="THIRDPARTY", sections=[ "FUTURE", "STDLIB", "DJANGO", "DJANGOPLUGINS", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER", ], ) assert test_output == ( "# Standard Library\n" "import os\n" "import sys\n" "\n" "# Django\n" "from django.conf import settings\n" "\n" "# Django Plugins\n" "from django_whatever import whatever\n" "\n" "# Local\n" "from . import another\n" ) def test_sticky_comments() -> None: """Test to ensure it is possible to make comments 'stick' above imports""" test_input = ( "import os\n" "\n" "# Used for type-hinting (ref: https://github.com/davidhalter/jedi/issues/414).\n" "from selenium.webdriver.remote.webdriver import WebDriver # noqa\n" ) assert isort.code(test_input) == test_input test_input = ( "from django import forms\n" "# While this couples the geographic forms to the GEOS library,\n" "# it decouples from database (by not importing SpatialBackend).\n" "from django.contrib.gis.geos import GEOSException, GEOSGeometry\n" "from django.utils.translation import ugettext_lazy as _\n" ) assert isort.code(test_input) == test_input def test_zipimport() -> None: """Imports ending in "import" shouldn't be clobbered""" test_input = "from zipimport import zipimport\n" assert isort.code(test_input) == test_input def test_from_ending() -> None: """Imports ending in "from" shouldn't be clobbered.""" test_input = "from foo import get_foo_from, get_foo\n" expected_output = "from foo import get_foo, get_foo_from\n" assert isort.code(test_input) == expected_output def test_from_first() -> None: """Tests the setting from_first works correctly""" test_input = "from os import path\nimport os\n" assert isort.code(test_input, from_first=True) == test_input def test_top_comments() -> None: """Ensure correct behavior with top comments""" test_input = ( "# -*- encoding: utf-8 -*-\n" "# Test comment\n" "#\n" "from __future__ import unicode_literals\n" ) assert isort.code(test_input) == test_input test_input = ( "# -*- coding: utf-8 -*-\n" "from django.db import models\n" "from django.utils.encoding import python_2_unicode_compatible\n" ) assert isort.code(test_input) == test_input test_input = "# Comment\nimport sys\n" assert isort.code(test_input) == test_input test_input = "# -*- coding\nimport sys\n" assert isort.code(test_input) == test_input def test_consistency() -> None: """Ensures consistency of handling even when dealing with non ordered-by-type imports""" test_input = "from sqlalchemy.dialects.postgresql 
import ARRAY, array\n" assert isort.code(test_input, order_by_type=True) == test_input def test_force_grid_wrap() -> None: """Ensures removing imports works as expected.""" test_input = "from bar import lib2\nfrom foo import lib6, lib7\n" test_output = isort.code( code=test_input, force_grid_wrap=2, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT ) assert ( test_output == """from bar import lib2 from foo import ( lib6, lib7 ) """ ) test_output = isort.code( code=test_input, force_grid_wrap=3, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT ) assert test_output == test_input def test_force_grid_wrap_long() -> None: """Ensure that force grid wrap still happens with long line length""" test_input = ( "from foo import lib6, lib7\n" "from bar import lib2\n" "from babar import something_that_is_kind_of_long" ) test_output = isort.code( code=test_input, force_grid_wrap=2, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=9999, ) assert ( test_output == """from babar import something_that_is_kind_of_long from bar import lib2 from foo import ( lib6, lib7 ) """ ) def test_uses_jinja_variables() -> None: """Test a basic set of imports that use jinja variables""" test_input = ( "import sys\n" "import os\n" "import myproject.{ test }\n" "import django.{ settings }" ) test_output = isort.code( code=test_input, known_third_party=["django"], known_first_party=["myproject"] ) assert test_output == ( "import os\n" "import sys\n" "\n" "import django.{ settings }\n" "\n" "import myproject.{ test }\n" ) test_input = "import {{ cookiecutter.repo_name }}\n" "from foo import {{ cookiecutter.bar }}\n" assert isort.code(test_input) == test_input def test_fcntl() -> None: """Test to ensure fcntl gets correctly recognized as stdlib import""" test_input = "import fcntl\nimport os\nimport sys\n" assert isort.code(test_input) == test_input def test_import_split_is_word_boundary_aware() -> None: """Test to ensure that isort splits words in a boundary aware manner""" test_input = ( "from mycompany.model.size_value_array_import_func import \\\n" " get_size_value_array_import_func_jobs" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=79 ) assert test_output == ( "from mycompany.model.size_value_array_import_func import (\n" " get_size_value_array_import_func_jobs\n" ")\n" ) def test_other_file_encodings(tmpdir) -> None: """Test to ensure file encoding is respected""" for encoding in ("latin1", "utf8"): tmp_fname = tmpdir.join(f"test_{encoding}.py") file_contents = f"# coding: {encoding}\n\ns = u'รฃ'\n" tmp_fname.write_binary(file_contents.encode(encoding)) api.sort_file(Path(tmp_fname), file_path=Path(tmp_fname), settings_path=os.getcwd()) assert tmp_fname.read_text(encoding) == file_contents def test_other_file_encodings_in_place(tmpdir) -> None: """Test to ensure file encoding is respected when overwritten in place.""" for encoding in ("latin1", "utf8"): tmp_fname = tmpdir.join(f"test_{encoding}.py") file_contents = f"# coding: {encoding}\n\ns = u'รฃ'\n" tmp_fname.write_binary(file_contents.encode(encoding)) api.sort_file( Path(tmp_fname), file_path=Path(tmp_fname), settings_path=os.getcwd(), overwrite_in_place=True, ) assert tmp_fname.read_text(encoding) == file_contents def test_encoding_not_in_comment(tmpdir) -> None: """Test that 'encoding' not in a comment is ignored""" tmp_fname = tmpdir.join("test_encoding.py") file_contents = "class Foo\n coding: latin1\n\ns = u'รฃ'\n" tmp_fname.write_binary(file_contents.encode("utf8")) assert ( 
isort.code( Path(tmp_fname).read_text("utf8"), file_path=Path(tmp_fname), settings_path=os.getcwd() ) == file_contents ) def test_encoding_not_in_first_two_lines(tmpdir) -> None: """Test that 'encoding' not in the first two lines is ignored""" tmp_fname = tmpdir.join("test_encoding.py") file_contents = "\n\n# -*- coding: latin1\n\ns = u'รฃ'\n" tmp_fname.write_binary(file_contents.encode("utf8")) assert ( isort.code( Path(tmp_fname).read_text("utf8"), file_path=Path(tmp_fname), settings_path=os.getcwd() ) == file_contents ) def test_comment_at_top_of_file() -> None: """Test to ensure isort correctly handles top of file comments""" test_input = ( "# Comment one\n" "from django import forms\n" "# Comment two\n" "from django.contrib.gis.geos import GEOSException\n" ) assert isort.code(test_input) == test_input test_input = "# -*- coding: utf-8 -*-\nfrom django.db import models\n" assert isort.code(test_input) == test_input def test_alphabetic_sorting() -> None: """Test to ensure isort correctly handles single line imports""" test_input = ( "import unittest\n" "\n" "import ABC\n" "import Zope\n" "from django.contrib.gis.geos import GEOSException\n" "from plone.app.testing import getRoles\n" "from plone.app.testing import ManageRoles\n" "from plone.app.testing import setRoles\n" "from Products.CMFPlone import utils\n" ) options = { "force_single_line": True, "force_alphabetical_sort_within_sections": True, } # type: Dict[str, Any] output = isort.code(test_input, **options) assert output == test_input test_input = "# -*- coding: utf-8 -*-\nfrom django.db import models\n" assert isort.code(test_input) == test_input def test_alphabetic_sorting_multi_line() -> None: """Test to ensure isort correctly handles multiline import see: issue 364""" test_input = ( "from a import (CONSTANT_A, cONSTANT_B, CONSTANT_C, CONSTANT_D, CONSTANT_E,\n" " CONSTANT_F, CONSTANT_G, CONSTANT_H, CONSTANT_I, CONSTANT_J)\n" ) options = {"force_alphabetical_sort_within_sections": True} # type: Dict[str, Any] assert isort.code(test_input, **options) == test_input def test_comments_not_duplicated() -> None: """Test to ensure comments aren't duplicated: issue 303""" test_input = ( "from flask import url_for\n" "# Whole line comment\n" "from service import demo # inline comment\n" "from service import settings\n" ) output = isort.code(test_input) assert output.count("# Whole line comment\n") == 1 assert output.count("# inline comment\n") == 1 def test_top_of_line_comments() -> None: """Test to ensure top of line comments stay where they should: issue 260""" test_input = ( "# -*- coding: utf-8 -*-\n" "from django.db import models\n" "#import json as simplejson\n" "from myproject.models import Servidor\n" "\n" "import reversion\n" "\n" "import logging\n" ) output = isort.code(test_input) print(output) assert output.startswith("# -*- coding: utf-8 -*-\n") def test_basic_comment() -> None: """Test to ensure a basic comment wont crash isort""" test_input = "import logging\n# Foo\nimport os\n" assert isort.code(test_input) == test_input def test_shouldnt_add_lines() -> None: """Ensure that isort doesn't add a blank line when a top of import comment is present, See: issue #316 """ test_input = '"""Text"""\n' "# This is a comment\nimport pkg_resources\n" assert isort.code(test_input) == test_input def test_sections_parsed_correct(tmpdir) -> None: """Ensure that modules for custom sections parsed as list from config file and isort result is correct """ conf_file_data = ( "[settings]\n" 
"sections=FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER,COMMON\n" "known_common=nose\n" "import_heading_common=Common Library\n" "import_heading_stdlib=Standard Library\n" ) test_input = "import os\nfrom nose import *\nimport nose\nfrom os import path" correct_output = ( "# Standard Library\n" "import os\n" "from os import path\n" "\n" "# Common Library\n" "import nose\n" "from nose import *\n" ) tmpdir.join(".isort.cfg").write(conf_file_data) assert isort.code(test_input, settings_path=str(tmpdir)) == correct_output def test_pyproject_conf_file(tmpdir) -> None: """Ensure that modules for custom sections parsed as list from config file and isort result is correct """ conf_file_data = ( "[build-system]\n" 'requires = ["setuptools", "wheel"]\n' "[tool.poetry]\n" 'name = "isort"\n' 'version = "0.1.0"\n' 'license = "MIT"\n' "[tool.isort]\n" "lines_between_types=1\n" 'known_common="nose"\n' 'known_first_party="foo"\n' 'import_heading_common="Common Library"\n' 'import_heading_stdlib="Standard Library"\n' 'sections="FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER,COMMON"\n' "include_trailing_comma = true\n" ) test_input = "import os\nfrom nose import *\nimport nose\nfrom os import path\nimport foo" correct_output = ( "# Standard Library\n" "import os\n" "\n" "from os import path\n" "\n" "import foo\n" "\n" "# Common Library\n" "import nose\n" "\n" "from nose import *\n" ) tmpdir.join("pyproject.toml").write(conf_file_data) assert isort.code(test_input, settings_path=str(tmpdir)) == correct_output def test_alphabetic_sorting_no_newlines() -> None: """Test to ensure that alphabetical sort does not erroneously introduce new lines (issue #328) """ test_input = "import os\n" test_output = isort.code(code=test_input, force_alphabetical_sort_within_sections=True) assert test_input == test_output test_input = "import os\n" "import unittest\n" "\n" "from a import b\n" "\n" "\n" "print(1)\n" test_output = isort.code( code=test_input, force_alphabetical_sort_within_sections=True, lines_after_imports=2 ) assert test_input == test_output def test_sort_within_section() -> None: """Test to ensure its possible to force isort to sort within sections""" test_input = ( "from Foob import ar\n" "import foo\n" "from foo import bar\n" "from foo.bar import Quux, baz\n" ) test_output = isort.code(test_input, force_sort_within_sections=True) assert test_output == test_input test_input = ( "import foo\n" "from foo import bar\n" "from foo.bar import baz\n" "from foo.bar import Quux\n" "from Foob import ar\n" ) test_output = isort.code( code=test_input, force_sort_within_sections=True, order_by_type=False, force_single_line=True, ) assert test_output == test_input test_input = ( "import foo\n" "from foo import bar\n" "from foo.bar import baz\n" "from foo.bar import Quux\n" "from Foob import ar\n" ) test_output = isort.code( code=test_input, case_sensitive=True, force_sort_within_sections=True, order_by_type=False, force_single_line=True, ) assert test_output == test_input test_input = ( "from Foob import ar\n" "import foo\n" "from foo import Quux\n" "from foo import baz\n" ) test_output = isort.code( code=test_input, case_sensitive=True, force_sort_within_sections=True, order_by_type=True, force_single_line=True, ) assert test_output == test_input def test_sort_within_section_case_honored() -> None: """Ensure isort can do partial case-sensitive sorting in force-sorted sections""" test_input = ( "import foo\n" "from foo import bar\n" "from foo.bar import Quux, baz\n" "from Foob import ar\n" ) test_output = isort.code( 
test_input, force_sort_within_sections=True, honor_case_in_force_sorted_sections=True ) assert test_output == test_input test_input = ( "import foo\n" "from foo import bar\n" "from foo.bar import baz\n" "from foo.bar import Quux\n" "from Foob import ar\n" ) test_output = isort.code( code=test_input, force_sort_within_sections=True, honor_case_in_force_sorted_sections=True, order_by_type=False, force_single_line=True, ) assert test_output == test_input test_input = ( "from Foob import ar\n" "import foo\n" "from foo import bar\n" "from foo.bar import baz\n" "from foo.bar import Quux\n" ) test_output = isort.code( code=test_input, case_sensitive=True, force_sort_within_sections=True, honor_case_in_force_sorted_sections=True, order_by_type=False, force_single_line=True, ) assert test_output == test_input test_input = ( "from Foob import ar\n" "import foo\n" "from foo import Quux\n" "from foo import baz\n" ) test_output = isort.code( code=test_input, case_sensitive=True, force_sort_within_sections=True, honor_case_in_force_sorted_sections=True, order_by_type=True, force_single_line=True, ) assert test_output == test_input def test_sorting_with_two_top_comments() -> None: """Test to ensure isort will sort files that contain 2 top comments""" test_input = "#! comment1\n''' comment2\n'''\nimport b\nimport a\n" assert isort.code(test_input) == ("#! comment1\n''' comment2\n'''\nimport a\nimport b\n") def test_lines_between_sections() -> None: """Test to ensure lines_between_sections works""" test_input = "from bar import baz\nimport os\n" assert isort.code(test_input, lines_between_sections=0) == ("import os\nfrom bar import baz\n") assert isort.code(test_input, lines_between_sections=2) == ( "import os\n\n\nfrom bar import baz\n" ) def test_forced_sepatate_globs() -> None: """Test to ensure that forced_separate glob matches lines""" test_input = ( "import os\n" "\n" "from myproject.foo.models import Foo\n" "\n" "from myproject.utils import util_method\n" "\n" "from myproject.bar.models import Bar\n" "\n" "import sys\n" ) test_output = isort.code(code=test_input, forced_separate=["*.models"], line_length=120) assert test_output == ( "import os\n" "import sys\n" "\n" "from myproject.utils import util_method\n" "\n" "from myproject.bar.models import Bar\n" "from myproject.foo.models import Foo\n" ) def test_no_additional_lines_issue_358() -> None: """Test to ensure issue 358 is resolved and running isort multiple times does not add extra newlines """ test_input = ( '"""This is a docstring"""\n' "# This is a comment\n" "from __future__ import (\n" " absolute_import,\n" " division,\n" " print_function,\n" " unicode_literals\n" ")\n" ) expected_output = ( '"""This is a docstring"""\n' "# This is a comment\n" "from __future__ import (\n" " absolute_import,\n" " division,\n" " print_function,\n" " unicode_literals\n" ")\n" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output test_output = isort.code( code=test_output, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output for _attempt in range(5): test_output = isort.code( code=test_output, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output test_input = ( '"""This is a docstring"""\n' "\n" "# This is a comment\n" "from __future__ import (\n" " absolute_import,\n" " division,\n" " print_function,\n" " unicode_literals\n" ")\n" ) expected_output = ( 
'"""This is a docstring"""\n' "\n" "# This is a comment\n" "from __future__ import (\n" " absolute_import,\n" " division,\n" " print_function,\n" " unicode_literals\n" ")\n" ) test_output = isort.code( code=test_input, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output test_output = isort.code( code=test_output, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output for _attempt in range(5): test_output = isort.code( code=test_output, multi_line_output=WrapModes.VERTICAL_HANGING_INDENT, line_length=20 ) assert test_output == expected_output def test_import_by_paren_issue_375() -> None: """Test to ensure isort can correctly handle sorting imports where the paren is directly by the import body """ test_input = "from .models import(\n Foo,\n Bar,\n)\n" assert isort.code(test_input) == "from .models import Bar, Foo\n" def test_import_by_paren_issue_460() -> None: """Test to ensure isort can doesnt move comments around""" test_input = """ # First comment # Second comment # third comment import io import os """ assert isort.code((test_input)) == test_input def test_function_with_docstring() -> None: """Test to ensure isort can correctly sort imports when the first found content is a function with a docstring """ add_imports = ["from __future__ import unicode_literals"] test_input = "def foo():\n" ' """ Single line triple quoted doctring """\n' " pass\n" expected_output = ( "from __future__ import unicode_literals\n" "\n" "\n" "def foo():\n" ' """ Single line triple quoted doctring """\n' " pass\n" ) assert isort.code(test_input, add_imports=add_imports) == expected_output def test_plone_style() -> None: """Test to ensure isort correctly plone style imports""" test_input = ( "from django.contrib.gis.geos import GEOSException\n" "from plone.app.testing import getRoles\n" "from plone.app.testing import ManageRoles\n" "from plone.app.testing import setRoles\n" "from Products.CMFPlone import utils\n" "\n" "import ABC\n" "import unittest\n" "import Zope\n" ) options = {"force_single_line": True, "force_alphabetical_sort": True} # type: Dict[str, Any] assert isort.code(test_input, **options) == test_input def test_third_party_case_sensitive() -> None: """Modules which match builtins by name but not on case should not be picked up on Windows.""" test_input = "import thirdparty\nimport os\nimport ABC\n" expected_output = "import os\n\nimport ABC\nimport thirdparty\n" assert isort.code(test_input) == expected_output def test_exists_case_sensitive_file(tmpdir) -> None: """Test exists_case_sensitive function for a file.""" tmpdir.join("module.py").ensure(file=1) assert exists_case_sensitive(str(tmpdir.join("module.py"))) assert not exists_case_sensitive(str(tmpdir.join("MODULE.py"))) def test_exists_case_sensitive_directory(tmpdir) -> None: """Test exists_case_sensitive function for a directory.""" tmpdir.join("pkg").ensure(dir=1) assert exists_case_sensitive(str(tmpdir.join("pkg"))) assert not exists_case_sensitive(str(tmpdir.join("PKG"))) def test_sys_path_mutation(tmpdir) -> None: """Test to ensure sys.path is not modified""" tmpdir.mkdir("src").mkdir("a") test_input = "from myproject import test" options = {"virtual_env": str(tmpdir)} # type: Dict[str, Any] expected_length = len(sys.path) isort.code(test_input, **options) assert len(sys.path) == expected_length isort.code(test_input, old_finders=True, **options) def test_long_single_line() -> None: """Test to ensure long single lines get handled 
correctly""" output = isort.code( code="from ..views import (" " _a," "_xxxxxx_xxxxxxx_xxxxxxxx_xxx_xxxxxxx as xxxxxx_xxxxxxx_xxxxxxxx_xxx_xxxxxxx)", line_length=79, ) for line in output.split("\n"): assert len(line) <= 79 output = isort.code( code="from ..views import (" " _a," "_xxxxxx_xxxxxxx_xxxxxxxx_xxx_xxxxxxx as xxxxxx_xxxxxxx_xxxxxxxx_xxx_xxxxxxx)", line_length=79, combine_as_imports=True, ) for line in output.split("\n"): assert len(line) <= 79 def test_import_inside_class_issue_432() -> None: """Test to ensure issue 432 is resolved and isort doesn't insert imports in the middle of classes """ test_input = "# coding=utf-8\nclass Foo:\n def bar(self):\n pass\n" expected_output = ( "# coding=utf-8\n" "import baz\n" "\n" "\n" "class Foo:\n" " def bar(self):\n" " pass\n" ) assert isort.code(test_input, add_imports=["import baz"]) == expected_output def test_wildcard_import_without_space_issue_496() -> None: """Test to ensure issue #496: wildcard without space, is resolved""" test_input = "from findorserver.coupon.models import*" expected_output = "from findorserver.coupon.models import *\n" assert isort.code(test_input) == expected_output def test_import_line_mangles_issues_491() -> None: """Test to ensure comment on import with parens doesn't cause issues""" test_input = "import os # ([\n\n" 'print("hi")\n' assert isort.code(test_input) == test_input def test_import_line_mangles_issues_505() -> None: """Test to ensure comment on import with parens doesn't cause issues""" test_input = "from sys import * # (\n\n\ndef test():\n" ' print("Test print")\n' assert isort.code(test_input) == test_input def test_import_line_mangles_issues_439() -> None: """Test to ensure comment on import with parens doesn't cause issues""" test_input = "import a # () import\nfrom b import b\n" assert isort.code(test_input) == test_input def test_alias_using_paren_issue_466() -> None: """Test to ensure issue #466: Alias causes slash incorrectly is resolved""" test_input = ( "from django.db.backends.mysql.base import DatabaseWrapper as MySQLDatabaseWrapper\n" ) expected_output = ( "from django.db.backends.mysql.base import (\n" " DatabaseWrapper as MySQLDatabaseWrapper)\n" ) assert isort.code(test_input, line_length=50, use_parentheses=True) == expected_output test_input = ( "from django.db.backends.mysql.base import DatabaseWrapper as MySQLDatabaseWrapper\n" ) expected_output = ( "from django.db.backends.mysql.base import (\n" " DatabaseWrapper as MySQLDatabaseWrapper\n" ")\n" ) assert ( isort.code( code=test_input, line_length=50, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, use_parentheses=True, ) == expected_output ) def test_long_alias_using_paren_issue_957() -> None: test_input = ( "from package import module as very_very_very_very_very_very_very" "_very_very_very_long_alias\n" ) expected_output = ( "from package import (\n" " module as very_very_very_very_very_very_very_very_very_very_long_alias\n" ")\n" ) out = isort.code( code=test_input, line_length=50, use_parentheses=True, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, ) assert out == expected_output test_input = ( "from deep.deep.deep.deep.deep.deep.deep.deep.deep.package import module as " "very_very_very_very_very_very_very_very_very_very_long_alias\n" ) expected_output = ( "from deep.deep.deep.deep.deep.deep.deep.deep.deep.package import (\n" " module as very_very_very_very_very_very_very_very_very_very_long_alias\n" ")\n" ) out = isort.code( code=test_input, line_length=50, use_parentheses=True, 
multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, ) assert out == expected_output test_input = ( "from deep.deep.deep.deep.deep.deep.deep.deep.deep.package " "import very_very_very_very_very_very_very_very_very_very_long_module as very_very_very_" "very_very_very_very_very_very_very_long_alias\n" ) expected_output = ( "from deep.deep.deep.deep.deep.deep.deep.deep.deep.package import (\n" " very_very_very_very_very_very_very_very_very_very_long_module as very_very_very_very" "_very_very_very_very_very_very_long_alias\n" ")\n" ) out = isort.code( code=test_input, line_length=50, use_parentheses=True, multi_line_output=WrapModes.VERTICAL_GRID_GROUPED, ) assert out == expected_output def test_strict_whitespace_by_default(capsys) -> None: test_input = "import os\nfrom django.conf import settings\n" assert not api.check_code_string(test_input) _, err = capsys.readouterr() assert "ERROR" in err assert err.endswith("Imports are incorrectly sorted and/or formatted.\n") def test_strict_whitespace_no_closing_newline_issue_676(capsys) -> None: test_input = "import os\n\nfrom django.conf import settings\n\nprint(1)" assert api.check_code_string(test_input) out, _ = capsys.readouterr() assert out == "" def test_ignore_whitespace(capsys) -> None: test_input = "import os\nfrom django.conf import settings\n" assert api.check_code_string(test_input, ignore_whitespace=True) out, _ = capsys.readouterr() assert out == "" def test_import_wraps_with_comment_issue_471() -> None: """Test to ensure issue #471 is resolved""" test_input = ( "from very_long_module_name import SuperLongClassName #@UnusedImport" " -- long string of comments which wrap over" ) expected_output = ( "from very_long_module_name import (\n" " SuperLongClassName) # @UnusedImport -- long string of comments which wrap over\n" ) assert ( isort.code(code=test_input, line_length=50, multi_line_output=1, use_parentheses=True) == expected_output ) def test_import_case_produces_inconsistent_results_issue_472() -> None: """Test to ensure sorting imports with same name but different case produces the same result across platforms """ test_input = ( "from sqlalchemy.dialects.postgresql import ARRAY\n" "from sqlalchemy.dialects.postgresql import array\n" ) assert isort.code(test_input, force_single_line=True) == test_input test_input = ( "from scrapy.core.downloader.handlers.http import " "HttpDownloadHandler, HTTPDownloadHandler\n" ) assert isort.code(test_input, line_length=100) == test_input def test_inconsistent_behavior_in_python_2_and_3_issue_479() -> None: """Test to ensure Python 2 and 3 have the same behavior""" test_input = ( "from workalendar.europe import UnitedKingdom\n" "\n" "from future.standard_library import hooks\n" ) assert isort.code(test_input, known_first_party=["future"]) == test_input def test_sort_within_section_comments_issue_436() -> None: """Test to ensure sort within sections leaves comments untouched""" test_input = ( "import os.path\n" "import re\n" "\n" "# report.py exists in ... comment line 1\n" "# this file needs to ... comment line 2\n" "# it must not be ... 
comment line 3\n" "import report\n" ) assert isort.code(test_input, force_sort_within_sections=True) == test_input def test_sort_within_sections_with_force_to_top_issue_473() -> None: """Test to ensure it's possible to sort within sections with items forced to top""" test_input = "import z\nimport foo\nfrom foo import bar\n" assert ( isort.code(code=test_input, force_sort_within_sections=True, force_to_top=["z"]) == test_input ) def test_force_sort_within_sections_with_relative_imports() -> None: """Test sorting of relative imports with force_sort_within_sections=True""" assert isort.check_code( """import . from . import foo from .. import a from ..alpha.beta import b from ..omega import c import .apple as bar from .mango import baz """, show_diff=True, force_sort_within_sections=True, ) def test_force_sort_within_sections_with_reverse_relative_imports() -> None: """Test reverse sorting of relative imports with force_sort_within_sections=True""" assert isort.check_code( """import . from . import foo from .mango import baz from ..alpha.beta import b from .. import a from ..omega import c import .apple as bar """, show_diff=True, force_sort_within_sections=True, reverse_relative=True, ) def test_sort_relative_in_force_sorted_sections_issue_1659() -> None: """Ensure relative imports are sorted within sections""" assert isort.check_code( """from .. import a from ..alpha.beta import b from ..omega import c import . from . import foo import .apple as bar from .mango import baz """, show_diff=True, force_sort_within_sections=True, sort_relative_in_force_sorted_sections=True, ) def test_reverse_sort_relative_in_force_sorted_sections_issue_1659() -> None: """Ensure reverse ordered relative imports are sorted within sections""" assert isort.check_code( """import . from . import foo import .apple as bar from .mango import baz from .. 
import a
from ..alpha.beta import b
from ..omega import c
""",
        show_diff=True,
        force_sort_within_sections=True,
        sort_relative_in_force_sorted_sections=True,
        reverse_relative=True,
    )


def test_correct_number_of_new_lines_with_comment_issue_435() -> None:
    """Test to ensure that injecting a comment in-between imports
    doesn't mess up the new line spacing
    """
    test_input = "import foo\n\n# comment\n\n\ndef baz():\n pass\n"
    assert isort.code(test_input) == test_input


def test_future_below_encoding_issue_545() -> None:
    """Test to ensure future is always below comment"""
    test_input = (
        "#!/usr/bin/env python\n"
        "from __future__ import print_function\n"
        "import logging\n"
        "\n"
        'print("hello")\n'
    )
    expected_output = (
        "#!/usr/bin/env python\n"
        "from __future__ import print_function\n"
        "\n"
        "import logging\n"
        "\n"
        'print("hello")\n'
    )
    assert isort.code(test_input) == expected_output


def test_no_extra_lines_issue_557() -> None:
    """Test to ensure no extra lines are prepended"""
    test_input = (
        "import os\n"
        "\n"
        "from scrapy.core.downloader.handlers.http import "
        "HttpDownloadHandler, HTTPDownloadHandler\n"
    )
    expected_output = (
        "import os\n"
        "from scrapy.core.downloader.handlers.http import HttpDownloadHandler, "
        "HTTPDownloadHandler\n"
    )
    assert (
        isort.code(
            code=test_input,
            force_alphabetical_sort=True,
            force_sort_within_sections=True,
            line_length=100,
        )
        == expected_output
    )


def test_long_import_wrap_support_with_mode_2() -> None:
    """Test to ensure mode 2 still allows wrapped imports with slash"""
    test_input = (
        "from foobar.foobar.foobar.foobar import \\\n"
        " an_even_longer_function_name_over_80_characters\n"
    )
    assert (
        isort.code(code=test_input, multi_line_output=WrapModes.HANGING_INDENT, line_length=80)
        == test_input
    )


def test_pylint_comments_incorrectly_wrapped_issue_571() -> None:
    """Test to ensure pylint comments don't get wrapped"""
    test_input = (
        "from PyQt5.QtCore import QRegExp # @UnresolvedImport pylint: disable=import-error,"
        "useless-suppression\n"
    )
    expected_output = (
        "from PyQt5.QtCore import \\\n"
        " QRegExp # @UnresolvedImport pylint: disable=import-error,useless-suppression\n"
    )
    assert isort.code(test_input, line_length=60) == expected_output


def test_ensure_async_methods_work_issue_537() -> None:
    """Test to ensure async methods are correctly identified"""
    test_input = (
        "from myapp import myfunction\n"
        "\n"
        "\n"
        "async def test_myfunction(test_client, app):\n"
        " a = await myfunction(test_client, app)\n"
    )
    assert isort.code(test_input) == test_input


def test_ensure_as_imports_sort_correctly_within_from_imports_issue_590() -> None:
    """Test to ensure combined from and as import statements are sorted correctly"""
    test_input = "from os import defpath\nfrom os import pathsep as separator\n"
    assert isort.code(test_input, force_sort_within_sections=True) == test_input

    test_input = "from os import defpath\nfrom os import pathsep as separator\n"
    assert isort.code(test_input) == test_input

    test_input = "from os import defpath\nfrom os import pathsep as separator\n"
    assert isort.code(test_input, force_single_line=True) == test_input


def test_ensure_line_endings_are_preserved_issue_493() -> None:
    """Test to ensure line endings are not converted"""
    test_input = "from os import defpath\r\nfrom os import pathsep as separator\r\n"
    assert isort.code(test_input) == test_input
    test_input = "from os import defpath\rfrom os import pathsep as separator\r"
    assert isort.code(test_input) == test_input
    test_input = "from os import defpath\nfrom os import pathsep as separator\n"
    assert isort.code(test_input) == test_input


def test_not_splitted_sections() -> None:
    whiteline = "\n"
    stdlib_section = "import unittest\n"
    firstparty_section = "from app.pkg1 import mdl1\n"
    local_section = "from .pkg2 import mdl2\n"
    statement = "foo = bar\n"
    test_input = (
        stdlib_section + whiteline + firstparty_section + whiteline + local_section + whiteline + statement
    )

    assert isort.code(test_input, known_first_party=["app"]) == test_input
    assert isort.code(test_input, no_lines_before=["LOCALFOLDER"], known_first_party=["app"]) == (
        stdlib_section + whiteline + firstparty_section + local_section + whiteline + statement
    )
    # by default STDLIB and FIRSTPARTY sections are split by the THIRDPARTY section,
    # so don't merge them if there are no THIRDPARTY imports
    assert (
        isort.code(test_input, no_lines_before=["FIRSTPARTY"], known_first_party=["app"])
        == test_input
    )

    # when the THIRDPARTY section is excluded from the sections list,
    # it's ok to merge STDLIB and FIRSTPARTY
    assert isort.code(
        code=test_input,
        sections=["STDLIB", "FIRSTPARTY", "LOCALFOLDER"],
        no_lines_before=["FIRSTPARTY"],
        known_first_party=["app"],
    ) == (stdlib_section + firstparty_section + whiteline + local_section + whiteline + statement)

    # it doesn't change the output, because stdlib packages don't have any white lines before them
    assert (
        isort.code(test_input, no_lines_before=["STDLIB"], known_first_party=["app"]) == test_input
    )


def test_no_lines_before_empty_section() -> None:
    test_input = "import first\nimport custom\n"
    assert (
        isort.code(
            code=test_input,
            known_third_party=["first"],
            known_custom=["custom"],
            sections=["THIRDPARTY", "LOCALFOLDER", "CUSTOM"],
            no_lines_before=["THIRDPARTY", "LOCALFOLDER", "CUSTOM"],
        )
        == test_input
    )


def test_no_inline_sort() -> None:
    """Test to ensure multiple `from` imports in one line are not sorted if
    `--no-inline-sort` flag is enabled.
    If `--force-single-line-imports` flag is enabled, then `--no-inline-sort` is ignored.
    """
    test_input = "from foo import a, c, b\n"
    assert isort.code(test_input, no_inline_sort=True, force_single_line=False) == test_input
    assert (
        isort.code(test_input, no_inline_sort=False, force_single_line=False)
        == "from foo import a, b, c\n"
    )
    expected = "from foo import a\nfrom foo import b\nfrom foo import c\n"
    assert isort.code(test_input, no_inline_sort=False, force_single_line=True) == expected
    assert isort.code(test_input, no_inline_sort=True, force_single_line=True) == expected


def test_relative_import_of_a_module() -> None:
    """Imports can be dynamically created (PEP302), as is done by modules such as six.
    This test ensures that these types of imports are still sorted to the correct type
    instead of being categorized as local.
    """
    test_input = (
        "from __future__ import absolute_import\n"
        "\n"
        "import itertools\n"
        "\n"
        "from six import add_metaclass\n"
        "\n"
        "from six.moves import asd\n"
    )
    expected_results = (
        "from __future__ import absolute_import\n"
        "\n"
        "import itertools\n"
        "\n"
        "from six import add_metaclass\n"
        "from six.moves import asd\n"
    )
    sorted_result = isort.code(test_input, force_single_line=True)
    assert sorted_result == expected_results


def test_escaped_parens_sort() -> None:
    test_input = "from foo import \\ \n(a,\nb,\nc)\n"
    expected = "from foo import a, b, c\n"
    assert isort.code(test_input) == expected


def test_escaped_parens_sort_with_comment() -> None:
    test_input = "from foo import \\ \n(a,\nb,# comment\nc)\n"
    expected = "from foo import b # comment\nfrom foo import a, c\n"
    assert isort.code(test_input) == expected


def test_escaped_parens_sort_with_first_comment() -> None:
    test_input = "from foo import \\ \n(a,# comment\nb,\nc)\n"
    expected = "from foo import a # comment\nfrom foo import b, c\n"
    assert isort.code(test_input) == expected


def test_escaped_no_parens_sort_with_first_comment() -> None:
    test_input = "from foo import a, \\\nb, \\\nc # comment\n"
    expected = "from foo import c # comment\nfrom foo import a, b\n"
    assert isort.code(test_input) == expected


@pytest.mark.skip(reason="TODO: Duplicates currently not handled.")
def test_to_ensure_imports_are_brought_to_top_issue_651() -> None:
    test_input = (
        "from __future__ import absolute_import, unicode_literals\n"
        "\n"
        'VAR = """\n'
        "multiline text\n"
        '"""\n'
        "\n"
        "from __future__ import unicode_literals\n"
        "from __future__ import absolute_import\n"
    )
    expected_output = (
        "from __future__ import absolute_import, unicode_literals\n"
        "\n"
        'VAR = """\n'
        "multiline text\n"
        '"""\n'
    )
    assert isort.code(test_input) == expected_output


def test_to_ensure_importing_from_imports_module_works_issue_662() -> None:
    test_input = (
        "@wraps(fun)\n"
        "def __inner(*args, **kwargs):\n"
        " from .imports import qualname\n"
        "\n"
        " warn(description=description or qualname(fun), deprecation=deprecation, "
        "removal=removal)\n"
    )
    assert isort.code(test_input) == test_input


def test_to_ensure_no_unexpected_changes_issue_666() -> None:
    test_input = (
        "from django.conf import settings\n"
        "from django.core.management import call_command\n"
        "from django.core.management.base import BaseCommand\n"
        "from django.utils.translation import ugettext_lazy as _\n"
        "\n"
        'TEMPLATE = """\n'
        "# This file is generated automatically with the management command\n"
        "#\n"
        "# manage.py bis_compile_i18n\n"
        "#\n"
        "# please dont change it manually.\n"
        "from django.utils.translation import ugettext_lazy as _\n"
        '"""\n'
    )
    assert isort.code(test_input) == test_input


def test_to_ensure_tabs_dont_become_space_issue_665() -> None:
    test_input = "import os\n\n\ndef my_method():\n\tpass\n"
    assert isort.code(test_input) == test_input


def test_new_lines_are_preserved() -> None:
    with NamedTemporaryFile("w", suffix="py", delete=False) as rn_newline:
        pass

    try:
        with open(rn_newline.name, mode="w", newline="") as rn_newline_input:
            rn_newline_input.write("import sys\r\nimport os\r\n")

        api.sort_file(rn_newline.name, settings_path=os.getcwd())
        with open(rn_newline.name) as new_line_file:
            print(new_line_file.read())
        with open(rn_newline.name, newline="") as rn_newline_file:
            rn_newline_contents = rn_newline_file.read()
        assert rn_newline_contents == "import os\r\nimport sys\r\n"
    finally:
        os.remove(rn_newline.name)

    with NamedTemporaryFile("w", suffix="py", delete=False) as r_newline:
        pass

    try:
        with open(r_newline.name,
mode="w", newline="") as r_newline_input: r_newline_input.write("import sys\rimport os\r") api.sort_file(r_newline.name, settings_path=os.getcwd()) with open(r_newline.name, newline="") as r_newline_file: r_newline_contents = r_newline_file.read() assert r_newline_contents == "import os\rimport sys\r" finally: os.remove(r_newline.name) with NamedTemporaryFile("w", suffix="py", delete=False) as n_newline: pass try: with open(n_newline.name, mode="w", newline="") as n_newline_input: n_newline_input.write("import sys\nimport os\n") api.sort_file(n_newline.name, settings_path=os.getcwd()) with open(n_newline.name, newline="") as n_newline_file: n_newline_contents = n_newline_file.read() assert n_newline_contents == "import os\nimport sys\n" finally: os.remove(n_newline.name) def test_forced_separate_is_deterministic_issue_774(tmpdir) -> None: config_file = tmpdir.join("setup.cfg") config_file.write( "[isort]\n" "forced_separate:\n" " separate1\n" " separate2\n" " separate3\n" " separate4\n" ) test_input = ( "import time\n" "\n" "from separate1 import foo\n" "\n" "from separate2 import bar\n" "\n" "from separate3 import baz\n" "\n" "from separate4 import quux\n" ) assert isort.code(test_input, settings_file=config_file.strpath) == test_input def test_monkey_patched_urllib() -> None: with pytest.raises(ImportError): # Previous versions of isort monkey patched urllib which caused unusual # importing for other projects. from urllib import quote # type: ignore # noqa: F401 def test_argument_parsing() -> None: from isort.main import parse_args args = parse_args(["--dt", "-t", "foo", "--skip=bar", "baz.py", "--os"]) assert args["order_by_type"] is False assert args["force_to_top"] == ["foo"] assert args["skip"] == ["bar"] assert args["files"] == ["baz.py"] assert args["only_sections"] is True @pytest.mark.parametrize("multiprocess", (False, True)) def test_command_line(tmpdir, capfd, multiprocess: bool) -> None: from isort.main import main tmpdir.join("file1.py").write("import re\nimport os\n\nimport contextlib\n\n\nimport isort") tmpdir.join("file2.py").write( ("import collections\nimport time\n\nimport abc" "\n\n\nimport isort") ) arguments = [str(tmpdir), "--settings-path", os.getcwd()] if multiprocess: arguments.extend(["--jobs", "2"]) main(arguments) assert ( tmpdir.join("file1.py").read() == "import contextlib\nimport os\nimport re\n\nimport isort\n" ) assert ( tmpdir.join("file2.py").read() == "import abc\nimport collections\nimport time\n\nimport isort\n" ) if not (sys.platform.startswith("win") or sys.platform.startswith("darwin")): out, err = capfd.readouterr() assert not [error for error in err.split("\n") if error and "warning:" not in error] # it informs us about fixing the files: assert str(tmpdir.join("file1.py")) in out assert str(tmpdir.join("file2.py")) in out @pytest.mark.parametrize("quiet", (False, True)) def test_quiet(tmpdir, capfd, quiet: bool) -> None: if sys.platform.startswith("win"): return from isort.main import main tmpdir.join("file1.py").write("import re\nimport os") tmpdir.join("file2.py").write("") arguments = [str(tmpdir)] if quiet: arguments.append("-q") main(arguments) out, err = capfd.readouterr() assert not err assert bool(out) != quiet @pytest.mark.parametrize("enabled", (False, True)) def test_safety_skips(tmpdir, enabled: bool) -> None: tmpdir.join("victim.py").write("# ...") toxdir = tmpdir.mkdir(".tox") toxdir.join("verysafe.py").write("# ...") tmpdir.mkdir("_build").mkdir("python3.7").join("importantsystemlibrary.py").write("# ...") 
tmpdir.mkdir(".pants.d").join("pants.py").write("import os") if enabled: config = Config(directory=str(tmpdir)) else: config = Config(skip=[], directory=str(tmpdir)) skipped: List[str] = [] broken: List[str] = [] codes = [str(tmpdir)] files.find(codes, config, skipped, broken) # if enabled files within nested unsafe directories should be skipped file_names = { os.path.relpath(f, str(tmpdir)) for f in files.find([str(tmpdir)], config, skipped, broken) } if enabled: assert file_names == {"victim.py"} assert len(skipped) == 3 else: assert file_names == { os.sep.join((".tox", "verysafe.py")), os.sep.join(("_build", "python3.7", "importantsystemlibrary.py")), os.sep.join((".pants.d", "pants.py")), "victim.py", } assert not skipped # directly pointing to files within unsafe directories shouldn't skip them either way file_names = { os.path.relpath(f, str(toxdir)) for f in files.find([str(toxdir)], Config(directory=str(toxdir)), skipped, broken) } assert file_names == {"verysafe.py"} @pytest.mark.parametrize( "skip_glob_assert", ( ([], 0, {os.sep.join(("code", "file.py"))}), (["**/*.py"], 1, set()), (["*/code/*.py"], 1, set()), ), ) def test_skip_glob(tmpdir, skip_glob_assert: Tuple[List[str], int, Set[str]]) -> None: skip_glob, skipped_count, file_names_expected = skip_glob_assert base_dir = tmpdir.mkdir("build") code_dir = base_dir.mkdir("code") code_dir.join("file.py").write("import os") config = Config(skip_glob=skip_glob, directory=str(base_dir)) skipped: List[str] = [] broken: List[str] = [] file_names = { os.path.relpath(f, str(base_dir)) for f in files.find([str(base_dir)], config, skipped, broken) } assert len(skipped) == skipped_count assert file_names == file_names_expected def test_broken(tmpdir) -> None: base_dir = tmpdir.mkdir("broken") config = Config(directory=str(base_dir)) skipped: List[str] = [] broken: List[str] = [] file_names = { os.path.relpath(f, str(base_dir)) for f in files.find(["not-exist"], config, skipped, broken) } assert len(broken) == 1 assert file_names == set() def test_comments_not_removed_issue_576() -> None: test_input = ( "import distutils\n" "# this comment is important and should not be removed\n" "from sys import api_version as api_version\n" ) assert isort.code(test_input) == test_input def test_reverse_relative_imports_issue_417() -> None: test_input = ( "from . import ipsum\n" "from . import lorem\n" "from .dolor import consecteur\n" "from .sit import apidiscing\n" "from .. import donec\n" "from .. import euismod\n" "from ..mi import iaculis\n" "from ..nec import tempor\n" "from ... import diam\n" "from ... import dui\n" "from ...eu import dignissim\n" "from ...ex import metus\n" ) assert isort.code(test_input, force_single_line=True, reverse_relative=True) == test_input def test_inconsistent_relative_imports_issue_577() -> None: test_input = ( "from ... import diam\n" "from ... import dui\n" "from ...eu import dignissim\n" "from ...ex import metus\n" "from .. import donec\n" "from .. import euismod\n" "from ..mi import iaculis\n" "from ..nec import tempor\n" "from . import ipsum\n" "from . 
import lorem\n" "from .dolor import consecteur\n" "from .sit import apidiscing\n" ) assert isort.code(test_input, force_single_line=True) == test_input def test_unwrap_issue_762() -> None: test_input = "from os.path \\\nimport (join, split)\n" assert isort.code(test_input) == "from os.path import join, split\n" test_input = "from os.\\\n path import (join, split)" assert isort.code(test_input) == "from os.path import join, split\n" def test_multiple_as_imports() -> None: test_input = "from a import b as b\nfrom a import b as bb\nfrom a import b as bb_\n" test_output = isort.code(test_input) assert test_output == test_input test_output = isort.code(test_input, combine_as_imports=True) assert test_output == "from a import b as b, b as bb, b as bb_\n" test_output = isort.code(test_input) assert test_output == test_input test_output = isort.code(code=test_input, combine_as_imports=True) assert test_output == "from a import b as b, b as bb, b as bb_\n" test_input = ( "from a import b\n" "from a import b as b\n" "from a import b as bb\n" "from a import b as bb_\n" ) test_output = isort.code(test_input) assert test_output == test_input test_output = isort.code(code=test_input, combine_as_imports=True) assert test_output == "from a import b, b as b, b as bb, b as bb_\n" test_input = ( "from a import b as e\n" "from a import b as c\n" "from a import b\n" "from a import b as f\n" ) test_output = isort.code(test_input) assert ( test_output == "from a import b\nfrom a import b as c\nfrom a import b as e\nfrom a import b as f\n" ) test_output = isort.code(code=test_input, no_inline_sort=True) assert ( test_output == "from a import b\nfrom a import b as c\nfrom a import b as e\nfrom a import b as f\n" ) test_output = isort.code(code=test_input, combine_as_imports=True) assert test_output == "from a import b, b as c, b as e, b as f\n" test_output = isort.code(code=test_input, combine_as_imports=True, no_inline_sort=True) assert test_output == "from a import b, b as e, b as c, b as f\n" test_input = "import a as a\nimport a as aa\nimport a as aa_\n" test_output = isort.code(code=test_input, combine_as_imports=True) assert test_output == test_input assert test_output == "import a as a\nimport a as aa\nimport a as aa_\n" test_output = isort.code(code=test_input, combine_as_imports=True) assert test_output == test_input def test_all_imports_from_single_module() -> None: test_input = ( "import a\n" "from a import *\n" "from a import b as d\n" "from a import z, x, y\n" "from a import b\n" "from a import w, i as j\n" "from a import b as c, g as h\n" "from a import e as f\n" ) test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=False, force_single_line=False, no_inline_sort=False, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as c\n" "from a import b as d\n" "from a import e as f\n" "from a import g as h\n" "from a import i as j\n" "from a import w, x, y, z\n" ) test_input = ( "import a\n" "from a import *\n" "from a import z, x, y\n" "from a import b\n" "from a import w\n" ) test_output = isort.code( code=test_input, combine_star=True, combine_as_imports=False, force_single_line=False, no_inline_sort=False, ) assert test_output == "import a\nfrom a import *\n" test_input += """ from a import b as c from a import b as d from a import e as f from a import g as h from a import i as j """ test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=True, force_single_line=False, no_inline_sort=False, ) assert 
test_output == ( "import a\n" "from a import *\n" "from a import b, b as c, b as d, e as f, g as h, i as j, w, x, y, z\n" ) test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=False, force_single_line=True, no_inline_sort=False, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as c\n" "from a import b as d\n" "from a import e as f\n" "from a import g as h\n" "from a import i as j\n" "from a import w\n" "from a import x\n" "from a import y\n" "from a import z\n" ) test_input = ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as d\n" "from a import b as c\n" "from a import z, x, y, w\n" "from a import i as j\n" "from a import g as h\n" "from a import e as f\n" ) test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=False, force_single_line=False, no_inline_sort=True, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as c\n" "from a import b as d\n" "from a import z, x, y, w\n" "from a import i as j\n" "from a import g as h\n" "from a import e as f\n" ) test_input = ( "import a\n" "from a import *\n" "from a import z, x, y\n" "from a import b\n" "from a import w\n" ) test_output = isort.code( code=test_input, combine_star=True, combine_as_imports=True, force_single_line=False, no_inline_sort=False, ) assert test_output == "import a\nfrom a import *\n" test_output = isort.code( code=test_input, combine_star=True, combine_as_imports=False, force_single_line=True, no_inline_sort=False, ) assert test_output == "import a\nfrom a import *\n" test_output = isort.code( code=test_input, combine_star=True, combine_as_imports=False, force_single_line=False, no_inline_sort=True, ) assert test_output == "import a\nfrom a import *\n" test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=True, force_single_line=True, no_inline_sort=False, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b\n" "from a import w\n" "from a import x\n" "from a import y\n" "from a import z\n" ) test_input = ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as d\n" "from a import b as c\n" "from a import z, x, y, w\n" "from a import i as j\n" "from a import g as h\n" "from a import e as f\n" ) test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=True, force_single_line=False, no_inline_sort=True, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b, b as d, b as c, z, x, y, w, i as j, g as h, e as f\n" ) test_output = isort.code( code=test_input, combine_star=False, combine_as_imports=False, force_single_line=True, no_inline_sort=True, ) assert test_output == ( "import a\n" "from a import *\n" "from a import b\n" "from a import b as c\n" "from a import b as d\n" "from a import e as f\n" "from a import g as h\n" "from a import i as j\n" "from a import w\n" "from a import x\n" "from a import y\n" "from a import z\n" ) test_input = ( "import a\n" "from a import *\n" "from a import z, x, y\n" "from a import b\n" "from a import w\n" ) test_output = isort.code( code=test_input, combine_star=True, combine_as_imports=True, force_single_line=True, no_inline_sort=False, ) assert test_output == "import a\nfrom a import *\n" def test_noqa_issue_679() -> None: """Test to ensure that NOQA notation is being observed as expected if honor_noqa is set to `True` """ test_input = """ import os import requestsss import zed # NOQA import ujson # NOQA import 
foo""" test_output = """ import os import foo import requestsss import ujson # NOQA import zed # NOQA """ test_output_honor_noqa = """ import os import foo import requestsss import zed # NOQA import ujson # NOQA """ assert isort.code(test_input) == test_output assert isort.code(test_input.lower()) == test_output.lower() assert isort.code(test_input, honor_noqa=True) == test_output_honor_noqa assert isort.code(test_input.lower(), honor_noqa=True) == test_output_honor_noqa.lower() def test_extract_multiline_output_wrap_setting_from_a_config_file(tmpdir: py.path.local) -> None: editorconfig_contents = ["root = true", " [*.py]", "multi_line_output = 5"] config_file = tmpdir.join(".editorconfig") config_file.write("\n".join(editorconfig_contents)) config = Config(settings_path=str(tmpdir)) assert config.multi_line_output == WrapModes.VERTICAL_GRID_GROUPED def test_ensure_support_for_non_typed_but_cased_alphabetic_sort_issue_890() -> None: test_input = ( "from pkg import BALL\n" "from pkg import RC\n" "from pkg import Action\n" "from pkg import Bacoo\n" "from pkg import RCNewCode\n" "from pkg import actual\n" "from pkg import rc\n" "from pkg import recorder\n" ) expected_output = ( "from pkg import Action\n" "from pkg import BALL\n" "from pkg import Bacoo\n" "from pkg import RC\n" "from pkg import RCNewCode\n" "from pkg import actual\n" "from pkg import rc\n" "from pkg import recorder\n" ) assert ( isort.code( code=test_input, case_sensitive=True, order_by_type=False, force_single_line=True ) == expected_output ) def test_to_ensure_empty_line_not_added_to_file_start_issue_889() -> None: test_input = "# comment\nimport os\n# comment2\nimport sys\n" assert isort.code(test_input) == test_input def test_to_ensure_correctly_handling_of_whitespace_only_issue_811(capsys) -> None: test_input = ( "import os\n" "import sys\n" "\n" "\x0c\n" "def my_function():\n" ' print("hi")\n' ) isort.code(test_input, ignore_whitespace=True) out, err = capsys.readouterr() assert out == "" assert err == "" def test_standard_library_deprecates_user_issue_778() -> None: test_input = "import os\n\nimport user\n" assert isort.code(test_input) == test_input @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows") def test_settings_path_skip_issue_909(tmpdir) -> None: base_dir = tmpdir.mkdir("project") config_dir = base_dir.mkdir("conf") config_dir.join(".isort.cfg").write( "[isort]\n" "skip =\n" " file_to_be_skipped.py\n" "skip_glob =\n" " *glob_skip*\n" ) base_dir.join("file_glob_skip.py").write( "import os\n\n" 'print("Hello World")\n' "\nimport sys\nimport os\n" ) base_dir.join("file_to_be_skipped.py").write( "import os\n\n" 'print("Hello World")' "\nimport sys\nimport os\n" ) test_run_directory = os.getcwd() os.chdir(str(base_dir)) with pytest.raises( Exception ): # without the settings path provided: the command should not skip & identify errors subprocess.run(["isort", ".", "--check-only"], check=True) result = subprocess.run( ["isort", ".", "--check-only", "--settings-path=conf/.isort.cfg"], stdout=subprocess.PIPE, check=True, ) os.chdir(str(test_run_directory)) assert b"skipped 2" in result.stdout.lower() @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows") def test_skip_paths_issue_938(tmpdir) -> None: base_dir = tmpdir.mkdir("project") config_dir = base_dir.mkdir("conf") config_dir.join(".isort.cfg").write( "[isort]\n" "line_length = 88\n" "multi_line_output = 4\n" "lines_after_imports = 2\n" "skip_glob =\n" " migrations/**.py\n" ) 
base_dir.join("dont_skip.py").write("import os\n\n" 'print("Hello World")' "\nimport sys\n") migrations_dir = base_dir.mkdir("migrations") migrations_dir.join("file_glob_skip.py").write( "import os\n\n" 'print("Hello World")\n' "\nimport sys\n" ) test_run_directory = os.getcwd() os.chdir(str(base_dir)) result = subprocess.run( ["isort", "dont_skip.py", "migrations/file_glob_skip.py"], stdout=subprocess.PIPE, check=True, ) os.chdir(str(test_run_directory)) assert b"skipped" not in result.stdout.lower() os.chdir(str(base_dir)) result = subprocess.run( [ "isort", "--filter-files", "--settings-path=conf/.isort.cfg", "dont_skip.py", "migrations/file_glob_skip.py", ], stdout=subprocess.PIPE, check=True, ) os.chdir(str(test_run_directory)) assert b"skipped 1" in result.stdout.lower() def test_failing_file_check_916() -> None: test_input = ( "#!/usr/bin/env python\n" "# -*- coding: utf-8 -*-\n" "from __future__ import unicode_literals\n" ) expected_output = ( "#!/usr/bin/env python\n" "# -*- coding: utf-8 -*-\n" "# FUTURE\n" "from __future__ import unicode_literals\n" ) settings = { "import_heading_future": "FUTURE", "sections": ["FUTURE", "STDLIB", "NORDIGEN", "FIRSTPARTY", "THIRDPARTY", "LOCALFOLDER"], "indent": " ", "multi_line_output": 3, "lines_after_imports": 2, } # type: Dict[str, Any] assert isort.code(test_input, **settings) == expected_output assert isort.code(expected_output, **settings) == expected_output assert api.check_code_string(expected_output, **settings) def test_import_heading_issue_905() -> None: config = { "import_heading_stdlib": "Standard library imports", "import_heading_thirdparty": "Third party imports", "import_heading_firstparty": "Local imports", "known_third_party": ["numpy"], "known_first_party": ["oklib"], } # type: Dict[str, Any] test_input = ( "# Standard library imports\n" "from os import path as osp\n" "\n" "# Third party imports\n" "import numpy as np\n" "\n" "# Local imports\n" "from oklib.plot_ok import imagesc\n" ) assert isort.code(test_input, **config) == test_input def test_isort_keeps_comments_issue_691() -> None: test_input = ( "import os\n" "# This will make sure the app is always imported when\n" "# Django starts so that shared_task will use this app.\n" "from .celery import app as celery_app # noqa\n" "\n" "PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\n" "\n" "def path(*subdirectories):\n" " return os.path.join(PROJECT_DIR, *subdirectories)\n" ) expected_output = ( "import os\n" "\n" "# This will make sure the app is always imported when\n" "# Django starts so that shared_task will use this app.\n" "from .celery import app as celery_app # noqa\n" "\n" "PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\n" "\n" "def path(*subdirectories):\n" " return os.path.join(PROJECT_DIR, *subdirectories)\n" ) assert isort.code(test_input) == expected_output def test_isort_multiline_with_tab_issue_1714() -> None: test_input = "from sys \\ \n" "\timport version\n" "print(version)\n" expected_output = "from sys import version\n" "\n" "print(version)\n" assert isort.code(test_input) == expected_output def test_isort_ensures_blank_line_between_import_and_comment() -> None: config = { "ensure_newline_before_comments": True, "lines_between_sections": 0, "known_one": ["one"], "known_two": ["two"], "known_three": ["three"], "known_four": ["four"], "sections": [ "FUTURE", "STDLIB", "FIRSTPARTY", "THIRDPARTY", "LOCALFOLDER", "ONE", "TWO", "THREE", "FOUR", ], } # type: Dict[str, Any] test_input = ( "import os\n" "# noinspection PyUnresolvedReferences\n" 
"import one.a\n" "# noinspection PyUnresolvedReferences\n" "import one.b\n" "# noinspection PyUnresolvedReferences\n" "import two.a as aa\n" "# noinspection PyUnresolvedReferences\n" "import two.b as bb\n" "# noinspection PyUnresolvedReferences\n" "from three.a import a\n" "# noinspection PyUnresolvedReferences\n" "from three.b import b\n" "# noinspection PyUnresolvedReferences\n" "from four.a import a as aa\n" "# noinspection PyUnresolvedReferences\n" "from four.b import b as bb\n" ) expected_output = ( "import os\n" "\n" "# noinspection PyUnresolvedReferences\n" "import one.a\n" "\n" "# noinspection PyUnresolvedReferences\n" "import one.b\n" "\n" "# noinspection PyUnresolvedReferences\n" "import two.a as aa\n" "\n" "# noinspection PyUnresolvedReferences\n" "import two.b as bb\n" "\n" "# noinspection PyUnresolvedReferences\n" "from three.a import a\n" "\n" "# noinspection PyUnresolvedReferences\n" "from three.b import b\n" "\n" "# noinspection PyUnresolvedReferences\n" "from four.a import a as aa\n" "\n" "# noinspection PyUnresolvedReferences\n" "from four.b import b as bb\n" ) assert isort.code(test_input, **config) == expected_output def test_pyi_formatting_issue_942(tmpdir) -> None: test_input = "import os\n\n\ndef my_method():\n" expected_py_output = test_input.splitlines() expected_pyi_output = "import os\n\ndef my_method():\n".splitlines() assert isort.code(test_input).splitlines() == expected_py_output assert isort.code(test_input, extension="pyi").splitlines() == expected_pyi_output source_py = tmpdir.join("source.py") source_py.write(test_input) assert ( isort.code(code=Path(source_py).read_text(), file_path=Path(source_py)).splitlines() == expected_py_output ) source_pyi = tmpdir.join("source.pyi") source_pyi.write(test_input) assert ( isort.code( code=Path(source_pyi).read_text(), extension="pyi", file_path=Path(source_pyi) ).splitlines() == expected_pyi_output ) # Ensure it works for direct file API as well (see: issue #1284) source_pyi = tmpdir.join("source.pyi") source_pyi.write(test_input) api.sort_file(Path(source_pyi)) assert source_pyi.read().splitlines() == expected_pyi_output def test_move_class_issue_751() -> None: test_input = ( "# -*- coding: utf-8 -*-" "\n" "# Define your item pipelines here" "#" "# Don't forget to add your pipeline to the ITEM_PIPELINES setting" "# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html" "from datetime import datetime" "from .items import WeiboMblogItem" "\n" "class WeiboMblogPipeline(object):" " def process_item(self, item, spider):" " if isinstance(item, WeiboMblogItem):" " item = self._process_item(item, spider)" " return item" "\n" " def _process_item(self, item, spider):" " item['inserted_at'] = datetime.now()" " return item" "\n" ) assert isort.code(test_input) == test_input def test_python_version() -> None: from isort.main import parse_args # test that the py_version can be added as flag args = parse_args(["--py=27"]) assert args["py_version"] == "27" args = parse_args(["--python-version=3"]) assert args["py_version"] == "3" test_input = "import os\n\nimport user\n" assert isort.code(test_input, py_version="3") == test_input # user is part of the standard library in python 2 output_python_2 = "import os\nimport user\n" assert isort.code(test_input, py_version="27") == output_python_2 test_input = "import os\nimport xml" print(isort.code(test_input, py_version="all")) def test_isort_with_single_character_import() -> None: """Tests to ensure isort handles single capatilized single character imports as class objects 
by default See Issue #376: https://github.com/pycqa/isort/issues/376 """ test_input = "from django.db.models import CASCADE, SET_NULL, Q\n" assert isort.code(test_input) == test_input def test_isort_nested_imports() -> None: """Ensure imports in a nested block get sorted correctly""" test_input = """ def import_test(): import sys import os # my imports from . import def from . import abc return True """ assert ( isort.code(test_input) == """ def import_test(): import os import sys # my imports from . import abc, def return True """ ) def test_isort_off() -> None: """Test that isort can be turned on and off at will using comments""" test_input = """import os # isort: off import sys import os # isort: on from . import local """ assert isort.code(test_input) == test_input def test_isort_split() -> None: """Test the ability to split isort import sections""" test_input = """import os import sys # isort: split import os import sys """ assert isort.code(test_input) == test_input test_input = """import c import b # isort: split import a import c """ assert isort.code(test_input) == test_input def test_comment_look_alike(): """Test to ensure isort will handle what looks like a single line comment at the end of a multi-line comment. """ test_input = ''' """This is a multi-line comment ending with what appears to be a single line comment # Single Line Comment""" import sys import os ''' assert ( isort.code(test_input) == ''' """This is a multi-line comment ending with what appears to be a single line comment # Single Line Comment""" import os import sys ''' ) def test_cimport_support(): """Test to ensure cimports (Cython style imports) work""" test_input = """ import os import sys import cython import platform import traceback import time import types import re import copy import inspect # used by JavascriptBindings.__SetObjectMethods() import urllib import json import datetime import random if sys.version_info.major == 2: import urlparse else: from urllib import parse as urlparse if sys.version_info.major == 2: from urllib import pathname2url as urllib_pathname2url else: from urllib.request import pathname2url as urllib_pathname2url from cpython.version cimport PY_MAJOR_VERSION import weakref # We should allow multiple string types: str, unicode, bytes. # PyToCefString() can handle them all. # Important: # If you set it to basestring, Cython will accept exactly(!) # str/unicode in Py2 and str in Py3. This won't work in Py3 # as we might want to pass bytes as well. Also it will # reject string subtypes, so using it in publi API functions # would be a bad idea. ctypedef object py_string # You can't use "void" along with cpdef function returning None, it is planned to be # added to Cython in the future, creating this virtual type temporarily. If you # change it later to "void" then don't forget to add "except *". ctypedef object py_void ctypedef long WindowHandle from cpython cimport PyLong_FromVoidPtr from cpython cimport bool as py_bool from libcpp cimport bool as cpp_bool from libcpp.map cimport map as cpp_map from multimap cimport multimap as cpp_multimap from libcpp.pair cimport pair as cpp_pair from libcpp.vector cimport vector as cpp_vector from libcpp.string cimport string as cpp_string from wstring cimport wstring as cpp_wstring from libc.string cimport strlen from libc.string cimport memcpy # preincrement and dereference must be "as" otherwise not seen. 
from cython.operator cimport preincrement as preinc, dereference as deref # from cython.operator cimport address as addr # Address of an c++ object? from libc.stdlib cimport calloc, malloc, free from libc.stdlib cimport atoi # When pyx file cimports * from a pxd file and that pxd cimports * from another pxd # then these names will be visible in pyx file. # Circular imports are allowed in form "cimport ...", but won't work if you do # "from ... cimport *", this is important to know in pxd files. from libc.stdint cimport uint64_t from libc.stdint cimport uintptr_t cimport ctime IF UNAME_SYSNAME == "Windows": from windows cimport * from dpi_aware_win cimport * ELIF UNAME_SYSNAME == "Linux": from linux cimport * ELIF UNAME_SYSNAME == "Darwin": from mac cimport * from cpp_utils cimport * from task cimport * from cef_string cimport * cdef extern from *: ctypedef CefString ConstCefString "const CefString" from cef_types_wrappers cimport * from cef_task cimport * from cef_runnable cimport * from cef_platform cimport * from cef_ptr cimport * from cef_app cimport * from cef_browser cimport * cimport cef_browser_static from cef_client cimport * from client_handler cimport * from cef_frame cimport * # cannot cimport *, that would cause name conflicts with constants. cimport cef_types ctypedef cef_types.cef_paint_element_type_t PaintElementType ctypedef cef_types.cef_jsdialog_type_t JSDialogType from cef_types cimport CefKeyEvent from cef_types cimport CefMouseEvent from cef_types cimport CefScreenInfo # cannot cimport *, name conflicts IF UNAME_SYSNAME == "Windows": cimport cef_types_win ELIF UNAME_SYSNAME == "Darwin": cimport cef_types_mac ELIF UNAME_SYSNAME == "Linux": cimport cef_types_linux from cef_time cimport * from cef_drag cimport * import os IF CEF_VERSION == 1: from cef_v8 cimport * cimport cef_v8_static cimport cef_v8_stack_trace from v8function_handler cimport * from cef_request_cef1 cimport * from cef_web_urlrequest_cef1 cimport * cimport cef_web_urlrequest_static_cef1 from web_request_client_cef1 cimport * from cef_stream cimport * cimport cef_stream_static from cef_response_cef1 cimport * from cef_stream cimport * from cef_content_filter cimport * from content_filter_handler cimport * from cef_download_handler cimport * from download_handler cimport * from cef_cookie_cef1 cimport * cimport cef_cookie_manager_namespace from cookie_visitor cimport * from cef_render_handler cimport * from cef_drag_data cimport * IF UNAME_SYSNAME == "Windows": IF CEF_VERSION == 1: from http_authentication cimport * IF CEF_VERSION == 3: from cef_values cimport * from cefpython_app cimport * from cef_process_message cimport * from cef_web_plugin_cef3 cimport * from cef_request_handler_cef3 cimport * from cef_request_cef3 cimport * from cef_cookie_cef3 cimport * from cef_string_visitor cimport * cimport cef_cookie_manager_namespace from cookie_visitor cimport * from string_visitor cimport * from cef_callback_cef3 cimport * from cef_response_cef3 cimport * from cef_resource_handler_cef3 cimport * from resource_handler_cef3 cimport * from cef_urlrequest_cef3 cimport * from web_request_client_cef3 cimport * from cef_command_line cimport * from cef_request_context cimport * from cef_request_context_handler cimport * from request_context_handler cimport * from cef_jsdialog_handler cimport * """ expected_output = """ import copy import datetime import inspect # used by JavascriptBindings.__SetObjectMethods() import json import os import platform import random import re import sys import time import traceback import 
types import urllib import cython if sys.version_info.major == 2: import urlparse else: from urllib import parse as urlparse if sys.version_info.major == 2: from urllib import pathname2url as urllib_pathname2url else: from urllib.request import pathname2url as urllib_pathname2url from cpython.version cimport PY_MAJOR_VERSION import weakref # We should allow multiple string types: str, unicode, bytes. # PyToCefString() can handle them all. # Important: # If you set it to basestring, Cython will accept exactly(!) # str/unicode in Py2 and str in Py3. This won't work in Py3 # as we might want to pass bytes as well. Also it will # reject string subtypes, so using it in publi API functions # would be a bad idea. ctypedef object py_string # You can't use "void" along with cpdef function returning None, it is planned to be # added to Cython in the future, creating this virtual type temporarily. If you # change it later to "void" then don't forget to add "except *". ctypedef object py_void ctypedef long WindowHandle cimport ctime from cpython cimport PyLong_FromVoidPtr from cpython cimport bool as py_bool # preincrement and dereference must be "as" otherwise not seen. from cython.operator cimport dereference as deref from cython.operator cimport preincrement as preinc from libc.stdint cimport uint64_t, uintptr_t from libc.stdlib cimport atoi, calloc, free, malloc from libc.string cimport memcpy, strlen from libcpp cimport bool as cpp_bool from libcpp.map cimport map as cpp_map from libcpp.pair cimport pair as cpp_pair from libcpp.string cimport string as cpp_string from libcpp.vector cimport vector as cpp_vector from multimap cimport multimap as cpp_multimap from wstring cimport wstring as cpp_wstring # from cython.operator cimport address as addr # Address of an c++ object? # When pyx file cimports * from a pxd file and that pxd cimports * from another pxd # then these names will be visible in pyx file. # Circular imports are allowed in form "cimport ...", but won't work if you do # "from ... cimport *", this is important to know in pxd files. IF UNAME_SYSNAME == "Windows": from dpi_aware_win cimport * from windows cimport * ELIF UNAME_SYSNAME == "Linux": from linux cimport * ELIF UNAME_SYSNAME == "Darwin": from mac cimport * from cef_string cimport * from cpp_utils cimport * from task cimport * cdef extern from *: ctypedef CefString ConstCefString "const CefString" cimport cef_browser_static # cannot cimport *, that would cause name conflicts with constants. 
cimport cef_types from cef_app cimport * from cef_browser cimport * from cef_client cimport * from cef_frame cimport * from cef_platform cimport * from cef_ptr cimport * from cef_runnable cimport * from cef_task cimport * from cef_types_wrappers cimport * from client_handler cimport * ctypedef cef_types.cef_paint_element_type_t PaintElementType ctypedef cef_types.cef_jsdialog_type_t JSDialogType from cef_types cimport CefKeyEvent, CefMouseEvent, CefScreenInfo # cannot cimport *, name conflicts IF UNAME_SYSNAME == "Windows": cimport cef_types_win ELIF UNAME_SYSNAME == "Darwin": cimport cef_types_mac ELIF UNAME_SYSNAME == "Linux": cimport cef_types_linux from cef_drag cimport * from cef_time cimport * import os IF CEF_VERSION == 1: cimport cef_cookie_manager_namespace cimport cef_stream_static cimport cef_v8_stack_trace cimport cef_v8_static cimport cef_web_urlrequest_static_cef1 from cef_content_filter cimport * from cef_cookie_cef1 cimport * from cef_download_handler cimport * from cef_drag_data cimport * from cef_render_handler cimport * from cef_request_cef1 cimport * from cef_response_cef1 cimport * from cef_stream cimport * from cef_v8 cimport * from cef_web_urlrequest_cef1 cimport * from content_filter_handler cimport * from cookie_visitor cimport * from download_handler cimport * from v8function_handler cimport * from web_request_client_cef1 cimport * IF UNAME_SYSNAME == "Windows": IF CEF_VERSION == 1: from http_authentication cimport * IF CEF_VERSION == 3: cimport cef_cookie_manager_namespace from cef_callback_cef3 cimport * from cef_command_line cimport * from cef_cookie_cef3 cimport * from cef_jsdialog_handler cimport * from cef_process_message cimport * from cef_request_cef3 cimport * from cef_request_context cimport * from cef_request_context_handler cimport * from cef_request_handler_cef3 cimport * from cef_resource_handler_cef3 cimport * from cef_response_cef3 cimport * from cef_string_visitor cimport * from cef_urlrequest_cef3 cimport * from cef_values cimport * from cef_web_plugin_cef3 cimport * from cefpython_app cimport * from cookie_visitor cimport * from request_context_handler cimport * from resource_handler_cef3 cimport * from string_visitor cimport * from web_request_client_cef3 cimport * """ assert isort.code(test_input).strip() == expected_output.strip() assert isort.code(test_input, old_finders=True).strip() == expected_output.strip() def test_cdef_support(): assert ( isort.code( code=""" from cpython.version cimport PY_MAJOR_VERSION cdef extern from *: ctypedef CefString ConstCefString "const CefString" """ ) == """ from cpython.version cimport PY_MAJOR_VERSION cdef extern from *: ctypedef CefString ConstCefString "const CefString" """ ) assert ( isort.code( code=""" from cpython.version cimport PY_MAJOR_VERSION cpdef extern from *: ctypedef CefString ConstCefString "const CefString" """ ) == """ from cpython.version cimport PY_MAJOR_VERSION cpdef extern from *: ctypedef CefString ConstCefString "const CefString" """ ) def test_top_level_import_order() -> None: test_input = ( "from rest_framework import throttling, viewsets\n" "from rest_framework.authentication import TokenAuthentication\n" ) assert isort.code(test_input, force_sort_within_sections=True) == test_input def test_noqa_issue_1065() -> None: test_input = """ # # USER SIGNALS # from flask_login import user_logged_in, user_logged_out # noqa from flask_security.signals import ( # noqa password_changed as user_reset_password, # noqa user_confirmed, # noqa user_registered, # noqa ) # noqa from 
flask_principal import identity_changed as user_identity_changed # noqa """ expected_output = """ # # USER SIGNALS # from flask_login import user_logged_in, user_logged_out # noqa from flask_principal import identity_changed as user_identity_changed # noqa from flask_security.signals import password_changed as user_reset_password # noqa from flask_security.signals import user_confirmed # noqa from flask_security.signals import user_registered # noqa """ assert isort.code(test_input, line_length=100) == expected_output test_input_2 = """ # # USER SIGNALS # from flask_login import user_logged_in, user_logged_out # noqa from flask_security.signals import ( password_changed as user_reset_password, # noqa user_confirmed, # noqa user_registered, # noqa ) from flask_principal import identity_changed as user_identity_changed # noqa """ assert isort.code(test_input_2, line_length=100) == expected_output def test_single_line_exclusions(): test_input = """ # start comment from os import path, system from typing import List, TypeVar """ expected_output = """ # start comment from os import path from os import system from typing import List, TypeVar """ assert ( isort.code(code=test_input, force_single_line=True, single_line_exclusions=("typing",)) == expected_output ) def test_nested_comment_handling(): test_input = """ if True: import foo # comment for bar """ assert isort.code(test_input) == test_input # If comments appear inside import sections at same indentation they can be re-arranged. test_input = """ if True: import sys # os import import os """ expected_output = """ if True: # os import import os import sys """ assert isort.code(test_input) == expected_output # Comments shouldn't be unexpectedly rearranged. See issue #1090. test_input = """ def f(): # comment 1 # comment 2 # comment 3 # comment 4 from a import a from b import b """ assert isort.code(test_input) == test_input # Whitespace shouldn't be adjusted for nested imports. See issue #1090. test_input = """ try: import foo except ImportError: import bar """ assert isort.code(test_input) == test_input def test_comments_top_of_file(): """Test to ensure comments at top of file are correctly handled. See issue #1091.""" test_input = """# comment 1 # comment 2 # comment 3 # comment 4 from foo import * """ assert isort.code(test_input) == test_input test_input = """# -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html from datetime import datetime from .items import WeiboMblogItem class WeiboMblogPipeline(object): def process_item(self, item, spider): if isinstance(item, WeiboMblogItem): item = self._process_item(item, spider) return item def _process_item(self, item, spider): item['inserted_at'] = datetime.now() return item """ assert isort.code(test_input) == test_input def test_multiple_aliases(): """Test to ensure isort will retain multiple aliases. See issue #1037""" test_input = """import datetime import datetime as datetime import datetime as dt import datetime as dt2 """ assert isort.code(code=test_input) == test_input def test_parens_in_comment(): """Test to ensure isort can handle parens placed in comments. See issue #1103""" test_input = """from foo import ( # (some text in brackets) bar, ) """ expected_output = "from foo import bar # (some text in brackets)\n" assert isort.code(test_input) == expected_output def test_as_imports_mixed(): """Test to ensure as imports can be mixed with non as. 
See issue #908""" test_input = """from datetime import datetime import datetime.datetime as dt """ expected_output = """import datetime.datetime as dt from datetime import datetime """ assert isort.code(test_input) == expected_output def test_no_sections_with_future(): """Test to ensure no_sections works with future. See issue #807""" test_input = """from __future__ import print_function import os """ expected_output = """from __future__ import print_function import os """ assert isort.code(test_input, no_sections=True) == expected_output def test_no_sections_with_as_import(): """Test to ensure no_sections work with as import.""" test_input = """import oumpy as np import sympy """ assert isort.code(test_input, no_sections=True) == test_input def test_no_lines_too_long(): """Test to ensure no lines end up too long. See issue: #1015""" test_input = """from package1 import first_package, \ second_package from package2 import \\ first_package """ expected_output = """from package1 import \\ first_package, \\ second_package from package2 import \\ first_package """ assert isort.code(test_input, line_length=25, multi_line_output=2) == expected_output def test_python_future_category(): """Test to ensure a manual python future category will work as needed to install aliases see: Issue #1005 """ test_input = """from __future__ import absolute_import # my future comment from future import standard_library standard_library.install_aliases() import os import re import time from logging.handlers import SysLogHandler from builtins import len, object, str from katlogger import log_formatter, log_rollover from .query_elastic import QueryElastic """ expected_output = """from __future__ import absolute_import # my future comment from future import standard_library standard_library.install_aliases() # Python Standard Library import os import re import time from builtins import len, object, str from logging.handlers import SysLogHandler # CAM Packages from katlogger import log_formatter, log_rollover # Explicitly Local from .query_elastic import QueryElastic """ assert ( isort.code( code=test_input, force_grid_wrap=False, include_trailing_comma=True, indent=4, line_length=90, multi_line_output=3, lines_between_types=1, sections=[ "FUTURE_LIBRARY", "FUTURE_THIRDPARTY", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER", ], import_heading_stdlib="Python Standard Library", import_heading_thirdparty="Third Library", import_heading_firstparty="CAM Packages", import_heading_localfolder="Explicitly Local", known_first_party=["katlogger"], known_future_thirdparty=["future"], ) == expected_output ) def test_combine_star_comments_above(): input_text = """from __future__ import absolute_import # my future comment from future import *, something """ expected_output = """from __future__ import absolute_import # my future comment from future import * """ assert isort.code(input_text, combine_star=True) == expected_output def test_deprecated_settings(): """Test to ensure isort warns when deprecated settings are used, but doesn't fail to run""" with pytest.warns(UserWarning): assert isort.code("hi", not_skip=True) def test_deprecated_settings_no_warn_in_quiet_mode(recwarn): """Test to ensure isort does NOT warn in quiet mode even though settings are deprecated""" assert isort.code("hi", not_skip=True, quiet=True) assert not recwarn def test_only_sections() -> None: """Test to ensure that the within sections relative position of imports are maintained""" test_input = ( "import sys\n" "\n" "import numpy as np\n" "\n" "import 
os\n"
        "from os import path as ospath\n"
        "\n"
        "import pandas as pd\n"
        "\n"
        "import math\n"
        "import .views\n"
        "from collections import defaultdict\n"
    )
    assert (
        isort.code(test_input, only_sections=True)
        == (
            "import sys\n"
            "import os\n"
            "import math\n"
            "from os import path as ospath\n"
            "from collections import defaultdict\n"
            "\n"
            "import numpy as np\n"
            "import pandas as pd\n"
            "\n"
            "import .views\n"
        )
        == isort.code(test_input, only_sections=True, force_single_line=True)
    )

    # test to ensure that from_imports remain intact with only_sections
    test_input = "from foo import b, a, c\n"
    assert isort.code(test_input, only_sections=True) == test_input


def test_combine_straight_imports() -> None:
    """Tests to ensure that combine_straight_imports works correctly"""
    test_input = (
        "import os\n" "import sys\n" "# this is a comment\n" "import math # inline comment\n"
    )
    assert isort.code(test_input, combine_straight_imports=True) == (
        "# this is a comment\n" "import math, os, sys # inline comment\n"
    )

    # test to ensure that combine_straight_import works with only_sections
    test_input = "import sys, os\n" "import a\n" "import math\n" "import b\n"
    assert isort.code(test_input, combine_straight_imports=True, only_sections=True) == (
        "import sys, os, math\n" "\n" "import a, b\n"
    )


def test_find_imports_in_code() -> None:
    test_input = (
        "import first_straight\n"
        "\n"
        "import second_straight\n"
        "from first_from import first_from_function_1, first_from_function_2\n"
        "import bad_name as good_name\n"
        "from parent.some_bad_defs import bad_name_1 as ok_name_1, bad_name_2 as ok_name_2\n"
        "\n"
        "# isort: list\n"
        "__all__ = ['b', 'c', 'a']\n"
        "\n"
        "def bla():\n"
        " import needed_in_bla_2\n"
        "\n"
        "\n"
        " import needed_in_bla\n"
        " pass"
        "\n"
        "def bla_bla():\n"
        " import needed_in_bla_bla\n"
        "\n"
        " #import not_really_an_import\n"
        " pass"
        "\n"
        "import needed_in_end\n"
    )
    identified_imports = list(map(str, api.find_imports_in_code(test_input)))
    assert identified_imports == [
        ":1 import first_straight",
        ":3 import second_straight",
        ":4 from first_from import first_from_function_1",
        ":4 from first_from import first_from_function_2",
        ":5 import bad_name as good_name",
        ":6 from parent.some_bad_defs import bad_name_1 as ok_name_1",
        ":6 from parent.some_bad_defs import bad_name_2 as ok_name_2",
        ":12 indented import needed_in_bla_2",
        ":15 indented import needed_in_bla",
        ":18 indented import needed_in_bla_bla",
        ":22 import needed_in_end",
    ]


def test_find_imports_in_stream() -> None:
    """Ensure that find_imports_in_stream can work with nonseekable streams like STDOUT"""

    class NonSeekableTestStream(StringIO):
        def seek(self, position):
            raise OSError("Stream is not seekable")

        def seekable(self):
            return False

    test_input = NonSeekableTestStream("import m2\n" "import m1\n" "not_import = 7")
    identified_imports = list(map(str, api.find_imports_in_stream(test_input)))
    assert identified_imports == [":1 import m2", ":2 import m1"]


def test_split_on_trailing_comma() -> None:
    test_input = "from lib import (a, b, c,)"
    expected_output = """from lib import (
    a,
    b,
    c,
)
"""
    output = isort.code(test_input, split_on_trailing_comma=True)
    assert output == expected_output

    output = isort.code(expected_output, split_on_trailing_comma=True)
    assert output == expected_output


def test_infinite_loop_in_unmatched_parenthesis() -> None:
    test_input = "from os import ("

    # ensure a syntax error is raised for unmatched parenthesis
    with pytest.raises(ExistingSyntaxErrors):
        isort.code(test_input)

    test_input = """from os import (
    path,
    walk
)
"""

    # ensure other cases are handled correctly
    assert isort.code(test_input) == "from os import path, walk\n"


def test_reexport() -> None:
    test_input = """__all__ = ('foo', 'bar')
"""
    expd_output = """__all__ = ('bar', 'foo')
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_leave_alone_if_not_enabled() -> None:
    test_input = """__all__ = ('foo', 'bar')
"""
    assert isort.code(test_input) == test_input


def test_reexport_multiline() -> None:
    test_input = """__all__ = (
    'foo',
    'bar',
)
"""
    expd_output = """__all__ = ('bar', 'foo')
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_list() -> None:
    test_input = """__all__ = ['foo', 'bar']
"""
    expd_output = """__all__ = ['bar', 'foo']
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_set() -> None:
    test_input = """__all__ = {'foo', 'bar'}
"""
    expd_output = """__all__ = {'bar', 'foo'}
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_bare() -> None:
    test_input = """__all__ = 'foo', 'bar'
"""
    expd_output = """__all__ = ('bar', 'foo')
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_no_spaces() -> None:
    test_input = """__all__=('foo', 'bar')
"""
    expd_output = """__all__ = ('bar', 'foo')
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_not_first_line() -> None:
    test_input = """import random
__all__ = ('foo', 'bar')
"""
    expd_output = """import random
__all__ = ('bar', 'foo')
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


def test_reexport_not_last_line() -> None:
    test_input = """__all__ = ('foo', 'bar')
meme = "rickroll"
"""
    expd_output = """__all__ = ('bar', 'foo')
meme = "rickroll"
"""
    assert isort.code(test_input, config=Config(sort_reexports=True)) == expd_output


isort-5.13.2/tests/unit/test_literal.py

import pytest

import isort.literal
from isort import exceptions


def test_value_mismatch():
    with pytest.raises(exceptions.LiteralSortTypeMismatch):
        isort.literal.assignment("x = [1, 2, 3]", "set", "py")


def test_invalid_syntax():
    with pytest.raises(exceptions.LiteralParsingFailure):
        isort.literal.assignment("x = [1, 2, 3", "list", "py")


def test_invalid_sort_type():
    with pytest.raises(ValueError):
        isort.literal.assignment("x = [1, 2, 3", "tuple-list-not-exist", "py")


def test_value_assignment_assignments():
    assert isort.literal.assignment("b = 1\na = 2\n", "assignments", "py") == "a = 2\nb = 1\n"


def test_assignments_invalid_section():
    with pytest.raises(exceptions.AssignmentsFormatMismatch):
        isort.literal.assignment("\n\nx = 1\nx++", "assignments", "py")


isort-5.13.2/tests/unit/test_main.py

import json
import os
import subprocess
from datetime import datetime

import py
import pytest
from hypothesis import given
from hypothesis import strategies as st

from isort import main
from isort._version import __version__
from isort.exceptions import InvalidSettingsPath
from isort.settings import DEFAULT_CONFIG, Config

from .utils import as_stream

from io import BytesIO, TextIOWrapper
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
WrapModes: Any else: from isort.wrap_modes import WrapModes @given( file_name=st.text(), config=st.builds(Config), check=st.booleans(), ask_to_apply=st.booleans(), write_to_stdout=st.booleans(), ) def test_fuzz_sort_imports(file_name, config, check, ask_to_apply, write_to_stdout): main.sort_imports( file_name=file_name, config=config, check=check, ask_to_apply=ask_to_apply, write_to_stdout=write_to_stdout, ) def test_sort_imports(tmpdir): tmp_file = tmpdir.join("file.py") tmp_file.write("import os, sys\n") assert main.sort_imports(str(tmp_file), DEFAULT_CONFIG, check=True).incorrectly_sorted # type: ignore # noqa main.sort_imports(str(tmp_file), DEFAULT_CONFIG) assert not main.sort_imports(str(tmp_file), DEFAULT_CONFIG, check=True).incorrectly_sorted # type: ignore # noqa skip_config = Config(skip=["file.py"]) assert main.sort_imports( # type: ignore str(tmp_file), config=skip_config, check=True, disregard_skip=False ).skipped assert main.sort_imports(str(tmp_file), config=skip_config, disregard_skip=False).skipped # type: ignore # noqa def test_sort_imports_error_handling(tmpdir, mocker, capsys): tmp_file = tmpdir.join("file.py") tmp_file.write("import os, sys\n") mocker.patch("isort.core.process").side_effect = IndexError("Example unhandled exception") with pytest.raises(IndexError): main.sort_imports(str(tmp_file), DEFAULT_CONFIG, check=True).incorrectly_sorted # type: ignore # noqa out, error = capsys.readouterr() assert "Unrecoverable exception thrown when parsing" in error def test_parse_args(): assert main.parse_args([]) == {} assert main.parse_args(["--multi-line", "1"]) == {"multi_line_output": WrapModes.VERTICAL} assert main.parse_args(["--multi-line", "GRID"]) == {"multi_line_output": WrapModes.GRID} assert main.parse_args(["--dont-order-by-type"]) == {"order_by_type": False} assert main.parse_args(["--dt"]) == {"order_by_type": False} assert main.parse_args(["--only-sections"]) == {"only_sections": True} assert main.parse_args(["--os"]) == {"only_sections": True} assert main.parse_args(["--om"]) == {"only_modified": True} assert main.parse_args(["--only-modified"]) == {"only_modified": True} assert main.parse_args(["--csi"]) == {"combine_straight_imports": True} assert main.parse_args(["--combine-straight-imports"]) == {"combine_straight_imports": True} assert main.parse_args(["--dont-follow-links"]) == {"follow_links": False} assert main.parse_args(["--overwrite-in-place"]) == {"overwrite_in_place": True} assert main.parse_args(["--from-first"]) == {"from_first": True} assert main.parse_args(["--resolve-all-configs"]) == {"resolve_all_configs": True} def test_ascii_art(capsys): main.main(["--version"]) out, error = capsys.readouterr() assert ( out == f""" _ _ (_) ___ ___ _ __| |_ | |/ _/ / _ \\/ '__ _/ | |\\__ \\/\\_\\/| | | |_ |_|\\___/\\___/\\_/ \\_/ isort your imports, so you don't have to. 
VERSION {__version__} """ ) assert error == "" def test_preconvert(): assert main._preconvert(frozenset([1, 1, 2])) == [1, 2] assert main._preconvert(WrapModes.GRID) == "GRID" assert main._preconvert(main._preconvert) == "_preconvert" with pytest.raises(TypeError): main._preconvert(datetime.now()) def test_show_files(capsys, tmpdir): tmpdir.join("a.py").write("import a") tmpdir.join("b.py").write("import b") # show files should list the files isort would sort main.main([str(tmpdir), "--show-files"]) out, error = capsys.readouterr() assert "a.py" in out assert "b.py" in out assert not error # can not be used for stream with pytest.raises(SystemExit): main.main(["-", "--show-files"]) # can not be used with show-config with pytest.raises(SystemExit): main.main([str(tmpdir), "--show-files", "--show-config"]) def test_missing_default_section(tmpdir): config_file = tmpdir.join(".isort.cfg") config_file.write( """ [settings] sections=MADEUP """ ) python_file = tmpdir.join("file.py") python_file.write("import os") with pytest.raises(SystemExit): main.main([str(python_file)]) def test_ran_against_root(): with pytest.raises(SystemExit): main.main(["/"]) def test_main(capsys, tmpdir): base_args = [ "-sp", str(tmpdir), "--virtual-env", str(tmpdir), "--src-path", str(tmpdir), ] tmpdir.mkdir(".git") # If nothing is passed in the quick guide is returned without erroring main.main([]) out, error = capsys.readouterr() assert main.QUICK_GUIDE in out assert not error # If no files are passed in but arguments are the quick guide is returned, alongside an error. with pytest.raises(SystemExit): main.main(base_args) out, error = capsys.readouterr() assert main.QUICK_GUIDE in out # Unless the config is requested, in which case it will be returned alone as JSON main.main(base_args + ["--show-config"]) out, error = capsys.readouterr() returned_config = json.loads(out) assert returned_config assert returned_config["virtual_env"] == str(tmpdir) # This should work even if settings path is not provided main.main(base_args[2:] + ["--show-config"]) out, error = capsys.readouterr() assert json.loads(out)["virtual_env"] == str(tmpdir) # This should raise an error if an invalid settings path is provided with pytest.raises(InvalidSettingsPath): main.main( base_args[2:] + ["--show-config"] + ["--settings-path", "/random-root-folder-that-cant-exist-right?"] ) # Should be able to set settings path to a file config_file = tmpdir.join(".isort.cfg") config_file.write( """ [settings] profile=hug verbose=true """ ) config_args = ["--settings-path", str(config_file)] main.main( config_args + ["--virtual-env", "/random-root-folder-that-cant-exist-right?"] + ["--show-config"] ) out, error = capsys.readouterr() assert json.loads(out)["profile"] == "hug" # Should be able to stream in content to sort input_content = TextIOWrapper( BytesIO( b""" import b import a """ ) ) main.main(config_args + ["-"], stdin=input_content) out, error = capsys.readouterr() assert ( out == f""" else-type place_module for b returned {DEFAULT_CONFIG.default_section} else-type place_module for a returned {DEFAULT_CONFIG.default_section} import a import b """ ) # Should be able to stream diff input_content = TextIOWrapper( BytesIO( b""" import b import a """ ) ) main.main(config_args + ["-", "--diff"], stdin=input_content) out, error = capsys.readouterr() assert not error assert "+" in out assert "-" in out assert "import a" in out assert "import b" in out # check should work with stdin input_content_check = TextIOWrapper( BytesIO( b""" import b import a """ ) ) 
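    # --check-only never rewrites the stream: for out-of-order input it reports the
    # problem on stderr and exits non-zero, as the assertions below verify. A minimal
    # stand-alone sketch of the same behaviour (illustrative only, deliberately left as
    # a comment; as_stream is the helper imported from .utils above):
    #
    #     with pytest.raises(SystemExit):
    #         main.main(["-", "--check-only"], stdin=as_stream("import b\nimport a\n"))
    #     _, err = capsys.readouterr()
    #     assert "Imports are incorrectly sorted and/or formatted." in err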
with pytest.raises(SystemExit): main.main(config_args + ["-", "--check-only"], stdin=input_content_check) out, error = capsys.readouterr() assert error == "ERROR: Imports are incorrectly sorted and/or formatted.\n" # Should be able to run with just a file python_file = tmpdir.join("has_imports.py") python_file.write( """ import b import a """ ) main.main([str(python_file), "--filter-files", "--verbose"]) assert python_file.read().lstrip() == "import a\nimport b\n" # Add a file to skip should_skip = tmpdir.join("should_skip.py") should_skip.write("import nothing") main.main( [ str(python_file), str(should_skip), "--filter-files", "--verbose", "--skip", str(should_skip), ] ) # Should raise a system exit if check only, with broken file python_file.write( """ import b import a """ ) with pytest.raises(SystemExit): main.main( [ str(python_file), str(should_skip), "--filter-files", "--verbose", "--check-only", "--skip", str(should_skip), ] ) # Should have same behavior if full directory is skipped with pytest.raises(SystemExit): main.main( [str(tmpdir), "--filter-files", "--verbose", "--check-only", "--skip", str(should_skip)] ) # Nested files should be skipped without needing --filter-files nested_file = tmpdir.mkdir("nested_dir").join("skip.py") nested_file.write("import b;import a") python_file.write( """ import a import b """ ) main.main([str(tmpdir), "--extend-skip", "skip.py", "--check"]) # without filter options passed in should successfully sort files main.main([str(python_file), str(should_skip), "--verbose", "--atomic"]) # Should raise a system exit if all passed path is broken with pytest.raises(SystemExit): main.main(["not-exist", "--check-only"]) # Should not raise system exit if any of passed path is not broken main.main([str(python_file), "not-exist", "--verbose", "--check-only"]) out, error = capsys.readouterr() assert "Broken" in out # warnings should be displayed if old flags are used with pytest.warns(UserWarning): main.main([str(python_file), "--recursive", "-fss"]) # warnings should be displayed when streaming input is provided with old flags as well with pytest.warns(UserWarning): main.main(["-sp", str(config_file), "-"], stdin=input_content) def test_isort_filename_overrides(tmpdir, capsys): """Tests isorts available approaches for overriding filename and extension based behavior""" input_text = """ import b import a def function(): pass """ def build_input_content(): return as_stream(input_text) main.main(["-"], stdin=build_input_content()) out, error = capsys.readouterr() assert not error assert out == ( """ import a import b def function(): pass """ ) # if file is skipped it should output unchanged. main.main( ["-", "--filename", "x.py", "--skip", "x.py", "--filter-files"], stdin=build_input_content(), ) out, error = capsys.readouterr() assert not error assert out == ( """ import b import a def function(): pass """ ) main.main(["-", "--ext-format", "pyi"], stdin=build_input_content()) out, error = capsys.readouterr() assert not error assert out == ( """ import a import b def function(): pass """ ) tmp_file = tmpdir.join("tmp.pyi") tmp_file.write_text(input_text, encoding="utf8") main.main(["-", "--filename", str(tmp_file)], stdin=build_input_content()) out, error = capsys.readouterr() assert not error assert out == ( """ import a import b def function(): pass """ ) # setting a filename override when file is passed in as non-stream is not supported. 
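    # In other words, --filename is only meaningful together with "-" (stdin), where it
    # names the piped-in content so extension- and skip-based rules can apply; passing it
    # alongside a real file path is rejected, as asserted below. Sketch of the two shapes
    # (some_stream and the "x.pyi" path are illustrative placeholders):
    #
    #     main.main(["-", "--filename", "x.pyi"], stdin=some_stream)   # supported: names the stream
    #     main.main(["x.pyi", "--filename", "x.pyi"])                  # SystemExit: non-stream input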
with pytest.raises(SystemExit): main.main([str(tmp_file), "--filename", str(tmp_file)], stdin=build_input_content()) def test_isort_float_to_top_overrides(tmpdir, capsys): """Tests isorts supports overriding float to top from CLI""" test_input = """ import b def function(): pass import a """ config_file = tmpdir.join(".isort.cfg") config_file.write( """ [settings] float_to_top=True """ ) python_file = tmpdir.join("file.py") python_file.write(test_input) main.main([str(python_file)]) out, error = capsys.readouterr() assert not error assert "Fixing" in out assert python_file.read_text(encoding="utf8") == ( """ import a import b def function(): pass """ ) python_file.write(test_input) main.main([str(python_file), "--dont-float-to-top"]) _, error = capsys.readouterr() assert not error assert python_file.read_text(encoding="utf8") == test_input with pytest.raises(SystemExit): main.main([str(python_file), "--float-to-top", "--dont-float-to-top"]) def test_isort_with_stdin(capsys): # ensures that isort sorts stdin without any flags input_content = as_stream( """ import b import a """ ) main.main(["-"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import a import b """ ) input_content_from = as_stream( """ import c import b from a import z, y, x """ ) main.main(["-"], stdin=input_content_from) out, error = capsys.readouterr() assert out == ( """ import b import c from a import x, y, z """ ) # ensures that isort correctly sorts stdin with --fas flag input_content = as_stream( """ import sys import pandas from z import abc from a import xyz """ ) main.main(["-", "--fas"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import xyz from z import abc import pandas import sys """ ) # ensures that isort correctly sorts stdin with --fass flag input_content = as_stream( """ from a import Path, abc """ ) main.main(["-", "--fass"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import abc, Path """ ) # ensures that isort correctly sorts stdin with --ff flag input_content = as_stream( """ import b from c import x from a import y """ ) main.main(["-", "--ff"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import y from c import x import b """ ) # ensures that isort correctly sorts stdin with -fss flag input_content = as_stream( """ import b from a import a """ ) main.main(["-", "--fss"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import a import b """ ) input_content = as_stream( """ import a from b import c """ ) main.main(["-", "--fss"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import a from b import c """ ) # ensures that isort correctly sorts stdin with --ds flag input_content = as_stream( """ import sys import pandas import a """ ) main.main(["-", "--ds"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import a import pandas import sys """ ) # ensures that isort correctly sorts stdin with --cs flag input_content = as_stream( """ from a import b from a import * """ ) main.main(["-", "--cs"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import * """ ) # ensures that isort correctly sorts stdin with --ca flag input_content = as_stream( """ from a import x as X from a import y as Y """ ) main.main(["-", "--ca"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from a import x as X, y as Y """ ) # ensures that isort works 
consistently with check and ws flags input_content = as_stream( """ import os import a import b """ ) main.main(["-", "--check-only", "--ws"], stdin=input_content) out, error = capsys.readouterr() assert not error # ensures that isort works consistently with check and diff flags input_content = as_stream( """ import b import a """ ) with pytest.raises(SystemExit): main.main(["-", "--check", "--diff"], stdin=input_content) out, error = capsys.readouterr() assert error assert "underlying stream is not seekable" not in error assert "underlying stream is not seekable" not in error # ensures that isort correctly sorts stdin with --ls flag input_content = as_stream( """ import abcdef import x """ ) main.main(["-", "--ls"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import x import abcdef """ ) # ensures that isort correctly sorts stdin with --nis flag input_content = as_stream( """ from z import b, c, a """ ) main.main(["-", "--nis"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from z import b, c, a """ ) # ensures that isort correctly sorts stdin with --sl flag input_content = as_stream( """ from z import b, c, a """ ) main.main(["-", "--sl"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ from z import a from z import b from z import c """ ) # ensures that isort correctly sorts stdin with --top flag input_content = as_stream( """ import os import sys """ ) main.main(["-", "--top", "sys"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import sys import os """ ) # ensure that isort correctly sorts stdin with --os flag input_content = as_stream( """ import sys import os import z from a import b, e, c """ ) main.main(["-", "--os"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import sys import os import z from a import b, e, c """ ) # ensures that isort warns with deprecated flags with stdin input_content = as_stream( """ import sys import os """ ) with pytest.warns(UserWarning): main.main(["-", "-ns"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import os import sys """ ) input_content = as_stream( """ import sys import os """ ) with pytest.warns(UserWarning): main.main(["-", "-k"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import os import sys """ ) # ensures that only-modified flag works with stdin input_content = as_stream( """ import a import b """ ) main.main(["-", "--verbose", "--only-modified"], stdin=input_content) out, error = capsys.readouterr() assert "else-type place_module for a returned THIRDPARTY" not in out assert "else-type place_module for b returned THIRDPARTY" not in out # ensures that combine-straight-imports flag works with stdin input_content = as_stream( """ import a import b """ ) main.main(["-", "--combine-straight-imports"], stdin=input_content) out, error = capsys.readouterr() assert out == ( """ import a, b """ ) def test_unsupported_encodings(tmpdir, capsys): tmp_file = tmpdir.join("file.py") # fmt: off tmp_file.write_text( ''' # [syntax-error]\ # -*- coding: IBO-8859-1 -*- """ check correct unknown encoding declaration """ __revision__ = 'ื™ื™ื™ื™' ''', encoding="utf8" ) # fmt: on # should throw an error if only unsupported encoding provided with pytest.raises(SystemExit): main.main([str(tmp_file)]) out, error = capsys.readouterr() assert "No valid encodings." 
in error # should not throw an error if at least one valid encoding found normal_file = tmpdir.join("file1.py") normal_file.write("import os\nimport sys") main.main([str(tmp_file), str(normal_file), "--verbose"]) out, error = capsys.readouterr() def test_stream_skip_file(tmpdir, capsys): input_with_skip = """ # isort: skip_file import b import a """ stream_with_skip = as_stream(input_with_skip) main.main(["-"], stdin=stream_with_skip) out, error = capsys.readouterr() assert out == input_with_skip input_without_skip = input_with_skip.replace("isort: skip_file", "generic comment") stream_without_skip = as_stream(input_without_skip) main.main(["-"], stdin=stream_without_skip) out, error = capsys.readouterr() assert ( out == """ # generic comment import a import b """ ) atomic_input_without_skip = input_with_skip.replace("isort: skip_file", "generic comment") stream_without_skip = as_stream(atomic_input_without_skip) main.main(["-", "--atomic"], stdin=stream_without_skip) out, error = capsys.readouterr() assert ( out == """ # generic comment import a import b """ ) def test_only_modified_flag(tmpdir, capsys): # ensures there is no verbose output for correct files with only-modified flag file1 = tmpdir.join("file1.py") file1.write( """ import a import b """ ) file2 = tmpdir.join("file2.py") file2.write( """ import math import pandas as pd """ ) main.main([str(file1), str(file2), "--verbose", "--only-modified"]) out, error = capsys.readouterr() assert ( out == f""" _ _ (_) ___ ___ _ __| |_ | |/ _/ / _ \\/ '__ _/ | |\\__ \\/\\_\\/| | | |_ |_|\\___/\\___/\\_/ \\_/ isort your imports, so you don't have to. VERSION {__version__} """ ) assert not error # ensures that verbose output is only for modified file(s) with only-modified flag file3 = tmpdir.join("file3.py") file3.write( """ import sys import os """ ) main.main([str(file1), str(file2), str(file3), "--verbose", "--only-modified"]) out, error = capsys.readouterr() assert "else-type place_module for sys returned STDLIB" in out assert "else-type place_module for os returned STDLIB" in out assert "else-type place_module for math returned STDLIB" not in out assert "else-type place_module for pandas returned THIRDPARTY" not in out assert not error # ensures that the behaviour is consistent for check flag with only-modified flag main.main([str(file1), str(file2), "--check-only", "--verbose", "--only-modified"]) out, error = capsys.readouterr() assert ( out == f""" _ _ (_) ___ ___ _ __| |_ | |/ _/ / _ \\/ '__ _/ | |\\__ \\/\\_\\/| | | |_ |_|\\___/\\___/\\_/ \\_/ isort your imports, so you don't have to. 
VERSION {__version__} """ ) assert not error file4 = tmpdir.join("file4.py") file4.write( """ import sys import os """ ) with pytest.raises(SystemExit): main.main([str(file2), str(file4), "--check-only", "--verbose", "--only-modified"]) out, error = capsys.readouterr() assert "else-type place_module for sys returned STDLIB" in out assert "else-type place_module for os returned STDLIB" in out assert "else-type place_module for math returned STDLIB" not in out assert "else-type place_module for pandas returned THIRDPARTY" not in out def test_identify_imports_main(tmpdir, capsys): file_content = "import mod2\nimport mod2\n" "a = 1\n" "import mod1\n" some_file = tmpdir.join("some_file.py") some_file.write(file_content) file_imports = f"{some_file}:1 import mod2\n{some_file}:4 import mod1\n" file_imports_with_dupes = ( f"{some_file}:1 import mod2\n{some_file}:2 import mod2\n" f"{some_file}:4 import mod1\n" ) main.identify_imports_main([str(some_file), "--unique"]) out, error = capsys.readouterr() assert out.replace("\r\n", "\n") == file_imports assert not error main.identify_imports_main([str(some_file)]) out, error = capsys.readouterr() assert out.replace("\r\n", "\n") == file_imports_with_dupes assert not error main.identify_imports_main(["-", "--unique"], stdin=as_stream(file_content)) out, error = capsys.readouterr() assert out.replace("\r\n", "\n") == file_imports.replace(str(some_file), "") main.identify_imports_main(["-"], stdin=as_stream(file_content)) out, error = capsys.readouterr() assert out.replace("\r\n", "\n") == file_imports_with_dupes.replace(str(some_file), "") main.identify_imports_main([str(tmpdir)]) main.identify_imports_main(["-", "--packages"], stdin=as_stream(file_content)) out, error = capsys.readouterr() assert len(out.split("\n")) == 6 main.identify_imports_main(["-", "--modules"], stdin=as_stream(file_content)) out, error = capsys.readouterr() assert len(out.split("\n")) == 3 main.identify_imports_main(["-", "--attributes"], stdin=as_stream(file_content)) out, error = capsys.readouterr() assert len(out.split("\n")) == 3 def test_gitignore(capsys, tmpdir: py.path.local): import_content = """ import b import a """ def main_check(args): try: main.main(args) except SystemExit: pass return capsys.readouterr() subprocess.run(["git", "init", str(tmpdir)]) python_file = tmpdir.join("has_imports.py") python_file.write(import_content) tmpdir.join("no_imports.py").write("...") out, error = main_check([str(python_file), "--skip-gitignore", "--filter-files", "--check"]) assert "has_imports.py" in error and "no_imports.py" not in error tmpdir.join(".gitignore").write("has_imports.py") out, error = main_check([str(python_file), "--check"]) assert "has_imports.py" in error and "no_imports.py" not in error out, error = main_check([str(python_file), "--skip-gitignore", "--filter-files", "--check"]) assert "Skipped" in out # Should work with nested directories tmpdir.mkdir("nested_dir") tmpdir.join(".gitignore").write("nested_dir/has_imports.py") subfolder_file = tmpdir.join("nested_dir/has_imports.py") subfolder_file.write(import_content) out, error = main_check([str(tmpdir), "--skip-gitignore", "--filter-files", "--check"]) assert "has_imports.py" in error and "nested_dir/has_imports.py" not in error # Should work with relative path currentdir = os.getcwd() os.chdir(tmpdir) out, error = main_check([".", "--skip-gitignore", "--filter-files", "--check"]) assert "has_imports.py" in error and "nested_dir/has_imports.py" not in error tmpdir.join(".gitignore").write( """ 
nested_dir/has_imports.py has_imports.py """ ) out, error = main_check([".", "--skip-gitignore", "--filter-files", "--check"]) assert "Skipped" in out os.chdir(currentdir) # Should work with multiple git projects tmpdir.join(".git").remove() tmpdir.join(".gitignore").remove() # git_project0 # | has_imports_ignored.py ignored # | has_imports.py should check git_project0 = tmpdir.mkdir("git_project0") subprocess.run(["git", "init", str(git_project0)]) git_project0.join(".gitignore").write("has_imports_ignored.py") git_project0.join("has_imports_ignored.py").write(import_content) git_project0.join("has_imports.py").write(import_content) # git_project1 # | has_imports.py should check # | nested_dir # | has_imports_ignored.py ignored # | has_imports.py should check # | nested_dir_ignored ignored # | has_imports.py ignored from folder git_project1 = tmpdir.mkdir("git_project1") subprocess.run(["git", "init", str(git_project1)]) git_project1.join(".gitignore").write( """ nested_dir/has_imports_ignored.py nested_dir_ignored """ ) git_project1.join("has_imports.py").write(import_content) nested_dir = git_project1.mkdir("nested_dir") nested_dir.join("has_imports.py").write(import_content) nested_dir.join("has_imports_ignored.py").write(import_content) git_project1.mkdir("nested_dir_ignored").join("has_imports.py").write(import_content) should_check = [ "/has_imports.py", "/nested_dir/has_imports.py", "/git_project0/has_imports.py", "/git_project1/has_imports.py", "/git_project1/nested_dir/has_imports.py", ] out, error = main_check([str(tmpdir), "--skip-gitignore", "--filter-files", "--check"]) if os.name == "nt": should_check = [sc.replace("/", "\\") for sc in should_check] assert all(f"{str(tmpdir)}{file}" in error for file in should_check) out, error = main_check([str(tmpdir), "--skip-gitignore", "--filter-files"]) assert all(f"{str(tmpdir)}{file}" in out for file in should_check) # Should work when git project contains symlinks if os.name != "nt": git_project0.join("has_imports_ignored.py").write(import_content) git_project0.join("has_imports.py").write(import_content) tmpdir.join("has_imports.py").write(import_content) tmpdir.join("nested_dir").join("has_imports.py").write(import_content) git_project0.join("ignore_link.py").mksymlinkto(tmpdir.join("has_imports.py")) git_project0.join("ignore_link").mksymlinkto(tmpdir.join("nested_dir")) git_project0.join(".gitignore").write("ignore_link.py\nignore_link", mode="a") out, error = main_check( [str(git_project0), "--skip-gitignore", "--filter-files", "--check"] ) should_check = ["/git_project0/has_imports.py"] assert all(f"{str(tmpdir)}{file}" in error for file in should_check) out, error = main_check([str(git_project0), "--skip-gitignore", "--filter-files"]) assert all(f"{str(tmpdir)}{file}" in out for file in should_check) def test_multiple_configs(capsys, tmpdir): # Ensure that --resolve-all-configs flag resolves multiple configs correctly # and sorts files corresponding to their nearest config setup_cfg = """ [isort] from_first=True """ pyproject_toml = """ [tool.isort] no_inline_sort = \"True\" """ isort_cfg = """ [settings] force_single_line=True """ broken_isort_cfg = """ [iaort_confg] force_single_line=True """ dir1 = tmpdir / "subdir1" dir2 = tmpdir / "subdir2" dir3 = tmpdir / "subdir3" dir4 = tmpdir / "subdir4" dir1.mkdir() dir2.mkdir() dir3.mkdir() dir4.mkdir() setup_cfg_file = dir1 / "setup.cfg" setup_cfg_file.write_text(setup_cfg, "utf-8") pyproject_toml_file = dir2 / "pyproject.toml" pyproject_toml_file.write_text(pyproject_toml, 
"utf-8") isort_cfg_file = dir3 / ".isort.cfg" isort_cfg_file.write_text(isort_cfg, "utf-8") broken_isort_cfg_file = dir4 / ".isort.cfg" broken_isort_cfg_file.write_text(broken_isort_cfg, "utf-8") import_section = """ from a import y, z, x import b """ file1 = dir1 / "file1.py" file1.write_text(import_section, "utf-8") file2 = dir2 / "file2.py" file2.write_text(import_section, "utf-8") file3 = dir3 / "file3.py" file3.write_text(import_section, "utf-8") file4 = dir4 / "file4.py" file4.write_text(import_section, "utf-8") file5 = tmpdir / "file5.py" file5.write_text(import_section, "utf-8") main.main([str(tmpdir), "--resolve-all-configs", "--cr", str(tmpdir), "--verbose"]) out, _ = capsys.readouterr() assert f"{str(setup_cfg_file)} used for file {str(file1)}" in out assert f"{str(pyproject_toml_file)} used for file {str(file2)}" in out assert f"{str(isort_cfg_file)} used for file {str(file3)}" in out assert f"default used for file {str(file4)}" in out assert f"default used for file {str(file5)}" in out assert ( file1.read() == """ from a import x, y, z import b """ ) assert ( file2.read() == """ import b from a import y, z, x """ ) assert ( file3.read() == """ import b from a import x from a import y from a import z """ ) assert ( file4.read() == """ import b from a import x, y, z """ ) assert ( file5.read() == """ import b from a import x, y, z """ ) # Ensure that --resolve-all-config flags works with --check file6 = dir1 / "file6.py" file6.write( """ import b from a import x, y, z """ ) with pytest.raises(SystemExit): main.main([str(tmpdir), "--resolve-all-configs", "--cr", str(tmpdir), "--check"]) _, err = capsys.readouterr() assert f"{str(file6)} Imports are incorrectly sorted and/or formatted" in err def test_multiple_src_paths(tmpdir, capsys): """ Ensure that isort has consistent behavior with multiple source paths """ tests_module = tmpdir / "tests" app_module = tmpdir / "app" tests_module.mkdir() app_module.mkdir() pyproject_toml = tmpdir / "pyproject.toml" pyproject_toml.write_text( """ [tool.isort] profile = "black" src_paths = ["app", "tests"] auto_identify_namespace_packages = false """, "utf-8", ) file = tmpdir / "file.py" file.write_text( """ from app.something import something from tests.something import something_else """, "utf-8", ) for _ in range(10): # To ensure isort has consistent results in multiple runs main.main([str(tmpdir), "--verbose"]) out, _ = capsys.readouterr() assert ( file.read() == """ from app.something import something from tests.something import something_else """ ) assert "from-type place_module for tests.something returned FIRSTPARTY" in out ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_output.py0000644000000000000000000000110214536412763014767 0ustar00from hypothesis import given, reject from hypothesis import strategies as st import isort.comments @given( comments=st.one_of(st.none(), st.lists(st.text())), original_string=st.text(), removed=st.booleans(), comment_prefix=st.text(), ) def test_fuzz_add_to_line(comments, original_string, removed, comment_prefix): try: isort.comments.add_to_line( comments=comments, original_string=original_string, removed=removed, comment_prefix=comment_prefix, ) except ValueError: reject() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_parse.py0000644000000000000000000000534514536412763014556 0ustar00import pytest from hypothesis import given from hypothesis 
import strategies as st from isort import parse from isort.settings import Config TEST_CONTENTS = """ import xyz import abc import (\\ # one one as \\ # two three) import \\ zebra as \\ # one not_bacon from x import (\\ # one one as \\ # two three) def function(): pass """ def test_file_contents(): ( in_lines, out_lines, import_index, _, _, _, _, _, change_count, original_line_count, _, _, _, _, ) = parse.file_contents(TEST_CONTENTS, config=Config(default_section="")) assert "\n".join(in_lines) == TEST_CONTENTS assert "import" not in "\n".join(out_lines) assert import_index == 1 assert change_count == -11 assert original_line_count == len(in_lines) # These tests were written by the `hypothesis.extra.ghostwriter` module # and is provided under the Creative Commons Zero public domain dedication. @given(contents=st.text()) def test_fuzz__infer_line_separator(contents): parse._infer_line_separator(contents=contents) @given(import_string=st.text()) def test_fuzz__strip_syntax(import_string): parse.strip_syntax(import_string=import_string) @given(line=st.text(), config=st.builds(Config)) def test_fuzz_import_type(line, config): parse.import_type(line=line, config=config) @given( line=st.text(), in_quote=st.text(), index=st.integers(), section_comments=st.lists(st.text()), needs_import=st.booleans(), ) def test_fuzz_skip_line(line, in_quote, index, section_comments, needs_import): parse.skip_line( line=line, in_quote=in_quote, index=index, section_comments=section_comments, needs_import=needs_import, ) @pytest.mark.parametrize( "raw_line, expected", ( ("from . cimport a", "from . cimport a"), ("from.cimport a", "from . cimport a"), ("from..cimport a", "from .. cimport a"), ("from . import a", "from . import a"), ("from.import a", "from . import a"), ("from..import a", "from .. import a"), ("import *", "import *"), ("import*", "import *"), ("from . import a", "from . import a"), ("from .import a", "from . import a"), ("from ..import a", "from .. import a"), ("from . cimport a", "from . cimport a"), ("from .cimport a", "from . cimport a"), ("from ..cimport a", "from .. cimport a"), ("from\t.\timport a", "from . 
import a"), ), ) def test_normalize_line(raw_line, expected): line, returned_raw_line = parse.normalize_line(raw_line) assert line == expected assert returned_raw_line == raw_line ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_place.py0000644000000000000000000000521214536412763014521 0ustar00"""Tests for the isort import placement module""" from functools import partial from isort import place, sections from isort.settings import Config def test_module(src_path): place_tester = partial(place.module, config=Config(src_paths=[src_path])) assert place_tester("isort") == sections.FIRSTPARTY assert place_tester("os") == sections.STDLIB assert place_tester(".deprecated") == sections.LOCALFOLDER assert place_tester("__future__") == sections.FUTURE assert place_tester("hug") == sections.THIRDPARTY def test_extra_standard_library(src_path): place_tester = partial( place.module, config=Config(src_paths=[src_path], extra_standard_library=["hug"]) ) assert place_tester("os") == sections.STDLIB assert place_tester("hug") == sections.STDLIB def test_no_standard_library_placement(): assert place.module_with_reason( "pathlib", config=Config(sections=["THIRDPARTY"], default_section="THIRDPARTY") ) == ("THIRDPARTY", "Default option in Config or universal default.") assert place.module("pathlib") == "STDLIB" def test_namespace_package_placement(examples_path): namespace_examples = examples_path / "namespaces" implicit = namespace_examples / "implicit" pkg_resource = namespace_examples / "pkg_resource" pkgutil = namespace_examples / "pkgutil" for namespace_test in (implicit, pkg_resource, pkgutil): print(namespace_test) config = Config(settings_path=namespace_test) no_namespaces = Config(settings_path=namespace_test, auto_identify_namespace_packages=False) namespace_override = Config(settings_path=namespace_test, known_firstparty=["root.name"]) assert place.module("root.name", config=config) == "THIRDPARTY" assert place.module("root.nested", config=config) == "FIRSTPARTY" assert place.module("root.name", config=no_namespaces) == "FIRSTPARTY" assert place.module("root.name", config=namespace_override) == "FIRSTPARTY" no_namespace = namespace_examples / "none" almost_implicit = namespace_examples / "almost-implicit" weird_encoding = namespace_examples / "weird_encoding" for lacks_namespace in (no_namespace, almost_implicit, weird_encoding): config = Config(settings_path=lacks_namespace) manual_namespace = Config(settings_path=lacks_namespace, namespace_packages=["root"]) assert place.module("root.name", config=config) == "FIRSTPARTY" assert place.module("root.nested", config=config) == "FIRSTPARTY" assert place.module("root.name", config=manual_namespace) == "THIRDPARTY" assert place.module("root.nested", config=config) == "FIRSTPARTY" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_pylama_isort.py0000644000000000000000000000143314536412763016141 0ustar00from isort.pylama_isort import Linter class TestLinter: instance = Linter() def test_allow(self): assert not self.instance.allow("test_case.pyc") assert not self.instance.allow("test_case.c") assert self.instance.allow("test_case.py") def test_run(self, tmpdir): correct = tmpdir.join("incorrect.py") correct.write("import a\nimport b\n") assert not self.instance.run(str(correct)) incorrect = tmpdir.join("incorrect.py") incorrect.write("import b\nimport a\n") assert self.instance.run(str(incorrect)) def 
test_skip(self, tmpdir): incorrect = tmpdir.join("incorrect.py") incorrect.write("# isort: skip_file\nimport b\nimport a\n") assert not self.instance.run(str(incorrect)) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_regressions.py0000644000000000000000000012223614536412763016006 0ustar00"""A growing set of tests designed to ensure isort doesn't have regressions in new versions""" from io import StringIO import pytest import isort def test_isort_duplicating_comments_issue_1264(): """Ensure isort doesn't duplicate comments when force_sort_within_sections is set to `True` as was the case in issue #1264: https://github.com/pycqa/isort/issues/1264 """ assert ( isort.code( """ from homeassistant.util.logging import catch_log_exception # Loading the config flow... from . import config_flow """, force_sort_within_sections=True, ).count("# Loading the config flow...") == 1 ) def test_moving_comments_issue_726(): test_input = ( "from Blue import models as BlueModels\n" "# comment for PlaidModel\n" "from Plaid.models import PlaidModel\n" ) assert isort.code(test_input, force_sort_within_sections=True) == test_input test_input = ( "# comment for BlueModels\n" "from Blue import models as BlueModels\n" "# comment for PlaidModel\n" "# another comment for PlaidModel\n" "from Plaid.models import PlaidModel\n" ) assert isort.code(test_input, force_sort_within_sections=True) == test_input def test_blank_lined_removed_issue_1275(): """Ensure isort doesn't accidentally remove blank lines after doc strings and before imports. See: https://github.com/pycqa/isort/issues/1275 """ assert ( isort.code( '''""" My docstring """ from b import thing from a import other_thing ''' ) == '''""" My docstring """ from a import other_thing from b import thing ''' ) assert ( isort.code( '''""" My docstring """ from b import thing from a import other_thing ''', add_imports=["from b import thing"], ) == '''""" My docstring """ from a import other_thing from b import thing ''' ) def test_blank_lined_removed_issue_1283(): """Ensure isort doesn't accidentally remove blank lines after __version__ identifiers. See: https://github.com/pycqa/isort/issues/1283 """ test_input = """__version__ = "0.58.1" from starlette import status """ assert isort.code(test_input) == test_input def test_extra_blank_line_added_nested_imports_issue_1290(): """Ensure isort doesn't add unnecessary blank lines above nested imports. See: https://github.com/pycqa/isort/issues/1290 """ test_input = '''from typing import TYPE_CHECKING # Special imports from special import thing if TYPE_CHECKING: # Special imports from special import another_thing def func(): """Docstring""" # Special imports from special import something_else return ''' assert ( isort.code( test_input, import_heading_special="Special imports", known_special=["special"], sections=["FUTURE", "STDLIB", "THIRDPARTY", "SPECIAL", "FIRSTPARTY", "LOCALFOLDER"], ) == test_input ) def test_add_imports_shouldnt_make_isort_unusable_issue_1297(): """Test to ensure add imports doesn't cause any unexpected behaviour when combined with check See: https://github.com/pycqa/isort/issues/1297 """ assert isort.check_code( """from __future__ import unicode_literals from os import path """, add_imports={"from __future__ import unicode_literals"}, ) def test_no_extra_lines_for_imports_in_functions_issue_1277(): """Test to ensure isort doesn't introduce extra blank lines for imports within function. 
See: https://github.com/pycqa/isort/issues/1277 """ test_input = """ def main(): import time import sys """ expected_output = """ def main(): import sys import time """ assert isort.code(isort.code(isort.code(test_input))) == expected_output def test_no_extra_blank_lines_in_methods_issue_1293(): """Test to ensure isort isn't introducing extra lines in methods that contain imports See: https://github.com/pycqa/isort/issues/1293 """ test_input = """ class Something(object): def on_email_deleted(self, email): from hyperkitty.tasks import rebuild_thread_cache_new_email # update or cleanup thread # noqa: E303 (isort issue) if self.emails.count() == 0: ... """ assert isort.code(test_input) == test_input assert isort.code(test_input, lines_after_imports=2) == test_input def test_force_single_line_shouldnt_remove_preceding_comment_lines_issue_1296(): """Tests to ensure force_single_line setting doesn't result in lost comments. See: https://github.com/pycqa/isort/issues/1296 """ test_input = """ # A comment # A comment # Oh no, I'm gone from moo import foo """ # assert isort.code(test_input) == test_input assert isort.code(test_input, force_single_line=True) == test_input def test_ensure_new_line_before_comments_mixed_with_ensure_newline_before_comments_1295(): """Tests to ensure that the black profile can be used in conjunction with force_sort_within_sections. See: https://github.com/pycqa/isort/issues/1295 """ test_input = """ from openzwave.group import ZWaveGroup from openzwave.network import ZWaveNetwork # pylint: disable=import-error from openzwave.option import ZWaveOption """ assert isort.code(test_input, profile="black") == test_input assert isort.code(test_input, profile="black", force_sort_within_sections=True) == test_input def test_trailing_comma_doesnt_introduce_broken_code_with_comment_and_wrap_issue_1302(): """Tests to assert the combination of include_trailing_comma and a wrapped line doesnt break. See: https://github.com/pycqa/isort/issues/1302. """ assert ( isort.code( """ from somewhere import very_very_very_very_very_very_long_symbol # some comment """, line_length=50, include_trailing_comma=True, ) == """ from somewhere import \\ very_very_very_very_very_very_long_symbol # some comment """ ) def test_ensure_sre_parse_is_identified_as_stdlib_issue_1304(): """Ensure sre_parse is idenified as STDLIB. See: https://github.com/pycqa/isort/issues/1304. """ assert ( isort.place_module("sre_parse") == isort.place_module("sre") == isort.settings.STDLIB # type: ignore # noqa ) def test_add_imports_shouldnt_move_lower_comments_issue_1300(): """Ensure add_imports doesn't move comments immediately below imports. See:: https://github.com/pycqa/isort/issues/1300. """ test_input = """from __future__ import unicode_literals from os import path # A comment for a constant ANSWER = 42 """ assert isort.code(test_input, add_imports=["from os import path"]) == test_input def test_windows_newline_issue_1277(): """Test to ensure windows new lines are correctly handled within indented scopes. See: https://github.com/pycqa/isort/issues/1277 """ assert ( isort.code("\ndef main():\r\n import time\r\n\n import sys\r\n") == "\ndef main():\r\n import sys\r\n import time\r\n" ) def test_windows_newline_issue_1278(): """Test to ensure windows new lines are correctly handled within indented scopes. 
See: https://github.com/pycqa/isort/issues/1278 """ assert isort.check_code( "\ntry:\r\n import datadog_agent\r\n\r\n " "from ..log import CheckLoggingAdapter, init_logging\r\n\r\n init_logging()\r\n" "except ImportError:\r\n pass\r\n" ) def test_check_never_passes_with_indented_headings_issue_1301(): """Test to ensure that test can pass even when there are indented headings. See: https://github.com/pycqa/isort/issues/1301 """ assert isort.check_code( """ try: # stdlib import logging from os import abc, path except ImportError: pass """, import_heading_stdlib="stdlib", ) def test_isort_shouldnt_fail_on_long_from_with_dot_issue_1190(): """Test to ensure that isort will correctly handle formatting a long from import that contains a dot. See: https://github.com/pycqa/isort/issues/1190 """ assert ( isort.code( """ from this_is_a_very_long_import_statement.that_will_occur_across_two_lines\\ .when_the_line_length.is_only_seventynine_chars import ( function1, function2, ) """, line_length=79, multi_line_output=3, ) == """ from this_is_a_very_long_import_statement.that_will_occur_across_two_lines""" """.when_the_line_length.is_only_seventynine_chars import ( function1, function2 ) """ ) def test_isort_shouldnt_add_extra_new_line_when_fass_and_n_issue_1315(): """Test to ensure isort doesnt add a second extra new line when combining --fss and -n options. See: https://github.com/pycqa/isort/issues/1315 """ assert isort.check_code( """import sys # Comment canary from . import foo """, ensure_newline_before_comments=True, # -n force_sort_within_sections=True, # -fss show_diff=True, # for better debugging in the case the test case fails. ) assert ( isort.code( """ from . import foo # Comment canary from .. import foo """, ensure_newline_before_comments=True, force_sort_within_sections=True, ) == """ from . import foo # Comment canary from .. import foo """ ) def test_isort_doesnt_rewrite_import_with_dot_to_from_import_issue_1280(): """Test to ensure isort doesn't rewrite imports in the from of import y.x into from y import x. This is because they are not technically fully equivalent to eachother and can introduce broken behaviour. See: https://github.com/pycqa/isort/issues/1280 """ assert isort.check_code( """ import test.module import test.module as m from test import module from test import module as m """, show_diff=True, ) def test_isort_shouldnt_introduce_extra_lines_with_fass_issue_1322(): """Tests to ensure isort doesn't introduce extra lines when used with fass option. See: https://github.com/pycqa/isort/issues/1322 """ assert ( isort.code( """ import logging # Comment canary from foo import bar import quux """, force_sort_within_sections=True, ensure_newline_before_comments=True, ) == """ import logging # Comment canary from foo import bar import quux """ ) def test_comments_should_cause_wrapping_on_long_lines_black_mode_issue_1219(): """Tests to ensure if isort encounters a single import line which is made too long with a comment it is wrapped when using black profile. See: https://github.com/pycqa/isort/issues/1219 """ assert isort.check_code( """ from many_stop_words import ( get_stop_words as get_base_stopwords, # extended list of stop words, also for en ) """, show_diff=True, profile="black", ) def test_comment_blocks_should_stay_associated_without_extra_lines_issue_1156(): """Tests to ensure isort doesn't add an extra line when there are large import blocks or otherwise warp the intent. 
See: https://github.com/pycqa/isort/issues/1156 """ assert ( isort.code( """from top_level_ignored import config # isort:skip #################################### # COMMENT BLOCK SEPARATING THESE # #################################### from ast import excepthandler import logging """ ) == """from top_level_ignored import config # isort:skip import logging #################################### # COMMENT BLOCK SEPARATING THESE # #################################### from ast import excepthandler """ ) def test_comment_shouldnt_be_duplicated_with_fass_enabled_issue_1329(): """Tests to ensure isort doesn't duplicate comments when imports occur with comment on top, immediately after large comment blocks. See: https://github.com/pycqa/isort/pull/1329/files. """ assert isort.check_code( """''' Multi-line docstring ''' # Comment for A. import a # Comment for B - not A! import b """, force_sort_within_sections=True, show_diff=True, ) def test_wrap_mode_equal_to_line_length_with_indendet_imports_issue_1333(): assert isort.check_code( """ import a import b def function(): import a as b import c as d """, line_length=17, wrap_length=17, show_diff=True, ) def test_isort_skipped_nested_imports_issue_1339(): """Ensure `isort:skip are honored in nested imports. See: https://github.com/pycqa/isort/issues/1339. """ assert isort.check_code( """ def import_test(): from os ( # isort:skip import path ) """, show_diff=True, ) def test_windows_diff_too_large_misrepresentative_issue_1348(test_path): """Ensure isort handles windows files correctly when it come to producing a diff with --diff. See: https://github.com/pycqa/isort/issues/1348 """ diff_output = StringIO() isort.file(test_path / "example_crlf_file.py", show_diff=diff_output) diff_output.seek(0) assert diff_output.read().endswith( "-1,5 +1,5 @@\n+import a\r\n import b\r\n" "-import a\r\n \r\n \r\n def func():\r\n" ) def test_combine_as_does_not_lose_comments_issue_1321(): """Test to ensure isort doesn't lose comments when --combine-as is used. See: https://github.com/pycqa/isort/issues/1321 """ test_input = """ from foo import * # noqa from foo import bar as quux # other from foo import x as a # noqa import operator as op # op comment import datetime as dtime # dtime comment from datetime import date as d # dcomm from datetime import datetime as dt # dtcomm """ expected_output = """ import datetime as dtime # dtime comment import operator as op # op comment from datetime import date as d, datetime as dt # dcomm; dtcomm from foo import * # noqa from foo import bar as quux, x as a # other; noqa """ assert isort.code(test_input, combine_as_imports=True) == expected_output def test_combine_as_does_not_lose_comments_issue_1381(): """Test to ensure isort doesn't lose comments when --combine-as is used. See: https://github.com/pycqa/isort/issues/1381 """ test_input = """ from smtplib import SMTPConnectError, SMTPNotSupportedError # important comment """ assert "# important comment" in isort.code(test_input, combine_as_imports=True) test_input = """ from appsettings import AppSettings, ObjectSetting, StringSetting # type: ignore """ assert "# type: ignore" in isort.code(test_input, combine_as_imports=True) def test_incorrect_grouping_when_comments_issue_1396(): """Test to ensure isort groups import correct independent of the comments present. 
See: https://github.com/pycqa/isort/issues/1396 """ assert ( isort.code( """from django.shortcuts import render from apps.profiler.models import Project from django.contrib.auth.decorators import login_required from django.views.generic import ( # ListView, # DetailView, TemplateView, # CreateView, # View ) """, line_length=88, known_first_party=["apps"], known_django=["django"], sections=["FUTURE", "STDLIB", "DJANGO", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"], ) == """from django.contrib.auth.decorators import login_required from django.shortcuts import render from django.views.generic import \\ TemplateView # ListView,; DetailView,; CreateView,; View from apps.profiler.models import Project """ ) assert ( isort.code( """from django.contrib.auth.decorators import login_required from django.shortcuts import render from apps.profiler.models import Project from django.views.generic import ( # ListView,; DetailView,; CreateView,; View TemplateView, ) """, line_length=88, known_first_party=["apps"], known_django=["django"], sections=["FUTURE", "STDLIB", "DJANGO", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"], include_trailing_comma=True, multi_line_output=3, force_grid_wrap=0, use_parentheses=True, ensure_newline_before_comments=True, ) == """from django.contrib.auth.decorators import login_required from django.shortcuts import render from django.views.generic import ( # ListView,; DetailView,; CreateView,; View TemplateView, ) from apps.profiler.models import Project """ ) def test_reverse_relative_combined_with_force_sort_within_sections_issue_1395(): """Test to ensure reverse relative combines well with other common isort settings. See: https://github.com/pycqa/isort/issues/1395. """ assert isort.check_code( """from .fileA import a_var from ..fileB import b_var """, show_diff=True, reverse_relative=True, force_sort_within_sections=True, order_by_type=False, case_sensitive=False, multi_line_output=5, sections=["FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "APPLICATION", "LOCALFOLDER"], lines_after_imports=2, no_lines_before="LOCALFOLDER", ) def test_isort_should_be_able_to_add_independent_of_doc_string_placement_issue_1420(): """isort should be able to know when an import requested to be added is sucesfully added, independent of where the top doc string is located. See: https://github.com/PyCQA/isort/issues/1420 """ assert isort.check_code( '''"""module docstring""" import os ''', show_diff=True, add_imports=["os"], ) def test_comments_should_never_be_moved_between_imports_issue_1427(): """isort should never move comments to different import statement. See: https://github.com/PyCQA/isort/issues/1427 """ assert isort.check_code( """from package import CONSTANT from package import * # noqa """, force_single_line=True, show_diff=True, ) def test_isort_doesnt_misplace_comments_issue_1431(): """Test to ensure isort wont misplace comments. 
See: https://github.com/PyCQA/isort/issues/1431 """ input_text = """from com.my_lovely_company.my_lovely_team.my_lovely_project.my_lovely_component import ( MyLovelyCompanyTeamProjectComponent, # NOT DRY ) from com.my_lovely_company.my_lovely_team.my_lovely_project.my_lovely_component import ( MyLovelyCompanyTeamProjectComponent as component, # DRY ) """ assert isort.code(input_text, profile="black") == input_text def test_isort_doesnt_misplace_add_import_issue_1445(): """Test to ensure isort won't misplace an added import depending on docstring position See: https://github.com/PyCQA/isort/issues/1445 """ assert ( isort.code( '''#!/usr/bin/env python """module docstring""" ''', add_imports=["import os"], ) == '''#!/usr/bin/env python """module docstring""" import os ''' ) assert isort.check_code( '''#!/usr/bin/env python """module docstring""" import os ''', add_imports=["import os"], show_diff=True, ) def test_isort_doesnt_mangle_code_when_adding_imports_issue_1444(): """isort should NEVER mangle code. This particularly nasty and easy to reproduce bug, caused isort to produce invalid code just by adding a single import statement depending on comment placement. See: https://github.com/PyCQA/isort/issues/1444 """ assert ( isort.code( ''' """module docstring""" ''', add_imports=["import os"], ) == ''' """module docstring""" import os ''' ) def test_isort_float_to_top_with_sort_on_off_tests(): """Characterization test for current behaviour of float-to-top on isort: on/off sections. - imports in isort:off sections stay where they are - imports in isort:on sections float up, but to the top of the isort:on section (not the top of the file)""" assert ( isort.code( """ def foo(): pass import a # isort: off import stays_in_section x = 1 import stays_in_place # isort: on def bar(): pass import floats_to_top_of_section def baz(): pass """, float_to_top=True, ) == """import a def foo(): pass # isort: off import stays_in_section x = 1 import stays_in_place # isort: on import floats_to_top_of_section def bar(): pass def baz(): pass """ ) to_sort = """# isort: off def foo(): pass import stays_in_place import no_float_to_to_top import no_ordering def bar(): pass """ # No changes if isort is off assert isort.code(to_sort, float_to_top=True) == to_sort def test_isort_doesnt_float_to_top_correctly_when_imports_not_at_top_issue_1382(): """isort should float existing imports to the top, if they are currently below the top. 
See: https://github.com/PyCQA/isort/issues/1382 """ assert ( isort.code( """ def foo(): pass import a def bar(): pass """, float_to_top=True, ) == """import a def foo(): pass def bar(): pass """ ) assert ( isort.code( """ def foo(): pass import a def bar(): pass """, float_to_top=True, ) == """import a def foo(): pass def bar(): pass """ ) assert ( isort.code( '''"""My comment """ def foo(): pass import a def bar(): pass ''', float_to_top=True, ) == '''"""My comment """ import a def foo(): pass def bar(): pass ''' ) assert ( isort.code( ''' """My comment """ def foo(): pass import a def bar(): pass ''', float_to_top=True, ) == ''' """My comment """ import a def foo(): pass def bar(): pass ''' ) assert ( isort.code( '''#!/bin/bash """My comment """ def foo(): pass import a def bar(): pass ''', float_to_top=True, ) == '''#!/bin/bash """My comment """ import a def foo(): pass def bar(): pass ''' ) assert ( isort.code( '''#!/bin/bash """My comment """ def foo(): pass import a def bar(): pass ''', float_to_top=True, ) == '''#!/bin/bash """My comment """ import a def foo(): pass def bar(): pass ''' ) def test_empty_float_to_top_shouldnt_error_issue_1453(): """isort shouldn't error when float to top is set with a mostly empty file""" assert isort.check_code( """ """, show_diff=True, float_to_top=True, ) assert isort.check_code( """ """, show_diff=True, ) def test_import_sorting_shouldnt_be_endless_with_headers_issue_1454(): """isort should never enter an endless sorting loop. See: https://github.com/PyCQA/isort/issues/1454 """ assert isort.check_code( """ # standard library imports import sys try: # Comment about local lib # related third party imports from local_lib import stuff except ImportError as e: pass """, known_third_party=["local_lib"], import_heading_thirdparty="related third party imports", show_diff=True, ) def test_isort_should_leave_non_import_from_lines_alone_issue_1488(): """isort should never mangle non-import from statements. 
See: https://github.com/PyCQA/isort/issues/1488 """ raise_from_should_be_ignored = """ raise SomeException("Blah") \\ from exceptionsInfo.popitem()[1] """ assert isort.check_code(raise_from_should_be_ignored, show_diff=True) yield_from_should_be_ignored = """ def generator_function(): yield \\ from other_function()[1] """ assert isort.check_code(yield_from_should_be_ignored, show_diff=True) wont_ignore_comment_contiuation = """ # one # two def function(): # three \\ import b import a """ assert ( isort.code(wont_ignore_comment_contiuation) == """ # one # two def function(): # three \\ import a import b """ ) will_ignore_if_non_comment_continuation = """ # one # two def function(): raise \\ import b import a """ assert isort.check_code(will_ignore_if_non_comment_continuation, show_diff=True) yield_from_parens_should_be_ignored = """ def generator_function(): ( yield from other_function()[1] ) """ assert isort.check_code(yield_from_parens_should_be_ignored, show_diff=True) yield_from_lots_of_parens_and_space_should_be_ignored = """ def generator_function(): ( ( (((( ((((( (( ((( yield from other_function()[1] ))))))))))))) ))) """ assert isort.check_code(yield_from_lots_of_parens_and_space_should_be_ignored, show_diff=True) yield_from_should_be_ignored_when_following_import_statement = """ def generator_function(): import os yield \\ from other_function()[1] """ assert isort.check_code( yield_from_should_be_ignored_when_following_import_statement, show_diff=True ) yield_at_file_end_ignored = """ def generator_function(): ( ( (((( ((((( (( ((( yield """ assert isort.check_code(yield_at_file_end_ignored, show_diff=True) raise_at_file_end_ignored = """ def generator_function(): ( ( (((( ((((( (( ((( raise ( """ assert isort.check_code(raise_at_file_end_ignored, show_diff=True) raise_from_at_file_end_ignored = """ def generator_function(): ( ( (((( ((((( (( ((( raise \\ from \\ """ assert isort.check_code(raise_from_at_file_end_ignored, show_diff=True) def test_isort_float_to_top_correctly_identifies_single_line_comments_1499(): """Test to ensure isort correctly handles the case where float to top is used to push imports to the top and the top comment is a multiline type but only one line. See: https://github.com/PyCQA/isort/issues/1499 """ assert ( isort.code( '''#!/bin/bash """My comment""" def foo(): pass import a def bar(): pass ''', float_to_top=True, ) == ( '''#!/bin/bash """My comment""" import a def foo(): pass def bar(): pass ''' ) ) assert ( isort.code( """#!/bin/bash '''My comment''' def foo(): pass import a def bar(): pass """, float_to_top=True, ) == ( """#!/bin/bash '''My comment''' import a def foo(): pass def bar(): pass """ ) ) assert isort.check_code( """#!/bin/bash '''My comment''' import a x = 1 """, float_to_top=True, show_diff=True, ) def test_isort_shouldnt_mangle_from_multi_line_string_issue_1507(): """isort was seen mangling lines that happened to contain the word from after a yield happened to be in a file. Clearly this shouldn't happen. See: https://github.com/PyCQA/isort/issues/1507. """ assert isort.check_code( ''' def a(): yield f( """ select %s from (values %%s) as t(%s) """ ) def b(): return ( """ select name from foo """ % main_table ) def c(): query = ( """ select {keys} from (values %s) as t(id) """ ) def d(): query = f"""select t.id from {table} t {extra}""" ''', show_diff=True, ) def test_isort_should_keep_all_as_and_non_as_imports_issue_1523(): """isort should keep as and non-as imports of the same path that happen to exist within the same statement. 
See: https://github.com/PyCQA/isort/issues/1523. """ assert isort.check_code( """ from selenium.webdriver import Remote, Remote as Driver """, show_diff=True, combine_as_imports=True, ) def test_isort_shouldnt_introduce_syntax_error_issue_1539(): """isort should NEVER introduce syntax errors. In 5.5.4 some strings that contained a line starting with from could lead to no empty paren. See: https://github.com/PyCQA/isort/issues/1539. """ assert isort.check_code( '''"""Foobar from {}""".format( "bar", ) ''', show_diff=True, ) assert isort.check_code( '''"""Foobar import {}""".format( "bar", ) ''', show_diff=True, ) assert ( isort.code( '''"""Foobar from {}""" from a import b, a ''', ) == '''"""Foobar from {}""" from a import a, b ''' ) assert ( isort.code( '''"""Foobar from {}""" import b import a ''', ) == '''"""Foobar from {}""" import a import b ''' ) def test_isort_shouldnt_split_skip_issue_1548(): """Ensure isort doesn't add a spurious new line if isort: skip is combined with float to top. See: https://github.com/PyCQA/isort/issues/1548. """ assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip prune_dependencies, ) """, show_diff=True, profile="black", float_to_top=True, ) assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip prune_dependencies, ) import a import b """, show_diff=True, profile="black", float_to_top=True, ) assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import # isort:skip import a import b """, show_diff=True, float_to_top=True, ) assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip a ) import b """, show_diff=True, profile="black", float_to_top=True, ) assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip ) """, show_diff=True, profile="black", float_to_top=True, ) assert isort.check_code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip )""", show_diff=True, profile="black", float_to_top=True, ) assert ( isort.code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip ) """, profile="black", float_to_top=True, add_imports=["import os"], ) == """from tools.dependency_pruning.prune_dependencies import ( # isort:skip ) import os """ ) assert ( isort.code( """from tools.dependency_pruning.prune_dependencies import ( # isort:skip )""", profile="black", float_to_top=True, add_imports=["import os"], ) == """from tools.dependency_pruning.prune_dependencies import ( # isort:skip ) import os """ ) def test_isort_shouldnt_split_skip_issue_1556(): assert isort.check_code( """ from tools.dependency_pruning.prune_dependencies import ( # isort:skip prune_dependencies, ) from tools.developer_pruning.prune_developers import ( # isort:skip prune_developers, ) """, show_diff=True, profile="black", float_to_top=True, ) assert isort.check_code( """ from tools.dependency_pruning.prune_dependencies import ( # isort:skip prune_dependencies, ) from tools.developer_pruning.prune_developers import x # isort:skip """, show_diff=True, profile="black", float_to_top=True, ) def test_isort_losing_imports_vertical_prefix_from_module_import_wrap_mode_issue_1542(): """Ensure isort doesnt lose imports when a comment is combined with an import and wrap mode VERTICAL_PREFIX_FROM_MODULE_IMPORT is used. See: https://github.com/PyCQA/isort/issues/1542. 
""" assert ( isort.code( """ from xxxxxxxxxxxxxxxx import AAAAAAAAAA, BBBBBBBBBB from xxxxxxxxxxxxxxxx import CCCCCCCCC, DDDDDDDDD # xxxxxxxxxxxxxxxxxx print(CCCCCCCCC) """, multi_line_output=9, ) == """ from xxxxxxxxxxxxxxxx import AAAAAAAAAA, BBBBBBBBBB # xxxxxxxxxxxxxxxxxx from xxxxxxxxxxxxxxxx import CCCCCCCCC, DDDDDDDDD print(CCCCCCCCC) """ ) assert isort.check_code( """ from xxxxxxxxxxxxxxxx import AAAAAAAAAA, BBBBBBBBBB from xxxxxxxxxxxxxxxx import CCCCCCCCC, DDDDDDDDD # xxxxxxxxxxxxxxxxxx isort: skip print(CCCCCCCCC) """, show_diff=True, multi_line_output=9, ) def test_isort_adding_second_comma_issue_1621(): """Ensure isort doesnt add a second comma when very long comment is present See: https://github.com/PyCQA/isort/issues/1621. """ assert isort.check_code( """from .test import ( TestTestTestTestTestTest2 as TestTestTestTestTestTest1, """ """# Some really long comment bla bla bla bla bla ) """, profile="black", show_diff=True, ) assert ( isort.code( """from .test import ( TestTestTestTestTestTest2 as TestTestTestTestTestTest1 """ """# Some really long comment bla bla bla bla bla ) """, profile="black", ) == """from .test import ( TestTestTestTestTestTest2 as TestTestTestTestTestTest1, """ """# Some really long comment bla bla bla bla bla ) """ ) def test_isort_shouldnt_duplicate_comments_issue_1631(): assert isort.check_code( """ import a # a comment import a as b # b comment """, show_diff=True, ) assert ( isort.code( """ import a # a comment import a as a # b comment """, remove_redundant_aliases=True, ) == """ import a # a comment; b comment """ ) def test_isort_shouldnt_add_extra_new_lines_with_import_heading_issue_1670(): snippet = """#!/usr/bin/python3 -ttu # Standard Library import argparse import datetime import attr import requests def foo() -> int: print("Hello world") return 0 def spam(): # Standard Library import collections import logging """ assert ( isort.code( snippet, import_heading_stdlib="Standard Library", ) == snippet ) def test_isort_shouldnt_add_extra_line_float_to_top_issue_1667(): assert isort.check_code( """ import sys sys.path.insert(1, 'path/containing/something_else/..') import something_else # isort:skip # Some constant SOME_CONSTANT = 4 """, show_diff=True, float_to_top=True, ) def test_isort_shouldnt_move_noqa_comment_issue_1594(): assert ( isort.code( """ from .test import TestTestTestTestTestTest1 # noqa: F401 from .test import TestTestTestTestTestTest2, TestTestTestTestTestTest3, """ """TestTestTestTestTestTest4, TestTestTestTestTestTest5 # noqa: F401 """, profile="black", ) == """ from .test import TestTestTestTestTestTest1 # noqa: F401 from .test import ( # noqa: F401 TestTestTestTestTestTest2, TestTestTestTestTestTest3, TestTestTestTestTestTest4, TestTestTestTestTestTest5, ) """ ) def test_isort_correctly_handles_unix_vs_linux_newlines_issue_1566(): import_statement = ( "from impacket.smb3structs import (\n" "SMB2_CREATE, SMB2_FLAGS_DFS_OPERATIONS, SMB2_IL_IMPERSONATION, " "SMB2_OPLOCK_LEVEL_NONE, SMB2Create," "\nSMB2Create_Response, SMB2Packet)\n" ) assert isort.code(import_statement, line_length=120) == isort.code( import_statement.replace("\n", "\r\n"), line_length=120 ).replace("\r\n", "\n") def test_isort_treats_src_paths_same_as_from_config_as_cli_issue_1711(tmpdir): assert isort.check_code( """ import mymodule import sqlalchemy """, show_diff=True, ) config_file = tmpdir.join(".isort.cfg") config_file.write( """ [settings] src_paths= api """ ) api_dir = tmpdir.mkdir("api") api_dir.join("mymodule.py").write("# comment") config = 
isort.settings.Config(str(config_file)) assert isort.check_code( """ import sqlalchemy import mymodule """, show_diff=True, config=config, ) def test_isort_should_never_quietly_remove_imports_in_hanging_line_mode_issue_1741(): assert ( isort.code( """ from src import abcd, qwerty, efg, xyz # some comment """, line_length=50, multi_line_output=2, ) == """ from src import abcd, efg, qwerty, xyz \\ # some comment """ ) assert ( isort.code( """ from src import abcd, qwerty, efg, xyz # some comment """, line_length=54, multi_line_output=2, ) == """ from src import abcd, efg, qwerty, xyz # some comment """ ) assert ( isort.code( """ from src import abcd, qwerty, efg, xyz # some comment """, line_length=53, multi_line_output=2, ) == """ from src import abcd, efg, qwerty, xyz \\ # some comment """ ) assert ( isort.code( """ from src import abcd, qwerty, efg, xyz # some comment """, line_length=30, multi_line_output=2, ) == """ from src import abcd, efg, \\ qwerty, xyz \\ # some comment """ ) @pytest.mark.parametrize("multi_line_output", range(12)) def test_isort_should_never_quietly_remove_imports_in_any_hangin_mode_issue_1741( multi_line_output: int, ): sorted_code = isort.code( """ from src import abcd, qwerty, efg, xyz # some comment """, line_length=30, multi_line_output=multi_line_output, ) assert "abcd" in sorted_code assert "qwerty" in sorted_code assert "efg" in sorted_code assert "xyz" in sorted_code def test_isort_should_keep_multi_noqa_with_star_issue_1744(): assert isort.check_code( """ from typing import * # noqa from typing import IO, BinaryIO, Union # noqa """, show_diff=True, ) assert isort.check_code( """ from typing import * # noqa 1 from typing import IO, BinaryIO, Union # noqa 2 """, show_diff=True, ) assert isort.check_code( """ from typing import * # noqa from typing import IO, BinaryIO, Union """, show_diff=True, ) assert isort.check_code( """ from typing import * from typing import IO, BinaryIO, Union # noqa """, show_diff=True, ) assert ( isort.code( """ from typing import * # hi from typing import IO, BinaryIO, Union # noqa """, combine_star=True, ) == """ from typing import * # noqa; hi """ ) assert ( isort.code( """ from typing import * # noqa from typing import IO, BinaryIO, Union # noqa """, combine_star=True, ) == """ from typing import * # noqa """ ) def test_isort_should_keep_multiple_noqa_comments_force_single_line_mode_issue_1721(): assert isort.check_code( """ from some_very_long_filename_to_import_from_that_causes_a_too_long_import_row import ( # noqa: E501 CONSTANT_1, ) from some_very_long_filename_to_import_from_that_causes_a_too_long_import_row import ( # noqa: E501 CONSTANT_2, ) """, show_diff=True, profile="black", force_single_line=True, ) def test_isort_should_only_add_imports_to_valid_location_issue_1769(): assert ( isort.code( '''v = """ """.split( "\n" ) ''', add_imports=["from __future__ import annotations"], ) == '''from __future__ import annotations v = """ """.split( "\n" ) ''' ) assert ( isort.code( '''v=""""""''', add_imports=["from __future__ import annotations"], ) == '''from __future__ import annotations v="""""" ''' ) def test_literal_sort_at_top_of_file_issue_1792(): assert ( isort.code( '''"""I'm a docstring! Look at me!""" # isort: unique-list __all__ = ["Foo", "Foo", "Bar"] from typing import final # arbitrary @final class Foo: ... @final class Bar: ... ''' ) == '''"""I'm a docstring! Look at me!""" # isort: unique-list __all__ = ['Bar', 'Foo'] from typing import final # arbitrary @final class Foo: ... @final class Bar: ... 
'''
    )


def test_isort_should_produce_the_same_code_on_subsequent_runs_issue_1799(tmpdir):
    code = """import sys

if sys.version_info[:2] >= (3, 8):
    # TODO: Import directly (no need for conditional) when `python_requires = >= 3.8`
    from importlib.metadata import PackageNotFoundError, version  # pragma: no cover
else:
    from importlib_metadata import PackageNotFoundError, version  # pragma: no cover
"""
    config_file = tmpdir.join(".isort.cfg")
    config_file.write(
        """[isort]
profile=black
src_paths=isort,test
line_length=100
skip=.tox,.venv,build,dist,docs,tests
extra_standard_library=pkg_resources,setuptools,typing
known_test=pytest
known_first_party=ibpt
sections=FUTURE,STDLIB,TEST,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
import_heading_firstparty=internal
import_heading_thirdparty=external
"""
    )
    settings = isort.settings.Config(str(config_file))
    assert isort.code(code, config=settings) == isort.code(
        isort.code(code, config=settings), config=settings
    )


# ---- isort-5.13.2/tests/unit/test_settings.py ----
import os
import sys
from pathlib import Path

import pytest

from isort import exceptions, settings
from isort.settings import Config
from isort.wrap_modes import WrapModes


class TestConfig:
    instance = Config()

    def test_init(self):
        assert Config()

    def test_init_unsupported_settings_fails_gracefully(self):
        with pytest.raises(exceptions.UnsupportedSettings):
            Config(apply=True)
        try:
            Config(apply=True)
        except exceptions.UnsupportedSettings as error:
            assert error.unsupported_settings == {"apply": {"value": True, "source": "runtime"}}

    def test_known_settings(self):
        assert Config(known_third_party=["one"]).known_third_party == frozenset({"one"})
        assert Config(known_thirdparty=["two"]).known_third_party == frozenset({"two"})
        assert Config(
            known_third_party=["one"], known_thirdparty=["two"]
        ).known_third_party == frozenset({"one"})

    def test_invalid_settings_path(self):
        with pytest.raises(exceptions.InvalidSettingsPath):
            Config(settings_path="this_couldnt_possibly_actually_exists/could_it")

    def test_invalid_pyversion(self):
        with pytest.raises(ValueError):
            Config(py_version=10)

    def test_invalid_profile(self):
        with pytest.raises(exceptions.ProfileDoesNotExist):
            Config(profile="blackandwhitestylemixedwithpep8")

    def test_is_skipped(self):
        assert Config().is_skipped(Path("C:\\path\\isort.py"))
        assert Config(skip=["/path/isort.py"]).is_skipped(Path("C:\\path\\isort.py"))

    def test_is_supported_filetype(self):
        assert self.instance.is_supported_filetype("file.py")
        assert self.instance.is_supported_filetype("file.pyi")
        assert self.instance.is_supported_filetype("file.pyx")
        assert self.instance.is_supported_filetype("file.pxd")
        assert not self.instance.is_supported_filetype("file.pyc")
        assert not self.instance.is_supported_filetype("file.txt")
        assert not self.instance.is_supported_filetype("file.pex")

    def test_is_supported_filetype_ioerror(self, tmpdir):
        does_not_exist = tmpdir.join("fake.txt")
        assert not self.instance.is_supported_filetype(str(does_not_exist))

    def test_is_supported_filetype_shebang(self, tmpdir):
        path = tmpdir.join("myscript")
        path.write("#!/usr/bin/env python\n")
        assert self.instance.is_supported_filetype(str(path))

    def test_is_supported_filetype_editor_backup(self, tmpdir):
        path = tmpdir.join("myscript~")
        path.write("#!/usr/bin/env python\n")
        assert not self.instance.is_supported_filetype(str(path))

    def test_is_supported_filetype_defaults(self, tmpdir):
        assert self.instance.is_supported_filetype(str(tmpdir.join("stub.pyi")))
        assert self.instance.is_supported_filetype(str(tmpdir.join("source.py")))
        assert self.instance.is_supported_filetype(str(tmpdir.join("source.pyx")))

    def test_is_supported_filetype_configuration(self, tmpdir):
        config = Config(supported_extensions=("pyx",), blocked_extensions=("py",))
        assert config.is_supported_filetype(str(tmpdir.join("stub.pyx")))
        assert not config.is_supported_filetype(str(tmpdir.join("stub.py")))

    @pytest.mark.skipif(
        sys.platform == "win32", reason="cannot create fifo file on Windows platform"
    )
    def test_is_supported_filetype_fifo(self, tmpdir):
        fifo_file = os.path.join(tmpdir, "fifo_file")
        os.mkfifo(fifo_file)
        assert not self.instance.is_supported_filetype(fifo_file)

    def test_src_paths_are_combined_and_deduplicated(self):
        src_paths = ["src", "tests"]
        src_full_paths = (Path(os.getcwd()) / f for f in src_paths)
        assert sorted(Config(src_paths=src_paths * 2).src_paths) == sorted(src_full_paths)

    def test_src_paths_supports_glob_expansion(self, tmp_path):
        libs = tmp_path / "libs"
        libs.mkdir()
        requests = libs / "requests"
        requests.mkdir()
        beautifulpasta = libs / "beautifulpasta"
        beautifulpasta.mkdir()
        assert sorted(Config(directory=tmp_path, src_paths=["libs/*"]).src_paths) == sorted(
            (beautifulpasta, requests)
        )

    def test_deprecated_multi_line_output(self):
        assert Config(multi_line_output=6).multi_line_output == WrapModes.VERTICAL_GRID_GROUPED  # type: ignore # noqa


def test_as_list():
    assert settings._as_list([" one "]) == ["one"]  # type: ignore
    assert settings._as_list("one,two") == ["one", "two"]


def _write_simple_settings(tmp_file):
    tmp_file.write_text(
        """
[isort]
force_grid_wrap=true
""",
        "utf8",
    )


def test_find_config(tmpdir):
    tmp_config = tmpdir.join(".isort.cfg")

    # can't find config if it has no relevant section
    tmp_config.write_text(
        """
[section]
force_grid_wrap=true
""",
        "utf8",
    )
    assert not settings._find_config(str(tmpdir))[1]

    # or if it is malformed
    tmp_config.write_text("""arstoyrsyan arienrsaeinrastyngpuywnlguyn354q^%$)(%_)@$""", "utf8")
    assert not settings._find_config(str(tmpdir))[1]

    # can when it has either a file format, or generic relevant section
    _write_simple_settings(tmp_config)
    assert settings._find_config(str(tmpdir))[1]


def test_find_config_deep(tmpdir):
    # can't find config if it is further up than MAX_CONFIG_SEARCH_DEPTH
    dirs = [f"dir{i}" for i in range(settings.MAX_CONFIG_SEARCH_DEPTH + 1)]
    tmp_dirs = tmpdir.ensure(*dirs, dirs=True)
    tmp_config = tmpdir.join("dir0", ".isort.cfg")
    _write_simple_settings(tmp_config)
    assert not settings._find_config(str(tmp_dirs))[1]
    # but can find config if it is MAX_CONFIG_SEARCH_DEPTH up
    one_parent_up = os.path.split(str(tmp_dirs))[0]
    assert settings._find_config(one_parent_up)[1]


def test_get_config_data(tmpdir):
    test_config = tmpdir.join("test_config.editorconfig")
    test_config.write_text(
        """
root = true

[*.{js,py}]
indent_style=tab
indent_size=tab

[*.py]
force_grid_wrap=false
comment_prefix="text"

[*.{java}]
indent_style = space
""",
        "utf8",
    )
    loaded_settings = settings._get_config_data(
        str(test_config), sections=settings.CONFIG_SECTIONS[".editorconfig"]
    )
    assert loaded_settings
    assert loaded_settings["comment_prefix"] == "text"
    assert loaded_settings["force_grid_wrap"] == 0
    assert loaded_settings["indent"] == "\t"
    assert str(tmpdir) in loaded_settings["source"]


def test_editorconfig_without_sections(tmpdir):
    test_config = tmpdir.join("test_config.editorconfig")
    test_config.write_text("\nroot = true\n", "utf8")
    loaded_settings = settings._get_config_data(str(test_config), sections=("*.py",))
    assert not loaded_settings


def test_get_config_data_with_toml_and_utf8(tmpdir):
    test_config = tmpdir.join("pyproject.toml")
    # Exception: UnicodeDecodeError: 'gbk' codec can't decode byte 0x84 in position 57
    test_config.write_text(
        """
[tool.poetry]
description = "ๅŸบไบŽFastAPI + Mysql็š„ TodoList"  # Exception: UnicodeDecodeError
name = "TodoList"
version = "0.1.0"

[tool.isort]
multi_line_output = 3
""",
        "utf8",
    )
    loaded_settings = settings._get_config_data(
        str(test_config), sections=settings.CONFIG_SECTIONS["pyproject.toml"]
    )
    assert loaded_settings
    assert str(tmpdir) in loaded_settings["source"]


def test_as_bool():
    assert settings._as_bool("TrUe") is True
    assert settings._as_bool("true") is True
    assert settings._as_bool("t") is True
    assert settings._as_bool("FALSE") is False
    assert settings._as_bool("faLSE") is False
    assert settings._as_bool("f") is False
    with pytest.raises(ValueError):
        settings._as_bool("")
    with pytest.raises(ValueError):
        settings._as_bool("falsey")
    with pytest.raises(ValueError):
        settings._as_bool("truthy")


def test_find_all_configs(tmpdir):
    setup_cfg = """
[isort]
profile=django
"""
    pyproject_toml = """
[tool.isort]
profile = "hug"
"""
    isort_cfg = """
[settings]
profile=black
"""
    pyproject_toml_broken = """
[tool.isorts]
something = nothing
"""
    dir1 = tmpdir / "subdir1"
    dir2 = tmpdir / "subdir2"
    dir3 = tmpdir / "subdir3"
    dir4 = tmpdir / "subdir4"
    dir1.mkdir()
    dir2.mkdir()
    dir3.mkdir()
    dir4.mkdir()

    setup_cfg_file = dir1 / "setup.cfg"
    setup_cfg_file.write_text(setup_cfg, "utf-8")

    pyproject_toml_file = dir2 / "pyproject.toml"
    pyproject_toml_file.write_text(pyproject_toml, "utf-8")

    isort_cfg_file = dir3 / ".isort.cfg"
    isort_cfg_file.write_text(isort_cfg, "utf-8")

    pyproject_toml_file_broken = dir4 / "pyproject.toml"
    pyproject_toml_file_broken.write_text(pyproject_toml_broken, "utf-8")

    config_trie = settings.find_all_configs(str(tmpdir))

    config_info_1 = config_trie.search(str(dir1 / "test1.py"))
    assert config_info_1[0] == str(setup_cfg_file)
    assert config_info_1[0] == str(setup_cfg_file) and config_info_1[1]["profile"] == "django"

    config_info_2 = config_trie.search(str(dir2 / "test2.py"))
    assert config_info_2[0] == str(pyproject_toml_file)
    assert config_info_2[0] == str(pyproject_toml_file) and config_info_2[1]["profile"] == "hug"

    config_info_3 = config_trie.search(str(dir3 / "test3.py"))
    assert config_info_3[0] == str(isort_cfg_file)
    assert config_info_3[0] == str(isort_cfg_file) and config_info_3[1]["profile"] == "black"

    config_info_4 = config_trie.search(str(tmpdir / "file4.py"))
    assert config_info_4[0] == "default"


# ---- isort-5.13.2/tests/unit/test_setuptools_command.py ----
from isort import setuptools_commands


def test_isort_command_smoke(src_dir):
    """A basic smoke test for the setuptools_commands command"""
    from distutils.dist import Distribution

    command = setuptools_commands.ISortCommand(Distribution())
    command.distribution.packages = ["isort"]
    command.distribution.package_dir = {"isort": src_dir}
    command.initialize_options()
    command.finalize_options()
    try:
        command.run()
    except BaseException:
        pass

    command.distribution.package_dir = {"": "isort"}
    command.distribution.py_modules = ["one", "two"]
    command.initialize_options()
    command.finalize_options()
    command.run()

    command.distribution.packages = ["not_a_file"]
    command.distribution.package_dir = {"not_a_file":
src_dir} command.initialize_options() command.finalize_options() try: command.run() except BaseException: pass ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_ticketed_features.py0000644000000000000000000005631414536412763017140 0ustar00"""A growing set of tests designed to ensure when isort implements a feature described in a ticket it fully works as defined in the associated ticket. """ from functools import partial from io import StringIO import pytest import isort from isort import Config, exceptions def test_semicolon_ignored_for_dynamic_lines_after_import_issue_1178(): """Test to ensure even if a semicolon is in the decorator in the line following an import the correct line spacing determination will be made. See: https://github.com/pycqa/isort/issues/1178. """ assert isort.check_code( """ import pytest @pytest.mark.skip(';') def test_thing(): pass """, show_diff=True, ) def test_isort_automatically_removes_duplicate_aliases_issue_1193(): """Test to ensure isort can automatically remove duplicate aliases. See: https://github.com/pycqa/isort/issues/1281 """ assert isort.check_code("from urllib import parse as parse\n", show_diff=True) assert ( isort.code("from urllib import parse as parse", remove_redundant_aliases=True) == "from urllib import parse\n" ) assert isort.check_code("import os as os\n", show_diff=True) assert isort.code("import os as os", remove_redundant_aliases=True) == "import os\n" def test_isort_enables_floating_imports_to_top_of_module_issue_1228(): """Test to ensure isort will allow floating all non-indented imports to the top of a file. See: https://github.com/pycqa/isort/issues/1228. """ assert ( isort.code( """ import os def my_function_1(): pass import sys def my_function_2(): pass """, float_to_top=True, ) == """ import os import sys def my_function_1(): pass def my_function_2(): pass """ ) assert ( isort.code( """ import os def my_function_1(): pass # isort: split import sys def my_function_2(): pass """, float_to_top=True, ) == """ import os def my_function_1(): pass # isort: split import sys def my_function_2(): pass """ ) assert ( isort.code( """ import os def my_function_1(): pass # isort: off import b import a def y(): pass # isort: on import b def my_function_2(): pass import a """, float_to_top=True, ) == """ import os def my_function_1(): pass # isort: off import b import a def y(): pass # isort: on import a import b def my_function_2(): pass """ ) def test_isort_provides_official_api_for_diff_output_issue_1335(): """Test to ensure isort API for diff capturing allows capturing diff without sys.stdout. See: https://github.com/pycqa/isort/issues/1335. """ diff_output = StringIO() isort.code("import b\nimport a\n", show_diff=diff_output) diff_output.seek(0) assert "+import a" in diff_output.read() def test_isort_warns_when_known_sections_dont_match_issue_1331(): """Test to ensure that isort warns if there is a mismatch between sections and known_sections. See: https://github.com/pycqa/isort/issues/1331. 
""" assert ( isort.place_module( "bot_core", config=Config( known_robotlocomotion_upstream=["bot_core"], sections=["ROBOTLOCOMOTION_UPSTREAM", "THIRDPARTY"], ), ) == "ROBOTLOCOMOTION_UPSTREAM" ) with pytest.warns(UserWarning): assert ( isort.place_module( "bot_core", config=Config( known_robotlocomotion_upstream=["bot_core"], sections=["ROBOTLOOMOTION_UPSTREAM", "THIRDPARTY"], ), ) == "THIRDPARTY" ) with pytest.warns(UserWarning): assert ( isort.place_module( "bot_core", config=Config(known_robotlocomotion_upstream=["bot_core"]) ) == "THIRDPARTY" ) def test_isort_supports_append_only_imports_issue_727(): """Test to ensure isort provides a way to only add imports as an append. See: https://github.com/pycqa/isort/issues/727. """ assert isort.code("", add_imports=["from __future__ import absolute_imports"]) == "" assert ( isort.code("import os", add_imports=["from __future__ import absolute_imports"]) == """from __future__ import absolute_imports import os """ ) # issue 1838: don't append in middle of class assert isort.check_code( '''class C: """a """ # comment ''', append_only=True, add_imports=["from __future__ import annotations"], show_diff=True, ) def test_isort_supports_shared_profiles_issue_970(): """Test to ensure isort provides a way to use shared profiles. See: https://github.com/pycqa/isort/issues/970. """ assert isort.code("import a", profile="example") == "import a\n" # shared profile assert isort.code("import a", profile="black") == "import a\n" # bundled profile with pytest.raises(exceptions.ProfileDoesNotExist): assert isort.code("import a", profile="madeupfake") == "import a\n" # non-existent profile def test_treating_comments_as_code_issue_1357(): """Test to ensure isort provides a way to treat comments as code. See: https://github.com/pycqa/isort/issues/1357 """ assert ( isort.code( """# %% import numpy as np np.array([1,2,3]) # %% import pandas as pd pd.Series([1,2,3]) # %% # This is a comment on the second import import pandas as pd pd.Series([4,5,6])""", treat_comments_as_code=["# comment1", "# %%"], ) == """# %% import numpy as np np.array([1,2,3]) # %% import pandas as pd pd.Series([1,2,3]) # %% # This is a comment on the second import import pandas as pd pd.Series([4,5,6]) """ ) assert ( isort.code( """# %% import numpy as np np.array([1,2,3]) # %% import pandas as pd pd.Series([1,2,3]) # %% # This is a comment on the second import import pandas as pd pd.Series([4,5,6])""", treat_comments_as_code=["# comment1", "# %%"], float_to_top=True, ) == """# %% import numpy as np # This is a comment on the second import import pandas as pd np.array([1,2,3]) # %% pd.Series([1,2,3]) # %% pd.Series([4,5,6]) """ ) assert ( isort.code( """# %% import numpy as np np.array([1,2,3]) # %% import pandas as pd pd.Series([1,2,3]) # %% # This is a comment on the second import import pandas as pd pd.Series([4,5,6])""", treat_all_comments_as_code=True, ) == """# %% import numpy as np np.array([1,2,3]) # %% import pandas as pd pd.Series([1,2,3]) # %% # This is a comment on the second import import pandas as pd pd.Series([4,5,6]) """ ) assert ( isort.code( """import b # these are special imports that have to do with installing X plugin import c import a """, treat_all_comments_as_code=True, ) == """import b # these are special imports that have to do with installing X plugin import a import c """ ) def test_isort_allows_setting_import_types_issue_1181(): """Test to ensure isort provides a way to set the type of imports. 
See: https://github.com/pycqa/isort/issues/1181 """ assert isort.code("from x import AA, Big, variable") == "from x import AA, Big, variable\n" assert ( isort.code("from x import AA, Big, variable", constants=["variable"]) == "from x import AA, variable, Big\n" ) assert ( isort.code("from x import AA, Big, variable", variables=["AA"]) == "from x import Big, AA, variable\n" ) assert ( isort.code( "from x import AA, Big, variable", constants=["Big"], variables=["AA"], classes=["variable"], ) == "from x import Big, variable, AA\n" ) def test_isort_enables_deduping_section_headers_issue_953(): """isort should provide a way to only have identical import headings show up once. See: https://github.com/pycqa/isort/issues/953 """ isort_code = partial( isort.code, config=Config( import_heading_firstparty="Local imports.", import_heading_localfolder="Local imports.", dedup_headings=True, known_first_party=["isort"], ), ) assert ( isort_code("from . import something") == """# Local imports. from . import something """ ) assert ( isort_code( """from isort import y from . import something""" ) == """# Local imports. from isort import y from . import something """ ) assert isort_code("import os") == "import os\n" def test_isort_doesnt_remove_as_imports_when_combine_star_issue_1380(): """Test to ensure isort will not remove as imports along side other imports when requested to combine star imports together. See: https://github.com/PyCQA/isort/issues/1380 """ test_input = """ from a import a from a import * from a import b from a import b as y from a import c """ assert ( isort.code( test_input, combine_star=True, ) == isort.code(test_input, combine_star=True, force_single_line=True) == isort.code( test_input, combine_star=True, force_single_line=True, combine_as_imports=True, ) == """ from a import * from a import b as y """ ) def test_isort_support_custom_groups_above_stdlib_that_contain_stdlib_modules_issue_1407(): """Test to ensure it is possible to declare custom groups above standard library that include modules from the standard library. 
See: https://github.com/PyCQA/isort/issues/1407 """ assert isort.check_code( """ from __future__ import annotations from typing import * from pathlib import Path """, known_typing=["typing"], sections=["FUTURE", "TYPING", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"], no_lines_before=["TYPING"], show_diff=True, ) def test_isort_intelligently_places_noqa_comments_issue_1456(): assert isort.check_code( """ from my.horribly.long.import.line.that.just.keeps.on.going.and.going.and.going import ( # noqa my_symbol, ) """, force_single_line=True, show_diff=True, multi_line_output=3, include_trailing_comma=True, force_grid_wrap=0, use_parentheses=True, line_length=79, ) assert isort.check_code( """ from my.horribly.long.import.line.that.just.keeps.on.going.and.going.and.going import ( my_symbol, ) """, force_single_line=True, show_diff=True, multi_line_output=3, include_trailing_comma=True, force_grid_wrap=0, use_parentheses=True, line_length=79, ) assert isort.check_code( """ from my.horribly.long.import.line.that.just.keeps.on.going.and.going.and.going import ( # noqa my_symbol ) """, force_single_line=True, use_parentheses=True, multi_line_output=3, line_length=79, show_diff=True, ) assert isort.check_code( """ from my.horribly.long.import.line.that.just.keeps.on.going.and.going.and.going import ( my_symbol ) """, force_single_line=True, use_parentheses=True, multi_line_output=3, line_length=79, show_diff=True, ) # see: https://github.com/PyCQA/isort/issues/1415 assert isort.check_code( "from dials.test.algorithms.spot_prediction." "test_scan_static_reflection_predictor import ( # noqa: F401\n" " data as static_test,\n)\n", profile="black", show_diff=True, ) def test_isort_respects_quiet_from_sort_file_api_see_1461(capsys, tmpdir): """Test to ensure isort respects the quiet API parameter when passed in via the API. See: https://github.com/PyCQA/isort/issues/1461. """ settings_file = tmpdir.join(".isort.cfg") custom_settings_file = tmpdir.join(".custom.isort.cfg") tmp_file = tmpdir.join("file.py") tmp_file.write("import b\nimport a\n") isort.file(tmp_file) out, error = capsys.readouterr() assert not error assert "Fixing" in out # When passed in directly as a setting override tmp_file.write("import b\nimport a\n") isort.file(tmp_file, quiet=True) out, error = capsys.readouterr() assert not error assert not out # Present in an automatically loaded configuration file settings_file.write( """ [isort] quiet = true """ ) tmp_file.write("import b\nimport a\n") isort.file(tmp_file) out, error = capsys.readouterr() assert not error assert not out # In a custom configuration file settings_file.write( """ [isort] quiet = false """ ) custom_settings_file.write( """ [isort] quiet = true """ ) tmp_file.write("import b\nimport a\n") isort.file(tmp_file, settings_file=str(custom_settings_file)) out, error = capsys.readouterr() assert not error assert not out # Reused configuration object custom_config = Config(settings_file=str(custom_settings_file)) isort.file(tmp_file, config=custom_config) out, error = capsys.readouterr() assert not error assert not out def test_isort_should_warn_on_empty_custom_config_issue_1433(tmpdir): """Feedback should be provided when a user provides a custom settings file that has no discoverable configuration. 
See: https://github.com/PyCQA/isort/issues/1433 """ settings_file = tmpdir.join(".custom.cfg") settings_file.write( """ [settings] quiet = true """ ) with pytest.warns(UserWarning): assert not Config(settings_file=str(settings_file)).quiet settings_file.write( """ [isort] quiet = true """ ) with pytest.warns(None) as warning: # type: ignore assert Config(settings_file=str(settings_file)).quiet assert not warning def test_float_to_top_should_respect_existing_newlines_between_imports_issue_1502(): """When a file has an existing top of file import block before code but after comments isort's float to top feature should respect the existing spacing between the top file comment and the import statements. See: https://github.com/PyCQA/isort/issues/1502 """ assert isort.check_code( """#!/bin/bash '''My comment''' import a x = 1 """, float_to_top=True, show_diff=True, ) assert isort.check_code( """#!/bin/bash '''My comment''' import a x = 1 """, float_to_top=True, show_diff=True, ) assert ( isort.code( """#!/bin/bash '''My comment''' import a x = 1 """, float_to_top=True, add_imports=["import b"], ) == """#!/bin/bash '''My comment''' import a import b x = 1 """ ) assert ( isort.code( """#!/bin/bash '''My comment''' def my_function(): pass import a """, float_to_top=True, ) == """#!/bin/bash '''My comment''' import a def my_function(): pass """ ) assert ( isort.code( """#!/bin/bash '''My comment''' def my_function(): pass """, add_imports=["import os"], float_to_top=True, ) == """#!/bin/bash '''My comment''' import os def my_function(): pass """ ) def test_api_to_allow_custom_diff_and_output_stream_1583(capsys, tmpdir): """isort should provide a way from the Python API to process an existing file and output to a stream the new version of that file, as well as a diff to a different stream. See: https://github.com/PyCQA/isort/issues/1583 """ tmp_file = tmpdir.join("file.py") tmp_file.write("import b\nimport a\n") isort_diff = StringIO() isort_output = StringIO() isort.file(tmp_file, show_diff=isort_diff, output=isort_output) _, error = capsys.readouterr() assert not error isort_diff.seek(0) isort_diff_content = isort_diff.read() assert "+import a" in isort_diff_content assert " import b" in isort_diff_content assert "-import a" in isort_diff_content isort_output.seek(0) assert isort_output.read().splitlines() == ["import a", "import b"] # should still work with no diff produced tmp_file2 = tmpdir.join("file2.py") tmp_file2.write("import a\nimport b\n") isort_diff2 = StringIO() isort_output2 = StringIO() isort.file(tmp_file2, show_diff=isort_diff2, output=isort_output2) _, error = capsys.readouterr() assert not error isort_diff2.seek(0) assert not isort_diff2.read() def test_autofix_mixed_indent_imports_1575(): """isort should automatically fix import statements that are sent in with incorrect mixed indentation. 
See: https://github.com/PyCQA/isort/issues/1575 """ assert ( isort.code( """ import os import os """ ) == """ import os """ ) assert ( isort.code( """ def one(): import os import os """ ) == """ def one(): import os import os """ ) assert ( isort.code( """ import os import os import os import os import os """ ) == """ import os """ ) def test_indented_import_headings_issue_1604(): """Test to ensure it is possible to toggle import headings on indented import sections See: https://github.com/PyCQA/isort/issues/1604 """ assert ( isort.code( """ import numpy as np def function(): import numpy as np """, import_heading_thirdparty="External imports", ) == """ # External imports import numpy as np def function(): # External imports import numpy as np """ ) assert ( isort.code( """ import numpy as np def function(): import numpy as np """, import_heading_thirdparty="External imports", indented_import_headings=False, ) == """ # External imports import numpy as np def function(): import numpy as np """ ) def test_isort_auto_detects_and_ignores_invalid_from_imports_issue_1688(): """isort should automatically detect and ignore incorrectly written from import statements see: https://github.com/PyCQA/isort/issues/1688 """ assert ( isort.code( """ from package1 import alright from package2 imprt and_its_gone from package3 import also_ok """ ) == """ from package1 import alright from package2 imprt and_its_gone from package3 import also_ok """ ) def test_isort_allows_reversing_sort_order_issue_1645(): """isort allows reversing the sort order for those who prefer Z or longer imports first. see: https://github.com/PyCQA/isort/issues/1688 """ assert ( isort.code( """ from xxx import ( g, hi, def, abcd, ) """, profile="black", reverse_sort=True, length_sort=True, line_length=20, ) == """ from xxx import ( abcd, def, hi, g, ) """ ) def test_isort_can_push_star_imports_above_others_issue_1504(): """isort should provide a way to push star imports above other imports to avoid explicit imports from being overwritten. see: https://github.com/PyCQA/isort/issues/1504 """ assert ( ( isort.code( """ from ._bar import Any, All, Not from ._foo import a, * """, star_first=True, ) ) == """ from ._foo import * from ._foo import a from ._bar import All, Any, Not """ ) def test_isort_can_combine_reverse_sort_with_force_sort_within_sections_issue_1726(): """isort should support reversing import order even with force sort within sections turned on. 
See: https://github.com/PyCQA/isort/issues/1726 """ assert ( isort.code( """ import blaaa from bl4aaaaaaaaaaaaaaaa import r import blaaaaaaaaaaaa import bla import blaaaaaaa from bl1aaaaaaaaaaaaaa import this_is_1 from bl2aaaaaaa import THIIIIIIIIIIIISS_is_2 from bl3aaaaaa import less """, length_sort=True, reverse_sort=True, force_sort_within_sections=True, ) == """ from bl2aaaaaaa import THIIIIIIIIIIIISS_is_2 from bl1aaaaaaaaaaaaaa import this_is_1 from bl4aaaaaaaaaaaaaaaa import r from bl3aaaaaa import less import blaaaaaaaaaaaa import blaaaaaaa import blaaa import bla """ ) def test_isort_can_turn_off_import_adds_with_action_comment_issue_1737(): assert ( isort.code( """ import os """, add_imports=[ "from __future__ import absolute_imports", "from __future__ import annotations", ], ) == """ from __future__ import absolute_imports, annotations import os """ ) assert isort.check_code( """ # isort: dont-add-imports import os """, show_diff=True, add_imports=[ "from __future__ import absolute_imports", "from __future__ import annotations", ], ) assert ( isort.code( """ # isort: dont-add-import: from __future__ import annotations import os """, add_imports=[ "from __future__ import absolute_imports", "from __future__ import annotations", ], ) == """ # isort: dont-add-import: from __future__ import annotations from __future__ import absolute_imports import os """ ) def test_sort_configurable_sort_issue_1732() -> None: """Test support for pluggable isort sort functions.""" test_input = ( "from bob2.apples2 import aardvark as aardvark2\n" "from bob.apples import aardvark \n" "import module9\n" "import module10\n" "import module200\n" ) assert isort.code(test_input, sort_order="native") == ( "import module10\n" "import module200\n" "import module9\n" "from bob.apples import aardvark\n" "from bob2.apples2 import aardvark as aardvark2\n" ) assert ( isort.code(test_input, sort_order="natural") == isort.code(test_input) == ( "import module9\n" "import module10\n" "import module200\n" "from bob2.apples2 import aardvark as aardvark2\n" "from bob.apples import aardvark\n" ) ) assert ( isort.code(test_input, sort_order="natural_plus") == isort.code(test_input) == ( "import module9\n" "import module10\n" "import module200\n" "from bob2.apples2 import aardvark as aardvark2\n" "from bob.apples import aardvark\n" ) ) with pytest.raises(exceptions.SortingFunctionDoesNotExist): isort.code(test_input, sort_order="round") def test_cython_pure_python_imports_2062(): """Test to ensure an import form a cython.cimports remains import, not cimport. See: https://github.com/pycqa/isort/issues/2062. 
""" assert isort.check_code( """ import cython from cython.cimports.libc import math def use_libc_math(): return math.ceil(5.5) """, show_diff=True, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_utils.py0000644000000000000000000000304214536412763014574 0ustar00from isort.utils import Trie def test_trie(): trie_root = Trie("default", {"line_length": 70}) trie_root.insert("/temp/config1/.isort.cfg", {"line_length": 71}) trie_root.insert("/temp/config2/setup.cfg", {"line_length": 72}) trie_root.insert("/temp/config3/pyproject.toml", {"line_length": 73}) # Ensure that appropriate configs are resolved for files in different directories config1 = trie_root.search("/temp/config1/subdir/file1.py") assert config1[0] == "/temp/config1/.isort.cfg" assert config1[1] == {"line_length": 71} config1_2 = trie_root.search("/temp/config1/file1_2.py") assert config1_2[0] == "/temp/config1/.isort.cfg" assert config1_2[1] == {"line_length": 71} config2 = trie_root.search("/temp/config2/subdir/subsubdir/file2.py") assert config2[0] == "/temp/config2/setup.cfg" assert config2[1] == {"line_length": 72} config2_2 = trie_root.search("/temp/config2/subdir/file2_2.py") assert config2_2[0] == "/temp/config2/setup.cfg" assert config2_2[1] == {"line_length": 72} config3 = trie_root.search("/temp/config3/subdir/subsubdir/subsubsubdir/file3.py") assert config3[0] == "/temp/config3/pyproject.toml" assert config3[1] == {"line_length": 73} config3_2 = trie_root.search("/temp/config3/file3.py") assert config3_2[0] == "/temp/config3/pyproject.toml" assert config3_2[1] == {"line_length": 73} config_outside = trie_root.search("/temp/file.py") assert config_outside[0] == "default" assert config_outside[1] == {"line_length": 70} ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 isort-5.13.2/tests/unit/test_wrap.py0000644000000000000000000000315014536412763014405 0ustar00import pytest from isort import wrap from isort.settings import Config from isort.wrap_modes import WrapModes def test_import_statement(): assert wrap.import_statement("", [], []) == "" assert ( wrap.import_statement("from x import ", ["y"], [], config=Config(balanced_wrapping=True)) == "from x import (y)" ) assert ( wrap.import_statement("from long_import ", ["verylong"] * 10, []) == """from long_import (verylong, verylong, verylong, verylong, verylong, verylong, verylong, verylong, verylong, verylong)""" ) assert wrap.import_statement("from x import ", ["y", "z"], [], explode=True) == ( "from x import (\n y,\n z,\n)" ) @pytest.mark.parametrize( "multi_line_output, expected", ( ( WrapModes.VERTICAL_HANGING_INDENT, # type: ignore """from a import ( b as c # comment that is long enough that this import doesn't fit in one line (parens) )""", ), ( WrapModes.VERTICAL, # type: ignore """from a import ( b as c) # comment that is long enough that this import doesn't fit in one line (parens)""", ), ), ) def test_line__comment_with_brackets__expects_unchanged_comment(multi_line_output, expected): content = ( "from a import b as c " "# comment that is long enough that this import doesn't fit in one line (parens)" ) config = Config( multi_line_output=multi_line_output, use_parentheses=True, ) assert wrap.line(content=content, line_separator="\n", config=config) == expected ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1702499826.6361854 
isort-5.13.2/tests/unit/test_wrap_modes.py0000644000000000000000000004036514536412763015605 0ustar00import pytest from hypothesis import given, reject from hypothesis import strategies as st import isort from isort import wrap_modes def test_wrap_mode_interface(): assert ( wrap_modes._wrap_mode_interface("statement", [], "", "", 80, [], "", "", True, True) == "" ) def test_auto_saved(): """hypothesis_auto tests cases that have been saved to ensure they run each test cycle""" assert ( wrap_modes.noqa( **{ "comment_prefix": "-\U000bf82c\x0c\U0004608f\x10%", "comments": [], "imports": [], "include_trailing_comma": False, "indent": "0\x19", "line_length": -19659, "line_separator": "\x15\x0b\U00086494\x1d\U000e00a2\U000ee216\U0006708a\x03\x1f", "remove_comments": False, "statement": "\U00092452", "white_space": "\U000a7322\U000c20e3-\U0010eae4\x07\x14\U0007d486", } ) == "\U00092452-\U000bf82c\x0c\U0004608f\x10% NOQA" ) assert ( wrap_modes.noqa( **{ "comment_prefix": '\x12\x07\U0009e994๐Ÿฃ"\U000ae787\x0e', "comments": ["\x00\U0001ae99\U0005c3e7\U0004d08e", "\x1e", "", ""], "imports": ["*"], "include_trailing_comma": True, "indent": "", "line_length": 31492, "line_separator": "\U00071610\U0005bfbc", "remove_comments": False, "statement": "", "white_space": "\x08\x01โท“\x16%\U0006cd8c", } ) == '*\x12\x07\U0009e994๐Ÿฃ"\U000ae787\x0e \x00\U0001ae99\U0005c3e7\U0004d08e \x1e ' ) assert ( wrap_modes.noqa( **{ "comment_prefix": " #", "comments": ["NOQA", "THERE"], "imports": [], "include_trailing_comma": False, "indent": "0\x19", "line_length": -19659, "line_separator": "\n", "remove_comments": False, "statement": "hi", "white_space": " ", } ) == "hi # NOQA THERE" ) def test_backslash_grid(): """Tests the backslash_grid grid wrap mode, ensuring it matches formatting expectations. See: https://github.com/PyCQA/isort/issues/1434 """ assert ( isort.code( """ from kopf.engines import loggers, posting from kopf.reactor import causation, daemons, effects, handling, lifecycles, registries from kopf.storage import finalizers, states from kopf.structs import (bodies, configuration, containers, diffs, handlers as handlers_, patches, resources) """, multi_line_output=11, line_length=88, combine_as_imports=True, ) == """ from kopf.engines import loggers, posting from kopf.reactor import causation, daemons, effects, handling, lifecycles, registries from kopf.storage import finalizers, states from kopf.structs import bodies, configuration, containers, diffs, \\ handlers as handlers_, patches, resources """ ) @pytest.mark.parametrize("include_trailing_comma", (False, True)) @pytest.mark.parametrize("line_length", (18, 19)) @pytest.mark.parametrize("multi_line_output", (4, 5)) def test_vertical_grid_size_near_line_length( multi_line_output: int, line_length: int, include_trailing_comma: bool, ): separator = " " # Cases where the input should be wrapped: if ( # Mode 4 always adds a closing ")", making the imports line 19 chars, # if include_trailing_comma is True that becomes 20 chars. (multi_line_output == 4 and line_length < 19 + int(include_trailing_comma)) # Modes 5 and 6 only add a comma, if include_trailing_comma is True, # so their lines are 18 or 19 chars long. 
or (multi_line_output != 4 and line_length < 18 + int(include_trailing_comma)) ): separator = "\n " test_input = f"from foo import (\n aaaa, bbb,{separator}ccc" if include_trailing_comma: test_input += "," if multi_line_output != 4: test_input += "\n" test_input += ")\n" assert ( isort.code( test_input, multi_line_output=multi_line_output, line_length=line_length, include_trailing_comma=include_trailing_comma, ) == test_input ) # This test code was written by the `hypothesis.extra.ghostwriter` module # and is provided under the Creative Commons Zero public domain dedication. @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_backslash_grid( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.backslash_grid( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_grid( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.grid( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_hanging_indent( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.hanging_indent( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @pytest.mark.parametrize("include_trailing_comma", (True, False)) def test_hanging_indent__with_include_trailing_comma__expect_same_result(include_trailing_comma): result = isort.wrap_modes.hanging_indent( statement="from datetime import ", imports=["datetime", "time", "timedelta", "timezone", "tzinfo"], white_space=" ", indent=" ", line_length=50, comments=[], line_separator="\n", comment_prefix=" #", include_trailing_comma=include_trailing_comma, remove_comments=False, ) assert result == "from datetime import datetime, time, timedelta, \\\n timezone, tzinfo" @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), 
indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_hanging_indent_with_parentheses( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.hanging_indent_with_parentheses( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_noqa( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.noqa( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_vertical( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.vertical( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_vertical_grid( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.vertical_grid( statement=statement, imports=imports, white_space=white_space, indent=indent, line_length=line_length, comments=comments, line_separator=line_separator, comment_prefix=comment_prefix, include_trailing_comma=include_trailing_comma, remove_comments=remove_comments, ) except ValueError: reject() @given( statement=st.text(), imports=st.lists(st.text()), white_space=st.text(), indent=st.text(), line_length=st.integers(), comments=st.lists(st.text()), line_separator=st.text(), comment_prefix=st.text(), include_trailing_comma=st.booleans(), remove_comments=st.booleans(), ) def test_fuzz_vertical_grid_grouped( statement, imports, white_space, indent, line_length, comments, line_separator, comment_prefix, include_trailing_comma, remove_comments, ): try: isort.wrap_modes.vertical_grid_grouped( 
            statement=statement,
            imports=imports,
            white_space=white_space,
            indent=indent,
            line_length=line_length,
            comments=comments,
            line_separator=line_separator,
            comment_prefix=comment_prefix,
            include_trailing_comma=include_trailing_comma,
            remove_comments=remove_comments,
        )
    except ValueError:
        reject()


@given(
    statement=st.text(),
    imports=st.lists(st.text()),
    white_space=st.text(),
    indent=st.text(),
    line_length=st.integers(),
    comments=st.lists(st.text()),
    line_separator=st.text(),
    comment_prefix=st.text(),
    include_trailing_comma=st.booleans(),
    remove_comments=st.booleans(),
)
def test_fuzz_vertical_hanging_indent(
    statement,
    imports,
    white_space,
    indent,
    line_length,
    comments,
    line_separator,
    comment_prefix,
    include_trailing_comma,
    remove_comments,
):
    try:
        isort.wrap_modes.vertical_hanging_indent(
            statement=statement,
            imports=imports,
            white_space=white_space,
            indent=indent,
            line_length=line_length,
            comments=comments,
            line_separator=line_separator,
            comment_prefix=comment_prefix,
            include_trailing_comma=include_trailing_comma,
            remove_comments=remove_comments,
        )
    except ValueError:
        reject()


@given(
    statement=st.text(),
    imports=st.lists(st.text()),
    white_space=st.text(),
    indent=st.text(),
    line_length=st.integers(),
    comments=st.lists(st.text()),
    line_separator=st.text(),
    comment_prefix=st.text(),
    include_trailing_comma=st.booleans(),
    remove_comments=st.booleans(),
)
def test_fuzz_vertical_hanging_indent_bracket(
    statement,
    imports,
    white_space,
    indent,
    line_length,
    comments,
    line_separator,
    comment_prefix,
    include_trailing_comma,
    remove_comments,
):
    try:
        isort.wrap_modes.vertical_hanging_indent_bracket(
            statement=statement,
            imports=imports,
            white_space=white_space,
            indent=indent,
            line_length=line_length,
            comments=comments,
            line_separator=line_separator,
            comment_prefix=comment_prefix,
            include_trailing_comma=include_trailing_comma,
            remove_comments=remove_comments,
        )
    except ValueError:
        reject()


@given(
    statement=st.text(),
    imports=st.lists(st.text()),
    white_space=st.text(),
    indent=st.text(),
    line_length=st.integers(),
    comments=st.lists(st.text()),
    line_separator=st.text(),
    comment_prefix=st.text(),
    include_trailing_comma=st.booleans(),
    remove_comments=st.booleans(),
)
def test_fuzz_vertical_prefix_from_module_import(
    statement,
    imports,
    white_space,
    indent,
    line_length,
    comments,
    line_separator,
    comment_prefix,
    include_trailing_comma,
    remove_comments,
):
    try:
        isort.wrap_modes.vertical_prefix_from_module_import(
            statement=statement,
            imports=imports,
            white_space=white_space,
            indent=indent,
            line_length=line_length,
            comments=comments,
            line_separator=line_separator,
            comment_prefix=comment_prefix,
            include_trailing_comma=include_trailing_comma,
            remove_comments=remove_comments,
        )
    except ValueError:
        reject()


# ---- isort-5.13.2/tests/unit/utils.py ----
from io import BytesIO, StringIO, TextIOWrapper

import isort


class UnseekableTextIOWrapper(TextIOWrapper):
    def seek(self, *args, **kwargs):
        raise ValueError("underlying stream is not seekable")


class UnreadableStream(StringIO):
    def readable(self, *args, **kwargs) -> bool:
        return False


def as_stream(text: str) -> UnseekableTextIOWrapper:
    return UnseekableTextIOWrapper(BytesIO(text.encode("utf8")))


def isort_test(code: str, expected_output: str = "", **config):
    """Runs isort against the given code snippet and ensures that it gives
    consistent output across multiple runs, and if an expected_output is given -
    that it matches that.
    """
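    # expected_output defaults to the input itself; the second assertion below
    # re-runs isort on its own output to guard against non-idempotent
    # (oscillating) formatting between runs.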
""" expected_output = expected_output or code output = isort.code(code, **config) assert output == expected_output assert output == isort.code(output, **config) isort-5.13.2/setup.py0000644000000000000000000002722500000000000011353 0ustar00# -*- coding: utf-8 -*- from setuptools import setup packages = \ ['isort', 'isort._vendored.tomli', 'isort.deprecated', 'isort.stdlibs'] package_data = \ {'': ['*']} extras_require = \ {'colors': ['colorama>=0.4.6']} entry_points = \ {'console_scripts': ['isort = isort.main:main', 'isort-identify-imports = ' 'isort.main:identify_imports_main'], 'distutils.commands': ['isort = isort.setuptools_commands:ISortCommand'], 'pylama.linter': ['isort = isort.pylama_isort:Linter']} setup_kwargs = { 'name': 'isort', 'version': '5.13.2', 'description': 'A Python utility / library to sort Python imports.', 'long_description': '[![isort - isort your imports, so you don\'t have to.](https://raw.githubusercontent.com/pycqa/isort/main/art/logo_large.png)](https://pycqa.github.io/isort/)\n\n------------------------------------------------------------------------\n\n[![PyPI version](https://badge.fury.io/py/isort.svg)](https://badge.fury.io/py/isort)\n[![Test Status](https://github.com/pycqa/isort/workflows/Test/badge.svg?branch=develop)](https://github.com/pycqa/isort/actions?query=workflow%3ATest)\n[![Lint Status](https://github.com/pycqa/isort/workflows/Lint/badge.svg?branch=develop)](https://github.com/pycqa/isort/actions?query=workflow%3ALint)\n[![Code coverage Status](https://codecov.io/gh/pycqa/isort/branch/main/graph/badge.svg)](https://codecov.io/gh/pycqa/isort)\n[![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.org/project/isort/)\n[![Join the chat at https://gitter.im/timothycrosley/isort](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/timothycrosley/isort?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n[![Downloads](https://pepy.tech/badge/isort)](https://pepy.tech/project/isort)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n[![DeepSource](https://static.deepsource.io/deepsource-badge-light-mini.svg)](https://deepsource.io/gh/pycqa/isort/?ref=repository-badge)\n_________________\n\n[Read Latest Documentation](https://pycqa.github.io/isort/) - [Browse GitHub Code Repository](https://github.com/pycqa/isort/)\n_________________\n\nisort your imports, so you don\'t have to.\n\nisort is a Python utility / library to sort imports alphabetically and\nautomatically separate into sections and by type. It provides a command line\nutility, Python library and [plugins for various\neditors](https://github.com/pycqa/isort/wiki/isort-Plugins) to\nquickly sort all your imports. It requires Python 3.8+ to run but\nsupports formatting Python 2 code too.\n\n- [Try isort now from your browser!](https://pycqa.github.io/isort/docs/quick_start/0.-try.html)\n- [Using black? 
See the isort and black compatibility guide.](https://pycqa.github.io/isort/docs/configuration/black_compatibility.html)\n- [isort has official support for pre-commit!](https://pycqa.github.io/isort/docs/configuration/pre-commit.html)\n\n![Example Usage](https://raw.github.com/pycqa/isort/main/example.gif)\n\nBefore isort:\n\n```python\nfrom my_lib import Object\n\nimport os\n\nfrom my_lib import Object3\n\nfrom my_lib import Object2\n\nimport sys\n\nfrom third_party import lib15, lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, lib10, lib11, lib12, lib13, lib14\n\nimport sys\n\nfrom __future__ import absolute_import\n\nfrom third_party import lib3\n\nprint("Hey")\nprint("yo")\n```\n\nAfter isort:\n\n```python\nfrom __future__ import absolute_import\n\nimport os\nimport sys\n\nfrom third_party import (lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8,\n lib9, lib10, lib11, lib12, lib13, lib14, lib15)\n\nfrom my_lib import Object, Object2, Object3\n\nprint("Hey")\nprint("yo")\n```\n\n## Installing isort\n\nInstalling isort is as simple as:\n\n```bash\npip install isort\n```\n\n## Using isort\n\n**From the command line**:\n\nTo run on specific files:\n\n```bash\nisort mypythonfile.py mypythonfile2.py\n```\n\nTo apply recursively:\n\n```bash\nisort .\n```\n\nIf [globstar](https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html)\nis enabled, `isort .` is equivalent to:\n\n```bash\nisort **/*.py\n```\n\nTo view proposed changes without applying them:\n\n```bash\nisort mypythonfile.py --diff\n```\n\nFinally, to atomically run isort against a project, only applying\nchanges if they don\'t introduce syntax errors:\n\n```bash\nisort --atomic .\n```\n\n(Note: this is disabled by default, as it prevents isort from\nrunning against code written using a different version of Python.)\n\n**From within Python**:\n\n```python\nimport isort\n\nisort.file("pythonfile.py")\n```\n\nor:\n\n```python\nimport isort\n\nsorted_code = isort.code("import b\\nimport a\\n")\n```\n\n## Installing isort\'s for your preferred text editor\n\nSeveral plugins have been written that enable to use isort from within a\nvariety of text-editors. You can find a full list of them [on the isort\nwiki](https://github.com/pycqa/isort/wiki/isort-Plugins).\nAdditionally, I will enthusiastically accept pull requests that include\nplugins for other text editors and add documentation for them as I am\nnotified.\n\n## Multi line output modes\n\nYou will notice above the \\"multi\\_line\\_output\\" setting. This setting\ndefines how from imports wrap when they extend past the line\\_length\nlimit and has [12 possible settings](https://pycqa.github.io/isort/docs/configuration/multi_line_output_modes.html).\n\n## Indentation\n\nTo change the how constant indents appear - simply change the\nindent property with the following accepted formats:\n\n- Number of spaces you would like. For example: 4 would cause standard\n 4 space indentation.\n- Tab\n- A verbatim string with quotes around it.\n\nFor example:\n\n```python\n" "\n```\n\nis equivalent to 4.\n\nFor the import styles that use parentheses, you can control whether or\nnot to include a trailing comma after the last import with the\n`include_trailing_comma` option (defaults to `False`).\n\n## Intelligently Balanced Multi-line Imports\n\nAs of isort 3.1.0 support for balanced multi-line imports has been\nadded. 
With this enabled isort will dynamically change the import length\nto the one that produces the most balanced grid, while staying below the\nmaximum import length defined.\n\nExample:\n\n```python\nfrom __future__ import (absolute_import, division,\n print_function, unicode_literals)\n```\n\nWill be produced instead of:\n\n```python\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n```\n\nTo enable this set `balanced_wrapping` to `True` in your config or pass\nthe `-e` option into the command line utility.\n\n## Custom Sections and Ordering\n\nisort provides configuration options to change almost every aspect of how\nimports are organized, ordered, or grouped together in sections.\n\n[Click here](https://pycqa.github.io/isort/docs/configuration/custom_sections_and_ordering.html) for an overview of all these options.\n\n## Skip processing of imports (outside of configuration)\n\nTo make isort ignore a single import simply add a comment at the end of\nthe import line containing the text `isort:skip`:\n\n```python\nimport module # isort:skip\n```\n\nor:\n\n```python\nfrom xyz import (abc, # isort:skip\n yo,\n hey)\n```\n\nTo make isort skip an entire file simply add `isort:skip_file` to the\nmodule\'s doc string:\n\n```python\n""" my_module.py\n Best module ever\n\n isort:skip_file\n"""\n\nimport b\nimport a\n```\n\n## Adding or removing an import from multiple files\n\nisort can be ran or configured to add / remove imports automatically.\n\n[See a complete guide here.](https://pycqa.github.io/isort/docs/configuration/add_or_remove_imports.html)\n\n## Using isort to verify code\n\nThe `--check-only` option\n-------------------------\n\nisort can also be used to verify that code is correctly formatted\nby running it with `-c`. Any files that contain incorrectly sorted\nand/or formatted imports will be outputted to `stderr`.\n\n```bash\nisort **/*.py -c -v\n\nSUCCESS: /home/timothy/Projects/Open_Source/isort/isort_kate_plugin.py Everything Looks Good!\nERROR: /home/timothy/Projects/Open_Source/isort/isort/isort.py Imports are incorrectly sorted.\n```\n\nOne great place this can be used is with a pre-commit git hook, such as\nthis one by \\@acdha:\n\n\n\nThis can help to ensure a certain level of code quality throughout a\nproject.\n\n## Git hook\n\nisort provides a hook function that can be integrated into your Git\npre-commit script to check Python code before committing.\n\n[More info here.](https://pycqa.github.io/isort/docs/configuration/git_hook.html)\n\n## Setuptools integration\n\nUpon installation, isort enables a `setuptools` command that checks\nPython files declared by your project.\n\n[More info here.](https://pycqa.github.io/isort/docs/configuration/setuptools_integration.html)\n\n## Spread the word\n\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n\nPlace this badge at the top of your repository to let others know your project uses isort.\n\nFor README.md:\n\n```markdown\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n```\n\nOr README.rst:\n\n```rst\n.. image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336\n :target: https://pycqa.github.io/isort/\n```\n\n## Security contact information\n\nTo report a security vulnerability, please use the [Tidelift security\ncontact](https://tidelift.com/security). 
Tidelift will coordinate the\nfix and disclosure.\n\n## Why isort?\n\nisort simply stands for import sort. It was originally called\n"sortImports" however I got tired of typing the extra characters and\ncame to the realization camelCase is not pythonic.\n\nI wrote isort because in an organization I used to work in the manager\ncame in one day and decided all code must have alphabetically sorted\nimports. The code base was huge - and he meant for us to do it by hand.\nHowever, being a programmer - I\\\'m too lazy to spend 8 hours mindlessly\nperforming a function, but not too lazy to spend 16 hours automating it.\nI was given permission to open source sortImports and here we are :)\n\n------------------------------------------------------------------------\n\n[Get professionally supported isort with the Tidelift\nSubscription](https://tidelift.com/subscription/pkg/pypi-isort?utm_source=pypi-isort&utm_medium=referral&utm_campaign=readme)\n\nProfessional support for isort is available as part of the [Tidelift\nSubscription](https://tidelift.com/subscription/pkg/pypi-isort?utm_source=pypi-isort&utm_medium=referral&utm_campaign=readme).\nTidelift gives software development teams a single source for purchasing\nand maintaining their software, with professional grade assurances from\nthe experts who know it best, while seamlessly integrating with existing\ntools.\n\n------------------------------------------------------------------------\n\nThanks and I hope you find isort useful!\n\n~Timothy Crosley\n', 'author': 'Timothy Crosley', 'author_email': 'timothy.crosley@gmail.com', 'maintainer': 'None', 'maintainer_email': 'None', 'url': 'https://pycqa.github.io/isort/', 'packages': packages, 'package_data': package_data, 'extras_require': extras_require, 'entry_points': entry_points, 'python_requires': '>=3.8.0', } setup(**setup_kwargs) isort-5.13.2/PKG-INFO0000644000000000000000000002770600000000000010742 0ustar00Metadata-Version: 2.1 Name: isort Version: 5.13.2 Summary: A Python utility / library to sort Python imports. 
Home-page: https://pycqa.github.io/isort/
License: MIT
Keywords: Refactor,Lint,Imports,Sort,Clean
Author: Timothy Crosley
Author-email: timothy.crosley@gmail.com
Requires-Python: >=3.8.0
Classifier: Development Status :: 6 - Mature
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Provides-Extra: colors
Provides-Extra: plugins
Requires-Dist: colorama (>=0.4.6) ; extra == "colors"
Project-URL: Changelog, https://github.com/pycqa/isort/blob/main/CHANGELOG.md
Project-URL: Documentation, https://pycqa.github.io/isort/
Project-URL: Repository, https://github.com/pycqa/isort
Description-Content-Type: text/markdown
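The `Provides-Extra: colors` and `Requires-Dist: colorama (>=0.4.6) ; extra == "colors"` fields above declare isort's optional color output as a pip extra backed by `colorama`, matching the `extras_require` block in setup.py. A minimal sketch of opting into that extra at install time (the quotes are only needed for shells that expand square brackets):

```bash
# Plain install, no optional dependencies.
pip install isort

# Install with the "colors" extra so colorama is pulled in as well.
pip install "isort[colors]"
```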