ansible-core-2.16.3/COPYING

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary.
To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. 
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. 
Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". 
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. 
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year> <name of author>

    This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program> Copyright (C) <year> <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>.
ansible-core-2.16.3/MANIFEST.in

include COPYING
include bin/*
include changelogs/CHANGELOG*.rst
include changelogs/changelog.yaml
include licenses/*.txt
include requirements.txt
recursive-include packaging *.py *.j2
recursive-include test/integration *
recursive-include test/sanity *.in *.json *.py *.txt
recursive-include test/support *.py *.ps1 *.psm1 *.cs *.md
recursive-include test/units *

ansible-core-2.16.3/PKG-INFO

Metadata-Version: 2.1
Name: ansible-core
Version: 2.16.3
Summary: Radically simple IT automation
Home-page: https://ansible.com/
Author: Ansible, Inc.
Author-email: info@ansible.com
License: GPLv3+
Project-URL: Bug Tracker, https://github.com/ansible/ansible/issues
Project-URL: CI: Azure Pipelines, https://dev.azure.com/ansible/ansible/
Project-URL: Code of Conduct, https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
Project-URL: Documentation, https://docs.ansible.com/ansible-core/
Project-URL: Mailing lists, https://docs.ansible.com/ansible/latest/community/communication.html#mailing-list-information
Project-URL: Source Code, https://github.com/ansible/ansible
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Classifier: Natural Language :: English
Classifier: Operating System :: POSIX
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: System :: Installation/Setup
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Utilities
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: COPYING
Requires-Dist: jinja2>=3.0.0
Requires-Dist: PyYAML>=5.1
Requires-Dist: cryptography
Requires-Dist: packaging
Requires-Dist: resolvelib<1.1.0,>=0.5.3

[![PyPI version](https://img.shields.io/pypi/v/ansible-core.svg)](https://pypi.org/project/ansible-core)
[![Docs badge](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://docs.ansible.com/ansible/latest/)
[![Chat badge](https://img.shields.io/badge/chat-IRC-brightgreen.svg)](https://docs.ansible.com/ansible/latest/community/communication.html)
[![Build Status](https://dev.azure.com/ansible/ansible/_apis/build/status/CI?branchName=devel)](https://dev.azure.com/ansible/ansible/_build/latest?definitionId=20&branchName=devel)
[![Ansible Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-silver.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html)
[![Ansible mailing lists](https://img.shields.io/badge/mailing%20lists-Ansible-orange.svg)](https://docs.ansible.com/ansible/latest/community/communication.html#mailing-list-information)
[![Repository License](https://img.shields.io/badge/license-GPL%20v3.0-brightgreen.svg)](COPYING)
[![Ansible CII Best Practices certification](https://bestpractices.coreinfrastructure.org/projects/2372/badge)](https://bestpractices.coreinfrastructure.org/projects/2372)

# Ansible

Ansible is a radically simple IT automation system.
It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. Ansible makes complex changes like zero-downtime rolling updates with load balancers easy. More information on the Ansible [website](https://ansible.com/).

## Design Principles

* Have an extremely simple setup process with a minimal learning curve.
* Manage machines quickly and in parallel.
* Avoid custom-agents and additional open ports, be agentless by leveraging the existing SSH daemon.
* Describe infrastructure in a language that is both machine and human friendly.
* Focus on security and easy auditability/review/rewriting of content.
* Manage new remote machines instantly, without bootstrapping any software.
* Allow module development in any dynamic language, not just Python.
* Be usable as non-root.
* Be the easiest IT automation system to use, ever.

## Use Ansible

You can install a released version of Ansible with `pip` or a package manager. See our [installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) for details on installing Ansible on a variety of platforms.

Power users and developers can run the `devel` branch, which has the latest features and fixes, directly. Although it is reasonably stable, you are more likely to encounter breaking changes when running the `devel` branch. We recommend getting involved in the Ansible community if you want to run the `devel` branch.

## Get Involved

* Read [Community Information](https://docs.ansible.com/ansible/latest/community) for all kinds of ways to contribute to and interact with the project, including mailing list information and how to submit bug reports and code to Ansible.
* Join a [Working Group](https://github.com/ansible/community/wiki), an organized community devoted to a specific technology domain or platform.
* Submit a proposed code update through a pull request to the `devel` branch.
* Talk to us before making larger changes to avoid duplicate efforts. This not only helps everyone know what is going on, but it also helps save time and effort if we decide some changes are needed.
* For a list of email lists, IRC channels and Working Groups, see the [Communication page](https://docs.ansible.com/ansible/latest/community/communication.html)

## Coding Guidelines

We document our Coding Guidelines in the [Developer Guide](https://docs.ansible.com/ansible/devel/dev_guide/). We particularly suggest you review:

* [Contributing your module to Ansible](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_checklist.html)
* [Conventions, tips, and pitfalls](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_best_practices.html)

## Branch Info

* The `devel` branch corresponds to the release actively under development.
* The `stable-2.X` branches correspond to stable releases.
* Create a branch based on `devel` and set up a [dev environment](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#common-environment-setup) if you want to open a PR.
* See the [Ansible release and maintenance](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) page for information about active branches.

## Roadmap

Based on team and community feedback, an initial roadmap will be published for a major or minor version (ex: 2.7, 2.8). The [Ansible Roadmap page](https://docs.ansible.com/ansible/devel/roadmap/) details what is planned and how to influence the roadmap.
## Authors

Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) and has contributions from over 5000 users (and growing). Thanks everyone!

[Ansible](https://www.ansible.com) is sponsored by [Red Hat, Inc.](https://www.redhat.com)

## License

GNU General Public License v3.0 or later

See [COPYING](COPYING) to see the full text.

ansible-core-2.16.3/README.md

[![PyPI version](https://img.shields.io/pypi/v/ansible-core.svg)](https://pypi.org/project/ansible-core)
[![Docs badge](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://docs.ansible.com/ansible/latest/)
[![Chat badge](https://img.shields.io/badge/chat-IRC-brightgreen.svg)](https://docs.ansible.com/ansible/latest/community/communication.html)
[![Build Status](https://dev.azure.com/ansible/ansible/_apis/build/status/CI?branchName=devel)](https://dev.azure.com/ansible/ansible/_build/latest?definitionId=20&branchName=devel)
[![Ansible Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-silver.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html)
[![Ansible mailing lists](https://img.shields.io/badge/mailing%20lists-Ansible-orange.svg)](https://docs.ansible.com/ansible/latest/community/communication.html#mailing-list-information)
[![Repository License](https://img.shields.io/badge/license-GPL%20v3.0-brightgreen.svg)](COPYING)
[![Ansible CII Best Practices certification](https://bestpractices.coreinfrastructure.org/projects/2372/badge)](https://bestpractices.coreinfrastructure.org/projects/2372)

# Ansible

Ansible is a radically simple IT automation system. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. Ansible makes complex changes like zero-downtime rolling updates with load balancers easy. More information on the Ansible [website](https://ansible.com/).

## Design Principles

* Have an extremely simple setup process with a minimal learning curve.
* Manage machines quickly and in parallel.
* Avoid custom-agents and additional open ports, be agentless by leveraging the existing SSH daemon.
* Describe infrastructure in a language that is both machine and human friendly.
* Focus on security and easy auditability/review/rewriting of content.
* Manage new remote machines instantly, without bootstrapping any software.
* Allow module development in any dynamic language, not just Python.
* Be usable as non-root.
* Be the easiest IT automation system to use, ever.

## Use Ansible

You can install a released version of Ansible with `pip` or a package manager. See our [installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) for details on installing Ansible on a variety of platforms.

Power users and developers can run the `devel` branch, which has the latest features and fixes, directly. Although it is reasonably stable, you are more likely to encounter breaking changes when running the `devel` branch. We recommend getting involved in the Ansible community if you want to run the `devel` branch.
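For a quick sanity check after a `pip` install, you can confirm the resolved release from Python. This is an illustrative snippet, not part of the upstream README; any released version will print here:

```python
# Verify which ansible-core release pip installed
from importlib.metadata import version

print(version("ansible-core"))  # e.g. "2.16.3"
```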
## Get Involved

* Read [Community Information](https://docs.ansible.com/ansible/latest/community) for all kinds of ways to contribute to and interact with the project, including mailing list information and how to submit bug reports and code to Ansible.
* Join a [Working Group](https://github.com/ansible/community/wiki), an organized community devoted to a specific technology domain or platform.
* Submit a proposed code update through a pull request to the `devel` branch.
* Talk to us before making larger changes to avoid duplicate efforts. This not only helps everyone know what is going on, but it also helps save time and effort if we decide some changes are needed.
* For a list of email lists, IRC channels and Working Groups, see the [Communication page](https://docs.ansible.com/ansible/latest/community/communication.html)

## Coding Guidelines

We document our Coding Guidelines in the [Developer Guide](https://docs.ansible.com/ansible/devel/dev_guide/). We particularly suggest you review:

* [Contributing your module to Ansible](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_checklist.html)
* [Conventions, tips, and pitfalls](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_best_practices.html)

## Branch Info

* The `devel` branch corresponds to the release actively under development.
* The `stable-2.X` branches correspond to stable releases.
* Create a branch based on `devel` and set up a [dev environment](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#common-environment-setup) if you want to open a PR.
* See the [Ansible release and maintenance](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) page for information about active branches.

## Roadmap

Based on team and community feedback, an initial roadmap will be published for a major or minor version (ex: 2.7, 2.8). The [Ansible Roadmap page](https://docs.ansible.com/ansible/devel/roadmap/) details what is planned and how to influence the roadmap.

## Authors

Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) and has contributions from over 5000 users (and growing). Thanks everyone!

[Ansible](https://www.ansible.com) is sponsored by [Red Hat, Inc.](https://www.redhat.com)

## License

GNU General Public License v3.0 or later

See [COPYING](COPYING) to see the full text.

ansible-core-2.16.3/bin/ansible

#!/usr/bin/env python
# Copyright: (c) 2012, Michael DeHaan
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI

from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.module_utils.common.text.converters import to_text
from ansible.parsing.splitter import parse_kv
from ansible.parsing.utils.yaml import from_yaml
from ansible.playbook import Playbook
from ansible.playbook.play import Play
from ansible.utils.display import Display

display = Display()


class AdHocCLI(CLI):
    ''' is an extra-simple tool/framework/API for doing 'remote things'.
        this command allows you to define and run a single task 'playbook' against a set of hosts
    '''

    name = 'ansible'

    def init_parser(self):
        ''' create an options parser for bin/ansible '''
        super(AdHocCLI, self).init_parser(usage='%prog <host-pattern> [options]',
                                          desc="Define and run a single task 'playbook' against a set of hosts",
                                          epilog="Some actions do not make sense in Ad-Hoc (include, meta, etc)")

        opt_help.add_runas_options(self.parser)
        opt_help.add_inventory_options(self.parser)
        opt_help.add_async_options(self.parser)
        opt_help.add_output_options(self.parser)
        opt_help.add_connect_options(self.parser)
        opt_help.add_check_options(self.parser)
        opt_help.add_runtask_options(self.parser)
        opt_help.add_vault_options(self.parser)
        opt_help.add_fork_options(self.parser)
        opt_help.add_module_options(self.parser)
        opt_help.add_basedir_options(self.parser)
        opt_help.add_tasknoplay_options(self.parser)

        # options unique to ansible ad-hoc
        self.parser.add_argument('-a', '--args', dest='module_args',
                                 help="The action's options in space separated k=v format: -a 'opt1=val1 opt2=val2' "
                                      "or a json string: -a '{\"opt1\": \"val1\", \"opt2\": \"val2\"}'",
                                 default=C.DEFAULT_MODULE_ARGS)
        self.parser.add_argument('-m', '--module-name', dest='module_name',
                                 help="Name of the action to execute (default=%s)" % C.DEFAULT_MODULE_NAME,
                                 default=C.DEFAULT_MODULE_NAME)
        self.parser.add_argument('args', metavar='pattern', help='host pattern')
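    # Illustrative invocations of the two -a/--args formats accepted above
    # (examples only, not part of the upstream script):
    #   ansible localhost -m command -a 'echo hello'
    #   ansible localhost -m copy -a '{"dest": "/tmp/motd", "content": "hi"}'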

    def post_process_args(self, options):
        '''Post process and validate options for bin/ansible '''

        options = super(AdHocCLI, self).post_process_args(options)

        display.verbosity = options.verbosity
        self.validate_conflicts(options, runas_opts=True, fork_opts=True)

        return options

    def _play_ds(self, pattern, async_val, poll):
        check_raw = context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS

        module_args_raw = context.CLIARGS['module_args']
        module_args = None
        if module_args_raw and module_args_raw.startswith('{') and module_args_raw.endswith('}'):
            try:
                module_args = from_yaml(module_args_raw.strip(), json_only=True)
            except AnsibleParserError:
                pass

        if not module_args:
            module_args = parse_kv(module_args_raw, check_raw=check_raw)

        mytask = {'action': {'module': context.CLIARGS['module_name'], 'args': module_args},
                  'timeout': context.CLIARGS['task_timeout']}

        # avoid adding to tasks that don't support it, unless set, then give user an error
        if context.CLIARGS['module_name'] not in C._ACTION_ALL_INCLUDE_ROLE_TASKS and any(frozenset((async_val, poll))):
            mytask['async_val'] = async_val
            mytask['poll'] = poll

        return dict(
            name="Ansible Ad-Hoc",
            hosts=pattern,
            gather_facts='no',
            tasks=[mytask])
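    # Illustration of the datastructure built above (comment example, assuming
    # default options): `ansible web -m ping` with no async flags yields roughly
    #   {'name': 'Ansible Ad-Hoc', 'hosts': 'web', 'gather_facts': 'no',
    #    'tasks': [{'action': {'module': 'ping', 'args': {}},
    #               'timeout': 0}]}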

    def run(self):
        ''' create and execute the single task playbook '''

        super(AdHocCLI, self).run()

        # only thing left should be host pattern
        pattern = to_text(context.CLIARGS['args'], errors='surrogate_or_strict')

        # handle password prompts
        sshpass = None
        becomepass = None

        (sshpass, becomepass) = self.ask_passwords()
        passwords = {'conn_pass': sshpass, 'become_pass': becomepass}

        # get basic objects
        loader, inventory, variable_manager = self._play_prereqs()

        # get list of hosts to execute against
        try:
            hosts = self.get_host_list(inventory, context.CLIARGS['subset'], pattern)
        except AnsibleError:
            if context.CLIARGS['subset']:
                raise
            else:
                hosts = []
                display.warning("No hosts matched, nothing to do")

        # just listing hosts?
        if context.CLIARGS['listhosts']:
            display.display('  hosts (%d):' % len(hosts))
            for host in hosts:
                display.display('    %s' % host)
            return 0

        # verify we have arguments if we know we need em
        if context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS and not context.CLIARGS['module_args']:
            err = "No argument passed to %s module" % context.CLIARGS['module_name']
            if pattern.endswith(".yml"):
                err = err + ' (did you mean to run ansible-playbook?)'
            raise AnsibleOptionsError(err)

        # Avoid modules that don't work with ad-hoc
        if context.CLIARGS['module_name'] in C._ACTION_IMPORT_PLAYBOOK:
            raise AnsibleOptionsError("'%s' is not a valid action for ad-hoc commands"
                                      % context.CLIARGS['module_name'])

        # construct playbook objects to wrap task
        play_ds = self._play_ds(pattern, context.CLIARGS['seconds'], context.CLIARGS['poll_interval'])
        play = Play().load(play_ds, variable_manager=variable_manager, loader=loader)

        # used in start callback
        playbook = Playbook(loader)
        playbook._entries.append(play)
        playbook._file_name = '__adhoc_playbook__'

        if self.callback:
            cb = self.callback
        elif context.CLIARGS['one_line']:
            cb = 'oneline'
        # Respect custom 'stdout_callback' only with enabled 'bin_ansible_callbacks'
        elif C.DEFAULT_LOAD_CALLBACK_PLUGINS and C.DEFAULT_STDOUT_CALLBACK != 'default':
            cb = C.DEFAULT_STDOUT_CALLBACK
        else:
            cb = 'minimal'

        run_tree = False
        if context.CLIARGS['tree']:
            C.CALLBACKS_ENABLED.append('tree')
            C.TREE_DIR = context.CLIARGS['tree']
            run_tree = True

        # now create a task queue manager to execute the play
        self._tqm = None
        try:
            self._tqm = TaskQueueManager(
                inventory=inventory,
                variable_manager=variable_manager,
                loader=loader,
                passwords=passwords,
                stdout_callback=cb,
                run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS,
                run_tree=run_tree,
                forks=context.CLIARGS['forks'],
            )

            self._tqm.load_callbacks()
            self._tqm.send_callback('v2_playbook_on_start', playbook)

            result = self._tqm.run(play)

            self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
        finally:
            if self._tqm:
                self._tqm.cleanup()
            if loader:
                loader.cleanup_all_tmp_files()

        return result


def main(args=None):
    AdHocCLI.cli_executor(args)


if __name__ == '__main__':
    main()
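# Illustrative uses of this entry point (examples only):
#   ansible all -m ping -i inventory.ini
#   ansible web -b -m service -a 'name=nginx state=restarted'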
ansible-core-2.16.3/bin/ansible-config

#!/usr/bin/env python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI

import os
import yaml
import shlex
import subprocess

from collections.abc import Mapping

from ansible import context
import ansible.plugins.loader as plugin_loader

from ansible import constants as C
from ansible.cli.arguments import option_helpers as opt_help
from ansible.config.manager import ConfigManager, Setting
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.common.text.converters import to_native, to_text, to_bytes
from ansible.module_utils.common.json import json_dump
from ansible.module_utils.six import string_types
from ansible.parsing.quoting import is_quoted
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.utils.color import stringc
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath

display = Display()


def yaml_dump(data, default_flow_style=False, default_style=None):
    return yaml.dump(data, Dumper=AnsibleDumper,
                     default_flow_style=default_flow_style,
                     default_style=default_style)


def yaml_short(data):
    return yaml_dump(data, default_flow_style=True, default_style="''")


def get_constants():
    ''' helper method to ensure we can template based on existing constants '''
    if not hasattr(get_constants, 'cvars'):
        get_constants.cvars = {k: getattr(C, k) for k in dir(C) if not k.startswith('__')}
    return get_constants.cvars
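# For example (illustrative, not part of the upstream script): yaml_short({'forks': 5})
# renders the mapping on a single flow-style line, and get_constants() builds and caches
# a {name: value} dict of the public attributes of `C`, so templated configuration
# defaults can be resolved against the existing constants.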
self.config._config_file if self.config_file: try: if not os.path.exists(self.config_file): raise AnsibleOptionsError("%s does not exist or is not accessible" % (self.config_file)) elif not os.path.isfile(self.config_file): raise AnsibleOptionsError("%s is not a valid file" % (self.config_file)) os.environ['ANSIBLE_CONFIG'] = to_native(self.config_file) except Exception: if context.CLIARGS['action'] in ['view']: raise elif context.CLIARGS['action'] in ['edit', 'update']: display.warning("File does not exist, used empty file: %s" % self.config_file) elif context.CLIARGS['action'] == 'view': raise AnsibleError('Invalid or no config file was supplied') # run the requested action context.CLIARGS['func']() def execute_update(self): ''' Updates a single setting in the specified ansible.cfg ''' raise AnsibleError("Option not implemented yet") # pylint: disable=unreachable if context.CLIARGS['setting'] is None: raise AnsibleOptionsError("update option requires a setting to update") (entry, value) = context.CLIARGS['setting'].split('=') if '.' in entry: (section, option) = entry.split('.') else: section = 'defaults' option = entry subprocess.call([ 'ansible', '-m', 'ini_file', 'localhost', '-c', 'local', '-a', '"dest=%s section=%s option=%s value=%s backup=yes"' % (self.config_file, section, option, value) ]) def execute_view(self): ''' Displays the current config file ''' try: with open(self.config_file, 'rb') as f: self.pager(to_text(f.read(), errors='surrogate_or_strict')) except Exception as e: raise AnsibleError("Failed to open config file: %s" % to_native(e)) def execute_edit(self): ''' Opens ansible.cfg in the default EDITOR ''' raise AnsibleError("Option not implemented yet") # pylint: disable=unreachable try: editor = shlex.split(C.config.get_config_value('EDITOR')) editor.append(self.config_file) subprocess.call(editor) except Exception as e: raise AnsibleError("Failed to open editor: %s" % to_native(e)) def _list_plugin_settings(self, ptype, plugins=None): entries = {} loader = getattr(plugin_loader, '%s_loader' % ptype) # build list if plugins: plugin_cs = [] for plugin in plugins: p = loader.get(plugin, class_only=True) if p is None: display.warning("Skipping %s as we could not find matching plugin" % plugin) else: plugin_cs.append(p) else: plugin_cs = loader.all(class_only=True) # iterate over class instances for plugin in plugin_cs: finalname = name = plugin._load_name if name.startswith('_'): # alias or deprecated if os.path.islink(plugin._original_path): continue else: finalname = name.replace('_', '', 1) + ' (DEPRECATED)' entries[finalname] = self.config.get_configuration_definitions(ptype, name) return entries def _list_entries_from_args(self): ''' build a dict with the list requested configs ''' config_entries = {} if context.CLIARGS['type'] in ('base', 'all'): # this dumps main/common configs config_entries = self.config.get_configuration_definitions(ignore_private=True) if context.CLIARGS['type'] != 'base': config_entries['PLUGINS'] = {} if context.CLIARGS['type'] == 'all': # now each plugin type for ptype in C.CONFIGURABLE_PLUGINS: config_entries['PLUGINS'][ptype.upper()] = self._list_plugin_settings(ptype) elif context.CLIARGS['type'] != 'base': config_entries['PLUGINS'][context.CLIARGS['type']] = self._list_plugin_settings(context.CLIARGS['type'], context.CLIARGS['args']) return config_entries def execute_list(self): ''' list and output available configs ''' config_entries = self._list_entries_from_args() if context.CLIARGS['format'] == 'yaml': output = 
yaml_dump(config_entries) elif context.CLIARGS['format'] == 'json': output = json_dump(config_entries) self.pager(to_text(output, errors='surrogate_or_strict')) def _get_settings_vars(self, settings, subkey): data = [] if context.CLIARGS['commented']: prefix = '#' else: prefix = '' for setting in settings: if not settings[setting].get('description'): continue default = settings[setting].get('default', '') if subkey == 'env': stype = settings[setting].get('type', '') if stype == 'boolean': if default: default = '1' else: default = '0' elif default: if stype == 'list': if not isinstance(default, string_types): # python lists are not valid env ones try: default = ', '.join(default) except Exception as e: # list of other stuff default = '%s' % to_native(default) if isinstance(default, string_types) and not is_quoted(default): default = shlex.quote(default) elif default is None: default = '' if subkey in settings[setting] and settings[setting][subkey]: entry = settings[setting][subkey][-1]['name'] if isinstance(settings[setting]['description'], string_types): desc = settings[setting]['description'] else: desc = '\n#'.join(settings[setting]['description']) name = settings[setting].get('name', setting) data.append('# %s(%s): %s' % (name, settings[setting].get('type', 'string'), desc)) # TODO: might need quoting and value coercion depending on type if subkey == 'env': if entry.startswith('_ANSIBLE_'): continue data.append('%s%s=%s' % (prefix, entry, default)) elif subkey == 'vars': if entry.startswith('_ansible_'): continue data.append(prefix + '%s: %s' % (entry, to_text(yaml_short(default), errors='surrogate_or_strict'))) data.append('') return data def _get_settings_ini(self, settings, seen): sections = {} for o in sorted(settings.keys()): opt = settings[o] if not isinstance(opt, Mapping): # recursed into one of the few settings that is a mapping, now hitting it's strings continue if not opt.get('description'): # its a plugin new_sections = self._get_settings_ini(opt, seen) for s in new_sections: if s in sections: sections[s].extend(new_sections[s]) else: sections[s] = new_sections[s] continue if isinstance(opt['description'], string_types): desc = '# (%s) %s' % (opt.get('type', 'string'), opt['description']) else: desc = "# (%s) " % opt.get('type', 'string') desc += "\n# ".join(opt['description']) if 'ini' in opt and opt['ini']: entry = opt['ini'][-1] if entry['section'] not in seen: seen[entry['section']] = [] if entry['section'] not in sections: sections[entry['section']] = [] # avoid dupes if entry['key'] not in seen[entry['section']]: seen[entry['section']].append(entry['key']) default = opt.get('default', '') if opt.get('type', '') == 'list' and not isinstance(default, string_types): # python lists are not valid ini ones default = ', '.join(default) elif default is None: default = '' if context.CLIARGS['commented']: entry['key'] = ';%s' % entry['key'] key = desc + '\n%s=%s' % (entry['key'], default) sections[entry['section']].append(key) return sections def execute_init(self): """Create initial configuration""" seen = {} data = [] config_entries = self._list_entries_from_args() plugin_types = config_entries.pop('PLUGINS', None) if context.CLIARGS['format'] == 'ini': sections = self._get_settings_ini(config_entries, seen) if plugin_types: for ptype in plugin_types: plugin_sections = self._get_settings_ini(plugin_types[ptype], seen) for s in plugin_sections: if s in sections: sections[s].extend(plugin_sections[s]) else: sections[s] = plugin_sections[s] if sections: for section in 
sections.keys(): data.append('[%s]' % section) for key in sections[section]: data.append(key) data.append('') data.append('') elif context.CLIARGS['format'] in ('env', 'vars'): # TODO: add yaml once that config option is added data = self._get_settings_vars(config_entries, context.CLIARGS['format']) if plugin_types: for ptype in plugin_types: for plugin in plugin_types[ptype].keys(): data.extend(self._get_settings_vars(plugin_types[ptype][plugin], context.CLIARGS['format'])) self.pager(to_text('\n'.join(data), errors='surrogate_or_strict')) def _render_settings(self, config): entries = [] for setting in sorted(config): changed = (config[setting].origin not in ('default', 'REQUIRED')) if context.CLIARGS['format'] == 'display': if isinstance(config[setting], Setting): # proceed normally if config[setting].origin == 'default': color = 'green' elif config[setting].origin == 'REQUIRED': # should include '_terms', '_input', etc color = 'red' else: color = 'yellow' msg = "%s(%s) = %s" % (setting, config[setting].origin, config[setting].value) else: color = 'green' msg = "%s(%s) = %s" % (setting, 'default', config[setting].get('default')) entry = stringc(msg, color) else: entry = {} for key in config[setting]._fields: entry[key] = getattr(config[setting], key) if not context.CLIARGS['only_changed'] or changed: entries.append(entry) return entries def _get_global_configs(self): config = self.config.get_configuration_definitions(ignore_private=True).copy() for setting in config.keys(): v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, variables=get_constants()) config[setting] = Setting(setting, v, o, None) return self._render_settings(config) def _get_plugin_configs(self, ptype, plugins): # prep loading loader = getattr(plugin_loader, '%s_loader' % ptype) # acumulators output = [] config_entries = {} # build list if plugins: plugin_cs = [] for plugin in plugins: p = loader.get(plugin, class_only=True) if p is None: display.warning("Skipping %s as we could not find matching plugin" % plugin) else: plugin_cs.append(loader.get(plugin, class_only=True)) else: plugin_cs = loader.all(class_only=True) for plugin in plugin_cs: # in case of deprecastion they diverge finalname = name = plugin._load_name if name.startswith('_'): if os.path.islink(plugin._original_path): # skip alias continue # deprecated, but use 'nice name' finalname = name.replace('_', '', 1) + ' (DEPRECATED)' # default entries per plugin config_entries[finalname] = self.config.get_configuration_definitions(ptype, name) try: # populate config entries by loading plugin dump = loader.get(name, class_only=True) except Exception as e: display.warning('Skipping "%s" %s plugin, as we cannot load plugin to check config due to : %s' % (name, ptype, to_native(e))) continue # actually get the values for setting in config_entries[finalname].keys(): try: v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, plugin_type=ptype, plugin_name=name, variables=get_constants()) except AnsibleError as e: if to_text(e).startswith('No setting was provided for required configuration'): v = None o = 'REQUIRED' else: raise e if v is None and o is None: # not all cases will be error o = 'REQUIRED' config_entries[finalname][setting] = Setting(setting, v, o, None) # pretty please! results = self._render_settings(config_entries[finalname]) if results: if context.CLIARGS['format'] == 'display': # avoid header for empty lists (only changed!) 
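# The display-format header below underlines the plugin name, e.g.
#   ssh:
#   ___
# and is followed by one stringc()-colored "SETTING(origin) = value" line
# per entry returned from _render_settings() above.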
output.append('\n%s:\n%s' % (finalname, '_' * len(finalname))) output.extend(results) else: output.append({finalname: results}) return output def execute_dump(self): ''' Shows the current settings, merges ansible.cfg if specified ''' if context.CLIARGS['type'] == 'base': # deal with base output = self._get_global_configs() elif context.CLIARGS['type'] == 'all': # deal with base output = self._get_global_configs() # deal with plugins for ptype in C.CONFIGURABLE_PLUGINS: plugin_list = self._get_plugin_configs(ptype, context.CLIARGS['args']) if context.CLIARGS['format'] == 'display': if not context.CLIARGS['only_changed'] or plugin_list: output.append('\n%s:\n%s' % (ptype.upper(), '=' * len(ptype))) output.extend(plugin_list) else: if ptype in ('modules', 'doc_fragments'): pname = ptype.upper() else: pname = '%s_PLUGINS' % ptype.upper() output.append({pname: plugin_list}) else: # deal with plugins output = self._get_plugin_configs(context.CLIARGS['type'], context.CLIARGS['args']) if context.CLIARGS['format'] == 'display': text = '\n'.join(output) if context.CLIARGS['format'] == 'yaml': text = yaml_dump(output) elif context.CLIARGS['format'] == 'json': text = json_dump(output) self.pager(to_text(text, errors='surrogate_or_strict')) def main(args=None): ConfigCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-connection0000755000000000000000000003232214556006441017163 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import fcntl import hashlib import io import os import pickle import signal import socket import sys import time import traceback import errno import json from contextlib import contextmanager from ansible import constants as C from ansible.cli.arguments import option_helpers as opt_help from ansible.module_utils.common.text.converters import to_bytes, to_text from ansible.module_utils.connection import Connection, ConnectionError, send_data, recv_data from ansible.module_utils.service import fork_process from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder from ansible.playbook.play_context import PlayContext from ansible.plugins.loader import connection_loader, init_plugin_loader from ansible.utils.path import unfrackpath, makedirs_safe from ansible.utils.display import Display from ansible.utils.jsonrpc import JsonRpcServer display = Display() def read_stream(byte_stream): size = int(byte_stream.readline().strip()) data = byte_stream.read(size) if len(data) < size: raise Exception("EOF found before data was complete") data_hash = to_text(byte_stream.readline().strip()) if data_hash != hashlib.sha1(data).hexdigest(): raise Exception("Read {0} bytes, but data did not match checksum".format(size)) # restore escaped loose \r characters data = data.replace(br'\r', b'\r') return data @contextmanager def file_lock(lock_path): """ Uses contextmanager to create and release a file lock based on the given path. This allows us to create locks using `with file_lock()` to prevent deadlocks related to failure to unlock properly. 
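Example (illustrative sketch; the lock path is a placeholder):

    with file_lock('/tmp/.example.lock'):
        pass  # exclusive critical section; the lock is released on exit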
""" lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600) fcntl.lockf(lock_fd, fcntl.LOCK_EX) yield fcntl.lockf(lock_fd, fcntl.LOCK_UN) os.close(lock_fd) class ConnectionProcess(object): ''' The connection process wraps around a Connection object that manages the connection to a remote device that persists over the playbook ''' def __init__(self, fd, play_context, socket_path, original_path, task_uuid=None, ansible_playbook_pid=None): self.play_context = play_context self.socket_path = socket_path self.original_path = original_path self._task_uuid = task_uuid self.fd = fd self.exception = None self.srv = JsonRpcServer() self.sock = None self.connection = None self._ansible_playbook_pid = ansible_playbook_pid def start(self, options): messages = list() result = {} try: messages.append(('vvvv', 'control socket path is %s' % self.socket_path)) # If this is a relative path (~ gets expanded later) then plug the # key's path on to the directory we originally came from, so we can # find it now that our cwd is / if self.play_context.private_key_file and self.play_context.private_key_file[0] not in '~/': self.play_context.private_key_file = os.path.join(self.original_path, self.play_context.private_key_file) self.connection = connection_loader.get(self.play_context.connection, self.play_context, '/dev/null', task_uuid=self._task_uuid, ansible_playbook_pid=self._ansible_playbook_pid) try: self.connection.set_options(direct=options) except ConnectionError as exc: messages.append(('debug', to_text(exc))) raise ConnectionError('Unable to decode JSON from response set_options. See the debug log for more information.') self.connection._socket_path = self.socket_path self.srv.register(self.connection) messages.extend([('vvvv', msg) for msg in sys.stdout.getvalue().splitlines()]) self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.bind(self.socket_path) self.sock.listen(1) messages.append(('vvvv', 'local domain socket listeners started successfully')) except Exception as exc: messages.extend(self.connection.pop_messages()) result['error'] = to_text(exc) result['exception'] = traceback.format_exc() finally: result['messages'] = messages self.fd.write(json.dumps(result, cls=AnsibleJSONEncoder)) self.fd.close() def run(self): try: log_messages = self.connection.get_option('persistent_log_messages') while not self.connection._conn_closed: signal.signal(signal.SIGALRM, self.connect_timeout) signal.signal(signal.SIGTERM, self.handler) signal.alarm(self.connection.get_option('persistent_connect_timeout')) self.exception = None (s, addr) = self.sock.accept() signal.alarm(0) signal.signal(signal.SIGALRM, self.command_timeout) while True: data = recv_data(s) if not data: break if log_messages: display.display("jsonrpc request: %s" % data, log_only=True) request = json.loads(to_text(data, errors='surrogate_or_strict')) if request.get('method') == "exec_command" and not self.connection.connected: self.connection._connect() signal.alarm(self.connection.get_option('persistent_command_timeout')) resp = self.srv.handle_request(data) signal.alarm(0) if log_messages: display.display("jsonrpc response: %s" % resp, log_only=True) send_data(s, to_bytes(resp)) s.close() except Exception as e: # socket.accept() will raise EINTR if the socket.close() is called if hasattr(e, 'errno'): if e.errno != errno.EINTR: self.exception = traceback.format_exc() else: self.exception = traceback.format_exc() finally: # allow time for any exception msg send over socket to receive at other end before shutting down 
time.sleep(0.1) # when done, close the connection properly and cleanup the socket file so it can be recreated self.shutdown() def connect_timeout(self, signum, frame): msg = 'persistent connection idle timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and ' \ 'Troubleshooting Guide.' % self.connection.get_option('persistent_connect_timeout') display.display(msg, log_only=True) raise Exception(msg) def command_timeout(self, signum, frame): msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\ % self.connection.get_option('persistent_command_timeout') display.display(msg, log_only=True) raise Exception(msg) def handler(self, signum, frame): msg = 'signal handler called with signal %s.' % signum display.display(msg, log_only=True) raise Exception(msg) def shutdown(self): """ Shuts down the local domain socket """ lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(self.socket_path)) if os.path.exists(self.socket_path): try: if self.sock: self.sock.close() if self.connection: self.connection.close() if self.connection.get_option("persistent_log_messages"): for _level, message in self.connection.pop_messages(): display.display(message, log_only=True) except Exception: pass finally: if os.path.exists(self.socket_path): os.remove(self.socket_path) setattr(self.connection, '_socket_path', None) setattr(self.connection, '_connected', False) if os.path.exists(lock_path): os.remove(lock_path) display.display('shutdown complete', log_only=True) def main(args=None): """ Called to initiate the connect to the remote device """ parser = opt_help.create_base_parser(prog='ansible-connection') opt_help.add_verbosity_options(parser) parser.add_argument('playbook_pid') parser.add_argument('task_uuid') args = parser.parse_args(args[1:] if args is not None else args) init_plugin_loader() # initialize verbosity display.verbosity = args.verbosity rc = 0 result = {} messages = list() socket_path = None # Need stdin as a byte stream stdin = sys.stdin.buffer # Note: update the below log capture code after Display.display() is refactored. 
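# sys.stdout is swapped for a StringIO so anything printed during connection
# setup is folded into the JSON 'messages' result instead of corrupting the
# real stdout, which carries the protocol. A minimal sketch of the same
# capture pattern (illustrative only; noisy_setup is a placeholder):
#     saved = sys.stdout
#     sys.stdout = io.StringIO()
#     try:
#         noisy_setup()
#     finally:
#         captured, sys.stdout = sys.stdout.getvalue(), saved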
saved_stdout = sys.stdout sys.stdout = io.StringIO() try: # read the play context data via stdin, which means depickling it opts_data = read_stream(stdin) init_data = read_stream(stdin) pc_data = pickle.loads(init_data, encoding='bytes') options = pickle.loads(opts_data, encoding='bytes') play_context = PlayContext() play_context.deserialize(pc_data) except Exception as e: rc = 1 result.update({ 'error': to_text(e), 'exception': traceback.format_exc() }) if rc == 0: ssh = connection_loader.get('ssh', class_only=True) ansible_playbook_pid = args.playbook_pid task_uuid = args.task_uuid cp = ssh._create_control_path(play_context.remote_addr, play_context.port, play_context.remote_user, play_context.connection, ansible_playbook_pid) # create the persistent connection dir if need be and create the paths # which we will be using later tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR) makedirs_safe(tmp_path) socket_path = unfrackpath(cp % dict(directory=tmp_path)) lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(socket_path)) with file_lock(lock_path): if not os.path.exists(socket_path): messages.append(('vvvv', 'local domain socket does not exist, starting it')) original_path = os.getcwd() r, w = os.pipe() pid = fork_process() if pid == 0: try: os.close(r) wfd = os.fdopen(w, 'w') process = ConnectionProcess(wfd, play_context, socket_path, original_path, task_uuid, ansible_playbook_pid) process.start(options) except Exception: messages.append(('error', traceback.format_exc())) rc = 1 if rc == 0: process.run() else: process.shutdown() sys.exit(rc) else: os.close(w) rfd = os.fdopen(r, 'r') data = json.loads(rfd.read(), cls=AnsibleJSONDecoder) messages.extend(data.pop('messages')) result.update(data) else: messages.append(('vvvv', 'found existing local domain socket, using it!')) conn = Connection(socket_path) try: conn.set_options(direct=options) except ConnectionError as exc: messages.append(('debug', to_text(exc))) raise ConnectionError('Unable to decode JSON from response set_options. See the debug log for more information.') pc_data = to_text(init_data) try: conn.update_play_context(pc_data) conn.set_check_prompt(task_uuid) except Exception as exc: # Only network_cli has update_play context and set_check_prompt, so missing this is # not fatal e.g. 
netconf if isinstance(exc, ConnectionError) and getattr(exc, 'code', None) == -32601: pass else: result.update({ 'error': to_text(exc), 'exception': traceback.format_exc() }) if os.path.exists(socket_path): messages.extend(Connection(socket_path).pop_messages()) messages.append(('vvvv', sys.stdout.getvalue())) result.update({ 'messages': messages, 'socket_path': socket_path }) sys.stdout = saved_stdout if 'exception' in result: rc = 1 sys.stderr.write(json.dumps(result, cls=AnsibleJSONEncoder)) else: rc = 0 sys.stdout.write(json.dumps(result, cls=AnsibleJSONEncoder)) sys.exit(rc) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-console0000755000000000000000000005303014556006441016465 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2014, Nandor Sivok # Copyright: (c) 2016, Redhat Inc # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import atexit import cmd import getpass import readline import os import sys from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.executor.task_queue_manager import TaskQueueManager from ansible.module_utils.common.text.converters import to_native, to_text from ansible.module_utils.parsing.convert_bool import boolean from ansible.parsing.splitter import parse_kv from ansible.playbook.play import Play from ansible.plugins.list import list_plugins from ansible.plugins.loader import module_loader, fragment_loader from ansible.utils import plugin_docs from ansible.utils.color import stringc from ansible.utils.display import Display display = Display() class ConsoleCLI(CLI, cmd.Cmd): ''' A REPL that allows for running ad-hoc tasks against a chosen inventory from a nice shell with built-in tab completion (based on dominis' ``ansible-shell``). It supports several commands, and you can modify its configuration at runtime: - ``cd [pattern]``: change host/group (you can use host patterns eg.: ``app*.dc*:!app01*``) - ``list``: list available hosts in the current path - ``list groups``: list groups included in the current path - ``become``: toggle the become flag - ``!``: forces shell module instead of the ansible module (``!yum update -y``) - ``verbosity [num]``: set the verbosity level - ``forks [num]``: set the number of forks - ``become_user [user]``: set the become_user - ``remote_user [user]``: set the remote_user - ``become_method [method]``: set the privilege escalation method - ``check [bool]``: toggle check mode - ``diff [bool]``: toggle diff mode - ``timeout [integer]``: set the timeout of tasks in seconds (0 to disable) - ``help [command/module]``: display documentation for the command or module - ``exit``: exit ``ansible-console`` ''' name = 'ansible-console' modules = [] # type: list[str] | None ARGUMENTS = {'host-pattern': 'A name of a group in the inventory, a shell-like glob ' 'selecting hosts in inventory or any combination of the two separated by commas.'} # use specific to console, but fallback to highlight for backwards compatibility NORMAL_PROMPT = C.COLOR_CONSOLE_PROMPT or C.COLOR_HIGHLIGHT def __init__(self, args): super(ConsoleCLI, self).__init__(args) self.intro = 'Welcome to the ansible console. Type help or ? 
to list commands.\n' self.groups = [] self.hosts = [] self.pattern = None self.variable_manager = None self.loader = None self.passwords = dict() self.cwd = '*' # Defaults for these are set from the CLI in run() self.remote_user = None self.become = None self.become_user = None self.become_method = None self.check_mode = None self.diff = None self.forks = None self.task_timeout = None self.collections = None cmd.Cmd.__init__(self) def init_parser(self): super(ConsoleCLI, self).init_parser( desc="REPL console for executing Ansible tasks.", epilog="This is not a live session/connection: each task is executed in the background and returns its results." ) opt_help.add_runas_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_connect_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_tasknoplay_options(self.parser) # options unique to shell self.parser.add_argument('pattern', help='host pattern', metavar='pattern', default='all', nargs='?') self.parser.add_argument('--step', dest='step', action='store_true', help="one-step-at-a-time: confirm each task before running") def post_process_args(self, options): options = super(ConsoleCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def get_names(self): return dir(self) def cmdloop(self): try: cmd.Cmd.cmdloop(self) except KeyboardInterrupt: self.cmdloop() except EOFError: self.display("[Ansible-console was exited]") self.do_exit(self) def set_prompt(self): login_user = self.remote_user or getpass.getuser() self.selected = self.inventory.list_hosts(self.cwd) prompt = "%s@%s (%d)[f:%s]" % (login_user, self.cwd, len(self.selected), self.forks) if self.become and self.become_user in [None, 'root']: prompt += "# " color = C.COLOR_ERROR else: prompt += "$ " color = self.NORMAL_PROMPT self.prompt = stringc(prompt, color, wrap_nonvisible_chars=True) def list_modules(self): return list_plugins('module', self.collections) def default(self, line, forceshell=False): """ actually runs modules """ if line.startswith("#"): return False if not self.cwd: display.error("No host found") return False # defaults module = 'shell' module_args = line if forceshell is not True: possible_module, *possible_args = line.split() if module_loader.find_plugin(possible_module): # we found module! 
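# e.g. 'ping data=pong' resolves to module 'ping' with args 'data=pong';
# a first word that resolves to no plugin falls through to the 'shell'
# defaults set above, with the whole line passed as the shell command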
module = possible_module if possible_args: module_args = ' '.join(possible_args) else: module_args = '' if self.callback: cb = self.callback elif C.DEFAULT_LOAD_CALLBACK_PLUGINS and C.DEFAULT_STDOUT_CALLBACK != 'default': cb = C.DEFAULT_STDOUT_CALLBACK else: cb = 'minimal' result = None try: check_raw = module in C._ACTION_ALLOWS_RAW_ARGS task = dict(action=dict(module=module, args=parse_kv(module_args, check_raw=check_raw)), timeout=self.task_timeout) play_ds = dict( name="Ansible Shell", hosts=self.cwd, gather_facts='no', tasks=[task], remote_user=self.remote_user, become=self.become, become_user=self.become_user, become_method=self.become_method, check_mode=self.check_mode, diff=self.diff, collections=self.collections, ) play = Play().load(play_ds, variable_manager=self.variable_manager, loader=self.loader) except Exception as e: display.error(u"Unable to build command: %s" % to_text(e)) return False try: # now create a task queue manager to execute the play self._tqm = None try: self._tqm = TaskQueueManager( inventory=self.inventory, variable_manager=self.variable_manager, loader=self.loader, passwords=self.passwords, stdout_callback=cb, run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS, run_tree=False, forks=self.forks, ) result = self._tqm.run(play) display.debug(result) finally: if self._tqm: self._tqm.cleanup() if self.loader: self.loader.cleanup_all_tmp_files() if result is None: display.error("No hosts found") return False except KeyboardInterrupt: display.error('User interrupted execution') return False except Exception as e: if self.verbosity >= 3: import traceback display.v(traceback.format_exc()) display.error(to_text(e)) return False def emptyline(self): return def do_shell(self, arg): """ You can run shell commands through the shell module. eg.: shell ps uax | grep java | wc -l shell killall python shell halt -n You can use the ! to force the shell module. 
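This is useful when the first word of the line would otherwise be mistaken for a module name.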
eg.: !ps aux | grep java | wc -l """ self.default(arg, True) def help_shell(self): display.display("You can run shell commands through the shell module.") def do_forks(self, arg): """Set the number of forks""" if arg: try: forks = int(arg) except TypeError: display.error('Invalid argument for "forks"') self.usage_forks() if forks > 0: self.forks = forks self.set_prompt() else: display.display('forks must be greater than or equal to 1') else: self.usage_forks() def help_forks(self): display.display("Set the number of forks to use per task") self.usage_forks() def usage_forks(self): display.display('Usage: forks ') do_serial = do_forks help_serial = help_forks def do_collections(self, arg): """Set list of collections for 'short name' usage""" if arg in ('', 'none'): self.collections = None elif not arg: self.usage_collections() else: collections = arg.split(',') for collection in collections: if self.collections is None: self.collections = [] self.collections.append(collection.strip()) if self.collections: display.v('Collections name search is set to: %s' % ', '.join(self.collections)) else: display.v('Collections name search is using defaults') def help_collections(self): display.display("Set the collection name search path when using short names for plugins") self.usage_collections() def usage_collections(self): display.display('Usage: collections [, ...]\n Use empty quotes or "none" to reset to default.\n') def do_verbosity(self, arg): """Set verbosity level""" if not arg: display.display('Usage: verbosity ') else: try: display.verbosity = int(arg) display.v('verbosity level set to %s' % arg) except (TypeError, ValueError) as e: display.error('The verbosity must be a valid integer: %s' % to_text(e)) def help_verbosity(self): display.display("Set the verbosity level, equivalent to -v for 1 and -vvvv for 4.") def do_cd(self, arg): """ Change active host/group. You can use hosts patterns as well eg.: cd webservers cd webservers:dbservers cd webservers:!phoenix cd webservers:&staging cd webservers:dbservers:&staging:!phoenix """ if not arg: self.cwd = '*' elif arg in '/*': self.cwd = 'all' elif self.inventory.get_hosts(arg): self.cwd = arg else: display.display("no host matched") self.set_prompt() def help_cd(self): display.display("Change active host/group. ") self.usage_cd() def usage_cd(self): display.display("Usage: cd ||") def do_list(self, arg): """List the hosts in the current group""" if not arg: for host in self.selected: display.display(host.name) elif arg == 'groups': for group in self.groups: display.display(group) else: display.error('Invalid option passed to "list"') self.help_list() def help_list(self): display.display("List the hosts in the current group or a list of groups if you add 'groups'.") def do_become(self, arg): """Toggle whether plays run with become""" if arg: self.become = boolean(arg, strict=False) display.v("become changed to %s" % self.become) self.set_prompt() else: display.display("Please specify become value, e.g. `become yes`") def help_become(self): display.display("Toggle whether the tasks are run with become") def do_remote_user(self, arg): """Given a username, set the remote user plays are run by""" if arg: self.remote_user = arg self.set_prompt() else: display.display("Please specify a remote user, e.g. 
`remote_user root`") def help_remote_user(self): display.display("Set the user for use as login to the remote target") def do_become_user(self, arg): """Given a username, set the user that plays are run by when using become""" if arg: self.become_user = arg else: display.display("Please specify a user, e.g. `become_user jenkins`") display.v("Current user is %s" % self.become_user) self.set_prompt() def help_become_user(self): display.display("Set the user for use with privilege escalation (which remote user attempts to 'become' when become is enabled)") def do_become_method(self, arg): """Given a become_method, set the privilege escalation method when using become""" if arg: self.become_method = arg display.v("become_method changed to %s" % self.become_method) else: display.display("Please specify a become_method, e.g. `become_method su`") display.v("Current become_method is %s" % self.become_method) def help_become_method(self): display.display("Set the privilege escalation plugin to use when become is enabled") def do_check(self, arg): """Toggle whether plays run with check mode""" if arg: self.check_mode = boolean(arg, strict=False) display.display("check mode changed to %s" % self.check_mode) else: display.display("Please specify check mode value, e.g. `check yes`") display.v("check mode is currently %s." % self.check_mode) def help_check(self): display.display("Toggle check_mode for the tasks") def do_diff(self, arg): """Toggle whether plays run with diff""" if arg: self.diff = boolean(arg, strict=False) display.display("diff mode changed to %s" % self.diff) else: display.display("Please specify a diff value , e.g. `diff yes`") display.v("diff mode is currently %s" % self.diff) def help_diff(self): display.display("Toggle diff output for the tasks") def do_timeout(self, arg): """Set the timeout""" if arg: try: timeout = int(arg) if timeout < 0: display.error('The timeout must be greater than or equal to 1, use 0 to disable') else: self.task_timeout = timeout except (TypeError, ValueError) as e: display.error('The timeout must be a valid positive integer, or 0 to disable: %s' % to_text(e)) else: self.usage_timeout() def help_timeout(self): display.display("Set task timeout in seconds") self.usage_timeout() def usage_timeout(self): display.display('Usage: timeout ') def do_exit(self, args): """Exits from the console""" sys.stdout.write('\nAnsible-console was exited.\n') return -1 def help_exit(self): display.display("LEAVE!") do_EOF = do_exit help_EOF = help_exit def helpdefault(self, module_name): if module_name: in_path = module_loader.find_plugin(module_name) if in_path: oc, a, _dummy1, _dummy2 = plugin_docs.get_docstring(in_path, fragment_loader) if oc: display.display(oc['short_description']) display.display('Parameters:') for opt in oc['options'].keys(): display.display(' ' + stringc(opt, self.NORMAL_PROMPT) + ' ' + oc['options'][opt]['description'][0]) else: display.error('No documentation found for %s.' % module_name) else: display.error('%s is not a valid command, use ? to list all valid commands.' 
% module_name) def help_help(self): display.warning("Don't be redundant!") def complete_cd(self, text, line, begidx, endidx): mline = line.partition(' ')[2] offs = len(mline) - len(text) if self.cwd in ('all', '*', '\\'): completions = self.hosts + self.groups else: completions = [x.name for x in self.inventory.list_hosts(self.cwd)] return [to_native(s)[offs:] for s in completions if to_native(s).startswith(to_native(mline))] def completedefault(self, text, line, begidx, endidx): if line.split()[0] in self.list_modules(): mline = line.split(' ')[-1] offs = len(mline) - len(text) completions = self.module_args(line.split()[0]) return [s[offs:] + '=' for s in completions if s.startswith(mline)] def module_args(self, module_name): in_path = module_loader.find_plugin(module_name) oc, a, _dummy1, _dummy2 = plugin_docs.get_docstring(in_path, fragment_loader, is_module=True) return list(oc['options'].keys()) def run(self): super(ConsoleCLI, self).run() sshpass = None becomepass = None # hosts self.pattern = context.CLIARGS['pattern'] self.cwd = self.pattern # Defaults from the command line self.remote_user = context.CLIARGS['remote_user'] self.become = context.CLIARGS['become'] self.become_user = context.CLIARGS['become_user'] self.become_method = context.CLIARGS['become_method'] self.check_mode = context.CLIARGS['check'] self.diff = context.CLIARGS['diff'] self.forks = context.CLIARGS['forks'] self.task_timeout = context.CLIARGS['task_timeout'] # set module path if needed if context.CLIARGS['module_path']: for path in context.CLIARGS['module_path']: if path: module_loader.add_directory(path) # dynamically add 'cannonical' modules as commands, aliases coudld be used and dynamically loaded self.modules = self.list_modules() for module in self.modules: setattr(self, 'do_' + module, lambda arg, module=module: self.default(module + ' ' + arg)) setattr(self, 'help_' + module, lambda module=module: self.helpdefault(module)) (sshpass, becomepass) = self.ask_passwords() self.passwords = {'conn_pass': sshpass, 'become_pass': becomepass} self.loader, self.inventory, self.variable_manager = self._play_prereqs() hosts = self.get_host_list(self.inventory, context.CLIARGS['subset'], self.pattern) self.groups = self.inventory.list_groups() self.hosts = [x.name for x in hosts] # This hack is to work around readline issues on a mac: # http://stackoverflow.com/a/7116997/541202 if 'libedit' in readline.__doc__: readline.parse_and_bind("bind ^I rl_complete") else: readline.parse_and_bind("tab: complete") histfile = os.path.join(os.path.expanduser("~"), ".ansible-console_history") try: readline.read_history_file(histfile) except IOError: pass atexit.register(readline.write_history_file, histfile) self.set_prompt() self.cmdloop() def __getattr__(self, name): ''' handle not found to populate dynamically a module function if module matching name exists ''' attr = None if name.startswith('do_'): module = name.replace('do_', '') if module_loader.find_plugin(module): setattr(self, name, lambda arg, module=module: self.default(module + ' ' + arg)) attr = object.__getattr__(self, name) elif name.startswith('help_'): module = name.replace('help_', '') if module_loader.find_plugin(module): setattr(self, name, lambda module=module: self.helpdefault(module)) attr = object.__getattr__(self, name) if attr is None: raise AttributeError(f"{self.__class__} does not have a {name} attribute") return attr def main(args=None): ConsoleCLI.cli_executor(args) if __name__ == '__main__': main() 
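# A self-contained miniature (sketch) of the dynamic-dispatch pattern used by
# ConsoleCLI.__getattr__ above, which synthesizes do_<module>/help_<module>
# handlers lazily so every discoverable module doubles as a console command.
# DynamicShell and its 'known' set are hypothetical stand-ins for the real
# module loader:
import cmd

class DynamicShell(cmd.Cmd):
    prompt = 'demo$ '
    known = {'ping', 'setup'}  # stands in for module_loader.find_plugin()

    def __getattr__(self, name):
        # cmd.Cmd dispatches via getattr(self, 'do_' + command), so unknown
        # commands land here and a handler can be built on first use
        if name.startswith('do_') and name[3:] in self.known:
            handler = lambda arg, mod=name[3:]: print('would run %s %s' % (mod, arg))
            setattr(self, name, handler)
            return handler
        raise AttributeError(name)

# DynamicShell().cmdloop() then accepts 'ping data=pong' as a command.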
ansible-core-2.16.3/bin/ansible-doc0000755000000000000000000017512014556006441015575 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2014, James Tanner # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import pkgutil import os import os.path import re import textwrap import traceback import ansible.plugins.loader as plugin_loader from pathlib import Path from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.collections.list import list_collection_dirs from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError, AnsiblePluginNotFound from ansible.module_utils.common.text.converters import to_native, to_text from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.common.json import json_dump from ansible.module_utils.common.yaml import yaml_dump from ansible.module_utils.compat import importlib from ansible.module_utils.six import string_types from ansible.parsing.plugin_docs import read_docstub from ansible.parsing.utils.yaml import from_yaml from ansible.parsing.yaml.dumper import AnsibleDumper from ansible.plugins.list import list_plugins from ansible.plugins.loader import action_loader, fragment_loader from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path from ansible.utils.display import Display from ansible.utils.plugin_docs import get_plugin_docs, get_docstring, get_versioned_doclink display = Display() TARGET_OPTIONS = C.DOCUMENTABLE_PLUGINS + ('role', 'keyword',) PB_OBJECTS = ['Play', 'Role', 'Block', 'Task'] PB_LOADED = {} SNIPPETS = ['inventory', 'lookup', 'module'] def add_collection_plugins(plugin_list, plugin_type, coll_filter=None): display.deprecated("add_collection_plugins method, use ansible.plugins.list functions instead.", version='2.17') plugin_list.update(list_plugins(plugin_type, coll_filter)) def jdump(text): try: display.display(json_dump(text)) except TypeError as e: display.vvv(traceback.format_exc()) raise AnsibleError('We could not convert all the documentation into JSON as there was a conversion issue: %s' % to_native(e)) class RoleMixin(object): """A mixin containing all methods relevant to role argument specification functionality. Note: The methods for actual display of role data are not present here. """ # Potential locations of the role arg spec file in the meta subdir, with main.yml # having the lowest priority. ROLE_ARGSPEC_FILES = ['argument_specs' + e for e in C.YAML_FILENAME_EXTENSIONS] + ["main" + e for e in C.YAML_FILENAME_EXTENSIONS] def _load_argspec(self, role_name, collection_path=None, role_path=None): """Load the role argument spec data from the source file. :param str role_name: The name of the role for which we want the argspec data. :param str collection_path: Path to the collection containing the role. This will be None for standard roles. :param str role_path: Path to the standard role. This will be None for collection roles. We support two files containing the role arg spec data: either meta/main.yml or meta/argument_spec.yml. 
The argument_spec.yml file will take precedence over the meta/main.yml file, if it exists. Data is NOT combined between the two files. :returns: A dict of all data underneath the ``argument_specs`` top-level YAML key in the argspec data file. Empty dict is returned if there is no data. """ if collection_path: meta_path = os.path.join(collection_path, 'roles', role_name, 'meta') elif role_path: meta_path = os.path.join(role_path, 'meta') else: raise AnsibleError("A path is required to load argument specs for role '%s'" % role_name) path = None # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(meta_path, specfile) if os.path.exists(full_path): path = full_path break if path is None: return {} try: with open(path, 'r') as f: data = from_yaml(f.read(), file_name=path) if data is None: data = {} return data.get('argument_specs', {}) except (IOError, OSError) as e: raise AnsibleParserError("An error occurred while trying to read the file '%s': %s" % (path, to_native(e)), orig_exc=e) def _find_all_normal_roles(self, role_paths, name_filters=None): """Find all non-collection roles that have an argument spec file. Note that argument specs do not actually need to exist within the spec file. :param role_paths: A tuple of one or more role paths. When a role with the same name is found in multiple paths, only the first-found role is returned. :param name_filters: A tuple of one or more role names used to filter the results. :returns: A set of tuples consisting of: role name, full role path """ found = set() found_names = set() for path in role_paths: if not os.path.isdir(path): continue # Check each subdir for an argument spec file for entry in os.listdir(path): role_path = os.path.join(path, entry) # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(role_path, 'meta', specfile) if os.path.exists(full_path): if name_filters is None or entry in name_filters: if entry not in found_names: found.add((entry, role_path)) found_names.add(entry) # select first-found break return found def _find_all_collection_roles(self, name_filters=None, collection_filter=None): """Find all collection roles with an argument spec file. Note that argument specs do not actually need to exist within the spec file. :param name_filters: A tuple of one or more role names used to filter the results. These might be fully qualified with the collection name (e.g., community.general.roleA) or not (e.g., roleA). :param collection_filter: A list of strings containing the FQCN of a collection which will be used to limit results. This filter will take precedence over the name_filters. :returns: A set of tuples consisting of: role name, collection name, collection path """ found = set() b_colldirs = list_collection_dirs(coll_filter=collection_filter) for b_path in b_colldirs: path = to_text(b_path, errors='surrogate_or_strict') collname = _get_collection_name_from_path(b_path) roles_dir = os.path.join(path, 'roles') if os.path.exists(roles_dir): for entry in os.listdir(roles_dir): # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(roles_dir, entry, 'meta', specfile) if os.path.exists(full_path): if name_filters is None: found.add((entry, collname, path)) else: # Name filters might contain a collection FQCN or not. 
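# e.g. 'community.general.myrole' (namespace.collection.role) or bare 'myrole';
# a dotted name must match both the collection and the role entry below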
for fqcn in name_filters: if len(fqcn.split('.')) == 3: (ns, col, role) = fqcn.split('.') if '.'.join([ns, col]) == collname and entry == role: found.add((entry, collname, path)) elif fqcn == entry: found.add((entry, collname, path)) break return found def _build_summary(self, role, collection, argspec): """Build a summary dict for a role. Returns a simplified role arg spec containing only the role entry points and their short descriptions, and the role collection name (if applicable). :param role: The simple role name. :param collection: The collection containing the role (None or empty string if N/A). :param argspec: The complete role argspec data dict. :returns: A tuple with the FQCN role name and a summary dict. """ if collection: fqcn = '.'.join([collection, role]) else: fqcn = role summary = {} summary['collection'] = collection summary['entry_points'] = {} for ep in argspec.keys(): entry_spec = argspec[ep] or {} summary['entry_points'][ep] = entry_spec.get('short_description', '') return (fqcn, summary) def _build_doc(self, role, path, collection, argspec, entry_point): if collection: fqcn = '.'.join([collection, role]) else: fqcn = role doc = {} doc['path'] = path doc['collection'] = collection doc['entry_points'] = {} for ep in argspec.keys(): if entry_point is None or ep == entry_point: entry_spec = argspec[ep] or {} doc['entry_points'][ep] = entry_spec # If we didn't add any entry points (b/c of filtering), ignore this entry. if len(doc['entry_points'].keys()) == 0: doc = None return (fqcn, doc) def _create_role_list(self, fail_on_errors=True): """Return a dict describing the listing of all roles with arg specs. :param role_paths: A tuple of one or more role paths. :returns: A dict indexed by role name, with 'collection' and 'entry_points' keys per role. Example return: results = { 'roleA': { 'collection': '', 'entry_points': { 'main': 'Short description for main' } }, 'a.b.c.roleB': { 'collection': 'a.b.c', 'entry_points': { 'main': 'Short description for main', 'alternate': 'Short description for alternate entry point' } 'x.y.z.roleB': { 'collection': 'x.y.z', 'entry_points': { 'main': 'Short description for main', } }, } """ roles_path = self._get_roles_path() collection_filter = self._get_collection_filter() if not collection_filter: roles = self._find_all_normal_roles(roles_path) else: roles = [] collroles = self._find_all_collection_roles(collection_filter=collection_filter) result = {} for role, role_path in roles: try: argspec = self._load_argspec(role, role_path=role_path) fqcn, summary = self._build_summary(role, '', argspec) result[fqcn] = summary except Exception as e: if fail_on_errors: raise result[role] = { 'error': 'Error while loading role argument spec: %s' % to_native(e), } for role, collection, collection_path in collroles: try: argspec = self._load_argspec(role, collection_path=collection_path) fqcn, summary = self._build_summary(role, collection, argspec) result[fqcn] = summary except Exception as e: if fail_on_errors: raise result['%s.%s' % (collection, role)] = { 'error': 'Error while loading role argument spec: %s' % to_native(e), } return result def _create_role_doc(self, role_names, entry_point=None, fail_on_errors=True): """ :param role_names: A tuple of one or more role names. :param role_paths: A tuple of one or more role paths. :param entry_point: A role entry point name for filtering. 
:param fail_on_errors: When set to False, include errors in the JSON output instead of raising errors :returns: A dict indexed by role name, with 'collection', 'entry_points', and 'path' keys per role. """ roles_path = self._get_roles_path() roles = self._find_all_normal_roles(roles_path, name_filters=role_names) collroles = self._find_all_collection_roles(name_filters=role_names) result = {} for role, role_path in roles: try: argspec = self._load_argspec(role, role_path=role_path) fqcn, doc = self._build_doc(role, role_path, '', argspec, entry_point) if doc: result[fqcn] = doc except Exception as e: # pylint:disable=broad-except result[role] = { 'error': 'Error while processing role: %s' % to_native(e), } for role, collection, collection_path in collroles: try: argspec = self._load_argspec(role, collection_path=collection_path) fqcn, doc = self._build_doc(role, collection_path, collection, argspec, entry_point) if doc: result[fqcn] = doc except Exception as e: # pylint:disable=broad-except result['%s.%s' % (collection, role)] = { 'error': 'Error while processing role: %s' % to_native(e), } return result class DocCLI(CLI, RoleMixin): ''' displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short "snippet" which can be pasted into a playbook. ''' name = 'ansible-doc' # default ignore list for detailed views IGNORE = ('module', 'docuri', 'version_added', 'version_added_collection', 'short_description', 'now_date', 'plainexamples', 'returndocs', 'collection') # Warning: If you add more elements here, you also need to add it to the docsite build (in the # ansible-community/antsibull repo) _ITALIC = re.compile(r"\bI\(([^)]+)\)") _BOLD = re.compile(r"\bB\(([^)]+)\)") _MODULE = re.compile(r"\bM\(([^)]+)\)") _PLUGIN = re.compile(r"\bP\(([^#)]+)#([a-z]+)\)") _LINK = re.compile(r"\bL\(([^)]+), *([^)]+)\)") _URL = re.compile(r"\bU\(([^)]+)\)") _REF = re.compile(r"\bR\(([^)]+), *([^)]+)\)") _CONST = re.compile(r"\bC\(([^)]+)\)") _SEM_PARAMETER_STRING = r"\(((?:[^\\)]+|\\.)+)\)" _SEM_OPTION_NAME = re.compile(r"\bO" + _SEM_PARAMETER_STRING) _SEM_OPTION_VALUE = re.compile(r"\bV" + _SEM_PARAMETER_STRING) _SEM_ENV_VARIABLE = re.compile(r"\bE" + _SEM_PARAMETER_STRING) _SEM_RET_VALUE = re.compile(r"\bRV" + _SEM_PARAMETER_STRING) _RULER = re.compile(r"\bHORIZONTALLINE\b") # helper for unescaping _UNESCAPE = re.compile(r"\\(.)") _FQCN_TYPE_PREFIX_RE = re.compile(r'^([^.]+\.[^.]+\.[^#]+)#([a-z]+):(.*)$') _IGNORE_MARKER = 'ignore:' # rst specific _RST_NOTE = re.compile(r".. note::") _RST_SEEALSO = re.compile(r".. seealso::") _RST_ROLES = re.compile(r":\w+?:`") _RST_DIRECTIVES = re.compile(r".. 
\w+?::") def __init__(self, args): super(DocCLI, self).__init__(args) self.plugin_list = set() @staticmethod def _tty_ify_sem_simle(matcher): text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1)) return f"`{text}'" @staticmethod def _tty_ify_sem_complex(matcher): text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1)) value = None if '=' in text: text, value = text.split('=', 1) m = DocCLI._FQCN_TYPE_PREFIX_RE.match(text) if m: plugin_fqcn = m.group(1) plugin_type = m.group(2) text = m.group(3) elif text.startswith(DocCLI._IGNORE_MARKER): text = text[len(DocCLI._IGNORE_MARKER):] plugin_fqcn = plugin_type = '' else: plugin_fqcn = plugin_type = '' entrypoint = None if ':' in text: entrypoint, text = text.split(':', 1) if value is not None: text = f"{text}={value}" if plugin_fqcn and plugin_type: plugin_suffix = '' if plugin_type in ('role', 'module', 'playbook') else ' plugin' plugin = f"{plugin_type}{plugin_suffix} {plugin_fqcn}" if plugin_type == 'role' and entrypoint is not None: plugin = f"{plugin}, {entrypoint} entrypoint" return f"`{text}' (of {plugin})" return f"`{text}'" @classmethod def find_plugins(cls, path, internal, plugin_type, coll_filter=None): display.deprecated("find_plugins method as it is incomplete/incorrect. use ansible.plugins.list functions instead.", version='2.17') return list_plugins(plugin_type, coll_filter, [path]).keys() @classmethod def tty_ify(cls, text): # general formatting t = cls._ITALIC.sub(r"`\1'", text) # I(word) => `word' t = cls._BOLD.sub(r"*\1*", t) # B(word) => *word* t = cls._MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word] t = cls._URL.sub(r"\1", t) # U(word) => word t = cls._LINK.sub(r"\1 <\2>", t) # L(word, url) => word t = cls._PLUGIN.sub("[" + r"\1" + "]", t) # P(word#type) => [word] t = cls._REF.sub(r"\1", t) # R(word, sphinx-ref) => word t = cls._CONST.sub(r"`\1'", t) # C(word) => `word' t = cls._SEM_OPTION_NAME.sub(cls._tty_ify_sem_complex, t) # O(expr) t = cls._SEM_OPTION_VALUE.sub(cls._tty_ify_sem_simle, t) # V(expr) t = cls._SEM_ENV_VARIABLE.sub(cls._tty_ify_sem_simle, t) # E(expr) t = cls._SEM_RET_VALUE.sub(cls._tty_ify_sem_complex, t) # RV(expr) t = cls._RULER.sub("\n{0}\n".format("-" * 13), t) # HORIZONTALLINE => ------- # remove rst t = cls._RST_SEEALSO.sub(r"See also:", t) # seealso to See also: t = cls._RST_NOTE.sub(r"Note:", t) # .. note:: to note: t = cls._RST_ROLES.sub(r"`", t) # remove :ref: and other tags, keep tilde to match ending one t = cls._RST_DIRECTIVES.sub(r"", t) # remove .. stuff:: in general return t def init_parser(self): coll_filter = 'A supplied argument will be used for filtering, can be a namespace or full collection name.' super(DocCLI, self).init_parser( desc="plugin documentation tool", epilog="See man pages for Ansible CLI options or website for tutorials https://docs.ansible.com" ) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) # targets self.parser.add_argument('args', nargs='*', help='Plugin', metavar='plugin') self.parser.add_argument("-t", "--type", action="store", default='module', dest='type', help='Choose which plugin type (defaults to "module"). 
' 'Available plugin types are : {0}'.format(TARGET_OPTIONS), choices=TARGET_OPTIONS) # formatting self.parser.add_argument("-j", "--json", action="store_true", default=False, dest='json_format', help='Change output into json format.') # TODO: warn if not used with -t roles # role-specific options self.parser.add_argument("-r", "--roles-path", dest='roles_path', default=C.DEFAULT_ROLES_PATH, type=opt_help.unfrack_path(pathsep=True), action=opt_help.PrependListAction, help='The path to the directory containing your roles.') # modifiers exclusive = self.parser.add_mutually_exclusive_group() # TODO: warn if not used with -t roles exclusive.add_argument("-e", "--entry-point", dest="entry_point", help="Select the entry point for role(s).") # TODO: warn with --json as it is incompatible exclusive.add_argument("-s", "--snippet", action="store_true", default=False, dest='show_snippet', help='Show playbook snippet for these plugin types: %s' % ', '.join(SNIPPETS)) # TODO: warn when arg/plugin is passed exclusive.add_argument("-F", "--list_files", action="store_true", default=False, dest="list_files", help='Show plugin names and their source files without summaries (implies --list). %s' % coll_filter) exclusive.add_argument("-l", "--list", action="store_true", default=False, dest='list_dir', help='List available plugins. %s' % coll_filter) exclusive.add_argument("--metadata-dump", action="store_true", default=False, dest='dump', help='**For internal use only** Dump json metadata for all entries, ignores other options.') self.parser.add_argument("--no-fail-on-errors", action="store_true", default=False, dest='no_fail_on_errors', help='**For internal use only** Only used for --metadata-dump. ' 'Do not fail on errors. Report the error message in the JSON instead.') def post_process_args(self, options): options = super(DocCLI, self).post_process_args(options) display.verbosity = options.verbosity return options def display_plugin_list(self, results): # format for user displace = max(len(x) for x in results.keys()) linelimit = display.columns - displace - 5 text = [] deprecated = [] # format display per option if context.CLIARGS['list_files']: # list plugin file names for plugin in sorted(results.keys()): filename = to_native(results[plugin]) # handle deprecated for builtin/legacy pbreak = plugin.split('.') if pbreak[-1].startswith('_') and pbreak[0] == 'ansible' and pbreak[1] in ('builtin', 'legacy'): pbreak[-1] = pbreak[-1][1:] plugin = '.'.join(pbreak) deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename)) else: text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename)) else: # list plugin names and short desc for plugin in sorted(results.keys()): desc = DocCLI.tty_ify(results[plugin]) if len(desc) > linelimit: desc = desc[:linelimit] + '...' pbreak = plugin.split('.') # TODO: add mark for deprecated collection plugins if pbreak[-1].startswith('_') and plugin.startswith(('ansible.builtin.', 'ansible.legacy.')): # Handle deprecated ansible.builtin plugins pbreak[-1] = pbreak[-1][1:] plugin = '.'.join(pbreak) deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc)) else: text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc)) if len(deprecated) > 0: text.append("\nDEPRECATED:") text.extend(deprecated) # display results DocCLI.pager("\n".join(text)) def _display_available_roles(self, list_json): """Display all roles we can find with a valid argument specification. 
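Roles that have no argument spec file at all never appear in this listing.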
        Output is: fqcn role name, entry point, short description
        """
        roles = list(list_json.keys())
        entry_point_names = set()
        for role in roles:
            for entry_point in list_json[role]['entry_points'].keys():
                entry_point_names.add(entry_point)

        max_role_len = 0
        max_ep_len = 0
        if roles:
            max_role_len = max(len(x) for x in roles)
        if entry_point_names:
            max_ep_len = max(len(x) for x in entry_point_names)

        linelimit = display.columns - max_role_len - max_ep_len - 5
        text = []

        for role in sorted(roles):
            for entry_point, desc in list_json[role]['entry_points'].items():
                if len(desc) > linelimit:
                    desc = desc[:linelimit] + '...'
                text.append("%-*s %-*s %s" % (max_role_len, role, max_ep_len, entry_point, desc))

        # display results
        DocCLI.pager("\n".join(text))

    def _display_role_doc(self, role_json):
        roles = list(role_json.keys())
        text = []
        for role in roles:
            text += self.get_role_man_text(role, role_json[role])

        # display results
        DocCLI.pager("\n".join(text))

    @staticmethod
    def _list_keywords():
        return from_yaml(pkgutil.get_data('ansible', 'keyword_desc.yml'))

    @staticmethod
    def _get_keywords_docs(keys):

        data = {}
        descs = DocCLI._list_keywords()
        for key in keys:

            if key.startswith('with_'):
                # simplify loops, don't want to handle every with_ combo
                keyword = 'loop'
            elif key == 'async':
                # because 'async' became a reserved word in Python we had to rename it internally
                keyword = 'async_val'
            else:
                keyword = key

            try:
                # if no desc, the TypeError raised ends this block
                kdata = {'description': descs[key]}

                # get playbook objects for keyword and use first to get keyword attributes
                kdata['applies_to'] = []
                for pobj in PB_OBJECTS:
                    if pobj not in PB_LOADED:
                        obj_class = 'ansible.playbook.%s' % pobj.lower()
                        loaded_class = importlib.import_module(obj_class)
                        PB_LOADED[pobj] = getattr(loaded_class, pobj, None)

                    if keyword in PB_LOADED[pobj].fattributes:
                        kdata['applies_to'].append(pobj)

                        # we should only need these once
                        if 'type' not in kdata:

                            fa = PB_LOADED[pobj].fattributes.get(keyword)
                            if getattr(fa, 'private'):
                                kdata = {}
                                raise KeyError

                            kdata['type'] = getattr(fa, 'isa', 'string')

                            if keyword.endswith('when') or keyword in ('until',):
                                # TODO: make this a field attribute property,
                                # would also help with the warnings on {{}} stacking
                                kdata['template'] = 'implicit'
                            elif getattr(fa, 'static'):
                                kdata['template'] = 'static'
                            else:
                                kdata['template'] = 'explicit'

                            # those that require no processing
                            for visible in ('alias', 'priority'):
                                kdata[visible] = getattr(fa, visible)

                # remove None keys
                for k in list(kdata.keys()):
                    if kdata[k] is None:
                        del kdata[k]

                data[key] = kdata

            except (AttributeError, KeyError) as e:
                display.warning("Skipping invalid keyword '%s' specified: %s" % (key, to_text(e)))
                if display.verbosity >= 3:
                    display.verbose(traceback.format_exc())

        return data
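    # Illustrative sketch (assumed output shape; keyword chosen arbitrarily):
    # _get_keywords_docs() returns one entry per keyword, roughly like
    #   DocCLI._get_keywords_docs(['until'])
    #   # => {'until': {'description': '...', 'applies_to': ['Task'],
    #   #               'type': 'string', 'template': 'implicit', ...}}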
    def _get_collection_filter(self):

        coll_filter = None
        if len(context.CLIARGS['args']) >= 1:
            coll_filter = context.CLIARGS['args']
            for coll_name in coll_filter:
                if not AnsibleCollectionRef.is_valid_collection_name(coll_name):
                    raise AnsibleError('Invalid collection name (must be of the form namespace.collection): {0}'.format(coll_name))

        return coll_filter

    def _list_plugins(self, plugin_type, content):

        results = {}
        self.plugins = {}
        loader = DocCLI._prep_loader(plugin_type)

        coll_filter = self._get_collection_filter()
        self.plugins.update(list_plugins(plugin_type, coll_filter))

        # get appropriate content depending on option
        if content == 'dir':
            results = self._get_plugin_list_descriptions(loader)
        elif content == 'files':
            results = {k: self.plugins[k][0] for k in self.plugins.keys()}
        else:
            results = {k: {} for k in self.plugins.keys()}

        self.plugin_list = set()  # reset for next iteration

        return results

    def _get_plugins_docs(self, plugin_type, names, fail_ok=False, fail_on_errors=True):

        loader = DocCLI._prep_loader(plugin_type)

        # get the docs for plugins in the command line list
        plugin_docs = {}
        for plugin in names:
            doc = {}
            try:
                doc, plainexamples, returndocs, metadata = get_plugin_docs(plugin, plugin_type, loader, fragment_loader, (context.CLIARGS['verbosity'] > 0))
            except AnsiblePluginNotFound as e:
                display.warning(to_native(e))
                continue
            except Exception as e:
                if not fail_on_errors:
                    plugin_docs[plugin] = {'error': 'Missing documentation or could not parse documentation: %s' % to_native(e)}
                    continue
                display.vvv(traceback.format_exc())
                msg = "%s %s missing documentation (or could not parse documentation): %s\n" % (plugin_type, plugin, to_native(e))
                if fail_ok:
                    display.warning(msg)
                else:
                    raise AnsibleError(msg)

            if not doc:
                # The doc section existed but was empty
                if not fail_on_errors:
                    plugin_docs[plugin] = {'error': 'No valid documentation found'}
                continue

            docs = DocCLI._combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata)
            if not fail_on_errors:
                # Check whether JSON serialization would break
                try:
                    json_dump(docs)
                except Exception as e:  # pylint:disable=broad-except
                    plugin_docs[plugin] = {'error': 'Cannot serialize documentation as JSON: %s' % to_native(e)}
                    continue

            plugin_docs[plugin] = docs

        return plugin_docs
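    # Illustrative result shape (assumed): with fail_on_errors=False a plugin
    # that cannot be documented is reported instead of raising, roughly like
    #   {'ansible.builtin.ping': {'doc': {...}, 'examples': '...', 'return': {...}, 'metadata': None},
    #    'ns.coll.broken': {'error': 'Missing documentation or could not parse documentation: ...'}}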
        '''
        roles_path = context.CLIARGS['roles_path']
        if context.CLIARGS['basedir'] is not None:
            subdir = os.path.join(context.CLIARGS['basedir'], "roles")
            if os.path.isdir(subdir):
                roles_path = (subdir,) + roles_path
            roles_path = roles_path + (context.CLIARGS['basedir'],)
        return roles_path

    @staticmethod
    def _prep_loader(plugin_type):
        ''' return a plugin type specific loader '''
        loader = getattr(plugin_loader, '%s_loader' % plugin_type)

        # add to plugin paths from command line
        if context.CLIARGS['basedir'] is not None:
            loader.add_directory(context.CLIARGS['basedir'], with_subdir=True)

        if context.CLIARGS['module_path']:
            for path in context.CLIARGS['module_path']:
                if path:
                    loader.add_directory(path)

        # save only top level paths for errors
        loader._paths = None  # reset so we can use subdirs later

        return loader

    def run(self):

        super(DocCLI, self).run()

        basedir = context.CLIARGS['basedir']
        plugin_type = context.CLIARGS['type'].lower()
        do_json = context.CLIARGS['json_format'] or context.CLIARGS['dump']
        listing = context.CLIARGS['list_files'] or context.CLIARGS['list_dir']

        if context.CLIARGS['list_files']:
            content = 'files'
        elif context.CLIARGS['list_dir']:
            content = 'dir'
        else:
            content = None

        docs = {}

        if basedir:
            AnsibleCollectionConfig.playbook_paths = basedir

        if plugin_type not in TARGET_OPTIONS:
            raise AnsibleOptionsError("Unknown or undocumentable plugin type: %s" % plugin_type)

        if context.CLIARGS['dump']:
            # we always dump all types, ignore restrictions
            ptypes = TARGET_OPTIONS
            docs['all'] = {}
            for ptype in ptypes:

                no_fail = bool(not context.CLIARGS['no_fail_on_errors'])

                if ptype == 'role':
                    roles = self._create_role_list(fail_on_errors=no_fail)
                    docs['all'][ptype] = self._create_role_doc(roles.keys(), context.CLIARGS['entry_point'], fail_on_errors=no_fail)
                elif ptype == 'keyword':
                    names = DocCLI._list_keywords()
                    docs['all'][ptype] = DocCLI._get_keywords_docs(names.keys())
                else:
                    plugin_names = self._list_plugins(ptype, None)
                    docs['all'][ptype] = self._get_plugins_docs(ptype, plugin_names, fail_ok=(ptype in ('test', 'filter')), fail_on_errors=no_fail)
                # reset list after each type to avoid pollution
        elif listing:
            if plugin_type == 'keyword':
                docs = DocCLI._list_keywords()
            elif plugin_type == 'role':
                docs = self._create_role_list()
            else:
                docs = self._list_plugins(plugin_type, content)
        else:
            # here we require a name
            if len(context.CLIARGS['args']) == 0:
                raise AnsibleOptionsError("Missing name(s), incorrect options passed for detailed documentation.")

            if plugin_type == 'keyword':
                docs = DocCLI._get_keywords_docs(context.CLIARGS['args'])
            elif plugin_type == 'role':
                docs = self._create_role_doc(context.CLIARGS['args'], context.CLIARGS['entry_point'])
            else:
                # display specific plugin docs
                docs = self._get_plugins_docs(plugin_type, context.CLIARGS['args'])

        # Display the docs
        if do_json:
            jdump(docs)
        else:
            text = []
            if plugin_type in C.DOCUMENTABLE_PLUGINS:
                if listing and docs:
                    self.display_plugin_list(docs)
                elif context.CLIARGS['show_snippet']:
                    if plugin_type not in SNIPPETS:
                        raise AnsibleError('Snippets are only available for the following plugin'
                                           ' types: %s' % ', '.join(SNIPPETS))

                    for plugin, doc_data in docs.items():
                        try:
                            textret = DocCLI.format_snippet(plugin, plugin_type, doc_data['doc'])
                        except ValueError as e:
                            display.warning("Unable to construct a snippet for"
                                            " '{0}': {1}".format(plugin, to_text(e)))
                        else:
                            text.append(textret)
                else:
                    # Some changes to how plain text docs are formatted
                    for plugin, doc_data in docs.items():

                        textret = DocCLI.format_plugin_doc(plugin, plugin_type,
                                                           doc_data['doc'], doc_data['examples'],
                                                           doc_data['return'], doc_data['metadata'])
                        if textret:
                            text.append(textret)
                        else:
                            display.warning("No valid documentation was retrieved from '%s'" % plugin)

            elif plugin_type == 'role':
                if context.CLIARGS['list_dir'] and docs:
                    self._display_available_roles(docs)
                elif docs:
                    self._display_role_doc(docs)

            elif docs:
                text = DocCLI.tty_ify(DocCLI._dump_yaml(docs))

            if text:
                DocCLI.pager(''.join(text))

        return 0
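    # Illustrative invocations dispatched by run() above (standard ansible-doc usage,
    # matching the options defined in init_parser()):
    #   ansible-doc -l -t lookup                          # list all lookup plugins
    #   ansible-doc -s ansible.builtin.copy               # playbook snippet for a module
    #   ansible-doc -t keyword until                      # docs for a playbook keyword
    #   ansible-doc --metadata-dump --no-fail-on-errors   # JSON metadata for everything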
    @staticmethod
    def get_all_plugins_of_type(plugin_type):
        loader = getattr(plugin_loader, '%s_loader' % plugin_type)
        paths = loader._get_paths_with_context()
        plugins = {}
        for path_context in paths:
            plugins.update(list_plugins(plugin_type))
        return sorted(plugins.keys())

    @staticmethod
    def get_plugin_metadata(plugin_type, plugin_name):
        # if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
        loader = getattr(plugin_loader, '%s_loader' % plugin_type)
        result = loader.find_plugin_with_context(plugin_name, mod_type='.py', ignore_deprecated=True, check_aliases=True)
        if not result.resolved:
            raise AnsibleError("unable to load {0} plugin named {1} ".format(plugin_type, plugin_name))
        filename = result.plugin_resolved_path
        collection_name = result.plugin_resolved_collection

        try:
            doc, __, __, __ = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0),
                                            collection_name=collection_name, plugin_type=plugin_type)
        except Exception:
            display.vvv(traceback.format_exc())
            raise AnsibleError("%s %s at %s has a documentation formatting error or is missing documentation." % (plugin_type, plugin_name, filename))

        if doc is None:
            # Removed plugins don't have any documentation
            return None

        return dict(
            name=plugin_name,
            namespace=DocCLI.namespace_from_plugin_filepath(filename, plugin_name, loader.package_path),
            description=doc.get('short_description', "UNKNOWN"),
            version_added=doc.get('version_added', "UNKNOWN")
        )

    @staticmethod
    def namespace_from_plugin_filepath(filepath, plugin_name, basedir):
        if not basedir.endswith('/'):
            basedir += '/'
        rel_path = filepath.replace(basedir, '')
        extension_free = os.path.splitext(rel_path)[0]
        namespace_only = extension_free.rsplit(plugin_name, 1)[0].strip('/_')
        clean_ns = namespace_only.replace('/', '.')
        if clean_ns == '':
            clean_ns = None

        return clean_ns

    @staticmethod
    def _combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
        # generate extra data
        if plugin_type == 'module':
            # is there corresponding action plugin?
            if plugin in action_loader:
                doc['has_action'] = True
            else:
                doc['has_action'] = False

        # return everything as one dictionary
        return {'doc': doc, 'examples': plainexamples, 'return': returndocs, 'metadata': metadata}

    @staticmethod
    def format_snippet(plugin, plugin_type, doc):
        ''' return heavily commented plugin use to insert into play '''
        if plugin_type == 'inventory' and doc.get('options', {}).get('plugin'):
            # these do not take a yaml config that we can write a snippet for
            raise ValueError('The {0} inventory plugin does not take YAML type config source'
                             ' that can be used with the "auto" plugin so a snippet cannot be'
                             ' created.'.format(plugin))

        text = []
        if plugin_type == 'lookup':
            text = _do_lookup_snippet(doc)
        elif 'options' in doc:
            text = _do_yaml_snippet(doc)

        text.append('')
        return "\n".join(text)
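    # Illustrative (assumed): for a module, format_snippet() returns a commented
    # task skeleton built by _do_yaml_snippet(), along the lines of
    #   - name: Copy files to remote locations
    #     copy:
    #         src:                 # (required) Local path to a file to copy ...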
    @staticmethod
    def format_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
        collection_name = doc['collection']

        # TODO: do we really want this?
        # add_collection_to_versions_and_dates(doc, '(unknown)', is_module=(plugin_type == 'module'))
        # remove_current_collection_from_versions_and_dates(doc, collection_name, is_module=(plugin_type == 'module'))
        # remove_current_collection_from_versions_and_dates(
        #     returndocs, collection_name, is_module=(plugin_type == 'module'), return_docs=True)

        # assign from other sections
        doc['plainexamples'] = plainexamples
        doc['returndocs'] = returndocs
        doc['metadata'] = metadata

        try:
            text = DocCLI.get_man_text(doc, collection_name, plugin_type)
        except Exception as e:
            display.vvv(traceback.format_exc())
            raise AnsibleError("Unable to retrieve documentation from '%s' due to: %s" % (plugin, to_native(e)), orig_exc=e)

        return text

    def _get_plugin_list_descriptions(self, loader):

        descs = {}
        for plugin in self.plugins.keys():
            # TODO: move to plugin itself i.e: plugin.get_desc()
            doc = None
            filename = Path(to_native(self.plugins[plugin][0]))
            docerror = None
            try:
                doc = read_docstub(filename)
            except Exception as e:
                docerror = e

            # plugin file was empty or had error, let's try other options
            if doc is None:
                # handle test/filters that are in file with diff name
                base = plugin.split('.')[-1]
                basefile = filename.with_name(base + filename.suffix)
                for extension in C.DOC_EXTENSIONS:
                    docfile = basefile.with_suffix(extension)
                    try:
                        if docfile.exists():
                            doc = read_docstub(docfile)
                    except Exception as e:
                        docerror = e

            if docerror:
                display.warning("%s has a documentation formatting error: %s" % (plugin, docerror))
                continue

            if not doc or not isinstance(doc, dict):
                desc = 'UNDOCUMENTED'
            else:
                desc = doc.get('short_description', 'INVALID SHORT DESCRIPTION').strip()

            descs[plugin] = desc

        return descs

    @staticmethod
    def print_paths(finder):
        ''' Returns a string suitable for printing of the search path '''

        # Uses a list to get the order right
        ret = []
        for i in finder._get_paths(subdirs=False):
            i = to_text(i, errors='surrogate_or_strict')
            if i not in ret:
                ret.append(i)
        return os.pathsep.join(ret)

    @staticmethod
    def _dump_yaml(struct, flow_style=False):
        return yaml_dump(struct, default_flow_style=flow_style, default_style="''", Dumper=AnsibleDumper).rstrip('\n')

    @staticmethod
    def _indent_lines(text, indent):
        return DocCLI.tty_ify('\n'.join([indent + line for line in text.split('\n')]))

    @staticmethod
    def _format_version_added(version_added, version_added_collection=None):
        if version_added_collection == 'ansible.builtin':
            version_added_collection = 'ansible-core'
            # In ansible-core, version_added can be 'historical'
            if version_added == 'historical':
                return 'historical'
        if version_added_collection:
            version_added = '%s of %s' % (version_added, version_added_collection)
        return 'version %s' % (version_added, )
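    # Examples of the helper above (derived from its branches):
    #   >>> DocCLI._format_version_added('historical', 'ansible.builtin')
    #   'historical'
    #   >>> DocCLI._format_version_added('2.10', 'ansible.builtin')
    #   'version 2.10 of ansible-core'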
    @staticmethod
    def add_fields(text, fields, limit, opt_indent, return_values=False, base_indent=''):

        for o in sorted(fields):
            # Create a copy so we don't modify the original (in case YAML anchors have been used)
            opt = dict(fields[o])

            # required is used as indicator and removed
            required = opt.pop('required', False)
            if not isinstance(required, bool):
                raise AnsibleError("Incorrect value for 'Required', a boolean is needed: %s" % required)
            if required:
                opt_leadin = "="
            else:
                opt_leadin = "-"

            text.append("%s%s %s" % (base_indent, opt_leadin, o))

            # description is specifically formatted and can either be string or list of strings
            if 'description' not in opt:
                raise AnsibleError("All (sub-)options and return values must have a 'description' field")
            if is_sequence(opt['description']):
                for entry_idx, entry in enumerate(opt['description'], 1):
                    if not isinstance(entry, string_types):
                        raise AnsibleError("Expected string in description of %s at index %s, got %s" % (o, entry_idx, type(entry)))
                    text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
            else:
                if not isinstance(opt['description'], string_types):
                    raise AnsibleError("Expected string in description of %s, got %s" % (o, type(opt['description'])))
                text.append(textwrap.fill(DocCLI.tty_ify(opt['description']), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
            del opt['description']

            suboptions = []
            for subkey in ('options', 'suboptions', 'contains', 'spec'):
                if subkey in opt:
                    suboptions.append((subkey, opt.pop(subkey)))

            if not required and not return_values and 'default' not in opt:
                opt['default'] = None

            # sanitize config items
            conf = {}
            for config in ('env', 'ini', 'yaml', 'vars', 'keyword'):
                if config in opt and opt[config]:
                    # Create a copy so we don't modify the original (in case YAML anchors have been used)
                    conf[config] = [dict(item) for item in opt.pop(config)]
                    for ignore in DocCLI.IGNORE:
                        for item in conf[config]:
                            if ignore in item:
                                del item[ignore]

            # reformat cli options
            if 'cli' in opt and opt['cli']:
                conf['cli'] = []
                for cli in opt['cli']:
                    if 'option' not in cli:
                        conf['cli'].append({'name': cli['name'], 'option': '--%s' % cli['name'].replace('_', '-')})
                    else:
                        conf['cli'].append(cli)
                del opt['cli']

            # add custom header for conf
            if conf:
                text.append(DocCLI._indent_lines(DocCLI._dump_yaml({'set_via': conf}), opt_indent))

            # these we handle at the end of generic option processing
            version_added = opt.pop('version_added', None)
            version_added_collection = opt.pop('version_added_collection', None)

            # general processing for options
            for k in sorted(opt):
                if k.startswith('_'):
                    continue

                if is_sequence(opt[k]):
                    text.append(DocCLI._indent_lines('%s: %s' % (k, DocCLI._dump_yaml(opt[k], flow_style=True)), opt_indent))
                else:
                    text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k: opt[k]}), opt_indent))

            if version_added:
                text.append("%sadded in: %s\n" % (opt_indent, DocCLI._format_version_added(version_added, version_added_collection)))

            for subkey, subdata in suboptions:
                text.append('')
                text.append("%s%s:\n" % (opt_indent, subkey.upper()))
                DocCLI.add_fields(text, subdata, limit, opt_indent + ' ', return_values, opt_indent)
            if not suboptions:
                text.append('')
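    # Illustrative rendering (assumed): add_fields() prefixes required options
    # with '=' and optional ones with '-', roughly like
    #   = src
    #         Local path to a file to copy to the remote server.
    #   - backup
    #         Create a backup file ...
    #         default: false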
    def get_role_man_text(self, role, role_json):
        '''Generate text for the supplied role suitable for display.

        This is similar to get_man_text(), but roles are different enough that we
        have a separate method for formatting their display.

        :param role: The role name.
        :param role_json: The JSON for the given role as returned from _create_role_doc().

        :returns: An array of text suitable for displaying to the screen.
        '''
        text = []
        opt_indent = " "
        pad = display.columns * 0.20
        limit = max(display.columns - int(pad), 70)

        text.append("> %s (%s)\n" % (role.upper(), role_json.get('path')))

        for entry_point in role_json['entry_points']:
            doc = role_json['entry_points'][entry_point]

            if doc.get('short_description'):
                text.append("ENTRY POINT: %s - %s\n" % (entry_point, doc.get('short_description')))
            else:
                text.append("ENTRY POINT: %s\n" % entry_point)

            if doc.get('description'):
                if isinstance(doc['description'], list):
                    desc = " ".join(doc['description'])
                else:
                    desc = doc['description']

                text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))

            if doc.get('options'):
                text.append("OPTIONS (= is mandatory):\n")
                DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
                text.append('')

            if doc.get('attributes'):
                text.append("ATTRIBUTES:\n")
                text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent))
                text.append('')

            # generic elements we will handle identically
            for k in ('author',):
                if k not in doc:
                    continue
                if isinstance(doc[k], string_types):
                    text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
                elif isinstance(doc[k], (list, tuple)):
                    text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
                else:
                    # use empty indent since this affects the start of the yaml doc, not its keys
                    text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), ''))

        text.append('')

        return text
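    # Illustrative output shape (assumed; role name and path are placeholders):
    #   > MY_NAMESPACE.MY_ROLE (/home/user/.ansible/roles/my_namespace.my_role)
    #   ENTRY POINT: main - A short description of the entry point
    #   OPTIONS (= is mandatory):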
    @staticmethod
    def get_man_text(doc, collection_name='', plugin_type=''):
        # Create a copy so we don't modify the original
        doc = dict(doc)

        DocCLI.IGNORE = DocCLI.IGNORE + (context.CLIARGS['type'],)
        opt_indent = " "
        text = []
        pad = display.columns * 0.20
        limit = max(display.columns - int(pad), 70)

        plugin_name = doc.get(context.CLIARGS['type'], doc.get('name')) or doc.get('plugin_type') or plugin_type
        if collection_name:
            plugin_name = '%s.%s' % (collection_name, plugin_name)

        text.append("> %s (%s)\n" % (plugin_name.upper(), doc.pop('filename')))

        if isinstance(doc['description'], list):
            desc = " ".join(doc.pop('description'))
        else:
            desc = doc.pop('description')

        text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))

        if 'version_added' in doc:
            version_added = doc.pop('version_added')
            version_added_collection = doc.pop('version_added_collection', None)
            text.append("ADDED IN: %s\n" % DocCLI._format_version_added(version_added, version_added_collection))

        if doc.get('deprecated', False):
            text.append("DEPRECATED: \n")
            if isinstance(doc['deprecated'], dict):
                if 'removed_at_date' in doc['deprecated']:
                    text.append(
                        "\tReason: %(why)s\n\tWill be removed in a release after %(removed_at_date)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated')
                    )
                else:
                    if 'version' in doc['deprecated'] and 'removed_in' not in doc['deprecated']:
                        doc['deprecated']['removed_in'] = doc['deprecated']['version']
                    text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
            else:
                text.append("%s" % doc.pop('deprecated'))
            text.append("\n")

        if doc.pop('has_action', False):
            text.append(" * note: %s\n" % "This module has a corresponding action plugin.")

        if doc.get('options', False):
            text.append("OPTIONS (= is mandatory):\n")
            DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
            text.append('')

        if doc.get('attributes', False):
            text.append("ATTRIBUTES:\n")
            text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent))
            text.append('')

        if doc.get('notes', False):
            text.append("NOTES:")
            for note in doc['notes']:
                text.append(textwrap.fill(DocCLI.tty_ify(note), limit - 6,
                                          initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
            text.append('')
            text.append('')
            del doc['notes']

        if doc.get('seealso', False):
            text.append("SEE ALSO:")
            for item in doc['seealso']:
                if 'module' in item:
                    text.append(textwrap.fill(DocCLI.tty_ify('Module %s' % item['module']),
                                              limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
                    description = item.get('description')
                    if description is None and item['module'].startswith('ansible.builtin.'):
                        description = 'The official documentation on the %s module.' % item['module']
                    if description is not None:
                        text.append(textwrap.fill(DocCLI.tty_ify(description),
                                                  limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
                    if item['module'].startswith('ansible.builtin.'):
                        relative_url = 'collections/%s_module.html' % item['module'].replace('.', '/', 2)
                        text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
                                                  limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
                elif 'plugin' in item and 'plugin_type' in item:
                    plugin_suffix = ' plugin' if item['plugin_type'] not in ('module', 'role') else ''
                    text.append(textwrap.fill(DocCLI.tty_ify('%s%s %s' % (item['plugin_type'].title(), plugin_suffix, item['plugin'])),
                                              limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
                    description = item.get('description')
                    if description is None and item['plugin'].startswith('ansible.builtin.'):
                        description = 'The official documentation on the %s %s%s.' % (item['plugin'], item['plugin_type'], plugin_suffix)
                    if description is not None:
                        text.append(textwrap.fill(DocCLI.tty_ify(description),
                                                  limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
                    if item['plugin'].startswith('ansible.builtin.'):
                        relative_url = 'collections/%s_%s.html' % (item['plugin'].replace('.', '/', 2), item['plugin_type'])
                        text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
                                                  limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
                elif 'name' in item and 'link' in item and 'description' in item:
                    text.append(textwrap.fill(DocCLI.tty_ify(item['name']),
                                              limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
                    text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
                                              limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
                    text.append(textwrap.fill(DocCLI.tty_ify(item['link']),
                                              limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
                elif 'ref' in item and 'description' in item:
                    text.append(textwrap.fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
                                              limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
                    text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
                                              limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
                    text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
                                              limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))

            text.append('')
            text.append('')
            del doc['seealso']

        if doc.get('requirements', False):
            req = ", ".join(doc.pop('requirements'))
            text.append("REQUIREMENTS:%s\n" % textwrap.fill(DocCLI.tty_ify(req), limit - 16, initial_indent=" ", subsequent_indent=opt_indent))
        # Generic handler
        for k in sorted(doc):
            if k in DocCLI.IGNORE or not doc[k]:
                continue
            if isinstance(doc[k], string_types):
                text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
            elif isinstance(doc[k], (list, tuple)):
                text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
            else:
                # use empty indent since this affects the start of the yaml doc, not its keys
                text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), ''))
            del doc[k]
            text.append('')

        if doc.get('plainexamples', False):
            text.append("EXAMPLES:")
            text.append('')
            if isinstance(doc['plainexamples'], string_types):
                text.append(doc.pop('plainexamples').strip())
            else:
                try:
                    text.append(yaml_dump(doc.pop('plainexamples'), indent=2, default_flow_style=False))
                except Exception as e:
                    raise AnsibleParserError("Unable to parse examples section", orig_exc=e)
            text.append('')
            text.append('')

        if doc.get('returndocs', False):
            text.append("RETURN VALUES:")
            DocCLI.add_fields(text, doc.pop('returndocs'), limit, opt_indent, return_values=True)

        return "\n".join(text)


def _do_yaml_snippet(doc):
    text = []

    mdesc = DocCLI.tty_ify(doc['short_description'])
    module = doc.get('module')

    if module:
        # this is actually a usable task!
        text.append("- name: %s" % (mdesc))
        text.append(" %s:" % (module))
    else:
        # just a comment, hopefully useful yaml file
        text.append("# %s:" % doc.get('plugin', doc.get('name')))

    pad = 29
    subdent = '# '.rjust(pad + 2)
    limit = display.columns - pad

    for o in sorted(doc['options'].keys()):
        opt = doc['options'][o]
        if isinstance(opt['description'], string_types):
            desc = DocCLI.tty_ify(opt['description'])
        else:
            desc = DocCLI.tty_ify(" ".join(opt['description']))

        required = opt.get('required', False)
        if not isinstance(required, bool):
            raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)

        o = '%s:' % o
        if module:
            if required:
                desc = "(required) %s" % desc
            text.append(" %-20s # %s" % (o, textwrap.fill(desc, limit, subsequent_indent=subdent)))
        else:
            if required:
                default = '(required)'
            else:
                default = opt.get('default', 'None')

            text.append("%s %-9s # %s" % (o, default, textwrap.fill(desc, limit, subsequent_indent=subdent, max_lines=3)))

    return text


def _do_lookup_snippet(doc):
    text = []
    snippet = "lookup('%s', " % doc.get('plugin', doc.get('name'))
    comment = []

    for o in sorted(doc['options'].keys()):

        opt = doc['options'][o]
        comment.append('# %s(%s): %s' % (o, opt.get('type', 'string'), opt.get('description', '')))

        if o in ('_terms', '_raw', '_list'):
            # these are 'list of arguments'
            snippet += '< %s >' % (o)
            continue

        required = opt.get('required', False)
        if not isinstance(required, bool):
            raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)

        if required:
            default = ''
        else:
            default = opt.get('default', 'None')

        if opt.get('type') in ('string', 'str'):
            snippet += ", %s='%s'" % (o, default)
        else:
            snippet += ', %s=%s' % (o, default)

    snippet += ")"

    if comment:
        text.extend(comment)
        text.append('')
    text.append(snippet)

    return text
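# Illustrative (assumed; the plugin name, options, and default are placeholders):
# for a lookup plugin with a '_terms' argument and an optional string option
# 'errors' defaulting to 'strict', _do_lookup_snippet() yields roughly:
#   # _terms(string): The term(s) to look up.
#   # errors(string): How to handle lookup errors.
#
#   lookup('my.coll.mylookup', < _terms >, errors='strict')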
def main(args=None):
    DocCLI.cli_executor(args)


if __name__ == '__main__':
    main()

ansible-core-2.16.3/bin/ansible-galaxy

#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI

import argparse
import functools
import json
import os.path
import pathlib
import re
import shutil
import sys
import textwrap
import time
import typing as t

from dataclasses import dataclass
from yaml.error import YAMLError

import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI, GalaxyError
from ansible.galaxy.collection import (
    build_collection,
    download_collections,
    find_existing_collections,
    install_collections,
    publish_collection,
    validate_collection_name,
    validate_collection_path,
    verify_collections,
    SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
    ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement

from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink

display = Display()
urlparse = six.moves.urllib.parse.urlparse

# config definition by position: name, required, type
SERVER_DEF = [
    ('url', True, 'str'),
    ('username', False, 'str'),
    ('password', False, 'str'),
    ('token', False, 'str'),
    ('auth_url', False, 'str'),
    ('api_version', False, 'int'),
    ('validate_certs', False, 'bool'),
    ('client_id', False, 'str'),
    ('timeout', False, 'int'),
]

# config definition fields
SERVER_ADDITIONAL = {
    'api_version': {'default': None, 'choices': [2, 3]},
    'validate_certs': {'cli': [{'name': 'validate_certs'}]},
    'timeout': {'default': C.GALAXY_SERVER_TIMEOUT, 'cli': [{'name': 'timeout'}]},
    'token': {'default': None},
}
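# Illustrative ansible.cfg fragment that feeds the SERVER_DEF fields above
# (the server name 'my_galaxy' and the token are placeholders):
#   [galaxy]
#   server_list = my_galaxy
#
#   [galaxy_server.my_galaxy]
#   url = https://galaxy.ansible.com/
#   token = <your API token>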
""" @functools.wraps(wrapped_method) def method_wrapper(*args, **kwargs): if 'artifacts_manager' in kwargs: return wrapped_method(*args, **kwargs) # FIXME: use validate_certs context from Galaxy servers when downloading collections # .get used here for when this is used in a non-CLI context artifacts_manager_kwargs = {'validate_certs': context.CLIARGS.get('resolved_validate_certs', True)} keyring = context.CLIARGS.get('keyring', None) if keyring is not None: artifacts_manager_kwargs.update({ 'keyring': GalaxyCLI._resolve_path(keyring), 'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None), 'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None), }) with ConcreteArtifactsManager.under_tmpdir( C.DEFAULT_LOCAL_TMP, **artifacts_manager_kwargs ) as concrete_artifact_cm: kwargs['artifacts_manager'] = concrete_artifact_cm return wrapped_method(*args, **kwargs) return method_wrapper def _display_header(path, h1, h2, w1=10, w2=7): display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format( path, h1, h2, '-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header '-' * max([len(h2), w2]), cwidth=w1, vwidth=w2, )) def _display_role(gr): install_info = gr.install_info version = None if install_info: version = install_info.get("version", None) if not version: version = "(unknown version)" display.display("- %s, %s" % (gr.name, version)) def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7): display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format( fqcn=to_text(collection.fqcn), version=collection.ver, cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header vwidth=max(vwidth, min_vwidth) )) def _get_collection_widths(collections): if not is_iterable(collections): collections = (collections, ) fqcn_set = {to_text(c.fqcn) for c in collections} version_set = {to_text(c.ver) for c in collections} fqcn_length = len(max(fqcn_set or [''], key=len)) version_length = len(max(version_set or [''], key=len)) return fqcn_length, version_length def validate_signature_count(value): match = re.match(SIGNATURE_COUNT_RE, value) if match is None: raise ValueError(f"{value} is not a valid signature count value") return value @dataclass class RoleDistributionServer: _api: t.Union[GalaxyAPI, None] api_servers: list[GalaxyAPI] @property def api(self): if self._api: return self._api for server in self.api_servers: try: if u'v1' in server.available_api_versions: self._api = server break except Exception: continue if not self._api: self._api = self.api_servers[0] return self._api class GalaxyCLI(CLI): '''Command to manage Ansible roles and collections. None of the CLI tools are designed to run concurrently with themselves. Use an external scheduler and/or locking to ensure there are no clashing operations. 
''' name = 'ansible-galaxy' SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url") def __init__(self, args): self._raw_args = args self._implicit_role = False if len(args) > 1: # Inject role into sys.argv[1] as a backwards compatibility step if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args: # TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice args.insert(1, 'role') self._implicit_role = True # since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization if args[1:3] == ['role', 'login']: display.error( "The login command was removed in late 2020. An API key is now required to publish roles or collections " "to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the " "ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` " "command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH))) sys.exit(1) self.api_servers = [] self.galaxy = None self.lazy_role_api = None super(GalaxyCLI, self).__init__(args) def init_parser(self): ''' create an options parser for bin/ansible ''' super(GalaxyCLI, self).init_parser( desc="Perform various Role and Collection related operations.", ) # Common arguments that apply to more than 1 action common = opt_help.ArgumentParser(add_help=False) common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL') common.add_argument('--api-version', type=int, choices=[2, 3], help=argparse.SUPPRESS) # Hidden argument that should only be used in our tests common.add_argument('--token', '--api-key', dest='api_key', help='The Ansible Galaxy API key which can be found at ' 'https://galaxy.ansible.com/me/preferences.') common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None) # --timeout uses the default None to handle two different scenarios. # * --timeout > C.GALAXY_SERVER_TIMEOUT for non-configured servers # * --timeout > server-specific timeout > C.GALAXY_SERVER_TIMEOUT for configured servers. common.add_argument('--timeout', dest='timeout', type=int, help="The time to wait for operations against the galaxy server, defaults to 60s.") opt_help.add_verbosity_options(common) force = opt_help.ArgumentParser(add_help=False) force.add_argument('-f', '--force', dest='force', action='store_true', default=False, help='Force overwriting an existing role or collection') github = opt_help.ArgumentParser(add_help=False) github.add_argument('github_user', help='GitHub username') github.add_argument('github_repo', help='GitHub repository') offline = opt_help.ArgumentParser(add_help=False) offline.add_argument('--offline', dest='offline', default=False, action='store_true', help="Don't query the galaxy API when creating roles") default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '') roles_path = opt_help.ArgumentParser(add_help=False) roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True), default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction, help='The path to the directory containing your roles. 
The default is the first ' 'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path) collections_path = opt_help.ArgumentParser(add_help=False) collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True), action=opt_help.PrependListAction, help="One or more directories to search for collections in addition " "to the default COLLECTIONS_PATHS. Separate multiple paths " "with '{0}'.".format(os.path.pathsep)) cache_options = opt_help.ArgumentParser(add_help=False) cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true', default=False, help='Clear the existing server response cache.') cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False, help='Do not use the server response cache.') # Add sub parser for the Galaxy role type (role or collection) type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type') type_parser.required = True # Add sub parser for the Galaxy collection actions collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.') collection.set_defaults(func=self.execute_collection) # to satisfy doc build collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action') collection_parser.required = True self.add_download_options(collection_parser, parents=[common, cache_options]) self.add_init_options(collection_parser, parents=[common, force]) self.add_build_options(collection_parser, parents=[common, force]) self.add_publish_options(collection_parser, parents=[common]) self.add_install_options(collection_parser, parents=[common, force, cache_options]) self.add_list_options(collection_parser, parents=[common, collections_path]) self.add_verify_options(collection_parser, parents=[common, collections_path]) # Add sub parser for the Galaxy role actions role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.') role.set_defaults(func=self.execute_role) # to satisfy doc build role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action') role_parser.required = True self.add_init_options(role_parser, parents=[common, force, offline]) self.add_remove_options(role_parser, parents=[common, roles_path]) self.add_delete_options(role_parser, parents=[common, github]) self.add_list_options(role_parser, parents=[common, roles_path]) self.add_search_options(role_parser, parents=[common]) self.add_import_options(role_parser, parents=[common, github]) self.add_setup_options(role_parser, parents=[common, roles_path]) self.add_info_options(role_parser, parents=[common, roles_path, offline]) self.add_install_options(role_parser, parents=[common, force, roles_path]) def add_download_options(self, parser, parents=None): download_parser = parser.add_parser('download', parents=parents, help='Download collections and their dependencies as a tarball for an ' 'offline install.') download_parser.set_defaults(func=self.execute_download) download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*') download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False, help="Don't download collection(s) listed as dependencies.") download_parser.add_argument('-p', '--download-path', dest='download_path', default='./collections', help='The directory to download the collections to.') download_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of 
collections to be downloaded.') download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true', help='Include pre-release versions. Semantic versioning pre-releases are ignored by default') def add_init_options(self, parser, parents=None): galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role' init_parser = parser.add_parser('init', parents=parents, help='Initialize new {0} with the base structure of a ' '{0}.'.format(galaxy_type)) init_parser.set_defaults(func=self.execute_init) init_parser.add_argument('--init-path', dest='init_path', default='./', help='The path in which the skeleton {0} will be created. The default is the ' 'current working directory.'.format(galaxy_type)) init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type), default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON, help='The path to a {0} skeleton that the new {0} should be based ' 'upon.'.format(galaxy_type)) obj_name_kwargs = {} if galaxy_type == 'collection': obj_name_kwargs['type'] = validate_collection_name init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()), **obj_name_kwargs) if galaxy_type == 'role': init_parser.add_argument('--type', dest='role_type', action='store', default='default', help="Initialize using an alternate role type. Valid types include: 'container', " "'apb' and 'network'.") def add_remove_options(self, parser, parents=None): remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.') remove_parser.set_defaults(func=self.execute_remove) remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+') def add_delete_options(self, parser, parents=None): delete_parser = parser.add_parser('delete', parents=parents, help='Removes the role from Galaxy. 
It does not remove or alter the actual ' 'GitHub repository.') delete_parser.set_defaults(func=self.execute_delete) def add_list_options(self, parser, parents=None): galaxy_type = 'role' if parser.metavar == 'COLLECTION_ACTION': galaxy_type = 'collection' list_parser = parser.add_parser('list', parents=parents, help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type)) list_parser.set_defaults(func=self.execute_list) list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type) if galaxy_type == 'collection': list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human', help="Format to display the list of collections in.") def add_search_options(self, parser, parents=None): search_parser = parser.add_parser('search', parents=parents, help='Search the Galaxy database by tags, platforms, author and multiple ' 'keywords.') search_parser.set_defaults(func=self.execute_search) search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by') search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by') search_parser.add_argument('--author', dest='author', help='GitHub username') search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*') def add_import_options(self, parser, parents=None): import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server') import_parser.set_defaults(func=self.execute_import) import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True, help="Don't wait for import results.") import_parser.add_argument('--branch', dest='reference', help='The name of a branch to import. Defaults to the repository\'s default branch ' '(usually master)') import_parser.add_argument('--role-name', dest='role_name', help='The name the role should have, if different than the repo name') import_parser.add_argument('--status', dest='check_status', action='store_true', default=False, help='Check the status of the most recent import request for given github_' 'user/github_repo.') def add_setup_options(self, parser, parents=None): setup_parser = parser.add_parser('setup', parents=parents, help='Manage the integration between Galaxy and the given source.') setup_parser.set_defaults(func=self.execute_setup) setup_parser.add_argument('--remove', dest='remove_id', default=None, help='Remove the integration matching the provided ID value. Use --list to see ' 'ID values.') setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False, help='List all of your integrations.') setup_parser.add_argument('source', help='Source') setup_parser.add_argument('github_user', help='GitHub username') setup_parser.add_argument('github_repo', help='GitHub repository') setup_parser.add_argument('secret', help='Secret') def add_info_options(self, parser, parents=None): info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.') info_parser.set_defaults(func=self.execute_info) info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]') def add_verify_options(self, parser, parents=None): galaxy_type = 'collection' verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) ' 'found on the server and the installed copy. 
This does not verify dependencies.') verify_parser.set_defaults(func=self.execute_verify) verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. ' 'This is mutually exclusive with --requirements-file.') verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False, help='Ignore errors during verification and continue with the next specified collection.') verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False, help='Validate collection integrity locally without contacting server for ' 'canonical manifest hash.') verify_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of collections to be verified.') verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING, help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx? verify_parser.add_argument('--signature', dest='signatures', action='append', help='An additional signature source to verify the authenticity of the MANIFEST.json before using ' 'it to verify the rest of the contents of a collection from a Galaxy server. Use in ' 'conjunction with a positional collection name (mutually exclusive with --requirements-file).') valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \ 'or all to signify that all signatures must be used to verify the collection. ' \ 'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).' ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \ 'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \ 'Note: specify these after positional arguments or use -- to separate them.' verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count, help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT) verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append', help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) verify_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+', help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) def add_install_options(self, parser, parents=None): galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role' args_kwargs = {} if galaxy_type == 'collection': args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \ 'mutually exclusive with --requirements-file.' ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \ 'collection. This will not ignore dependency conflict errors.' else: args_kwargs['help'] = 'Role name, URL or tar file' ignore_errors_help = 'Ignore errors and continue with the next specified role.' 
install_parser = parser.add_parser('install', parents=parents, help='Install {0}(s) from file(s), URL(s) or Ansible ' 'Galaxy'.format(galaxy_type)) install_parser.set_defaults(func=self.execute_install) install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs) install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False, help=ignore_errors_help) install_exclusive = install_parser.add_mutually_exclusive_group() install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False, help="Don't download {0}s listed as dependencies.".format(galaxy_type)) install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False, help="Force overwriting an existing {0} and its " "dependencies.".format(galaxy_type)) valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \ 'or -1 to signify that all signatures must be used to verify the collection. ' \ 'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).' ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \ 'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \ 'Note: specify these after positional arguments or use -- to separate them.' if galaxy_type == 'collection': install_parser.add_argument('-p', '--collections-path', dest='collections_path', default=self._get_default_collection_path(), help='The path to the directory containing your collections.') install_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of collections to be installed.') install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true', help='Include pre-release versions. Semantic versioning pre-releases are ignored by default') install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False, help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided') install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING, help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx? install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true', default=C.GALAXY_DISABLE_GPG_VERIFY, help='Disable GPG signature verification when installing collections from a Galaxy server') install_parser.add_argument('--signature', dest='signatures', action='append', help='An additional signature source to verify the authenticity of the MANIFEST.json before ' 'installing the collection from a Galaxy server. 
Use in conjunction with a positional ' 'collection name (mutually exclusive with --requirements-file).') install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count, help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT) install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append', help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) install_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+', help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) install_parser.add_argument('--offline', dest='offline', action='store_true', default=False, help='Install collection artifacts (tarballs) without contacting any distribution servers. ' 'This does not apply to collections in remote Git repositories or URLs to remote tarballs.' ) else: install_parser.add_argument('-r', '--role-file', dest='requirements', help='A file containing a list of roles to be installed.') r_re = re.compile(r'^(?] for the values url, username, password, and token. config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF) defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data() C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs) # resolve the config created options above with existing config and user options server_options = C.config.get_plugin_options('galaxy_server', server_key) # auth_url is used to create the token, but not directly by GalaxyAPI, so # it doesn't need to be passed as kwarg to GalaxyApi, same for others we pop here auth_url = server_options.pop('auth_url') client_id = server_options.pop('client_id') token_val = server_options['token'] or NoTokenSentinel username = server_options['username'] api_version = server_options.pop('api_version') if server_options['validate_certs'] is None: server_options['validate_certs'] = context.CLIARGS['resolved_validate_certs'] validate_certs = server_options['validate_certs'] # This allows a user to explicitly force use of an API version when # multiple versions are supported. This was added for testing # against pulp_ansible and I'm not sure it has a practical purpose # outside of this use case. As such, this option is not documented # as of now if api_version: display.warning( f'The specified "api_version" configuration for the galaxy server "{server_key}" is ' 'not a public configuration, and may be removed at any time without warning.' ) server_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version} # default case if no auth info is provided. 
            server_options['token'] = None
            if username:
                server_options['token'] = BasicAuthToken(username, server_options['password'])
            else:
                if token_val:
                    if auth_url:
                        server_options['token'] = KeycloakToken(access_token=token_val,
                                                                auth_url=auth_url,
                                                                validate_certs=validate_certs,
                                                                client_id=client_id)
                    else:
                        # The galaxy v1 / github / django / 'Token'
                        server_options['token'] = GalaxyToken(token=token_val)

            server_options.update(galaxy_options)
            config_servers.append(GalaxyAPI(
                self.galaxy, server_key,
                priority=server_priority,
                **server_options
            ))

        cmd_server = context.CLIARGS['api_server']
        if context.CLIARGS['api_version']:
            api_version = context.CLIARGS['api_version']
            display.warning(
                'The --api-version is not a public argument, and may be removed at any time without warning.'
            )
            galaxy_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version}

        cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])

        validate_certs = context.CLIARGS['resolved_validate_certs']
        default_server_timeout = context.CLIARGS['timeout'] if context.CLIARGS['timeout'] is not None else C.GALAXY_SERVER_TIMEOUT
        if cmd_server:
            # Cmd args take precedence over the config entry but first check if the arg was a name and use that config
            # entry, otherwise create a new API entry for the server specified.
            config_server = next((s for s in config_servers if s.name == cmd_server), None)
            if config_server:
                self.api_servers.append(config_server)
            else:
                self.api_servers.append(GalaxyAPI(
                    self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
                    priority=len(config_servers) + 1,
                    validate_certs=validate_certs,
                    timeout=default_server_timeout,
                    **galaxy_options
                ))
        else:
            self.api_servers = config_servers

        # Default to C.GALAXY_SERVER if no servers were defined
        if len(self.api_servers) == 0:
            self.api_servers.append(GalaxyAPI(
                self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
                priority=0,
                validate_certs=validate_certs,
                timeout=default_server_timeout,
                **galaxy_options
            ))

        # checks api versions once a GalaxyRole makes an api call
        # self.api can be used to evaluate the best server immediately
        self.lazy_role_api = RoleDistributionServer(None, self.api_servers)

        return context.CLIARGS['func']()

    @property
    def api(self):
        return self.lazy_role_api.api

    def _get_default_collection_path(self):
        return C.COLLECTIONS_PATHS[0]
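    # Illustrative v2 requirements.yml accepted by _parse_requirements_file()
    # below (role/collection names, versions, and the source URL are placeholders):
    #   roles:
    #     - name: geerlingguy.java
    #       version: 1.9.6
    #   collections:
    #     - community.general
    #     - name: ansible.posix
    #       version: '>=1.5.4'
    #       source: https://galaxy.ansible.com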
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False. :param artifacts_manager: Artifacts manager. :return: a dict containing roles and collections to found in the requirements file. """ requirements = { 'roles': [], 'collections': [], } b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict') if not os.path.exists(b_requirements_file): raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file)) display.vvv("Reading requirement file at '%s'" % requirements_file) with open(b_requirements_file, 'rb') as req_obj: try: file_requirements = yaml_load(req_obj) except YAMLError as err: raise AnsibleError( "Failed to parse the requirements yml at '%s' with the following error:\n%s" % (to_native(requirements_file), to_native(err))) if file_requirements is None: raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file)) def parse_role_req(requirement): if "include" not in requirement: role = RoleRequirement.role_yaml_parse(requirement) display.vvv("found role %s in yaml file" % to_text(role)) if "name" not in role and "src" not in role: raise AnsibleError("Must specify name or src for role") return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)] else: b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict") if not os.path.isfile(b_include_path): raise AnsibleError("Failed to find include requirements file '%s' in '%s'" % (to_native(b_include_path), to_native(requirements_file))) with open(b_include_path, 'rb') as f_include: try: return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in (RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))] except Exception as e: raise AnsibleError("Unable to load data from include requirements file: %s %s" % (to_native(requirements_file), to_native(e))) if isinstance(file_requirements, list): # Older format that contains only roles if not allow_old_format: raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains " "a list of collections to install") for role_req in file_requirements: requirements['roles'] += parse_role_req(role_req) elif isinstance(file_requirements, dict): # Newer format with a collections and/or roles key extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections'])) if extra_keys: raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements " "file. Found: %s" % (to_native(", ".join(extra_keys)))) for role_req in file_requirements.get('roles') or []: requirements['roles'] += parse_role_req(role_req) requirements['collections'] = [ Requirement.from_requirement_dict( self._init_coll_req_dict(collection_req), artifacts_manager, validate_signature_options, ) for collection_req in file_requirements.get('collections') or [] ] else: raise AnsibleError(f"Expecting requirements yaml to be a list or dictionary but got {type(file_requirements).__name__}") return requirements def _init_coll_req_dict(self, coll_req): if not isinstance(coll_req, dict): # Assume it's a string: return {'name': coll_req} if ( 'name' not in coll_req or not coll_req.get('source') or coll_req.get('type', 'galaxy') != 'galaxy' ): return coll_req # Try and match up the requirement source with our list of Galaxy API # servers defined in the config, otherwise create a server with that # URL without any auth. 
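        # A minimal v2 requirements.yml of the shape accepted by _parse_requirements_file()
        # above (names, versions and URLs are placeholders):
        #
        #     ---
        #     roles:
        #       - name: my_namespace.my_role
        #         version: 1.2.3
        #     collections:
        #       - name: community.general
        #         version: ">=8.0.0"
        #         source: https://galaxy.ansible.com
        #       - name: https://github.com/example_org/example_collection.git
        #         type: git
        #         version: main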
coll_req['source'] = next( iter( srvr for srvr in self.api_servers if coll_req['source'] in {srvr.name, srvr.api_server} ), GalaxyAPI( self.galaxy, 'explicit_requirement_{name!s}'.format( name=coll_req['name'], ), coll_req['source'], validate_certs=context.CLIARGS['resolved_validate_certs'], ), ) return coll_req @staticmethod def exit_without_ignore(rc=1): """ Exits with the specified return code unless the option --ignore-errors was specified """ if not context.CLIARGS['ignore_errors']: raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.') @staticmethod def _display_role_info(role_info): text = [u"", u"Role: %s" % to_text(role_info['name'])] # Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description']. galaxy_info = role_info.get('galaxy_info', {}) description = role_info.get('description', galaxy_info.get('description', '')) text.append(u"\tdescription: %s" % description) for k in sorted(role_info.keys()): if k in GalaxyCLI.SKIP_INFO_KEYS: continue if isinstance(role_info[k], dict): text.append(u"\t%s:" % (k)) for key in sorted(role_info[k].keys()): if key in GalaxyCLI.SKIP_INFO_KEYS: continue text.append(u"\t\t%s: %s" % (key, role_info[k][key])) else: text.append(u"\t%s: %s" % (k, role_info[k])) # make sure we have a trailing newline returned text.append(u"") return u'\n'.join(text) @staticmethod def _resolve_path(path): return os.path.abspath(os.path.expanduser(os.path.expandvars(path))) @staticmethod def _get_skeleton_galaxy_yml(template_path, inject_data): with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj: meta_template = to_text(template_obj.read(), errors='surrogate_or_strict') galaxy_meta = get_collections_galaxy_meta_info() required_config = [] optional_config = [] for meta_entry in galaxy_meta: config_list = required_config if meta_entry.get('required', False) else optional_config value = inject_data.get(meta_entry['key'], None) if not value: meta_type = meta_entry.get('type', 'str') if meta_type == 'str': value = '' elif meta_type == 'list': value = [] elif meta_type == 'dict': value = {} meta_entry['value'] = value config_list.append(meta_entry) link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)") const_pattern = re.compile(r"C\(([^)]+)\)") def comment_ify(v): if isinstance(v, list): v = ". ".join([l.rstrip('.') for l in v]) v = link_pattern.sub(r"\1 <\2>", v) v = const_pattern.sub(r"'\1'", v) return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False) loader = DataLoader() templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config}) templar.environment.filters['comment_ify'] = comment_ify meta_value = templar.template(meta_template) return meta_value def _require_one_of_collections_requirements( self, collections, requirements_file, signatures=None, artifacts_manager=None, ): if collections and requirements_file: raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.") elif not collections and not requirements_file: raise AnsibleError("You must specify a collection name or a requirements file.") elif requirements_file: if signatures is not None: raise AnsibleError( "The --signatures option and --requirements-file are mutually exclusive. " "Use the --signatures with positional collection_name args or provide a " "'signatures' key for requirements in the --requirements-file." 
) requirements_file = GalaxyCLI._resolve_path(requirements_file) requirements = self._parse_requirements_file( requirements_file, allow_old_format=False, artifacts_manager=artifacts_manager, ) else: requirements = { 'collections': [ Requirement.from_string(coll_input, artifacts_manager, signatures) for coll_input in collections ], 'roles': [], } return requirements ############################ # execute actions ############################ def execute_role(self): """ Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init as listed below. """ # To satisfy doc build pass def execute_collection(self): """ Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as listed below. """ # To satisfy doc build pass def execute_build(self): """ Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy. By default, this command builds from the current working directory. You can optionally pass in the collection input path (where the ``galaxy.yml`` file is). """ force = context.CLIARGS['force'] output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path']) b_output_path = to_bytes(output_path, errors='surrogate_or_strict') if not os.path.exists(b_output_path): os.makedirs(b_output_path) elif os.path.isfile(b_output_path): raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path)) for collection_path in context.CLIARGS['args']: collection_path = GalaxyCLI._resolve_path(collection_path) build_collection( to_text(collection_path, errors='surrogate_or_strict'), to_text(output_path, errors='surrogate_or_strict'), force, ) @with_collection_artifacts_manager def execute_download(self, artifacts_manager=None): """Download collections and their dependencies as a tarball for an offline install.""" collections = context.CLIARGS['args'] no_deps = context.CLIARGS['no_deps'] download_path = context.CLIARGS['download_path'] requirements_file = context.CLIARGS['requirements'] if requirements_file: requirements_file = GalaxyCLI._resolve_path(requirements_file) requirements = self._require_one_of_collections_requirements( collections, requirements_file, artifacts_manager=artifacts_manager, )['collections'] download_path = GalaxyCLI._resolve_path(download_path) b_download_path = to_bytes(download_path, errors='surrogate_or_strict') if not os.path.exists(b_download_path): os.makedirs(b_download_path) download_collections( requirements, download_path, self.api_servers, no_deps, context.CLIARGS['allow_pre_release'], artifacts_manager=artifacts_manager, ) return 0 def execute_init(self): """ Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format. Requires a role or collection name. The collection name must be in the format ``.``. 
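
        Typical invocations (illustrative names):

            ansible-galaxy collection init my_namespace.my_collection
            ansible-galaxy role init my_role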
""" galaxy_type = context.CLIARGS['type'] init_path = context.CLIARGS['init_path'] force = context.CLIARGS['force'] obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)] obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)] inject_data = dict( description='your {0} description'.format(galaxy_type), ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'), ) if galaxy_type == 'role': inject_data.update(dict( author='your name', company='your company (optional)', license='license (GPL-2.0-or-later, MIT, etc)', role_name=obj_name, role_type=context.CLIARGS['role_type'], issue_tracker_url='http://example.com/issue/tracker', repository_url='http://example.com/repository', documentation_url='http://docs.example.com', homepage_url='http://example.com', min_ansible_version=ansible_version[:3], # x.y dependencies=[], )) skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE obj_path = os.path.join(init_path, obj_name) elif galaxy_type == 'collection': namespace, collection_name = obj_name.split('.', 1) inject_data.update(dict( namespace=namespace, collection_name=collection_name, version='1.0.0', readme='README.md', authors=['your name '], license=['GPL-2.0-or-later'], repository='http://example.com/repository', documentation='http://docs.example.com', homepage='http://example.com', issues='http://example.com/issue/tracker', build_ignore=[], )) skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE obj_path = os.path.join(init_path, namespace, collection_name) b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict') if os.path.exists(b_obj_path): if os.path.isfile(obj_path): raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path)) elif not force: raise AnsibleError("- the directory %s already exists. " "You can use --force to re-initialize this directory,\n" "however it will reset any main.yml files that may have\n" "been modified there already." 
                                   % to_native(obj_path))

        # delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
        for root, dirs, files in os.walk(b_obj_path, topdown=True):
            for old_dir in dirs:
                path = os.path.join(root, old_dir)
                shutil.rmtree(path)
            for old_file in files:
                path = os.path.join(root, old_file)
                os.unlink(path)

        if obj_skeleton is not None:
            own_skeleton = False
        else:
            own_skeleton = True
            obj_skeleton = self.galaxy.default_role_skeleton_path
            skeleton_ignore_expressions = ['^.*/.git_keep$']

        obj_skeleton = os.path.expanduser(obj_skeleton)
        skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]

        if not os.path.exists(obj_skeleton):
            raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
                to_native(obj_skeleton), galaxy_type)
            )

        loader = DataLoader()
        templar = Templar(loader, variables=inject_data)

        # create role directory
        if not os.path.exists(b_obj_path):
            os.makedirs(b_obj_path)

        for root, dirs, files in os.walk(obj_skeleton, topdown=True):
            rel_root = os.path.relpath(root, obj_skeleton)
            rel_dirs = rel_root.split(os.sep)
            rel_root_dir = rel_dirs[0]
            if galaxy_type == 'collection':
                # A collection can contain templates in playbooks/*/templates and roles/*/templates
                in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
            else:
                in_templates_dir = rel_root_dir == 'templates'

            # Filter out ignored directory names
            # Use [:] to mutate the list os.walk uses
            dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]

            for f in files:
                filename, ext = os.path.splitext(f)

                if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
                    continue

                if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
                    # Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
                    # dynamically which requires special options to be set. The templated data's keys must match the
                    # key name but the inject data contains collection_name instead of name. We just make a copy and
                    # change the key back to name for this file.
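                    # Aside: the plain .j2 skeleton files handled further below are rendered
                    # with the DataLoader/Templar pair created above. A standalone sketch of
                    # that call, with made-up variables (illustrative, not part of this file):
                    #
                    #     from ansible.parsing.dataloader import DataLoader
                    #     from ansible.template import Templar
                    #
                    #     templar = Templar(DataLoader(), variables={'role_name': 'demo'})
                    #     print(templar.template('Role: {{ role_name }}'))  # -> Role: demo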
template_data = inject_data.copy() template_data['name'] = template_data.pop('collection_name') meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data) b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict') with open(b_dest_file, 'wb') as galaxy_obj: galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict')) elif ext == ".j2" and not in_templates_dir: src_template = os.path.join(root, f) dest_file = os.path.join(obj_path, rel_root, filename) template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict') b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict') with open(dest_file, 'wb') as df: df.write(b_rendered) else: f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton) shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path), follow_symlinks=False) for d in dirs: b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict') if os.path.exists(b_dir_path): continue b_src_dir = to_bytes(os.path.join(root, d), errors='surrogate_or_strict') if os.path.islink(b_src_dir): shutil.copyfile(b_src_dir, b_dir_path, follow_symlinks=False) else: os.makedirs(b_dir_path) display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name)) def execute_info(self): """ prints out detailed information about an installed role as well as info available from the galaxy API. """ roles_path = context.CLIARGS['roles_path'] data = '' for role in context.CLIARGS['args']: role_info = {'path': roles_path} gr = GalaxyRole(self.galaxy, self.lazy_role_api, role) install_info = gr.install_info if install_info: if 'version' in install_info: install_info['installed_version'] = install_info['version'] del install_info['version'] role_info.update(install_info) if not context.CLIARGS['offline']: remote_data = None try: remote_data = self.api.lookup_role_by_name(role, False) except GalaxyError as e: if e.http_code == 400 and 'Bad Request' in e.message: # Role does not exist in Ansible Galaxy data = u"- the role %s was not found" % role break raise AnsibleError("Unable to find info about '%s': %s" % (role, e)) if remote_data: role_info.update(remote_data) else: data = u"- the role %s was not found" % role break elif context.CLIARGS['offline'] and not gr._exists: data = u"- the role %s was not found" % role break if gr.metadata: role_info.update(gr.metadata) req = RoleRequirement() role_spec = req.role_yaml_parse({'role': role}) if role_spec: role_info.update(role_spec) data += self._display_role_info(role_info) self.pager(data) @with_collection_artifacts_manager def execute_verify(self, artifacts_manager=None): """Compare checksums with the collection(s) found on the server and the installed copy. 
This does not verify dependencies.""" collections = context.CLIARGS['args'] search_paths = AnsibleCollectionConfig.collection_paths ignore_errors = context.CLIARGS['ignore_errors'] local_verify_only = context.CLIARGS['offline'] requirements_file = context.CLIARGS['requirements'] signatures = context.CLIARGS['signatures'] if signatures is not None: signatures = list(signatures) requirements = self._require_one_of_collections_requirements( collections, requirements_file, signatures=signatures, artifacts_manager=artifacts_manager, )['collections'] resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths] results = verify_collections( requirements, resolved_paths, self.api_servers, ignore_errors, local_verify_only=local_verify_only, artifacts_manager=artifacts_manager, ) if any(result for result in results if not result.success): return 1 return 0 @with_collection_artifacts_manager def execute_install(self, artifacts_manager=None): """ Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``). You can pass in a list (roles or collections) or use the file option listed below (these are mutually exclusive). If you pass in a list, it can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file. """ install_items = context.CLIARGS['args'] requirements_file = context.CLIARGS['requirements'] collection_path = None signatures = context.CLIARGS.get('signatures') if signatures is not None: signatures = list(signatures) if requirements_file: requirements_file = GalaxyCLI._resolve_path(requirements_file) two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \ "run 'ansible-galaxy {0} install -r' or to install both at the same time run " \ "'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file) # TODO: Would be nice to share the same behaviour with args and -r in collections and roles. collection_requirements = [] role_requirements = [] if context.CLIARGS['type'] == 'collection': collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path']) requirements = self._require_one_of_collections_requirements( install_items, requirements_file, signatures=signatures, artifacts_manager=artifacts_manager, ) collection_requirements = requirements['collections'] if requirements['roles']: display.vvv(two_type_warning.format('role')) else: if not install_items and requirements_file is None: raise AnsibleOptionsError("- you must specify a user/role name or a roles file") if requirements_file: if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')): raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension") galaxy_args = self._raw_args will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args requirements = self._parse_requirements_file( requirements_file, artifacts_manager=artifacts_manager, validate_signature_options=will_install_collections, ) role_requirements = requirements['roles'] # We can only install collections and roles at the same time if the type wasn't specified and the -p # argument was not used. If collections are present in the requirements then at least display a msg. 
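        # Typical invocations that reach execute_verify()/execute_install() above and below
        # (illustrative names and paths):
        #
        #     ansible-galaxy collection verify community.general
        #     ansible-galaxy collection install community.general -p ./collections
        #     ansible-galaxy role install my_namespace.my_role
        #     ansible-galaxy install -r requirements.yml      # roles and collections together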
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or '--roles-path' in galaxy_args): # We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user # was explicit about the type and shouldn't care that collections were skipped. display_func = display.warning if self._implicit_role else display.vvv display_func(two_type_warning.format('collection')) else: collection_path = self._get_default_collection_path() collection_requirements = requirements['collections'] else: # roles were specified directly, so we'll just go out grab them # (and their dependencies, unless the user doesn't want us to). for rname in context.CLIARGS['args']: role = RoleRequirement.role_yaml_parse(rname.strip()) role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role)) if not role_requirements and not collection_requirements: display.display("Skipping install, no requirements found") return if role_requirements: display.display("Starting galaxy role install process") self._execute_install_role(role_requirements) if collection_requirements: display.display("Starting galaxy collection install process") # Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in # the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above). self._execute_install_collection( collection_requirements, collection_path, artifacts_manager=artifacts_manager, ) def _execute_install_collection( self, requirements, path, artifacts_manager, ): force = context.CLIARGS['force'] ignore_errors = context.CLIARGS['ignore_errors'] no_deps = context.CLIARGS['no_deps'] force_with_deps = context.CLIARGS['force_with_deps'] try: disable_gpg_verify = context.CLIARGS['disable_gpg_verify'] except KeyError: if self._implicit_role: raise AnsibleError( 'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" ' 'instead of "ansible-galaxy install".' ) raise # If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS allow_pre_release = context.CLIARGS.get('allow_pre_release', False) upgrade = context.CLIARGS.get('upgrade', False) collections_path = C.COLLECTIONS_PATHS managed_paths = set(validate_collection_path(p) for p in C.COLLECTIONS_PATHS) read_req_paths = set(validate_collection_path(p) for p in AnsibleCollectionConfig.collection_paths) unexpected_path = C.GALAXY_COLLECTIONS_PATH_WARNING and not any(p.startswith(path) for p in managed_paths) if unexpected_path and any(p.startswith(path) for p in read_req_paths): display.warning( f"The specified collections path '{path}' appears to be part of the pip Ansible package. " "Managing these directly with ansible-galaxy could break the Ansible package. " "Install collections to a configured collections path, which will take precedence over " "collections found in the PYTHONPATH." ) elif unexpected_path: display.warning("The specified collections path '%s' is not part of the configured Ansible " "collections paths '%s'. The installed collection will not be picked up in an Ansible " "run, unless within a playbook-adjacent collections directory." 
% (to_text(path), to_text(":".join(collections_path)))) output_path = validate_collection_path(path) b_output_path = to_bytes(output_path, errors='surrogate_or_strict') if not os.path.exists(b_output_path): os.makedirs(b_output_path) install_collections( requirements, output_path, self.api_servers, ignore_errors, no_deps, force, force_with_deps, upgrade, allow_pre_release=allow_pre_release, artifacts_manager=artifacts_manager, disable_gpg_verify=disable_gpg_verify, offline=context.CLIARGS.get('offline', False), read_requirement_paths=read_req_paths, ) return 0 def _execute_install_role(self, requirements): role_file = context.CLIARGS['requirements'] no_deps = context.CLIARGS['no_deps'] force_deps = context.CLIARGS['force_with_deps'] force = context.CLIARGS['force'] or force_deps for role in requirements: # only process roles in roles files when names matches if given if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']: display.vvv('Skipping role %s' % role.name) continue display.vvv('Processing role %s ' % role.name) # query the galaxy API for the role data if role.install_info is not None: if role.install_info['version'] != role.version or force: if force: display.display('- changing role %s from %s to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) role.remove() else: display.warning('- %s (%s) is already installed - use --force to change version to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) continue else: if not force: display.display('- %s is already installed, skipping.' % str(role)) continue try: installed = role.install() except AnsibleError as e: display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e))) self.exit_without_ignore() continue # install dependencies, if we want them if not no_deps and installed: if not role.metadata: # NOTE: the meta file is also required for installing the role, not just dependencies display.warning("Meta file %s is empty. Skipping dependencies." % role.path) else: role_dependencies = role.metadata_dependencies + role.requirements for dep in role_dependencies: display.debug('Installing dep %s' % dep) dep_req = RoleRequirement() dep_info = dep_req.role_yaml_parse(dep) dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info) if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None: # we know we can skip this, as it's not going to # be found on galaxy.ansible.com continue if dep_role.install_info is None: if dep_role not in requirements: display.display('- adding dependency: %s' % to_text(dep_role)) requirements.append(dep_role) else: display.display('- dependency %s already pending installation.' % dep_role.name) else: if dep_role.install_info['version'] != dep_role.version: if force_deps: display.display('- changing dependent role %s from %s to %s' % (dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified")) dep_role.remove() requirements.append(dep_role) else: display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' % (to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version'])) else: if force_deps: requirements.append(dep_role) else: display.display('- dependency %s is already installed, skipping.' % dep_role.name) if not installed: display.warning("- %s was NOT installed successfully." 
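        # The dependency entries walked above come from each role's meta/main.yml; a minimal
        # example of that file (placeholder names):
        #
        #     dependencies:
        #       - role: my_namespace.common
        #       - src: https://github.com/example_org/example_role.git
        #         scm: git
        #         version: main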
% role.name) self.exit_without_ignore() return 0 def execute_remove(self): """ removes the list of roles passed as arguments from the local system. """ if not context.CLIARGS['args']: raise AnsibleOptionsError('- you must specify at least one role to remove.') for role_name in context.CLIARGS['args']: role = GalaxyRole(self.galaxy, self.api, role_name) try: if role.remove(): display.display('- successfully removed %s' % role_name) else: display.display('- %s is not installed, skipping.' % role_name) except Exception as e: raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e))) return 0 def execute_list(self): """ List installed collections or roles """ if context.CLIARGS['type'] == 'role': self.execute_list_role() elif context.CLIARGS['type'] == 'collection': self.execute_list_collection() def execute_list_role(self): """ List all roles installed on the local system or a specific role """ path_found = False role_found = False warnings = [] roles_search_paths = context.CLIARGS['roles_path'] role_name = context.CLIARGS['role'] for path in roles_search_paths: role_path = GalaxyCLI._resolve_path(path) if os.path.isdir(path): path_found = True else: warnings.append("- the configured path {0} does not exist.".format(path)) continue if role_name: # show the requested role, if it exists gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name)) if os.path.isdir(gr.path): role_found = True display.display('# %s' % os.path.dirname(gr.path)) _display_role(gr) break warnings.append("- the role %s was not found" % role_name) else: if not os.path.exists(role_path): warnings.append("- the configured path %s does not exist." % role_path) continue if not os.path.isdir(role_path): warnings.append("- the configured path %s, exists, but it is not a directory." % role_path) continue display.display('# %s' % role_path) path_files = os.listdir(role_path) for path_file in path_files: gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path) if gr.metadata: _display_role(gr) # Do not warn if the role was found in any of the search paths if role_found and role_name: warnings = [] for w in warnings: display.warning(w) if not path_found: raise AnsibleOptionsError( "- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']) ) return 0 @with_collection_artifacts_manager def execute_list_collection(self, artifacts_manager=None): """ List all collections installed on the local system :param artifacts_manager: Artifacts manager. 
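
        Typical invocations (illustrative):

            ansible-galaxy collection list
            ansible-galaxy collection list my_namespace.my_collection --format yaml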
""" if artifacts_manager is not None: artifacts_manager.require_build_metadata = False output_format = context.CLIARGS['output_format'] collection_name = context.CLIARGS['collection'] default_collections_path = set(C.COLLECTIONS_PATHS) collections_search_paths = ( set(context.CLIARGS['collections_path'] or []) | default_collections_path | set(AnsibleCollectionConfig.collection_paths) ) collections_in_paths = {} warnings = [] path_found = False collection_found = False namespace_filter = None collection_filter = None if collection_name: # list a specific collection validate_collection_name(collection_name) namespace_filter, collection_filter = collection_name.split('.') collections = list(find_existing_collections( list(collections_search_paths), artifacts_manager, namespace_filter=namespace_filter, collection_filter=collection_filter, dedupe=False )) seen = set() fqcn_width, version_width = _get_collection_widths(collections) for collection in sorted(collections, key=lambda c: c.src): collection_found = True collection_path = pathlib.Path(to_text(collection.src)).parent.parent.as_posix() if output_format in {'yaml', 'json'}: collections_in_paths.setdefault(collection_path, {}) collections_in_paths[collection_path][collection.fqcn] = {'version': collection.ver} else: if collection_path not in seen: _display_header( collection_path, 'Collection', 'Version', fqcn_width, version_width ) seen.add(collection_path) _display_collection(collection, fqcn_width, version_width) path_found = False for path in collections_search_paths: if not os.path.exists(path): if path in default_collections_path: # don't warn for missing default paths continue warnings.append("- the configured path {0} does not exist.".format(path)) elif os.path.exists(path) and not os.path.isdir(path): warnings.append("- the configured path {0}, exists, but it is not a directory.".format(path)) else: path_found = True # Do not warn if the specific collection was found in any of the search paths if collection_found and collection_name: warnings = [] for w in warnings: display.warning(w) if not collections and not path_found: raise AnsibleOptionsError( "- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']) ) if output_format == 'json': display.display(json.dumps(collections_in_paths)) elif output_format == 'yaml': display.display(yaml_dump(collections_in_paths)) return 0 def execute_publish(self): """ Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish. """ collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args']) wait = context.CLIARGS['wait'] timeout = context.CLIARGS['import_timeout'] publish_collection(collection_path, self.api, wait, timeout) def execute_search(self): ''' searches for roles on the Ansible Galaxy server''' page_size = 1000 search = None if context.CLIARGS['args']: search = '+'.join(context.CLIARGS['args']) if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']: raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.") response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'], tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size) if response['count'] == 0: display.warning("No roles match your search.") return 0 data = [u''] if response['count'] > page_size: data.append(u"Found %d roles matching your search. 
Showing first %s." % (response['count'], page_size)) else: data.append(u"Found %d roles matching your search:" % response['count']) max_len = [] for role in response['results']: max_len.append(len(role['username'] + '.' + role['name'])) name_len = max(max_len) format_str = u" %%-%ds %%s" % name_len data.append(u'') data.append(format_str % (u"Name", u"Description")) data.append(format_str % (u"----", u"-----------")) for role in response['results']: data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description'])) data = u'\n'.join(data) self.pager(data) return 0 def execute_import(self): """ used to import a role into Ansible Galaxy """ colors = { 'INFO': 'normal', 'WARNING': C.COLOR_WARN, 'ERROR': C.COLOR_ERROR, 'SUCCESS': C.COLOR_OK, 'FAILED': C.COLOR_ERROR, } github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict') github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict') rc = 0 if context.CLIARGS['check_status']: task = self.api.get_import_task(github_user=github_user, github_repo=github_repo) else: # Submit an import request task = self.api.create_import_task(github_user, github_repo, reference=context.CLIARGS['reference'], role_name=context.CLIARGS['role_name']) if len(task) > 1: # found multiple roles associated with github_user/github_repo display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo), color='yellow') display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED) for t in task: display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED) display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo), color=C.COLOR_CHANGED) return rc # found a single role as expected display.display("Successfully submitted import request %d" % task[0]['id']) if not context.CLIARGS['wait']: display.display("Role name: %s" % task[0]['summary_fields']['role']['name']) display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo'])) if context.CLIARGS['check_status'] or context.CLIARGS['wait']: # Get the status of the import msg_list = [] finished = False while not finished: task = self.api.get_import_task(task_id=task[0]['id']) for msg in task[0]['summary_fields']['task_messages']: if msg['id'] not in msg_list: display.display(msg['message_text'], color=colors[msg['message_type']]) msg_list.append(msg['id']) if (state := task[0]['state']) in ['SUCCESS', 'FAILED']: rc = ['SUCCESS', 'FAILED'].index(state) finished = True else: time.sleep(10) return rc def execute_setup(self): """ Setup an integration from Github or Travis for Ansible Galaxy roles""" if context.CLIARGS['setup_list']: # List existing integration secrets secrets = self.api.list_secrets() if len(secrets) == 0: # None found display.display("No integrations found.") return 0 display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK) display.display("---------- ---------- ----------", color=C.COLOR_OK) for secret in secrets: display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'], secret['github_repo']), color=C.COLOR_OK) return 0 if context.CLIARGS['remove_id']: # Remove a secret self.api.remove_secret(context.CLIARGS['remove_id']) display.display("Secret removed. 
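        # Typical invocations of the publish/search/import/setup commands above and below
        # (illustrative names; the key and token values are placeholders):
        #
        #     ansible-galaxy collection publish ./my_namespace-my_collection-1.0.0.tar.gz --api-key MY_KEY
        #     ansible-galaxy search web_server --author my_github_user
        #     ansible-galaxy role import my_github_user my_repo
        #     ansible-galaxy setup travis my_github_user my_repo MY_TRAVIS_TOKEN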
Integrations using this secret will not longer work.", color=C.COLOR_OK) return 0 source = context.CLIARGS['source'] github_user = context.CLIARGS['github_user'] github_repo = context.CLIARGS['github_repo'] secret = context.CLIARGS['secret'] resp = self.api.add_secret(source, github_user, github_repo, secret) display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo'])) return 0 def execute_delete(self): """ Delete a role from Ansible Galaxy. """ github_user = context.CLIARGS['github_user'] github_repo = context.CLIARGS['github_repo'] resp = self.api.delete_role(github_user, github_repo) if len(resp['deleted_roles']) > 1: display.display("Deleted the following roles:") display.display("ID User Name") display.display("------ --------------- ----------") for role in resp['deleted_roles']: display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name)) display.display(resp['status']) return 0 def main(args=None): GalaxyCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-inventory0000755000000000000000000004250714556006441017067 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2017, Brian Coca # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import sys import argparse from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text from ansible.utils.vars import combine_vars from ansible.utils.display import Display from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path display = Display() INTERNAL_VARS = frozenset(['ansible_diff_mode', 'ansible_config_file', 'ansible_facts', 'ansible_forks', 'ansible_inventory_sources', 'ansible_limit', 'ansible_playbook_python', 'ansible_run_tags', 'ansible_skip_tags', 'ansible_verbosity', 'ansible_version', 'inventory_dir', 'inventory_file', 'inventory_hostname', 'inventory_hostname_short', 'groups', 'group_names', 'omit', 'playbook_dir', ]) class InventoryCLI(CLI): ''' used to display or dump the configured inventory as Ansible sees it ''' name = 'ansible-inventory' ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list', 'group': 'The name of a group in the inventory, relevant when using --graph', } def __init__(self, args): super(InventoryCLI, self).__init__(args) self.vm = None self.loader = None self.inventory = None def init_parser(self): super(InventoryCLI, self).init_parser( usage='usage: %prog [options] [host|group]', desc='Show Ansible inventory information, by default it uses the inventory script JSON format') opt_help.add_inventory_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_runtask_options(self.parser) # remove unused default options self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument) self.parser.add_argument('args', metavar='host|group', nargs='?') # Actions action_group = self.parser.add_argument_group("Actions", "One of following must be 
used on invocation, ONLY ONE!") action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script') action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script. It will ignore limit') action_group.add_argument("--graph", action="store_true", default=False, dest='graph', help='create inventory graph, if supplying pattern it must be a valid group name. It will ignore limit') self.parser.add_argument_group(action_group) # graph self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml', help='Use YAML format instead of default JSON, ignored for --graph') self.parser.add_argument('--toml', action='store_true', default=False, dest='toml', help='Use TOML format instead of default JSON, ignored for --graph') self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars', help='Add vars to graph display, ignored unless used with --graph') # list self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export', help="When doing an --list, represent in a way that is optimized for export," "not as an accurate representation of how Ansible has processed it") self.parser.add_argument('--output', default=None, dest='output_file', help="When doing --list, send the inventory to a file instead of to the screen") # self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins', # help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/") def post_process_args(self, options): options = super(InventoryCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options) # there can be only one! and, at least, one! used = 0 for opt in (options.list, options.host, options.graph): if opt: used += 1 if used == 0: raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.") elif used > 1: raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.") # set host pattern to default if not supplied if options.args: options.pattern = options.args else: options.pattern = 'all' return options def run(self): super(InventoryCLI, self).run() # Initialize needed objects self.loader, self.inventory, self.vm = self._play_prereqs() results = None if context.CLIARGS['host']: hosts = self.inventory.get_hosts(context.CLIARGS['host']) if len(hosts) != 1: raise AnsibleOptionsError("You must pass a single valid host to --host parameter") myvars = self._get_host_variables(host=hosts[0]) # FIXME: should we template first? results = self.dump(myvars) else: if context.CLIARGS['subset']: # not doing single host, set limit in general if given self.inventory.subset(context.CLIARGS['subset']) if context.CLIARGS['graph']: results = self.inventory_graph() elif context.CLIARGS['list']: top = self._get_group('all') if context.CLIARGS['yaml']: results = self.yaml_inventory(top) elif context.CLIARGS['toml']: results = self.toml_inventory(top) else: results = self.json_inventory(top) results = self.dump(results) if results: outfile = context.CLIARGS['output_file'] if outfile is None: # FIXME: pager? 
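        # Typical invocations of the CLI defined above (illustrative inventory and host names):
        #
        #     ansible-inventory -i inventory.yml --list
        #     ansible-inventory -i inventory.yml --host web01.example.com
        #     ansible-inventory -i inventory.yml --graph --vars
        #     ansible-inventory -i inventory.yml --list --yaml --output hosts.yml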
display.display(results) else: try: with open(to_bytes(outfile), 'wb') as f: f.write(to_bytes(results)) except (OSError, IOError) as e: raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e))) sys.exit(0) sys.exit(1) @staticmethod def dump(stuff): if context.CLIARGS['yaml']: import yaml from ansible.parsing.yaml.dumper import AnsibleDumper results = to_text(yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False, allow_unicode=True)) elif context.CLIARGS['toml']: from ansible.plugins.inventory.toml import toml_dumps try: results = toml_dumps(stuff) except TypeError as e: raise AnsibleError( 'The source inventory contains a value that cannot be represented in TOML: %s' % e ) except KeyError as e: raise AnsibleError( 'The source inventory contains a non-string key (%s) which cannot be represented in TOML. ' 'The specified key will need to be converted to a string. Be aware that if your playbooks ' 'expect this key to be non-string, your playbooks will need to be modified to support this ' 'change.' % e.args[0] ) else: import json from ansible.parsing.ajson import AnsibleJSONEncoder try: results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True, ensure_ascii=False) except TypeError as e: results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=False, indent=4, preprocess_unsafe=True, ensure_ascii=False) display.warning("Could not sort JSON output due to issues while sorting keys: %s" % to_native(e)) return results def _get_group_variables(self, group): # get info from inventory source res = group.get_vars() # Always load vars plugins res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all')) if context.CLIARGS['basedir']: res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all')) if group.priority != 1: res['ansible_group_priority'] = group.priority return self._remove_internal(res) def _get_host_variables(self, host): if context.CLIARGS['export']: # only get vars defined directly host hostvars = host.get_vars() # Always load vars plugins hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all')) if context.CLIARGS['basedir']: hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all')) else: # get all vars flattened by host, but skip magic hostvars hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all') return self._remove_internal(hostvars) def _get_group(self, gname): group = self.inventory.groups.get(gname) return group @staticmethod def _remove_internal(dump): for internal in INTERNAL_VARS: if internal in dump: del dump[internal] return dump @staticmethod def _remove_empty_keys(dump): # remove empty keys for x in ('hosts', 'vars', 'children'): if x in dump and not dump[x]: del dump[x] @staticmethod def _show_vars(dump, depth): result = [] for (name, val) in sorted(dump.items()): result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth)) return result @staticmethod def _graph_name(name, depth=0): if depth: name = " |" * (depth) + "--%s" % name return name def _graph_group(self, group, depth=0): result = [self._graph_name('@%s:' % group.name, depth)] depth = depth + 1 for kid in group.child_groups: result.extend(self._graph_group(kid, depth)) if group.name != 'all': for host in group.hosts: result.append(self._graph_name(host.name, depth)) if 
context.CLIARGS['show_vars']: result.extend(self._show_vars(self._get_host_variables(host), depth + 1)) if context.CLIARGS['show_vars']: result.extend(self._show_vars(self._get_group_variables(group), depth)) return result def inventory_graph(self): start_at = self._get_group(context.CLIARGS['pattern']) if start_at: return '\n'.join(self._graph_group(start_at)) else: raise AnsibleOptionsError("Pattern must be valid group name when using --graph") def json_inventory(self, top): seen_groups = set() def format_group(group, available_hosts): results = {} results[group.name] = {} if group.name != 'all': results[group.name]['hosts'] = [h.name for h in group.hosts if h.name in available_hosts] results[group.name]['children'] = [] for subgroup in group.child_groups: results[group.name]['children'].append(subgroup.name) if subgroup.name not in seen_groups: results.update(format_group(subgroup, available_hosts)) seen_groups.add(subgroup.name) if context.CLIARGS['export']: results[group.name]['vars'] = self._get_group_variables(group) self._remove_empty_keys(results[group.name]) # remove empty groups if not results[group.name]: del results[group.name] return results hosts = self.inventory.get_hosts(top.name) results = format_group(top, frozenset(h.name for h in hosts)) # populate meta results['_meta'] = {'hostvars': {}} for host in hosts: hvars = self._get_host_variables(host) if hvars: results['_meta']['hostvars'][host.name] = hvars return results def yaml_inventory(self, top): seen_hosts = set() seen_groups = set() def format_group(group, available_hosts): results = {} # initialize group + vars results[group.name] = {} # subgroups results[group.name]['children'] = {} for subgroup in group.child_groups: if subgroup.name != 'all': if subgroup.name in seen_groups: results[group.name]['children'].update({subgroup.name: {}}) else: results[group.name]['children'].update(format_group(subgroup, available_hosts)) seen_groups.add(subgroup.name) # hosts for group results[group.name]['hosts'] = {} if group.name != 'all': for h in group.hosts: if h.name not in available_hosts: continue # observe limit myvars = {} if h.name not in seen_hosts: # avoid defining host vars more than once seen_hosts.add(h.name) myvars = self._get_host_variables(host=h) results[group.name]['hosts'][h.name] = myvars if context.CLIARGS['export']: gvars = self._get_group_variables(group) if gvars: results[group.name]['vars'] = gvars self._remove_empty_keys(results[group.name]) # remove empty groups if not results[group.name]: del results[group.name] return results available_hosts = frozenset(h.name for h in self.inventory.get_hosts(top.name)) return format_group(top, available_hosts) def toml_inventory(self, top): seen_hosts = set() seen_hosts = set() has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped')) def format_group(group, available_hosts): results = {} results[group.name] = {} results[group.name]['children'] = [] for subgroup in group.child_groups: if subgroup.name == 'ungrouped' and not has_ungrouped: continue if group.name != 'all': results[group.name]['children'].append(subgroup.name) results.update(format_group(subgroup, available_hosts)) if group.name != 'all': for host in group.hosts: if host.name not in available_hosts: continue if host.name not in seen_hosts: seen_hosts.add(host.name) host_vars = self._get_host_variables(host=host) else: host_vars = {} try: results[group.name]['hosts'][host.name] = host_vars except KeyError: results[group.name]['hosts'] = {host.name: host_vars} if 
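        # Example of the structure json_inventory() above emits for a tiny inventory
        # (hypothetical host and group names):
        #
        #     {
        #         "_meta": {"hostvars": {"web01": {"http_port": 80}}},
        #         "all": {"children": ["ungrouped", "webservers"]},
        #         "webservers": {"hosts": ["web01"]}
        #     }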
context.CLIARGS['export']:
                    results[group.name]['vars'] = self._get_group_variables(group)

                self._remove_empty_keys(results[group.name])
                # remove empty groups
                if not results[group.name]:
                    del results[group.name]

                return results

            available_hosts = frozenset(h.name for h in self.inventory.get_hosts(top.name))
            results = format_group(top, available_hosts)

            return results


def main(args=None):
    InventoryCLI.cli_executor(args)


if __name__ == '__main__':
    main()

ansible-core-2.16.3/bin/ansible-playbook0000755000000000000000000002524614556006441016653 0ustar00rootroot#!/usr/bin/env python
# (c) 2012, Michael DeHaan
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI

import os
import stat

from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.module_utils.common.text.converters import to_bytes
from ansible.playbook.block import Block
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.display import Display

display = Display()


class PlaybookCLI(CLI):
    ''' the tool to run *Ansible playbooks*, which are a configuration and multinode deployment
        system. See the project home page (https://docs.ansible.com) for more information.
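
        Typical invocations (illustrative playbook and inventory names):

            ansible-playbook -i inventory.yml site.yml
            ansible-playbook site.yml --list-tasks
            ansible-playbook site.yml --start-at-task "configure firewall" --step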
''' name = 'ansible-playbook' def init_parser(self): # create parser for CLI options super(PlaybookCLI, self).init_parser( usage="%prog [options] playbook.yml [playbook2 ...]", desc="Runs Ansible playbooks, executing the defined tasks on the targeted hosts.") opt_help.add_connect_options(self.parser) opt_help.add_meta_options(self.parser) opt_help.add_runas_options(self.parser) opt_help.add_subset_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) # ansible playbook specific opts self.parser.add_argument('--syntax-check', dest='syntax', action='store_true', help="perform a syntax check on the playbook, but do not execute it") self.parser.add_argument('--list-tasks', dest='listtasks', action='store_true', help="list all tasks that would be executed") self.parser.add_argument('--list-tags', dest='listtags', action='store_true', help="list all available tags") self.parser.add_argument('--step', dest='step', action='store_true', help="one-step-at-a-time: confirm each task before running") self.parser.add_argument('--start-at-task', dest='start_at_task', help="start the playbook at the task matching this name") self.parser.add_argument('args', help='Playbook(s)', metavar='playbook', nargs='+') def post_process_args(self, options): # for listing, we need to know if user had tag input # capture here as parent function sets defaults for tags havetags = bool(options.tags or options.skip_tags) options = super(PlaybookCLI, self).post_process_args(options) if options.listtags: # default to all tags (including never), when listing tags # unless user specified tags if not havetags: options.tags = ['never', 'all'] display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def run(self): super(PlaybookCLI, self).run() # Note: slightly wrong, this is written so that implicit localhost # manages passwords sshpass = None becomepass = None passwords = {} # initial error check, to make sure all specified playbooks are accessible # before we start running anything through the playbook executor # also prep plugin paths b_playbook_dirs = [] for playbook in context.CLIARGS['args']: # resolve if it is collection playbook with FQCN notation, if not, leaves unchanged resource = _get_collection_playbook_path(playbook) if resource is not None: playbook_collection = resource[2] else: # not an FQCN so must be a file if not os.path.exists(playbook): raise AnsibleError("the playbook: %s could not be found" % playbook) if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)): raise AnsibleError("the playbook: %s does not appear to be a file" % playbook) # check if playbook is from collection (path can be passed directly) playbook_collection = _get_collection_name_from_path(playbook) # don't add collection playbooks to adjacency search path if not playbook_collection: # setup dirs to enable loading plugins from all playbooks in case they add callbacks/inventory/etc b_playbook_dir = os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict'))) add_all_plugin_dirs(b_playbook_dir) b_playbook_dirs.append(b_playbook_dir) if b_playbook_dirs: # allow collections adjacent to these playbooks # we use list copy to avoid opening up 'adjacency' in the previous loop AnsibleCollectionConfig.playbook_paths = b_playbook_dirs # 
don't deal with privilege escalation or passwords when we don't need to if not (context.CLIARGS['listhosts'] or context.CLIARGS['listtasks'] or context.CLIARGS['listtags'] or context.CLIARGS['syntax']): (sshpass, becomepass) = self.ask_passwords() passwords = {'conn_pass': sshpass, 'become_pass': becomepass} # create base objects loader, inventory, variable_manager = self._play_prereqs() # (which is not returned in list_hosts()) is taken into account for # warning if inventory is empty. But it can't be taken into account for # checking if limit doesn't match any hosts. Instead we don't worry about # limit if only implicit localhost was in inventory to start with. # # Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts()) CLI.get_host_list(inventory, context.CLIARGS['subset']) # flush fact cache if requested if context.CLIARGS['flush_cache']: self._flush_cache(inventory, variable_manager) # create the playbook executor, which manages running the plays via a task queue manager pbex = PlaybookExecutor(playbooks=context.CLIARGS['args'], inventory=inventory, variable_manager=variable_manager, loader=loader, passwords=passwords) results = pbex.run() if isinstance(results, list): for p in results: display.display('\nplaybook: %s' % p['playbook']) for idx, play in enumerate(p['plays']): if play._included_path is not None: loader.set_basedir(play._included_path) else: pb_dir = os.path.realpath(os.path.dirname(p['playbook'])) loader.set_basedir(pb_dir) # show host list if we were able to template into a list try: host_list = ','.join(play.hosts) except TypeError: host_list = '' msg = "\n play #%d (%s): %s" % (idx + 1, host_list, play.name) mytags = set(play.tags) msg += '\tTAGS: [%s]' % (','.join(mytags)) if context.CLIARGS['listhosts']: playhosts = set(inventory.get_hosts(play.hosts)) msg += "\n pattern: %s\n hosts (%d):" % (play.hosts, len(playhosts)) for host in playhosts: msg += "\n %s" % host display.display(msg) all_tags = set() if context.CLIARGS['listtags'] or context.CLIARGS['listtasks']: taskmsg = '' if context.CLIARGS['listtasks']: taskmsg = ' tasks:\n' def _process_block(b): taskmsg = '' for task in b.block: if isinstance(task, Block): taskmsg += _process_block(task) else: if task.action in C._ACTION_META and task.implicit: continue all_tags.update(task.tags) if context.CLIARGS['listtasks']: cur_tags = list(mytags.union(set(task.tags))) cur_tags.sort() if task.name: taskmsg += " %s" % task.get_name() else: taskmsg += " %s" % task.action taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags) return taskmsg all_vars = variable_manager.get_vars(play=play) for block in play.compile(): block = block.filter_tagged_tasks(all_vars) if not block.has_tasks(): continue taskmsg += _process_block(block) if context.CLIARGS['listtags']: cur_tags = list(mytags.union(all_tags)) cur_tags.sort() taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags) display.display(taskmsg) return 0 else: return results @staticmethod def _flush_cache(inventory, variable_manager): for host in inventory.list_hosts(): hostname = host.get_name() variable_manager.clear_facts(hostname) def main(args=None): PlaybookCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-pull0000755000000000000000000004167014556006441016006 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2012, Michael DeHaan # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from 
__future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import datetime import os import platform import random import shlex import shutil import socket import sys import time from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleOptionsError from ansible.module_utils.common.text.converters import to_native, to_text from ansible.plugins.loader import module_loader from ansible.utils.cmd_functions import run_cmd from ansible.utils.display import Display display = Display() class PullCLI(CLI): ''' Used to pull a remote copy of ansible on each managed node, each set to run via cron and update playbook source via a source repository. This inverts the default *push* architecture of ansible into a *pull* architecture, which has near-limitless scaling potential. None of the CLI tools are designed to run concurrently with themselves; you should use an external scheduler and/or locking to ensure there are no clashing operations. The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull. This is useful both for extreme scale-out as well as periodic remediation. Usage of the 'fetch' module to retrieve logs from ansible-pull runs would be an excellent way to gather and analyze remote logs from ansible-pull. ''' name = 'ansible-pull' DEFAULT_REPO_TYPE = 'git' DEFAULT_PLAYBOOK = 'local.yml' REPO_CHOICES = ('git', 'subversion', 'hg', 'bzr') PLAYBOOK_ERRORS = { 1: 'File does not exist', 2: 'File is not readable', } ARGUMENTS = {'playbook.yml': 'The name of one of the YAML format files to run as an Ansible playbook. ' 'This can be a relative path within the checkout. By default, Ansible will ' "look for a playbook based on the host's fully-qualified domain name, " "then on the host's short hostname, and finally a playbook named *local.yml*.", } SKIP_INVENTORY_DEFAULTS = True @staticmethod def _get_inv_cli(): inv_opts = '' if context.CLIARGS.get('inventory', False): for inv in context.CLIARGS['inventory']: if isinstance(inv, list): inv_opts += " -i '%s' " % ','.join(inv) elif ',' in inv or os.path.exists(inv): inv_opts += ' -i %s ' % inv return inv_opts def init_parser(self): ''' create an options parser for bin/ansible-pull ''' super(PullCLI, self).init_parser( usage='%prog -U <repository> [options] [<playbook.yml>]', desc="pulls playbooks from a VCS repo and executes them on target host") # Do not add check_options as there's a conflict with --checkout/-C opt_help.add_connect_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_subset_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_runas_prompt_options(self.parser) self.parser.add_argument('args', help='Playbook(s)', metavar='playbook.yml', nargs='*') # options unique to pull self.parser.add_argument('--purge', default=False, action='store_true', help='purge checkout after playbook run') self.parser.add_argument('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', help='only run the playbook if the repository has been updated') self.parser.add_argument('-s', '--sleep', dest='sleep', default=None, help='sleep for random interval (between 0 and n number of seconds) before starting.
' 'This is a useful way to disperse git requests') self.parser.add_argument('-f', '--force', dest='force', default=False, action='store_true', help='run the playbook even if the repository could not be updated') self.parser.add_argument('-d', '--directory', dest='dest', default=None, type=opt_help.unfrack_path(), help='path to the directory into which Ansible will check out the repository.') self.parser.add_argument('-U', '--url', dest='url', default=None, help='URL of the playbook repository') self.parser.add_argument('--full', dest='fullclone', action='store_true', help='Do a full clone, instead of a shallow one.') self.parser.add_argument('-C', '--checkout', dest='checkout', help='branch/tag/commit to check out. Defaults to behavior of repository module.') self.parser.add_argument('--accept-host-key', default=False, dest='accept_host_key', action='store_true', help='adds the host key for the repo URL if not already added') self.parser.add_argument('-m', '--module-name', dest='module_name', default=self.DEFAULT_REPO_TYPE, help='Repository module name, which ansible will use to check out the repo. Choices are %s. Default is %s.' % (self.REPO_CHOICES, self.DEFAULT_REPO_TYPE)) self.parser.add_argument('--verify-commit', dest='verify', default=False, action='store_true', help='verify GPG signature of the checked out commit; if verification fails, abort running the playbook. ' 'This needs the corresponding VCS module to support such an operation') self.parser.add_argument('--clean', dest='clean', default=False, action='store_true', help='modified files in the working repository will be discarded') self.parser.add_argument('--track-subs', dest='tracksubs', default=False, action='store_true', help='submodules will track the latest changes. This is equivalent to specifying the --remote flag to git submodule update') # add a subset of the check_opts flag group manually, as the full set's # shortcodes conflict with above --checkout/-C self.parser.add_argument("--check", default=False, dest='check', action='store_true', help="don't make any changes; instead, try to predict some of the changes that may occur") self.parser.add_argument("--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true', help="when changing (small) files and templates, show the differences in those files; works great with --check") def post_process_args(self, options): options = super(PullCLI, self).post_process_args(options) if not options.dest: hostname = socket.getfqdn() # use a hostname-dependent directory, in case $HOME is on NFS options.dest = os.path.join(C.ANSIBLE_HOME, 'pull', hostname) if os.path.exists(options.dest) and not os.path.isdir(options.dest): raise AnsibleOptionsError("%s is not a valid or accessible directory." % options.dest) if options.sleep: try: secs = random.randint(0, int(options.sleep)) options.sleep = secs except ValueError: raise AnsibleOptionsError("%s is not a number."
% options.sleep) if not options.url: raise AnsibleOptionsError("URL for repository not specified; use -h for help") if options.module_name not in self.REPO_CHOICES: raise AnsibleOptionsError("Unsupported repo module %s, choices are %s" % (options.module_name, ','.join(self.REPO_CHOICES))) display.verbosity = options.verbosity self.validate_conflicts(options) return options def run(self): ''' check out the playbook repository, then run the selected playbook against localhost ''' super(PullCLI, self).run() # log command line now = datetime.datetime.now() display.display(now.strftime("Starting Ansible Pull at %F %T")) display.display(' '.join(sys.argv)) # Build the checkout command: an ad-hoc ansible run of the VCS module node = platform.node() host = socket.getfqdn() hostnames = ','.join(set([host, node, host.split('.')[0], node.split('.')[0]])) if hostnames: limit_opts = 'localhost,%s,127.0.0.1' % hostnames else: limit_opts = 'localhost,127.0.0.1' base_opts = '-c local ' if context.CLIARGS['verbosity'] > 0: base_opts += ' -%s' % ''.join(["v" for x in range(0, context.CLIARGS['verbosity'])]) # Attempt to use the inventory passed in as an argument # It might not yet have been downloaded so use localhost as default inv_opts = self._get_inv_cli() if not inv_opts: inv_opts = " -i localhost, " # avoid interpreter discovery since we already know which interpreter to use on localhost inv_opts += '-e %s ' % shlex.quote('ansible_python_interpreter=%s' % sys.executable) # SCM specific options if context.CLIARGS['module_name'] == 'git': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] if context.CLIARGS['accept_host_key']: repo_opts += ' accept_hostkey=yes' if context.CLIARGS['private_key_file']: repo_opts += ' key_file=%s' % context.CLIARGS['private_key_file'] if context.CLIARGS['verify']: repo_opts += ' verify_commit=yes' if context.CLIARGS['tracksubs']: repo_opts += ' track_submodules=yes' if not context.CLIARGS['fullclone']: repo_opts += ' depth=1' elif context.CLIARGS['module_name'] == 'subversion': repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] if not context.CLIARGS['fullclone']: repo_opts += ' export=yes' elif context.CLIARGS['module_name'] == 'hg': repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] elif context.CLIARGS['module_name'] == 'bzr': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] else: raise AnsibleOptionsError('Unsupported (%s) SCM module for pull, choices are: %s' % (context.CLIARGS['module_name'], ','.join(self.REPO_CHOICES))) # options common to all supported SCMs if context.CLIARGS['clean']: repo_opts += ' force=yes' path = module_loader.find_plugin(context.CLIARGS['module_name']) if path is None: raise AnsibleOptionsError(("module '%s' not found.\n" % context.CLIARGS['module_name'])) bin_path = os.path.dirname(os.path.abspath(sys.argv[0])) # hardcode local and inventory/host as this is just meant to fetch the repo cmd = '%s/ansible %s %s -m %s -a "%s" all -l "%s"' % (bin_path, inv_opts, base_opts, context.CLIARGS['module_name'], repo_opts, limit_opts) for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) # Nap?
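# For illustration only (not upstream code): at this point 'cmd' holds the
# ad-hoc checkout command assembled above; with the git defaults it looks
# roughly like
#
#   <bin_path>/ansible -i localhost, -e 'ansible_python_interpreter=...' \
#       -c local -m git -a "name=<url> dest=<dest> depth=1" \
#       all -l "localhost,<fqdn>,<short hostname>,127.0.0.1"
#
# The randomized sleep below ("splay") spreads cron-driven ansible-pull runs
# across the --sleep window so a large fleet does not hit the VCS server at
# the same moment; post_process_args already turned --sleep into a concrete
# number of seconds via random.randint(0, sleep).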
if context.CLIARGS['sleep']: display.display("Sleeping for %d seconds..." % context.CLIARGS['sleep']) time.sleep(context.CLIARGS['sleep']) # RUN the Checkout command display.debug("running ansible with VCS module to checkout repo") display.vvvv('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if rc != 0: if context.CLIARGS['force']: display.warning("Unable to update repository. Continuing with (forced) run of playbook.") else: return rc elif context.CLIARGS['ifchanged'] and b'"changed": true' not in b_out: display.display("Repository has not changed, quitting.") return 0 playbook = self.select_playbook(context.CLIARGS['dest']) if playbook is None: raise AnsibleOptionsError("Could not find a playbook to run.") # Build playbook command cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook) if context.CLIARGS['vault_password_files']: for vault_password_file in context.CLIARGS['vault_password_files']: cmd += " --vault-password-file=%s" % vault_password_file if context.CLIARGS['vault_ids']: for vault_id in context.CLIARGS['vault_ids']: cmd += " --vault-id=%s" % vault_id if context.CLIARGS['become_password_file']: cmd += " --become-password-file=%s" % context.CLIARGS['become_password_file'] if context.CLIARGS['connection_password_file']: cmd += " --connection-password-file=%s" % context.CLIARGS['connection_password_file'] for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) if context.CLIARGS['become_ask_pass']: cmd += ' --ask-become-pass' if context.CLIARGS['skip_tags']: cmd += ' --skip-tags "%s"' % to_native(u','.join(context.CLIARGS['skip_tags'])) if context.CLIARGS['tags']: cmd += ' -t "%s"' % to_native(u','.join(context.CLIARGS['tags'])) if context.CLIARGS['subset']: cmd += ' -l "%s"' % context.CLIARGS['subset'] else: cmd += ' -l "%s"' % limit_opts if context.CLIARGS['check']: cmd += ' -C' if context.CLIARGS['diff']: cmd += ' -D' os.chdir(context.CLIARGS['dest']) # redo inventory options as new files might exist now inv_opts = self._get_inv_cli() if inv_opts: cmd += inv_opts # RUN THE PLAYBOOK COMMAND display.debug("running ansible-playbook to do actual work") display.debug('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if context.CLIARGS['purge']: os.chdir('/') try: display.debug("removing: %s" % context.CLIARGS['dest']) shutil.rmtree(context.CLIARGS['dest']) except Exception as e: display.error(u"Failed to remove %s: %s" % (context.CLIARGS['dest'], to_text(e))) return rc @staticmethod def try_playbook(path): if not os.path.exists(path): return 1 if not os.access(path, os.R_OK): return 2 return 0 @staticmethod def select_playbook(path): playbook = None errors = [] if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not None: playbooks = [] for book in context.CLIARGS['args']: book_path = os.path.join(path, book) rc = PullCLI.try_playbook(book_path) if rc != 0: errors.append("%s: %s" % (book_path, PullCLI.PLAYBOOK_ERRORS[rc])) continue playbooks.append(book_path) if 0 < len(errors): display.warning("\n".join(errors)) elif len(playbooks) == len(context.CLIARGS['args']): playbook = " ".join(playbooks) return playbook else: fqdn = socket.getfqdn() hostpb = os.path.join(path, fqdn + '.yml') shorthostpb = os.path.join(path, fqdn.split('.')[0] + '.yml') localpb = os.path.join(path, PullCLI.DEFAULT_PLAYBOOK) for pb in [hostpb, shorthostpb, localpb]: rc = PullCLI.try_playbook(pb) if rc == 0: playbook = pb break else: errors.append("%s: %s" % (pb, PullCLI.PLAYBOOK_ERRORS[rc])) if playbook is None: 
display.warning("\n".join(errors)) return playbook def main(args=None): PullCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-test0000755000000000000000000000324714556006441016007 0ustar00rootroot#!/usr/bin/env python # PYTHON_ARGCOMPLETE_OK """Command line entry point for ansible-test.""" # NOTE: This file resides in the _util/target directory to ensure compatibility with all supported Python versions. from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import sys def main(args=None): """Main program entry point.""" ansible_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) source_root = os.path.join(ansible_root, 'test', 'lib') if os.path.exists(os.path.join(source_root, 'ansible_test', '_internal', '__init__.py')): # running from source, use that version of ansible-test instead of any version that may already be installed sys.path.insert(0, source_root) # noinspection PyProtectedMember from ansible_test._util.target.common.constants import CONTROLLER_PYTHON_VERSIONS if version_to_str(sys.version_info[:2]) not in CONTROLLER_PYTHON_VERSIONS: raise SystemExit('This version of ansible-test cannot be executed with Python version %s. Supported Python versions are: %s' % ( version_to_str(sys.version_info[:3]), ', '.join(CONTROLLER_PYTHON_VERSIONS))) if any(not os.get_blocking(handle.fileno()) for handle in (sys.stdin, sys.stdout, sys.stderr)): raise SystemExit('Standard input, output and error file handles must be blocking to run ansible-test.') # noinspection PyProtectedMember from ansible_test._internal import main as cli_main cli_main(args) def version_to_str(version): """Return a version string from a version tuple.""" return '.'.join(str(n) for n in version) if __name__ == '__main__': main() ansible-core-2.16.3/bin/ansible-vault0000755000000000000000000005473714556006441016175 0ustar00rootroot#!/usr/bin/env python # (c) 2014, James Tanner # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import os import sys from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleOptionsError from ansible.module_utils.common.text.converters import to_text, to_bytes from ansible.parsing.dataloader import DataLoader from ansible.parsing.vault import VaultEditor, VaultLib, match_encrypt_secret from ansible.utils.display import Display display = Display() class VaultCLI(CLI): ''' can encrypt any structured data file used by Ansible. This can include *group_vars/* or *host_vars/* inventory variables, variables loaded by *include_vars* or *vars_files*, or variable files passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*. Role variables and defaults are also included! Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you'd like to not expose what variables you are using, you can keep an individual task file entirely encrypted. 
''' name = 'ansible-vault' FROM_STDIN = "stdin" FROM_ARGS = "the command line args" FROM_PROMPT = "the interactive prompt" def __init__(self, args): self.b_vault_pass = None self.b_new_vault_pass = None self.encrypt_string_read_stdin = False self.encrypt_secret = None self.encrypt_vault_id = None self.new_encrypt_secret = None self.new_encrypt_vault_id = None super(VaultCLI, self).__init__(args) def init_parser(self): super(VaultCLI, self).init_parser( desc="encryption/decryption utility for Ansible data files", epilog="\nSee '%s --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0]) ) common = opt_help.ArgumentParser(add_help=False) opt_help.add_vault_options(common) opt_help.add_verbosity_options(common) subparsers = self.parser.add_subparsers(dest='action') subparsers.required = True output = opt_help.ArgumentParser(add_help=False) output.add_argument('--output', default=None, dest='output_file', help='output file name for encrypt or decrypt; use - for stdout', type=opt_help.unfrack_path()) # For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting vault_id = opt_help.ArgumentParser(add_help=False) vault_id.add_argument('--encrypt-vault-id', default=[], dest='encrypt_vault_id', action='store', type=str, help='the vault id used to encrypt (required if more than one vault-id is provided)') create_parser = subparsers.add_parser('create', help='Create new vault encrypted file', parents=[vault_id, common]) create_parser.set_defaults(func=self.execute_create) create_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') create_parser.add_argument('--skip-tty-check', default=False, help='allows editor to be opened when no tty attached', dest='skip_tty_check', action='store_true') decrypt_parser = subparsers.add_parser('decrypt', help='Decrypt vault encrypted file', parents=[output, common]) decrypt_parser.set_defaults(func=self.execute_decrypt) decrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') edit_parser = subparsers.add_parser('edit', help='Edit vault encrypted file', parents=[vault_id, common]) edit_parser.set_defaults(func=self.execute_edit) edit_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') view_parser = subparsers.add_parser('view', help='View vault encrypted file', parents=[common]) view_parser.set_defaults(func=self.execute_view) view_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') encrypt_parser = subparsers.add_parser('encrypt', help='Encrypt YAML file', parents=[common, output, vault_id]) encrypt_parser.set_defaults(func=self.execute_encrypt) encrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') enc_str_parser = subparsers.add_parser('encrypt_string', help='Encrypt a string', parents=[common, output, vault_id]) enc_str_parser.set_defaults(func=self.execute_encrypt_string) enc_str_parser.add_argument('args', help='String to encrypt', metavar='string_to_encrypt', nargs='*') enc_str_parser.add_argument('-p', '--prompt', dest='encrypt_string_prompt', action='store_true', help="Prompt for the string to encrypt") enc_str_parser.add_argument('--show-input', dest='show_string_input', default=False, action='store_true', help='Do not hide input when prompted for the string to encrypt') enc_str_parser.add_argument('-n', '--name', dest='encrypt_string_names', action='append', help="Specify the variable name") enc_str_parser.add_argument('--stdin-name', 
dest='encrypt_string_stdin_name', default=None, help="Specify the variable name for stdin") rekey_parser = subparsers.add_parser('rekey', help='Re-key a vault encrypted file', parents=[common, vault_id]) rekey_parser.set_defaults(func=self.execute_rekey) rekey_new_group = rekey_parser.add_mutually_exclusive_group() rekey_new_group.add_argument('--new-vault-password-file', default=None, dest='new_vault_password_file', help="new vault password file for rekey", type=opt_help.unfrack_path()) rekey_new_group.add_argument('--new-vault-id', default=None, dest='new_vault_id', type=str, help='the new vault identity to use for rekey') rekey_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') def post_process_args(self, options): options = super(VaultCLI, self).post_process_args(options) display.verbosity = options.verbosity if options.vault_ids: for vault_id in options.vault_ids: if u';' in vault_id: raise AnsibleOptionsError("'%s' is not a valid vault id. The character ';' is not allowed in vault ids" % vault_id) if getattr(options, 'output_file', None) and len(options.args) > 1: raise AnsibleOptionsError("At most one input file may be used with the --output option") if options.action == 'encrypt_string': if '-' in options.args or not options.args or options.encrypt_string_stdin_name: self.encrypt_string_read_stdin = True # TODO: prompting from stdin and reading from stdin seem mutually exclusive, but verify that. if options.encrypt_string_prompt and self.encrypt_string_read_stdin: raise AnsibleOptionsError('The --prompt option is not supported if also reading input from stdin') return options def run(self): super(VaultCLI, self).run() loader = DataLoader() # set default restrictive umask old_umask = os.umask(0o077) vault_ids = list(context.CLIARGS['vault_ids']) # there are 3 types of actions, those that just 'read' (decrypt, view) and only # need to ask for a password once, and those that 'write' (create, encrypt) that # ask for a new password and confirm it, and 'read/write (rekey) that asks for the # old password, then asks for a new one and confirms it. default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST vault_ids = default_vault_ids + vault_ids action = context.CLIARGS['action'] # TODO: instead of prompting for these before, we could let VaultEditor # call a callback when it needs it. if action in ['decrypt', 'view', 'rekey', 'edit']: vault_secrets = self.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(context.CLIARGS['vault_password_files']), ask_vault_pass=context.CLIARGS['ask_vault_pass']) if not vault_secrets: raise AnsibleOptionsError("A vault password is required to use Ansible's Vault") if action in ['encrypt', 'encrypt_string', 'create']: encrypt_vault_id = None # no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit' if action not in ['edit']: encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY vault_secrets = None vault_secrets = \ self.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(context.CLIARGS['vault_password_files']), ask_vault_pass=context.CLIARGS['ask_vault_pass'], create_new_password=True) if len(vault_secrets) > 1 and not encrypt_vault_id: raise AnsibleOptionsError("The vault-ids %s are available to encrypt. 
Specify the vault-id to encrypt with --encrypt-vault-id" % ','.join([x[0] for x in vault_secrets])) if not vault_secrets: raise AnsibleOptionsError("A vault password is required to use Ansible's Vault") encrypt_secret = match_encrypt_secret(vault_secrets, encrypt_vault_id=encrypt_vault_id) # only one secret for encrypt for now, use the first vault_id and use its first secret # TODO: exception if more than one? self.encrypt_vault_id = encrypt_secret[0] self.encrypt_secret = encrypt_secret[1] if action in ['rekey']: encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY # print('encrypt_vault_id: %s' % encrypt_vault_id) # print('default_encrypt_vault_id: %s' % default_encrypt_vault_id) # new_vault_ids should only ever be one item, from --new-vault-id; # load the default vault ids if we are using --encrypt-vault-id new_vault_ids = [] if encrypt_vault_id: new_vault_ids = default_vault_ids if context.CLIARGS['new_vault_id']: new_vault_ids.append(context.CLIARGS['new_vault_id']) new_vault_password_files = [] if context.CLIARGS['new_vault_password_file']: new_vault_password_files.append(context.CLIARGS['new_vault_password_file']) new_vault_secrets = \ self.setup_vault_secrets(loader, vault_ids=new_vault_ids, vault_password_files=new_vault_password_files, ask_vault_pass=context.CLIARGS['ask_vault_pass'], create_new_password=True) if not new_vault_secrets: raise AnsibleOptionsError("A new vault password is required to use Ansible's Vault rekey") # There is only one new_vault_id currently and one new_vault_secret, or we # use the id specified in --encrypt-vault-id new_encrypt_secret = match_encrypt_secret(new_vault_secrets, encrypt_vault_id=encrypt_vault_id) self.new_encrypt_vault_id = new_encrypt_secret[0] self.new_encrypt_secret = new_encrypt_secret[1] loader.set_vault_secrets(vault_secrets) # FIXME: do we need to create VaultEditor here? it's not reused vault = VaultLib(vault_secrets) self.editor = VaultEditor(vault) context.CLIARGS['func']() # and restore umask os.umask(old_umask) def execute_encrypt(self): ''' encrypt the supplied file using the provided vault secret ''' if not context.CLIARGS['args'] and sys.stdin.isatty(): display.display("Reading plaintext input from stdin", stderr=True) for f in context.CLIARGS['args'] or ['-']: # Fixme: use the correct vault secret for each file self.editor.encrypt_file(f, self.encrypt_secret, vault_id=self.encrypt_vault_id, output_file=context.CLIARGS['output_file']) if sys.stdout.isatty(): display.display("Encryption successful", stderr=True) @staticmethod def format_ciphertext_yaml(b_ciphertext, indent=None, name=None): indent = indent or 10 block_format_var_name = "" if name: block_format_var_name = "%s: " % name block_format_header = "%s!vault |" % block_format_var_name lines = [] vault_ciphertext = to_text(b_ciphertext) lines.append(block_format_header) for line in vault_ciphertext.splitlines(): lines.append('%s%s' % (' ' * indent, line)) yaml_ciphertext = '\n'.join(lines) return yaml_ciphertext def execute_encrypt_string(self): ''' encrypt the supplied string using the provided vault secret ''' b_plaintext = None # Holds tuples (the_text, the_source_of_the_string, the variable name if it's provided). b_plaintext_list = [] # remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so # we don't add it to the plaintext list args = [x for x in context.CLIARGS['args'] if x != '-'] # We can prompt and read input, or read from stdin, but not both.
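# For illustration only (not upstream code): the three input sources gathered
# below map to CLI usage like the following (variable names are hypothetical):
#
#   ansible-vault encrypt_string 'some plaintext' -n my_var   # from the args
#   ansible-vault encrypt_string --prompt                     # from the prompt
#   echo -n 'some plaintext' | ansible-vault encrypt_string --stdin-name my_var
#
# Each plaintext is collected into b_plaintext_list as a (bytes, source, name)
# tuple, e.g. (b'some plaintext', self.FROM_STDIN, 'my_var'), and encrypted in
# _format_output_vault_strings() further down.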
if context.CLIARGS['encrypt_string_prompt']: msg = "String to encrypt: " name = None name_prompt_response = display.prompt('Variable name (enter for no name): ') # TODO: enforce var naming rules? if name_prompt_response != "": name = name_prompt_response # TODO: could prompt for which vault_id to use for each plaintext string # currently, it will just be the default hide_input = not context.CLIARGS['show_string_input'] if hide_input: msg = "String to encrypt (hidden): " else: msg = "String to encrypt:" prompt_response = display.prompt(msg, private=hide_input) if prompt_response == '': raise AnsibleOptionsError('The plaintext provided from the prompt was empty, not encrypting') b_plaintext = to_bytes(prompt_response) b_plaintext_list.append((b_plaintext, self.FROM_PROMPT, name)) # read from stdin if self.encrypt_string_read_stdin: if sys.stdout.isatty(): display.display("Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a newline)", stderr=True) stdin_text = sys.stdin.read() if stdin_text == '': raise AnsibleOptionsError('stdin was empty, not encrypting') if sys.stdout.isatty() and not stdin_text.endswith("\n"): display.display("\n") b_plaintext = to_bytes(stdin_text) # defaults to None name = context.CLIARGS['encrypt_string_stdin_name'] b_plaintext_list.append((b_plaintext, self.FROM_STDIN, name)) # use any leftover args as strings to encrypt # Try to match args up to --name options if context.CLIARGS.get('encrypt_string_names', False): name_and_text_list = list(zip(context.CLIARGS['encrypt_string_names'], args)) # Some but not enough --name's to name each var if len(args) > len(name_and_text_list): # Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that. display.display('The number of --name options do not match the number of args.', stderr=True) display.display('The last named variable will be "%s". The rest will not have' ' names.' % context.CLIARGS['encrypt_string_names'][-1], stderr=True) # Add the rest of the args without specifying a name for extra_arg in args[len(name_and_text_list):]: name_and_text_list.append((None, extra_arg)) # if no --names are provided, just use the args without a name. else: name_and_text_list = [(None, x) for x in args] # Convert the plaintext text objects to bytestrings and collect for name_and_text in name_and_text_list: name, plaintext = name_and_text if plaintext == '': raise AnsibleOptionsError('The plaintext provided from the command line args was empty, not encrypting') b_plaintext = to_bytes(plaintext) b_plaintext_list.append((b_plaintext, self.FROM_ARGS, name)) # TODO: specify vault_id per string? # Format the encrypted strings and any corresponding stderr output outputs = self._format_output_vault_strings(b_plaintext_list, vault_id=self.encrypt_vault_id) b_outs = [] for output in outputs: err = output.get('err', None) out = output.get('out', '') if err: sys.stderr.write(err) b_outs.append(to_bytes(out)) # The output must end with a newline to play nice with terminal representation. 
# Refs: # * https://stackoverflow.com/a/729795/595220 # * https://github.com/ansible/ansible/issues/78932 b_outs.append(b'') self.editor.write_data(b'\n'.join(b_outs), context.CLIARGS['output_file'] or '-') if sys.stdout.isatty(): display.display("Encryption successful", stderr=True) # TODO: offer block or string ala eyaml def _format_output_vault_strings(self, b_plaintext_list, vault_id=None): # If we are only showing one item in the output, we don't need to include commented # delimiters in the text show_delimiter = False if len(b_plaintext_list) > 1: show_delimiter = True # list of dicts {'out': '', 'err': ''} output = [] # Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook. # For more than one input, show some differentiating info in the stderr output so we can tell them # apart. If we have a var name, we include that in the yaml for index, b_plaintext_info in enumerate(b_plaintext_list): # (the text itself, which input it came from, its name) b_plaintext, src, name = b_plaintext_info b_ciphertext = self.editor.encrypt_bytes(b_plaintext, self.encrypt_secret, vault_id=vault_id) # block formatting yaml_text = self.format_ciphertext_yaml(b_ciphertext, name=name) err_msg = None if show_delimiter: human_index = index + 1 if name: err_msg = '# The encrypted version of variable ("%s", the string #%d from %s).\n' % (name, human_index, src) else: err_msg = '# The encrypted version of the string #%d from %s.\n' % (human_index, src) output.append({'out': yaml_text, 'err': err_msg}) return output def execute_decrypt(self): ''' decrypt the supplied file using the provided vault secret ''' if not context.CLIARGS['args'] and sys.stdin.isatty(): display.display("Reading ciphertext input from stdin", stderr=True) for f in context.CLIARGS['args'] or ['-']: self.editor.decrypt_file(f, output_file=context.CLIARGS['output_file']) if sys.stdout.isatty(): display.display("Decryption successful", stderr=True) def execute_create(self): ''' create and open a file in an editor that will be encrypted with the provided vault secret when closed''' if len(context.CLIARGS['args']) != 1: raise AnsibleOptionsError("ansible-vault create can take only one filename argument") if sys.stdout.isatty() or context.CLIARGS['skip_tty_check']: self.editor.create_file(context.CLIARGS['args'][0], self.encrypt_secret, vault_id=self.encrypt_vault_id) else: raise AnsibleOptionsError("not a tty, editor cannot be opened") def execute_edit(self): ''' open and decrypt an existing vaulted file in an editor, that will be encrypted again when closed''' for f in context.CLIARGS['args']: self.editor.edit_file(f) def execute_view(self): ''' open, decrypt and view an existing vaulted file using a pager using the supplied vault secret ''' for f in context.CLIARGS['args']: # Note: vault should return byte strings because it could encrypt # and decrypt binary files.
We are responsible for changing it to # unicode here because we are displaying it and therefore can make # the decision that the display doesn't have to be precisely what # the input was (leave that to decrypt instead) plaintext = self.editor.plaintext(f) self.pager(to_text(plaintext)) def execute_rekey(self): ''' re-encrypt a vaulted file with a new secret, the previous secret is required ''' for f in context.CLIARGS['args']: # FIXME: plumb in vault_id, use the default new_vault_secret for now self.editor.rekey_file(f, self.new_encrypt_secret, self.new_encrypt_vault_id) display.display("Rekey successful", stderr=True) def main(args=None): VaultCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/changelogs/0000755000000000000000000000000014556006441015023 5ustar00rootrootansible-core-2.16.3/changelogs/CHANGELOG-v2.16.rst0000644000000000000000000011400114556006441017613 0ustar00rootroot============================================= ansible-core 2.16 "All My Love" Release Notes ============================================= .. contents:: Topics v2.16.3 ======= Release Summary --------------- | Release Date: 2024-01-29 | `Porting Guide `__ Security Fixes -------------- - ANSIBLE_NO_LOG - Address issue where ANSIBLE_NO_LOG was ignored (CVE-2024-0690) Bugfixes -------- - Run all handlers with the same ``listen`` topic, even when notified from another handler (https://github.com/ansible/ansible/issues/82363). - ``ansible-galaxy role import`` - fix using the ``role_name`` in a standalone role's ``galaxy_info`` metadata by disabling automatic removal of the ``ansible-role-`` prefix. This matches the behavior of the Galaxy UI which also no longer implicitly removes the ``ansible-role-`` prefix. Use the ``--role-name`` option or add a ``role_name`` to the ``galaxy_info`` dictionary in the role's ``meta/main.yml`` to use an alternate role name. - ``ansible-test sanity --test runtime-metadata`` - add ``action_plugin`` as a valid field for modules in the schema (https://github.com/ansible/ansible/pull/82562). - ansible-config init will now dedupe ini entries from plugins. - ansible-galaxy role import - exit with 1 when the import fails (https://github.com/ansible/ansible/issues/82175). - ansible-galaxy role install - normalize tarfile paths and symlinks using ``ansible.utils.path.unfrackpath`` and consider them valid as long as the realpath is in the tarfile's role directory (https://github.com/ansible/ansible/issues/81965). - delegate_to when set to an empty or undefined variable will now give a proper error. - dwim functions for lookups should be better at detecting role context even in the absence of tasks/main. - roles, code cleanup and performance optimization of dependencies, now cached, and ``public`` setting is now determined once, at role instantiation. - roles, the ``static`` property is now correctly set, this will fix issues with ``public`` and ``DEFAULT_PRIVATE_ROLE_VARS`` controls on exporting vars.
- unsafe data - Enable directly using ``AnsibleUnsafeText`` with Python ``pathlib`` (https://github.com/ansible/ansible/issues/82414) v2.16.2 ======= Release Summary --------------- | Release Date: 2023-12-11 | `Porting Guide `__ Bugfixes -------- - unsafe data - Address an incompatibility when iterating or getting a single index from ``AnsibleUnsafeBytes`` - unsafe data - Address an incompatibility with ``AnsibleUnsafeText`` and ``AnsibleUnsafeBytes`` when pickling with ``protocol=0`` v2.16.1 ======= Release Summary --------------- | Release Date: 2023-12-04 | `Porting Guide `__ Breaking Changes / Porting Guide -------------------------------- - assert - Nested templating may result in an inability for the conditional to be evaluated. See the porting guide for more information. Security Fixes -------------- - templating - Address issues where internal templating can cause unsafe variables to lose their unsafe designation (CVE-2023-5764) Bugfixes -------- - Fix issue where an ``include_tasks`` handler in a role was not able to locate a file in ``tasks/`` when ``tasks_from`` was used as a role entry point and ``main.yml`` was not present (https://github.com/ansible/ansible/issues/82241) - Plugin loader no longer dedupes nor caches filter/test plugins by file basename, but by full path name. - Restore the ability for filters/tests to have the same file base name but different tests/filters defined inside. - ansible-pull now expands relative paths for the ``-d|--directory`` option before use. - ansible-pull will now correctly handle become and connection password file options for ansible-playbook. - flush_handlers - properly handle a handler failure in a nested block when ``force_handlers`` is set (http://github.com/ansible/ansible/issues/81532) - module no_log will no longer affect top level booleans, for example ``no_log_module_parameter='a'`` will no longer hide ``changed=False`` as a 'no log value' (matches 'a'). - role params now have higher precedence than host facts again, matching documentation; this had unintentionally changed in 2.15. - wait_for once again does not attempt to handle 'non mmapable files'. v2.16.0 ======= Release Summary --------------- | Release Date: 2023-11-06 | `Porting Guide `__ Minor Changes ------------- - Add Python type hints to the Display class (https://github.com/ansible/ansible/issues/80841) - Add ``GALAXY_COLLECTIONS_PATH_WARNING`` option to disable the warning given by ``ansible-galaxy collection install`` when installing a collection to a path that isn't in the configured collection paths. - Add ``python3.12`` to the default ``INTERPRETER_PYTHON_FALLBACK`` list. - Add ``utcfromtimestamp`` and ``utcnow`` to ``ansible.module_utils.compat.datetime`` to return fixed offset datetime objects. - Add a general ``GALAXY_SERVER_TIMEOUT`` config option for distribution servers (https://github.com/ansible/ansible/issues/79833). - Added Python type annotation to connection plugins - CLI argument parsing - Automatically prepend to the help of CLI arguments that support being specified multiple times. (https://github.com/ansible/ansible/issues/22396) - DEFAULT_TRANSPORT now defaults to 'ssh'; the old 'smart' option is being deprecated as versions of OpenSSH without control persist are basically not present anymore. - Documentation for set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now states that the returned list items are in arbitrary order. - Record ``removal_date`` in runtime metadata as a string instead of a date.
- Remove the ``CleansingNodeVisitor`` class and its usage due to the templating changes that made it superfluous. Also simplify the ``Conditional`` class. - Removed ``exclude`` and ``recursive-exclude`` commands for generated files from the ``MANIFEST.in`` file. These excludes were unnecessary since releases are expected to be built with a clean worktree. - Removed ``exclude`` commands for sanity test files from the ``MANIFEST.in`` file. These tests were previously excluded because they did not pass when run from an sdist. However, sanity tests are not expected to pass from an sdist, so excluding some (but not all) of the failing tests makes little sense. - Removed redundant ``include`` commands from the ``MANIFEST.in`` file. These includes either duplicated default behavior or another command. - The ``ansible-core`` sdist no longer contains pre-generated man pages. Instead, a ``packaging/cli-doc/build.py`` script is included in the sdist. This script can generate man pages and standalone RST documentation for ``ansible-core`` CLI programs. - The ``docs`` and ``examples`` directories are no longer included in the ``ansible-core`` sdist. These directories have been moved to the https://github.com/ansible/ansible-documentation repository. - The minimum required ``setuptools`` version is now 66.1.0, as it is the oldest version to support Python 3.12. - Update ``ansible_service_mgr`` fact to include init system for SMGL OS family - Use ``ansible.module_utils.common.text.converters`` instead of ``ansible.module_utils._text``. - Use ``importlib.resources.abc.TraversableResources`` instead of deprecated ``importlib.abc.TraversableResources`` where available (https://github.com/ansible/ansible/pull/81082). - Use ``include`` where ``recursive-include`` is unnecessary in the ``MANIFEST.in`` file. - Use ``package_data`` instead of ``include_package_data`` for ``setup.cfg`` to avoid ``setuptools`` warnings. - Utilize the gpg check provided internally by the ``transaction.run`` method as opposed to calling it manually. - ``Templar`` - do not add the ``dict`` constructor to ``globals`` as all required Jinja2 versions already do so - ansible-doc - allow filtering the listing of collections and metadata dump by more than one collection (https://github.com/ansible/ansible/pull/81450). - ansible-galaxy - Add a plural option to improve ignoring multiple signature error status codes when installing or verifying collections. A space-separated list of error codes can follow --ignore-signature-status-codes in addition to specifying --ignore-signature-status-code multiple times (for example, ``--ignore-signature-status-codes NO_PUBKEY UNEXPECTED``). - ansible-galaxy - Remove internal configuration argument ``v3`` (https://github.com/ansible/ansible/pull/80721) - ansible-galaxy - add note to the collection dependency resolver error message about pre-releases if ``--pre`` was not provided (https://github.com/ansible/ansible/issues/80048). - ansible-galaxy - used to crash out with an "Errno 20 Not a directory" error when extracting files from a role when hitting a file with an illegal name (https://github.com/ansible/ansible/pull/81553). Now it gives a warning identifying the culprit file and the rule violation (e.g., ``my$class.jar`` has a ``$`` in the name) before crashing out, giving the user a chance to remove the invalid file and try again. (https://github.com/ansible/ansible/pull/81555). - ansible-test - Add Alpine 3.18 to remotes - ansible-test - Add Fedora 38 container. - ansible-test - Add Fedora 38 remote.
- ansible-test - Add FreeBSD 13.2 remote. - ansible-test - Add new pylint checker for new ``# deprecated:`` comments within code to trigger errors when time to remove code that has no user facing deprecation message. Only supported in ansible-core, not collections. - ansible-test - Add support for RHEL 8.8 remotes. - ansible-test - Add support for RHEL 9.2 remotes. - ansible-test - Add support for testing with Python 3.12. - ansible-test - Allow float values for the ``--timeout`` option to the ``env`` command. This simplifies testing. - ansible-test - Enable ``thread`` code coverage in addition to the existing ``multiprocessing`` coverage. - ansible-test - Make Python 3.12 the default version used in the ``base`` and ``default`` containers. - ansible-test - RHEL 8.8 provisioning can now be used with the ``--python 3.11`` option. - ansible-test - RHEL 9.2 provisioning can now be used with the ``--python 3.11`` option. - ansible-test - Refactored ``env`` command logic and timeout handling. - ansible-test - Remove Fedora 37 remote support. - ansible-test - Remove Fedora 37 test container. - ansible-test - Remove Python 3.8 and 3.9 from RHEL 8.8. - ansible-test - Remove obsolete embedded script for configuring WinRM on Windows remotes. - ansible-test - Removed Ubuntu 20.04 LTS image from the `--remote` option. - ansible-test - Removed `freebsd/12.4` remote. - ansible-test - Removed `freebsd/13.1` remote. - ansible-test - Removed test remotes: rhel/8.7, rhel/9.1 - ansible-test - Removed the deprecated ``--docker-no-pull`` option. - ansible-test - Removed the deprecated ``--no-pip-check`` option. - ansible-test - Removed the deprecated ``foreman`` test plugin. - ansible-test - Removed the deprecated ``govcsim`` support from the ``vcenter`` test plugin. - ansible-test - Replace the ``pytest-forked`` pytest plugin with a custom plugin. - ansible-test - The ``no-get-exception`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``get_exception`` usage. - ansible-test - The ``replace-urlopen`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``urlopen`` usage. - ansible-test - The ``use-compat-six`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``six`` usage. - ansible-test - The openSUSE test container has been updated to openSUSE Leap 15.5. - ansible-test - Update pip to ``23.1.2`` and setuptools to ``67.7.2``. - ansible-test - Update the ``default`` containers. - ansible-test - Update the ``nios-test-container`` to version 2.0.0, which supports API version 2.9. - ansible-test - Update the logic used to detect when ``ansible-test`` is running from source. - ansible-test - Updated the CloudStack test container to version 1.6.1. - ansible-test - Updated the distro test containers to version 6.3.0 to include coverage 7.3.2 for Python 3.8+. The alpine3 container is now based on 3.18 instead of 3.17 and includes Python 3.11 instead of Python 3.10. - ansible-test - Use ``datetime.datetime.now`` with ``tz`` specified instead of ``datetime.datetime.utcnow``. - ansible-test - Use a context manager to perform cleanup at exit instead of using the built-in ``atexit`` module. - ansible-test - When invoking ``sleep`` in containers during container setup, the ``env`` command is used to avoid invoking the shell builtin, if present. 
- ansible-test - remove Alpine 3.17 from remotes - ansible-test - Python 3.8-3.12 will use ``coverage`` v7.3.2. - ansible-test - ``coverage`` v6.5.0 is to be used only under Python 3.7. - ansible-vault create: Now raises an error when opening the editor without a tty. The flag --skip-tty-check restores previous behaviour. - ansible_user_module - tweaked macos user defaults to reflect expected defaults (https://github.com/ansible/ansible/issues/44316) - apt - return calculated diff while running apt clean operation. - blockinfile - add append_newline and prepend_newline options (https://github.com/ansible/ansible/issues/80835). - cli - Added short option '-J' for asking for vault password (https://github.com/ansible/ansible/issues/80523). - command - Add option ``expand_argument_vars`` to disable argument expansion and use literal values - https://github.com/ansible/ansible/issues/54162 - config lookup new option show_origin to also return the origin of a configuration value. - display methods for warning and deprecation are now proxied to the main process when issued from a fork. This allows for the deduplication of warnings and deprecations to work globally. - dnf5 - enable environment groups installation testing in CI as its support was added. - dnf5 - enable now implemented ``cacheonly`` functionality - executor now skips persistent connection when it detects an action that does not require a connection. - find module - Add ability to filter based on modes - gather_facts now will use gather_timeout setting to limit parallel execution of modules that do not themselves use gather_timeout. - group - remove extraneous warning shown when user does not exist (https://github.com/ansible/ansible/issues/77049). - include_vars - os.walk now follows symbolic links when traversing directories (https://github.com/ansible/ansible/pull/80460) - module compression is now sourced directly via config, bypassing possibly stale play_context values. - reboot - show last error message in verbose logs (https://github.com/ansible/ansible/issues/81574). - service_facts now returns more info for rcctl managed systems (OpenBSD). - tasks - the ``retries`` keyword can be specified without ``until`` in which case the task is retried until it succeeds but at most ``retries`` times (https://github.com/ansible/ansible/issues/20802) - user - add new option ``password_expire_warn`` (supported on Linux only) to set the number of days of warning before a password change is required (https://github.com/ansible/ansible/issues/79882). - yum_repository - Align module documentation with parameters Breaking Changes / Porting Guide -------------------------------- - Any plugin using the config system and the `cli` entry to use the `timeout` from the command line will see the value change if the user had configured it in any of the lower precedence methods. If relying on this behaviour to consume the global/generic timeout from the DEFAULT_TIMEOUT constant, please consult the documentation on plugin configuration to add the overlapping entries (a sketch of such an entry follows this list). - ansible-test - Test plugins that rely on containers no longer support reusing running containers. The previous behavior was an undocumented, untested feature. - service module will not permanently configure variables/flags for openbsd when doing enable/disable operation anymore; this module was never meant to do this type of work, just to manage the service state itself. A rcctl_config or similar module should be created and used instead.
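Below is a minimal sketch of the kind of overlapping option definition the first porting note above refers to; the plugin is hypothetical and the exact entries should be checked against the plugin configuration documentation. The ``cli`` entry binds the option to the ``--timeout`` command-line flag, while ``env`` and ``ini`` are the lower precedence sources whose configured values now take effect through this option::

    # documentation block of a hypothetical connection plugin
    DOCUMENTATION = '''
        options:
          timeout:
            type: int
            default: 10
            ini:
              - section: defaults
                key: timeout
            env:
              - name: ANSIBLE_TIMEOUT
            cli:
              - name: timeout
    '''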
Deprecated Features ------------------- - Deprecated ini config option ``collections_paths``, use the singular form ``collections_path`` instead - Deprecated the env var ``ANSIBLE_COLLECTIONS_PATHS``, use the singular form ``ANSIBLE_COLLECTIONS_PATH`` instead - Old style vars plugins which use the entrypoints ``get_host_vars`` or ``get_group_vars`` are deprecated. The plugin should be updated to inherit from ``BaseVarsPlugin`` and define a ``get_vars`` method as the entrypoint. - Support for Windows Server 2012 and 2012 R2 has been removed as Microsoft's support for them ends on October 10th 2023. These versions of Windows will no longer be tested in this Ansible release and it cannot be guaranteed that they will continue to work going forward. - ``STRING_CONVERSION_ACTION`` config option is deprecated as it is no longer used in the Ansible Core code base. - the 'smart' option for setting a connection plugin is being removed as its main purpose (choosing between ssh and paramiko) is now irrelevant. - vault and unvault filters - the undocumented ``vaultid`` parameter is deprecated and will be removed in ansible-core 2.20. Use ``vault_id`` instead. - yum_repository - deprecated parameter 'keepcache' (https://github.com/ansible/ansible/issues/78693). Removed Features (previously deprecated) ---------------------------------------- - ActionBase - remove deprecated ``_remote_checksum`` method - PlayIterator - remove deprecated ``cache_block_tasks`` and ``get_original_task`` methods - Remove deprecated ``FileLock`` class - Removed Python 3.9 as a supported version on the controller. Python 3.10 or newer is required. - Removed ``include`` which has been deprecated in Ansible 2.12. Use ``include_tasks`` or ``import_tasks`` instead. - ``Templar`` - remove deprecated ``shared_loader_obj`` parameter of ``__init__`` - ``fetch_url`` - remove auto disabling ``decompress`` when gzip is not available - ``get_action_args_with_defaults`` - remove deprecated ``redirected_names`` method parameter - ansible-test - Removed support for the remote Windows targets 2012 and 2012-R2 - inventory_cache - remove deprecated ``default.fact_caching_prefix`` ini configuration option, use ``defaults.fact_caching_prefix`` instead. - module_utils/basic.py - Removed Python 3.5 as a supported remote version. Python 2.7 or Python 3.6+ is now required. - stat - removed unused ``get_md5`` parameter. Security Fixes -------------- - ansible-galaxy - Prevent roles from using symlinks to overwrite files outside of the installation directory (CVE-2023-5115) Bugfixes -------- - Allow for searching handler subdir for included task via include_role (https://github.com/ansible/ansible/issues/81722) - AnsibleModule.run_command - Only use selectors when needed, and rely on Python stdlib subprocess for the simple task of collecting stdout/stderr when prompt matching is not required. - Cache host_group_vars after instantiating it once and limit the amount of repetitive work it needs to do every time it runs. - Call PluginLoader.all() once for vars plugins, and load vars plugins that run automatically or are enabled specifically by name subsequently. - Display - Defensively configure writing to stdout and stderr with a custom encoding error handler that will replace invalid characters while providing a deprecation warning that non-utf8 text will result in an error in a future version. - Exclude internal options from man pages and docs. - Fix ``ansible-config init`` man page option indentation.
- Fix ``ast`` deprecation warnings for ``Str`` and ``value.s`` when using Python 3.12. - Fix ``run_once`` being incorrectly interpreted on handlers (https://github.com/ansible/ansible/issues/81666) - Fix exceptions caused by various inputs when performing arg splitting or parsing key/value pairs. Resolves issue https://github.com/ansible/ansible/issues/46379 and issue https://github.com/ansible/ansible/issues/61497 - Fix incorrect parsing of multi-line Jinja2 blocks when performing arg splitting or parsing key/value pairs. - Fix post-validating looped task fields so the strategy uses the correct values after task execution. - Fixed ``pip`` module failure when quotes are used in the ``virtualenv_command`` option for the venv command. (https://github.com/ansible/ansible/issues/76372) - From issue https://github.com/ansible/ansible/issues/80880, when notifying a handler from another handler, handler notifications must be registered immediately as the flush_handler call is not recursive. - Import ``FILE_ATTRIBUTES`` from ``ansible.module_utils.common.file`` in ``ansible.module_utils.basic`` instead of defining it twice. - Inventory scripts parser now handles exceptions when getting hostvars (https://github.com/ansible/ansible/issues/81103) - On Python 3 use datetime methods ``fromtimestamp`` and ``now`` with UTC timezone instead of ``utcfromtimestamp`` and ``utcnow``, which are deprecated in Python 3.12. - PluginLoader - fix Jinja plugin performance issues (https://github.com/ansible/ansible/issues/79652) - PowerShell - Remove some code which is no longer valid for dotnet 5+ - Prevent running same handler multiple times when included via ``include_role`` (https://github.com/ansible/ansible/issues/73643) - Prompting - add a short sleep between polling for user input to reduce CPU consumption (https://github.com/ansible/ansible/issues/81516). - Properly disable ``jinja2_native`` in the template module when jinja2 override is used in the template (https://github.com/ansible/ansible/issues/80605) - Properly template tags in parent blocks (https://github.com/ansible/ansible/issues/81053) - Remove unreachable parser error for removed ``static`` parameter of ``include_role`` - Replace uses of ``configparser.ConfigParser.readfp()`` which was removed in Python 3.12 with ``configparser.ConfigParser.read_file()`` (https://github.com/ansible/ansible/issues/81656) - Set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now always return a ``list``, never a ``set``. Previously, a ``set`` would be returned if the inputs were a hashable type such as ``str``, instead of a collection, such as a ``list`` or ``tuple``. - Set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now use set operations when the given items are hashable. Previously, list operations were performed unless the inputs were a hashable type such as ``str``, instead of a collection, such as a ``list`` or ``tuple``. - Switch result queue from a ``multiprocessing.queues.Queue`` to ``multiprocessing.queues.SimpleQueue``, primarily to allow properly handling pickling errors, to prevent an infinite hang waiting for task results - The ``ansible-config init`` command now has a documentation description. - The ``ansible-galaxy collection download`` command now has a documentation description. - The ``ansible-galaxy collection install`` command documentation is now visible (previously hidden by a decorator). - The ``ansible-galaxy collection verify`` command now has a documentation description.
- The ``ansible-galaxy role install`` command documentation is now visible (previously hidden by a decorator).
- The ``ansible-inventory`` command now has a documentation description (previously used as the epilog).
- The ``hostname`` module now also updates both current and permanent hostname on OpenBSD. Before it only updated the permanent hostname (https://github.com/ansible/ansible/issues/80520).
- Update module_utils.urls unit test to work with cryptography >= 41.0.0.
- When generating man pages, use ``func`` to find the command function instead of looking it up by the command name.
- ``StrategyBase._process_pending_results`` - create a ``Templar`` on demand for templating ``changed_when``/``failed_when``.
- ``ansible-galaxy`` now considers all collection paths when identifying which collection requirements are already installed. Use the ``COLLECTIONS_PATHS`` and ``COLLECTIONS_SCAN_SYS_PATHS`` config options to modify these. Previously only the install path was considered when resolving the candidates. The install path will remain the only one potentially modified. (https://github.com/ansible/ansible/issues/79767, https://github.com/ansible/ansible/issues/81163)
- ``ansible.module_utils.service`` - ensure binary data transmission in ``daemonize()``
- ``ansible.module_utils.service`` - fix inter-process communication in ``daemonize()``
- ``import_role`` reverts to previous behavior of exporting vars at compile time.
- ``pkg_mgr`` - fix the default dnf version detection
- ansiballz - Prevent issue where the time on the control host could change part way through building the ansiballz file, potentially causing a pre-1980 date to be used during ansiballz unpacking leading to a zip file error (https://github.com/ansible/ansible/issues/80089)
- ansible terminal color settings were incorrectly limited to 16 options via 'choices'; the restriction has been removed so all 256 can be accessed.
- ansible-console - fix filtering by collection names when a collection search path was set (https://github.com/ansible/ansible/pull/81450).
- ansible-galaxy - Enabled the ``data`` tarfile filter during role installation for Python versions that support it. A probing mechanism is used to avoid Python versions with a broken implementation (a simplified sketch appears below).
- ansible-galaxy - Fix issue installing collections containing directories with more than 100 characters on python versions before 3.10.6
- ansible-galaxy - Fix variable type error when installing subdir collections (https://github.com/ansible/ansible/issues/80943)
- ansible-galaxy - Provide a better error message when using a requirements file with an invalid format - https://github.com/ansible/ansible/issues/81901
- ansible-galaxy - fix installing collections from directories that have a trailing path separator (https://github.com/ansible/ansible/issues/77803).
- ansible-galaxy - fix installing signed collections (https://github.com/ansible/ansible/issues/80648).
- ansible-galaxy - reduce API calls to servers by fetching signatures only for final candidates.
- ansible-galaxy - started allowing the use of pre-releases for collections that do not have any stable versions published. (https://github.com/ansible/ansible/pull/81606)
- ansible-galaxy - started allowing the use of pre-releases for dependencies on any level of the dependency tree that specifically demand exact pre-release versions of collections and not version ranges. (https://github.com/ansible/ansible/pull/81606)
- ansible-galaxy collection verify - fix verifying signed collections when the keyring is not configured.
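
The ``data`` tarfile filter mentioned above is the PEP 706 extraction filter from the Python standard library; a simplified sketch of enabling it where available (ansible-core's actual probing logic is more involved)::

    import tarfile

    def extract_role(archive, dest):
        with tarfile.open(archive) as tar:
            try:
                # the 'data' filter rejects absolute paths, parent-directory
                # traversal and other unsafe members during extraction
                tar.extractall(dest, filter='data')
            except TypeError:
                # this Python does not support the 'filter' argument
                tar.extractall(dest)
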
- ansible-galaxy info - fix reporting no role found when lookup_role_by_name returns None.
- ansible-inventory - index available_hosts for major performance boost when dumping large inventories
- ansible-test - Add a ``pylint`` plugin to work around a known issue on Python 3.12.
- ansible-test - Add support for ``argcomplete`` version 3.
- ansible-test - All containers created by ansible-test now include the current test session ID in their name. This avoids conflicts between concurrent ansible-test invocations using the same container host.
- ansible-test - Always use ansible-test managed entry points for ansible-core CLI tools when not running from source. This fixes issues where CLI entry points created during install are not compatible with ansible-test.
- ansible-test - Fix a traceback that occurs when attempting to test Ansible source using a different ansible-test. A clear error message is now given when this scenario occurs.
- ansible-test - Fix handling of timeouts exceeding one day.
- ansible-test - Fix parsing of cgroup entries which contain a ``:`` in the path (https://github.com/ansible/ansible/issues/81977).
- ansible-test - Fix several possible tracebacks when using the ``-e`` option with sanity tests.
- ansible-test - Fix various cases where the test timeout could expire without terminating the tests.
- ansible-test - Include missing ``pylint`` requirements for Python 3.10.
- ansible-test - Pre-build a PyYAML wheel before installing requirements to avoid a potential Cython build failure.
- ansible-test - Remove redundant warning about missing programs before attempting to execute them.
- ansible-test - The ``import`` sanity test now checks the collection loader for remote-only Python support when testing ansible-core.
- ansible-test - Unit tests now report warnings generated during test runs. Previously only warnings generated during test collection were reported.
- ansible-test - Update ``pylint`` to 2.17.2 to resolve several possible false positives.
- ansible-test - Update ``pylint`` to 2.17.3 to resolve several possible false positives.
- ansible-test - Update ``pylint`` to version 3.0.1.
- ansible-test - Use ``raise ... from ...`` when raising exceptions from within an exception handler.
- ansible-test - When bootstrapping remote FreeBSD instances, use the OS packaged ``setuptools`` instead of installing the latest version from PyPI.
- ansible-test local change detection - use ``git merge-base HEAD`` instead of ``git merge-base --fork-point `` (https://github.com/ansible/ansible/pull/79734).
- ansible-vault - fail when the destination file location is not writable before performing encryption (https://github.com/ansible/ansible/issues/81455) (a sketch of this kind of check appears below).
- apt - ignore fail_on_autoremove and allow_downgrade parameters when using aptitude (https://github.com/ansible/ansible/issues/77868).
- blockinfile - avoid crash with Python 3 if creating the directory fails when ``create=true`` (https://github.com/ansible/ansible/pull/81662).
- connection timeouts defined in ansible.cfg will now be properly used; the --timeout cli option was obscuring them by always being set.
- copy - print correct destination filename when using `content` and `--diff` (https://github.com/ansible/ansible/issues/79749).
- copy unit tests - Fixing "dir all perms" documentation and formatting for easier reading.
- core will now also look at the connection plugin when forcing the 'local' interpreter for networking path compatibility, as ansible_network_os alone could be misleading.
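
The up-front check described for ``ansible-vault`` above amounts to something like the following (assumed logic for illustration, not the actual implementation)::

    import os

    def ensure_writable(dest):
        # an existing file must itself be writable; a new file requires a
        # writable containing directory
        if os.path.exists(dest):
            target = dest
        else:
            target = os.path.dirname(os.path.abspath(dest)) or '.'
        if not os.access(target, os.W_OK):
            raise SystemExit("destination '%s' is not writable" % dest)
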
- deb822_repository - use http-agent for receiving content (https://github.com/ansible/ansible/issues/80809).
- debconf - fix idempotency for questions with type 'password' (https://github.com/ansible/ansible/issues/47676).
- distribution facts - fix Source Mage family mapping
- dnf - fix a failure when a package from URI was specified and ``update_only`` was set (https://github.com/ansible/ansible/issues/81376).
- dnf5 - Update dnf5 module to handle API change for setting the download directory (https://github.com/ansible/ansible/issues/80887)
- dnf5 - Use ``transaction.check_gpg_signatures`` API call to check package signatures AND possibly to recover when keys are missing.
- dnf5 - fix module and package names in the message following failed module respawn attempt
- dnf5 - use the logs API to determine transaction problems
- dpkg_selections - check if the package exists before performing the selection operation (https://github.com/ansible/ansible/issues/81404).
- encrypt - deprecate passlib_or_crypt API (https://github.com/ansible/ansible/issues/55839).
- fetch - Handle unreachable errors properly (https://github.com/ansible/ansible/issues/27816)
- file modules - Make symbolic modes with X use the computed permission, not original file (https://github.com/ansible/ansible/issues/80128)
- file modules - fix validating invalid symbolic modes.
- first found lookup has been updated to use the normalized argument parsing (pythonic) matching the documented examples.
- first found lookup - fixed an issue with subsequent items clobbering information from previous ones.
- first_found lookup now gets 'untemplated' loop entries and handles templating itself as task_executor was removing even 'templatable' entries and breaking functionality. https://github.com/ansible/ansible/issues/70772
- galaxy - check if the target for symlink exists (https://github.com/ansible/ansible/pull/81586).
- galaxy - cross check the collection type and collection source (https://github.com/ansible/ansible/issues/79463).
- gather_facts parallel option was doing the reverse of what was stated; it now runs modules in parallel when True and serially when False.
- handlers - fix ``v2_playbook_on_notify`` callback not being called when notifying handlers
- handlers - the ``listen`` keyword can affect only one handler with the same name, the last one defined, as is the case with the ``notify`` keyword (https://github.com/ansible/ansible/issues/81013)
- include_role - expose variables from parent roles to role's handlers (https://github.com/ansible/ansible/issues/80459)
- inventory_ini - handle SyntaxWarning while parsing ini file in inventory (https://github.com/ansible/ansible/issues/81457) (a sketch appears below).
- iptables - remove default rule creation when creating iptables chain to be more similar to the command line utility (https://github.com/ansible/ansible/issues/80256).
- lib/ansible/utils/encrypt.py - remove unused private ``_LOCK`` (https://github.com/ansible/ansible/issues/81613)
- lookup/url.py - Fix incorrect var/env/ini entry for `force_basic_auth`
- man page build - Remove the dependency on the ``docs`` directory for building man pages.
- man page build - Sub commands of ``ansible-galaxy role`` and ``ansible-galaxy collection`` are now documented.
- module responses - Ensure that module responses are UTF-8, adhering to the JSON RFC and the expectations of the core code.
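
For the ``inventory_ini`` fix above: the INI inventory parser interprets values with ``ast.literal_eval``, which can surface ``SyntaxWarning`` on newer Pythons; a hedged sketch of containing it (not the plugin's exact code)::

    import ast
    import warnings

    def parse_value(text):
        with warnings.catch_warnings():
            # escalate SyntaxWarning so it can be handled like a parse error
            warnings.simplefilter('error', SyntaxWarning)
            try:
                return ast.literal_eval(text)
            except (SyntaxError, SyntaxWarning, ValueError):
                # fall back to treating the value as a plain string
                return text
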
- module/role argument spec - validate the type for options that are None when the option is required or has a non-None default (https://github.com/ansible/ansible/issues/79656).
- modules/user.py - Add check for valid directory when creating new user homedir (allows /dev/null as skeleton) (https://github.com/ansible/ansible/issues/75063)
- paramiko_ssh, psrp, and ssh connection plugins - ensure that all values for options that should be strings are actually converted to strings (https://github.com/ansible/ansible/pull/81029).
- password_hash - fix salt format for ``crypt`` (only used if ``passlib`` is not installed) for the ``bcrypt`` algorithm.
- pep517 build backend - Copy symlinks when copying the source tree. This avoids tracebacks in various scenarios, such as when a venv is present in the source tree.
- pep517 build backend - Use the documented ``import_module`` import from ``importlib``.
- pip module - Update module to prefer use of the python ``packaging`` and ``importlib.metadata`` modules due to ``pkg_resources`` being deprecated (https://github.com/ansible/ansible/issues/80488) (an illustration of the preferred APIs appears below).
- pkg_mgr.py - Fix `ansible_pkg_mgr` being incorrect in TencentOS Server Linux
- pkg_mgr.py - Fix `ansible_pkg_mgr` being unknown in Kylin Linux (https://github.com/ansible/ansible/issues/81332)
- powershell modules - Only set an rc of 1 if the PowerShell pipeline signaled an error occurred AND there are error records present. Previously it would do so whenever the error signal was present, without checking the error count.
- replace - handle exception when bad escape character is provided in replace (https://github.com/ansible/ansible/issues/79364).
- role deduplication - don't deduplicate before a role has had a task run for that particular host (https://github.com/ansible/ansible/issues/81486).
- service module - no longer permanently configures flags on OpenBSD when enabling/disabling a service.
- service module - enable/disable is no longer an exclusive action in check mode.
- setup gather_timeout - Fix timeout in get_mounts_facts for Linux.
- setup module (fact gathering) will now try to be smarter about different versions of facter that emit an error when the --puppet flag is used without puppet.
- syntax check - Limit ``--syntax-check`` to ``ansible-playbook`` only, as that is the only CLI affected by this argument (https://github.com/ansible/ansible/issues/80506)
- tarfile - handle data filter deprecation warning message for extract and extractall (https://github.com/ansible/ansible/issues/80832).
- template - Fix for formatting issues when a template path contains a valid jinja/strftime pattern (especially a line-break one) and the template path is used in ansible_managed (https://github.com/ansible/ansible/pull/79129)
- templating - In the template action and lookup, use local jinja2 environment overlay overrides instead of mutating the templar's environment
- templating - prevent setting arbitrary attributes on Jinja2 environments via Jinja2 overrides in templates
- templating escape and single var optimization now use correct delimiters when custom ones are provided either via task or template header.
- unarchive - fix unarchiving sources that are copied to the remote node using a relative temporary directory path (https://github.com/ansible/ansible/issues/80710).
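
For the ``pip`` module change above, the preferred replacements for the deprecated ``pkg_resources`` API look roughly like this (a generic illustration, not the module's exact code)::

    from importlib.metadata import PackageNotFoundError, distribution
    from packaging.version import Version

    def installed_version(name):
        # replaces pkg_resources.get_distribution(name).version
        try:
            return Version(distribution(name).version)
        except PackageNotFoundError:
            return None
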
- uri - fix search for JSON type to include complex strings containing '+'
- uri/urls - Add compat function to handle the ability to parse the filename from a Content-Disposition header (https://github.com/ansible/ansible/issues/81806)
- urls.py - fixed cert_file and key_file parameters when running on Python 3.12 - https://github.com/ansible/ansible/issues/80490
- user - set expiration value correctly when unable to retrieve the current value from the system (https://github.com/ansible/ansible/issues/71916)
- validate-modules sanity test - replace semantic markup parsing and validating code with the code from `antsibull-docs-parser 0.2.0 `__ (https://github.com/ansible/ansible/pull/80406).
- vars_prompt - internally convert the ``unsafe`` value to ``bool``
- vault and unvault filters now properly take the ``vault_id`` parameter.
- win_fetch - Add support for using files with wildcards in the file name. (https://github.com/ansible/ansible/issues/73128)
- winrm - Better handle send input failures when communicating with hosts under load

Known Issues
------------

- ansible-galaxy - dies in the middle of installing a role when that role contains Java inner classes (files with $ in the file name). This is by design, to exclude temporary or backup files. (https://github.com/ansible/ansible/pull/81553).
- ansible-test - The ``pep8`` sanity test is unable to detect f-string spacing issues (E201, E202) on Python 3.10 and 3.11. They are correctly detected under Python 3.12. See (https://github.com/PyCQA/pycodestyle/issues/1190).

ansible-core-2.16.3/changelogs/changelog.yaml0000644000000000000000000015010314556006441017636 0ustar00rootrootancestor: 2.15.0 releases: 2.16.0: changes: bugfixes: - ansible-test - Fix parsing of cgroup entries which contain a ``:`` in the path (https://github.com/ansible/ansible/issues/81977). release_summary: '| Release Date: 2023-11-06 | `Porting Guide `__ ' codename: All My Love fragments: - 2.16.0_summary.yaml - ansible-test-cgroup-split.yml release_date: '2023-11-06' 2.16.0b1: changes: breaking_changes: - Any plugin using the config system and the `cli` entry to use the `timeout` from the command line will see the value change if the user had configured it in any of the lower precedence methods. If relying on this behaviour to consume the global/generic timeout from the DEFAULT_TIMEOUT constant, please consult the documentation on plugin configuration to add the overlapping entries. - ansible-test - Test plugins that rely on containers no longer support reusing running containers. The previous behavior was an undocumented, untested feature. - service module will not permanently configure variables/flags for OpenBSD when doing enable/disable operations anymore; this module was never meant to do this type of work, just to manage the service state itself. An rcctl_config or similar module should be created and used instead. bugfixes: - Allow for searching handler subdir for included task via include_role (https://github.com/ansible/ansible/issues/81722) - AnsibleModule.run_command - Only use selectors when needed, and rely on Python stdlib subprocess for the simple task of collecting stdout/stderr when prompt matching is not required. - Display - Defensively configure writing to stdout and stderr with a custom encoding error handler that will replace invalid characters while providing a deprecation warning that non-utf8 text will result in an error in a future version. - Exclude internal options from man pages and docs. - Fix ``ansible-config init`` man page option indentation.
- Fix ``ast`` deprecation warnings for ``Str`` and ``value.s`` when using Python 3.12. - Fix exceptions caused by various inputs when performing arg splitting or parsing key/value pairs. Resolves issue https://github.com/ansible/ansible/issues/46379 and issue https://github.com/ansible/ansible/issues/61497 - Fix incorrect parsing of multi-line Jinja2 blocks when performing arg splitting or parsing key/value pairs. - Fix post-validating looped task fields so the strategy uses the correct values after task execution. - Fixed `pip` module failure when quotes are used in the `virtualenv_command` option for the venv command. (https://github.com/ansible/ansible/issues/76372) - From issue https://github.com/ansible/ansible/issues/80880, when notifying a handler from another handler, handler notifications must be registered immediately as the flush_handler call is not recursive. - Import ``FILE_ATTRIBUTES`` from ``ansible.module_utils.common.file`` in ``ansible.module_utils.basic`` instead of defining it twice. - Inventory scripts parser - no longer silently swallows exceptions when getting host vars; an execution error is raised instead (https://github.com/ansible/ansible/issues/81103) - On Python 3 use datetime methods ``fromtimestamp`` and ``now`` with UTC timezone instead of ``utcfromtimestamp`` and ``utcnow``, which are deprecated in Python 3.12. - PluginLoader - fix Jinja plugin performance issues (https://github.com/ansible/ansible/issues/79652) - PowerShell - Remove some code which is no longer valid for dotnet 5+ - Prevent running same handler multiple times when included via ``include_role`` (https://github.com/ansible/ansible/issues/73643) - Prompting - add a short sleep between polling for user input to reduce CPU consumption (https://github.com/ansible/ansible/issues/81516). - Properly disable ``jinja2_native`` in the template module when jinja2 override is used in the template (https://github.com/ansible/ansible/issues/80605) - Remove unreachable parser error for removed ``static`` parameter of ``include_role`` - Replace uses of ``configparser.ConfigParser.readfp()`` which was removed in Python 3.12 with ``configparser.ConfigParser.read_file()`` (https://github.com/ansible/ansible/issues/81656) - Set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now always return a ``list``, never a ``set``. Previously, a ``set`` would be returned if the inputs were a hashable type such as ``str``, instead of a collection, such as a ``list`` or ``tuple``. - Set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now use set operations when the given items are hashable. Previously, list operations were performed unless the inputs were a hashable type such as ``str``, instead of a collection, such as a ``list`` or ``tuple``. - Switch result queue from a ``multiprocessing.queues.Queue`` to ``multiprocessing.queues.SimpleQueue``, primarily to allow properly handling pickling errors, to prevent an infinite hang waiting for task results - The ``ansible-config init`` command now has a documentation description. - The ``ansible-galaxy collection download`` command now has a documentation description. - The ``ansible-galaxy collection install`` command documentation is now visible (previously hidden by a decorator). - The ``ansible-galaxy collection verify`` command now has a documentation description. - The ``ansible-galaxy role install`` command documentation is now visible (previously hidden by a decorator).
- The ``ansible-inventory`` command now has a documentation description (previously used as the epilog). - The ``hostname`` module now also updates both current and permanent hostname on OpenBSD. Before it only updated the permanent hostname (https://github.com/ansible/ansible/issues/80520). - Update module_utils.urls unit test to work with cryptography >= 41.0.0. - When generating man pages, use ``func`` to find the command function instead of looking it up by the command name. - '``StrategyBase._process_pending_results`` - create a ``Templar`` on demand for templating ``changed_when``/``failed_when``.' - '``ansible-galaxy`` now considers all collection paths when identifying which collection requirements are already installed. Use the ``COLLECTIONS_PATHS`` and ``COLLECTIONS_SCAN_SYS_PATHS`` config options to modify these. Previously only the install path was considered when resolving the candidates. The install path will remain the only one potentially modified. (https://github.com/ansible/ansible/issues/79767, https://github.com/ansible/ansible/issues/81163)' - '``ansible.module_utils.service`` - ensure binary data transmission in ``daemonize()``' - '``ansible.module_utils.service`` - fix inter-process communication in ``daemonize()``' - '``pkg_mgr`` - fix the default dnf version detection' - ansiballz - Prevent issue where the time on the control host could change part way through building the ansiballz file, potentially causing a pre-1980 date to be used during ansiballz unpacking leading to a zip file error (https://github.com/ansible/ansible/issues/80089) - ansible terminal color settings were incorrectly limited to 16 options via 'choices'; the restriction has been removed so all 256 can be accessed. - ansible-console - fix filtering by collection names when a collection search path was set (https://github.com/ansible/ansible/pull/81450). - ansible-galaxy - Enabled the ``data`` tarfile filter during role installation for Python versions that support it. A probing mechanism is used to avoid Python versions with a broken implementation. - ansible-galaxy - Fix issue installing collections containing directories with more than 100 characters on python versions before 3.10.6 - ansible-galaxy - Fix variable type error when installing subdir collections (https://github.com/ansible/ansible/issues/80943) - ansible-galaxy - fix installing collections from directories that have a trailing path separator (https://github.com/ansible/ansible/issues/77803). - ansible-galaxy - fix installing signed collections (https://github.com/ansible/ansible/issues/80648). - ansible-galaxy - reduce API calls to servers by fetching signatures only for final candidates. - ansible-galaxy - started allowing the use of pre-releases for collections that do not have any stable versions published. (https://github.com/ansible/ansible/pull/81606) - ansible-galaxy - started allowing the use of pre-releases for dependencies on any level of the dependency tree that specifically demand exact pre-release versions of collections and not version ranges. (https://github.com/ansible/ansible/pull/81606) - ansible-galaxy collection verify - fix verifying signed collections when the keyring is not configured. - ansible-test - Add support for ``argcomplete`` version 3. - ansible-test - All containers created by ansible-test now include the current test session ID in their name. This avoids conflicts between concurrent ansible-test invocations using the same container host.
- ansible-test - Always use ansible-test managed entry points for ansible-core CLI tools when not running from source. This fixes issues where CLI entry points created during install are not compatible with ansible-test. - ansible-test - Fix a traceback that occurs when attempting to test Ansible source using a different ansible-test. A clear error message is now given when this scenario occurs. - ansible-test - Fix handling of timeouts exceeding one day. - ansible-test - Fix several possible tracebacks when using the ``-e`` option with sanity tests. - ansible-test - Fix various cases where the test timeout could expire without terminating the tests. - ansible-test - Pre-build a PyYAML wheel before installing requirements to avoid a potential Cython build failure. - ansible-test - Remove redundant warning about missing programs before attempting to execute them. - ansible-test - The ``import`` sanity test now checks the collection loader for remote-only Python support when testing ansible-core. - ansible-test - Unit tests now report warnings generated during test runs. Previously only warnings generated during test collection were reported. - ansible-test - Update ``pylint`` to 2.17.2 to resolve several possible false positives. - ansible-test - Update ``pylint`` to 2.17.3 to resolve several possible false positives. - ansible-test - Use ``raise ... from ...`` when raising exceptions from within an exception handler. - ansible-test - When bootstrapping remote FreeBSD instances, use the OS packaged ``setuptools`` instead of installing the latest version from PyPI. - ansible-test local change detection - use ``git merge-base HEAD`` instead of ``git merge-base --fork-point `` (https://github.com/ansible/ansible/pull/79734). - ansible-vault - fail when the destination file location is not writable before performing encryption (https://github.com/ansible/ansible/issues/81455). - apt - ignore fail_on_autoremove and allow_downgrade parameters when using aptitude (https://github.com/ansible/ansible/issues/77868). - blockinfile - avoid crash with Python 3 if creating the directory fails when ``create=true`` (https://github.com/ansible/ansible/pull/81662). - connection timeouts defined in ansible.cfg will now be properly used; the --timeout cli option was obscuring them by always being set. - copy - print correct destination filename when using `content` and `--diff` (https://github.com/ansible/ansible/issues/79749). - copy unit tests - Fixing "dir all perms" documentation and formatting for easier reading. - core will now also look at the connection plugin when forcing the 'local' interpreter for networking path compatibility, as ansible_network_os alone could be misleading. - deb822_repository - use http-agent for receiving content (https://github.com/ansible/ansible/issues/80809). - debconf - fix idempotency for questions with type 'password' (https://github.com/ansible/ansible/issues/47676). - distribution facts - fix Source Mage family mapping - dnf - fix a failure when a package from URI was specified and ``update_only`` was set (https://github.com/ansible/ansible/issues/81376). - dnf5 - Update dnf5 module to handle API change for setting the download directory (https://github.com/ansible/ansible/issues/80887) - dnf5 - Use ``transaction.check_gpg_signatures`` API call to check package signatures AND possibly to recover when keys are missing.
- dnf5 - fix module and package names in the message following failed module respawn attempt - dnf5 - use the logs API to determine transaction problems - dpkg_selections - check if the package exists before performing the selection operation (https://github.com/ansible/ansible/issues/81404). - encrypt - deprecate passlib_or_crypt API (https://github.com/ansible/ansible/issues/55839). - fetch - Handle unreachable errors properly (https://github.com/ansible/ansible/issues/27816) - file modules - Make symbolic modes with X use the computed permission, not original file (https://github.com/ansible/ansible/issues/80128) - file modules - fix validating invalid symbolic modes. - first found lookup has been updated to use the normalized argument parsing (pythonic) matching the documented examples. - first found lookup - fixed an issue with subsequent items clobbering information from previous ones. - first_found lookup now gets 'untemplated' loop entries and handles templating itself as task_executor was removing even 'templatable' entries and breaking functionality. https://github.com/ansible/ansible/issues/70772 - galaxy - check if the target for symlink exists (https://github.com/ansible/ansible/pull/81586). - galaxy - cross check the collection type and collection source (https://github.com/ansible/ansible/issues/79463). - gather_facts parallel option was doing the reverse of what was stated; it now runs modules in parallel when True and serially when False. - handlers - fix ``v2_playbook_on_notify`` callback not being called when notifying handlers - handlers - the ``listen`` keyword can affect only one handler with the same name, the last one defined, as is the case with the ``notify`` keyword (https://github.com/ansible/ansible/issues/81013) - include_role - expose variables from parent roles to role's handlers (https://github.com/ansible/ansible/issues/80459) - inventory_ini - handle SyntaxWarning while parsing ini file in inventory (https://github.com/ansible/ansible/issues/81457). - iptables - remove default rule creation when creating iptables chain to be more similar to the command line utility (https://github.com/ansible/ansible/issues/80256). - lib/ansible/utils/encrypt.py - remove unused private ``_LOCK`` (https://github.com/ansible/ansible/issues/81613) - lookup/url.py - Fix incorrect var/env/ini entry for `force_basic_auth` - man page build - Remove the dependency on the ``docs`` directory for building man pages. - man page build - Sub commands of ``ansible-galaxy role`` and ``ansible-galaxy collection`` are now documented. - module responses - Ensure that module responses are UTF-8, adhering to the JSON RFC and the expectations of the core code. - module/role argument spec - validate the type for options that are None when the option is required or has a non-None default (https://github.com/ansible/ansible/issues/79656). - modules/user.py - Add check for valid directory when creating new user homedir (allows /dev/null as skeleton) (https://github.com/ansible/ansible/issues/75063) - paramiko_ssh, psrp, and ssh connection plugins - ensure that all values for options that should be strings are actually converted to strings (https://github.com/ansible/ansible/pull/81029). - password_hash - fix salt format for ``crypt`` (only used if ``passlib`` is not installed) for the ``bcrypt`` algorithm. - pep517 build backend - Copy symlinks when copying the source tree. This avoids tracebacks in various scenarios, such as when a venv is present in the source tree.
- pep517 build backend - Use the documented ``import_module`` import from ``importlib``. - pip module - Update module to prefer use of the python ``packaging`` and ``importlib.metadata`` modules due to ``pkg_resources`` being deprecated (https://github.com/ansible/ansible/issues/80488) - pkg_mgr.py - Fix `ansible_pkg_mgr` being incorrect in TencentOS Server Linux - pkg_mgr.py - Fix `ansible_pkg_mgr` being unknown in Kylin Linux (https://github.com/ansible/ansible/issues/81332) - powershell modules - Only set an rc of 1 if the PowerShell pipeline signaled an error occurred AND there are error records present. Previously it would do so whenever the error signal was present, without checking the error count. - replace - handle exception when bad escape character is provided in replace (https://github.com/ansible/ansible/issues/79364). - role deduplication - don't deduplicate before a role has had a task run for that particular host (https://github.com/ansible/ansible/issues/81486). - service module - no longer permanently configures flags on OpenBSD when enabling/disabling a service. - service module - enable/disable is no longer an exclusive action in check mode. - setup gather_timeout - Fix timeout in get_mounts_facts for Linux. - setup module (fact gathering) will now try to be smarter about different versions of facter that emit an error when the --puppet flag is used without puppet. - syntax check - Limit ``--syntax-check`` to ``ansible-playbook`` only, as that is the only CLI affected by this argument (https://github.com/ansible/ansible/issues/80506) - tarfile - handle data filter deprecation warning message for extract and extractall (https://github.com/ansible/ansible/issues/80832). - template - Fix for formatting issues when a template path contains a valid jinja/strftime pattern (especially a line-break one) and the template path is used in ansible_managed (https://github.com/ansible/ansible/pull/79129) - templating - In the template action and lookup, use local jinja2 environment overlay overrides instead of mutating the templar's environment - templating - prevent setting arbitrary attributes on Jinja2 environments via Jinja2 overrides in templates - templating escape and single var optimization now use correct delimiters when custom ones are provided either via task or template header. - unarchive - fix unarchiving sources that are copied to the remote node using a relative temporary directory path (https://github.com/ansible/ansible/issues/80710). - uri - fix search for JSON type to include complex strings containing '+' - urls.py - fixed cert_file and key_file parameters when running on Python 3.12 - https://github.com/ansible/ansible/issues/80490 - user - set expiration value correctly when unable to retrieve the current value from the system (https://github.com/ansible/ansible/issues/71916) - validate-modules sanity test - replace semantic markup parsing and validating code with the code from `antsibull-docs-parser 0.2.0 `__ (https://github.com/ansible/ansible/pull/80406). - vars_prompt - internally convert the ``unsafe`` value to ``bool`` - vault and unvault filters now properly take the ``vault_id`` parameter. - win_fetch - Add support for using files with wildcards in the file name.
(https://github.com/ansible/ansible/issues/73128) deprecated_features: - Deprecated ini config option ``collections_paths``, use the singular form ``collections_path`` instead - Deprecated the env var ``ANSIBLE_COLLECTIONS_PATHS``, use the singular form ``ANSIBLE_COLLECTIONS_PATH`` instead - Support for Windows Server 2012 and 2012 R2 has been removed, as Microsoft's support for these versions ended on October 10th, 2023. These versions of Windows will no longer be tested in this Ansible release and it cannot be guaranteed that they will continue to work going forward. - '``STRING_CONVERSION_ACTION`` config option is deprecated as it is no longer used in the Ansible Core code base.' - the 'smart' option for setting a connection plugin is being removed as its main purpose (choosing between ssh and paramiko) is now irrelevant. - vault and unvault filters - the undocumented ``vaultid`` parameter is deprecated and will be removed in ansible-core 2.20. Use ``vault_id`` instead. - yum_repository - deprecated parameter 'keepcache' (https://github.com/ansible/ansible/issues/78693). known_issues: - ansible-galaxy - dies in the middle of installing a role when that role contains Java inner classes (files with $ in the file name). This is by design, to exclude temporary or backup files. (https://github.com/ansible/ansible/pull/81553). - ansible-test - The ``pep8`` sanity test is unable to detect f-string spacing issues (E201, E202) on Python 3.10 and 3.11. They are correctly detected under Python 3.12. See (https://github.com/PyCQA/pycodestyle/issues/1190). minor_changes: - Add Python type hints to the Display class (https://github.com/ansible/ansible/issues/80841) - Add ``GALAXY_COLLECTIONS_PATH_WARNING`` option to disable the warning given by ``ansible-galaxy collection install`` when installing a collection to a path that isn't in the configured collection paths. - Add ``python3.12`` to the default ``INTERPRETER_PYTHON_FALLBACK`` list. - Add ``utcfromtimestamp`` and ``utcnow`` to ``ansible.module_utils.compat.datetime`` to return fixed offset datetime objects. - Add a general ``GALAXY_SERVER_TIMEOUT`` config option for distribution servers (https://github.com/ansible/ansible/issues/79833). - Added Python type annotation to connection plugins - CLI argument parsing - Automatically prepend to the help of CLI arguments that support being specified multiple times. (https://github.com/ansible/ansible/issues/22396) - DEFAULT_TRANSPORT now defaults to 'ssh'; the old 'smart' option is being deprecated as versions of OpenSSH without control persist are basically not present anymore. - Documentation for set filters ``intersect``, ``difference``, ``symmetric_difference`` and ``union`` now states that the returned list items are in arbitrary order. - Record ``removal_date`` in runtime metadata as a string instead of a date. - Remove the ``CleansingNodeVisitor`` class and its usage due to the templating changes that made it superfluous. Also simplify the ``Conditional`` class. - Removed ``exclude`` and ``recursive-exclude`` commands for generated files from the ``MANIFEST.in`` file. These excludes were unnecessary since releases are expected to be built with a clean worktree. - Removed ``exclude`` commands for sanity test files from the ``MANIFEST.in`` file. These tests were previously excluded because they did not pass when run from an sdist. However, sanity tests are not expected to pass from an sdist, so excluding some (but not all) of the failing tests makes little sense.
- Removed redundant ``include`` commands from the ``MANIFEST.in`` file. These includes either duplicated default behavior or another command. - The ``ansible-core`` sdist no longer contains pre-generated man pages. Instead, a ``packaging/cli-doc/build.py`` script is included in the sdist. This script can generate man pages and standalone RST documentation for ``ansible-core`` CLI programs. - The ``docs`` and ``examples`` directories are no longer included in the ``ansible-core`` sdist. These directories have been moved to the https://github.com/ansible/ansible-documentation repository. - The minimum required ``setuptools`` version is now 66.1.0, as it is the oldest version to support Python 3.12. - Update ``ansible_service_mgr`` fact to include init system for SMGL OS family - Use ``ansible.module_utils.common.text.converters`` instead of ``ansible.module_utils._text``. - Use ``importlib.resources.abc.TraversableResources`` instead of deprecated ``importlib.abc.TraversableResources`` where available (https://github.com/ansible/ansible/pull/81082). - Use ``include`` where ``recursive-include`` is unnecessary in the ``MANIFEST.in`` file. - Use ``package_data`` instead of ``include_package_data`` for ``setup.cfg`` to avoid ``setuptools`` warnings. - Utilize gpg check provided internally by the ``transaction.run`` method as opposed to calling it manually. - '``Templar`` - do not add the ``dict`` constructor to ``globals`` as all required Jinja2 versions already do so' - ansible-doc - allow filtering the listing of collections and metadata dump by more than one collection (https://github.com/ansible/ansible/pull/81450). - ansible-galaxy - Add a plural option to improve ignoring multiple signature error status codes when installing or verifying collections. A space-separated list of error codes can follow --ignore-signature-status-codes in addition to specifying --ignore-signature-status-code multiple times (for example, ``--ignore-signature-status-codes NO_PUBKEY UNEXPECTED``). - ansible-galaxy - Remove internal configuration argument ``v3`` (https://github.com/ansible/ansible/pull/80721) - ansible-galaxy - add note to the collection dependency resolver error message about pre-releases if ``--pre`` was not provided (https://github.com/ansible/ansible/issues/80048). - ansible-galaxy - used to crash out with an "Errno 20 Not a directory" error when extracting files from a role when hitting a file with an illegal name (https://github.com/ansible/ansible/pull/81553). Now it gives a warning identifying the culprit file and the rule violation (e.g., ``my$class.jar`` has a ``$`` in the name) before crashing out, giving the user a chance to remove the invalid file and try again. (https://github.com/ansible/ansible/pull/81555). - ansible-test - Add Alpine 3.18 to remotes - ansible-test - Add Fedora 38 container. - ansible-test - Add Fedora 38 remote. - ansible-test - Add FreeBSD 13.2 remote. - ansible-test - Add new pylint checker for new ``# deprecated:`` comments within code to trigger errors when it is time to remove code that has no user facing deprecation message. Only supported in ansible-core, not collections. - ansible-test - Add support for RHEL 8.8 remotes. - ansible-test - Add support for RHEL 9.2 remotes. - ansible-test - Add support for testing with Python 3.12. - ansible-test - Allow float values for the ``--timeout`` option to the ``env`` command. This simplifies testing. - ansible-test - Enable ``thread`` code coverage in addition to the existing ``multiprocessing`` coverage.
- ansible-test - RHEL 8.8 provisioning can now be used with the ``--python 3.11`` option. - ansible-test - RHEL 9.2 provisioning can now be used with the ``--python 3.11`` option. - ansible-test - Refactored ``env`` command logic and timeout handling. - ansible-test - Remove Fedora 37 remote support. - ansible-test - Remove Fedora 37 test container. - ansible-test - Remove Python 3.8 and 3.9 from RHEL 8.8. - ansible-test - Remove obsolete embedded script for configuring WinRM on Windows remotes. - ansible-test - Removed Ubuntu 20.04 LTS image from the `--remote` option. - ansible-test - Removed `freebsd/12.4` remote. - ansible-test - Removed `freebsd/13.1` remote. - 'ansible-test - Removed test remotes: rhel/8.7, rhel/9.1' - ansible-test - Removed the deprecated ``--docker-no-pull`` option. - ansible-test - Removed the deprecated ``--no-pip-check`` option. - ansible-test - Removed the deprecated ``foreman`` test plugin. - ansible-test - Removed the deprecated ``govcsim`` support from the ``vcenter`` test plugin. - ansible-test - Replace the ``pytest-forked`` pytest plugin with a custom plugin. - ansible-test - The ``no-get-exception`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``get_exception`` usage. - ansible-test - The ``replace-urlopen`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``urlopen`` usage. - ansible-test - The ``use-compat-six`` sanity test is now limited to plugins in collections. Previously any Python file in a collection was checked for ``six`` usage. - ansible-test - The openSUSE test container has been updated to openSUSE Leap 15.5. - ansible-test - Update pip to ``23.1.2`` and setuptools to ``67.7.2``. - ansible-test - Update the ``default`` containers. - ansible-test - Update the ``nios-test-container`` to version 2.0.0, which supports API version 2.9. - ansible-test - Update the logic used to detect when ``ansible-test`` is running from source. - ansible-test - Updated the CloudStack test container to version 1.6.1. - ansible-test - Updated the distro test containers to version 6.3.0 to include coverage 7.3.2 for Python 3.8+. The alpine3 container is now based on 3.18 instead of 3.17 and includes Python 3.11 instead of Python 3.10. - ansible-test - Use ``datetime.datetime.now`` with ``tz`` specified instead of ``datetime.datetime.utcnow``. - ansible-test - Use a context manager to perform cleanup at exit instead of using the built-in ``atexit`` module. - ansible-test - remove Alpine 3.17 from remotes - "ansible-test \u2014 Python 3.8\u20133.12 will use ``coverage`` v7.3.2." - "ansible-test \u2014 ``coverage`` v6.5.0 is to be used only under Python 3.7." - 'ansible-vault create: Now raises an error when opening the editor without tty. The flag --skip-tty-check restores previous behaviour.' - ansible_user_module - tweaked macos user defaults to reflect expected defaults (https://github.com/ansible/ansible/issues/44316) - apt - return calculated diff while running apt clean operation. - blockinfile - add append_newline and prepend_newline options (https://github.com/ansible/ansible/issues/80835). - cli - Added short option '-J' for asking for vault password (https://github.com/ansible/ansible/issues/80523). 
- command - Add option ``expand_argument_vars`` to disable argument expansion and use literal values - https://github.com/ansible/ansible/issues/54162 - config lookup - new option show_origin to also return the origin of a configuration value. - display methods for warning and deprecation are now proxied to main process when issued from a fork. This allows for the deduplication of warnings and deprecations to work globally. - dnf5 - enable environment groups installation testing in CI as its support was added. - dnf5 - enable the now-implemented ``cacheonly`` functionality - executor now skips persistent connection when it detects an action that does not require a connection. - find module - Add ability to filter based on modes - gather_facts will now use the gather_timeout setting to limit parallel execution of modules that do not themselves use gather_timeout. - group - remove extraneous warning shown when user does not exist (https://github.com/ansible/ansible/issues/77049). - include_vars - os.walk now follows symbolic links when traversing directories (https://github.com/ansible/ansible/pull/80460) - module compression is now sourced directly via config, bypassing play_context's possibly stale values. - reboot - show last error message in verbose logs (https://github.com/ansible/ansible/issues/81574). - service_facts now returns more info for rcctl-managed systems (OpenBSD). - tasks - the ``retries`` keyword can be specified without ``until`` in which case the task is retried until it succeeds but at most ``retries`` times (https://github.com/ansible/ansible/issues/20802) - user - add new option ``password_expire_warn`` (supported on Linux only) to set the number of days of warning before a password change is required (https://github.com/ansible/ansible/issues/79882). - yum_repository - Align module documentation with parameters release_summary: '| Release Date: 2023-09-26 | `Porting Guide `__ ' removed_features: - ActionBase - remove deprecated ``_remote_checksum`` method - PlayIterator - remove deprecated ``cache_block_tasks`` and ``get_original_task`` methods - Remove deprecated ``FileLock`` class - Removed Python 3.9 as a supported version on the controller. Python 3.10 or newer is required. - Removed ``include`` which has been deprecated in Ansible 2.12. Use ``include_tasks`` or ``import_tasks`` instead. - '``Templar`` - remove deprecated ``shared_loader_obj`` parameter of ``__init__``' - '``fetch_url`` - remove auto disabling ``decompress`` when gzip is not available' - '``get_action_args_with_defaults`` - remove deprecated ``redirected_names`` method parameter' - ansible-test - Removed support for the remote Windows targets 2012 and 2012-R2 - inventory_cache - remove deprecated ``default.fact_caching_prefix`` ini configuration option, use ``defaults.fact_caching_prefix`` instead. - module_utils/basic.py - Removed Python 3.5 as a supported remote version. Python 2.7 or Python 3.6+ is now required. - stat - removed unused `get_md5` parameter.
codename: All My Love fragments: - 2.16.0b1_summary.yaml - 20802-until-default.yml - 22396-indicate-which-args-are-multi.yml - 27816-fetch-unreachable.yml - 50603-tty-check.yaml - 71916-user-expires-int.yml - 73643-handlers-prevent-multiple-runs.yml - 74723-support-wildcard-win_fetch.yml - 75063-allow-dev-nul-as-skeleton-for-new-homedir.yml - 76372-fix-pip-virtualenv-command-parsing.yml - 78487-galaxy-collections-path-warnings.yml - 79129-ansible-managed-filename-format.yaml - 79364_replace.yml - 79677-fix-argspec-type-check.yml - 79734-ansible-test-change-detection.yml - 79844-fix-timeout-mounts-linux.yml - 79999-ansible-user-tweak-macos-defaults.yaml - 80089-prevent-module-build-date-issue.yml - 80128-symbolic-modes-X-use-computed.yml - 80257-iptables-chain-creation-does-not-populate-a-rule.yml - 80258-defensive-display-non-utf8.yml - 80334-reduce-ansible-galaxy-api-calls.yml - 80406-validate-modules-semantic-markup.yml - 80449-fix-symbolic-mode-error-msg.yml - 80459-handlers-nested-includes-vars.yml - 80460-add-symbolic-links-with-dir.yml - 80476-fix-loop-task-post-validation.yml - 80488-pip-pkg-resources.yml - 80506-syntax-check-playbook-only.yml - 80520-fix-current-hostname-openbsd.yml - 80523_-_adding_short_option_for_--ask-vault-pass.yml - 80605-template-overlay-native-jinja.yml - 80648-fix-ansible-galaxy-cache-signatures-bug.yml - 80721-ansible-galaxy.yml - 80738-abs-unarachive-src.yml - 80841-display-type-annotation.yml - 80880-register-handlers-immediately-if-iterating-handlers.yml - 80887-dnf5-api-change.yml - 80943-ansible-galaxy-collection-subdir-install.yml - 80968-replace-deprecated-ast-attr.yml - 80985-fix-smgl-family-mapping.yml - 81005-use-overlay-overrides.yml - 81013-handlers-listen-last-defined-only.yml - 81029-connection-types.yml - 81064-daemonize-fixes.yml - 81082-deprecated-importlib-abc.yml - 81083-add-blockinfile-append-and-prepend-new-line-options.yml - 81104-inventory-script-plugin-raise-execution-error.yml - 81319-cloudstack-test-container-bump-version.yml - 81332-fix-pkg-mgr-in-kylin.yml - 81450-list-filters.yml - 81494-remove-duplicated-file-attribute-constant.yml - 81555-add-warning-for-illegal-filenames-in-roles.yaml - 81584-daemonize-follow-up-fixes.yml - 81606-ansible-galaxy-collection-pre-releases.yml - 81613-remove-unusued-private-lock.yml - 81656-cf_readfp-deprecated.yml - 81662-blockinfile-exc.yml - 81722-handler-subdir-include_tasks.yml - CleansingNodeVisitor-removal.yml - a-g-col-install-directory-with-trailing-sep.yml - a-g-col-prevent-reinstalling-satisfied-req.yml - a_test_rmv_alpine_317.yml - add-missing-cli-docs.yml - ag-ignore-multiple-signature-statuses.yml - ansible-galaxy-server-timeout.yml - ansible-runtime-metadata-removal-date.yml - ansible-test-added-fedora-38.yml - ansible-test-argcomplete-3.yml - ansible-test-atexit.yml - ansible-test-coverage-update.yml - ansible-test-default-containers.yml - ansible-test-deprecated-cleanup.yml - ansible-test-distro-containers.yml - ansible-test-entry-points.yml - ansible-test-explain-traceback.yml - ansible-test-fedora-37.yml - ansible-test-freebsd-bootstrap-setuptools.yml - ansible-test-import-sanity-fix.yml - ansible-test-layout-detection.yml - ansible-test-long-timeout-fix.yml - ansible-test-minimum-setuptools.yml - ansible-test-nios-container.yml - ansible-test-pylint-update.yml - ansible-test-pytest-forked.yml - ansible-test-python-3.12.yml - ansible-test-pyyaml-build.yml - ansible-test-remove-old-rhel-remotes.yml - ansible-test-remove-ubuntu-2004.yml - ansible-test-rhel-9.2-python-3.11.yml - 
ansible-test-rhel-9.2.yml - ansible-test-sanity-scope.yml - ansible-test-source-detection.yml - ansible-test-thread-coverage.yml - ansible-test-timeout-fix.yml - ansible-test-unique-container-names.yml - ansible-test-use-raise-from.yml - ansible-test-utcnow.yml - ansible-test-winrm-config.yml - ansible-vault.yml - ansible_test_alpine_3.18.yml - apt_fail_on_autoremove.yml - aptclean_diff.yml - basestrategy-lazy-templar.yml - ci_freebsd_new.yml - collections_paths-deprecation.yml - colors.yml - command-expand-args.yml - config_origins_option.yml - connection-type-annotation.yml - copy_diff.yml - deb822_open_url.yml - debconf.yml - deprecated_string_conversion_action.yml - display_proxy.yml - dnf-update-only-latest.yml - dnf5-cacheonly.yml - dnf5-fix-interpreter-fail-msg.yml - dnf5-gpg-check-api.yml - dnf5-gpg-check-builtin.yml - dnf5-logs-api.yml - dnf5-test-env-groups.yml - dotnet-preparation.yml - dpkg_selections.yml - fbsd13_1_remove.yml - fetch_url-remove-auto-disable-decompress.yml - find-mode.yml - first_found_fixes.yml - first_found_template_fix.yml - fix-display-prompt-cpu-consumption.yml - fix-handlers-callback.yml - fix-pkg-mgr-in-TencentOS.yml - fix-setuptools-warnings.yml - fix-url-lookup-plugin-docs.yml - forced_local+fix+.yml - freebsd_12_4_removal.yml - galaxy_check_type.yml - galaxy_symlink.yml - gather_facts_fix_parallel.yml - get_action_args_with_defaults-remove-deprecated-arg.yml - group_warning.yml - inventory_cache-remove-deprecated-default-section.yml - inventory_ini.yml - jinja_plugin_cache_cleanup.yml - long-collection-paths-fix.yml - man-page-build-docs-dependency.yml - man-page-subcommands.yml - manifest-in-cleanup.yml - mc_from_config.yml - missing-doc-func.yml - no-arbitrary-j2-override.yml - omit-man-pages-from-sdist.yml - parsing-splitter-fixes.yml - passlib_or_crypt.yml - password_hash-fix-crypt-salt-bcrypt.yml - pep517-backend-import-fix.yml - pep517-backend-traceback-fix.yml - pep8-known-issue.yml - persist_skip.yml - pkg_mgr-default-dnf.yml - powershell-module-error-handling.yml - pre-release-hint-for-dep-resolution-error.yml - pylint-deprecated-comment-checker.yml - reboot.yml - remove-deprecated-actionbase-_remote_checksum.yml - remove-deprecated-datetime-methods.yml - remove-deprecated-filelock-class.yml - remove-docs-examples.yml - remove-include.yml - remove-play_iterator-deprecated-methods.yml - remove-python3.5.yml - remove-python3.9-controller-support.yml - remove-templar-shared_loader_obj-arg.yml - remove-unreachable-include_role-static-err.yml - remove_md5.yml - role-deduplication-condition.yml - run-command-selectors-prompt-only.yml - server2012-deprecation.yml - service_facts_rcctl.yml - service_facts_simpleinit_msb.yml - service_fix_obsd.yml - set-filters.yml - setup_facter_fix.yml - simple-result-queue.yml - smart_connection_bye.yml - suppressed-options.yml - tarfile_extract_warn.yml - templar-globals-dict.yml - templating_fixes.yml - text-converters.yml - timeout_config_fix.yml - update-maybe-json-uri.yml - urls-client-cert-py12.yml - urls-unit-test-latest-cryptography.yml - user-add-password-exp-warning.yml - v2.16.0-initial-commit.yaml - vault_unvault_id_fix.yml - yum-repository-docs-fixes.yml - yum_repository_keepcache.yml release_date: '2023-09-26' 2.16.0b2: changes: bugfixes: - '``import_role`` reverts to previous behavior of exporting vars at compile time.' - ansible-galaxy info - fix reporting no role found when lookup_role_by_name returns None. 
- uri/urls - Add compat function to handle the ability to parse the filename from a Content-Disposition header (https://github.com/ansible/ansible/issues/81806) - winrm - Better handle send input failures when communicating with hosts under load minor_changes: - ansible-test - When invoking ``sleep`` in containers during container setup, the ``env`` command is used to avoid invoking the shell builtin, if present. release_summary: '| Release Date: 2023-10-03 | `Porting Guide `__ ' security_fixes: - ansible-galaxy - Prevent roles from using symlinks to overwrite files outside of the installation directory (CVE-2023-5115) codename: All My Love fragments: - 2.16.0b2_summary.yaml - 81806-py2-content-disposition.yml - ansible-test-container-sleep.yml - cve-2023-5115.yml - fix-ansible-galaxy-info-no-role-found.yml - import_role_goes_public.yml - winrm-send-input.yml release_date: '2023-10-03' 2.16.0rc1: changes: bugfixes: - Cache host_group_vars after instantiating it once and limit the amount of repetitive work it needs to do every time it runs. - Call PluginLoader.all() once for vars plugins, and load vars plugins that run automatically or are enabled specifically by name subsequently. - Fix ``run_once`` being incorrectly interpreted on handlers (https://github.com/ansible/ansible/issues/81666) - Properly template tags in parent blocks (https://github.com/ansible/ansible/issues/81053) - ansible-galaxy - Provide a better error message when using a requirements file with an invalid format - https://github.com/ansible/ansible/issues/81901 - ansible-inventory - index available_hosts for major performance boost when dumping large inventories - ansible-test - Add a ``pylint`` plugin to work around a known issue on Python 3.12. - ansible-test - Include missing ``pylint`` requirements for Python 3.10. - ansible-test - Update ``pylint`` to version 3.0.1. deprecated_features: - Old style vars plugins which use the entrypoints `get_host_vars` or `get_group_vars` are deprecated. The plugin should be updated to inherit from `BaseVarsPlugin` and define a `get_vars` method as the entrypoint. minor_changes: - ansible-test - Make Python 3.12 the default version used in the ``base`` and ``default`` containers. release_summary: '| Release Date: 2023-10-16 | `Porting Guide `__ ' codename: All My Love fragments: - 2.16.0rc1_summary.yaml - 79945-host_group_vars-improvements.yml - 81053-templated-tags-inheritance.yml - 81666-handlers-run_once.yml - 81901-galaxy-requirements-format.yml - ansible-test-pylint3-update.yml - ansible-test-python-3.12-compat.yml - ansible-test-python-default.yml - inv_available_hosts_to_frozenset.yml release_date: '2023-10-16' 2.16.1: changes: release_summary: '| Release Date: 2023-12-04 | `Porting Guide `__ ' codename: All My Love fragments: - 2.16.1_summary.yaml release_date: '2023-12-04' 2.16.1rc1: changes: breaking_changes: - assert - Nested templating may result in an inability for the conditional to be evaluated. See the porting guide for more information. bugfixes: - Fix issue where an ``include_tasks`` handler in a role was not able to locate a file in ``tasks/`` when ``tasks_from`` was used as a role entry point and ``main.yml`` was not present (https://github.com/ansible/ansible/issues/82241) - Plugin loader does not dedupe nor cache filter/test plugins by file basename, but by full path name. - Restored the ability for filters/tests to have the same file base name but different tests/filters defined inside.
- ansible-pull now expands relative paths for the ``-d|--directory`` option before use. - ansible-pull will now correctly handle become and connection password file options for ansible-playbook. - flush_handlers - properly handle a handler failure in a nested block when ``force_handlers`` is set (http://github.com/ansible/ansible/issues/81532) - module no_log will no longer affect top level booleans, for example ``no_log_module_parameter='a'`` will no longer hide ``changed=False`` as a 'no log value' (matches 'a'). - role params now have higher precedence than host facts again, matching documentation; this had unintentionally changed in 2.15. - wait_for should not handle 'non mmapable files' again. release_summary: '| Release Date: 2023-11-27 | `Porting Guide `__ ' security_fixes: - templating - Address issues where internal templating can cause unsafe variables to lose their unsafe designation (CVE-2023-5764) codename: All My Love fragments: - 2.16.1rc1_summary.yaml - 81532-fix-nested-flush_handlers.yml - 82241-handler-include-tasks-from.yml - cve-2023-5764.yml - j2_load_fix.yml - no_log_booly.yml - pull_file_secrets.yml - pull_unfrack_dest.yml - restore_role_param_precedence.yml - wait_for_mmap.yml release_date: '2023-11-27' 2.16.2: changes: bugfixes: - unsafe data - Address an incompatibility when iterating or getting a single index from ``AnsibleUnsafeBytes`` - unsafe data - Address an incompatibility with ``AnsibleUnsafeText`` and ``AnsibleUnsafeBytes`` when pickling with ``protocol=0`` release_summary: '| Release Date: 2023-12-11 | `Porting Guide `__ ' codename: All My Love fragments: - 2.16.2_summary.yaml - unsafe-fixes-2.yml release_date: '2023-12-11' 2.16.3: changes: release_summary: '| Release Date: 2024-01-29 | `Porting Guide `__ ' codename: All My Love fragments: - 2.16.3_summary.yaml release_date: '2024-01-29' 2.16.3rc1: changes: bugfixes: - Run all handlers with the same ``listen`` topic, even when notified from another handler (https://github.com/ansible/ansible/issues/82363). - '``ansible-galaxy role import`` - fix using the ``role_name`` in a standalone role''s ``galaxy_info`` metadata by disabling automatic removal of the ``ansible-role-`` prefix. This matches the behavior of the Galaxy UI which also no longer implicitly removes the ``ansible-role-`` prefix. Use the ``--role-name`` option or add a ``role_name`` to the ``galaxy_info`` dictionary in the role''s ``meta/main.yml`` to use an alternate role name.' - '``ansible-test sanity --test runtime-metadata`` - add ``action_plugin`` as a valid field for modules in the schema (https://github.com/ansible/ansible/pull/82562).' - ansible-config init will now dedupe ini entries from plugins. - ansible-galaxy role import - exit with 1 when the import fails (https://github.com/ansible/ansible/issues/82175). - ansible-galaxy role install - normalize tarfile paths and symlinks using ``ansible.utils.path.unfrackpath`` and consider them valid as long as the realpath is in the tarfile's role directory (https://github.com/ansible/ansible/issues/81965). - delegate_to when set to an empty or undefined variable will now give a proper error. - dwim functions for lookups should be better at detecting role context even in the absence of tasks/main. - roles, code cleanup and performance optimization of dependencies, now cached, and ``public`` setting is now determined once, at role instantiation.
- roles, the ``static`` property is now correctly set, this will fix issues with ``public`` and ``DEFAULT_PRIVATE_ROLE_VARS`` controls on exporting vars. - unsafe data - Enable directly using ``AnsibleUnsafeText`` with Python ``pathlib`` (https://github.com/ansible/ansible/issues/82414) release_summary: '| Release Date: 2024-01-22 | `Porting Guide `__ ' security_fixes: - ANSIBLE_NO_LOG - Address issue where ANSIBLE_NO_LOG was ignored (CVE-2024-0690) codename: All My Love fragments: - 2.16.3rc1_summary.yaml - 82175-fix-ansible-galaxy-role-import-rc.yml - 82363-multiple-handlers-with-recursive-notification.yml - ansible-galaxy-role-install-symlink.yml - cve-2024-0690.yml - dedupe_config_init.yml - delegate_to_invalid.yml - dwim_is_role_fix.yml - fix-default-ansible-galaxy-role-import-name.yml - fix-runtime-metadata-modules-action_plugin.yml - role_fixes.yml - unsafe-intern.yml release_date: '2024-01-22' ansible-core-2.16.3/lib/0000755000000000000000000000000014556006441013457 5ustar00rootrootansible-core-2.16.3/lib/ansible/0000755000000000000000000000000014556006441015074 5ustar00rootrootansible-core-2.16.3/lib/ansible/__init__.py0000644000000000000000000000242314556006441017206 0ustar00rootroot# (c) 2012-2014, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type # make vendored top-level modules accessible EARLY import ansible._vendor # Note: Do not add any code to this file. The ansible module may be # a namespace package when using Ansible-2.1+ Anything in this file may not be # available if one of the other packages in the namespace is loaded first. # # This is for backwards compat. Code should be ported to get these from # ansible.release instead of from here. 
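# For example (illustrative), new code should use:
#
#     from ansible.release import __version__
#
# rather than relying on the backwards-compat re-export below.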
from ansible.release import __version__, __author__ ansible-core-2.16.3/lib/ansible/__main__.py0000644000000000000000000000256314556006441017174 0ustar00rootroot# Copyright: (c) 2021, Matt Martz # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) import argparse import importlib import os import sys from importlib.metadata import distribution def _short_name(name): return name.removeprefix('ansible-').replace('ansible', 'adhoc') def main(): dist = distribution('ansible-core') ep_map = {_short_name(ep.name): ep for ep in dist.entry_points if ep.group == 'console_scripts'} parser = argparse.ArgumentParser(prog='python -m ansible', add_help=False) parser.add_argument('entry_point', choices=list(ep_map) + ['test']) args, extra = parser.parse_known_args() if args.entry_point == 'test': ansible_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) source_root = os.path.join(ansible_root, 'test', 'lib') if os.path.exists(os.path.join(source_root, 'ansible_test', '_internal', '__init__.py')): # running from source, use that version of ansible-test instead of any version that may already be installed sys.path.insert(0, source_root) module = importlib.import_module('ansible_test._util.target.cli.ansible_test_cli_stub') main = module.main else: main = ep_map[args.entry_point].load() main([args.entry_point] + extra) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/_vendor/0000755000000000000000000000000014556006441016530 5ustar00rootrootansible-core-2.16.3/lib/ansible/_vendor/__init__.py0000644000000000000000000000404614556006441020645 0ustar00rootroot# (c) 2020 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import pkgutil import sys import warnings # This package exists to host vendored top-level Python packages for downstream packaging. Any Python packages # installed beneath this one will be masked from the Ansible loader, and available from the front of sys.path. # It is expected that the vendored packages will be loaded very early, so a warning will be fired on import of # the top-level ansible package if any packages beneath this are already loaded at that point. # # Python packages may be installed here during downstream packaging using something like: # pip install --upgrade -t (path to this dir) cryptography pyyaml packaging jinja2 # mask vendored content below this package from being accessed as an ansible subpackage __path__ = [] def _ensure_vendored_path_entry(): """ Ensure that any downstream-bundled content beneath this package is available at the top of sys.path """ # patch our vendored dir onto sys.path vendored_path_entry = os.path.dirname(__file__) vendored_module_names = set(m[1] for m in pkgutil.iter_modules([vendored_path_entry], '')) # m[1] == m.name if vendored_module_names: # patch us early to load vendored deps transparently if vendored_path_entry in sys.path: # handle reload case by removing the existing entry, wherever it might be sys.path.remove(vendored_path_entry) sys.path.insert(0, vendored_path_entry) already_loaded_vendored_modules = set(sys.modules.keys()).intersection(vendored_module_names) if already_loaded_vendored_modules: warnings.warn('One or more Python packages bundled by this ansible-core distribution were already ' 'loaded ({0}). 
This may result in undefined behavior.'.format(', '.join(sorted(already_loaded_vendored_modules)))) _ensure_vendored_path_entry() ansible-core-2.16.3/lib/ansible/cli/0000755000000000000000000000000014556006441015643 5ustar00rootrootansible-core-2.16.3/lib/ansible/cli/__init__.py0000644000000000000000000007004214556006441017757 0ustar00rootroot# Copyright: (c) 2012-2014, Michael DeHaan # Copyright: (c) 2016, Toshio Kuratomi # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import locale import os import sys # Used for determining if the system is running a new enough python version # and should only restrict on our documented minimum versions if sys.version_info < (3, 10): raise SystemExit( 'ERROR: Ansible requires Python 3.10 or newer on the controller. ' 'Current version: %s' % ''.join(sys.version.splitlines()) ) def check_blocking_io(): """Check stdin/stdout/stderr to make sure they are using blocking IO.""" handles = [] for handle in (sys.stdin, sys.stdout, sys.stderr): # noinspection PyBroadException try: fd = handle.fileno() except Exception: continue # not a real file handle, such as during the import sanity test if not os.get_blocking(fd): handles.append(getattr(handle, 'name', None) or '#%s' % fd) if handles: raise SystemExit('ERROR: Ansible requires blocking IO on stdin/stdout/stderr. ' 'Non-blocking file handles detected: %s' % ', '.join(_io for _io in handles)) check_blocking_io() def initialize_locale(): """Set the locale to the users default setting and ensure the locale and filesystem encoding are UTF-8. """ try: locale.setlocale(locale.LC_ALL, '') dummy, encoding = locale.getlocale() except (locale.Error, ValueError) as e: raise SystemExit( 'ERROR: Ansible could not initialize the preferred locale: %s' % e ) if not encoding or encoding.lower() not in ('utf-8', 'utf8'): raise SystemExit('ERROR: Ansible requires the locale encoding to be UTF-8; Detected %s.' % encoding) fs_enc = sys.getfilesystemencoding() if fs_enc.lower() != 'utf-8': raise SystemExit('ERROR: Ansible requires the filesystem encoding to be UTF-8; Detected %s.' % fs_enc) initialize_locale() from importlib.metadata import version from ansible.module_utils.compat.version import LooseVersion # Used for determining if the system is running a new enough Jinja2 version # and should only restrict on our documented minimum versions jinja2_version = version('jinja2') if jinja2_version < LooseVersion('3.0'): raise SystemExit( 'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. 
' 'Current version: %s' % jinja2_version ) import errno import getpass import subprocess import traceback from abc import ABC, abstractmethod from pathlib import Path try: from ansible import constants as C from ansible.utils.display import Display display = Display() except Exception as e: print('ERROR: %s' % e, file=sys.stderr) sys.exit(5) from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError from ansible.inventory.manager import InventoryManager from ansible.module_utils.six import string_types from ansible.module_utils.common.text.converters import to_bytes, to_text from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.common.file import is_executable from ansible.parsing.dataloader import DataLoader from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret from ansible.plugins.loader import add_all_plugin_dirs, init_plugin_loader from ansible.release import __version__ from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path from ansible.utils.path import unfrackpath from ansible.utils.unsafe_proxy import to_unsafe_text from ansible.vars.manager import VariableManager try: import argcomplete HAS_ARGCOMPLETE = True except ImportError: HAS_ARGCOMPLETE = False class CLI(ABC): ''' code behind bin/ansible* programs ''' PAGER = C.config.get_config_value('PAGER') # -F (quit-if-one-screen) -R (allow raw ansi control chars) # -S (chop long lines) -X (disable termcap init and de-init) LESS_OPTS = 'FRSX' SKIP_INVENTORY_DEFAULTS = False def __init__(self, args, callback=None): """ Base init method for all command line programs """ if not args: raise ValueError('A non-empty list for args is required') self.args = args self.parser = None self.callback = callback if C.DEVEL_WARNING and __version__.endswith('dev0'): display.warning( 'You are running the development version of Ansible. You should only run Ansible from "devel" if ' 'you are modifying the Ansible engine, or trying out features under development. This is a rapidly ' 'changing source of code and can become unstable at any point.' ) @abstractmethod def run(self): """Run the ansible command Subclasses must implement this method. It does the actual work of running an Ansible command. 
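        A hypothetical implementation sketch (names are illustrative only)::

            def run(self):
                super(MyCLI, self).run()
                # do the actual work of the command here
                return 0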
""" self.parse() # Initialize plugin loader after parse, so that the init code can utilize parsed arguments cli_collections_path = context.CLIARGS.get('collections_path') or [] if not is_sequence(cli_collections_path): # In some contexts ``collections_path`` is singular cli_collections_path = [cli_collections_path] init_plugin_loader(cli_collections_path) display.vv(to_text(opt_help.version(self.parser.prog))) if C.CONFIG_FILE: display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE)) else: display.v(u"No config file found; using defaults") # warn about deprecated config options for deprecated in C.config.DEPRECATED: name = deprecated[0] why = deprecated[1]['why'] if 'alternatives' in deprecated[1]: alt = ', use %s instead' % deprecated[1]['alternatives'] else: alt = '' ver = deprecated[1].get('version') date = deprecated[1].get('date') collection_name = deprecated[1].get('collection_name') display.deprecated("%s option, %s%s" % (name, why, alt), version=ver, date=date, collection_name=collection_name) @staticmethod def split_vault_id(vault_id): # return (before_@, after_@) # if no @, return whole string as after_ if '@' not in vault_id: return (None, vault_id) parts = vault_id.split('@', 1) ret = tuple(parts) return ret @staticmethod def build_vault_ids(vault_ids, vault_password_files=None, ask_vault_pass=None, create_new_password=None, auto_prompt=True): vault_password_files = vault_password_files or [] vault_ids = vault_ids or [] # convert vault_password_files into vault_ids slugs for password_file in vault_password_files: id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file) # note this makes --vault-id higher precedence than --vault-password-file # if we want to intertwingle them in order probably need a cli callback to populate vault_ids # used by --vault-id and --vault-password-file vault_ids.append(id_slug) # if an action needs an encrypt password (create_new_password=True) and we dont # have other secrets setup, then automatically add a password prompt as well. # prompts cant/shouldnt work without a tty, so dont add prompt secrets if ask_vault_pass or (not vault_ids and auto_prompt): id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass') vault_ids.append(id_slug) return vault_ids # TODO: remove the now unused args @staticmethod def setup_vault_secrets(loader, vault_ids, vault_password_files=None, ask_vault_pass=None, create_new_password=False, auto_prompt=True): # list of tuples vault_secrets = [] # Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id) # we need to show different prompts. This is for compat with older Towers that expect a # certain vault password prompt format, so 'promp_ask_vault_pass' vault_id gets the old format. 
prompt_formats = {} # If there are configured default vault identities, they are considered 'first' # so we prepend them to vault_ids (from cli) here vault_password_files = vault_password_files or [] if C.DEFAULT_VAULT_PASSWORD_FILE: vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE) if create_new_password: prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ', 'Confirm new vault password (%(vault_id)s): '] # 2.3 format prompts for --ask-vault-pass prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ', 'Confirm New Vault password: '] else: prompt_formats['prompt'] = ['Vault password (%(vault_id)s): '] # The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$' prompt_formats['prompt_ask_vault_pass'] = ['Vault password: '] vault_ids = CLI.build_vault_ids(vault_ids, vault_password_files, ask_vault_pass, create_new_password, auto_prompt=auto_prompt) last_exception = found_vault_secret = None for vault_id_slug in vault_ids: vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug) if vault_id_value in ['prompt', 'prompt_ask_vault_pass']: # --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little # confusing since it will use the old format without the vault id in the prompt built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY # choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass # always gets the old format for Tower compatibility. # i.e., we used --ask-vault-pass, so we need to use the old vault password prompt # format since Tower needs to match on that format. prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value], vault_id=built_vault_id) # an empty or invalid password from the prompt will warn and continue to the next # without erroring globally try: prompted_vault_secret.load() except AnsibleError as exc: display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc)) raise found_vault_secret = True vault_secrets.append((built_vault_id, prompted_vault_secret)) # update loader with new secrets incrementally, so we can load a vault password # that is encrypted with a vault secret provided earlier loader.set_vault_secrets(vault_secrets) continue # assuming anything else is a password file display.vvvvv('Reading vault password file: %s' % vault_id_value) # read vault_pass from a file try: file_vault_secret = get_file_vault_secret(filename=vault_id_value, vault_id=vault_id_name, loader=loader) except AnsibleError as exc: display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc))) last_exception = exc continue try: file_vault_secret.load() except AnsibleError as exc: display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc))) last_exception = exc continue found_vault_secret = True if vault_id_name: vault_secrets.append((vault_id_name, file_vault_secret)) else: vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret)) # update loader with as-yet-known vault secrets loader.set_vault_secrets(vault_secrets) # An invalid or missing password file will error globally # if no valid vault secret was found.
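        # For illustration (hypothetical values, not executed): given
        #   --vault-id dev@dev-pass-file --vault-id prod@prompt
        # CLI.split_vault_id() yields ('dev', 'dev-pass-file') and ('prod', 'prompt'), and
        # vault_secrets ends up as [('dev', <FileVaultSecret>), ('prod', <PromptVaultSecret>)],
        # so the check below passes.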
if last_exception and not found_vault_secret: raise last_exception return vault_secrets @staticmethod def _get_secret(prompt): secret = getpass.getpass(prompt=prompt) if secret: secret = to_unsafe_text(secret) return secret @staticmethod def ask_passwords(): ''' prompt for connection and become passwords if needed ''' op = context.CLIARGS sshpass = None becomepass = None become_prompt = '' become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper() try: become_prompt = "%s password: " % become_prompt_method if op['ask_pass']: sshpass = CLI._get_secret("SSH password: ") become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method elif op['connection_password_file']: sshpass = CLI.get_password_from_file(op['connection_password_file']) if op['become_ask_pass']: becomepass = CLI._get_secret(become_prompt) if op['ask_pass'] and becomepass == '': becomepass = sshpass elif op['become_password_file']: becomepass = CLI.get_password_from_file(op['become_password_file']) except EOFError: pass return (sshpass, becomepass) def validate_conflicts(self, op, runas_opts=False, fork_opts=False): ''' check for conflicting options ''' if fork_opts: if op.forks < 1: self.parser.error("The number of processes (--forks) must be >= 1") return op @abstractmethod def init_parser(self, usage="", desc=None, epilog=None): """ Create an options parser for most ansible scripts Subclasses need to implement this method. They will usually call the base class's init_parser to create a basic version and then add their own options on top of that. An implementation will look something like this:: def init_parser(self): super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True) ansible.arguments.option_helpers.add_runas_options(self.parser) self.parser.add_option('--my-option', dest='my_option', action='store') """ self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog) @abstractmethod def post_process_args(self, options): """Process the command line args Subclasses need to implement this method. This method validates and transforms the command line arguments. It can be used to check whether conflicting values were given, whether filenames exist, etc. An implementation will look something like this:: def post_process_args(self, options): options = super(MyCLI, self).post_process_args(options) if options.addition and options.subtraction: raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified') if isinstance(options.listofhosts, string_types): options.listofhosts = string_types.split(',') return options """ # process tags if hasattr(options, 'tags') and not options.tags: # optparse defaults does not do what's expected # More specifically, we want `--tags` to be additive. 
So we cannot # simply change C.TAGS_RUN's default to ["all"] because then passing # --tags foo would cause us to have ['all', 'foo'] options.tags = ['all'] if hasattr(options, 'tags') and options.tags: tags = set() for tag_set in options.tags: for tag in tag_set.split(u','): tags.add(tag.strip()) options.tags = list(tags) # process skip_tags if hasattr(options, 'skip_tags') and options.skip_tags: skip_tags = set() for tag_set in options.skip_tags: for tag in tag_set.split(u','): skip_tags.add(tag.strip()) options.skip_tags = list(skip_tags) # process inventory options except for CLIs that require their own processing if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS: if options.inventory: # should always be list if isinstance(options.inventory, string_types): options.inventory = [options.inventory] # Ensure full paths when needed options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory] else: options.inventory = C.DEFAULT_HOST_LIST return options def parse(self): """Parse the command line args This method parses the command line arguments. It uses the parser stored in the self.parser attribute and saves the args and options in context.CLIARGS. Subclasses need to implement two helper methods, init_parser() and post_process_args() which are called from this function before and after parsing the arguments. """ self.init_parser() if HAS_ARGCOMPLETE: argcomplete.autocomplete(self.parser) try: options = self.parser.parse_args(self.args[1:]) except SystemExit as ex: if ex.code != 0: self.parser.exit(status=2, message=" \n%s" % self.parser.format_help()) raise options = self.post_process_args(options) context._init_global_context(options) @staticmethod def version_info(gitinfo=False): ''' return full ansible version info ''' if gitinfo: # expensive call, use with care ansible_version_string = opt_help.version() else: ansible_version_string = __version__ ansible_version = ansible_version_string.split()[0] ansible_versions = ansible_version.split('.') for counter in range(len(ansible_versions)): if ansible_versions[counter] == "": ansible_versions[counter] = 0 try: ansible_versions[counter] = int(ansible_versions[counter]) except Exception: pass if len(ansible_versions) < 3: for counter in range(len(ansible_versions), 3): ansible_versions.append(0) return {'string': ansible_version_string.strip(), 'full': ansible_version, 'major': ansible_versions[0], 'minor': ansible_versions[1], 'revision': ansible_versions[2]} @staticmethod def pager(text): ''' find reasonable way to display text ''' # this is a much simpler form of what is in pydoc.py if not sys.stdout.isatty(): display.display(text, screen_only=True) elif CLI.PAGER: if sys.platform == 'win32': display.display(text, screen_only=True) else: CLI.pager_pipe(text) else: p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p.communicate() if p.returncode == 0: CLI.pager_pipe(text, 'less') else: display.display(text, screen_only=True) @staticmethod def pager_pipe(text, pager=None): ''' pipe text through a pager ''' # default to the configured pager when an explicit command is not given pager = pager or CLI.PAGER if 'less' in pager: os.environ['LESS'] = CLI.LESS_OPTS try: cmd = subprocess.Popen(pager, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout) cmd.communicate(input=to_bytes(text)) except IOError: pass except KeyboardInterrupt: pass @staticmethod def _play_prereqs(): # TODO: evaluate moving all of the code that touches ``AnsibleCollectionConfig`` # into ``init_plugin_loader`` so that we can specifically remove #
``AnsibleCollectionConfig.playbook_paths`` to make it immutable after instantiation options = context.CLIARGS # all needs loader loader = DataLoader() basedir = options.get('basedir', False) if basedir: loader.set_basedir(basedir) add_all_plugin_dirs(basedir) AnsibleCollectionConfig.playbook_paths = basedir default_collection = _get_collection_name_from_path(basedir) if default_collection: display.warning(u'running with default collection {0}'.format(default_collection)) AnsibleCollectionConfig.default_collection = default_collection vault_ids = list(options['vault_ids']) default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST vault_ids = default_vault_ids + vault_ids vault_secrets = CLI.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(options['vault_password_files']), ask_vault_pass=options['ask_vault_pass'], auto_prompt=False) loader.set_vault_secrets(vault_secrets) # create the inventory, and filter it based on the subset specified (if any) inventory = InventoryManager(loader=loader, sources=options['inventory'], cache=(not options.get('flush_cache'))) # create the variable manager, which will be shared throughout # the code, ensuring a consistent view of global variables variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False)) return loader, inventory, variable_manager @staticmethod def get_host_list(inventory, subset, pattern='all'): no_hosts = False if len(inventory.list_hosts()) == 0: # Empty inventory if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST: display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'") no_hosts = True inventory.subset(subset) hosts = inventory.list_hosts(pattern) if not hosts and no_hosts is False: raise AnsibleError("Specified inventory, host pattern and/or --limit leaves us with no hosts to target.") return hosts @staticmethod def get_password_from_file(pwd_file): b_pwd_file = to_bytes(pwd_file) secret = None if b_pwd_file == b'-': # ensure it's read as bytes secret = sys.stdin.buffer.read() elif not os.path.exists(b_pwd_file): raise AnsibleError("The password file %s was not found" % pwd_file) elif is_executable(b_pwd_file): display.vvvv(u'The password file %s is a script.' % to_text(pwd_file)) cmd = [b_pwd_file] try: p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError as e: raise AnsibleError("Problem occurred when trying to run the password script %s (%s)." " If this is not a script, remove the executable bit from the file."
% (pwd_file, e)) stdout, stderr = p.communicate() if p.returncode != 0: raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr)) secret = stdout else: try: f = open(b_pwd_file, "rb") secret = f.read().strip() f.close() except (OSError, IOError) as e: raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e)) secret = secret.strip(b'\r\n') if not secret: raise AnsibleError('Empty password was provided from file (%s)' % pwd_file) return to_unsafe_text(secret) @classmethod def cli_executor(cls, args=None): if args is None: args = sys.argv try: display.debug("starting run") ansible_dir = Path(C.ANSIBLE_HOME).expanduser() try: ansible_dir.mkdir(mode=0o700) except OSError as exc: if exc.errno != errno.EEXIST: display.warning( "Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace')) ) else: display.debug("Created the '%s' directory" % ansible_dir) try: args = [to_text(a, errors='surrogate_or_strict') for a in args] except UnicodeError: display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8') display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc())) exit_code = 6 else: cli = cls(args) exit_code = cli.run() except AnsibleOptionsError as e: cli.parser.print_help() display.error(to_text(e), wrap_text=False) exit_code = 5 except AnsibleParserError as e: display.error(to_text(e), wrap_text=False) exit_code = 4 # TQM takes care of these, but leaving comment to reserve the exit codes # except AnsibleHostUnreachable as e: # display.error(str(e)) # exit_code = 3 # except AnsibleHostFailed as e: # display.error(str(e)) # exit_code = 2 except AnsibleError as e: display.error(to_text(e), wrap_text=False) exit_code = 1 except KeyboardInterrupt: display.error("User interrupted execution") exit_code = 99 except Exception as e: if C.DEFAULT_DEBUG: # Show raw stacktraces in debug mode. It also allows pdb to # enter post-mortem mode.
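                # Illustrative usage (assumption: DEFAULT_DEBUG is driven by the ANSIBLE_DEBUG environment variable):
                #   ANSIBLE_DEBUG=1 ansible localhost -m ping
                # would re-raise here, exposing the full stacktrace instead of the summarized error below.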
raise have_cli_options = bool(context.CLIARGS) display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False) if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2: log_only = False if hasattr(e, 'orig_exc'): display.vvv('\nexception type: %s' % to_text(type(e.orig_exc))) why = to_text(e.orig_exc) if to_text(e) != why: display.vvv('\noriginal msg: %s' % why) else: display.display("to see the full traceback, use -vvv") log_only = True display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only) exit_code = 250 sys.exit(exit_code) ansible-core-2.16.3/lib/ansible/cli/adhoc.py0000755000000000000000000002006714556006441017303 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2012, Michael DeHaan # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError from ansible.executor.task_queue_manager import TaskQueueManager from ansible.module_utils.common.text.converters import to_text from ansible.parsing.splitter import parse_kv from ansible.parsing.utils.yaml import from_yaml from ansible.playbook import Playbook from ansible.playbook.play import Play from ansible.utils.display import Display display = Display() class AdHocCLI(CLI): ''' is an extra-simple tool/framework/API for doing 'remote things'. 
this command allows you to define and run a single task 'playbook' against a set of hosts ''' name = 'ansible' def init_parser(self): ''' create an options parser for bin/ansible ''' super(AdHocCLI, self).init_parser(usage='%prog [options]', desc="Define and run a single task 'playbook' against a set of hosts", epilog="Some actions do not make sense in Ad-Hoc (include, meta, etc)") opt_help.add_runas_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_async_options(self.parser) opt_help.add_output_options(self.parser) opt_help.add_connect_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_tasknoplay_options(self.parser) # options unique to ansible ad-hoc self.parser.add_argument('-a', '--args', dest='module_args', help="The action's options in space separated k=v format: -a 'opt1=val1 opt2=val2' " "or a json string: -a '{\"opt1\": \"val1\", \"opt2\": \"val2\"}'", default=C.DEFAULT_MODULE_ARGS) self.parser.add_argument('-m', '--module-name', dest='module_name', help="Name of the action to execute (default=%s)" % C.DEFAULT_MODULE_NAME, default=C.DEFAULT_MODULE_NAME) self.parser.add_argument('args', metavar='pattern', help='host pattern') def post_process_args(self, options): '''Post process and validate options for bin/ansible ''' options = super(AdHocCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def _play_ds(self, pattern, async_val, poll): check_raw = context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS module_args_raw = context.CLIARGS['module_args'] module_args = None if module_args_raw and module_args_raw.startswith('{') and module_args_raw.endswith('}'): try: module_args = from_yaml(module_args_raw.strip(), json_only=True) except AnsibleParserError: pass if not module_args: module_args = parse_kv(module_args_raw, check_raw=check_raw) mytask = {'action': {'module': context.CLIARGS['module_name'], 'args': module_args}, 'timeout': context.CLIARGS['task_timeout']} # avoid adding to tasks that don't support it, unless set, then give user an error if context.CLIARGS['module_name'] not in C._ACTION_ALL_INCLUDE_ROLE_TASKS and any(frozenset((async_val, poll))): mytask['async_val'] = async_val mytask['poll'] = poll return dict( name="Ansible Ad-Hoc", hosts=pattern, gather_facts='no', tasks=[mytask]) def run(self): ''' create and execute the single task playbook ''' super(AdHocCLI, self).run() # only thing left should be host pattern pattern = to_text(context.CLIARGS['args'], errors='surrogate_or_strict') # handle password prompts sshpass = None becomepass = None (sshpass, becomepass) = self.ask_passwords() passwords = {'conn_pass': sshpass, 'become_pass': becomepass} # get basic objects loader, inventory, variable_manager = self._play_prereqs() # get list of hosts to execute against try: hosts = self.get_host_list(inventory, context.CLIARGS['subset'], pattern) except AnsibleError: if context.CLIARGS['subset']: raise else: hosts = [] display.warning("No hosts matched, nothing to do") # just listing hosts? 
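        # e.g. `ansible all -i hosts --list-hosts` (an illustrative invocation) takes this branch,
        # printing the matched hosts without executing any module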
if context.CLIARGS['listhosts']: display.display(' hosts (%d):' % len(hosts)) for host in hosts: display.display(' %s' % host) return 0 # verify we have arguments if we know we need em if context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS and not context.CLIARGS['module_args']: err = "No argument passed to %s module" % context.CLIARGS['module_name'] if pattern.endswith(".yml"): err = err + ' (did you mean to run ansible-playbook?)' raise AnsibleOptionsError(err) # Avoid modules that don't work with ad-hoc if context.CLIARGS['module_name'] in C._ACTION_IMPORT_PLAYBOOK: raise AnsibleOptionsError("'%s' is not a valid action for ad-hoc commands" % context.CLIARGS['module_name']) # construct playbook objects to wrap task play_ds = self._play_ds(pattern, context.CLIARGS['seconds'], context.CLIARGS['poll_interval']) play = Play().load(play_ds, variable_manager=variable_manager, loader=loader) # used in start callback playbook = Playbook(loader) playbook._entries.append(play) playbook._file_name = '__adhoc_playbook__' if self.callback: cb = self.callback elif context.CLIARGS['one_line']: cb = 'oneline' # Respect custom 'stdout_callback' only with enabled 'bin_ansible_callbacks' elif C.DEFAULT_LOAD_CALLBACK_PLUGINS and C.DEFAULT_STDOUT_CALLBACK != 'default': cb = C.DEFAULT_STDOUT_CALLBACK else: cb = 'minimal' run_tree = False if context.CLIARGS['tree']: C.CALLBACKS_ENABLED.append('tree') C.TREE_DIR = context.CLIARGS['tree'] run_tree = True # now create a task queue manager to execute the play self._tqm = None try: self._tqm = TaskQueueManager( inventory=inventory, variable_manager=variable_manager, loader=loader, passwords=passwords, stdout_callback=cb, run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS, run_tree=run_tree, forks=context.CLIARGS['forks'], ) self._tqm.load_callbacks() self._tqm.send_callback('v2_playbook_on_start', playbook) result = self._tqm.run(play) self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats) finally: if self._tqm: self._tqm.cleanup() if loader: loader.cleanup_all_tmp_files() return result def main(args=None): AdHocCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/arguments/0000755000000000000000000000000014556006441017650 5ustar00rootrootansible-core-2.16.3/lib/ansible/cli/arguments/__init__.py0000644000000000000000000000033514556006441021762 0ustar00rootroot# Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type ansible-core-2.16.3/lib/ansible/cli/arguments/option_helpers.py0000644000000000000000000004415514556006441023265 0ustar00rootroot# Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import copy import operator import argparse import os import os.path import sys import time from jinja2 import __version__ as j2_version import ansible from ansible import constants as C from ansible.module_utils.common.text.converters import to_native from ansible.module_utils.common.yaml import HAS_LIBYAML, yaml_load from ansible.release import __version__ from ansible.utils.path import unfrackpath # # Special purpose OptionParsers # class SortingHelpFormatter(argparse.HelpFormatter): def add_arguments(self, actions): actions = sorted(actions, key=operator.attrgetter('option_strings')) 
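        # e.g. the option strings ('-C', '--check') sort before ('-f', '--forks') here (illustrative),
        # so generated --help output lists options alphabetically by their option strings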
super(SortingHelpFormatter, self).add_arguments(actions) class ArgumentParser(argparse.ArgumentParser): def add_argument(self, *args, **kwargs): action = kwargs.get('action') help = kwargs.get('help') if help and action in {'append', 'append_const', 'count', 'extend', PrependListAction}: help = f'{help.rstrip(".")}. This argument may be specified multiple times.' kwargs['help'] = help return super().add_argument(*args, **kwargs) class AnsibleVersion(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): ansible_version = to_native(version(getattr(parser, 'prog'))) print(ansible_version) parser.exit() class UnrecognizedArgument(argparse.Action): def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0): super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const, default=default, required=required, help=help) def __call__(self, parser, namespace, values, option_string=None): parser.error('unrecognized arguments: %s' % option_string) class PrependListAction(argparse.Action): """A near clone of ``argparse._AppendAction``, but designed to prepend list values instead of appending. """ def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None): if nargs == 0: raise ValueError('nargs for append actions must be > 0; if arg ' 'strings are not supplying the value to append, ' 'the append const action may be more appropriate') if const is not None and nargs != argparse.OPTIONAL: raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL) super(PrependListAction, self).__init__( option_strings=option_strings, dest=dest, nargs=nargs, const=const, default=default, type=type, choices=choices, required=required, help=help, metavar=metavar ) def __call__(self, parser, namespace, values, option_string=None): items = copy.copy(ensure_value(namespace, self.dest, [])) items[0:0] = values setattr(namespace, self.dest, items) def ensure_value(namespace, name, value): if getattr(namespace, name, None) is None: setattr(namespace, name, value) return getattr(namespace, name) # # Callbacks to validate and normalize Options # def unfrack_path(pathsep=False, follow=True): """Turn an Option's data into a single path in Ansible locations""" def inner(value): if pathsep: return [unfrackpath(x, follow=follow) for x in value.split(os.pathsep) if x] if value == '-': return value return unfrackpath(value, follow=follow) return inner def maybe_unfrack_path(beacon): def inner(value): if value.startswith(beacon): return beacon + unfrackpath(value[1:]) return value return inner def _git_repo_info(repo_path): """ returns a string containing git branch, commit id and commit date """ result = None if os.path.exists(repo_path): # Check if the .git is a file. If it is a file, it means that we are in a submodule structure. if os.path.isfile(repo_path): try: with open(repo_path) as f: gitdir = yaml_load(f).get('gitdir') # There is a possibility that the .git file has an absolute path.
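                    # A submodule's .git file typically contains a single line such as (illustrative):
                    #   gitdir: ../.git/modules/my-submodule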
if os.path.isabs(gitdir): repo_path = gitdir else: repo_path = os.path.join(repo_path[:-4], gitdir) except (IOError, AttributeError): return '' with open(os.path.join(repo_path, "HEAD")) as f: line = f.readline().rstrip("\n") if line.startswith("ref:"): branch_path = os.path.join(repo_path, line[5:]) else: branch_path = None if branch_path and os.path.exists(branch_path): branch = '/'.join(line.split('/')[2:]) with open(branch_path) as f: commit = f.readline()[:10] else: # detached HEAD commit = line[:10] branch = 'detached HEAD' branch_path = os.path.join(repo_path, "HEAD") date = time.localtime(os.stat(branch_path).st_mtime) if time.daylight == 0: offset = time.timezone else: offset = time.altzone result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36)) else: result = '' return result def _gitinfo(): basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..')) repo_path = os.path.join(basedir, '.git') return _git_repo_info(repo_path) def version(prog=None): """ return ansible version """ if prog: result = ["{0} [core {1}]".format(prog, __version__)] else: result = [__version__] gitinfo = _gitinfo() if gitinfo: result[0] = "{0} {1}".format(result[0], gitinfo) result.append(" config file = %s" % C.CONFIG_FILE) if C.DEFAULT_MODULE_PATH is None: cpath = "Default w/o overrides" else: cpath = C.DEFAULT_MODULE_PATH result.append(" configured module search path = %s" % cpath) result.append(" ansible python module location = %s" % ':'.join(ansible.__path__)) result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS)) result.append(" executable location = %s" % sys.argv[0]) result.append(" python version = %s (%s)" % (''.join(sys.version.splitlines()), to_native(sys.executable))) result.append(" jinja version = %s" % j2_version) result.append(" libyaml = %s" % HAS_LIBYAML) return "\n".join(result) # # Functions to add pre-canned options to an OptionParser # def create_base_parser(prog, usage="", desc=None, epilog=None): """ Create an options parser for all ansible scripts """ # base opts parser = ArgumentParser( prog=prog, formatter_class=SortingHelpFormatter, epilog=epilog, description=desc, conflict_handler='resolve', ) version_help = "show program's version number, config file location, configured module search path," \ " module location, executable location and exit" parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help) add_verbosity_options(parser) return parser def add_verbosity_options(parser): """Add options for verbosity""" parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count", help="Causes Ansible to print more debug messages. Adding multiple -v will increase the verbosity, " "the builtin plugins currently evaluate up to -vvvvvv. 
A reasonable level to start is -vvv, " "connection debugging might require -vvvv.") def add_async_options(parser): """Add options for commands which can launch async tasks""" parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval', help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL) parser.add_argument('-B', '--background', dest='seconds', type=int, default=0, help='run asynchronously, failing after X seconds (default=N/A)') def add_basedir_options(parser): """Add options for commands which can set a playbook basedir""" parser.add_argument('--playbook-dir', default=C.PLAYBOOK_DIR, dest='basedir', action='store', help="Since this tool does not use playbooks, use this as a substitute playbook directory. " "This sets the relative path for many features including roles/ group_vars/ etc.", type=unfrack_path()) def add_check_options(parser): """Add options for commands which can run with diagnostic information of tasks""" parser.add_argument("-C", "--check", default=False, dest='check', action='store_true', help="don't make any changes; instead, try to predict some of the changes that may occur") parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true', help="when changing (small) files and templates, show the differences in those" " files; works great with --check") def add_connect_options(parser): """Add options for commands which need to connection to other hosts""" connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts") connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file', help='use this file to authenticate the connection', type=unfrack_path()) connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user', help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER) connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT, help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT) connect_group.add_argument('-T', '--timeout', default=None, type=int, dest='timeout', help="override the connection timeout in seconds (default depends on connection)") # ssh only connect_group.add_argument('--ssh-common-args', default=None, dest='ssh_common_args', help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)") connect_group.add_argument('--sftp-extra-args', default=None, dest='sftp_extra_args', help="specify extra arguments to pass to sftp only (e.g. -f, -l)") connect_group.add_argument('--scp-extra-args', default=None, dest='scp_extra_args', help="specify extra arguments to pass to scp only (e.g. -l)") connect_group.add_argument('--ssh-extra-args', default=None, dest='ssh_extra_args', help="specify extra arguments to pass to ssh only (e.g. 
-R)") parser.add_argument_group(connect_group) connect_password_group = parser.add_mutually_exclusive_group() connect_password_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true', help='ask for connection password') connect_password_group.add_argument('--connection-password-file', '--conn-pass-file', default=C.CONNECTION_PASSWORD_FILE, dest='connection_password_file', help="Connection password file", type=unfrack_path(), action='store') parser.add_argument_group(connect_password_group) def add_fork_options(parser): """Add options for commands that can fork worker processes""" parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int, help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS) def add_inventory_options(parser): """Add options for commands that utilize inventory""" parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append", help="specify inventory host path or comma separated host list. --inventory-file is deprecated") parser.add_argument('--list-hosts', dest='listhosts', action='store_true', help='outputs a list of matching hosts; does not execute anything else') parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset', help='further limit selected hosts to an additional pattern') def add_meta_options(parser): """Add options for commands which can launch meta tasks from the command line""" parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true', help="run handlers even if a task fails") parser.add_argument('--flush-cache', dest='flush_cache', action='store_true', help="clear the fact cache for every host in inventory") def add_module_options(parser): """Add options for commands that load modules""" module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '') parser.add_argument('-M', '--module-path', dest='module_path', default=None, help="prepend colon-separated path(s) to module library (default=%s)" % module_path, type=unfrack_path(pathsep=True), action=PrependListAction) def add_output_options(parser): """Add options for commands which can change their output""" parser.add_argument('-o', '--one-line', dest='one_line', action='store_true', help='condense output') parser.add_argument('-t', '--tree', dest='tree', default=None, help='log output to this directory') def add_runas_options(parser): """ Add options for commands which can run tasks as another user Note that this includes the options from add_runas_prompt_options(). Only one of these functions should be used. 
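    A usage sketch (hypothetical program name)::

        parser = create_base_parser('my-cli')
        add_runas_options(parser)  # adds -b/--become, --become-method, --become-user and the -K prompt options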
""" runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts") # consolidated privilege escalation (become) runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become', help="run operations with become (does not imply password prompting)") runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD, help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD + ', use `ansible-doc -t become -l` to list valid choices.') runas_group.add_argument('--become-user', default=None, dest='become_user', type=str, help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER) parser.add_argument_group(runas_group) add_runas_prompt_options(parser) def add_runas_prompt_options(parser, runas_group=None): """ Add options for commands which need to prompt for privilege escalation credentials Note that add_runas_options() includes these options already. Only one of the two functions should be used. """ if runas_group is not None: parser.add_argument_group(runas_group) runas_pass_group = parser.add_mutually_exclusive_group() runas_pass_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true', default=C.DEFAULT_BECOME_ASK_PASS, help='ask for privilege escalation password') runas_pass_group.add_argument('--become-password-file', '--become-pass-file', default=C.BECOME_PASSWORD_FILE, dest='become_password_file', help="Become password file", type=unfrack_path(), action='store') parser.add_argument_group(runas_pass_group) def add_runtask_options(parser): """Add options for commands that run a task""" parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append", type=maybe_unfrack_path('@'), help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[]) def add_tasknoplay_options(parser): """Add options for commands that run a task w/o a defined play""" parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT, help="set task timeout limit in seconds, must be positive integer.") def add_subset_options(parser): """Add options for commands which can run a subset of tasks""" parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append', help="only run plays and tasks tagged with these values") parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append', help="only run plays and tasks whose tags do not match these values") def add_vault_options(parser): """Add options for loading vault files""" parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str, help='the vault identity to use') base_group = parser.add_mutually_exclusive_group() base_group.add_argument('-J', '--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true', help='ask for vault password') base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files', help="vault password file", type=unfrack_path(follow=False), action='append') ansible-core-2.16.3/lib/ansible/cli/config.py0000755000000000000000000005374714556006441017505 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import 
(absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import os import yaml import shlex import subprocess from collections.abc import Mapping from ansible import context import ansible.plugins.loader as plugin_loader from ansible import constants as C from ansible.cli.arguments import option_helpers as opt_help from ansible.config.manager import ConfigManager, Setting from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.module_utils.common.text.converters import to_native, to_text, to_bytes from ansible.module_utils.common.json import json_dump from ansible.module_utils.six import string_types from ansible.parsing.quoting import is_quoted from ansible.parsing.yaml.dumper import AnsibleDumper from ansible.utils.color import stringc from ansible.utils.display import Display from ansible.utils.path import unfrackpath display = Display() def yaml_dump(data, default_flow_style=False, default_style=None): return yaml.dump(data, Dumper=AnsibleDumper, default_flow_style=default_flow_style, default_style=default_style) def yaml_short(data): return yaml_dump(data, default_flow_style=True, default_style="''") def get_constants(): ''' helper method to ensure we can template based on existing constants ''' if not hasattr(get_constants, 'cvars'): get_constants.cvars = {k: getattr(C, k) for k in dir(C) if not k.startswith('__')} return get_constants.cvars class ConfigCLI(CLI): """ Config command line class """ name = 'ansible-config' def __init__(self, args, callback=None): self.config_file = None self.config = None super(ConfigCLI, self).__init__(args, callback) def init_parser(self): super(ConfigCLI, self).init_parser( desc="View ansible configuration.", ) common = opt_help.ArgumentParser(add_help=False) opt_help.add_verbosity_options(common) common.add_argument('-c', '--config', dest='config_file', help="path to configuration file, defaults to first file found in precedence.") common.add_argument("-t", "--type", action="store", default='base', dest='type', choices=['all', 'base'] + list(C.CONFIGURABLE_PLUGINS), help="Filter down to a specific plugin type.") common.add_argument('args', help='Specific plugin to target, requires type of plugin to be set', nargs='*') subparsers = self.parser.add_subparsers(dest='action') subparsers.required = True list_parser = subparsers.add_parser('list', help='Print all config options', parents=[common]) list_parser.set_defaults(func=self.execute_list) list_parser.add_argument('--format', '-f', dest='format', action='store', choices=['json', 'yaml'], default='yaml', help='Output format for list') dump_parser = subparsers.add_parser('dump', help='Dump configuration', parents=[common]) dump_parser.set_defaults(func=self.execute_dump) dump_parser.add_argument('--only-changed', '--changed-only', dest='only_changed', action='store_true', help="Only show configurations that have changed from the default") dump_parser.add_argument('--format', '-f', dest='format', action='store', choices=['json', 'yaml', 'display'], default='display', help='Output format for dump') view_parser = subparsers.add_parser('view', help='View configuration file', parents=[common]) view_parser.set_defaults(func=self.execute_view) init_parser = subparsers.add_parser('init', help='Create initial configuration', parents=[common]) init_parser.set_defaults(func=self.execute_init) init_parser.add_argument('--format', '-f', dest='format', 
action='store', choices=['ini', 'env', 'vars'], default='ini', help='Output format for init') init_parser.add_argument('--disabled', dest='commented', action='store_true', default=False, help='Prefixes all entries with a comment character to disable them') # search_parser = subparsers.add_parser('find', help='Search configuration') # search_parser.set_defaults(func=self.execute_search) # search_parser.add_argument('args', help='Search term', metavar='') def post_process_args(self, options): options = super(ConfigCLI, self).post_process_args(options) display.verbosity = options.verbosity return options def run(self): super(ConfigCLI, self).run() if context.CLIARGS['config_file']: self.config_file = unfrackpath(context.CLIARGS['config_file'], follow=False) b_config = to_bytes(self.config_file) if os.path.exists(b_config) and os.access(b_config, os.R_OK): self.config = ConfigManager(self.config_file) else: raise AnsibleOptionsError('The provided configuration file is missing or not accessible: %s' % to_native(self.config_file)) else: self.config = C.config self.config_file = self.config._config_file if self.config_file: try: if not os.path.exists(self.config_file): raise AnsibleOptionsError("%s does not exist or is not accessible" % (self.config_file)) elif not os.path.isfile(self.config_file): raise AnsibleOptionsError("%s is not a valid file" % (self.config_file)) os.environ['ANSIBLE_CONFIG'] = to_native(self.config_file) except Exception: if context.CLIARGS['action'] in ['view']: raise elif context.CLIARGS['action'] in ['edit', 'update']: display.warning("File does not exist, used empty file: %s" % self.config_file) elif context.CLIARGS['action'] == 'view': raise AnsibleError('Invalid or no config file was supplied') # run the requested action context.CLIARGS['func']() def execute_update(self): ''' Updates a single setting in the specified ansible.cfg ''' raise AnsibleError("Option not implemented yet") # pylint: disable=unreachable if context.CLIARGS['setting'] is None: raise AnsibleOptionsError("update option requires a setting to update") (entry, value) = context.CLIARGS['setting'].split('=') if '.' 
in entry: (section, option) = entry.split('.') else: section = 'defaults' option = entry subprocess.call([ 'ansible', '-m', 'ini_file', 'localhost', '-c', 'local', '-a', '"dest=%s section=%s option=%s value=%s backup=yes"' % (self.config_file, section, option, value) ]) def execute_view(self): ''' Displays the current config file ''' try: with open(self.config_file, 'rb') as f: self.pager(to_text(f.read(), errors='surrogate_or_strict')) except Exception as e: raise AnsibleError("Failed to open config file: %s" % to_native(e)) def execute_edit(self): ''' Opens ansible.cfg in the default EDITOR ''' raise AnsibleError("Option not implemented yet") # pylint: disable=unreachable try: editor = shlex.split(C.config.get_config_value('EDITOR')) editor.append(self.config_file) subprocess.call(editor) except Exception as e: raise AnsibleError("Failed to open editor: %s" % to_native(e)) def _list_plugin_settings(self, ptype, plugins=None): entries = {} loader = getattr(plugin_loader, '%s_loader' % ptype) # build list if plugins: plugin_cs = [] for plugin in plugins: p = loader.get(plugin, class_only=True) if p is None: display.warning("Skipping %s as we could not find matching plugin" % plugin) else: plugin_cs.append(p) else: plugin_cs = loader.all(class_only=True) # iterate over class instances for plugin in plugin_cs: finalname = name = plugin._load_name if name.startswith('_'): # alias or deprecated if os.path.islink(plugin._original_path): continue else: finalname = name.replace('_', '', 1) + ' (DEPRECATED)' entries[finalname] = self.config.get_configuration_definitions(ptype, name) return entries def _list_entries_from_args(self): ''' build a dict with the list requested configs ''' config_entries = {} if context.CLIARGS['type'] in ('base', 'all'): # this dumps main/common configs config_entries = self.config.get_configuration_definitions(ignore_private=True) if context.CLIARGS['type'] != 'base': config_entries['PLUGINS'] = {} if context.CLIARGS['type'] == 'all': # now each plugin type for ptype in C.CONFIGURABLE_PLUGINS: config_entries['PLUGINS'][ptype.upper()] = self._list_plugin_settings(ptype) elif context.CLIARGS['type'] != 'base': config_entries['PLUGINS'][context.CLIARGS['type']] = self._list_plugin_settings(context.CLIARGS['type'], context.CLIARGS['args']) return config_entries def execute_list(self): ''' list and output available configs ''' config_entries = self._list_entries_from_args() if context.CLIARGS['format'] == 'yaml': output = yaml_dump(config_entries) elif context.CLIARGS['format'] == 'json': output = json_dump(config_entries) self.pager(to_text(output, errors='surrogate_or_strict')) def _get_settings_vars(self, settings, subkey): data = [] if context.CLIARGS['commented']: prefix = '#' else: prefix = '' for setting in settings: if not settings[setting].get('description'): continue default = settings[setting].get('default', '') if subkey == 'env': stype = settings[setting].get('type', '') if stype == 'boolean': if default: default = '1' else: default = '0' elif default: if stype == 'list': if not isinstance(default, string_types): # python lists are not valid env ones try: default = ', '.join(default) except Exception as e: # list of other stuff default = '%s' % to_native(default) if isinstance(default, string_types) and not is_quoted(default): default = shlex.quote(default) elif default is None: default = '' if subkey in settings[setting] and settings[setting][subkey]: entry = settings[setting][subkey][-1]['name'] if isinstance(settings[setting]['description'], string_types): 
desc = settings[setting]['description'] else: desc = '\n#'.join(settings[setting]['description']) name = settings[setting].get('name', setting) data.append('# %s(%s): %s' % (name, settings[setting].get('type', 'string'), desc)) # TODO: might need quoting and value coercion depending on type if subkey == 'env': if entry.startswith('_ANSIBLE_'): continue data.append('%s%s=%s' % (prefix, entry, default)) elif subkey == 'vars': if entry.startswith('_ansible_'): continue data.append(prefix + '%s: %s' % (entry, to_text(yaml_short(default), errors='surrogate_or_strict'))) data.append('') return data def _get_settings_ini(self, settings, seen): sections = {} for o in sorted(settings.keys()): opt = settings[o] if not isinstance(opt, Mapping): # recursed into one of the few settings that is a mapping, now hitting its strings continue if not opt.get('description'): # it's a plugin new_sections = self._get_settings_ini(opt, seen) for s in new_sections: if s in sections: sections[s].extend(new_sections[s]) else: sections[s] = new_sections[s] continue if isinstance(opt['description'], string_types): desc = '# (%s) %s' % (opt.get('type', 'string'), opt['description']) else: desc = "# (%s) " % opt.get('type', 'string') desc += "\n# ".join(opt['description']) if 'ini' in opt and opt['ini']: entry = opt['ini'][-1] if entry['section'] not in seen: seen[entry['section']] = [] if entry['section'] not in sections: sections[entry['section']] = [] # avoid dupes if entry['key'] not in seen[entry['section']]: seen[entry['section']].append(entry['key']) default = opt.get('default', '') if opt.get('type', '') == 'list' and not isinstance(default, string_types): # python lists are not valid ini ones default = ', '.join(default) elif default is None: default = '' if context.CLIARGS['commented']: entry['key'] = ';%s' % entry['key'] key = desc + '\n%s=%s' % (entry['key'], default) sections[entry['section']].append(key) return sections def execute_init(self): """Create initial configuration""" seen = {} data = [] config_entries = self._list_entries_from_args() plugin_types = config_entries.pop('PLUGINS', None) if context.CLIARGS['format'] == 'ini': sections = self._get_settings_ini(config_entries, seen) if plugin_types: for ptype in plugin_types: plugin_sections = self._get_settings_ini(plugin_types[ptype], seen) for s in plugin_sections: if s in sections: sections[s].extend(plugin_sections[s]) else: sections[s] = plugin_sections[s] if sections: for section in sections.keys(): data.append('[%s]' % section) for key in sections[section]: data.append(key) data.append('') data.append('') elif context.CLIARGS['format'] in ('env', 'vars'): # TODO: add yaml once that config option is added data = self._get_settings_vars(config_entries, context.CLIARGS['format']) if plugin_types: for ptype in plugin_types: for plugin in plugin_types[ptype].keys(): data.extend(self._get_settings_vars(plugin_types[ptype][plugin], context.CLIARGS['format'])) self.pager(to_text('\n'.join(data), errors='surrogate_or_strict')) def _render_settings(self, config): entries = [] for setting in sorted(config): changed = (config[setting].origin not in ('default', 'REQUIRED')) if context.CLIARGS['format'] == 'display': if isinstance(config[setting], Setting): # proceed normally if config[setting].origin == 'default': color = 'green' elif config[setting].origin == 'REQUIRED': # should include '_terms', '_input', etc color = 'red' else: color = 'yellow' msg = "%s(%s) = %s" % (setting, config[setting].origin, config[setting].value) else: color = 'green' msg =
"%s(%s) = %s" % (setting, 'default', config[setting].get('default')) entry = stringc(msg, color) else: entry = {} for key in config[setting]._fields: entry[key] = getattr(config[setting], key) if not context.CLIARGS['only_changed'] or changed: entries.append(entry) return entries def _get_global_configs(self): config = self.config.get_configuration_definitions(ignore_private=True).copy() for setting in config.keys(): v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, variables=get_constants()) config[setting] = Setting(setting, v, o, None) return self._render_settings(config) def _get_plugin_configs(self, ptype, plugins): # prep loading loader = getattr(plugin_loader, '%s_loader' % ptype) # acumulators output = [] config_entries = {} # build list if plugins: plugin_cs = [] for plugin in plugins: p = loader.get(plugin, class_only=True) if p is None: display.warning("Skipping %s as we could not find matching plugin" % plugin) else: plugin_cs.append(loader.get(plugin, class_only=True)) else: plugin_cs = loader.all(class_only=True) for plugin in plugin_cs: # in case of deprecastion they diverge finalname = name = plugin._load_name if name.startswith('_'): if os.path.islink(plugin._original_path): # skip alias continue # deprecated, but use 'nice name' finalname = name.replace('_', '', 1) + ' (DEPRECATED)' # default entries per plugin config_entries[finalname] = self.config.get_configuration_definitions(ptype, name) try: # populate config entries by loading plugin dump = loader.get(name, class_only=True) except Exception as e: display.warning('Skipping "%s" %s plugin, as we cannot load plugin to check config due to : %s' % (name, ptype, to_native(e))) continue # actually get the values for setting in config_entries[finalname].keys(): try: v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, plugin_type=ptype, plugin_name=name, variables=get_constants()) except AnsibleError as e: if to_text(e).startswith('No setting was provided for required configuration'): v = None o = 'REQUIRED' else: raise e if v is None and o is None: # not all cases will be error o = 'REQUIRED' config_entries[finalname][setting] = Setting(setting, v, o, None) # pretty please! results = self._render_settings(config_entries[finalname]) if results: if context.CLIARGS['format'] == 'display': # avoid header for empty lists (only changed!) 
output.append('\n%s:\n%s' % (finalname, '_' * len(finalname))) output.extend(results) else: output.append({finalname: results}) return output def execute_dump(self): ''' Shows the current settings, merges ansible.cfg if specified ''' if context.CLIARGS['type'] == 'base': # deal with base output = self._get_global_configs() elif context.CLIARGS['type'] == 'all': # deal with base output = self._get_global_configs() # deal with plugins for ptype in C.CONFIGURABLE_PLUGINS: plugin_list = self._get_plugin_configs(ptype, context.CLIARGS['args']) if context.CLIARGS['format'] == 'display': if not context.CLIARGS['only_changed'] or plugin_list: output.append('\n%s:\n%s' % (ptype.upper(), '=' * len(ptype))) output.extend(plugin_list) else: if ptype in ('modules', 'doc_fragments'): pname = ptype.upper() else: pname = '%s_PLUGINS' % ptype.upper() output.append({pname: plugin_list}) else: # deal with plugins output = self._get_plugin_configs(context.CLIARGS['type'], context.CLIARGS['args']) if context.CLIARGS['format'] == 'display': text = '\n'.join(output) if context.CLIARGS['format'] == 'yaml': text = yaml_dump(output) elif context.CLIARGS['format'] == 'json': text = json_dump(output) self.pager(to_text(text, errors='surrogate_or_strict')) def main(args=None): ConfigCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/console.py0000755000000000000000000005303014556006441017663 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2014, Nandor Sivok # Copyright: (c) 2016, Redhat Inc # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import atexit import cmd import getpass import readline import os import sys from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.executor.task_queue_manager import TaskQueueManager from ansible.module_utils.common.text.converters import to_native, to_text from ansible.module_utils.parsing.convert_bool import boolean from ansible.parsing.splitter import parse_kv from ansible.playbook.play import Play from ansible.plugins.list import list_plugins from ansible.plugins.loader import module_loader, fragment_loader from ansible.utils import plugin_docs from ansible.utils.color import stringc from ansible.utils.display import Display display = Display() class ConsoleCLI(CLI, cmd.Cmd): ''' A REPL that allows for running ad-hoc tasks against a chosen inventory from a nice shell with built-in tab completion (based on dominis' ``ansible-shell``). 
It supports several commands, and you can modify its configuration at runtime: - ``cd [pattern]``: change host/group (you can use host patterns eg.: ``app*.dc*:!app01*``) - ``list``: list available hosts in the current path - ``list groups``: list groups included in the current path - ``become``: toggle the become flag - ``!``: forces shell module instead of the ansible module (``!yum update -y``) - ``verbosity [num]``: set the verbosity level - ``forks [num]``: set the number of forks - ``become_user [user]``: set the become_user - ``remote_user [user]``: set the remote_user - ``become_method [method]``: set the privilege escalation method - ``check [bool]``: toggle check mode - ``diff [bool]``: toggle diff mode - ``timeout [integer]``: set the timeout of tasks in seconds (0 to disable) - ``help [command/module]``: display documentation for the command or module - ``exit``: exit ``ansible-console`` ''' name = 'ansible-console' modules = [] # type: list[str] | None ARGUMENTS = {'host-pattern': 'A name of a group in the inventory, a shell-like glob ' 'selecting hosts in inventory or any combination of the two separated by commas.'} # use specific to console, but fallback to highlight for backwards compatibility NORMAL_PROMPT = C.COLOR_CONSOLE_PROMPT or C.COLOR_HIGHLIGHT def __init__(self, args): super(ConsoleCLI, self).__init__(args) self.intro = 'Welcome to the ansible console. Type help or ? to list commands.\n' self.groups = [] self.hosts = [] self.pattern = None self.variable_manager = None self.loader = None self.passwords = dict() self.cwd = '*' # Defaults for these are set from the CLI in run() self.remote_user = None self.become = None self.become_user = None self.become_method = None self.check_mode = None self.diff = None self.forks = None self.task_timeout = None self.collections = None cmd.Cmd.__init__(self) def init_parser(self): super(ConsoleCLI, self).init_parser( desc="REPL console for executing Ansible tasks.", epilog="This is not a live session/connection: each task is executed in the background and returns its results." 
) opt_help.add_runas_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_connect_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_tasknoplay_options(self.parser) # options unique to shell self.parser.add_argument('pattern', help='host pattern', metavar='pattern', default='all', nargs='?') self.parser.add_argument('--step', dest='step', action='store_true', help="one-step-at-a-time: confirm each task before running") def post_process_args(self, options): options = super(ConsoleCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def get_names(self): return dir(self) def cmdloop(self): try: cmd.Cmd.cmdloop(self) except KeyboardInterrupt: self.cmdloop() except EOFError: self.display("[Ansible-console was exited]") self.do_exit(self) def set_prompt(self): login_user = self.remote_user or getpass.getuser() self.selected = self.inventory.list_hosts(self.cwd) prompt = "%s@%s (%d)[f:%s]" % (login_user, self.cwd, len(self.selected), self.forks) if self.become and self.become_user in [None, 'root']: prompt += "# " color = C.COLOR_ERROR else: prompt += "$ " color = self.NORMAL_PROMPT self.prompt = stringc(prompt, color, wrap_nonvisible_chars=True) def list_modules(self): return list_plugins('module', self.collections) def default(self, line, forceshell=False): """ actually runs modules """ if line.startswith("#"): return False if not self.cwd: display.error("No host found") return False # defaults module = 'shell' module_args = line if forceshell is not True: possible_module, *possible_args = line.split() if module_loader.find_plugin(possible_module): # we found module! 
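# e.g. a line like 'ping data=hello' resolves to module='ping' with module_args='data=hello';
# a line whose first word matches no module (say 'uptime', assuming no such module is
# installed) keeps the 'shell' defaults above and is passed through verbatim.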
module = possible_module if possible_args: module_args = ' '.join(possible_args) else: module_args = '' if self.callback: cb = self.callback elif C.DEFAULT_LOAD_CALLBACK_PLUGINS and C.DEFAULT_STDOUT_CALLBACK != 'default': cb = C.DEFAULT_STDOUT_CALLBACK else: cb = 'minimal' result = None try: check_raw = module in C._ACTION_ALLOWS_RAW_ARGS task = dict(action=dict(module=module, args=parse_kv(module_args, check_raw=check_raw)), timeout=self.task_timeout) play_ds = dict( name="Ansible Shell", hosts=self.cwd, gather_facts='no', tasks=[task], remote_user=self.remote_user, become=self.become, become_user=self.become_user, become_method=self.become_method, check_mode=self.check_mode, diff=self.diff, collections=self.collections, ) play = Play().load(play_ds, variable_manager=self.variable_manager, loader=self.loader) except Exception as e: display.error(u"Unable to build command: %s" % to_text(e)) return False try: # now create a task queue manager to execute the play self._tqm = None try: self._tqm = TaskQueueManager( inventory=self.inventory, variable_manager=self.variable_manager, loader=self.loader, passwords=self.passwords, stdout_callback=cb, run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS, run_tree=False, forks=self.forks, ) result = self._tqm.run(play) display.debug(result) finally: if self._tqm: self._tqm.cleanup() if self.loader: self.loader.cleanup_all_tmp_files() if result is None: display.error("No hosts found") return False except KeyboardInterrupt: display.error('User interrupted execution') return False except Exception as e: if self.verbosity >= 3: import traceback display.v(traceback.format_exc()) display.error(to_text(e)) return False def emptyline(self): return def do_shell(self, arg): """ You can run shell commands through the shell module. eg.: shell ps aux | grep java | wc -l shell killall python shell halt -n You can use the ! to force the shell module.
eg.: !ps aux | grep java | wc -l """ self.default(arg, True) def help_shell(self): display.display("You can run shell commands through the shell module.") def do_forks(self, arg): """Set the number of forks""" if arg: try: forks = int(arg) except (TypeError, ValueError): display.error('Invalid argument for "forks"') self.usage_forks() return if forks > 0: self.forks = forks self.set_prompt() else: display.display('forks must be greater than or equal to 1') else: self.usage_forks() def help_forks(self): display.display("Set the number of forks to use per task") self.usage_forks() def usage_forks(self): display.display('Usage: forks <number>') do_serial = do_forks help_serial = help_forks def do_collections(self, arg): """Set list of collections for 'short name' usage""" if arg in ('', 'none'): self.collections = None elif not arg: self.usage_collections() else: collections = arg.split(',') for collection in collections: if self.collections is None: self.collections = [] self.collections.append(collection.strip()) if self.collections: display.v('Collections name search is set to: %s' % ', '.join(self.collections)) else: display.v('Collections name search is using defaults') def help_collections(self): display.display("Set the collection name search path when using short names for plugins") self.usage_collections() def usage_collections(self): display.display('Usage: collections <collection1>[, <collection2> ...]\n Use empty quotes or "none" to reset to default.\n') def do_verbosity(self, arg): """Set verbosity level""" if not arg: display.display('Usage: verbosity <number>') else: try: display.verbosity = int(arg) display.v('verbosity level set to %s' % arg) except (TypeError, ValueError) as e: display.error('The verbosity must be a valid integer: %s' % to_text(e)) def help_verbosity(self): display.display("Set the verbosity level, equivalent to -v for 1 and -vvvv for 4.") def do_cd(self, arg): """ Change active host/group. You can use hosts patterns as well eg.: cd webservers cd webservers:dbservers cd webservers:!phoenix cd webservers:&staging cd webservers:dbservers:&staging:!phoenix """ if not arg: self.cwd = '*' elif arg in '/*': self.cwd = 'all' elif self.inventory.get_hosts(arg): self.cwd = arg else: display.display("no host matched") self.set_prompt() def help_cd(self): display.display("Change active host/group. ") self.usage_cd() def usage_cd(self): display.display("Usage: cd <group>|<host>|<path>") def do_list(self, arg): """List the hosts in the current group""" if not arg: for host in self.selected: display.display(host.name) elif arg == 'groups': for group in self.groups: display.display(group) else: display.error('Invalid option passed to "list"') self.help_list() def help_list(self): display.display("List the hosts in the current group or a list of groups if you add 'groups'.") def do_become(self, arg): """Toggle whether plays run with become""" if arg: self.become = boolean(arg, strict=False) display.v("become changed to %s" % self.become) self.set_prompt() else: display.display("Please specify become value, e.g. `become yes`") def help_become(self): display.display("Toggle whether the tasks are run with become") def do_remote_user(self, arg): """Given a username, set the remote user plays are run by""" if arg: self.remote_user = arg self.set_prompt() else: display.display("Please specify a remote user, e.g.
`remote_user root`") def help_remote_user(self): display.display("Set the user for use as login to the remote target") def do_become_user(self, arg): """Given a username, set the user that plays are run by when using become""" if arg: self.become_user = arg else: display.display("Please specify a user, e.g. `become_user jenkins`") display.v("Current user is %s" % self.become_user) self.set_prompt() def help_become_user(self): display.display("Set the user for use with privilege escalation (which remote user attempts to 'become' when become is enabled)") def do_become_method(self, arg): """Given a become_method, set the privilege escalation method when using become""" if arg: self.become_method = arg display.v("become_method changed to %s" % self.become_method) else: display.display("Please specify a become_method, e.g. `become_method su`") display.v("Current become_method is %s" % self.become_method) def help_become_method(self): display.display("Set the privilege escalation plugin to use when become is enabled") def do_check(self, arg): """Toggle whether plays run with check mode""" if arg: self.check_mode = boolean(arg, strict=False) display.display("check mode changed to %s" % self.check_mode) else: display.display("Please specify check mode value, e.g. `check yes`") display.v("check mode is currently %s." % self.check_mode) def help_check(self): display.display("Toggle check_mode for the tasks") def do_diff(self, arg): """Toggle whether plays run with diff""" if arg: self.diff = boolean(arg, strict=False) display.display("diff mode changed to %s" % self.diff) else: display.display("Please specify a diff value, e.g. `diff yes`") display.v("diff mode is currently %s" % self.diff) def help_diff(self): display.display("Toggle diff output for the tasks") def do_timeout(self, arg): """Set the timeout""" if arg: try: timeout = int(arg) if timeout < 0: display.error('The timeout must be greater than or equal to 1, use 0 to disable') else: self.task_timeout = timeout except (TypeError, ValueError) as e: display.error('The timeout must be a valid positive integer, or 0 to disable: %s' % to_text(e)) else: self.usage_timeout() def help_timeout(self): display.display("Set task timeout in seconds") self.usage_timeout() def usage_timeout(self): display.display('Usage: timeout <seconds>') def do_exit(self, args): """Exits from the console""" sys.stdout.write('\nAnsible-console was exited.\n') return -1 def help_exit(self): display.display("LEAVE!") do_EOF = do_exit help_EOF = help_exit def helpdefault(self, module_name): if module_name: in_path = module_loader.find_plugin(module_name) if in_path: oc, a, _dummy1, _dummy2 = plugin_docs.get_docstring(in_path, fragment_loader) if oc: display.display(oc['short_description']) display.display('Parameters:') for opt in oc['options'].keys(): display.display(' ' + stringc(opt, self.NORMAL_PROMPT) + ' ' + oc['options'][opt]['description'][0]) else: display.error('No documentation found for %s.' % module_name) else: display.error('%s is not a valid command, use ? to list all valid commands.'
% module_name) def help_help(self): display.warning("Don't be redundant!") def complete_cd(self, text, line, begidx, endidx): mline = line.partition(' ')[2] offs = len(mline) - len(text) if self.cwd in ('all', '*', '\\'): completions = self.hosts + self.groups else: completions = [x.name for x in self.inventory.list_hosts(self.cwd)] return [to_native(s)[offs:] for s in completions if to_native(s).startswith(to_native(mline))] def completedefault(self, text, line, begidx, endidx): if line.split()[0] in self.list_modules(): mline = line.split(' ')[-1] offs = len(mline) - len(text) completions = self.module_args(line.split()[0]) return [s[offs:] + '=' for s in completions if s.startswith(mline)] def module_args(self, module_name): in_path = module_loader.find_plugin(module_name) oc, a, _dummy1, _dummy2 = plugin_docs.get_docstring(in_path, fragment_loader, is_module=True) return list(oc['options'].keys()) def run(self): super(ConsoleCLI, self).run() sshpass = None becomepass = None # hosts self.pattern = context.CLIARGS['pattern'] self.cwd = self.pattern # Defaults from the command line self.remote_user = context.CLIARGS['remote_user'] self.become = context.CLIARGS['become'] self.become_user = context.CLIARGS['become_user'] self.become_method = context.CLIARGS['become_method'] self.check_mode = context.CLIARGS['check'] self.diff = context.CLIARGS['diff'] self.forks = context.CLIARGS['forks'] self.task_timeout = context.CLIARGS['task_timeout'] # set module path if needed if context.CLIARGS['module_path']: for path in context.CLIARGS['module_path']: if path: module_loader.add_directory(path) # dynamically add 'canonical' modules as commands; aliases could be used and dynamically loaded self.modules = self.list_modules() for module in self.modules: setattr(self, 'do_' + module, lambda arg, module=module: self.default(module + ' ' + arg)) setattr(self, 'help_' + module, lambda module=module: self.helpdefault(module)) (sshpass, becomepass) = self.ask_passwords() self.passwords = {'conn_pass': sshpass, 'become_pass': becomepass} self.loader, self.inventory, self.variable_manager = self._play_prereqs() hosts = self.get_host_list(self.inventory, context.CLIARGS['subset'], self.pattern) self.groups = self.inventory.list_groups() self.hosts = [x.name for x in hosts] # This hack is to work around readline issues on a mac: # http://stackoverflow.com/a/7116997/541202 if 'libedit' in readline.__doc__: readline.parse_and_bind("bind ^I rl_complete") else: readline.parse_and_bind("tab: complete") histfile = os.path.join(os.path.expanduser("~"), ".ansible-console_history") try: readline.read_history_file(histfile) except IOError: pass atexit.register(readline.write_history_file, histfile) self.set_prompt() self.cmdloop() def __getattr__(self, name): ''' handle not-found attributes by dynamically adding the matching module function if a module with that name exists ''' attr = None if name.startswith('do_'): module = name.replace('do_', '') if module_loader.find_plugin(module): setattr(self, name, lambda arg, module=module: self.default(module + ' ' + arg)) attr = object.__getattr__(self, name) elif name.startswith('help_'): module = name.replace('help_', '') if module_loader.find_plugin(module): setattr(self, name, lambda module=module: self.helpdefault(module)) attr = object.__getattr__(self, name) if attr is None: raise AttributeError(f"{self.__class__} does not have a {name} attribute") return attr def main(args=None): ConsoleCLI.cli_executor(args) if __name__ == '__main__': main()
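# A minimal usage sketch (the invocation, inventory size and output below are illustrative
# assumptions, not part of this file): bin/ansible-console calls main() above, and
# set_prompt() renders "<user>@<pattern> (<host count>)[f:<forks>]$ ", so a session
# might look like:
#
#   $ ansible-console webservers --forks 5
#   Welcome to the ansible console. Type help or ? to list commands.
#
#   admin@webservers (4)[f:5]$ ping data=hello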
ansible-core-2.16.3/lib/ansible/cli/doc.py0000755000000000000000000017512014556006441016773 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2014, James Tanner # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import pkgutil import os import os.path import re import textwrap import traceback import ansible.plugins.loader as plugin_loader from pathlib import Path from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.collections.list import list_collection_dirs from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError, AnsiblePluginNotFound from ansible.module_utils.common.text.converters import to_native, to_text from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.common.json import json_dump from ansible.module_utils.common.yaml import yaml_dump from ansible.module_utils.compat import importlib from ansible.module_utils.six import string_types from ansible.parsing.plugin_docs import read_docstub from ansible.parsing.utils.yaml import from_yaml from ansible.parsing.yaml.dumper import AnsibleDumper from ansible.plugins.list import list_plugins from ansible.plugins.loader import action_loader, fragment_loader from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path from ansible.utils.display import Display from ansible.utils.plugin_docs import get_plugin_docs, get_docstring, get_versioned_doclink display = Display() TARGET_OPTIONS = C.DOCUMENTABLE_PLUGINS + ('role', 'keyword',) PB_OBJECTS = ['Play', 'Role', 'Block', 'Task'] PB_LOADED = {} SNIPPETS = ['inventory', 'lookup', 'module'] def add_collection_plugins(plugin_list, plugin_type, coll_filter=None): display.deprecated("add_collection_plugins method, use ansible.plugins.list functions instead.", version='2.17') plugin_list.update(list_plugins(plugin_type, coll_filter)) def jdump(text): try: display.display(json_dump(text)) except TypeError as e: display.vvv(traceback.format_exc()) raise AnsibleError('We could not convert all the documentation into JSON as there was a conversion issue: %s' % to_native(e)) class RoleMixin(object): """A mixin containing all methods relevant to role argument specification functionality. Note: The methods for actual display of role data are not present here. """ # Potential locations of the role arg spec file in the meta subdir, with main.yml # having the lowest priority. ROLE_ARGSPEC_FILES = ['argument_specs' + e for e in C.YAML_FILENAME_EXTENSIONS] + ["main" + e for e in C.YAML_FILENAME_EXTENSIONS] def _load_argspec(self, role_name, collection_path=None, role_path=None): """Load the role argument spec data from the source file. :param str role_name: The name of the role for which we want the argspec data. :param str collection_path: Path to the collection containing the role. This will be None for standard roles. :param str role_path: Path to the standard role. This will be None for collection roles. We support two files containing the role arg spec data: either meta/main.yml or meta/argument_specs.yml.
The argument_specs.yml file will take precedence over the meta/main.yml file, if it exists. Data is NOT combined between the two files. :returns: A dict of all data underneath the ``argument_specs`` top-level YAML key in the argspec data file. Empty dict is returned if there is no data. """ if collection_path: meta_path = os.path.join(collection_path, 'roles', role_name, 'meta') elif role_path: meta_path = os.path.join(role_path, 'meta') else: raise AnsibleError("A path is required to load argument specs for role '%s'" % role_name) path = None # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(meta_path, specfile) if os.path.exists(full_path): path = full_path break if path is None: return {} try: with open(path, 'r') as f: data = from_yaml(f.read(), file_name=path) if data is None: data = {} return data.get('argument_specs', {}) except (IOError, OSError) as e: raise AnsibleParserError("An error occurred while trying to read the file '%s': %s" % (path, to_native(e)), orig_exc=e) def _find_all_normal_roles(self, role_paths, name_filters=None): """Find all non-collection roles that have an argument spec file. Note that argument specs do not actually need to exist within the spec file. :param role_paths: A tuple of one or more role paths. When a role with the same name is found in multiple paths, only the first-found role is returned. :param name_filters: A tuple of one or more role names used to filter the results. :returns: A set of tuples consisting of: role name, full role path """ found = set() found_names = set() for path in role_paths: if not os.path.isdir(path): continue # Check each subdir for an argument spec file for entry in os.listdir(path): role_path = os.path.join(path, entry) # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(role_path, 'meta', specfile) if os.path.exists(full_path): if name_filters is None or entry in name_filters: if entry not in found_names: found.add((entry, role_path)) found_names.add(entry) # select first-found break return found def _find_all_collection_roles(self, name_filters=None, collection_filter=None): """Find all collection roles with an argument spec file. Note that argument specs do not actually need to exist within the spec file. :param name_filters: A tuple of one or more role names used to filter the results. These might be fully qualified with the collection name (e.g., community.general.roleA) or not (e.g., roleA). :param collection_filter: A list of strings containing the FQCN of a collection which will be used to limit results. This filter will take precedence over the name_filters. :returns: A set of tuples consisting of: role name, collection name, collection path """ found = set() b_colldirs = list_collection_dirs(coll_filter=collection_filter) for b_path in b_colldirs: path = to_text(b_path, errors='surrogate_or_strict') collname = _get_collection_name_from_path(b_path) roles_dir = os.path.join(path, 'roles') if os.path.exists(roles_dir): for entry in os.listdir(roles_dir): # Check all potential spec files for specfile in self.ROLE_ARGSPEC_FILES: full_path = os.path.join(roles_dir, entry, 'meta', specfile) if os.path.exists(full_path): if name_filters is None: found.add((entry, collname, path)) else: # Name filters might contain a collection FQCN or not.
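# e.g. a filter of 'community.general.roleA' splits into ns='community', col='general',
# role='roleA' and must match both the collection name and the role dir entry, while a
# bare filter such as 'roleA' is compared against the entry name alone.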
for fqcn in name_filters: if len(fqcn.split('.')) == 3: (ns, col, role) = fqcn.split('.') if '.'.join([ns, col]) == collname and entry == role: found.add((entry, collname, path)) elif fqcn == entry: found.add((entry, collname, path)) break return found def _build_summary(self, role, collection, argspec): """Build a summary dict for a role. Returns a simplified role arg spec containing only the role entry points and their short descriptions, and the role collection name (if applicable). :param role: The simple role name. :param collection: The collection containing the role (None or empty string if N/A). :param argspec: The complete role argspec data dict. :returns: A tuple with the FQCN role name and a summary dict. """ if collection: fqcn = '.'.join([collection, role]) else: fqcn = role summary = {} summary['collection'] = collection summary['entry_points'] = {} for ep in argspec.keys(): entry_spec = argspec[ep] or {} summary['entry_points'][ep] = entry_spec.get('short_description', '') return (fqcn, summary) def _build_doc(self, role, path, collection, argspec, entry_point): if collection: fqcn = '.'.join([collection, role]) else: fqcn = role doc = {} doc['path'] = path doc['collection'] = collection doc['entry_points'] = {} for ep in argspec.keys(): if entry_point is None or ep == entry_point: entry_spec = argspec[ep] or {} doc['entry_points'][ep] = entry_spec # If we didn't add any entry points (b/c of filtering), ignore this entry. if len(doc['entry_points'].keys()) == 0: doc = None return (fqcn, doc) def _create_role_list(self, fail_on_errors=True): """Return a dict describing the listing of all roles with arg specs. :param role_paths: A tuple of one or more role paths. :returns: A dict indexed by role name, with 'collection' and 'entry_points' keys per role. Example return: results = { 'roleA': { 'collection': '', 'entry_points': { 'main': 'Short description for main' } }, 'a.b.c.roleB': { 'collection': 'a.b.c', 'entry_points': { 'main': 'Short description for main', 'alternate': 'Short description for alternate entry point' } }, 'x.y.z.roleB': { 'collection': 'x.y.z', 'entry_points': { 'main': 'Short description for main', } }, } """ roles_path = self._get_roles_path() collection_filter = self._get_collection_filter() if not collection_filter: roles = self._find_all_normal_roles(roles_path) else: roles = [] collroles = self._find_all_collection_roles(collection_filter=collection_filter) result = {} for role, role_path in roles: try: argspec = self._load_argspec(role, role_path=role_path) fqcn, summary = self._build_summary(role, '', argspec) result[fqcn] = summary except Exception as e: if fail_on_errors: raise result[role] = { 'error': 'Error while loading role argument spec: %s' % to_native(e), } for role, collection, collection_path in collroles: try: argspec = self._load_argspec(role, collection_path=collection_path) fqcn, summary = self._build_summary(role, collection, argspec) result[fqcn] = summary except Exception as e: if fail_on_errors: raise result['%s.%s' % (collection, role)] = { 'error': 'Error while loading role argument spec: %s' % to_native(e), } return result def _create_role_doc(self, role_names, entry_point=None, fail_on_errors=True): """ :param role_names: A tuple of one or more role names. :param role_paths: A tuple of one or more role paths. :param entry_point: A role entry point name for filtering.
:param fail_on_errors: When set to False, include errors in the JSON output instead of raising errors :returns: A dict indexed by role name, with 'collection', 'entry_points', and 'path' keys per role. """ roles_path = self._get_roles_path() roles = self._find_all_normal_roles(roles_path, name_filters=role_names) collroles = self._find_all_collection_roles(name_filters=role_names) result = {} for role, role_path in roles: try: argspec = self._load_argspec(role, role_path=role_path) fqcn, doc = self._build_doc(role, role_path, '', argspec, entry_point) if doc: result[fqcn] = doc except Exception as e: # pylint:disable=broad-except result[role] = { 'error': 'Error while processing role: %s' % to_native(e), } for role, collection, collection_path in collroles: try: argspec = self._load_argspec(role, collection_path=collection_path) fqcn, doc = self._build_doc(role, collection_path, collection, argspec, entry_point) if doc: result[fqcn] = doc except Exception as e: # pylint:disable=broad-except result['%s.%s' % (collection, role)] = { 'error': 'Error while processing role: %s' % to_native(e), } return result class DocCLI(CLI, RoleMixin): ''' displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short "snippet" which can be pasted into a playbook. ''' name = 'ansible-doc' # default ignore list for detailed views IGNORE = ('module', 'docuri', 'version_added', 'version_added_collection', 'short_description', 'now_date', 'plainexamples', 'returndocs', 'collection') # Warning: If you add more elements here, you also need to add it to the docsite build (in the # ansible-community/antsibull repo) _ITALIC = re.compile(r"\bI\(([^)]+)\)") _BOLD = re.compile(r"\bB\(([^)]+)\)") _MODULE = re.compile(r"\bM\(([^)]+)\)") _PLUGIN = re.compile(r"\bP\(([^#)]+)#([a-z]+)\)") _LINK = re.compile(r"\bL\(([^)]+), *([^)]+)\)") _URL = re.compile(r"\bU\(([^)]+)\)") _REF = re.compile(r"\bR\(([^)]+), *([^)]+)\)") _CONST = re.compile(r"\bC\(([^)]+)\)") _SEM_PARAMETER_STRING = r"\(((?:[^\\)]+|\\.)+)\)" _SEM_OPTION_NAME = re.compile(r"\bO" + _SEM_PARAMETER_STRING) _SEM_OPTION_VALUE = re.compile(r"\bV" + _SEM_PARAMETER_STRING) _SEM_ENV_VARIABLE = re.compile(r"\bE" + _SEM_PARAMETER_STRING) _SEM_RET_VALUE = re.compile(r"\bRV" + _SEM_PARAMETER_STRING) _RULER = re.compile(r"\bHORIZONTALLINE\b") # helper for unescaping _UNESCAPE = re.compile(r"\\(.)") _FQCN_TYPE_PREFIX_RE = re.compile(r'^([^.]+\.[^.]+\.[^#]+)#([a-z]+):(.*)$') _IGNORE_MARKER = 'ignore:' # rst specific _RST_NOTE = re.compile(r".. note::") _RST_SEEALSO = re.compile(r".. seealso::") _RST_ROLES = re.compile(r":\w+?:`") _RST_DIRECTIVES = re.compile(r".. 
\w+?::") def __init__(self, args): super(DocCLI, self).__init__(args) self.plugin_list = set() @staticmethod def _tty_ify_sem_simle(matcher): text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1)) return f"`{text}'" @staticmethod def _tty_ify_sem_complex(matcher): text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1)) value = None if '=' in text: text, value = text.split('=', 1) m = DocCLI._FQCN_TYPE_PREFIX_RE.match(text) if m: plugin_fqcn = m.group(1) plugin_type = m.group(2) text = m.group(3) elif text.startswith(DocCLI._IGNORE_MARKER): text = text[len(DocCLI._IGNORE_MARKER):] plugin_fqcn = plugin_type = '' else: plugin_fqcn = plugin_type = '' entrypoint = None if ':' in text: entrypoint, text = text.split(':', 1) if value is not None: text = f"{text}={value}" if plugin_fqcn and plugin_type: plugin_suffix = '' if plugin_type in ('role', 'module', 'playbook') else ' plugin' plugin = f"{plugin_type}{plugin_suffix} {plugin_fqcn}" if plugin_type == 'role' and entrypoint is not None: plugin = f"{plugin}, {entrypoint} entrypoint" return f"`{text}' (of {plugin})" return f"`{text}'" @classmethod def find_plugins(cls, path, internal, plugin_type, coll_filter=None): display.deprecated("find_plugins method as it is incomplete/incorrect. use ansible.plugins.list functions instead.", version='2.17') return list_plugins(plugin_type, coll_filter, [path]).keys() @classmethod def tty_ify(cls, text): # general formatting t = cls._ITALIC.sub(r"`\1'", text) # I(word) => `word' t = cls._BOLD.sub(r"*\1*", t) # B(word) => *word* t = cls._MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word] t = cls._URL.sub(r"\1", t) # U(word) => word t = cls._LINK.sub(r"\1 <\2>", t) # L(word, url) => word t = cls._PLUGIN.sub("[" + r"\1" + "]", t) # P(word#type) => [word] t = cls._REF.sub(r"\1", t) # R(word, sphinx-ref) => word t = cls._CONST.sub(r"`\1'", t) # C(word) => `word' t = cls._SEM_OPTION_NAME.sub(cls._tty_ify_sem_complex, t) # O(expr) t = cls._SEM_OPTION_VALUE.sub(cls._tty_ify_sem_simle, t) # V(expr) t = cls._SEM_ENV_VARIABLE.sub(cls._tty_ify_sem_simle, t) # E(expr) t = cls._SEM_RET_VALUE.sub(cls._tty_ify_sem_complex, t) # RV(expr) t = cls._RULER.sub("\n{0}\n".format("-" * 13), t) # HORIZONTALLINE => ------- # remove rst t = cls._RST_SEEALSO.sub(r"See also:", t) # seealso to See also: t = cls._RST_NOTE.sub(r"Note:", t) # .. note:: to note: t = cls._RST_ROLES.sub(r"`", t) # remove :ref: and other tags, keep tilde to match ending one t = cls._RST_DIRECTIVES.sub(r"", t) # remove .. stuff:: in general return t def init_parser(self): coll_filter = 'A supplied argument will be used for filtering, can be a namespace or full collection name.' super(DocCLI, self).init_parser( desc="plugin documentation tool", epilog="See man pages for Ansible CLI options or website for tutorials https://docs.ansible.com" ) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) # targets self.parser.add_argument('args', nargs='*', help='Plugin', metavar='plugin') self.parser.add_argument("-t", "--type", action="store", default='module', dest='type', help='Choose which plugin type (defaults to "module"). 
' 'Available plugin types are : {0}'.format(TARGET_OPTIONS), choices=TARGET_OPTIONS) # formatting self.parser.add_argument("-j", "--json", action="store_true", default=False, dest='json_format', help='Change output into json format.') # TODO: warn if not used with -t roles # role-specific options self.parser.add_argument("-r", "--roles-path", dest='roles_path', default=C.DEFAULT_ROLES_PATH, type=opt_help.unfrack_path(pathsep=True), action=opt_help.PrependListAction, help='The path to the directory containing your roles.') # modifiers exclusive = self.parser.add_mutually_exclusive_group() # TODO: warn if not used with -t roles exclusive.add_argument("-e", "--entry-point", dest="entry_point", help="Select the entry point for role(s).") # TODO: warn with --json as it is incompatible exclusive.add_argument("-s", "--snippet", action="store_true", default=False, dest='show_snippet', help='Show playbook snippet for these plugin types: %s' % ', '.join(SNIPPETS)) # TODO: warn when arg/plugin is passed exclusive.add_argument("-F", "--list_files", action="store_true", default=False, dest="list_files", help='Show plugin names and their source files without summaries (implies --list). %s' % coll_filter) exclusive.add_argument("-l", "--list", action="store_true", default=False, dest='list_dir', help='List available plugins. %s' % coll_filter) exclusive.add_argument("--metadata-dump", action="store_true", default=False, dest='dump', help='**For internal use only** Dump json metadata for all entries, ignores other options.') self.parser.add_argument("--no-fail-on-errors", action="store_true", default=False, dest='no_fail_on_errors', help='**For internal use only** Only used for --metadata-dump. ' 'Do not fail on errors. Report the error message in the JSON instead.') def post_process_args(self, options): options = super(DocCLI, self).post_process_args(options) display.verbosity = options.verbosity return options def display_plugin_list(self, results): # format for user displace = max(len(x) for x in results.keys()) linelimit = display.columns - displace - 5 text = [] deprecated = [] # format display per option if context.CLIARGS['list_files']: # list plugin file names for plugin in sorted(results.keys()): filename = to_native(results[plugin]) # handle deprecated for builtin/legacy pbreak = plugin.split('.') if pbreak[-1].startswith('_') and pbreak[0] == 'ansible' and pbreak[1] in ('builtin', 'legacy'): pbreak[-1] = pbreak[-1][1:] plugin = '.'.join(pbreak) deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename)) else: text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename)) else: # list plugin names and short desc for plugin in sorted(results.keys()): desc = DocCLI.tty_ify(results[plugin]) if len(desc) > linelimit: desc = desc[:linelimit] + '...' pbreak = plugin.split('.') # TODO: add mark for deprecated collection plugins if pbreak[-1].startswith('_') and plugin.startswith(('ansible.builtin.', 'ansible.legacy.')): # Handle deprecated ansible.builtin plugins pbreak[-1] = pbreak[-1][1:] plugin = '.'.join(pbreak) deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc)) else: text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc)) if len(deprecated) > 0: text.append("\nDEPRECATED:") text.extend(deprecated) # display results DocCLI.pager("\n".join(text)) def _display_available_roles(self, list_json): """Display all roles we can find with a valid argument specification. 
Output is: fqcn role name, entry point, short description """ roles = list(list_json.keys()) entry_point_names = set() for role in roles: for entry_point in list_json[role]['entry_points'].keys(): entry_point_names.add(entry_point) max_role_len = 0 max_ep_len = 0 if roles: max_role_len = max(len(x) for x in roles) if entry_point_names: max_ep_len = max(len(x) for x in entry_point_names) linelimit = display.columns - max_role_len - max_ep_len - 5 text = [] for role in sorted(roles): for entry_point, desc in list_json[role]['entry_points'].items(): if len(desc) > linelimit: desc = desc[:linelimit] + '...' text.append("%-*s %-*s %s" % (max_role_len, role, max_ep_len, entry_point, desc)) # display results DocCLI.pager("\n".join(text)) def _display_role_doc(self, role_json): roles = list(role_json.keys()) text = [] for role in roles: text += self.get_role_man_text(role, role_json[role]) # display results DocCLI.pager("\n".join(text)) @staticmethod def _list_keywords(): return from_yaml(pkgutil.get_data('ansible', 'keyword_desc.yml')) @staticmethod def _get_keywords_docs(keys): data = {} descs = DocCLI._list_keywords() for key in keys: if key.startswith('with_'): # simplify loops, don't want to handle every with_ combo keyword = 'loop' elif key == 'async': # because async became reserved in python we had to rename internally keyword = 'async_val' else: keyword = key try: # if no desc, typeerror raised ends this block kdata = {'description': descs[key]} # get playbook objects for keyword and use first to get keyword attributes kdata['applies_to'] = [] for pobj in PB_OBJECTS: if pobj not in PB_LOADED: obj_class = 'ansible.playbook.%s' % pobj.lower() loaded_class = importlib.import_module(obj_class) PB_LOADED[pobj] = getattr(loaded_class, pobj, None) if keyword in PB_LOADED[pobj].fattributes: kdata['applies_to'].append(pobj) # we should only need these once if 'type' not in kdata: fa = PB_LOADED[pobj].fattributes.get(keyword) if getattr(fa, 'private'): kdata = {} raise KeyError kdata['type'] = getattr(fa, 'isa', 'string') if keyword.endswith('when') or keyword in ('until',): # TODO: make this a field attribute property, # would also help with the warnings on {{}} stacking kdata['template'] = 'implicit' elif getattr(fa, 'static'): kdata['template'] = 'static' else: kdata['template'] = 'explicit' # those that require no processing for visible in ('alias', 'priority'): kdata[visible] = getattr(fa, visible) # remove None keys for k in list(kdata.keys()): if kdata[k] is None: del kdata[k] data[key] = kdata except (AttributeError, KeyError) as e: display.warning("Skipping invalid keyword '%s' specified: %s" % (key, to_text(e))) if display.verbosity >= 3: display.verbose(traceback.format_exc()) return data def _get_collection_filter(self): coll_filter = None if len(context.CLIARGS['args']) >= 1: coll_filter = context.CLIARGS['args'] for coll_name in coll_filter: if not AnsibleCollectionRef.is_valid_collection_name(coll_name): raise AnsibleError('Invalid collection name (must be of the form namespace.collection): {0}'.format(coll_name)) return coll_filter def _list_plugins(self, plugin_type, content): results = {} self.plugins = {} loader = DocCLI._prep_loader(plugin_type) coll_filter = self._get_collection_filter() self.plugins.update(list_plugins(plugin_type, coll_filter)) # get appropriate content depending on option if content == 'dir': results = self._get_plugin_list_descriptions(loader) elif content == 'files': results = {k: self.plugins[k][0] for k in self.plugins.keys()} else: results = {k: {} for k
in self.plugins.keys()} self.plugin_list = set() # reset for next iteration return results def _get_plugins_docs(self, plugin_type, names, fail_ok=False, fail_on_errors=True): loader = DocCLI._prep_loader(plugin_type) # get the docs for plugins in the command line list plugin_docs = {} for plugin in names: doc = {} try: doc, plainexamples, returndocs, metadata = get_plugin_docs(plugin, plugin_type, loader, fragment_loader, (context.CLIARGS['verbosity'] > 0)) except AnsiblePluginNotFound as e: display.warning(to_native(e)) continue except Exception as e: if not fail_on_errors: plugin_docs[plugin] = {'error': 'Missing documentation or could not parse documentation: %s' % to_native(e)} continue display.vvv(traceback.format_exc()) msg = "%s %s missing documentation (or could not parse documentation): %s\n" % (plugin_type, plugin, to_native(e)) if fail_ok: display.warning(msg) else: raise AnsibleError(msg) if not doc: # The doc section existed but was empty if not fail_on_errors: plugin_docs[plugin] = {'error': 'No valid documentation found'} continue docs = DocCLI._combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata) if not fail_on_errors: # Check whether JSON serialization would break try: json_dump(docs) except Exception as e: # pylint:disable=broad-except plugin_docs[plugin] = {'error': 'Cannot serialize documentation as JSON: %s' % to_native(e)} continue plugin_docs[plugin] = docs return plugin_docs def _get_roles_path(self): ''' Add any 'roles' subdir in playbook dir to the roles search path. And as a last resort, add the playbook dir itself. Order being: - 'roles' subdir of playbook dir - DEFAULT_ROLES_PATH (default in cliargs) - playbook dir (basedir) NOTE: This matches logic in RoleDefinition._load_role_path() method. 
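For example (hypothetical paths): with basedir=/home/user/playbooks and an existing /home/user/playbooks/roles subdir, the returned search order is ('/home/user/playbooks/roles',) + DEFAULT_ROLES_PATH + ('/home/user/playbooks',).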
''' roles_path = context.CLIARGS['roles_path'] if context.CLIARGS['basedir'] is not None: subdir = os.path.join(context.CLIARGS['basedir'], "roles") if os.path.isdir(subdir): roles_path = (subdir,) + roles_path roles_path = roles_path + (context.CLIARGS['basedir'],) return roles_path @staticmethod def _prep_loader(plugin_type): ''' return a plugin type specific loader ''' loader = getattr(plugin_loader, '%s_loader' % plugin_type) # add to plugin paths from command line if context.CLIARGS['basedir'] is not None: loader.add_directory(context.CLIARGS['basedir'], with_subdir=True) if context.CLIARGS['module_path']: for path in context.CLIARGS['module_path']: if path: loader.add_directory(path) # save only top level paths for errors loader._paths = None # reset so we can use subdirs later return loader def run(self): super(DocCLI, self).run() basedir = context.CLIARGS['basedir'] plugin_type = context.CLIARGS['type'].lower() do_json = context.CLIARGS['json_format'] or context.CLIARGS['dump'] listing = context.CLIARGS['list_files'] or context.CLIARGS['list_dir'] if context.CLIARGS['list_files']: content = 'files' elif context.CLIARGS['list_dir']: content = 'dir' else: content = None docs = {} if basedir: AnsibleCollectionConfig.playbook_paths = basedir if plugin_type not in TARGET_OPTIONS: raise AnsibleOptionsError("Unknown or undocumentable plugin type: %s" % plugin_type) if context.CLIARGS['dump']: # we always dump all types, ignore restrictions ptypes = TARGET_OPTIONS docs['all'] = {} for ptype in ptypes: no_fail = bool(not context.CLIARGS['no_fail_on_errors']) if ptype == 'role': roles = self._create_role_list(fail_on_errors=no_fail) docs['all'][ptype] = self._create_role_doc(roles.keys(), context.CLIARGS['entry_point'], fail_on_errors=no_fail) elif ptype == 'keyword': names = DocCLI._list_keywords() docs['all'][ptype] = DocCLI._get_keywords_docs(names.keys()) else: plugin_names = self._list_plugins(ptype, None) docs['all'][ptype] = self._get_plugins_docs(ptype, plugin_names, fail_ok=(ptype in ('test', 'filter')), fail_on_errors=no_fail) # reset list after each type to avoid pollution elif listing: if plugin_type == 'keyword': docs = DocCLI._list_keywords() elif plugin_type == 'role': docs = self._create_role_list() else: docs = self._list_plugins(plugin_type, content) else: # here we require a name if len(context.CLIARGS['args']) == 0: raise AnsibleOptionsError("Missing name(s), incorrect options passed for detailed documentation.") if plugin_type == 'keyword': docs = DocCLI._get_keywords_docs(context.CLIARGS['args']) elif plugin_type == 'role': docs = self._create_role_doc(context.CLIARGS['args'], context.CLIARGS['entry_point']) else: # display specific plugin docs docs = self._get_plugins_docs(plugin_type, context.CLIARGS['args']) # Display the docs if do_json: jdump(docs) else: text = [] if plugin_type in C.DOCUMENTABLE_PLUGINS: if listing and docs: self.display_plugin_list(docs) elif context.CLIARGS['show_snippet']: if plugin_type not in SNIPPETS: raise AnsibleError('Snippets are only available for the following plugin' ' types: %s' % ', '.join(SNIPPETS)) for plugin, doc_data in docs.items(): try: textret = DocCLI.format_snippet(plugin, plugin_type, doc_data['doc']) except ValueError as e: display.warning("Unable to construct a snippet for" " '{0}': {1}".format(plugin, to_text(e))) else: text.append(textret) else: # Some changes to how plain text docs are formatted for plugin, doc_data in docs.items(): textret = DocCLI.format_plugin_doc(plugin, plugin_type, doc_data['doc'],
doc_data['examples'], doc_data['return'], doc_data['metadata']) if textret: text.append(textret) else: display.warning("No valid documentation was retrieved from '%s'" % plugin) elif plugin_type == 'role': if context.CLIARGS['list_dir'] and docs: self._display_available_roles(docs) elif docs: self._display_role_doc(docs) elif docs: text = DocCLI.tty_ify(DocCLI._dump_yaml(docs)) if text: DocCLI.pager(''.join(text)) return 0 @staticmethod def get_all_plugins_of_type(plugin_type): loader = getattr(plugin_loader, '%s_loader' % plugin_type) paths = loader._get_paths_with_context() plugins = {} for path_context in paths: plugins.update(list_plugins(plugin_type)) return sorted(plugins.keys()) @staticmethod def get_plugin_metadata(plugin_type, plugin_name): # if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs loader = getattr(plugin_loader, '%s_loader' % plugin_type) result = loader.find_plugin_with_context(plugin_name, mod_type='.py', ignore_deprecated=True, check_aliases=True) if not result.resolved: raise AnsibleError("unable to load {0} plugin named {1} ".format(plugin_type, plugin_name)) filename = result.plugin_resolved_path collection_name = result.plugin_resolved_collection try: doc, __, __, __ = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0), collection_name=collection_name, plugin_type=plugin_type) except Exception: display.vvv(traceback.format_exc()) raise AnsibleError("%s %s at %s has a documentation formatting error or is missing documentation." % (plugin_type, plugin_name, filename)) if doc is None: # Removed plugins don't have any documentation return None return dict( name=plugin_name, namespace=DocCLI.namespace_from_plugin_filepath(filename, plugin_name, loader.package_path), description=doc.get('short_description', "UNKNOWN"), version_added=doc.get('version_added', "UNKNOWN") ) @staticmethod def namespace_from_plugin_filepath(filepath, plugin_name, basedir): if not basedir.endswith('/'): basedir += '/' rel_path = filepath.replace(basedir, '') extension_free = os.path.splitext(rel_path)[0] namespace_only = extension_free.rsplit(plugin_name, 1)[0].strip('/_') clean_ns = namespace_only.replace('/', '.') if clean_ns == '': clean_ns = None return clean_ns @staticmethod def _combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata): # generate extra data if plugin_type == 'module': # is there corresponding action plugin? if plugin in action_loader: doc['has_action'] = True else: doc['has_action'] = False # return everything as one dictionary return {'doc': doc, 'examples': plainexamples, 'return': returndocs, 'metadata': metadata} @staticmethod def format_snippet(plugin, plugin_type, doc): ''' return heavily commented plugin use to insert into play ''' if plugin_type == 'inventory' and doc.get('options', {}).get('plugin'): # these do not take a yaml config that we can write a snippet for raise ValueError('The {0} inventory plugin does not take YAML type config source' ' that can be used with the "auto" plugin so a snippet cannot be' ' created.'.format(plugin)) text = [] if plugin_type == 'lookup': text = _do_lookup_snippet(doc) elif 'options' in doc: text = _do_yaml_snippet(doc) text.append('') return "\n".join(text) @staticmethod def format_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata): collection_name = doc['collection'] # TODO: do we really want this? 
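# Illustrative sketch (not part of the original source): what the relative-path
# logic in namespace_from_plugin_filepath() above produces. The paths below are
# hypothetical.
#
#   DocCLI.namespace_from_plugin_filepath(
#       '/colls/ansible_collections/my_ns/my_coll/plugins/modules/ping.py',
#       'ping', '/colls/ansible_collections/')
#   # -> 'my_ns.my_coll.plugins.modules'
#
#   DocCLI.namespace_from_plugin_filepath(
#       '/site-packages/ansible/modules/ping.py', 'ping',
#       '/site-packages/ansible/modules')
#   # -> None (the plugin file sits directly under the loader's package path)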
# add_collection_to_versions_and_dates(doc, '(unknown)', is_module=(plugin_type == 'module')) # remove_current_collection_from_versions_and_dates(doc, collection_name, is_module=(plugin_type == 'module')) # remove_current_collection_from_versions_and_dates( # returndocs, collection_name, is_module=(plugin_type == 'module'), return_docs=True) # assign from other sections doc['plainexamples'] = plainexamples doc['returndocs'] = returndocs doc['metadata'] = metadata try: text = DocCLI.get_man_text(doc, collection_name, plugin_type) except Exception as e: display.vvv(traceback.format_exc()) raise AnsibleError("Unable to retrieve documentation from '%s' due to: %s" % (plugin, to_native(e)), orig_exc=e) return text def _get_plugin_list_descriptions(self, loader): descs = {} for plugin in self.plugins.keys(): # TODO: move to plugin itself i.e: plugin.get_desc() doc = None filename = Path(to_native(self.plugins[plugin][0])) docerror = None try: doc = read_docstub(filename) except Exception as e: docerror = e # plugin file was empty or had error, let's try other options if doc is None: # handle test/filters that are in file with diff name base = plugin.split('.')[-1] basefile = filename.with_name(base + filename.suffix) for extension in C.DOC_EXTENSIONS: docfile = basefile.with_suffix(extension) try: if docfile.exists(): doc = read_docstub(docfile) except Exception as e: docerror = e if docerror: display.warning("%s has a documentation formatting error: %s" % (plugin, docerror)) continue if not doc or not isinstance(doc, dict): desc = 'UNDOCUMENTED' else: desc = doc.get('short_description', 'INVALID SHORT DESCRIPTION').strip() descs[plugin] = desc return descs @staticmethod def print_paths(finder): ''' Returns a string suitable for printing the search path ''' # Uses a list to get the order right ret = [] for i in finder._get_paths(subdirs=False): i = to_text(i, errors='surrogate_or_strict') if i not in ret: ret.append(i) return os.pathsep.join(ret) @staticmethod def _dump_yaml(struct, flow_style=False): return yaml_dump(struct, default_flow_style=flow_style, default_style="''", Dumper=AnsibleDumper).rstrip('\n') @staticmethod def _indent_lines(text, indent): return DocCLI.tty_ify('\n'.join([indent + line for line in text.split('\n')])) @staticmethod def _format_version_added(version_added, version_added_collection=None): if version_added_collection == 'ansible.builtin': version_added_collection = 'ansible-core' # In ansible-core, version_added can be 'historical' if version_added == 'historical': return 'historical' if version_added_collection: version_added = '%s of %s' % (version_added, version_added_collection) return 'version %s' % (version_added, ) @staticmethod def add_fields(text, fields, limit, opt_indent, return_values=False, base_indent=''): for o in sorted(fields): # Create a copy so we don't modify the original (in case YAML anchors have been used) opt = dict(fields[o]) # required is used as indicator and removed required = opt.pop('required', False) if not isinstance(required, bool): raise AnsibleError("Incorrect value for 'Required', a boolean is needed: %s" % required) if required: opt_leadin = "=" else: opt_leadin = "-" text.append("%s%s %s" % (base_indent, opt_leadin, o)) # description is specifically formatted and can either be string or list of strings if 'description' not in opt: raise AnsibleError("All (sub-)options and return values must have a 'description' field") if is_sequence(opt['description']): for entry_idx, entry in enumerate(opt['description'], 1): if not 
isinstance(entry, string_types): raise AnsibleError("Expected string in description of %s at index %s, got %s" % (o, entry_idx, type(entry))) text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent)) else: if not isinstance(opt['description'], string_types): raise AnsibleError("Expected string in description of %s, got %s" % (o, type(opt['description']))) text.append(textwrap.fill(DocCLI.tty_ify(opt['description']), limit, initial_indent=opt_indent, subsequent_indent=opt_indent)) del opt['description'] suboptions = [] for subkey in ('options', 'suboptions', 'contains', 'spec'): if subkey in opt: suboptions.append((subkey, opt.pop(subkey))) if not required and not return_values and 'default' not in opt: opt['default'] = None # sanitize config items conf = {} for config in ('env', 'ini', 'yaml', 'vars', 'keyword'): if config in opt and opt[config]: # Create a copy so we don't modify the original (in case YAML anchors have been used) conf[config] = [dict(item) for item in opt.pop(config)] for ignore in DocCLI.IGNORE: for item in conf[config]: if ignore in item: del item[ignore] # reformat cli options if 'cli' in opt and opt['cli']: conf['cli'] = [] for cli in opt['cli']: if 'option' not in cli: conf['cli'].append({'name': cli['name'], 'option': '--%s' % cli['name'].replace('_', '-')}) else: conf['cli'].append(cli) del opt['cli'] # add custom header for conf if conf: text.append(DocCLI._indent_lines(DocCLI._dump_yaml({'set_via': conf}), opt_indent)) # these we handle at the end of generic option processing version_added = opt.pop('version_added', None) version_added_collection = opt.pop('version_added_collection', None) # general processing for options for k in sorted(opt): if k.startswith('_'): continue if is_sequence(opt[k]): text.append(DocCLI._indent_lines('%s: %s' % (k, DocCLI._dump_yaml(opt[k], flow_style=True)), opt_indent)) else: text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k: opt[k]}), opt_indent)) if version_added: text.append("%sadded in: %s\n" % (opt_indent, DocCLI._format_version_added(version_added, version_added_collection))) for subkey, subdata in suboptions: text.append('') text.append("%s%s:\n" % (opt_indent, subkey.upper())) DocCLI.add_fields(text, subdata, limit, opt_indent + ' ', return_values, opt_indent) if not suboptions: text.append('') def get_role_man_text(self, role, role_json): '''Generate text for the supplied role suitable for display. This is similar to get_man_text(), but roles are different enough that we have a separate method for formatting their display. :param role: The role name. :param role_json: The JSON for the given role as returned from _create_role_doc(). :returns: An array of text suitable for displaying to screen. 
''' text = [] opt_indent = " " pad = display.columns * 0.20 limit = max(display.columns - int(pad), 70) text.append("> %s (%s)\n" % (role.upper(), role_json.get('path'))) for entry_point in role_json['entry_points']: doc = role_json['entry_points'][entry_point] if doc.get('short_description'): text.append("ENTRY POINT: %s - %s\n" % (entry_point, doc.get('short_description'))) else: text.append("ENTRY POINT: %s\n" % entry_point) if doc.get('description'): if isinstance(doc['description'], list): desc = " ".join(doc['description']) else: desc = doc['description'] text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent, subsequent_indent=opt_indent)) if doc.get('options'): text.append("OPTIONS (= is mandatory):\n") DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent) text.append('') if doc.get('attributes'): text.append("ATTRIBUTES:\n") text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent)) text.append('') # generic elements we will handle identically for k in ('author',): if k not in doc: continue if isinstance(doc[k], string_types): text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent))) elif isinstance(doc[k], (list, tuple)): text.append('%s: %s' % (k.upper(), ', '.join(doc[k]))) else: # use empty indent since this affects the start of the yaml doc, not its keys text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), '')) text.append('') return text @staticmethod def get_man_text(doc, collection_name='', plugin_type=''): # Create a copy so we don't modify the original doc = dict(doc) DocCLI.IGNORE = DocCLI.IGNORE + (context.CLIARGS['type'],) opt_indent = " " text = [] pad = display.columns * 0.20 limit = max(display.columns - int(pad), 70) plugin_name = doc.get(context.CLIARGS['type'], doc.get('name')) or doc.get('plugin_type') or plugin_type if collection_name: plugin_name = '%s.%s' % (collection_name, plugin_name) text.append("> %s (%s)\n" % (plugin_name.upper(), doc.pop('filename'))) if isinstance(doc['description'], list): desc = " ".join(doc.pop('description')) else: desc = doc.pop('description') text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent, subsequent_indent=opt_indent)) if 'version_added' in doc: version_added = doc.pop('version_added') version_added_collection = doc.pop('version_added_collection', None) text.append("ADDED IN: %s\n" % DocCLI._format_version_added(version_added, version_added_collection)) if doc.get('deprecated', False): text.append("DEPRECATED: \n") if isinstance(doc['deprecated'], dict): if 'removed_at_date' in doc['deprecated']: text.append( "\tReason: %(why)s\n\tWill be removed in a release after %(removed_at_date)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated') ) else: if 'version' in doc['deprecated'] and 'removed_in' not in doc['deprecated']: doc['deprecated']['removed_in'] = doc['deprecated']['version'] text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated')) else: text.append("%s" % doc.pop('deprecated')) text.append("\n") if doc.pop('has_action', False): text.append(" * note: %s\n" % "This module has a corresponding action plugin.") if doc.get('options', False): text.append("OPTIONS (= is mandatory):\n") DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent) text.append('') if doc.get('attributes', False): text.append("ATTRIBUTES:\n") 
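# A quick worked example (not from the original source) of the wrap-width
# computation used by get_role_man_text() and get_man_text() above:
#   with display.columns == 100: pad = 20.0, limit = max(100 - 20, 70) == 80
#   with display.columns == 80:  pad = 16.0, limit = max(80 - 16, 70) == 70
# i.e. output wraps at roughly 80% of the terminal width, but never below 70.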
text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent)) text.append('') if doc.get('notes', False): text.append("NOTES:") for note in doc['notes']: text.append(textwrap.fill(DocCLI.tty_ify(note), limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent)) text.append('') text.append('') del doc['notes'] if doc.get('seealso', False): text.append("SEE ALSO:") for item in doc['seealso']: if 'module' in item: text.append(textwrap.fill(DocCLI.tty_ify('Module %s' % item['module']), limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent)) description = item.get('description') if description is None and item['module'].startswith('ansible.builtin.'): description = 'The official documentation on the %s module.' % item['module'] if description is not None: text.append(textwrap.fill(DocCLI.tty_ify(description), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) if item['module'].startswith('ansible.builtin.'): relative_url = 'collections/%s_module.html' % item['module'].replace('.', '/', 2) text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent)) elif 'plugin' in item and 'plugin_type' in item: plugin_suffix = ' plugin' if item['plugin_type'] not in ('module', 'role') else '' text.append(textwrap.fill(DocCLI.tty_ify('%s%s %s' % (item['plugin_type'].title(), plugin_suffix, item['plugin'])), limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent)) description = item.get('description') if description is None and item['plugin'].startswith('ansible.builtin.'): description = 'The official documentation on the %s %s%s.' % (item['plugin'], item['plugin_type'], plugin_suffix) if description is not None: text.append(textwrap.fill(DocCLI.tty_ify(description), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) if item['plugin'].startswith('ansible.builtin.'): relative_url = 'collections/%s_%s.html' % (item['plugin'].replace('.', '/', 2), item['plugin_type']) text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent)) elif 'name' in item and 'link' in item and 'description' in item: text.append(textwrap.fill(DocCLI.tty_ify(item['name']), limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent)) text.append(textwrap.fill(DocCLI.tty_ify(item['description']), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) text.append(textwrap.fill(DocCLI.tty_ify(item['link']), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) elif 'ref' in item and 'description' in item: text.append(textwrap.fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']), limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent)) text.append(textwrap.fill(DocCLI.tty_ify(item['description']), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' ')) text.append('') text.append('') del doc['seealso'] if doc.get('requirements', False): req = ", ".join(doc.pop('requirements')) text.append("REQUIREMENTS:%s\n" % textwrap.fill(DocCLI.tty_ify(req), limit - 16, initial_indent=" ", 
subsequent_indent=opt_indent)) # Generic handler for k in sorted(doc): if k in DocCLI.IGNORE or not doc[k]: continue if isinstance(doc[k], string_types): text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent))) elif isinstance(doc[k], (list, tuple)): text.append('%s: %s' % (k.upper(), ', '.join(doc[k]))) else: # use empty indent since this affects the start of the yaml doc, not its keys text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), '')) del doc[k] text.append('') if doc.get('plainexamples', False): text.append("EXAMPLES:") text.append('') if isinstance(doc['plainexamples'], string_types): text.append(doc.pop('plainexamples').strip()) else: try: text.append(yaml_dump(doc.pop('plainexamples'), indent=2, default_flow_style=False)) except Exception as e: raise AnsibleParserError("Unable to parse examples section", orig_exc=e) text.append('') text.append('') if doc.get('returndocs', False): text.append("RETURN VALUES:") DocCLI.add_fields(text, doc.pop('returndocs'), limit, opt_indent, return_values=True) return "\n".join(text) def _do_yaml_snippet(doc): text = [] mdesc = DocCLI.tty_ify(doc['short_description']) module = doc.get('module') if module: # this is actually a usable task! text.append("- name: %s" % (mdesc)) text.append(" %s:" % (module)) else: # just a comment, hopefully useful yaml file text.append("# %s:" % doc.get('plugin', doc.get('name'))) pad = 29 subdent = '# '.rjust(pad + 2) limit = display.columns - pad for o in sorted(doc['options'].keys()): opt = doc['options'][o] if isinstance(opt['description'], string_types): desc = DocCLI.tty_ify(opt['description']) else: desc = DocCLI.tty_ify(" ".join(opt['description'])) required = opt.get('required', False) if not isinstance(required, bool): raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required) o = '%s:' % o if module: if required: desc = "(required) %s" % desc text.append(" %-20s # %s" % (o, textwrap.fill(desc, limit, subsequent_indent=subdent))) else: if required: default = '(required)' else: default = opt.get('default', 'None') text.append("%s %-9s # %s" % (o, default, textwrap.fill(desc, limit, subsequent_indent=subdent, max_lines=3))) return text def _do_lookup_snippet(doc): text = [] snippet = "lookup('%s', " % doc.get('plugin', doc.get('name')) comment = [] for o in sorted(doc['options'].keys()): opt = doc['options'][o] comment.append('# %s(%s): %s' % (o, opt.get('type', 'string'), opt.get('description', ''))) if o in ('_terms', '_raw', '_list'): # these are 'list of arguments' snippet += '< %s >' % (o) continue required = opt.get('required', False) if not isinstance(required, bool): raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required) if required: default = '' else: default = opt.get('default', 'None') if opt.get('type') in ('string', 'str'): snippet += ", %s='%s'" % (o, default) else: snippet += ', %s=%s' % (o, default) snippet += ")" if comment: text.extend(comment) text.append('') text.append(snippet) return text def main(args=None): DocCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/galaxy.py0000755000000000000000000027161314556006441017517 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2013, James Cammarata # Copyright: (c) 2018-2021, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, 
division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import argparse import functools import json import os.path import pathlib import re import shutil import sys import textwrap import time import typing as t from dataclasses import dataclass from yaml.error import YAMLError import ansible.constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info from ansible.galaxy.api import GalaxyAPI, GalaxyError from ansible.galaxy.collection import ( build_collection, download_collections, find_existing_collections, install_collections, publish_collection, validate_collection_name, validate_collection_path, verify_collections, SIGNATURE_COUNT_RE, ) from ansible.galaxy.collection.concrete_artifact_manager import ( ConcreteArtifactsManager, ) from ansible.galaxy.collection.gpg import GPG_ERROR_MAP from ansible.galaxy.dependency_resolution.dataclasses import Requirement from ansible.galaxy.role import GalaxyRole from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel from ansible.module_utils.ansible_release import __version__ as ansible_version from ansible.module_utils.common.collections import is_iterable from ansible.module_utils.common.yaml import yaml_dump, yaml_load from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text from ansible.module_utils import six from ansible.parsing.dataloader import DataLoader from ansible.parsing.yaml.loader import AnsibleLoader from ansible.playbook.role.requirement import RoleRequirement from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.display import Display from ansible.utils.plugin_docs import get_versioned_doclink display = Display() urlparse = six.moves.urllib.parse.urlparse # config definition by position: name, required, type SERVER_DEF = [ ('url', True, 'str'), ('username', False, 'str'), ('password', False, 'str'), ('token', False, 'str'), ('auth_url', False, 'str'), ('api_version', False, 'int'), ('validate_certs', False, 'bool'), ('client_id', False, 'str'), ('timeout', False, 'int'), ] # config definition fields SERVER_ADDITIONAL = { 'api_version': {'default': None, 'choices': [2, 3]}, 'validate_certs': {'cli': [{'name': 'validate_certs'}]}, 'timeout': {'default': C.GALAXY_SERVER_TIMEOUT, 'cli': [{'name': 'timeout'}]}, 'token': {'default': None}, } def with_collection_artifacts_manager(wrapped_method): """Inject an artifacts manager if not passed explicitly. This decorator constructs a ConcreteArtifactsManager and maintains the related temporary directory auto-cleanup around the target method invocation. 
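    A minimal usage sketch (hypothetical method name; the execute_* methods
    further below follow this pattern):

        @with_collection_artifacts_manager
        def execute_something(self, artifacts_manager=None):
            ...  # artifacts_manager is injected, tmpdir cleanup is handled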
""" @functools.wraps(wrapped_method) def method_wrapper(*args, **kwargs): if 'artifacts_manager' in kwargs: return wrapped_method(*args, **kwargs) # FIXME: use validate_certs context from Galaxy servers when downloading collections # .get used here for when this is used in a non-CLI context artifacts_manager_kwargs = {'validate_certs': context.CLIARGS.get('resolved_validate_certs', True)} keyring = context.CLIARGS.get('keyring', None) if keyring is not None: artifacts_manager_kwargs.update({ 'keyring': GalaxyCLI._resolve_path(keyring), 'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None), 'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None), }) with ConcreteArtifactsManager.under_tmpdir( C.DEFAULT_LOCAL_TMP, **artifacts_manager_kwargs ) as concrete_artifact_cm: kwargs['artifacts_manager'] = concrete_artifact_cm return wrapped_method(*args, **kwargs) return method_wrapper def _display_header(path, h1, h2, w1=10, w2=7): display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format( path, h1, h2, '-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header '-' * max([len(h2), w2]), cwidth=w1, vwidth=w2, )) def _display_role(gr): install_info = gr.install_info version = None if install_info: version = install_info.get("version", None) if not version: version = "(unknown version)" display.display("- %s, %s" % (gr.name, version)) def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7): display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format( fqcn=to_text(collection.fqcn), version=collection.ver, cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header vwidth=max(vwidth, min_vwidth) )) def _get_collection_widths(collections): if not is_iterable(collections): collections = (collections, ) fqcn_set = {to_text(c.fqcn) for c in collections} version_set = {to_text(c.ver) for c in collections} fqcn_length = len(max(fqcn_set or [''], key=len)) version_length = len(max(version_set or [''], key=len)) return fqcn_length, version_length def validate_signature_count(value): match = re.match(SIGNATURE_COUNT_RE, value) if match is None: raise ValueError(f"{value} is not a valid signature count value") return value @dataclass class RoleDistributionServer: _api: t.Union[GalaxyAPI, None] api_servers: list[GalaxyAPI] @property def api(self): if self._api: return self._api for server in self.api_servers: try: if u'v1' in server.available_api_versions: self._api = server break except Exception: continue if not self._api: self._api = self.api_servers[0] return self._api class GalaxyCLI(CLI): '''Command to manage Ansible roles and collections. None of the CLI tools are designed to run concurrently with themselves. Use an external scheduler and/or locking to ensure there are no clashing operations. 
''' name = 'ansible-galaxy' SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url") def __init__(self, args): self._raw_args = args self._implicit_role = False if len(args) > 1: # Inject role into sys.argv[1] as a backwards compatibility step if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args: # TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice args.insert(1, 'role') self._implicit_role = True # since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization if args[1:3] == ['role', 'login']: display.error( "The login command was removed in late 2020. An API key is now required to publish roles or collections " "to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the " "ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` " "command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH))) sys.exit(1) self.api_servers = [] self.galaxy = None self.lazy_role_api = None super(GalaxyCLI, self).__init__(args) def init_parser(self): ''' create an options parser for bin/ansible ''' super(GalaxyCLI, self).init_parser( desc="Perform various Role and Collection related operations.", ) # Common arguments that apply to more than 1 action common = opt_help.ArgumentParser(add_help=False) common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL') common.add_argument('--api-version', type=int, choices=[2, 3], help=argparse.SUPPRESS) # Hidden argument that should only be used in our tests common.add_argument('--token', '--api-key', dest='api_key', help='The Ansible Galaxy API key which can be found at ' 'https://galaxy.ansible.com/me/preferences.') common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None) # --timeout uses the default None to handle two different scenarios. # * --timeout > C.GALAXY_SERVER_TIMEOUT for non-configured servers # * --timeout > server-specific timeout > C.GALAXY_SERVER_TIMEOUT for configured servers. common.add_argument('--timeout', dest='timeout', type=int, help="The time to wait for operations against the galaxy server, defaults to 60s.") opt_help.add_verbosity_options(common) force = opt_help.ArgumentParser(add_help=False) force.add_argument('-f', '--force', dest='force', action='store_true', default=False, help='Force overwriting an existing role or collection') github = opt_help.ArgumentParser(add_help=False) github.add_argument('github_user', help='GitHub username') github.add_argument('github_repo', help='GitHub repository') offline = opt_help.ArgumentParser(add_help=False) offline.add_argument('--offline', dest='offline', default=False, action='store_true', help="Don't query the galaxy API when creating roles") default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '') roles_path = opt_help.ArgumentParser(add_help=False) roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True), default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction, help='The path to the directory containing your roles. 
The default is the first ' 'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path) collections_path = opt_help.ArgumentParser(add_help=False) collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True), action=opt_help.PrependListAction, help="One or more directories to search for collections in addition " "to the default COLLECTIONS_PATHS. Separate multiple paths " "with '{0}'.".format(os.path.pathsep)) cache_options = opt_help.ArgumentParser(add_help=False) cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true', default=False, help='Clear the existing server response cache.') cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False, help='Do not use the server response cache.') # Add sub parser for the Galaxy role type (role or collection) type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type') type_parser.required = True # Add sub parser for the Galaxy collection actions collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.') collection.set_defaults(func=self.execute_collection) # to satisfy doc build collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action') collection_parser.required = True self.add_download_options(collection_parser, parents=[common, cache_options]) self.add_init_options(collection_parser, parents=[common, force]) self.add_build_options(collection_parser, parents=[common, force]) self.add_publish_options(collection_parser, parents=[common]) self.add_install_options(collection_parser, parents=[common, force, cache_options]) self.add_list_options(collection_parser, parents=[common, collections_path]) self.add_verify_options(collection_parser, parents=[common, collections_path]) # Add sub parser for the Galaxy role actions role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.') role.set_defaults(func=self.execute_role) # to satisfy doc build role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action') role_parser.required = True self.add_init_options(role_parser, parents=[common, force, offline]) self.add_remove_options(role_parser, parents=[common, roles_path]) self.add_delete_options(role_parser, parents=[common, github]) self.add_list_options(role_parser, parents=[common, roles_path]) self.add_search_options(role_parser, parents=[common]) self.add_import_options(role_parser, parents=[common, github]) self.add_setup_options(role_parser, parents=[common, roles_path]) self.add_info_options(role_parser, parents=[common, roles_path, offline]) self.add_install_options(role_parser, parents=[common, force, roles_path]) def add_download_options(self, parser, parents=None): download_parser = parser.add_parser('download', parents=parents, help='Download collections and their dependencies as a tarball for an ' 'offline install.') download_parser.set_defaults(func=self.execute_download) download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*') download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False, help="Don't download collection(s) listed as dependencies.") download_parser.add_argument('-p', '--download-path', dest='download_path', default='./collections', help='The directory to download the collections to.') download_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of 
collections to be downloaded.') download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true', help='Include pre-release versions. Semantic versioning pre-releases are ignored by default') def add_init_options(self, parser, parents=None): galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role' init_parser = parser.add_parser('init', parents=parents, help='Initialize new {0} with the base structure of a ' '{0}.'.format(galaxy_type)) init_parser.set_defaults(func=self.execute_init) init_parser.add_argument('--init-path', dest='init_path', default='./', help='The path in which the skeleton {0} will be created. The default is the ' 'current working directory.'.format(galaxy_type)) init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type), default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON, help='The path to a {0} skeleton that the new {0} should be based ' 'upon.'.format(galaxy_type)) obj_name_kwargs = {} if galaxy_type == 'collection': obj_name_kwargs['type'] = validate_collection_name init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()), **obj_name_kwargs) if galaxy_type == 'role': init_parser.add_argument('--type', dest='role_type', action='store', default='default', help="Initialize using an alternate role type. Valid types include: 'container', " "'apb' and 'network'.") def add_remove_options(self, parser, parents=None): remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.') remove_parser.set_defaults(func=self.execute_remove) remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+') def add_delete_options(self, parser, parents=None): delete_parser = parser.add_parser('delete', parents=parents, help='Removes the role from Galaxy. 
It does not remove or alter the actual ' 'GitHub repository.') delete_parser.set_defaults(func=self.execute_delete) def add_list_options(self, parser, parents=None): galaxy_type = 'role' if parser.metavar == 'COLLECTION_ACTION': galaxy_type = 'collection' list_parser = parser.add_parser('list', parents=parents, help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type)) list_parser.set_defaults(func=self.execute_list) list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type) if galaxy_type == 'collection': list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human', help="Format to display the list of collections in.") def add_search_options(self, parser, parents=None): search_parser = parser.add_parser('search', parents=parents, help='Search the Galaxy database by tags, platforms, author and multiple ' 'keywords.') search_parser.set_defaults(func=self.execute_search) search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by') search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by') search_parser.add_argument('--author', dest='author', help='GitHub username') search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*') def add_import_options(self, parser, parents=None): import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server') import_parser.set_defaults(func=self.execute_import) import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True, help="Don't wait for import results.") import_parser.add_argument('--branch', dest='reference', help='The name of a branch to import. Defaults to the repository\'s default branch ' '(usually master)') import_parser.add_argument('--role-name', dest='role_name', help='The name the role should have, if different than the repo name') import_parser.add_argument('--status', dest='check_status', action='store_true', default=False, help='Check the status of the most recent import request for given github_' 'user/github_repo.') def add_setup_options(self, parser, parents=None): setup_parser = parser.add_parser('setup', parents=parents, help='Manage the integration between Galaxy and the given source.') setup_parser.set_defaults(func=self.execute_setup) setup_parser.add_argument('--remove', dest='remove_id', default=None, help='Remove the integration matching the provided ID value. Use --list to see ' 'ID values.') setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False, help='List all of your integrations.') setup_parser.add_argument('source', help='Source') setup_parser.add_argument('github_user', help='GitHub username') setup_parser.add_argument('github_repo', help='GitHub repository') setup_parser.add_argument('secret', help='Secret') def add_info_options(self, parser, parents=None): info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.') info_parser.set_defaults(func=self.execute_info) info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]') def add_verify_options(self, parser, parents=None): galaxy_type = 'collection' verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) ' 'found on the server and the installed copy. 
This does not verify dependencies.') verify_parser.set_defaults(func=self.execute_verify) verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. ' 'This is mutually exclusive with --requirements-file.') verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False, help='Ignore errors during verification and continue with the next specified collection.') verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False, help='Validate collection integrity locally without contacting server for ' 'canonical manifest hash.') verify_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of collections to be verified.') verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING, help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx? verify_parser.add_argument('--signature', dest='signatures', action='append', help='An additional signature source to verify the authenticity of the MANIFEST.json before using ' 'it to verify the rest of the contents of a collection from a Galaxy server. Use in ' 'conjunction with a positional collection name (mutually exclusive with --requirements-file).') valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \ 'or all to signify that all signatures must be used to verify the collection. ' \ 'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).' ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \ 'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \ 'Note: specify these after positional arguments or use -- to separate them.' verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count, help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT) verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append', help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) verify_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+', help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) def add_install_options(self, parser, parents=None): galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role' args_kwargs = {} if galaxy_type == 'collection': args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \ 'mutually exclusive with --requirements-file.' ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \ 'collection. This will not ignore dependency conflict errors.' else: args_kwargs['help'] = 'Role name, URL or tar file' ignore_errors_help = 'Ignore errors and continue with the next specified role.' 
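# Background for the signature-count options added below (a sketch, not part of
# the original source): validate_signature_count() accepts strings matching
# SIGNATURE_COUNT_RE, e.g. '2', 'all', '+2' or '+all', where a leading '+'
# additionally fails the operation when no valid signatures are found at all.
#
#   validate_signature_count('+all')   # -> '+all' (accepted)
#   validate_signature_count('never')  # raises ValueError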
install_parser = parser.add_parser('install', parents=parents, help='Install {0}(s) from file(s), URL(s) or Ansible ' 'Galaxy'.format(galaxy_type)) install_parser.set_defaults(func=self.execute_install) install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs) install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False, help=ignore_errors_help) install_exclusive = install_parser.add_mutually_exclusive_group() install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False, help="Don't download {0}s listed as dependencies.".format(galaxy_type)) install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False, help="Force overwriting an existing {0} and its " "dependencies.".format(galaxy_type)) valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \ 'or -1 to signify that all signatures must be used to verify the collection. ' \ 'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).' ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \ 'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \ 'Note: specify these after positional arguments or use -- to separate them.' if galaxy_type == 'collection': install_parser.add_argument('-p', '--collections-path', dest='collections_path', default=self._get_default_collection_path(), help='The path to the directory containing your collections.') install_parser.add_argument('-r', '--requirements-file', dest='requirements', help='A file containing a list of collections to be installed.') install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true', help='Include pre-release versions. Semantic versioning pre-releases are ignored by default') install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False, help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided') install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING, help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx? install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true', default=C.GALAXY_DISABLE_GPG_VERIFY, help='Disable GPG signature verification when installing collections from a Galaxy server') install_parser.add_argument('--signature', dest='signatures', action='append', help='An additional signature source to verify the authenticity of the MANIFEST.json before ' 'installing the collection from a Galaxy server. 
Use in conjunction with a positional ' 'collection name (mutually exclusive with --requirements-file).') install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count, help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT) install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append', help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) install_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+', help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES, choices=list(GPG_ERROR_MAP.keys())) install_parser.add_argument('--offline', dest='offline', action='store_true', default=False, help='Install collection artifacts (tarballs) without contacting any distribution servers. ' 'This does not apply to collections in remote Git repositories or URLs to remote tarballs.' ) else: install_parser.add_argument('-r', '--role-file', dest='requirements', help='A file containing a list of roles to be installed.') r_re = re.compile(r'^(?] for the values url, username, password, and token. config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF) defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data() C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs) # resolve the config created options above with existing config and user options server_options = C.config.get_plugin_options('galaxy_server', server_key) # auth_url is used to create the token, but not directly by GalaxyAPI, so # it doesn't need to be passed as kwarg to GalaxyApi, same for others we pop here auth_url = server_options.pop('auth_url') client_id = server_options.pop('client_id') token_val = server_options['token'] or NoTokenSentinel username = server_options['username'] api_version = server_options.pop('api_version') if server_options['validate_certs'] is None: server_options['validate_certs'] = context.CLIARGS['resolved_validate_certs'] validate_certs = server_options['validate_certs'] # This allows a user to explicitly force use of an API version when # multiple versions are supported. This was added for testing # against pulp_ansible and I'm not sure it has a practical purpose # outside of this use case. As such, this option is not documented # as of now if api_version: display.warning( f'The specified "api_version" configuration for the galaxy server "{server_key}" is ' 'not a public configuration, and may be removed at any time without warning.' ) server_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version} # default case if no auth info is provided. 
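# Summary of the token-selection precedence implemented below (added note, not
# part of the original source):
#   username set        -> BasicAuthToken(username, password)
#   token and auth_url  -> KeycloakToken(access_token=..., auth_url=...)
#   token only          -> GalaxyToken(token=...)
#   neither             -> None (anonymous access)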
server_options['token'] = None if username: server_options['token'] = BasicAuthToken(username, server_options['password']) else: if token_val: if auth_url: server_options['token'] = KeycloakToken(access_token=token_val, auth_url=auth_url, validate_certs=validate_certs, client_id=client_id) else: # The galaxy v1 / github / django / 'Token' server_options['token'] = GalaxyToken(token=token_val) server_options.update(galaxy_options) config_servers.append(GalaxyAPI( self.galaxy, server_key, priority=server_priority, **server_options )) cmd_server = context.CLIARGS['api_server'] if context.CLIARGS['api_version']: api_version = context.CLIARGS['api_version'] display.warning( 'The --api-version is not a public argument, and may be removed at any time without warning.' ) galaxy_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version} cmd_token = GalaxyToken(token=context.CLIARGS['api_key']) validate_certs = context.CLIARGS['resolved_validate_certs'] default_server_timeout = context.CLIARGS['timeout'] if context.CLIARGS['timeout'] is not None else C.GALAXY_SERVER_TIMEOUT if cmd_server: # Cmd args take precedence over the config entry but first check if the arg was a name and use that config # entry, otherwise create a new API entry for the server specified. config_server = next((s for s in config_servers if s.name == cmd_server), None) if config_server: self.api_servers.append(config_server) else: self.api_servers.append(GalaxyAPI( self.galaxy, 'cmd_arg', cmd_server, token=cmd_token, priority=len(config_servers) + 1, validate_certs=validate_certs, timeout=default_server_timeout, **galaxy_options )) else: self.api_servers = config_servers # Default to C.GALAXY_SERVER if no servers were defined if len(self.api_servers) == 0: self.api_servers.append(GalaxyAPI( self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token, priority=0, validate_certs=validate_certs, timeout=default_server_timeout, **galaxy_options )) # checks api versions once a GalaxyRole makes an api call # self.api can be used to evaluate the best server immediately self.lazy_role_api = RoleDistributionServer(None, self.api_servers) return context.CLIARGS['func']() @property def api(self): return self.lazy_role_api.api def _get_default_collection_path(self): return C.COLLECTIONS_PATHS[0] def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True): """ Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2 requirements file formats: # v1 (roles only) - src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball. name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL. scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git. version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master. include: Path to additional requirements.yml files. # v2 (roles and collections) --- roles: # Same as v1 format just under the roles key collections: - namespace.collection - name: namespace.collection version: version identifier, multiple identifiers are separated by ',' source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST type: git|file|url|galaxy :param requirements_file: The path to the requirements file. 
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False. :param artifacts_manager: Artifacts manager. :return: a dict containing roles and collections found in the requirements file. """ requirements = { 'roles': [], 'collections': [], } b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict') if not os.path.exists(b_requirements_file): raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file)) display.vvv("Reading requirement file at '%s'" % requirements_file) with open(b_requirements_file, 'rb') as req_obj: try: file_requirements = yaml_load(req_obj) except YAMLError as err: raise AnsibleError( "Failed to parse the requirements yml at '%s' with the following error:\n%s" % (to_native(requirements_file), to_native(err))) if file_requirements is None: raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file)) def parse_role_req(requirement): if "include" not in requirement: role = RoleRequirement.role_yaml_parse(requirement) display.vvv("found role %s in yaml file" % to_text(role)) if "name" not in role and "src" not in role: raise AnsibleError("Must specify name or src for role") return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)] else: b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict") if not os.path.isfile(b_include_path): raise AnsibleError("Failed to find include requirements file '%s' in '%s'" % (to_native(b_include_path), to_native(requirements_file))) with open(b_include_path, 'rb') as f_include: try: return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in (RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))] except Exception as e: raise AnsibleError("Unable to load data from include requirements file: %s %s" % (to_native(requirements_file), to_native(e))) if isinstance(file_requirements, list): # Older format that contains only roles if not allow_old_format: raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains " "a list of collections to install") for role_req in file_requirements: requirements['roles'] += parse_role_req(role_req) elif isinstance(file_requirements, dict): # Newer format with a collections and/or roles key extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections'])) if extra_keys: raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements " "file. Found: %s" % (to_native(", ".join(extra_keys)))) for role_req in file_requirements.get('roles') or []: requirements['roles'] += parse_role_req(role_req) requirements['collections'] = [ Requirement.from_requirement_dict( self._init_coll_req_dict(collection_req), artifacts_manager, validate_signature_options, ) for collection_req in file_requirements.get('collections') or [] ] else: raise AnsibleError(f"Expecting requirements yaml to be a list or dictionary but got {type(file_requirements).__name__}") return requirements def _init_coll_req_dict(self, coll_req): if not isinstance(coll_req, dict): # Assume it's a string: return {'name': coll_req} if ( 'name' not in coll_req or not coll_req.get('source') or coll_req.get('type', 'galaxy') != 'galaxy' ): return coll_req # Try and match up the requirement source with our list of Galaxy API # servers defined in the config, otherwise create a server with that # URL without any auth. 
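# An illustrative requirements.yml entry (hypothetical names and URL) that
# exercises this source lookup:
#
#   collections:
#     - name: my_ns.my_coll
#       source: https://galaxy.example.com
#       type: galaxy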
coll_req['source'] = next( iter( srvr for srvr in self.api_servers if coll_req['source'] in {srvr.name, srvr.api_server} ), GalaxyAPI( self.galaxy, 'explicit_requirement_{name!s}'.format( name=coll_req['name'], ), coll_req['source'], validate_certs=context.CLIARGS['resolved_validate_certs'], ), ) return coll_req @staticmethod def exit_without_ignore(rc=1): """ Exits with the specified return code unless the option --ignore-errors was specified """ if not context.CLIARGS['ignore_errors']: raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.') @staticmethod def _display_role_info(role_info): text = [u"", u"Role: %s" % to_text(role_info['name'])] # Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description']. galaxy_info = role_info.get('galaxy_info', {}) description = role_info.get('description', galaxy_info.get('description', '')) text.append(u"\tdescription: %s" % description) for k in sorted(role_info.keys()): if k in GalaxyCLI.SKIP_INFO_KEYS: continue if isinstance(role_info[k], dict): text.append(u"\t%s:" % (k)) for key in sorted(role_info[k].keys()): if key in GalaxyCLI.SKIP_INFO_KEYS: continue text.append(u"\t\t%s: %s" % (key, role_info[k][key])) else: text.append(u"\t%s: %s" % (k, role_info[k])) # make sure we have a trailing newline returned text.append(u"") return u'\n'.join(text) @staticmethod def _resolve_path(path): return os.path.abspath(os.path.expanduser(os.path.expandvars(path))) @staticmethod def _get_skeleton_galaxy_yml(template_path, inject_data): with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj: meta_template = to_text(template_obj.read(), errors='surrogate_or_strict') galaxy_meta = get_collections_galaxy_meta_info() required_config = [] optional_config = [] for meta_entry in galaxy_meta: config_list = required_config if meta_entry.get('required', False) else optional_config value = inject_data.get(meta_entry['key'], None) if not value: meta_type = meta_entry.get('type', 'str') if meta_type == 'str': value = '' elif meta_type == 'list': value = [] elif meta_type == 'dict': value = {} meta_entry['value'] = value config_list.append(meta_entry) link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)") const_pattern = re.compile(r"C\(([^)]+)\)") def comment_ify(v): if isinstance(v, list): v = ". ".join([l.rstrip('.') for l in v]) v = link_pattern.sub(r"\1 <\2>", v) v = const_pattern.sub(r"'\1'", v) return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False) loader = DataLoader() templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config}) templar.environment.filters['comment_ify'] = comment_ify meta_value = templar.template(meta_template) return meta_value def _require_one_of_collections_requirements( self, collections, requirements_file, signatures=None, artifacts_manager=None, ): if collections and requirements_file: raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.") elif not collections and not requirements_file: raise AnsibleError("You must specify a collection name or a requirements file.") elif requirements_file: if signatures is not None: raise AnsibleError( "The --signatures option and --requirements-file are mutually exclusive. " "Use the --signatures with positional collection_name args or provide a " "'signatures' key for requirements in the --requirements-file." 
) requirements_file = GalaxyCLI._resolve_path(requirements_file) requirements = self._parse_requirements_file( requirements_file, allow_old_format=False, artifacts_manager=artifacts_manager, ) else: requirements = { 'collections': [ Requirement.from_string(coll_input, artifacts_manager, signatures) for coll_input in collections ], 'roles': [], } return requirements ############################ # execute actions ############################ def execute_role(self): """ Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init as listed below. """ # To satisfy doc build pass def execute_collection(self): """ Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as listed below. """ # To satisfy doc build pass def execute_build(self): """ Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy. By default, this command builds from the current working directory. You can optionally pass in the collection input path (where the ``galaxy.yml`` file is). """ force = context.CLIARGS['force'] output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path']) b_output_path = to_bytes(output_path, errors='surrogate_or_strict') if not os.path.exists(b_output_path): os.makedirs(b_output_path) elif os.path.isfile(b_output_path): raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path)) for collection_path in context.CLIARGS['args']: collection_path = GalaxyCLI._resolve_path(collection_path) build_collection( to_text(collection_path, errors='surrogate_or_strict'), to_text(output_path, errors='surrogate_or_strict'), force, ) @with_collection_artifacts_manager def execute_download(self, artifacts_manager=None): """Download collections and their dependencies as a tarball for an offline install.""" collections = context.CLIARGS['args'] no_deps = context.CLIARGS['no_deps'] download_path = context.CLIARGS['download_path'] requirements_file = context.CLIARGS['requirements'] if requirements_file: requirements_file = GalaxyCLI._resolve_path(requirements_file) requirements = self._require_one_of_collections_requirements( collections, requirements_file, artifacts_manager=artifacts_manager, )['collections'] download_path = GalaxyCLI._resolve_path(download_path) b_download_path = to_bytes(download_path, errors='surrogate_or_strict') if not os.path.exists(b_download_path): os.makedirs(b_download_path) download_collections( requirements, download_path, self.api_servers, no_deps, context.CLIARGS['allow_pre_release'], artifacts_manager=artifacts_manager, ) return 0 def execute_init(self): """ Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format. Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
""" galaxy_type = context.CLIARGS['type'] init_path = context.CLIARGS['init_path'] force = context.CLIARGS['force'] obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)] obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)] inject_data = dict( description='your {0} description'.format(galaxy_type), ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'), ) if galaxy_type == 'role': inject_data.update(dict( author='your name', company='your company (optional)', license='license (GPL-2.0-or-later, MIT, etc)', role_name=obj_name, role_type=context.CLIARGS['role_type'], issue_tracker_url='http://example.com/issue/tracker', repository_url='http://example.com/repository', documentation_url='http://docs.example.com', homepage_url='http://example.com', min_ansible_version=ansible_version[:3], # x.y dependencies=[], )) skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE obj_path = os.path.join(init_path, obj_name) elif galaxy_type == 'collection': namespace, collection_name = obj_name.split('.', 1) inject_data.update(dict( namespace=namespace, collection_name=collection_name, version='1.0.0', readme='README.md', authors=['your name '], license=['GPL-2.0-or-later'], repository='http://example.com/repository', documentation='http://docs.example.com', homepage='http://example.com', issues='http://example.com/issue/tracker', build_ignore=[], )) skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE obj_path = os.path.join(init_path, namespace, collection_name) b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict') if os.path.exists(b_obj_path): if os.path.isfile(obj_path): raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path)) elif not force: raise AnsibleError("- the directory %s already exists. " "You can use --force to re-initialize this directory,\n" "however it will reset any main.yml files that may have\n" "been modified there already." 
% to_native(obj_path)) # delete the contents rather than the collection root in case init was run from the root (--init-path ../../) for root, dirs, files in os.walk(b_obj_path, topdown=True): for old_dir in dirs: path = os.path.join(root, old_dir) shutil.rmtree(path) for old_file in files: path = os.path.join(root, old_file) os.unlink(path) if obj_skeleton is not None: own_skeleton = False else: own_skeleton = True obj_skeleton = self.galaxy.default_role_skeleton_path skeleton_ignore_expressions = ['^.*/.git_keep$'] obj_skeleton = os.path.expanduser(obj_skeleton) skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions] if not os.path.exists(obj_skeleton): raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format( to_native(obj_skeleton), galaxy_type) ) loader = DataLoader() templar = Templar(loader, variables=inject_data) # create role directory if not os.path.exists(b_obj_path): os.makedirs(b_obj_path) for root, dirs, files in os.walk(obj_skeleton, topdown=True): rel_root = os.path.relpath(root, obj_skeleton) rel_dirs = rel_root.split(os.sep) rel_root_dir = rel_dirs[0] if galaxy_type == 'collection': # A collection can contain templates in playbooks/*/templates and roles/*/templates in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs else: in_templates_dir = rel_root_dir == 'templates' # Filter out ignored directory names # Use [:] to mutate the list os.walk uses dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)] for f in files: filename, ext = os.path.splitext(f) if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re): continue if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2': # Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options # dynamically which requires special options to be set. # The templated data's keys must match the key name but the inject data contains collection_name # instead of name. We just make a copy and change the key back to name for this file. 
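# A minimal sketch of the copy-and-rename step performed next, using
# hypothetical values: the skeleton's galaxy.yml.j2 expects a 'name' key,
# while the inject data carries 'collection_name', so a shallow copy is taken
# and only the copy's key is renamed.
_inject_example = {'namespace': 'acme', 'collection_name': 'demo', 'version': '1.0.0'}
_template_example = _inject_example.copy()                      # shallow copy
_template_example['name'] = _template_example.pop('collection_name')
assert 'collection_name' not in _template_example
assert _inject_example['collection_name'] == 'demo'             # original intact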
template_data = inject_data.copy() template_data['name'] = template_data.pop('collection_name') meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data) b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict') with open(b_dest_file, 'wb') as galaxy_obj: galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict')) elif ext == ".j2" and not in_templates_dir: src_template = os.path.join(root, f) dest_file = os.path.join(obj_path, rel_root, filename) template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict') b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict') with open(dest_file, 'wb') as df: df.write(b_rendered) else: f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton) shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path), follow_symlinks=False) for d in dirs: b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict') if os.path.exists(b_dir_path): continue b_src_dir = to_bytes(os.path.join(root, d), errors='surrogate_or_strict') if os.path.islink(b_src_dir): shutil.copyfile(b_src_dir, b_dir_path, follow_symlinks=False) else: os.makedirs(b_dir_path) display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name)) def execute_info(self): """ prints out detailed information about an installed role as well as info available from the galaxy API. """ roles_path = context.CLIARGS['roles_path'] data = '' for role in context.CLIARGS['args']: role_info = {'path': roles_path} gr = GalaxyRole(self.galaxy, self.lazy_role_api, role) install_info = gr.install_info if install_info: if 'version' in install_info: install_info['installed_version'] = install_info['version'] del install_info['version'] role_info.update(install_info) if not context.CLIARGS['offline']: remote_data = None try: remote_data = self.api.lookup_role_by_name(role, False) except GalaxyError as e: if e.http_code == 400 and 'Bad Request' in e.message: # Role does not exist in Ansible Galaxy data = u"- the role %s was not found" % role break raise AnsibleError("Unable to find info about '%s': %s" % (role, e)) if remote_data: role_info.update(remote_data) else: data = u"- the role %s was not found" % role break elif context.CLIARGS['offline'] and not gr._exists: data = u"- the role %s was not found" % role break if gr.metadata: role_info.update(gr.metadata) req = RoleRequirement() role_spec = req.role_yaml_parse({'role': role}) if role_spec: role_info.update(role_spec) data += self._display_role_info(role_info) self.pager(data) @with_collection_artifacts_manager def execute_verify(self, artifacts_manager=None): """Compare checksums with the collection(s) found on the server and the installed copy. 
This does not verify dependencies.""" collections = context.CLIARGS['args'] search_paths = AnsibleCollectionConfig.collection_paths ignore_errors = context.CLIARGS['ignore_errors'] local_verify_only = context.CLIARGS['offline'] requirements_file = context.CLIARGS['requirements'] signatures = context.CLIARGS['signatures'] if signatures is not None: signatures = list(signatures) requirements = self._require_one_of_collections_requirements( collections, requirements_file, signatures=signatures, artifacts_manager=artifacts_manager, )['collections'] resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths] results = verify_collections( requirements, resolved_paths, self.api_servers, ignore_errors, local_verify_only=local_verify_only, artifacts_manager=artifacts_manager, ) if any(result for result in results if not result.success): return 1 return 0 @with_collection_artifacts_manager def execute_install(self, artifacts_manager=None): """ Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``). You can pass in a list (roles or collections) or use the file option listed below (these are mutually exclusive). If you pass in a list, it can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file. """ install_items = context.CLIARGS['args'] requirements_file = context.CLIARGS['requirements'] collection_path = None signatures = context.CLIARGS.get('signatures') if signatures is not None: signatures = list(signatures) if requirements_file: requirements_file = GalaxyCLI._resolve_path(requirements_file) two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \ "run 'ansible-galaxy {0} install -r' or to install both at the same time run " \ "'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file) # TODO: Would be nice to share the same behaviour with args and -r in collections and roles. collection_requirements = [] role_requirements = [] if context.CLIARGS['type'] == 'collection': collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path']) requirements = self._require_one_of_collections_requirements( install_items, requirements_file, signatures=signatures, artifacts_manager=artifacts_manager, ) collection_requirements = requirements['collections'] if requirements['roles']: display.vvv(two_type_warning.format('role')) else: if not install_items and requirements_file is None: raise AnsibleOptionsError("- you must specify a user/role name or a roles file") if requirements_file: if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')): raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension") galaxy_args = self._raw_args will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args requirements = self._parse_requirements_file( requirements_file, artifacts_manager=artifacts_manager, validate_signature_options=will_install_collections, ) role_requirements = requirements['roles'] # We can only install collections and roles at the same time if the type wasn't specified and the -p # argument was not used. If collections are present in the requirements then at least display a msg. 
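# A minimal sketch of the v2 requirements layout this branch handles, assuming
# PyYAML is importable and using hypothetical role/collection names: one file
# may carry 'roles' and 'collections' side by side, and whichever half does
# not match the forced CLI type is skipped with the warning assembled above.
import yaml

_EXAMPLE_REQUIREMENTS = """
roles:
  - name: acme.webserver
collections:
  - name: acme.utils
    version: ">=1.0.0"
"""
_parsed = yaml.safe_load(_EXAMPLE_REQUIREMENTS)
_role_reqs = _parsed.get('roles') or []
_collection_reqs = _parsed.get('collections') or []
# 'ansible-galaxy install -r reqs.yml' (implicit type, no -p) would process
# both lists; 'ansible-galaxy role install -r reqs.yml' would skip collections.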
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or '--roles-path' in galaxy_args): # We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user # was explicit about the type and shouldn't care that collections were skipped. display_func = display.warning if self._implicit_role else display.vvv display_func(two_type_warning.format('collection')) else: collection_path = self._get_default_collection_path() collection_requirements = requirements['collections'] else: # roles were specified directly, so we'll just go out grab them # (and their dependencies, unless the user doesn't want us to). for rname in context.CLIARGS['args']: role = RoleRequirement.role_yaml_parse(rname.strip()) role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role)) if not role_requirements and not collection_requirements: display.display("Skipping install, no requirements found") return if role_requirements: display.display("Starting galaxy role install process") self._execute_install_role(role_requirements) if collection_requirements: display.display("Starting galaxy collection install process") # Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in # the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above). self._execute_install_collection( collection_requirements, collection_path, artifacts_manager=artifacts_manager, ) def _execute_install_collection( self, requirements, path, artifacts_manager, ): force = context.CLIARGS['force'] ignore_errors = context.CLIARGS['ignore_errors'] no_deps = context.CLIARGS['no_deps'] force_with_deps = context.CLIARGS['force_with_deps'] try: disable_gpg_verify = context.CLIARGS['disable_gpg_verify'] except KeyError: if self._implicit_role: raise AnsibleError( 'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" ' 'instead of "ansible-galaxy install".' ) raise # If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS allow_pre_release = context.CLIARGS.get('allow_pre_release', False) upgrade = context.CLIARGS.get('upgrade', False) collections_path = C.COLLECTIONS_PATHS managed_paths = set(validate_collection_path(p) for p in C.COLLECTIONS_PATHS) read_req_paths = set(validate_collection_path(p) for p in AnsibleCollectionConfig.collection_paths) unexpected_path = C.GALAXY_COLLECTIONS_PATH_WARNING and not any(p.startswith(path) for p in managed_paths) if unexpected_path and any(p.startswith(path) for p in read_req_paths): display.warning( f"The specified collections path '{path}' appears to be part of the pip Ansible package. " "Managing these directly with ansible-galaxy could break the Ansible package. " "Install collections to a configured collections path, which will take precedence over " "collections found in the PYTHONPATH." ) elif unexpected_path: display.warning("The specified collections path '%s' is not part of the configured Ansible " "collections paths '%s'. The installed collection will not be picked up in an Ansible " "run, unless within a playbook-adjacent collections directory." 
% (to_text(path), to_text(":".join(collections_path)))) output_path = validate_collection_path(path) b_output_path = to_bytes(output_path, errors='surrogate_or_strict') if not os.path.exists(b_output_path): os.makedirs(b_output_path) install_collections( requirements, output_path, self.api_servers, ignore_errors, no_deps, force, force_with_deps, upgrade, allow_pre_release=allow_pre_release, artifacts_manager=artifacts_manager, disable_gpg_verify=disable_gpg_verify, offline=context.CLIARGS.get('offline', False), read_requirement_paths=read_req_paths, ) return 0 def _execute_install_role(self, requirements): role_file = context.CLIARGS['requirements'] no_deps = context.CLIARGS['no_deps'] force_deps = context.CLIARGS['force_with_deps'] force = context.CLIARGS['force'] or force_deps for role in requirements: # only process roles in roles files when names matches if given if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']: display.vvv('Skipping role %s' % role.name) continue display.vvv('Processing role %s ' % role.name) # query the galaxy API for the role data if role.install_info is not None: if role.install_info['version'] != role.version or force: if force: display.display('- changing role %s from %s to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) role.remove() else: display.warning('- %s (%s) is already installed - use --force to change version to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) continue else: if not force: display.display('- %s is already installed, skipping.' % str(role)) continue try: installed = role.install() except AnsibleError as e: display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e))) self.exit_without_ignore() continue # install dependencies, if we want them if not no_deps and installed: if not role.metadata: # NOTE: the meta file is also required for installing the role, not just dependencies display.warning("Meta file %s is empty. Skipping dependencies." % role.path) else: role_dependencies = role.metadata_dependencies + role.requirements for dep in role_dependencies: display.debug('Installing dep %s' % dep) dep_req = RoleRequirement() dep_info = dep_req.role_yaml_parse(dep) dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info) if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None: # we know we can skip this, as it's not going to # be found on galaxy.ansible.com continue if dep_role.install_info is None: if dep_role not in requirements: display.display('- adding dependency: %s' % to_text(dep_role)) requirements.append(dep_role) else: display.display('- dependency %s already pending installation.' % dep_role.name) else: if dep_role.install_info['version'] != dep_role.version: if force_deps: display.display('- changing dependent role %s from %s to %s' % (dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified")) dep_role.remove() requirements.append(dep_role) else: display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' % (to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version'])) else: if force_deps: requirements.append(dep_role) else: display.display('- dependency %s is already installed, skipping.' % dep_role.name) if not installed: display.warning("- %s was NOT installed successfully." 
% role.name) self.exit_without_ignore() return 0 def execute_remove(self): """ removes the list of roles passed as arguments from the local system. """ if not context.CLIARGS['args']: raise AnsibleOptionsError('- you must specify at least one role to remove.') for role_name in context.CLIARGS['args']: role = GalaxyRole(self.galaxy, self.api, role_name) try: if role.remove(): display.display('- successfully removed %s' % role_name) else: display.display('- %s is not installed, skipping.' % role_name) except Exception as e: raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e))) return 0 def execute_list(self): """ List installed collections or roles """ if context.CLIARGS['type'] == 'role': self.execute_list_role() elif context.CLIARGS['type'] == 'collection': self.execute_list_collection() def execute_list_role(self): """ List all roles installed on the local system or a specific role """ path_found = False role_found = False warnings = [] roles_search_paths = context.CLIARGS['roles_path'] role_name = context.CLIARGS['role'] for path in roles_search_paths: role_path = GalaxyCLI._resolve_path(path) if os.path.isdir(path): path_found = True else: warnings.append("- the configured path {0} does not exist.".format(path)) continue if role_name: # show the requested role, if it exists gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name)) if os.path.isdir(gr.path): role_found = True display.display('# %s' % os.path.dirname(gr.path)) _display_role(gr) break warnings.append("- the role %s was not found" % role_name) else: if not os.path.exists(role_path): warnings.append("- the configured path %s does not exist." % role_path) continue if not os.path.isdir(role_path): warnings.append("- the configured path %s, exists, but it is not a directory." % role_path) continue display.display('# %s' % role_path) path_files = os.listdir(role_path) for path_file in path_files: gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path) if gr.metadata: _display_role(gr) # Do not warn if the role was found in any of the search paths if role_found and role_name: warnings = [] for w in warnings: display.warning(w) if not path_found: raise AnsibleOptionsError( "- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']) ) return 0 @with_collection_artifacts_manager def execute_list_collection(self, artifacts_manager=None): """ List all collections installed on the local system :param artifacts_manager: Artifacts manager. 
""" if artifacts_manager is not None: artifacts_manager.require_build_metadata = False output_format = context.CLIARGS['output_format'] collection_name = context.CLIARGS['collection'] default_collections_path = set(C.COLLECTIONS_PATHS) collections_search_paths = ( set(context.CLIARGS['collections_path'] or []) | default_collections_path | set(AnsibleCollectionConfig.collection_paths) ) collections_in_paths = {} warnings = [] path_found = False collection_found = False namespace_filter = None collection_filter = None if collection_name: # list a specific collection validate_collection_name(collection_name) namespace_filter, collection_filter = collection_name.split('.') collections = list(find_existing_collections( list(collections_search_paths), artifacts_manager, namespace_filter=namespace_filter, collection_filter=collection_filter, dedupe=False )) seen = set() fqcn_width, version_width = _get_collection_widths(collections) for collection in sorted(collections, key=lambda c: c.src): collection_found = True collection_path = pathlib.Path(to_text(collection.src)).parent.parent.as_posix() if output_format in {'yaml', 'json'}: collections_in_paths.setdefault(collection_path, {}) collections_in_paths[collection_path][collection.fqcn] = {'version': collection.ver} else: if collection_path not in seen: _display_header( collection_path, 'Collection', 'Version', fqcn_width, version_width ) seen.add(collection_path) _display_collection(collection, fqcn_width, version_width) path_found = False for path in collections_search_paths: if not os.path.exists(path): if path in default_collections_path: # don't warn for missing default paths continue warnings.append("- the configured path {0} does not exist.".format(path)) elif os.path.exists(path) and not os.path.isdir(path): warnings.append("- the configured path {0}, exists, but it is not a directory.".format(path)) else: path_found = True # Do not warn if the specific collection was found in any of the search paths if collection_found and collection_name: warnings = [] for w in warnings: display.warning(w) if not collections and not path_found: raise AnsibleOptionsError( "- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']) ) if output_format == 'json': display.display(json.dumps(collections_in_paths)) elif output_format == 'yaml': display.display(yaml_dump(collections_in_paths)) return 0 def execute_publish(self): """ Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish. """ collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args']) wait = context.CLIARGS['wait'] timeout = context.CLIARGS['import_timeout'] publish_collection(collection_path, self.api, wait, timeout) def execute_search(self): ''' searches for roles on the Ansible Galaxy server''' page_size = 1000 search = None if context.CLIARGS['args']: search = '+'.join(context.CLIARGS['args']) if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']: raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.") response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'], tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size) if response['count'] == 0: display.warning("No roles match your search.") return 0 data = [u''] if response['count'] > page_size: data.append(u"Found %d roles matching your search. 
Showing first %s." % (response['count'], page_size)) else: data.append(u"Found %d roles matching your search:" % response['count']) max_len = [] for role in response['results']: max_len.append(len(role['username'] + '.' + role['name'])) name_len = max(max_len) format_str = u" %%-%ds %%s" % name_len data.append(u'') data.append(format_str % (u"Name", u"Description")) data.append(format_str % (u"----", u"-----------")) for role in response['results']: data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description'])) data = u'\n'.join(data) self.pager(data) return 0 def execute_import(self): """ used to import a role into Ansible Galaxy """ colors = { 'INFO': 'normal', 'WARNING': C.COLOR_WARN, 'ERROR': C.COLOR_ERROR, 'SUCCESS': C.COLOR_OK, 'FAILED': C.COLOR_ERROR, } github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict') github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict') rc = 0 if context.CLIARGS['check_status']: task = self.api.get_import_task(github_user=github_user, github_repo=github_repo) else: # Submit an import request task = self.api.create_import_task(github_user, github_repo, reference=context.CLIARGS['reference'], role_name=context.CLIARGS['role_name']) if len(task) > 1: # found multiple roles associated with github_user/github_repo display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo), color='yellow') display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED) for t in task: display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED) display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo), color=C.COLOR_CHANGED) return rc # found a single role as expected display.display("Successfully submitted import request %d" % task[0]['id']) if not context.CLIARGS['wait']: display.display("Role name: %s" % task[0]['summary_fields']['role']['name']) display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo'])) if context.CLIARGS['check_status'] or context.CLIARGS['wait']: # Get the status of the import msg_list = [] finished = False while not finished: task = self.api.get_import_task(task_id=task[0]['id']) for msg in task[0]['summary_fields']['task_messages']: if msg['id'] not in msg_list: display.display(msg['message_text'], color=colors[msg['message_type']]) msg_list.append(msg['id']) if (state := task[0]['state']) in ['SUCCESS', 'FAILED']: rc = ['SUCCESS', 'FAILED'].index(state) finished = True else: time.sleep(10) return rc def execute_setup(self): """ Setup an integration from Github or Travis for Ansible Galaxy roles""" if context.CLIARGS['setup_list']: # List existing integration secrets secrets = self.api.list_secrets() if len(secrets) == 0: # None found display.display("No integrations found.") return 0 display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK) display.display("---------- ---------- ----------", color=C.COLOR_OK) for secret in secrets: display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'], secret['github_repo']), color=C.COLOR_OK) return 0 if context.CLIARGS['remove_id']: # Remove a secret self.api.remove_secret(context.CLIARGS['remove_id']) display.display("Secret removed. 
Integrations using this secret will no longer work.", color=C.COLOR_OK) return 0 source = context.CLIARGS['source'] github_user = context.CLIARGS['github_user'] github_repo = context.CLIARGS['github_repo'] secret = context.CLIARGS['secret'] resp = self.api.add_secret(source, github_user, github_repo, secret) display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo'])) return 0 def execute_delete(self): """ Delete a role from Ansible Galaxy. """ github_user = context.CLIARGS['github_user'] github_repo = context.CLIARGS['github_repo'] resp = self.api.delete_role(github_user, github_repo) if len(resp['deleted_roles']) > 1: display.display("Deleted the following roles:") display.display("ID User Name") display.display("------ --------------- ----------") for role in resp['deleted_roles']: display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name)) display.display(resp['status']) return 0 def main(args=None): GalaxyCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/inventory.py0000755000000000000000000004250714556006441020265 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2017, Brian Coca # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import sys import argparse from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text from ansible.utils.vars import combine_vars from ansible.utils.display import Display from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path display = Display() INTERNAL_VARS = frozenset(['ansible_diff_mode', 'ansible_config_file', 'ansible_facts', 'ansible_forks', 'ansible_inventory_sources', 'ansible_limit', 'ansible_playbook_python', 'ansible_run_tags', 'ansible_skip_tags', 'ansible_verbosity', 'ansible_version', 'inventory_dir', 'inventory_file', 'inventory_hostname', 'inventory_hostname_short', 'groups', 'group_names', 'omit', 'playbook_dir', ]) class InventoryCLI(CLI): ''' used to display or dump the configured inventory as Ansible sees it ''' name = 'ansible-inventory' ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list', 'group': 'The name of a group in the inventory, relevant when using --graph', } def __init__(self, args): super(InventoryCLI, self).__init__(args) self.vm = None self.loader = None self.inventory = None def init_parser(self): super(InventoryCLI, self).init_parser( usage='usage: %prog [options] [host|group]', desc='Show Ansible inventory information, by default it uses the inventory script JSON format') opt_help.add_inventory_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_runtask_options(self.parser) # remove unused default options self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument) self.parser.add_argument('args', metavar='host|group', nargs='?') # Actions action_group = self.parser.add_argument_group("Actions", "One of the following 
must be used on invocation, ONLY ONE!") action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script') action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script. It will ignore limit') action_group.add_argument("--graph", action="store_true", default=False, dest='graph', help='create inventory graph, if supplying pattern it must be a valid group name. It will ignore limit') self.parser.add_argument_group(action_group) # graph self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml', help='Use YAML format instead of default JSON, ignored for --graph') self.parser.add_argument('--toml', action='store_true', default=False, dest='toml', help='Use TOML format instead of default JSON, ignored for --graph') self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars', help='Add vars to graph display, ignored unless used with --graph') # list self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export', help="When doing an --list, represent in a way that is optimized for export," "not as an accurate representation of how Ansible has processed it") self.parser.add_argument('--output', default=None, dest='output_file', help="When doing --list, send the inventory to a file instead of to the screen") # self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins', # help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/") def post_process_args(self, options): options = super(InventoryCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options) # there can be only one! and, at least, one! used = 0 for opt in (options.list, options.host, options.graph): if opt: used += 1 if used == 0: raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.") elif used > 1: raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.") # set host pattern to default if not supplied if options.args: options.pattern = options.args else: options.pattern = 'all' return options def run(self): super(InventoryCLI, self).run() # Initialize needed objects self.loader, self.inventory, self.vm = self._play_prereqs() results = None if context.CLIARGS['host']: hosts = self.inventory.get_hosts(context.CLIARGS['host']) if len(hosts) != 1: raise AnsibleOptionsError("You must pass a single valid host to --host parameter") myvars = self._get_host_variables(host=hosts[0]) # FIXME: should we template first? results = self.dump(myvars) else: if context.CLIARGS['subset']: # not doing single host, set limit in general if given self.inventory.subset(context.CLIARGS['subset']) if context.CLIARGS['graph']: results = self.inventory_graph() elif context.CLIARGS['list']: top = self._get_group('all') if context.CLIARGS['yaml']: results = self.yaml_inventory(top) elif context.CLIARGS['toml']: results = self.toml_inventory(top) else: results = self.json_inventory(top) results = self.dump(results) if results: outfile = context.CLIARGS['output_file'] if outfile is None: # FIXME: pager? 
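# A minimal sketch of the two output paths taken below, with stand-in names
# ('rendered' and 'outfile' are hypothetical): with no --output the rendered
# inventory goes to the screen, otherwise it is written out as bytes so that
# encoding oddities in host names survive the write.
def _emit_inventory(rendered, outfile=None):
    if outfile is None:
        print(rendered)
    else:
        with open(outfile, 'wb') as fh:
            fh.write(rendered.encode('utf-8', errors='surrogateescape'))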
display.display(results) else: try: with open(to_bytes(outfile), 'wb') as f: f.write(to_bytes(results)) except (OSError, IOError) as e: raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e))) sys.exit(0) sys.exit(1) @staticmethod def dump(stuff): if context.CLIARGS['yaml']: import yaml from ansible.parsing.yaml.dumper import AnsibleDumper results = to_text(yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False, allow_unicode=True)) elif context.CLIARGS['toml']: from ansible.plugins.inventory.toml import toml_dumps try: results = toml_dumps(stuff) except TypeError as e: raise AnsibleError( 'The source inventory contains a value that cannot be represented in TOML: %s' % e ) except KeyError as e: raise AnsibleError( 'The source inventory contains a non-string key (%s) which cannot be represented in TOML. ' 'The specified key will need to be converted to a string. Be aware that if your playbooks ' 'expect this key to be non-string, your playbooks will need to be modified to support this ' 'change.' % e.args[0] ) else: import json from ansible.parsing.ajson import AnsibleJSONEncoder try: results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True, ensure_ascii=False) except TypeError as e: results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=False, indent=4, preprocess_unsafe=True, ensure_ascii=False) display.warning("Could not sort JSON output due to issues while sorting keys: %s" % to_native(e)) return results def _get_group_variables(self, group): # get info from inventory source res = group.get_vars() # Always load vars plugins res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all')) if context.CLIARGS['basedir']: res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all')) if group.priority != 1: res['ansible_group_priority'] = group.priority return self._remove_internal(res) def _get_host_variables(self, host): if context.CLIARGS['export']: # only get vars defined directly host hostvars = host.get_vars() # Always load vars plugins hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all')) if context.CLIARGS['basedir']: hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all')) else: # get all vars flattened by host, but skip magic hostvars hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all') return self._remove_internal(hostvars) def _get_group(self, gname): group = self.inventory.groups.get(gname) return group @staticmethod def _remove_internal(dump): for internal in INTERNAL_VARS: if internal in dump: del dump[internal] return dump @staticmethod def _remove_empty_keys(dump): # remove empty keys for x in ('hosts', 'vars', 'children'): if x in dump and not dump[x]: del dump[x] @staticmethod def _show_vars(dump, depth): result = [] for (name, val) in sorted(dump.items()): result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth)) return result @staticmethod def _graph_name(name, depth=0): if depth: name = " |" * (depth) + "--%s" % name return name def _graph_group(self, group, depth=0): result = [self._graph_name('@%s:' % group.name, depth)] depth = depth + 1 for kid in group.child_groups: result.extend(self._graph_group(kid, depth)) if group.name != 'all': for host in group.hosts: result.append(self._graph_name(host.name, depth)) if 
context.CLIARGS['show_vars']: result.extend(self._show_vars(self._get_host_variables(host), depth + 1)) if context.CLIARGS['show_vars']: result.extend(self._show_vars(self._get_group_variables(group), depth)) return result def inventory_graph(self): start_at = self._get_group(context.CLIARGS['pattern']) if start_at: return '\n'.join(self._graph_group(start_at)) else: raise AnsibleOptionsError("Pattern must be a valid group name when using --graph") def json_inventory(self, top): seen_groups = set() def format_group(group, available_hosts): results = {} results[group.name] = {} if group.name != 'all': results[group.name]['hosts'] = [h.name for h in group.hosts if h.name in available_hosts] results[group.name]['children'] = [] for subgroup in group.child_groups: results[group.name]['children'].append(subgroup.name) if subgroup.name not in seen_groups: results.update(format_group(subgroup, available_hosts)) seen_groups.add(subgroup.name) if context.CLIARGS['export']: results[group.name]['vars'] = self._get_group_variables(group) self._remove_empty_keys(results[group.name]) # remove empty groups if not results[group.name]: del results[group.name] return results hosts = self.inventory.get_hosts(top.name) results = format_group(top, frozenset(h.name for h in hosts)) # populate meta results['_meta'] = {'hostvars': {}} for host in hosts: hvars = self._get_host_variables(host) if hvars: results['_meta']['hostvars'][host.name] = hvars return results def yaml_inventory(self, top): seen_hosts = set() seen_groups = set() def format_group(group, available_hosts): results = {} # initialize group + vars results[group.name] = {} # subgroups results[group.name]['children'] = {} for subgroup in group.child_groups: if subgroup.name != 'all': if subgroup.name in seen_groups: results[group.name]['children'].update({subgroup.name: {}}) else: results[group.name]['children'].update(format_group(subgroup, available_hosts)) seen_groups.add(subgroup.name) # hosts for group results[group.name]['hosts'] = {} if group.name != 'all': for h in group.hosts: if h.name not in available_hosts: continue # observe limit myvars = {} if h.name not in seen_hosts: # avoid defining host vars more than once seen_hosts.add(h.name) myvars = self._get_host_variables(host=h) results[group.name]['hosts'][h.name] = myvars if context.CLIARGS['export']: gvars = self._get_group_variables(group) if gvars: results[group.name]['vars'] = gvars self._remove_empty_keys(results[group.name]) # remove empty groups if not results[group.name]: del results[group.name] return results available_hosts = frozenset(h.name for h in self.inventory.get_hosts(top.name)) return format_group(top, available_hosts) def toml_inventory(self, top): seen_hosts = set() has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped')) def format_group(group, available_hosts): results = {} results[group.name] = {} results[group.name]['children'] = [] for subgroup in group.child_groups: if subgroup.name == 'ungrouped' and not has_ungrouped: continue if group.name != 'all': results[group.name]['children'].append(subgroup.name) results.update(format_group(subgroup, available_hosts)) if group.name != 'all': for host in group.hosts: if host.name not in available_hosts: continue if host.name not in seen_hosts: seen_hosts.add(host.name) host_vars = self._get_host_variables(host=host) else: host_vars = {} try: results[group.name]['hosts'][host.name] = host_vars except KeyError: results[group.name]['hosts'] = {host.name: host_vars} if 
context.CLIARGS['export']: results[group.name]['vars'] = self._get_group_variables(group) self._remove_empty_keys(results[group.name]) # remove empty groups if not results[group.name]: del results[group.name] return results available_hosts = frozenset(h.name for h in self.inventory.get_hosts(top.name)) results = format_group(top, available_hosts) return results def main(args=None): InventoryCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/playbook.py0000755000000000000000000002524614556006441020051 0ustar00rootroot#!/usr/bin/env python # (c) 2012, Michael DeHaan # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import os import stat from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError from ansible.executor.playbook_executor import PlaybookExecutor from ansible.module_utils.common.text.converters import to_bytes from ansible.playbook.block import Block from ansible.plugins.loader import add_all_plugin_dirs from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path from ansible.utils.display import Display display = Display() class PlaybookCLI(CLI): ''' the tool to run *Ansible playbooks*, which are a configuration and multinode deployment system. See the project home page (https://docs.ansible.com) for more information. 
''' name = 'ansible-playbook' def init_parser(self): # create parser for CLI options super(PlaybookCLI, self).init_parser( usage="%prog [options] playbook.yml [playbook2 ...]", desc="Runs Ansible playbooks, executing the defined tasks on the targeted hosts.") opt_help.add_connect_options(self.parser) opt_help.add_meta_options(self.parser) opt_help.add_runas_options(self.parser) opt_help.add_subset_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) # ansible playbook specific opts self.parser.add_argument('--syntax-check', dest='syntax', action='store_true', help="perform a syntax check on the playbook, but do not execute it") self.parser.add_argument('--list-tasks', dest='listtasks', action='store_true', help="list all tasks that would be executed") self.parser.add_argument('--list-tags', dest='listtags', action='store_true', help="list all available tags") self.parser.add_argument('--step', dest='step', action='store_true', help="one-step-at-a-time: confirm each task before running") self.parser.add_argument('--start-at-task', dest='start_at_task', help="start the playbook at the task matching this name") self.parser.add_argument('args', help='Playbook(s)', metavar='playbook', nargs='+') def post_process_args(self, options): # for listing, we need to know if user had tag input # capture here as parent function sets defaults for tags havetags = bool(options.tags or options.skip_tags) options = super(PlaybookCLI, self).post_process_args(options) if options.listtags: # default to all tags (including never), when listing tags # unless user specified tags if not havetags: options.tags = ['never', 'all'] display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def run(self): super(PlaybookCLI, self).run() # Note: slightly wrong, this is written so that implicit localhost # manages passwords sshpass = None becomepass = None passwords = {} # initial error check, to make sure all specified playbooks are accessible # before we start running anything through the playbook executor # also prep plugin paths b_playbook_dirs = [] for playbook in context.CLIARGS['args']: # resolve if it is collection playbook with FQCN notation, if not, leaves unchanged resource = _get_collection_playbook_path(playbook) if resource is not None: playbook_collection = resource[2] else: # not an FQCN so must be a file if not os.path.exists(playbook): raise AnsibleError("the playbook: %s could not be found" % playbook) if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)): raise AnsibleError("the playbook: %s does not appear to be a file" % playbook) # check if playbook is from collection (path can be passed directly) playbook_collection = _get_collection_name_from_path(playbook) # don't add collection playbooks to adjacency search path if not playbook_collection: # setup dirs to enable loading plugins from all playbooks in case they add callbacks/inventory/etc b_playbook_dir = os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict'))) add_all_plugin_dirs(b_playbook_dir) b_playbook_dirs.append(b_playbook_dir) if b_playbook_dirs: # allow collections adjacent to these playbooks # we use list copy to avoid opening up 'adjacency' in the previous loop AnsibleCollectionConfig.playbook_paths = b_playbook_dirs # 
don't deal with privilege escalation or passwords when we don't need to if not (context.CLIARGS['listhosts'] or context.CLIARGS['listtasks'] or context.CLIARGS['listtags'] or context.CLIARGS['syntax']): (sshpass, becomepass) = self.ask_passwords() passwords = {'conn_pass': sshpass, 'become_pass': becomepass} # create base objects loader, inventory, variable_manager = self._play_prereqs() # (which is not returned in list_hosts()) is taken into account for # warning if inventory is empty. But it can't be taken into account for # checking if limit doesn't match any hosts. Instead we don't worry about # limit if only implicit localhost was in inventory to start with. # # Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts()) CLI.get_host_list(inventory, context.CLIARGS['subset']) # flush fact cache if requested if context.CLIARGS['flush_cache']: self._flush_cache(inventory, variable_manager) # create the playbook executor, which manages running the plays via a task queue manager pbex = PlaybookExecutor(playbooks=context.CLIARGS['args'], inventory=inventory, variable_manager=variable_manager, loader=loader, passwords=passwords) results = pbex.run() if isinstance(results, list): for p in results: display.display('\nplaybook: %s' % p['playbook']) for idx, play in enumerate(p['plays']): if play._included_path is not None: loader.set_basedir(play._included_path) else: pb_dir = os.path.realpath(os.path.dirname(p['playbook'])) loader.set_basedir(pb_dir) # show host list if we were able to template into a list try: host_list = ','.join(play.hosts) except TypeError: host_list = '' msg = "\n play #%d (%s): %s" % (idx + 1, host_list, play.name) mytags = set(play.tags) msg += '\tTAGS: [%s]' % (','.join(mytags)) if context.CLIARGS['listhosts']: playhosts = set(inventory.get_hosts(play.hosts)) msg += "\n pattern: %s\n hosts (%d):" % (play.hosts, len(playhosts)) for host in playhosts: msg += "\n %s" % host display.display(msg) all_tags = set() if context.CLIARGS['listtags'] or context.CLIARGS['listtasks']: taskmsg = '' if context.CLIARGS['listtasks']: taskmsg = ' tasks:\n' def _process_block(b): taskmsg = '' for task in b.block: if isinstance(task, Block): taskmsg += _process_block(task) else: if task.action in C._ACTION_META and task.implicit: continue all_tags.update(task.tags) if context.CLIARGS['listtasks']: cur_tags = list(mytags.union(set(task.tags))) cur_tags.sort() if task.name: taskmsg += " %s" % task.get_name() else: taskmsg += " %s" % task.action taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags) return taskmsg all_vars = variable_manager.get_vars(play=play) for block in play.compile(): block = block.filter_tagged_tasks(all_vars) if not block.has_tasks(): continue taskmsg += _process_block(block) if context.CLIARGS['listtags']: cur_tags = list(mytags.union(all_tags)) cur_tags.sort() taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags) display.display(taskmsg) return 0 else: return results @staticmethod def _flush_cache(inventory, variable_manager): for host in inventory.list_hosts(): hostname = host.get_name() variable_manager.clear_facts(hostname) def main(args=None): PlaybookCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/pull.py0000755000000000000000000004167014556006441017204 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2012, Michael DeHaan # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK 
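# A compressed sketch of the two-phase flow this module implements, assuming a
# hypothetical 'run' callable that executes a command line and returns its
# exit code: first synchronize the playbook repository with an ad-hoc ansible
# run, then execute a playbook from the fresh checkout against localhost.
def _pull_then_run(run, url, dest, playbook='local.yml'):
    rc = run('ansible all -i localhost, -c local -m git '
             '-a "repo=%s dest=%s depth=1"' % (url, dest))
    if rc != 0:
        return rc  # the real CLI continues past this point when --force is given
    return run('ansible-playbook -c local -i localhost, %s/%s' % (dest, playbook))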
from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import datetime import os import platform import random import shlex import shutil import socket import sys import time from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleOptionsError from ansible.module_utils.common.text.converters import to_native, to_text from ansible.plugins.loader import module_loader from ansible.utils.cmd_functions import run_cmd from ansible.utils.display import Display display = Display() class PullCLI(CLI): ''' Used to pull a remote copy of ansible on each managed node, each set to run via cron and update playbook source via a source repository. This inverts the default *push* architecture of ansible into a *pull* architecture, which has near-limitless scaling potential. None of the CLI tools are designed to run concurrently with themselves, you should use an external scheduler and/or locking to ensure there are no clashing operations. The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull. This is useful both for extreme scale-out as well as periodic remediation. Usage of the 'fetch' module to retrieve logs from ansible-pull runs would be an excellent way to gather and analyze remote logs from ansible-pull. ''' name = 'ansible-pull' DEFAULT_REPO_TYPE = 'git' DEFAULT_PLAYBOOK = 'local.yml' REPO_CHOICES = ('git', 'subversion', 'hg', 'bzr') PLAYBOOK_ERRORS = { 1: 'File does not exist', 2: 'File is not readable', } ARGUMENTS = {'playbook.yml': 'The name of one the YAML format files to run as an Ansible playbook.' 'This can be a relative path within the checkout. By default, Ansible will' "look for a playbook based on the host's fully-qualified domain name," 'on the host hostname and finally a playbook named *local.yml*.', } SKIP_INVENTORY_DEFAULTS = True @staticmethod def _get_inv_cli(): inv_opts = '' if context.CLIARGS.get('inventory', False): for inv in context.CLIARGS['inventory']: if isinstance(inv, list): inv_opts += " -i '%s' " % ','.join(inv) elif ',' in inv or os.path.exists(inv): inv_opts += ' -i %s ' % inv return inv_opts def init_parser(self): ''' create an options parser for bin/ansible ''' super(PullCLI, self).init_parser( usage='%prog -U [options] []', desc="pulls playbooks from a VCS repo and executes them on target host") # Do not add check_options as there's a conflict with --checkout/-C opt_help.add_connect_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_subset_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_runas_prompt_options(self.parser) self.parser.add_argument('args', help='Playbook(s)', metavar='playbook.yml', nargs='*') # options unique to pull self.parser.add_argument('--purge', default=False, action='store_true', help='purge checkout after playbook run') self.parser.add_argument('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', help='only run the playbook if the repository has been updated') self.parser.add_argument('-s', '--sleep', dest='sleep', default=None, help='sleep for random interval (between 0 and n number of seconds) before starting. 
' 'This is a useful way to disperse git requests') self.parser.add_argument('-f', '--force', dest='force', default=False, action='store_true', help='run the playbook even if the repository could not be updated') self.parser.add_argument('-d', '--directory', dest='dest', default=None, type=opt_help.unfrack_path(), help='path to the directory to which Ansible will checkout the repository.') self.parser.add_argument('-U', '--url', dest='url', default=None, help='URL of the playbook repository') self.parser.add_argument('--full', dest='fullclone', action='store_true', help='Do a full clone, instead of a shallow one.') self.parser.add_argument('-C', '--checkout', dest='checkout', help='branch/tag/commit to checkout. Defaults to behavior of repository module.') self.parser.add_argument('--accept-host-key', default=False, dest='accept_host_key', action='store_true', help='adds the hostkey for the repo url if not already added') self.parser.add_argument('-m', '--module-name', dest='module_name', default=self.DEFAULT_REPO_TYPE, help='Repository module name, which ansible will use to check out the repo. Choices are %s. Default is %s.' % (self.REPO_CHOICES, self.DEFAULT_REPO_TYPE)) self.parser.add_argument('--verify-commit', dest='verify', default=False, action='store_true', help='verify GPG signature of checked out commit, if it fails abort running the playbook. ' 'This needs the corresponding VCS module to support such an operation') self.parser.add_argument('--clean', dest='clean', default=False, action='store_true', help='modified files in the working repository will be discarded') self.parser.add_argument('--track-subs', dest='tracksubs', default=False, action='store_true', help='submodules will track the latest changes. This is equivalent to specifying the --remote flag to git submodule update') # add a subset of the check_opts flag group manually, as the full set's # shortcodes conflict with above --checkout/-C self.parser.add_argument("--check", default=False, dest='check', action='store_true', help="don't make any changes; instead, try to predict some of the changes that may occur") self.parser.add_argument("--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true', help="when changing (small) files and templates, show the differences in those files; works great with --check") def post_process_args(self, options): options = super(PullCLI, self).post_process_args(options) if not options.dest: hostname = socket.getfqdn() # use a hostname dependent directory, in case of $HOME on nfs options.dest = os.path.join(C.ANSIBLE_HOME, 'pull', hostname) if os.path.exists(options.dest) and not os.path.isdir(options.dest): raise AnsibleOptionsError("%s is not a valid or accessible directory." % options.dest) if options.sleep: try: secs = random.randint(0, int(options.sleep)) options.sleep = secs except ValueError: raise AnsibleOptionsError("%s is not a number." 
% options.sleep) if not options.url: raise AnsibleOptionsError("URL for repository not specified, use -h for help") if options.module_name not in self.REPO_CHOICES: raise AnsibleOptionsError("Unsupported repo module %s, choices are %s" % (options.module_name, ','.join(self.REPO_CHOICES))) display.verbosity = options.verbosity self.validate_conflicts(options) return options def run(self): ''' check out the repository, then run the selected playbook locally ''' super(PullCLI, self).run() # log command line now = datetime.datetime.now() display.display(now.strftime("Starting Ansible Pull at %F %T")) display.display(' '.join(sys.argv)) # Build the checkout command: an ad hoc ansible invocation of the VCS module node = platform.node() host = socket.getfqdn() hostnames = ','.join(set([host, node, host.split('.')[0], node.split('.')[0]])) if hostnames: limit_opts = 'localhost,%s,127.0.0.1' % hostnames else: limit_opts = 'localhost,127.0.0.1' base_opts = '-c local ' if context.CLIARGS['verbosity'] > 0: base_opts += ' -%s' % ''.join(["v" for x in range(0, context.CLIARGS['verbosity'])]) # Attempt to use the inventory passed in as an argument. It might not yet have been downloaded, so use localhost as a default. inv_opts = self._get_inv_cli() if not inv_opts: inv_opts = " -i localhost, " # avoid interpreter discovery since we already know which interpreter to use on localhost inv_opts += '-e %s ' % shlex.quote('ansible_python_interpreter=%s' % sys.executable) # SCM specific options if context.CLIARGS['module_name'] == 'git': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] if context.CLIARGS['accept_host_key']: repo_opts += ' accept_hostkey=yes' if context.CLIARGS['private_key_file']: repo_opts += ' key_file=%s' % context.CLIARGS['private_key_file'] if context.CLIARGS['verify']: repo_opts += ' verify_commit=yes' if context.CLIARGS['tracksubs']: repo_opts += ' track_submodules=yes' if not context.CLIARGS['fullclone']: repo_opts += ' depth=1' elif context.CLIARGS['module_name'] == 'subversion': repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] if not context.CLIARGS['fullclone']: repo_opts += ' export=yes' elif context.CLIARGS['module_name'] == 'hg': repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] elif context.CLIARGS['module_name'] == 'bzr': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] else: raise AnsibleOptionsError('Unsupported (%s) SCM module for pull, choices are: %s' % (context.CLIARGS['module_name'], ','.join(self.REPO_CHOICES))) # options common to all supported SCMs if context.CLIARGS['clean']: repo_opts += ' force=yes' path = module_loader.find_plugin(context.CLIARGS['module_name']) if path is None: raise AnsibleOptionsError(("module '%s' not found.\n" % context.CLIARGS['module_name'])) bin_path = os.path.dirname(os.path.abspath(sys.argv[0])) # hardcode local and inventory/host as this is just meant to fetch the repo cmd = '%s/ansible %s %s -m %s -a "%s" all -l "%s"' % (bin_path, inv_opts, base_opts, context.CLIARGS['module_name'], repo_opts, limit_opts) for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) # Nap?
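# --- Annotation (added for illustration; not part of the original source) ---
# With the defaults, the checkout command assembled above looks roughly like
# this (hypothetical bin path, repository URL and hostname):
#   /usr/local/bin/ansible -i localhost, -e 'ansible_python_interpreter=/usr/bin/python3' \
#       -c local -m git -a "name=https://example.com/playbooks.git dest=~/.ansible/pull/web1.example.com depth=1" \
#       all -l "localhost,web1.example.com,web1,127.0.0.1"
# The optional random sleep below simply staggers these VCS requests across a fleet.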
if context.CLIARGS['sleep']: display.display("Sleeping for %d seconds..." % context.CLIARGS['sleep']) time.sleep(context.CLIARGS['sleep']) # RUN the Checkout command display.debug("running ansible with VCS module to checkout repo") display.vvvv('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if rc != 0: if context.CLIARGS['force']: display.warning("Unable to update repository. Continuing with (forced) run of playbook.") else: return rc elif context.CLIARGS['ifchanged'] and b'"changed": true' not in b_out: display.display("Repository has not changed, quitting.") return 0 playbook = self.select_playbook(context.CLIARGS['dest']) if playbook is None: raise AnsibleOptionsError("Could not find a playbook to run.") # Build playbook command cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook) if context.CLIARGS['vault_password_files']: for vault_password_file in context.CLIARGS['vault_password_files']: cmd += " --vault-password-file=%s" % vault_password_file if context.CLIARGS['vault_ids']: for vault_id in context.CLIARGS['vault_ids']: cmd += " --vault-id=%s" % vault_id if context.CLIARGS['become_password_file']: cmd += " --become-password-file=%s" % context.CLIARGS['become_password_file'] if context.CLIARGS['connection_password_file']: cmd += " --connection-password-file=%s" % context.CLIARGS['connection_password_file'] for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) if context.CLIARGS['become_ask_pass']: cmd += ' --ask-become-pass' if context.CLIARGS['skip_tags']: cmd += ' --skip-tags "%s"' % to_native(u','.join(context.CLIARGS['skip_tags'])) if context.CLIARGS['tags']: cmd += ' -t "%s"' % to_native(u','.join(context.CLIARGS['tags'])) if context.CLIARGS['subset']: cmd += ' -l "%s"' % context.CLIARGS['subset'] else: cmd += ' -l "%s"' % limit_opts if context.CLIARGS['check']: cmd += ' -C' if context.CLIARGS['diff']: cmd += ' -D' os.chdir(context.CLIARGS['dest']) # redo inventory options as new files might exist now inv_opts = self._get_inv_cli() if inv_opts: cmd += inv_opts # RUN THE PLAYBOOK COMMAND display.debug("running ansible-playbook to do actual work") display.debug('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if context.CLIARGS['purge']: os.chdir('/') try: display.debug("removing: %s" % context.CLIARGS['dest']) shutil.rmtree(context.CLIARGS['dest']) except Exception as e: display.error(u"Failed to remove %s: %s" % (context.CLIARGS['dest'], to_text(e))) return rc @staticmethod def try_playbook(path): if not os.path.exists(path): return 1 if not os.access(path, os.R_OK): return 2 return 0 @staticmethod def select_playbook(path): playbook = None errors = [] if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not None: playbooks = [] for book in context.CLIARGS['args']: book_path = os.path.join(path, book) rc = PullCLI.try_playbook(book_path) if rc != 0: errors.append("%s: %s" % (book_path, PullCLI.PLAYBOOK_ERRORS[rc])) continue playbooks.append(book_path) if 0 < len(errors): display.warning("\n".join(errors)) elif len(playbooks) == len(context.CLIARGS['args']): playbook = " ".join(playbooks) return playbook else: fqdn = socket.getfqdn() hostpb = os.path.join(path, fqdn + '.yml') shorthostpb = os.path.join(path, fqdn.split('.')[0] + '.yml') localpb = os.path.join(path, PullCLI.DEFAULT_PLAYBOOK) for pb in [hostpb, shorthostpb, localpb]: rc = PullCLI.try_playbook(pb) if rc == 0: playbook = pb break else: errors.append("%s: %s" % (pb, PullCLI.PLAYBOOK_ERRORS[rc])) if playbook is None: 
display.warning("\n".join(errors)) return playbook def main(args=None): PullCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/scripts/0000755000000000000000000000000014556006441017332 5ustar00rootrootansible-core-2.16.3/lib/ansible/cli/scripts/__init__.py0000644000000000000000000000000014556006441021431 0ustar00rootrootansible-core-2.16.3/lib/ansible/cli/scripts/ansible_connection_cli_stub.py0000755000000000000000000003232214556006441025431 0ustar00rootroot#!/usr/bin/env python # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import fcntl import hashlib import io import os import pickle import signal import socket import sys import time import traceback import errno import json from contextlib import contextmanager from ansible import constants as C from ansible.cli.arguments import option_helpers as opt_help from ansible.module_utils.common.text.converters import to_bytes, to_text from ansible.module_utils.connection import Connection, ConnectionError, send_data, recv_data from ansible.module_utils.service import fork_process from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder from ansible.playbook.play_context import PlayContext from ansible.plugins.loader import connection_loader, init_plugin_loader from ansible.utils.path import unfrackpath, makedirs_safe from ansible.utils.display import Display from ansible.utils.jsonrpc import JsonRpcServer display = Display() def read_stream(byte_stream): size = int(byte_stream.readline().strip()) data = byte_stream.read(size) if len(data) < size: raise Exception("EOF found before data was complete") data_hash = to_text(byte_stream.readline().strip()) if data_hash != hashlib.sha1(data).hexdigest(): raise Exception("Read {0} bytes, but data did not match checksum".format(size)) # restore escaped loose \r characters data = data.replace(br'\r', b'\r') return data @contextmanager def file_lock(lock_path): """ Uses contextmanager to create and release a file lock based on the given path. This allows us to create locks using `with file_lock()` to prevent deadlocks related to failure to unlock properly. 
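Illustrative usage (annotation, not from the original source):

    with file_lock('/tmp/ansible-example.lock'):
        pass  # exclusive section; the lock is released when the block exits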
""" lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600) fcntl.lockf(lock_fd, fcntl.LOCK_EX) yield fcntl.lockf(lock_fd, fcntl.LOCK_UN) os.close(lock_fd) class ConnectionProcess(object): ''' The connection process wraps around a Connection object that manages the connection to a remote device that persists over the playbook ''' def __init__(self, fd, play_context, socket_path, original_path, task_uuid=None, ansible_playbook_pid=None): self.play_context = play_context self.socket_path = socket_path self.original_path = original_path self._task_uuid = task_uuid self.fd = fd self.exception = None self.srv = JsonRpcServer() self.sock = None self.connection = None self._ansible_playbook_pid = ansible_playbook_pid def start(self, options): messages = list() result = {} try: messages.append(('vvvv', 'control socket path is %s' % self.socket_path)) # If this is a relative path (~ gets expanded later) then plug the # key's path on to the directory we originally came from, so we can # find it now that our cwd is / if self.play_context.private_key_file and self.play_context.private_key_file[0] not in '~/': self.play_context.private_key_file = os.path.join(self.original_path, self.play_context.private_key_file) self.connection = connection_loader.get(self.play_context.connection, self.play_context, '/dev/null', task_uuid=self._task_uuid, ansible_playbook_pid=self._ansible_playbook_pid) try: self.connection.set_options(direct=options) except ConnectionError as exc: messages.append(('debug', to_text(exc))) raise ConnectionError('Unable to decode JSON from response set_options. See the debug log for more information.') self.connection._socket_path = self.socket_path self.srv.register(self.connection) messages.extend([('vvvv', msg) for msg in sys.stdout.getvalue().splitlines()]) self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.bind(self.socket_path) self.sock.listen(1) messages.append(('vvvv', 'local domain socket listeners started successfully')) except Exception as exc: messages.extend(self.connection.pop_messages()) result['error'] = to_text(exc) result['exception'] = traceback.format_exc() finally: result['messages'] = messages self.fd.write(json.dumps(result, cls=AnsibleJSONEncoder)) self.fd.close() def run(self): try: log_messages = self.connection.get_option('persistent_log_messages') while not self.connection._conn_closed: signal.signal(signal.SIGALRM, self.connect_timeout) signal.signal(signal.SIGTERM, self.handler) signal.alarm(self.connection.get_option('persistent_connect_timeout')) self.exception = None (s, addr) = self.sock.accept() signal.alarm(0) signal.signal(signal.SIGALRM, self.command_timeout) while True: data = recv_data(s) if not data: break if log_messages: display.display("jsonrpc request: %s" % data, log_only=True) request = json.loads(to_text(data, errors='surrogate_or_strict')) if request.get('method') == "exec_command" and not self.connection.connected: self.connection._connect() signal.alarm(self.connection.get_option('persistent_command_timeout')) resp = self.srv.handle_request(data) signal.alarm(0) if log_messages: display.display("jsonrpc response: %s" % resp, log_only=True) send_data(s, to_bytes(resp)) s.close() except Exception as e: # socket.accept() will raise EINTR if the socket.close() is called if hasattr(e, 'errno'): if e.errno != errno.EINTR: self.exception = traceback.format_exc() else: self.exception = traceback.format_exc() finally: # allow time for any exception msg send over socket to receive at other end before shutting down 
time.sleep(0.1) # when done, close the connection properly and clean up the socket file so it can be recreated self.shutdown() def connect_timeout(self, signum, frame): msg = 'persistent connection idle timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and ' \ 'Troubleshooting Guide.' % self.connection.get_option('persistent_connect_timeout') display.display(msg, log_only=True) raise Exception(msg) def command_timeout(self, signum, frame): msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\ % self.connection.get_option('persistent_command_timeout') display.display(msg, log_only=True) raise Exception(msg) def handler(self, signum, frame): msg = 'signal handler called with signal %s.' % signum display.display(msg, log_only=True) raise Exception(msg) def shutdown(self): """ Shuts down the local domain socket """ lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(self.socket_path)) if os.path.exists(self.socket_path): try: if self.sock: self.sock.close() if self.connection: self.connection.close() if self.connection.get_option("persistent_log_messages"): for _level, message in self.connection.pop_messages(): display.display(message, log_only=True) except Exception: pass finally: if os.path.exists(self.socket_path): os.remove(self.socket_path) setattr(self.connection, '_socket_path', None) setattr(self.connection, '_connected', False) if os.path.exists(lock_path): os.remove(lock_path) display.display('shutdown complete', log_only=True) def main(args=None): """ Called to initiate the connection to the remote device """ parser = opt_help.create_base_parser(prog='ansible-connection') opt_help.add_verbosity_options(parser) parser.add_argument('playbook_pid') parser.add_argument('task_uuid') args = parser.parse_args(args[1:] if args is not None else args) init_plugin_loader() # initialize verbosity display.verbosity = args.verbosity rc = 0 result = {} messages = list() socket_path = None # Need stdin as a byte stream stdin = sys.stdin.buffer # Note: update the below log capture code after Display.display() is refactored.
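# --- Annotation (added; not in the original source) ---
# stdout is swapped for a StringIO so that anything printed while options and
# the play context are read and deserialized is captured and later re-emitted
# as 'vvvv' log messages, instead of corrupting the JSON result that this
# process writes to the real stdout at the end.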
saved_stdout = sys.stdout sys.stdout = io.StringIO() try: # read the play context data via stdin, which means depickling it opts_data = read_stream(stdin) init_data = read_stream(stdin) pc_data = pickle.loads(init_data, encoding='bytes') options = pickle.loads(opts_data, encoding='bytes') play_context = PlayContext() play_context.deserialize(pc_data) except Exception as e: rc = 1 result.update({ 'error': to_text(e), 'exception': traceback.format_exc() }) if rc == 0: ssh = connection_loader.get('ssh', class_only=True) ansible_playbook_pid = args.playbook_pid task_uuid = args.task_uuid cp = ssh._create_control_path(play_context.remote_addr, play_context.port, play_context.remote_user, play_context.connection, ansible_playbook_pid) # create the persistent connection dir if need be and create the paths # which we will be using later tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR) makedirs_safe(tmp_path) socket_path = unfrackpath(cp % dict(directory=tmp_path)) lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(socket_path)) with file_lock(lock_path): if not os.path.exists(socket_path): messages.append(('vvvv', 'local domain socket does not exist, starting it')) original_path = os.getcwd() r, w = os.pipe() pid = fork_process() if pid == 0: try: os.close(r) wfd = os.fdopen(w, 'w') process = ConnectionProcess(wfd, play_context, socket_path, original_path, task_uuid, ansible_playbook_pid) process.start(options) except Exception: messages.append(('error', traceback.format_exc())) rc = 1 if rc == 0: process.run() else: process.shutdown() sys.exit(rc) else: os.close(w) rfd = os.fdopen(r, 'r') data = json.loads(rfd.read(), cls=AnsibleJSONDecoder) messages.extend(data.pop('messages')) result.update(data) else: messages.append(('vvvv', 'found existing local domain socket, using it!')) conn = Connection(socket_path) try: conn.set_options(direct=options) except ConnectionError as exc: messages.append(('debug', to_text(exc))) raise ConnectionError('Unable to decode JSON from response set_options. See the debug log for more information.') pc_data = to_text(init_data) try: conn.update_play_context(pc_data) conn.set_check_prompt(task_uuid) except Exception as exc: # Only network_cli has update_play context and set_check_prompt, so missing this is # not fatal e.g. 
netconf if isinstance(exc, ConnectionError) and getattr(exc, 'code', None) == -32601: pass else: result.update({ 'error': to_text(exc), 'exception': traceback.format_exc() }) if os.path.exists(socket_path): messages.extend(Connection(socket_path).pop_messages()) messages.append(('vvvv', sys.stdout.getvalue())) result.update({ 'messages': messages, 'socket_path': socket_path }) sys.stdout = saved_stdout if 'exception' in result: rc = 1 sys.stderr.write(json.dumps(result, cls=AnsibleJSONEncoder)) else: rc = 0 sys.stdout.write(json.dumps(result, cls=AnsibleJSONEncoder)) sys.exit(rc) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/cli/vault.py0000755000000000000000000005473714556006441017373 0ustar00rootroot#!/usr/bin/env python # (c) 2014, James Tanner # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import os import sys from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleOptionsError from ansible.module_utils.common.text.converters import to_text, to_bytes from ansible.parsing.dataloader import DataLoader from ansible.parsing.vault import VaultEditor, VaultLib, match_encrypt_secret from ansible.utils.display import Display display = Display() class VaultCLI(CLI): ''' can encrypt any structured data file used by Ansible. This can include *group_vars/* or *host_vars/* inventory variables, variables loaded by *include_vars* or *vars_files*, or variable files passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*. Role variables and defaults are also included! Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you'd like to not expose what variables you are using, you can keep an individual task file entirely encrypted. 
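Illustrative invocations (annotation, not part of the original help text):

    ansible-vault encrypt group_vars/all/vault.yml
    ansible-vault view --vault-id dev@prompt group_vars/all/vault.yml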
''' name = 'ansible-vault' FROM_STDIN = "stdin" FROM_ARGS = "the command line args" FROM_PROMPT = "the interactive prompt" def __init__(self, args): self.b_vault_pass = None self.b_new_vault_pass = None self.encrypt_string_read_stdin = False self.encrypt_secret = None self.encrypt_vault_id = None self.new_encrypt_secret = None self.new_encrypt_vault_id = None super(VaultCLI, self).__init__(args) def init_parser(self): super(VaultCLI, self).init_parser( desc="encryption/decryption utility for Ansible data files", epilog="\nSee '%s --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0]) ) common = opt_help.ArgumentParser(add_help=False) opt_help.add_vault_options(common) opt_help.add_verbosity_options(common) subparsers = self.parser.add_subparsers(dest='action') subparsers.required = True output = opt_help.ArgumentParser(add_help=False) output.add_argument('--output', default=None, dest='output_file', help='output file name for encrypt or decrypt; use - for stdout', type=opt_help.unfrack_path()) # For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting vault_id = opt_help.ArgumentParser(add_help=False) vault_id.add_argument('--encrypt-vault-id', default=[], dest='encrypt_vault_id', action='store', type=str, help='the vault id used to encrypt (required if more than one vault-id is provided)') create_parser = subparsers.add_parser('create', help='Create new vault encrypted file', parents=[vault_id, common]) create_parser.set_defaults(func=self.execute_create) create_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') create_parser.add_argument('--skip-tty-check', default=False, help='allows editor to be opened when no tty attached', dest='skip_tty_check', action='store_true') decrypt_parser = subparsers.add_parser('decrypt', help='Decrypt vault encrypted file', parents=[output, common]) decrypt_parser.set_defaults(func=self.execute_decrypt) decrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') edit_parser = subparsers.add_parser('edit', help='Edit vault encrypted file', parents=[vault_id, common]) edit_parser.set_defaults(func=self.execute_edit) edit_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') view_parser = subparsers.add_parser('view', help='View vault encrypted file', parents=[common]) view_parser.set_defaults(func=self.execute_view) view_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') encrypt_parser = subparsers.add_parser('encrypt', help='Encrypt YAML file', parents=[common, output, vault_id]) encrypt_parser.set_defaults(func=self.execute_encrypt) encrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') enc_str_parser = subparsers.add_parser('encrypt_string', help='Encrypt a string', parents=[common, output, vault_id]) enc_str_parser.set_defaults(func=self.execute_encrypt_string) enc_str_parser.add_argument('args', help='String to encrypt', metavar='string_to_encrypt', nargs='*') enc_str_parser.add_argument('-p', '--prompt', dest='encrypt_string_prompt', action='store_true', help="Prompt for the string to encrypt") enc_str_parser.add_argument('--show-input', dest='show_string_input', default=False, action='store_true', help='Do not hide input when prompted for the string to encrypt') enc_str_parser.add_argument('-n', '--name', dest='encrypt_string_names', action='append', help="Specify the variable name") enc_str_parser.add_argument('--stdin-name', 
dest='encrypt_string_stdin_name', default=None, help="Specify the variable name for stdin") rekey_parser = subparsers.add_parser('rekey', help='Re-key a vault encrypted file', parents=[common, vault_id]) rekey_parser.set_defaults(func=self.execute_rekey) rekey_new_group = rekey_parser.add_mutually_exclusive_group() rekey_new_group.add_argument('--new-vault-password-file', default=None, dest='new_vault_password_file', help="new vault password file for rekey", type=opt_help.unfrack_path()) rekey_new_group.add_argument('--new-vault-id', default=None, dest='new_vault_id', type=str, help='the new vault identity to use for rekey') rekey_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*') def post_process_args(self, options): options = super(VaultCLI, self).post_process_args(options) display.verbosity = options.verbosity if options.vault_ids: for vault_id in options.vault_ids: if u';' in vault_id: raise AnsibleOptionsError("'%s' is not a valid vault id. The character ';' is not allowed in vault ids" % vault_id) if getattr(options, 'output_file', None) and len(options.args) > 1: raise AnsibleOptionsError("At most one input file may be used with the --output option") if options.action == 'encrypt_string': if '-' in options.args or not options.args or options.encrypt_string_stdin_name: self.encrypt_string_read_stdin = True # TODO: prompting from stdin and reading from stdin seem mutually exclusive, but verify that. if options.encrypt_string_prompt and self.encrypt_string_read_stdin: raise AnsibleOptionsError('The --prompt option is not supported if also reading input from stdin') return options def run(self): super(VaultCLI, self).run() loader = DataLoader() # set default restrictive umask old_umask = os.umask(0o077) vault_ids = list(context.CLIARGS['vault_ids']) # there are 3 types of actions, those that just 'read' (decrypt, view) and only # need to ask for a password once, and those that 'write' (create, encrypt) that # ask for a new password and confirm it, and 'read/write (rekey) that asks for the # old password, then asks for a new one and confirms it. default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST vault_ids = default_vault_ids + vault_ids action = context.CLIARGS['action'] # TODO: instead of prompting for these before, we could let VaultEditor # call a callback when it needs it. if action in ['decrypt', 'view', 'rekey', 'edit']: vault_secrets = self.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(context.CLIARGS['vault_password_files']), ask_vault_pass=context.CLIARGS['ask_vault_pass']) if not vault_secrets: raise AnsibleOptionsError("A vault password is required to use Ansible's Vault") if action in ['encrypt', 'encrypt_string', 'create']: encrypt_vault_id = None # no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit' if action not in ['edit']: encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY vault_secrets = None vault_secrets = \ self.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(context.CLIARGS['vault_password_files']), ask_vault_pass=context.CLIARGS['ask_vault_pass'], create_new_password=True) if len(vault_secrets) > 1 and not encrypt_vault_id: raise AnsibleOptionsError("The vault-ids %s are available to encrypt. 
Specify the vault-id to encrypt with --encrypt-vault-id" % ','.join([x[0] for x in vault_secrets])) if not vault_secrets: raise AnsibleOptionsError("A vault password is required to use Ansible's Vault") encrypt_secret = match_encrypt_secret(vault_secrets, encrypt_vault_id=encrypt_vault_id) # only one secret for encrypt for now, use the first vault_id and use its first secret # TODO: exception if more than one? self.encrypt_vault_id = encrypt_secret[0] self.encrypt_secret = encrypt_secret[1] if action in ['rekey']: encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY # print('encrypt_vault_id: %s' % encrypt_vault_id) # print('default_encrypt_vault_id: %s' % default_encrypt_vault_id) # new_vault_ids should only ever be one item, from # load the default vault ids if we are using encrypt-vault-id new_vault_ids = [] if encrypt_vault_id: new_vault_ids = default_vault_ids if context.CLIARGS['new_vault_id']: new_vault_ids.append(context.CLIARGS['new_vault_id']) new_vault_password_files = [] if context.CLIARGS['new_vault_password_file']: new_vault_password_files.append(context.CLIARGS['new_vault_password_file']) new_vault_secrets = \ self.setup_vault_secrets(loader, vault_ids=new_vault_ids, vault_password_files=new_vault_password_files, ask_vault_pass=context.CLIARGS['ask_vault_pass'], create_new_password=True) if not new_vault_secrets: raise AnsibleOptionsError("A new vault password is required to use Ansible's Vault rekey") # There is only one new_vault_id currently and one new_vault_secret, or we # use the id specified in --encrypt-vault-id new_encrypt_secret = match_encrypt_secret(new_vault_secrets, encrypt_vault_id=encrypt_vault_id) self.new_encrypt_vault_id = new_encrypt_secret[0] self.new_encrypt_secret = new_encrypt_secret[1] loader.set_vault_secrets(vault_secrets) # FIXME: do we need to create VaultEditor here? its not reused vault = VaultLib(vault_secrets) self.editor = VaultEditor(vault) context.CLIARGS['func']() # and restore umask os.umask(old_umask) def execute_encrypt(self): ''' encrypt the supplied file using the provided vault secret ''' if not context.CLIARGS['args'] and sys.stdin.isatty(): display.display("Reading plaintext input from stdin", stderr=True) for f in context.CLIARGS['args'] or ['-']: # Fixme: use the correct vau self.editor.encrypt_file(f, self.encrypt_secret, vault_id=self.encrypt_vault_id, output_file=context.CLIARGS['output_file']) if sys.stdout.isatty(): display.display("Encryption successful", stderr=True) @staticmethod def format_ciphertext_yaml(b_ciphertext, indent=None, name=None): indent = indent or 10 block_format_var_name = "" if name: block_format_var_name = "%s: " % name block_format_header = "%s!vault |" % block_format_var_name lines = [] vault_ciphertext = to_text(b_ciphertext) lines.append(block_format_header) for line in vault_ciphertext.splitlines(): lines.append('%s%s' % (' ' * indent, line)) yaml_ciphertext = '\n'.join(lines) return yaml_ciphertext def execute_encrypt_string(self): ''' encrypt the supplied string using the provided vault secret ''' b_plaintext = None # Holds tuples (the_text, the_source_of_the_string, the variable name if its provided). b_plaintext_list = [] # remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so # we don't add it to the plaintext list args = [x for x in context.CLIARGS['args'] if x != '-'] # We can prompt and read input, or read from stdin, but not both. 
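# --- Annotation (added; not in the original source) ---
# After the branches below run, b_plaintext_list holds (bytes, source, name)
# tuples, e.g. hypothetically:
#   [(b'hunter2', self.FROM_PROMPT, 'db_password'),
#    (b'api token', self.FROM_ARGS, None)]
# Each tuple is encrypted and formatted independently further down.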
if context.CLIARGS['encrypt_string_prompt']: msg = "String to encrypt: " name = None name_prompt_response = display.prompt('Variable name (enter for no name): ') # TODO: enforce var naming rules? if name_prompt_response != "": name = name_prompt_response # TODO: could prompt for which vault_id to use for each plaintext string # currently, it will just be the default hide_input = not context.CLIARGS['show_string_input'] if hide_input: msg = "String to encrypt (hidden): " else: msg = "String to encrypt: " prompt_response = display.prompt(msg, private=hide_input) if prompt_response == '': raise AnsibleOptionsError('The plaintext provided from the prompt was empty, not encrypting') b_plaintext = to_bytes(prompt_response) b_plaintext_list.append((b_plaintext, self.FROM_PROMPT, name)) # read from stdin if self.encrypt_string_read_stdin: if sys.stdout.isatty(): display.display("Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a newline)", stderr=True) stdin_text = sys.stdin.read() if stdin_text == '': raise AnsibleOptionsError('stdin was empty, not encrypting') if sys.stdout.isatty() and not stdin_text.endswith("\n"): display.display("\n") b_plaintext = to_bytes(stdin_text) # defaults to None name = context.CLIARGS['encrypt_string_stdin_name'] b_plaintext_list.append((b_plaintext, self.FROM_STDIN, name)) # use any leftover args as strings to encrypt # Try to match args up to --name options if context.CLIARGS.get('encrypt_string_names', False): name_and_text_list = list(zip(context.CLIARGS['encrypt_string_names'], args)) # Some but not enough --name's to name each var if len(args) > len(name_and_text_list): # Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that. display.display('The number of --name options does not match the number of args.', stderr=True) display.display('The last named variable will be "%s". The rest will not have' ' names.' % context.CLIARGS['encrypt_string_names'][-1], stderr=True) # Add the rest of the args without specifying a name for extra_arg in args[len(name_and_text_list):]: name_and_text_list.append((None, extra_arg)) # if no --names are provided, just use the args without a name. else: name_and_text_list = [(None, x) for x in args] # Convert the plaintext text objects to bytestrings and collect for name_and_text in name_and_text_list: name, plaintext = name_and_text if plaintext == '': raise AnsibleOptionsError('The plaintext provided from the command line args was empty, not encrypting') b_plaintext = to_bytes(plaintext) b_plaintext_list.append((b_plaintext, self.FROM_ARGS, name)) # TODO: specify vault_id per string? # Format the encrypted strings and any corresponding stderr output outputs = self._format_output_vault_strings(b_plaintext_list, vault_id=self.encrypt_vault_id) b_outs = [] for output in outputs: err = output.get('err', None) out = output.get('out', '') if err: sys.stderr.write(err) b_outs.append(to_bytes(out)) # The output must end with a newline to play nice with terminal representation.
# Refs: # * https://stackoverflow.com/a/729795/595220 # * https://github.com/ansible/ansible/issues/78932 b_outs.append(b'') self.editor.write_data(b'\n'.join(b_outs), context.CLIARGS['output_file'] or '-') if sys.stdout.isatty(): display.display("Encryption successful", stderr=True) # TODO: offer block or string ala eyaml def _format_output_vault_strings(self, b_plaintext_list, vault_id=None): # If we are only showing one item in the output, we don't need to include commented # delimiters in the text show_delimiter = False if len(b_plaintext_list) > 1: show_delimiter = True # list of dicts {'out': '', 'err': ''} output = [] # Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook. # For more than one input, show some differentiating info in the stderr output so we can tell them # apart. If we have a var name, we include that in the yaml for index, b_plaintext_info in enumerate(b_plaintext_list): # (the text itself, which input it came from, its name) b_plaintext, src, name = b_plaintext_info b_ciphertext = self.editor.encrypt_bytes(b_plaintext, self.encrypt_secret, vault_id=vault_id) # block formatting yaml_text = self.format_ciphertext_yaml(b_ciphertext, name=name) err_msg = None if show_delimiter: human_index = index + 1 if name: err_msg = '# The encrypted version of variable ("%s", the string #%d from %s).\n' % (name, human_index, src) else: err_msg = '# The encrypted version of the string #%d from %s.\n' % (human_index, src) output.append({'out': yaml_text, 'err': err_msg}) return output def execute_decrypt(self): ''' decrypt the supplied file using the provided vault secret ''' if not context.CLIARGS['args'] and sys.stdin.isatty(): display.display("Reading ciphertext input from stdin", stderr=True) for f in context.CLIARGS['args'] or ['-']: self.editor.decrypt_file(f, output_file=context.CLIARGS['output_file']) if sys.stdout.isatty(): display.display("Decryption successful", stderr=True) def execute_create(self): ''' create and open a file in an editor that will be encrypted with the provided vault secret when closed''' if len(context.CLIARGS['args']) != 1: raise AnsibleOptionsError("ansible-vault create can take only one filename argument") if sys.stdout.isatty() or context.CLIARGS['skip_tty_check']: self.editor.create_file(context.CLIARGS['args'][0], self.encrypt_secret, vault_id=self.encrypt_vault_id) else: raise AnsibleOptionsError("not a tty, editor cannot be opened") def execute_edit(self): ''' open and decrypt an existing vaulted file in an editor that will be encrypted again when closed''' for f in context.CLIARGS['args']: self.editor.edit_file(f) def execute_view(self): ''' open, decrypt and view an existing vaulted file with a pager, using the supplied vault secret ''' for f in context.CLIARGS['args']: # Note: vault should return byte strings because it could encrypt # and decrypt binary files.
We are responsible for changing it to # unicode here because we are displaying it and therefore can make # the decision that the display doesn't have to be precisely what # the input was (leave that to decrypt instead) plaintext = self.editor.plaintext(f) self.pager(to_text(plaintext)) def execute_rekey(self): ''' re-encrypt a vaulted file with a new secret, the previous secret is required ''' for f in context.CLIARGS['args']: # FIXME: plumb in vault_id, use the default new_vault_secret for now self.editor.rekey_file(f, self.new_encrypt_secret, self.new_encrypt_vault_id) display.display("Rekey successful", stderr=True) def main(args=None): VaultCLI.cli_executor(args) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/collections/0000755000000000000000000000000014556006441017412 5ustar00rootrootansible-core-2.16.3/lib/ansible/collections/__init__.py0000644000000000000000000000000014556006441021511 0ustar00rootrootansible-core-2.16.3/lib/ansible/collections/list.py0000644000000000000000000000542614556006441020746 0ustar00rootroot# (c) 2019 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from ansible.errors import AnsibleError from ansible.cli.galaxy import with_collection_artifacts_manager from ansible.galaxy.collection import find_existing_collections from ansible.module_utils.common.text.converters import to_bytes from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path from ansible.utils.display import Display display = Display() @with_collection_artifacts_manager def list_collections(coll_filter=None, search_paths=None, dedupe=True, artifacts_manager=None): collections = {} for candidate in list_collection_dirs(search_paths=search_paths, coll_filter=coll_filter, artifacts_manager=artifacts_manager, dedupe=dedupe): collection = _get_collection_name_from_path(candidate) collections[collection] = candidate return collections @with_collection_artifacts_manager def list_collection_dirs(search_paths=None, coll_filter=None, artifacts_manager=None, dedupe=True): """ Return paths for the specific collections found in passed or configured search paths :param search_paths: list of text-string paths, if none load default config :param coll_filter: limit collections to just the specific namespace or collection, if None all are returned :return: list of collection directory paths """ namespace_filter = None collection_filter = None has_pure_namespace_filter = False # whether at least one coll_filter is a namespace-only filter if coll_filter is not None: if isinstance(coll_filter, str): coll_filter = [coll_filter] namespace_filter = set() for coll_name in coll_filter: if '.' 
in coll_name: try: namespace, collection = coll_name.split('.') except ValueError: raise AnsibleError("Invalid collection pattern supplied: %s" % coll_name) namespace_filter.add(namespace) if not has_pure_namespace_filter: if collection_filter is None: collection_filter = [] collection_filter.append(collection) else: namespace_filter.add(coll_name) has_pure_namespace_filter = True collection_filter = None namespace_filter = sorted(namespace_filter) for req in find_existing_collections(search_paths, artifacts_manager, namespace_filter=namespace_filter, collection_filter=collection_filter, dedupe=dedupe): if not has_pure_namespace_filter and coll_filter is not None and req.fqcn not in coll_filter: continue yield to_bytes(req.src) ansible-core-2.16.3/lib/ansible/compat/0000755000000000000000000000000014556006441016357 5ustar00rootrootansible-core-2.16.3/lib/ansible/compat/__init__.py0000644000000000000000000000207714556006441020476 0ustar00rootroot# (c) 2014, Toshio Kuratomi # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <https://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type ''' Compat library for ansible. This contains compatibility definitions for older python. When we need to import a module differently depending on python version, do it here. Then in the code we can simply import from compat in order to get what we want. ''' ansible-core-2.16.3/lib/ansible/compat/importlib_resources.py0000644000000000000000000000116114556006441023023 0ustar00rootroot# Copyright: Contributors to the Ansible project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import sys HAS_IMPORTLIB_RESOURCES = False if sys.version_info < (3, 10): try: from importlib_resources import files # type: ignore[import] # pylint: disable=unused-import except ImportError: files = None # type: ignore[assignment] else: HAS_IMPORTLIB_RESOURCES = True else: from importlib.resources import files HAS_IMPORTLIB_RESOURCES = True ansible-core-2.16.3/lib/ansible/compat/selectors/0000755000000000000000000000000014556006441020362 5ustar00rootrootansible-core-2.16.3/lib/ansible/compat/selectors/__init__.py0000644000000000000000000000234314556006441022475 0ustar00rootroot# (c) 2014, 2017 Toshio Kuratomi # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <https://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type # NOT_BUNDLED ''' Compat selectors library. Python-3.5 has this builtin. The selectors2 package exists on pypi to backport the functionality as far as python-2.6. Implementation previously resided here - maintaining this file after the move to ansible.module_utils for code backwards compatibility. ''' import sys from ansible.module_utils.compat import selectors sys.modules['ansible.compat.selectors'] = selectors ansible-core-2.16.3/lib/ansible/config/0000755000000000000000000000000014556006441016341 5ustar00rootrootansible-core-2.16.3/lib/ansible/config/__init__.py0000644000000000000000000000000014556006441020440 0ustar00rootrootansible-core-2.16.3/lib/ansible/config/ansible_builtin_runtime.yml0000644000000000000000000133633714556006441024002 0ustar00rootroot# Copyright (c) 2020 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) plugin_routing: connection: # test entries redirected_local: redirect: ansible.builtin.local buildah: redirect: containers.podman.buildah podman: redirect: containers.podman.podman aws_ssm: redirect: community.aws.aws_ssm chroot: redirect: community.general.chroot docker: redirect: community.docker.docker funcd: redirect: community.general.funcd iocage: redirect: community.general.iocage jail: redirect: community.general.jail kubectl: redirect: kubernetes.core.kubectl libvirt_lxc: redirect: community.libvirt.libvirt_lxc lxc: redirect: community.general.lxc lxd: redirect: community.general.lxd oc: redirect: community.okd.oc qubes: redirect: community.general.qubes saltstack: redirect: community.general.saltstack zone: redirect: community.general.zone vmware_tools: redirect: community.vmware.vmware_tools httpapi: redirect: ansible.netcommon.httpapi napalm: redirect: ansible.netcommon.napalm netconf: redirect: ansible.netcommon.netconf network_cli: redirect: ansible.netcommon.network_cli persistent: redirect: ansible.netcommon.persistent modules: # test entry formerly_core_ping: redirect: testns.testcoll.ping # test entry uses_redirected_action: redirect: ansible.builtin.ping podman_container_info: redirect: containers.podman.podman_container_info podman_image_info: redirect: containers.podman.podman_image_info podman_image: redirect: containers.podman.podman_image podman_volume_info: redirect: containers.podman.podman_volume_info frr_facts: redirect: frr.frr.frr_facts frr_bgp: redirect: frr.frr.frr_bgp apt_repo: redirect: community.general.apt_repo aws_acm_facts: redirect: community.aws.aws_acm_facts aws_kms_facts: redirect: community.aws.aws_kms_facts aws_region_facts: redirect: community.aws.aws_region_facts aws_s3_bucket_facts: redirect: community.aws.aws_s3_bucket_facts aws_sgw_facts: redirect: community.aws.aws_sgw_facts aws_waf_facts: redirect: community.aws.aws_waf_facts cloudfront_facts: redirect: community.aws.cloudfront_facts cloudwatchlogs_log_group_facts: redirect: community.aws.cloudwatchlogs_log_group_facts ec2_asg_facts: redirect: community.aws.ec2_asg_facts ec2_customer_gateway_facts: redirect: community.aws.ec2_customer_gateway_facts ec2_instance_facts: redirect: community.aws.ec2_instance_facts ec2_eip_facts: redirect: community.aws.ec2_eip_facts ec2_elb_facts: redirect: community.aws.ec2_elb_facts ec2_lc_facts: redirect: community.aws.ec2_lc_facts
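# --- Annotation (added; not part of the shipped runtime file) ---
# Each entry below routes a pre-collections module name to its new home in a
# collection: when a play requests the short name on the left, the plugin
# loader follows the `redirect:` value at runtime, so e.g. `ec2_lc_facts`
# resolves to `community.aws.ec2_lc_facts`.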
ec2_placement_group_facts: redirect: community.aws.ec2_placement_group_facts ec2_vpc_endpoint_facts: redirect: community.aws.ec2_vpc_endpoint_facts ec2_vpc_igw_facts: redirect: community.aws.ec2_vpc_igw_facts ec2_vpc_nacl_facts: redirect: community.aws.ec2_vpc_nacl_facts ec2_vpc_nat_gateway_facts: redirect: community.aws.ec2_vpc_nat_gateway_facts ec2_vpc_peering_facts: redirect: community.aws.ec2_vpc_peering_facts ec2_vpc_route_table_facts: redirect: community.aws.ec2_vpc_route_table_facts ec2_vpc_vgw_facts: redirect: community.aws.ec2_vpc_vgw_facts ec2_vpc_vpn_facts: redirect: community.aws.ec2_vpc_vpn_facts ecs_service_facts: redirect: community.aws.ecs_service_facts ecs_taskdefinition_facts: redirect: community.aws.ecs_taskdefinition_facts efs_facts: redirect: community.aws.efs_facts elasticache_facts: redirect: community.aws.elasticache_facts elb_application_lb_facts: redirect: community.aws.elb_application_lb_facts elb_classic_lb_facts: redirect: community.aws.elb_classic_lb_facts elb_target_facts: redirect: community.aws.elb_target_facts elb_target_group_facts: redirect: community.aws.elb_target_group_facts iam_cert_facts: redirect: community.aws.iam_cert_facts iam_mfa_device_facts: redirect: community.aws.iam_mfa_device_facts iam_role_facts: redirect: community.aws.iam_role_facts iam_server_certificate_facts: redirect: community.aws.iam_server_certificate_facts lambda_facts: redirect: community.aws.lambda_facts rds_instance_facts: redirect: community.aws.rds_instance_facts rds_snapshot_facts: redirect: community.aws.rds_snapshot_facts redshift_facts: redirect: community.aws.redshift_facts route53_facts: redirect: community.aws.route53_facts aws_acm: redirect: community.aws.aws_acm aws_acm_info: redirect: community.aws.aws_acm_info aws_api_gateway: redirect: community.aws.aws_api_gateway aws_application_scaling_policy: redirect: community.aws.aws_application_scaling_policy aws_batch_compute_environment: redirect: community.aws.aws_batch_compute_environment aws_batch_job_definition: redirect: community.aws.aws_batch_job_definition aws_batch_job_queue: redirect: community.aws.aws_batch_job_queue aws_codebuild: redirect: community.aws.aws_codebuild aws_codecommit: redirect: community.aws.aws_codecommit aws_codepipeline: redirect: community.aws.aws_codepipeline aws_config_aggregation_authorization: redirect: community.aws.aws_config_aggregation_authorization aws_config_aggregator: redirect: community.aws.aws_config_aggregator aws_config_delivery_channel: redirect: community.aws.aws_config_delivery_channel aws_config_recorder: redirect: community.aws.aws_config_recorder aws_config_rule: redirect: community.aws.aws_config_rule aws_direct_connect_connection: redirect: community.aws.aws_direct_connect_connection aws_direct_connect_gateway: redirect: community.aws.aws_direct_connect_gateway aws_direct_connect_link_aggregation_group: redirect: community.aws.aws_direct_connect_link_aggregation_group aws_direct_connect_virtual_interface: redirect: community.aws.aws_direct_connect_virtual_interface aws_eks_cluster: redirect: community.aws.aws_eks_cluster aws_elasticbeanstalk_app: redirect: community.aws.aws_elasticbeanstalk_app aws_glue_connection: redirect: community.aws.aws_glue_connection aws_glue_job: redirect: community.aws.aws_glue_job aws_inspector_target: redirect: community.aws.aws_inspector_target aws_kms: redirect: community.aws.aws_kms aws_kms_info: redirect: community.aws.aws_kms_info aws_region_info: redirect: community.aws.aws_region_info aws_s3_bucket_info: redirect: 
community.aws.aws_s3_bucket_info aws_s3_cors: redirect: community.aws.aws_s3_cors aws_secret: redirect: community.aws.aws_secret aws_ses_identity: redirect: community.aws.aws_ses_identity aws_ses_identity_policy: redirect: community.aws.aws_ses_identity_policy aws_ses_rule_set: redirect: community.aws.aws_ses_rule_set aws_sgw_info: redirect: community.aws.aws_sgw_info aws_ssm_parameter_store: redirect: community.aws.aws_ssm_parameter_store aws_step_functions_state_machine: redirect: community.aws.aws_step_functions_state_machine aws_step_functions_state_machine_execution: redirect: community.aws.aws_step_functions_state_machine_execution aws_waf_condition: redirect: community.aws.aws_waf_condition aws_waf_info: redirect: community.aws.aws_waf_info aws_waf_rule: redirect: community.aws.aws_waf_rule aws_waf_web_acl: redirect: community.aws.aws_waf_web_acl cloudformation_stack_set: redirect: community.aws.cloudformation_stack_set cloudformation_exports_info: redirect: community.aws.cloudformation_exports_info cloudfront_distribution: redirect: community.aws.cloudfront_distribution cloudfront_info: redirect: community.aws.cloudfront_info cloudfront_invalidation: redirect: community.aws.cloudfront_invalidation cloudfront_origin_access_identity: redirect: community.aws.cloudfront_origin_access_identity cloudtrail: redirect: community.aws.cloudtrail cloudwatchevent_rule: redirect: community.aws.cloudwatchevent_rule cloudwatchlogs_log_group: redirect: community.aws.cloudwatchlogs_log_group cloudwatchlogs_log_group_info: redirect: community.aws.cloudwatchlogs_log_group_info cloudwatchlogs_log_group_metric_filter: redirect: community.aws.cloudwatchlogs_log_group_metric_filter data_pipeline: redirect: community.aws.data_pipeline dms_endpoint: redirect: community.aws.dms_endpoint dms_replication_subnet_group: redirect: community.aws.dms_replication_subnet_group dynamodb_table: redirect: community.aws.dynamodb_table dynamodb_ttl: redirect: community.aws.dynamodb_ttl ec2_ami_copy: redirect: community.aws.ec2_ami_copy ec2_asg: redirect: community.aws.ec2_asg ec2_asg_info: redirect: community.aws.ec2_asg_info ec2_asg_lifecycle_hook: redirect: community.aws.ec2_asg_lifecycle_hook ec2_customer_gateway: redirect: community.aws.ec2_customer_gateway ec2_customer_gateway_info: redirect: community.aws.ec2_customer_gateway_info ec2_eip: redirect: community.aws.ec2_eip ec2_eip_info: redirect: community.aws.ec2_eip_info ec2_elb: redirect: community.aws.ec2_elb ec2_elb_info: redirect: community.aws.ec2_elb_info ec2_instance: redirect: community.aws.ec2_instance ec2_instance_info: redirect: community.aws.ec2_instance_info ec2_launch_template: redirect: community.aws.ec2_launch_template ec2_lc: redirect: community.aws.ec2_lc ec2_lc_find: redirect: community.aws.ec2_lc_find ec2_lc_info: redirect: community.aws.ec2_lc_info ec2_metric_alarm: redirect: community.aws.ec2_metric_alarm ec2_placement_group: redirect: community.aws.ec2_placement_group ec2_placement_group_info: redirect: community.aws.ec2_placement_group_info ec2_scaling_policy: redirect: community.aws.ec2_scaling_policy ec2_snapshot_copy: redirect: community.aws.ec2_snapshot_copy ec2_transit_gateway: redirect: community.aws.ec2_transit_gateway ec2_transit_gateway_info: redirect: community.aws.ec2_transit_gateway_info ec2_vpc_egress_igw: redirect: community.aws.ec2_vpc_egress_igw ec2_vpc_endpoint: redirect: community.aws.ec2_vpc_endpoint ec2_vpc_endpoint_info: redirect: community.aws.ec2_vpc_endpoint_info ec2_vpc_igw: redirect: community.aws.ec2_vpc_igw 
ec2_vpc_igw_info: redirect: community.aws.ec2_vpc_igw_info ec2_vpc_nacl: redirect: community.aws.ec2_vpc_nacl ec2_vpc_nacl_info: redirect: community.aws.ec2_vpc_nacl_info ec2_vpc_nat_gateway: redirect: community.aws.ec2_vpc_nat_gateway ec2_vpc_nat_gateway_info: redirect: community.aws.ec2_vpc_nat_gateway_info ec2_vpc_peer: redirect: community.aws.ec2_vpc_peer ec2_vpc_peering_info: redirect: community.aws.ec2_vpc_peering_info ec2_vpc_route_table: redirect: community.aws.ec2_vpc_route_table ec2_vpc_route_table_info: redirect: community.aws.ec2_vpc_route_table_info ec2_vpc_vgw: redirect: community.aws.ec2_vpc_vgw ec2_vpc_vgw_info: redirect: community.aws.ec2_vpc_vgw_info ec2_vpc_vpn: redirect: community.aws.ec2_vpc_vpn ec2_vpc_vpn_info: redirect: community.aws.ec2_vpc_vpn_info ec2_win_password: redirect: community.aws.ec2_win_password ecs_attribute: redirect: community.aws.ecs_attribute ecs_cluster: redirect: community.aws.ecs_cluster ecs_ecr: redirect: community.aws.ecs_ecr ecs_service: redirect: community.aws.ecs_service ecs_service_info: redirect: community.aws.ecs_service_info ecs_tag: redirect: community.aws.ecs_tag ecs_task: redirect: community.aws.ecs_task ecs_taskdefinition: redirect: community.aws.ecs_taskdefinition ecs_taskdefinition_info: redirect: community.aws.ecs_taskdefinition_info efs: redirect: community.aws.efs efs_info: redirect: community.aws.efs_info elasticache: redirect: community.aws.elasticache elasticache_info: redirect: community.aws.elasticache_info elasticache_parameter_group: redirect: community.aws.elasticache_parameter_group elasticache_snapshot: redirect: community.aws.elasticache_snapshot elasticache_subnet_group: redirect: community.aws.elasticache_subnet_group elb_application_lb: redirect: community.aws.elb_application_lb elb_application_lb_info: redirect: community.aws.elb_application_lb_info elb_classic_lb: redirect: community.aws.elb_classic_lb elb_classic_lb_info: redirect: community.aws.elb_classic_lb_info elb_instance: redirect: community.aws.elb_instance elb_network_lb: redirect: community.aws.elb_network_lb elb_target: redirect: community.aws.elb_target elb_target_group: redirect: community.aws.elb_target_group elb_target_group_info: redirect: community.aws.elb_target_group_info elb_target_info: redirect: community.aws.elb_target_info execute_lambda: redirect: community.aws.execute_lambda iam: redirect: community.aws.iam iam_cert: redirect: community.aws.iam_cert iam_group: redirect: community.aws.iam_group iam_managed_policy: redirect: community.aws.iam_managed_policy iam_mfa_device_info: redirect: community.aws.iam_mfa_device_info iam_password_policy: redirect: community.aws.iam_password_policy iam_policy: redirect: community.aws.iam_policy iam_policy_info: redirect: community.aws.iam_policy_info iam_role: redirect: community.aws.iam_role iam_role_info: redirect: community.aws.iam_role_info iam_saml_federation: redirect: community.aws.iam_saml_federation iam_server_certificate_info: redirect: community.aws.iam_server_certificate_info iam_user: redirect: community.aws.iam_user iam_user_info: redirect: community.aws.iam_user_info kinesis_stream: redirect: community.aws.kinesis_stream lambda: redirect: community.aws.lambda lambda_alias: redirect: community.aws.lambda_alias lambda_event: redirect: community.aws.lambda_event lambda_info: redirect: community.aws.lambda_info lambda_policy: redirect: community.aws.lambda_policy lightsail: redirect: community.aws.lightsail rds: redirect: community.aws.rds rds_instance: redirect: community.aws.rds_instance 
rds_instance_info: redirect: community.aws.rds_instance_info rds_param_group: redirect: community.aws.rds_param_group rds_snapshot: redirect: community.aws.rds_snapshot rds_snapshot_info: redirect: community.aws.rds_snapshot_info rds_subnet_group: redirect: community.aws.rds_subnet_group redshift: redirect: community.aws.redshift redshift_cross_region_snapshots: redirect: community.aws.redshift_cross_region_snapshots redshift_info: redirect: community.aws.redshift_info redshift_subnet_group: redirect: community.aws.redshift_subnet_group route53: redirect: community.aws.route53 route53_health_check: redirect: community.aws.route53_health_check route53_info: redirect: community.aws.route53_info route53_zone: redirect: community.aws.route53_zone s3_bucket_notification: redirect: community.aws.s3_bucket_notification s3_lifecycle: redirect: community.aws.s3_lifecycle s3_logging: redirect: community.aws.s3_logging s3_sync: redirect: community.aws.s3_sync s3_website: redirect: community.aws.s3_website sns: redirect: community.aws.sns sns_topic: redirect: community.aws.sns_topic sqs_queue: redirect: community.aws.sqs_queue sts_assume_role: redirect: community.aws.sts_assume_role sts_session_token: redirect: community.aws.sts_session_token ali_instance_facts: redirect: community.general.ali_instance_facts ali_instance: redirect: community.general.ali_instance ali_instance_info: redirect: community.general.ali_instance_info atomic_container: redirect: community.general.atomic_container atomic_host: redirect: community.general.atomic_host atomic_image: redirect: community.general.atomic_image clc_aa_policy: redirect: community.general.clc_aa_policy clc_alert_policy: redirect: community.general.clc_alert_policy clc_blueprint_package: redirect: community.general.clc_blueprint_package clc_firewall_policy: redirect: community.general.clc_firewall_policy clc_group: redirect: community.general.clc_group clc_loadbalancer: redirect: community.general.clc_loadbalancer clc_modify_server: redirect: community.general.clc_modify_server clc_publicip: redirect: community.general.clc_publicip clc_server: redirect: community.general.clc_server clc_server_snapshot: redirect: community.general.clc_server_snapshot cloudscale_floating_ip: redirect: cloudscale_ch.cloud.floating_ip cloudscale_server: redirect: cloudscale_ch.cloud.server cloudscale_server_group: redirect: cloudscale_ch.cloud.server_group cloudscale_volume: redirect: cloudscale_ch.cloud.volume cs_instance_facts: redirect: ngine_io.cloudstack.cs_instance_info cs_zone_facts: redirect: ngine_io.cloudstack.cs_zone_info cs_account: redirect: ngine_io.cloudstack.cs_account cs_affinitygroup: redirect: ngine_io.cloudstack.cs_affinitygroup cs_cluster: redirect: ngine_io.cloudstack.cs_cluster cs_configuration: redirect: ngine_io.cloudstack.cs_configuration cs_disk_offering: redirect: ngine_io.cloudstack.cs_disk_offering cs_domain: redirect: ngine_io.cloudstack.cs_domain cs_facts: redirect: ngine_io.cloudstack.cs_facts cs_firewall: redirect: ngine_io.cloudstack.cs_firewall cs_host: redirect: ngine_io.cloudstack.cs_host cs_image_store: redirect: ngine_io.cloudstack.cs_image_store cs_instance: redirect: ngine_io.cloudstack.cs_instance cs_instance_info: redirect: ngine_io.cloudstack.cs_instance_info cs_instance_nic: redirect: ngine_io.cloudstack.cs_instance_nic cs_instance_nic_secondaryip: redirect: ngine_io.cloudstack.cs_instance_nic_secondaryip cs_instance_password_reset: redirect: ngine_io.cloudstack.cs_instance_password_reset cs_instancegroup: redirect: 
ngine_io.cloudstack.cs_instancegroup cs_ip_address: redirect: ngine_io.cloudstack.cs_ip_address cs_iso: redirect: ngine_io.cloudstack.cs_iso cs_loadbalancer_rule: redirect: ngine_io.cloudstack.cs_loadbalancer_rule cs_loadbalancer_rule_member: redirect: ngine_io.cloudstack.cs_loadbalancer_rule_member cs_network: redirect: ngine_io.cloudstack.cs_network cs_network_acl: redirect: ngine_io.cloudstack.cs_network_acl cs_network_acl_rule: redirect: ngine_io.cloudstack.cs_network_acl_rule cs_network_offering: redirect: ngine_io.cloudstack.cs_network_offering cs_physical_network: redirect: ngine_io.cloudstack.cs_physical_network cs_pod: redirect: ngine_io.cloudstack.cs_pod cs_portforward: redirect: ngine_io.cloudstack.cs_portforward cs_project: redirect: ngine_io.cloudstack.cs_project cs_region: redirect: ngine_io.cloudstack.cs_region cs_resourcelimit: redirect: ngine_io.cloudstack.cs_resourcelimit cs_role: redirect: ngine_io.cloudstack.cs_role cs_role_permission: redirect: ngine_io.cloudstack.cs_role_permission cs_router: redirect: ngine_io.cloudstack.cs_router cs_securitygroup: redirect: ngine_io.cloudstack.cs_securitygroup cs_securitygroup_rule: redirect: ngine_io.cloudstack.cs_securitygroup_rule cs_service_offering: redirect: ngine_io.cloudstack.cs_service_offering cs_snapshot_policy: redirect: ngine_io.cloudstack.cs_snapshot_policy cs_sshkeypair: redirect: ngine_io.cloudstack.cs_sshkeypair cs_staticnat: redirect: ngine_io.cloudstack.cs_staticnat cs_storage_pool: redirect: ngine_io.cloudstack.cs_storage_pool cs_template: redirect: ngine_io.cloudstack.cs_template cs_traffic_type: redirect: ngine_io.cloudstack.cs_traffic_type cs_user: redirect: ngine_io.cloudstack.cs_user cs_vlan_ip_range: redirect: ngine_io.cloudstack.cs_vlan_ip_range cs_vmsnapshot: redirect: ngine_io.cloudstack.cs_vmsnapshot cs_volume: redirect: ngine_io.cloudstack.cs_volume cs_vpc: redirect: ngine_io.cloudstack.cs_vpc cs_vpc_offering: redirect: ngine_io.cloudstack.cs_vpc_offering cs_vpn_connection: redirect: ngine_io.cloudstack.cs_vpn_connection cs_vpn_customer_gateway: redirect: ngine_io.cloudstack.cs_vpn_customer_gateway cs_vpn_gateway: redirect: ngine_io.cloudstack.cs_vpn_gateway cs_zone: redirect: ngine_io.cloudstack.cs_zone cs_zone_info: redirect: ngine_io.cloudstack.cs_zone_info digital_ocean: redirect: community.digitalocean.digital_ocean digital_ocean_account_facts: redirect: community.digitalocean.digital_ocean_account_facts digital_ocean_certificate_facts: redirect: community.digitalocean.digital_ocean_certificate_facts digital_ocean_domain_facts: redirect: community.digitalocean.digital_ocean_domain_facts digital_ocean_firewall_facts: redirect: community.digitalocean.digital_ocean_firewall_facts digital_ocean_floating_ip_facts: redirect: community.digitalocean.digital_ocean_floating_ip_facts digital_ocean_image_facts: redirect: community.digitalocean.digital_ocean_image_facts digital_ocean_load_balancer_facts: redirect: community.digitalocean.digital_ocean_load_balancer_facts digital_ocean_region_facts: redirect: community.digitalocean.digital_ocean_region_facts digital_ocean_size_facts: redirect: community.digitalocean.digital_ocean_size_facts digital_ocean_snapshot_facts: redirect: community.digitalocean.digital_ocean_snapshot_facts digital_ocean_sshkey_facts: redirect: community.digitalocean.digital_ocean_sshkey_facts digital_ocean_tag_facts: redirect: community.digitalocean.digital_ocean_tag_facts digital_ocean_volume_facts: redirect: community.digitalocean.digital_ocean_volume_facts digital_ocean_account_info: 
redirect: community.digitalocean.digital_ocean_account_info digital_ocean_block_storage: redirect: community.digitalocean.digital_ocean_block_storage digital_ocean_certificate: redirect: community.digitalocean.digital_ocean_certificate digital_ocean_certificate_info: redirect: community.digitalocean.digital_ocean_certificate_info digital_ocean_domain: redirect: community.digitalocean.digital_ocean_domain digital_ocean_domain_info: redirect: community.digitalocean.digital_ocean_domain_info digital_ocean_droplet: redirect: community.digitalocean.digital_ocean_droplet digital_ocean_firewall_info: redirect: community.digitalocean.digital_ocean_firewall_info digital_ocean_floating_ip: redirect: community.digitalocean.digital_ocean_floating_ip digital_ocean_floating_ip_info: redirect: community.digitalocean.digital_ocean_floating_ip_info digital_ocean_image_info: redirect: community.digitalocean.digital_ocean_image_info digital_ocean_load_balancer_info: redirect: community.digitalocean.digital_ocean_load_balancer_info digital_ocean_region_info: redirect: community.digitalocean.digital_ocean_region_info digital_ocean_size_info: redirect: community.digitalocean.digital_ocean_size_info digital_ocean_snapshot_info: redirect: community.digitalocean.digital_ocean_snapshot_info digital_ocean_sshkey: redirect: community.digitalocean.digital_ocean_sshkey digital_ocean_sshkey_info: redirect: community.digitalocean.digital_ocean_sshkey_info digital_ocean_tag: redirect: community.digitalocean.digital_ocean_tag digital_ocean_tag_info: redirect: community.digitalocean.digital_ocean_tag_info digital_ocean_volume_info: redirect: community.digitalocean.digital_ocean_volume_info dimensiondata_network: redirect: community.general.dimensiondata_network dimensiondata_vlan: redirect: community.general.dimensiondata_vlan docker_image_facts: redirect: community.general.docker_image_facts docker_service: redirect: community.general.docker_service docker_compose: redirect: community.docker.docker_compose docker_config: redirect: community.docker.docker_config docker_container: redirect: community.docker.docker_container docker_container_info: redirect: community.docker.docker_container_info docker_host_info: redirect: community.docker.docker_host_info docker_image: redirect: community.docker.docker_image docker_image_info: redirect: community.docker.docker_image_info docker_login: redirect: community.docker.docker_login docker_network: redirect: community.docker.docker_network docker_network_info: redirect: community.docker.docker_network_info docker_node: redirect: community.docker.docker_node docker_node_info: redirect: community.docker.docker_node_info docker_prune: redirect: community.docker.docker_prune docker_secret: redirect: community.docker.docker_secret docker_stack: redirect: community.docker.docker_stack docker_swarm: redirect: community.docker.docker_swarm docker_swarm_info: redirect: community.docker.docker_swarm_info docker_swarm_service: redirect: community.docker.docker_swarm_service docker_swarm_service_info: redirect: community.docker.docker_swarm_service_info docker_volume: redirect: community.docker.docker_volume docker_volume_info: redirect: community.docker.docker_volume_info gcdns_record: redirect: community.general.gcdns_record gcdns_zone: redirect: community.general.gcdns_zone gce: redirect: community.general.gce gcp_backend_service: redirect: community.general.gcp_backend_service gcp_bigquery_dataset_facts: redirect: google.cloud.gcp_bigquery_dataset_info gcp_bigquery_table_facts: redirect: 
google.cloud.gcp_bigquery_table_info gcp_cloudbuild_trigger_facts: redirect: google.cloud.gcp_cloudbuild_trigger_info gcp_compute_address_facts: redirect: google.cloud.gcp_compute_address_info gcp_compute_backend_bucket_facts: redirect: google.cloud.gcp_compute_backend_bucket_info gcp_compute_backend_service_facts: redirect: google.cloud.gcp_compute_backend_service_info gcp_compute_disk_facts: redirect: google.cloud.gcp_compute_disk_info gcp_compute_firewall_facts: redirect: google.cloud.gcp_compute_firewall_info gcp_compute_forwarding_rule_facts: redirect: google.cloud.gcp_compute_forwarding_rule_info gcp_compute_global_address_facts: redirect: google.cloud.gcp_compute_global_address_info gcp_compute_global_forwarding_rule_facts: redirect: google.cloud.gcp_compute_global_forwarding_rule_info gcp_compute_health_check_facts: redirect: google.cloud.gcp_compute_health_check_info gcp_compute_http_health_check_facts: redirect: google.cloud.gcp_compute_http_health_check_info gcp_compute_https_health_check_facts: redirect: google.cloud.gcp_compute_https_health_check_info gcp_compute_image_facts: redirect: google.cloud.gcp_compute_image_info gcp_compute_instance_facts: redirect: google.cloud.gcp_compute_instance_info gcp_compute_instance_group_facts: redirect: google.cloud.gcp_compute_instance_group_info gcp_compute_instance_group_manager_facts: redirect: google.cloud.gcp_compute_instance_group_manager_info gcp_compute_instance_template_facts: redirect: google.cloud.gcp_compute_instance_template_info gcp_compute_interconnect_attachment_facts: redirect: google.cloud.gcp_compute_interconnect_attachment_info gcp_compute_network_facts: redirect: google.cloud.gcp_compute_network_info gcp_compute_region_disk_facts: redirect: google.cloud.gcp_compute_region_disk_info gcp_compute_route_facts: redirect: google.cloud.gcp_compute_route_info gcp_compute_router_facts: redirect: google.cloud.gcp_compute_router_info gcp_compute_ssl_certificate_facts: redirect: google.cloud.gcp_compute_ssl_certificate_info gcp_compute_ssl_policy_facts: redirect: google.cloud.gcp_compute_ssl_policy_info gcp_compute_subnetwork_facts: redirect: google.cloud.gcp_compute_subnetwork_info gcp_compute_target_http_proxy_facts: redirect: google.cloud.gcp_compute_target_http_proxy_info gcp_compute_target_https_proxy_facts: redirect: google.cloud.gcp_compute_target_https_proxy_info gcp_compute_target_pool_facts: redirect: google.cloud.gcp_compute_target_pool_info gcp_compute_target_ssl_proxy_facts: redirect: google.cloud.gcp_compute_target_ssl_proxy_info gcp_compute_target_tcp_proxy_facts: redirect: google.cloud.gcp_compute_target_tcp_proxy_info gcp_compute_target_vpn_gateway_facts: redirect: google.cloud.gcp_compute_target_vpn_gateway_info gcp_compute_url_map_facts: redirect: google.cloud.gcp_compute_url_map_info gcp_compute_vpn_tunnel_facts: redirect: google.cloud.gcp_compute_vpn_tunnel_info gcp_container_cluster_facts: redirect: google.cloud.gcp_container_cluster_info gcp_container_node_pool_facts: redirect: google.cloud.gcp_container_node_pool_info gcp_dns_managed_zone_facts: redirect: google.cloud.gcp_dns_managed_zone_info gcp_dns_resource_record_set_facts: redirect: google.cloud.gcp_dns_resource_record_set_info gcp_forwarding_rule: redirect: community.general.gcp_forwarding_rule gcp_healthcheck: redirect: community.general.gcp_healthcheck gcp_iam_role_facts: redirect: google.cloud.gcp_iam_role_info gcp_iam_service_account_facts: redirect: google.cloud.gcp_iam_service_account_info gcp_pubsub_subscription_facts: redirect: 
google.cloud.gcp_pubsub_subscription_info gcp_pubsub_topic_facts: redirect: google.cloud.gcp_pubsub_topic_info gcp_redis_instance_facts: redirect: google.cloud.gcp_redis_instance_info gcp_resourcemanager_project_facts: redirect: google.cloud.gcp_resourcemanager_project_info gcp_sourcerepo_repository_facts: redirect: google.cloud.gcp_sourcerepo_repository_info gcp_spanner_database_facts: redirect: google.cloud.gcp_spanner_database_info gcp_spanner_instance_facts: redirect: google.cloud.gcp_spanner_instance_info gcp_sql_database_facts: redirect: google.cloud.gcp_sql_database_info gcp_sql_instance_facts: redirect: google.cloud.gcp_sql_instance_info gcp_sql_user_facts: redirect: google.cloud.gcp_sql_user_info gcp_target_proxy: redirect: community.general.gcp_target_proxy gcp_tpu_node_facts: redirect: google.cloud.gcp_tpu_node_info gcp_url_map: redirect: community.general.gcp_url_map gcpubsub_facts: redirect: community.general.gcpubsub_facts gcspanner: redirect: community.general.gcspanner gc_storage: redirect: community.google.gc_storage gce_eip: redirect: community.google.gce_eip gce_img: redirect: community.google.gce_img gce_instance_template: redirect: community.google.gce_instance_template gce_labels: redirect: community.google.gce_labels gce_lb: redirect: community.google.gce_lb gce_mig: redirect: community.google.gce_mig gce_net: redirect: community.google.gce_net gce_pd: redirect: community.google.gce_pd gce_snapshot: redirect: community.google.gce_snapshot gce_tag: redirect: community.google.gce_tag gcpubsub: redirect: community.google.gcpubsub gcpubsub_info: redirect: community.google.gcpubsub_info heroku_collaborator: redirect: community.general.heroku_collaborator hwc_ecs_instance: redirect: community.general.hwc_ecs_instance hwc_evs_disk: redirect: community.general.hwc_evs_disk hwc_network_vpc: redirect: community.general.hwc_network_vpc hwc_smn_topic: redirect: community.general.hwc_smn_topic hwc_vpc_eip: redirect: community.general.hwc_vpc_eip hwc_vpc_peering_connect: redirect: community.general.hwc_vpc_peering_connect hwc_vpc_port: redirect: community.general.hwc_vpc_port hwc_vpc_private_ip: redirect: community.general.hwc_vpc_private_ip hwc_vpc_route: redirect: community.general.hwc_vpc_route hwc_vpc_security_group: redirect: community.general.hwc_vpc_security_group hwc_vpc_security_group_rule: redirect: community.general.hwc_vpc_security_group_rule hwc_vpc_subnet: redirect: community.general.hwc_vpc_subnet kubevirt_cdi_upload: redirect: community.kubevirt.kubevirt_cdi_upload kubevirt_preset: redirect: community.kubevirt.kubevirt_preset kubevirt_pvc: redirect: community.kubevirt.kubevirt_pvc kubevirt_rs: redirect: community.kubevirt.kubevirt_rs kubevirt_template: redirect: community.kubevirt.kubevirt_template kubevirt_vm: redirect: community.kubevirt.kubevirt_vm linode: redirect: community.general.linode linode_v4: redirect: community.general.linode_v4 lxc_container: redirect: community.general.lxc_container lxd_container: redirect: community.general.lxd_container lxd_profile: redirect: community.general.lxd_profile memset_memstore_facts: redirect: community.general.memset_memstore_facts memset_server_facts: redirect: community.general.memset_server_facts memset_dns_reload: redirect: community.general.memset_dns_reload memset_memstore_info: redirect: community.general.memset_memstore_info memset_server_info: redirect: community.general.memset_server_info memset_zone: redirect: community.general.memset_zone memset_zone_domain: redirect: community.general.memset_zone_domain 
memset_zone_record: redirect: community.general.memset_zone_record cloud_init_data_facts: redirect: community.general.cloud_init_data_facts helm: redirect: community.general.helm ovirt: redirect: community.general.ovirt proxmox: redirect: community.general.proxmox proxmox_kvm: redirect: community.general.proxmox_kvm proxmox_template: redirect: community.general.proxmox_template rhevm: redirect: community.general.rhevm serverless: redirect: community.general.serverless terraform: redirect: community.general.terraform virt: redirect: community.libvirt.virt virt_net: redirect: community.libvirt.virt_net virt_pool: redirect: community.libvirt.virt_pool xenserver_facts: redirect: community.general.xenserver_facts oneandone_firewall_policy: redirect: community.general.oneandone_firewall_policy oneandone_load_balancer: redirect: community.general.oneandone_load_balancer oneandone_monitoring_policy: redirect: community.general.oneandone_monitoring_policy oneandone_private_network: redirect: community.general.oneandone_private_network oneandone_public_ip: redirect: community.general.oneandone_public_ip oneandone_server: redirect: community.general.oneandone_server online_server_facts: redirect: community.general.online_server_facts online_user_facts: redirect: community.general.online_user_facts online_server_info: redirect: community.general.online_server_info online_user_info: redirect: community.general.online_user_info one_image_facts: redirect: community.general.one_image_facts one_host: redirect: community.general.one_host one_image: redirect: community.general.one_image one_image_info: redirect: community.general.one_image_info one_service: redirect: community.general.one_service one_vm: redirect: community.general.one_vm os_flavor_facts: redirect: openstack.cloud.os_flavor_info os_image_facts: redirect: openstack.cloud.os_image_info os_keystone_domain_facts: redirect: openstack.cloud.os_keystone_domain_info os_networks_facts: redirect: openstack.cloud.os_networks_info os_port_facts: redirect: openstack.cloud.os_port_info os_project_facts: redirect: openstack.cloud.os_project_info os_server_facts: redirect: openstack.cloud.os_server_info os_subnets_facts: redirect: openstack.cloud.os_subnets_info os_user_facts: redirect: openstack.cloud.os_user_info oci_vcn: redirect: community.general.oci_vcn ovh_ip_failover: redirect: community.general.ovh_ip_failover ovh_ip_loadbalancing_backend: redirect: community.general.ovh_ip_loadbalancing_backend ovh_monthly_billing: redirect: community.general.ovh_monthly_billing ovirt_affinity_label_facts: redirect: community.general.ovirt_affinity_label_facts ovirt_api_facts: redirect: community.general.ovirt_api_facts ovirt_cluster_facts: redirect: community.general.ovirt_cluster_facts ovirt_datacenter_facts: redirect: community.general.ovirt_datacenter_facts ovirt_disk_facts: redirect: community.general.ovirt_disk_facts ovirt_event_facts: redirect: community.general.ovirt_event_facts ovirt_external_provider_facts: redirect: community.general.ovirt_external_provider_facts ovirt_group_facts: redirect: community.general.ovirt_group_facts ovirt_host_facts: redirect: community.general.ovirt_host_facts ovirt_host_storage_facts: redirect: community.general.ovirt_host_storage_facts ovirt_network_facts: redirect: community.general.ovirt_network_facts ovirt_nic_facts: redirect: community.general.ovirt_nic_facts ovirt_permission_facts: redirect: community.general.ovirt_permission_facts ovirt_quota_facts: redirect: community.general.ovirt_quota_facts 
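# Note the naming pattern in several entries above: deprecated `*_facts`
# names redirect to renamed `*_info` modules (for example, os_server_facts
# resolves to openstack.cloud.os_server_info), while others keep their
# `_facts` name in the new collection. The `_info` rename reflects that
# these modules return data as a registered result rather than setting host
# facts. A hedged usage sketch (variable name is illustrative):
#
#   - name: Deprecated facts-style name, routed to openstack.cloud.os_server_info
#     os_server_facts:
#     register: servers    # returned data, not injected host facts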
ovirt_scheduling_policy_facts: redirect: community.general.ovirt_scheduling_policy_facts ovirt_snapshot_facts: redirect: community.general.ovirt_snapshot_facts ovirt_storage_domain_facts: redirect: community.general.ovirt_storage_domain_facts ovirt_storage_template_facts: redirect: community.general.ovirt_storage_template_facts ovirt_storage_vm_facts: redirect: community.general.ovirt_storage_vm_facts ovirt_tag_facts: redirect: community.general.ovirt_tag_facts ovirt_template_facts: redirect: community.general.ovirt_template_facts ovirt_user_facts: redirect: community.general.ovirt_user_facts ovirt_vm_facts: redirect: community.general.ovirt_vm_facts ovirt_vmpool_facts: redirect: community.general.ovirt_vmpool_facts packet_device: redirect: community.general.packet_device packet_ip_subnet: redirect: community.general.packet_ip_subnet packet_project: redirect: community.general.packet_project packet_sshkey: redirect: community.general.packet_sshkey packet_volume: redirect: community.general.packet_volume packet_volume_attachment: redirect: community.general.packet_volume_attachment profitbricks: redirect: community.general.profitbricks profitbricks_datacenter: redirect: community.general.profitbricks_datacenter profitbricks_nic: redirect: community.general.profitbricks_nic profitbricks_volume: redirect: community.general.profitbricks_volume profitbricks_volume_attachments: redirect: community.general.profitbricks_volume_attachments pubnub_blocks: redirect: community.general.pubnub_blocks rax: redirect: community.general.rax rax_cbs: redirect: community.general.rax_cbs rax_cbs_attachments: redirect: community.general.rax_cbs_attachments rax_cdb: redirect: community.general.rax_cdb rax_cdb_database: redirect: community.general.rax_cdb_database rax_cdb_user: redirect: community.general.rax_cdb_user rax_clb: redirect: community.general.rax_clb rax_clb_nodes: redirect: community.general.rax_clb_nodes rax_clb_ssl: redirect: community.general.rax_clb_ssl rax_dns: redirect: community.general.rax_dns rax_dns_record: redirect: community.general.rax_dns_record rax_facts: redirect: community.general.rax_facts rax_files: redirect: community.general.rax_files rax_files_objects: redirect: community.general.rax_files_objects rax_identity: redirect: community.general.rax_identity rax_keypair: redirect: community.general.rax_keypair rax_meta: redirect: community.general.rax_meta rax_mon_alarm: redirect: community.general.rax_mon_alarm rax_mon_check: redirect: community.general.rax_mon_check rax_mon_entity: redirect: community.general.rax_mon_entity rax_mon_notification: redirect: community.general.rax_mon_notification rax_mon_notification_plan: redirect: community.general.rax_mon_notification_plan rax_network: redirect: community.general.rax_network rax_queue: redirect: community.general.rax_queue rax_scaling_group: redirect: community.general.rax_scaling_group rax_scaling_policy: redirect: community.general.rax_scaling_policy scaleway_image_facts: redirect: community.general.scaleway_image_facts scaleway_ip_facts: redirect: community.general.scaleway_ip_facts scaleway_organization_facts: redirect: community.general.scaleway_organization_facts scaleway_security_group_facts: redirect: community.general.scaleway_security_group_facts scaleway_server_facts: redirect: community.general.scaleway_server_facts scaleway_snapshot_facts: redirect: community.general.scaleway_snapshot_facts scaleway_volume_facts: redirect: community.general.scaleway_volume_facts scaleway_compute: redirect: 
community.general.scaleway_compute scaleway_image_info: redirect: community.general.scaleway_image_info scaleway_ip: redirect: community.general.scaleway_ip scaleway_ip_info: redirect: community.general.scaleway_ip_info scaleway_lb: redirect: community.general.scaleway_lb scaleway_organization_info: redirect: community.general.scaleway_organization_info scaleway_security_group: redirect: community.general.scaleway_security_group scaleway_security_group_info: redirect: community.general.scaleway_security_group_info scaleway_security_group_rule: redirect: community.general.scaleway_security_group_rule scaleway_server_info: redirect: community.general.scaleway_server_info scaleway_snapshot_info: redirect: community.general.scaleway_snapshot_info scaleway_sshkey: redirect: community.general.scaleway_sshkey scaleway_user_data: redirect: community.general.scaleway_user_data scaleway_volume: redirect: community.general.scaleway_volume scaleway_volume_info: redirect: community.general.scaleway_volume_info smartos_image_facts: redirect: community.general.smartos_image_facts imgadm: redirect: community.general.imgadm nictagadm: redirect: community.general.nictagadm smartos_image_info: redirect: community.general.smartos_image_info vmadm: redirect: community.general.vmadm sl_vm: redirect: community.general.sl_vm spotinst_aws_elastigroup: redirect: community.general.spotinst_aws_elastigroup udm_dns_record: redirect: community.general.udm_dns_record udm_dns_zone: redirect: community.general.udm_dns_zone udm_group: redirect: community.general.udm_group udm_share: redirect: community.general.udm_share udm_user: redirect: community.general.udm_user vr_account_facts: redirect: ngine_io.vultr.vultr_account_facts vr_dns_domain: redirect: ngine_io.vultr.vultr_dns_domain vr_dns_record: redirect: ngine_io.vultr.vultr_dns_record vr_firewall_group: redirect: ngine_io.vultr.vultr_firewall_group vr_firewall_rule: redirect: ngine_io.vultr.vultr_firewall_rule vr_server: redirect: ngine_io.vultr.vultr_server vr_ssh_key: redirect: ngine_io.vultr.vultr_ssh_key vr_startup_script: redirect: ngine_io.vultr.vultr_startup_script vr_user: redirect: ngine_io.vultr.vultr_user vultr_account_facts: redirect: ngine_io.vultr.vultr_account_info vultr_block_storage_facts: redirect: ngine_io.vultr.vultr_block_storage_info vultr_dns_domain_facts: redirect: ngine_io.vultr.vultr_dns_domain_info vultr_firewall_group_facts: redirect: ngine_io.vultr.vultr_firewall_group_info vultr_network_facts: redirect: ngine_io.vultr.vultr_network_info vultr_os_facts: redirect: ngine_io.vultr.vultr_os_info vultr_plan_facts: redirect: ngine_io.vultr.vultr_plan_info vultr_region_facts: redirect: ngine_io.vultr.vultr_region_info vultr_server_facts: redirect: ngine_io.vultr.vultr_server_info vultr_ssh_key_facts: redirect: ngine_io.vultr.vultr_ssh_key_info vultr_startup_script_facts: redirect: ngine_io.vultr.vultr_startup_script_info vultr_user_facts: redirect: ngine_io.vultr.vultr_user_info vultr_account_info: redirect: ngine_io.vultr.vultr_account_info vultr_block_storage: redirect: ngine_io.vultr.vultr_block_storage vultr_block_storage_info: redirect: ngine_io.vultr.vultr_block_storage_info vultr_dns_domain: redirect: ngine_io.vultr.vultr_dns_domain vultr_dns_domain_info: redirect: ngine_io.vultr.vultr_dns_domain_info vultr_dns_record: redirect: ngine_io.vultr.vultr_dns_record vultr_firewall_group: redirect: ngine_io.vultr.vultr_firewall_group vultr_firewall_group_info: redirect: ngine_io.vultr.vultr_firewall_group_info vultr_firewall_rule: redirect: 
ngine_io.vultr.vultr_firewall_rule vultr_network: redirect: ngine_io.vultr.vultr_network vultr_network_info: redirect: ngine_io.vultr.vultr_network_info vultr_os_info: redirect: ngine_io.vultr.vultr_os_info vultr_plan_info: redirect: ngine_io.vultr.vultr_plan_info vultr_region_info: redirect: ngine_io.vultr.vultr_region_info vultr_server: redirect: ngine_io.vultr.vultr_server vultr_server_info: redirect: ngine_io.vultr.vultr_server_info vultr_ssh_key: redirect: ngine_io.vultr.vultr_ssh_key vultr_ssh_key_info: redirect: ngine_io.vultr.vultr_ssh_key_info vultr_startup_script: redirect: ngine_io.vultr.vultr_startup_script vultr_startup_script_info: redirect: ngine_io.vultr.vultr_startup_script_info vultr_user: redirect: ngine_io.vultr.vultr_user vultr_user_info: redirect: ngine_io.vultr.vultr_user_info webfaction_app: redirect: community.general.webfaction_app webfaction_db: redirect: community.general.webfaction_db webfaction_domain: redirect: community.general.webfaction_domain webfaction_mailbox: redirect: community.general.webfaction_mailbox webfaction_site: redirect: community.general.webfaction_site xenserver_guest_facts: redirect: community.general.xenserver_guest_facts xenserver_guest: redirect: community.general.xenserver_guest xenserver_guest_info: redirect: community.general.xenserver_guest_info xenserver_guest_powerstate: redirect: community.general.xenserver_guest_powerstate consul: redirect: community.general.consul consul_acl: redirect: community.general.consul_acl consul_kv: redirect: community.general.consul_kv consul_session: redirect: community.general.consul_session etcd3: redirect: community.general.etcd3 pacemaker_cluster: redirect: community.general.pacemaker_cluster znode: redirect: community.general.znode aerospike_migrations: redirect: community.general.aerospike_migrations influxdb_database: redirect: community.general.influxdb_database influxdb_query: redirect: community.general.influxdb_query influxdb_retention_policy: redirect: community.general.influxdb_retention_policy influxdb_user: redirect: community.general.influxdb_user influxdb_write: redirect: community.general.influxdb_write elasticsearch_plugin: redirect: community.general.elasticsearch_plugin kibana_plugin: redirect: community.general.kibana_plugin redis: redirect: community.general.redis riak: redirect: community.general.riak mssql_db: redirect: community.general.mssql_db mysql_db: redirect: community.mysql.mysql_db mysql_info: redirect: community.mysql.mysql_info mysql_query: redirect: community.mysql.mysql_query mysql_replication: redirect: community.mysql.mysql_replication mysql_user: redirect: community.mysql.mysql_user mysql_variables: redirect: community.mysql.mysql_variables postgresql_copy: redirect: community.postgresql.postgresql_copy postgresql_db: redirect: community.postgresql.postgresql_db postgresql_ext: redirect: community.postgresql.postgresql_ext postgresql_idx: redirect: community.postgresql.postgresql_idx postgresql_info: redirect: community.postgresql.postgresql_info postgresql_lang: redirect: community.postgresql.postgresql_lang postgresql_membership: redirect: community.postgresql.postgresql_membership postgresql_owner: redirect: community.postgresql.postgresql_owner postgresql_pg_hba: redirect: community.postgresql.postgresql_pg_hba postgresql_ping: redirect: community.postgresql.postgresql_ping postgresql_privs: redirect: community.postgresql.postgresql_privs postgresql_publication: redirect: community.postgresql.postgresql_publication postgresql_query: redirect: 
community.postgresql.postgresql_query postgresql_schema: redirect: community.postgresql.postgresql_schema postgresql_sequence: redirect: community.postgresql.postgresql_sequence postgresql_set: redirect: community.postgresql.postgresql_set postgresql_slot: redirect: community.postgresql.postgresql_slot postgresql_subscription: redirect: community.postgresql.postgresql_subscription postgresql_table: redirect: community.postgresql.postgresql_table postgresql_tablespace: redirect: community.postgresql.postgresql_tablespace postgresql_user: redirect: community.postgresql.postgresql_user postgresql_user_obj_stat_info: redirect: community.postgresql.postgresql_user_obj_stat_info proxysql_backend_servers: redirect: community.proxysql.proxysql_backend_servers proxysql_global_variables: redirect: community.proxysql.proxysql_global_variables proxysql_manage_config: redirect: community.proxysql.proxysql_manage_config proxysql_mysql_users: redirect: community.proxysql.proxysql_mysql_users proxysql_query_rules: redirect: community.proxysql.proxysql_query_rules proxysql_replication_hostgroups: redirect: community.proxysql.proxysql_replication_hostgroups proxysql_scheduler: redirect: community.proxysql.proxysql_scheduler vertica_facts: redirect: community.general.vertica_facts vertica_configuration: redirect: community.general.vertica_configuration vertica_info: redirect: community.general.vertica_info vertica_role: redirect: community.general.vertica_role vertica_schema: redirect: community.general.vertica_schema vertica_user: redirect: community.general.vertica_user archive: redirect: community.general.archive ini_file: redirect: community.general.ini_file iso_extract: redirect: community.general.iso_extract patch: redirect: ansible.posix.patch read_csv: redirect: community.general.read_csv xattr: redirect: community.general.xattr xml: redirect: community.general.xml onepassword_facts: redirect: community.general.onepassword_facts ipa_config: redirect: community.general.ipa_config ipa_dnsrecord: redirect: community.general.ipa_dnsrecord ipa_dnszone: redirect: community.general.ipa_dnszone ipa_group: redirect: community.general.ipa_group ipa_hbacrule: redirect: community.general.ipa_hbacrule ipa_host: redirect: community.general.ipa_host ipa_hostgroup: redirect: community.general.ipa_hostgroup ipa_role: redirect: community.general.ipa_role ipa_service: redirect: community.general.ipa_service ipa_subca: redirect: community.general.ipa_subca ipa_sudocmd: redirect: community.general.ipa_sudocmd ipa_sudocmdgroup: redirect: community.general.ipa_sudocmdgroup ipa_sudorule: redirect: community.general.ipa_sudorule ipa_user: redirect: community.general.ipa_user ipa_vault: redirect: community.general.ipa_vault keycloak_client: redirect: community.general.keycloak_client keycloak_clienttemplate: redirect: community.general.keycloak_clienttemplate keycloak_group: redirect: community.general.keycloak_group onepassword_info: redirect: community.general.onepassword_info opendj_backendprop: redirect: community.general.opendj_backendprop rabbitmq_binding: redirect: community.rabbitmq.rabbitmq_binding rabbitmq_exchange: redirect: community.rabbitmq.rabbitmq_exchange rabbitmq_global_parameter: redirect: community.rabbitmq.rabbitmq_global_parameter rabbitmq_parameter: redirect: community.rabbitmq.rabbitmq_parameter rabbitmq_plugin: redirect: community.rabbitmq.rabbitmq_plugin rabbitmq_policy: redirect: community.rabbitmq.rabbitmq_policy rabbitmq_queue: redirect: community.rabbitmq.rabbitmq_queue rabbitmq_user: redirect: 
community.rabbitmq.rabbitmq_user rabbitmq_vhost: redirect: community.rabbitmq.rabbitmq_vhost rabbitmq_vhost_limits: redirect: community.rabbitmq.rabbitmq_vhost_limits airbrake_deployment: redirect: community.general.airbrake_deployment bigpanda: redirect: community.general.bigpanda circonus_annotation: redirect: community.general.circonus_annotation datadog_event: redirect: community.general.datadog_event datadog_monitor: redirect: community.general.datadog_monitor honeybadger_deployment: redirect: community.general.honeybadger_deployment icinga2_feature: redirect: community.general.icinga2_feature icinga2_host: redirect: community.general.icinga2_host librato_annotation: redirect: community.general.librato_annotation logentries: redirect: community.general.logentries logicmonitor: redirect: community.general.logicmonitor logicmonitor_facts: redirect: community.general.logicmonitor_facts logstash_plugin: redirect: community.general.logstash_plugin monit: redirect: community.general.monit nagios: redirect: community.general.nagios newrelic_deployment: redirect: community.general.newrelic_deployment pagerduty: redirect: community.general.pagerduty pagerduty_alert: redirect: community.general.pagerduty_alert pingdom: redirect: community.general.pingdom rollbar_deployment: redirect: community.general.rollbar_deployment sensu_check: redirect: community.general.sensu_check sensu_client: redirect: community.general.sensu_client sensu_handler: redirect: community.general.sensu_handler sensu_silence: redirect: community.general.sensu_silence sensu_subscription: redirect: community.general.sensu_subscription spectrum_device: redirect: community.general.spectrum_device stackdriver: redirect: community.general.stackdriver statusio_maintenance: redirect: community.general.statusio_maintenance uptimerobot: redirect: community.general.uptimerobot zabbix_group_facts: redirect: community.zabbix.zabbix_group_facts zabbix_host_facts: redirect: community.zabbix.zabbix_host_facts zabbix_action: redirect: community.zabbix.zabbix_action zabbix_group: redirect: community.zabbix.zabbix_group zabbix_group_info: redirect: community.zabbix.zabbix_group_info zabbix_host: redirect: community.zabbix.zabbix_host zabbix_host_events_info: redirect: community.zabbix.zabbix_host_events_info zabbix_host_info: redirect: community.zabbix.zabbix_host_info zabbix_hostmacro: redirect: community.zabbix.zabbix_hostmacro zabbix_maintenance: redirect: community.zabbix.zabbix_maintenance zabbix_map: redirect: community.zabbix.zabbix_map zabbix_mediatype: redirect: community.zabbix.zabbix_mediatype zabbix_proxy: redirect: community.zabbix.zabbix_proxy zabbix_screen: redirect: community.zabbix.zabbix_screen zabbix_service: redirect: community.zabbix.zabbix_service zabbix_template: redirect: community.zabbix.zabbix_template zabbix_template_info: redirect: community.zabbix.zabbix_template_info zabbix_user: redirect: community.zabbix.zabbix_user zabbix_user_info: redirect: community.zabbix.zabbix_user_info zabbix_valuemap: redirect: community.zabbix.zabbix_valuemap cloudflare_dns: redirect: community.general.cloudflare_dns dnsimple: redirect: community.general.dnsimple dnsmadeeasy: redirect: community.general.dnsmadeeasy exo_dns_domain: redirect: ngine_io.exoscale.exo_dns_domain exo_dns_record: redirect: ngine_io.exoscale.exo_dns_record haproxy: redirect: community.general.haproxy hetzner_failover_ip: redirect: community.hrobot.failover_ip hetzner_failover_ip_info: redirect: community.hrobot.failover_ip_info hetzner_firewall: redirect: 
community.hrobot.firewall hetzner_firewall_info: redirect: community.hrobot.firewall_info infinity: redirect: community.general.infinity ip_netns: redirect: community.general.ip_netns ipify_facts: redirect: community.general.ipify_facts ipinfoio_facts: redirect: community.general.ipinfoio_facts ipwcli_dns: redirect: community.general.ipwcli_dns ldap_attr: redirect: community.general.ldap_attr ldap_attrs: redirect: community.general.ldap_attrs ldap_entry: redirect: community.general.ldap_entry ldap_passwd: redirect: community.general.ldap_passwd lldp: redirect: community.general.lldp netcup_dns: redirect: community.general.netcup_dns nios_a_record: redirect: community.general.nios_a_record nios_aaaa_record: redirect: community.general.nios_aaaa_record nios_cname_record: redirect: community.general.nios_cname_record nios_dns_view: redirect: community.general.nios_dns_view nios_fixed_address: redirect: community.general.nios_fixed_address nios_host_record: redirect: community.general.nios_host_record nios_member: redirect: community.general.nios_member nios_mx_record: redirect: community.general.nios_mx_record nios_naptr_record: redirect: community.general.nios_naptr_record nios_network: redirect: community.general.nios_network nios_network_view: redirect: community.general.nios_network_view nios_nsgroup: redirect: community.general.nios_nsgroup nios_ptr_record: redirect: community.general.nios_ptr_record nios_srv_record: redirect: community.general.nios_srv_record nios_txt_record: redirect: community.general.nios_txt_record nios_zone: redirect: community.general.nios_zone nmcli: redirect: community.general.nmcli nsupdate: redirect: community.general.nsupdate omapi_host: redirect: community.general.omapi_host snmp_facts: redirect: community.general.snmp_facts a10_server: redirect: community.network.a10_server a10_server_axapi3: redirect: community.network.a10_server_axapi3 a10_service_group: redirect: community.network.a10_service_group a10_virtual_server: redirect: community.network.a10_virtual_server aci_intf_policy_fc: redirect: cisco.aci.aci_interface_policy_fc aci_intf_policy_l2: redirect: cisco.aci.aci_interface_policy_l2 aci_intf_policy_lldp: redirect: cisco.aci.aci_interface_policy_lldp aci_intf_policy_mcp: redirect: cisco.aci.aci_interface_policy_mcp aci_intf_policy_port_channel: redirect: cisco.aci.aci_interface_policy_port_channel aci_intf_policy_port_security: redirect: cisco.aci.aci_interface_policy_port_security mso_schema_template_external_epg_contract: redirect: cisco.mso.mso_schema_template_external_epg_contract mso_schema_template_external_epg_subnet: redirect: cisco.mso.mso_schema_template_external_epg_subnet aireos_command: redirect: community.network.aireos_command aireos_config: redirect: community.network.aireos_config apconos_command: redirect: community.network.apconos_command aruba_command: redirect: community.network.aruba_command aruba_config: redirect: community.network.aruba_config avi_actiongroupconfig: redirect: community.network.avi_actiongroupconfig avi_alertconfig: redirect: community.network.avi_alertconfig avi_alertemailconfig: redirect: community.network.avi_alertemailconfig avi_alertscriptconfig: redirect: community.network.avi_alertscriptconfig avi_alertsyslogconfig: redirect: community.network.avi_alertsyslogconfig avi_analyticsprofile: redirect: community.network.avi_analyticsprofile avi_api_session: redirect: community.network.avi_api_session avi_api_version: redirect: community.network.avi_api_version avi_applicationpersistenceprofile: redirect: 
community.network.avi_applicationpersistenceprofile avi_applicationprofile: redirect: community.network.avi_applicationprofile avi_authprofile: redirect: community.network.avi_authprofile avi_autoscalelaunchconfig: redirect: community.network.avi_autoscalelaunchconfig avi_backup: redirect: community.network.avi_backup avi_backupconfiguration: redirect: community.network.avi_backupconfiguration avi_certificatemanagementprofile: redirect: community.network.avi_certificatemanagementprofile avi_cloud: redirect: community.network.avi_cloud avi_cloudconnectoruser: redirect: community.network.avi_cloudconnectoruser avi_cloudproperties: redirect: community.network.avi_cloudproperties avi_cluster: redirect: community.network.avi_cluster avi_clusterclouddetails: redirect: community.network.avi_clusterclouddetails avi_controllerproperties: redirect: community.network.avi_controllerproperties avi_customipamdnsprofile: redirect: community.network.avi_customipamdnsprofile avi_dnspolicy: redirect: community.network.avi_dnspolicy avi_errorpagebody: redirect: community.network.avi_errorpagebody avi_errorpageprofile: redirect: community.network.avi_errorpageprofile avi_gslb: redirect: community.network.avi_gslb avi_gslbgeodbprofile: redirect: community.network.avi_gslbgeodbprofile avi_gslbservice: redirect: community.network.avi_gslbservice avi_gslbservice_patch_member: redirect: community.network.avi_gslbservice_patch_member avi_hardwaresecuritymodulegroup: redirect: community.network.avi_hardwaresecuritymodulegroup avi_healthmonitor: redirect: community.network.avi_healthmonitor avi_httppolicyset: redirect: community.network.avi_httppolicyset avi_ipaddrgroup: redirect: community.network.avi_ipaddrgroup avi_ipamdnsproviderprofile: redirect: community.network.avi_ipamdnsproviderprofile avi_l4policyset: redirect: community.network.avi_l4policyset avi_microservicegroup: redirect: community.network.avi_microservicegroup avi_network: redirect: community.network.avi_network avi_networkprofile: redirect: community.network.avi_networkprofile avi_networksecuritypolicy: redirect: community.network.avi_networksecuritypolicy avi_pkiprofile: redirect: community.network.avi_pkiprofile avi_pool: redirect: community.network.avi_pool avi_poolgroup: redirect: community.network.avi_poolgroup avi_poolgroupdeploymentpolicy: redirect: community.network.avi_poolgroupdeploymentpolicy avi_prioritylabels: redirect: community.network.avi_prioritylabels avi_role: redirect: community.network.avi_role avi_scheduler: redirect: community.network.avi_scheduler avi_seproperties: redirect: community.network.avi_seproperties avi_serverautoscalepolicy: redirect: community.network.avi_serverautoscalepolicy avi_serviceengine: redirect: community.network.avi_serviceengine avi_serviceenginegroup: redirect: community.network.avi_serviceenginegroup avi_snmptrapprofile: redirect: community.network.avi_snmptrapprofile avi_sslkeyandcertificate: redirect: community.network.avi_sslkeyandcertificate avi_sslprofile: redirect: community.network.avi_sslprofile avi_stringgroup: redirect: community.network.avi_stringgroup avi_systemconfiguration: redirect: community.network.avi_systemconfiguration avi_tenant: redirect: community.network.avi_tenant avi_trafficcloneprofile: redirect: community.network.avi_trafficcloneprofile avi_user: redirect: community.network.avi_user avi_useraccount: redirect: community.network.avi_useraccount avi_useraccountprofile: redirect: community.network.avi_useraccountprofile avi_virtualservice: redirect: 
community.network.avi_virtualservice avi_vrfcontext: redirect: community.network.avi_vrfcontext avi_vsdatascriptset: redirect: community.network.avi_vsdatascriptset avi_vsvip: redirect: community.network.avi_vsvip avi_webhook: redirect: community.network.avi_webhook bcf_switch: redirect: community.network.bcf_switch bigmon_chain: redirect: community.network.bigmon_chain bigmon_policy: redirect: community.network.bigmon_policy checkpoint_access_layer_facts: redirect: check_point.mgmt.checkpoint_access_layer_facts checkpoint_access_rule: redirect: check_point.mgmt.checkpoint_access_rule checkpoint_access_rule_facts: redirect: check_point.mgmt.checkpoint_access_rule_facts checkpoint_host: redirect: check_point.mgmt.checkpoint_host checkpoint_host_facts: redirect: check_point.mgmt.checkpoint_host_facts checkpoint_object_facts: redirect: check_point.mgmt.checkpoint_object_facts checkpoint_run_script: redirect: check_point.mgmt.checkpoint_run_script checkpoint_session: redirect: check_point.mgmt.checkpoint_session checkpoint_task_facts: redirect: check_point.mgmt.checkpoint_task_facts cp_publish: redirect: community.network.cp_publish ce_aaa_server: redirect: community.network.ce_aaa_server ce_aaa_server_host: redirect: community.network.ce_aaa_server_host ce_acl: redirect: community.network.ce_acl ce_acl_advance: redirect: community.network.ce_acl_advance ce_acl_interface: redirect: community.network.ce_acl_interface ce_bfd_global: redirect: community.network.ce_bfd_global ce_bfd_session: redirect: community.network.ce_bfd_session ce_bfd_view: redirect: community.network.ce_bfd_view ce_bgp: redirect: community.network.ce_bgp ce_bgp_af: redirect: community.network.ce_bgp_af ce_bgp_neighbor: redirect: community.network.ce_bgp_neighbor ce_bgp_neighbor_af: redirect: community.network.ce_bgp_neighbor_af ce_command: redirect: community.network.ce_command ce_config: redirect: community.network.ce_config ce_dldp: redirect: community.network.ce_dldp ce_dldp_interface: redirect: community.network.ce_dldp_interface ce_eth_trunk: redirect: community.network.ce_eth_trunk ce_evpn_bd_vni: redirect: community.network.ce_evpn_bd_vni ce_evpn_bgp: redirect: community.network.ce_evpn_bgp ce_evpn_bgp_rr: redirect: community.network.ce_evpn_bgp_rr ce_evpn_global: redirect: community.network.ce_evpn_global ce_facts: redirect: community.network.ce_facts ce_file_copy: redirect: community.network.ce_file_copy ce_info_center_debug: redirect: community.network.ce_info_center_debug ce_info_center_global: redirect: community.network.ce_info_center_global ce_info_center_log: redirect: community.network.ce_info_center_log ce_info_center_trap: redirect: community.network.ce_info_center_trap ce_interface: redirect: community.network.ce_interface ce_interface_ospf: redirect: community.network.ce_interface_ospf ce_ip_interface: redirect: community.network.ce_ip_interface ce_is_is_instance: redirect: community.network.ce_is_is_instance ce_is_is_interface: redirect: community.network.ce_is_is_interface ce_is_is_view: redirect: community.network.ce_is_is_view ce_lacp: redirect: community.network.ce_lacp ce_link_status: redirect: community.network.ce_link_status ce_lldp: redirect: community.network.ce_lldp ce_lldp_interface: redirect: community.network.ce_lldp_interface ce_mdn_interface: redirect: community.network.ce_mdn_interface ce_mlag_config: redirect: community.network.ce_mlag_config ce_mlag_interface: redirect: community.network.ce_mlag_interface ce_mtu: redirect: community.network.ce_mtu ce_multicast_global: redirect: 
community.network.ce_multicast_global ce_multicast_igmp_enable: redirect: community.network.ce_multicast_igmp_enable ce_netconf: redirect: community.network.ce_netconf ce_netstream_aging: redirect: community.network.ce_netstream_aging ce_netstream_export: redirect: community.network.ce_netstream_export ce_netstream_global: redirect: community.network.ce_netstream_global ce_netstream_template: redirect: community.network.ce_netstream_template ce_ntp: redirect: community.network.ce_ntp ce_ntp_auth: redirect: community.network.ce_ntp_auth ce_ospf: redirect: community.network.ce_ospf ce_ospf_vrf: redirect: community.network.ce_ospf_vrf ce_reboot: redirect: community.network.ce_reboot ce_rollback: redirect: community.network.ce_rollback ce_sflow: redirect: community.network.ce_sflow ce_snmp_community: redirect: community.network.ce_snmp_community ce_snmp_contact: redirect: community.network.ce_snmp_contact ce_snmp_location: redirect: community.network.ce_snmp_location ce_snmp_target_host: redirect: community.network.ce_snmp_target_host ce_snmp_traps: redirect: community.network.ce_snmp_traps ce_snmp_user: redirect: community.network.ce_snmp_user ce_startup: redirect: community.network.ce_startup ce_static_route: redirect: community.network.ce_static_route ce_static_route_bfd: redirect: community.network.ce_static_route_bfd ce_stp: redirect: community.network.ce_stp ce_switchport: redirect: community.network.ce_switchport ce_vlan: redirect: community.network.ce_vlan ce_vrf: redirect: community.network.ce_vrf ce_vrf_af: redirect: community.network.ce_vrf_af ce_vrf_interface: redirect: community.network.ce_vrf_interface ce_vrrp: redirect: community.network.ce_vrrp ce_vxlan_arp: redirect: community.network.ce_vxlan_arp ce_vxlan_gateway: redirect: community.network.ce_vxlan_gateway ce_vxlan_global: redirect: community.network.ce_vxlan_global ce_vxlan_tunnel: redirect: community.network.ce_vxlan_tunnel ce_vxlan_vap: redirect: community.network.ce_vxlan_vap cv_server_provision: redirect: community.network.cv_server_provision cnos_backup: redirect: community.network.cnos_backup cnos_banner: redirect: community.network.cnos_banner cnos_bgp: redirect: community.network.cnos_bgp cnos_command: redirect: community.network.cnos_command cnos_conditional_command: redirect: community.network.cnos_conditional_command cnos_conditional_template: redirect: community.network.cnos_conditional_template cnos_config: redirect: community.network.cnos_config cnos_factory: redirect: community.network.cnos_factory cnos_facts: redirect: community.network.cnos_facts cnos_image: redirect: community.network.cnos_image cnos_interface: redirect: community.network.cnos_interface cnos_l2_interface: redirect: community.network.cnos_l2_interface cnos_l3_interface: redirect: community.network.cnos_l3_interface cnos_linkagg: redirect: community.network.cnos_linkagg cnos_lldp: redirect: community.network.cnos_lldp cnos_logging: redirect: community.network.cnos_logging cnos_reload: redirect: community.network.cnos_reload cnos_rollback: redirect: community.network.cnos_rollback cnos_save: redirect: community.network.cnos_save cnos_showrun: redirect: community.network.cnos_showrun cnos_static_route: redirect: community.network.cnos_static_route cnos_system: redirect: community.network.cnos_system cnos_template: redirect: community.network.cnos_template cnos_user: redirect: community.network.cnos_user cnos_vlag: redirect: community.network.cnos_vlag cnos_vlan: redirect: community.network.cnos_vlan cnos_vrf: redirect: 
community.network.cnos_vrf nclu: redirect: community.network.nclu edgeos_command: redirect: community.network.edgeos_command edgeos_config: redirect: community.network.edgeos_config edgeos_facts: redirect: community.network.edgeos_facts edgeswitch_facts: redirect: community.network.edgeswitch_facts edgeswitch_vlan: redirect: community.network.edgeswitch_vlan enos_command: redirect: community.network.enos_command enos_config: redirect: community.network.enos_config enos_facts: redirect: community.network.enos_facts eric_eccli_command: redirect: community.network.eric_eccli_command exos_command: redirect: community.network.exos_command exos_config: redirect: community.network.exos_config exos_facts: redirect: community.network.exos_facts exos_l2_interfaces: redirect: community.network.exos_l2_interfaces exos_lldp_global: redirect: community.network.exos_lldp_global exos_lldp_interfaces: redirect: community.network.exos_lldp_interfaces exos_vlans: redirect: community.network.exos_vlans
bigip_asm_policy:
  tombstone:
    removal_date: "2019-11-06"
    warning_text: bigip_asm_policy has been removed; please use bigip_asm_policy_manage instead.
bigip_device_facts: redirect: f5networks.f5_modules.bigip_device_info bigip_iapplx_package: redirect: f5networks.f5_modules.bigip_lx_package bigip_security_address_list: redirect: f5networks.f5_modules.bigip_firewall_address_list bigip_security_port_list: redirect: f5networks.f5_modules.bigip_firewall_port_list bigip_traffic_group: redirect: f5networks.f5_modules.bigip_device_traffic_group
bigip_facts:
  tombstone:
    removal_date: "2019-11-06"
    warning_text: bigip_facts has been removed; please use the bigip_device_info module instead.
bigip_gtm_facts:
  tombstone:
    removal_date: "2019-11-06"
    warning_text: bigip_gtm_facts has been removed; please use the bigip_device_info module instead.
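# Unlike a redirect, a tombstone is terminal: the module was removed on the
# recorded removal_date and has no routing target, so a play that still
# references the name fails at resolution time with an error that includes
# the warning_text. Redirects keep old content working; tombstones fail
# fast with guidance. A hedged sketch of a task that can no longer run
# (the parameter is illustrative only):
#
#   - name: Fails at resolution; Ansible reports the warning_text above
#     bigip_facts:
#       server: lb.example.com    # illustrative; this module no longer exists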
faz_device: redirect: community.fortios.faz_device fmgr_device: redirect: community.fortios.fmgr_device fmgr_device_config: redirect: community.fortios.fmgr_device_config fmgr_device_group: redirect: community.fortios.fmgr_device_group fmgr_device_provision_template: redirect: community.fortios.fmgr_device_provision_template fmgr_fwobj_address: redirect: community.fortios.fmgr_fwobj_address fmgr_fwobj_ippool: redirect: community.fortios.fmgr_fwobj_ippool fmgr_fwobj_ippool6: redirect: community.fortios.fmgr_fwobj_ippool6 fmgr_fwobj_service: redirect: community.fortios.fmgr_fwobj_service fmgr_fwobj_vip: redirect: community.fortios.fmgr_fwobj_vip fmgr_fwpol_ipv4: redirect: community.fortios.fmgr_fwpol_ipv4 fmgr_fwpol_package: redirect: community.fortios.fmgr_fwpol_package fmgr_ha: redirect: community.fortios.fmgr_ha fmgr_provisioning: redirect: community.fortios.fmgr_provisioning fmgr_query: redirect: community.fortios.fmgr_query fmgr_script: redirect: community.fortios.fmgr_script fmgr_secprof_appctrl: redirect: community.fortios.fmgr_secprof_appctrl fmgr_secprof_av: redirect: community.fortios.fmgr_secprof_av fmgr_secprof_dns: redirect: community.fortios.fmgr_secprof_dns fmgr_secprof_ips: redirect: community.fortios.fmgr_secprof_ips fmgr_secprof_profile_group: redirect: community.fortios.fmgr_secprof_profile_group fmgr_secprof_proxy: redirect: community.fortios.fmgr_secprof_proxy fmgr_secprof_spam: redirect: community.fortios.fmgr_secprof_spam fmgr_secprof_ssl_ssh: redirect: community.fortios.fmgr_secprof_ssl_ssh fmgr_secprof_voip: redirect: community.fortios.fmgr_secprof_voip fmgr_secprof_waf: redirect: community.fortios.fmgr_secprof_waf fmgr_secprof_wanopt: redirect: community.fortios.fmgr_secprof_wanopt fmgr_secprof_web: redirect: community.fortios.fmgr_secprof_web ftd_configuration: redirect: community.network.ftd_configuration ftd_file_download: redirect: community.network.ftd_file_download ftd_file_upload: redirect: community.network.ftd_file_upload ftd_install: redirect: community.network.ftd_install icx_banner: redirect: community.network.icx_banner icx_command: redirect: community.network.icx_command icx_config: redirect: community.network.icx_config icx_copy: redirect: community.network.icx_copy icx_facts: redirect: community.network.icx_facts icx_interface: redirect: community.network.icx_interface icx_l3_interface: redirect: community.network.icx_l3_interface icx_linkagg: redirect: community.network.icx_linkagg icx_lldp: redirect: community.network.icx_lldp icx_logging: redirect: community.network.icx_logging icx_ping: redirect: community.network.icx_ping icx_static_route: redirect: community.network.icx_static_route icx_system: redirect: community.network.icx_system icx_user: redirect: community.network.icx_user icx_vlan: redirect: community.network.icx_vlan dladm_etherstub: redirect: community.network.dladm_etherstub dladm_iptun: redirect: community.network.dladm_iptun dladm_linkprop: redirect: community.network.dladm_linkprop dladm_vlan: redirect: community.network.dladm_vlan dladm_vnic: redirect: community.network.dladm_vnic flowadm: redirect: community.network.flowadm ipadm_addr: redirect: community.network.ipadm_addr ipadm_addrprop: redirect: community.network.ipadm_addrprop ipadm_if: redirect: community.network.ipadm_if ipadm_ifprop: redirect: community.network.ipadm_ifprop ipadm_prop: redirect: community.network.ipadm_prop ig_config: redirect: community.network.ig_config ig_unit_information: redirect: community.network.ig_unit_information ironware_command: redirect: 
community.network.ironware_command ironware_config: redirect: community.network.ironware_config ironware_facts: redirect: community.network.ironware_facts iap_start_workflow: redirect: community.network.iap_start_workflow iap_token: redirect: community.network.iap_token netact_cm_command: redirect: community.network.netact_cm_command netscaler_cs_action: redirect: community.network.netscaler_cs_action netscaler_cs_policy: redirect: community.network.netscaler_cs_policy netscaler_cs_vserver: redirect: community.network.netscaler_cs_vserver netscaler_gslb_service: redirect: community.network.netscaler_gslb_service netscaler_gslb_site: redirect: community.network.netscaler_gslb_site netscaler_gslb_vserver: redirect: community.network.netscaler_gslb_vserver netscaler_lb_monitor: redirect: community.network.netscaler_lb_monitor netscaler_lb_vserver: redirect: community.network.netscaler_lb_vserver netscaler_nitro_request: redirect: community.network.netscaler_nitro_request netscaler_save_config: redirect: community.network.netscaler_save_config netscaler_server: redirect: community.network.netscaler_server netscaler_service: redirect: community.network.netscaler_service netscaler_servicegroup: redirect: community.network.netscaler_servicegroup netscaler_ssl_certkey: redirect: community.network.netscaler_ssl_certkey pn_cluster: redirect: community.network.pn_cluster pn_ospf: redirect: community.network.pn_ospf pn_ospfarea: redirect: community.network.pn_ospfarea pn_show: redirect: community.network.pn_show pn_trunk: redirect: community.network.pn_trunk pn_vlag: redirect: community.network.pn_vlag pn_vlan: redirect: community.network.pn_vlan pn_vrouter: redirect: community.network.pn_vrouter pn_vrouterbgp: redirect: community.network.pn_vrouterbgp pn_vrouterif: redirect: community.network.pn_vrouterif pn_vrouterlbif: redirect: community.network.pn_vrouterlbif pn_access_list: redirect: community.network.pn_access_list pn_access_list_ip: redirect: community.network.pn_access_list_ip pn_admin_service: redirect: community.network.pn_admin_service pn_admin_session_timeout: redirect: community.network.pn_admin_session_timeout pn_admin_syslog: redirect: community.network.pn_admin_syslog pn_connection_stats_settings: redirect: community.network.pn_connection_stats_settings pn_cpu_class: redirect: community.network.pn_cpu_class pn_cpu_mgmt_class: redirect: community.network.pn_cpu_mgmt_class pn_dhcp_filter: redirect: community.network.pn_dhcp_filter pn_dscp_map: redirect: community.network.pn_dscp_map pn_dscp_map_pri_map: redirect: community.network.pn_dscp_map_pri_map pn_fabric_local: redirect: community.network.pn_fabric_local pn_igmp_snooping: redirect: community.network.pn_igmp_snooping pn_ipv6security_raguard: redirect: community.network.pn_ipv6security_raguard pn_ipv6security_raguard_port: redirect: community.network.pn_ipv6security_raguard_port pn_ipv6security_raguard_vlan: redirect: community.network.pn_ipv6security_raguard_vlan pn_log_audit_exception: redirect: community.network.pn_log_audit_exception pn_port_config: redirect: community.network.pn_port_config pn_port_cos_bw: redirect: community.network.pn_port_cos_bw pn_port_cos_rate_setting: redirect: community.network.pn_port_cos_rate_setting pn_prefix_list: redirect: community.network.pn_prefix_list pn_prefix_list_network: redirect: community.network.pn_prefix_list_network pn_role: redirect: community.network.pn_role pn_snmp_community: redirect: community.network.pn_snmp_community pn_snmp_trap_sink: redirect: 
    pn_snmp_vacm:
      redirect: community.network.pn_snmp_vacm
    pn_stp:
      redirect: community.network.pn_stp
    pn_stp_port:
      redirect: community.network.pn_stp_port
    pn_switch_setup:
      redirect: community.network.pn_switch_setup
    pn_user:
      redirect: community.network.pn_user
    pn_vflow_table_profile:
      redirect: community.network.pn_vflow_table_profile
    pn_vrouter_bgp:
      redirect: community.network.pn_vrouter_bgp
    pn_vrouter_bgp_network:
      redirect: community.network.pn_vrouter_bgp_network
    pn_vrouter_interface_ip:
      redirect: community.network.pn_vrouter_interface_ip
    pn_vrouter_loopback_interface:
      redirect: community.network.pn_vrouter_loopback_interface
    pn_vrouter_ospf:
      redirect: community.network.pn_vrouter_ospf
    pn_vrouter_ospf6:
      redirect: community.network.pn_vrouter_ospf6
    pn_vrouter_packet_relay:
      redirect: community.network.pn_vrouter_packet_relay
    pn_vrouter_pim_config:
      redirect: community.network.pn_vrouter_pim_config
    pn_vtep:
      redirect: community.network.pn_vtep
    nos_command:
      redirect: community.network.nos_command
    nos_config:
      redirect: community.network.nos_config
    nos_facts:
      redirect: community.network.nos_facts
    nso_action:
      redirect: cisco.nso.nso_action
    nso_config:
      redirect: cisco.nso.nso_config
    nso_query:
      redirect: cisco.nso.nso_query
    nso_show:
      redirect: cisco.nso.nso_show
    nso_verify:
      redirect: cisco.nso.nso_verify
    nuage_vspk:
      redirect: community.network.nuage_vspk
    onyx_aaa:
      redirect: mellanox.onyx.onyx_aaa
    onyx_bfd:
      redirect: mellanox.onyx.onyx_bfd
    onyx_bgp:
      redirect: mellanox.onyx.onyx_bgp
    onyx_buffer_pool:
      redirect: mellanox.onyx.onyx_buffer_pool
    onyx_command:
      redirect: mellanox.onyx.onyx_command
    onyx_config:
      redirect: mellanox.onyx.onyx_config
    onyx_facts:
      redirect: mellanox.onyx.onyx_facts
    onyx_igmp:
      redirect: mellanox.onyx.onyx_igmp
    onyx_igmp_interface:
      redirect: mellanox.onyx.onyx_igmp_interface
    onyx_igmp_vlan:
      redirect: mellanox.onyx.onyx_igmp_vlan
    onyx_interface:
      redirect: mellanox.onyx.onyx_interface
    onyx_l2_interface:
      redirect: mellanox.onyx.onyx_l2_interface
    onyx_l3_interface:
      redirect: mellanox.onyx.onyx_l3_interface
    onyx_linkagg:
      redirect: mellanox.onyx.onyx_linkagg
    onyx_lldp:
      redirect: mellanox.onyx.onyx_lldp
    onyx_lldp_interface:
      redirect: mellanox.onyx.onyx_lldp_interface
    onyx_magp:
      redirect: mellanox.onyx.onyx_magp
    onyx_mlag_ipl:
      redirect: mellanox.onyx.onyx_mlag_ipl
    onyx_mlag_vip:
      redirect: mellanox.onyx.onyx_mlag_vip
    onyx_ntp:
      redirect: mellanox.onyx.onyx_ntp
    onyx_ntp_servers_peers:
      redirect: mellanox.onyx.onyx_ntp_servers_peers
    onyx_ospf:
      redirect: mellanox.onyx.onyx_ospf
    onyx_pfc_interface:
      redirect: mellanox.onyx.onyx_pfc_interface
    onyx_protocol:
      redirect: mellanox.onyx.onyx_protocol
    onyx_ptp_global:
      redirect: mellanox.onyx.onyx_ptp_global
    onyx_ptp_interface:
      redirect: mellanox.onyx.onyx_ptp_interface
    onyx_qos:
      redirect: mellanox.onyx.onyx_qos
    onyx_snmp:
      redirect: mellanox.onyx.onyx_snmp
    onyx_snmp_hosts:
      redirect: mellanox.onyx.onyx_snmp_hosts
    onyx_snmp_users:
      redirect: mellanox.onyx.onyx_snmp_users
    onyx_syslog_files:
      redirect: mellanox.onyx.onyx_syslog_files
    onyx_syslog_remote:
      redirect: mellanox.onyx.onyx_syslog_remote
    onyx_traffic_class:
      redirect: mellanox.onyx.onyx_traffic_class
    onyx_username:
      redirect: mellanox.onyx.onyx_username
    onyx_vlan:
      redirect: mellanox.onyx.onyx_vlan
    onyx_vxlan:
      redirect: mellanox.onyx.onyx_vxlan
    onyx_wjh:
      redirect: mellanox.onyx.onyx_wjh
    opx_cps:
      redirect: community.network.opx_cps
    ordnance_config:
      redirect: community.network.ordnance_config
    ordnance_facts:
      redirect: community.network.ordnance_facts
    panos_admin:
      redirect: community.network.panos_admin
    panos_admpwd:
      redirect: community.network.panos_admpwd
    panos_cert_gen_ssh:
      redirect: community.network.panos_cert_gen_ssh
    panos_check:
      redirect: community.network.panos_check
    panos_commit:
      redirect: community.network.panos_commit
    panos_dag:
      redirect: community.network.panos_dag
    panos_dag_tags:
      redirect: community.network.panos_dag_tags
    panos_import:
      redirect: community.network.panos_import
    panos_interface:
      redirect: community.network.panos_interface
    panos_lic:
      redirect: community.network.panos_lic
    panos_loadcfg:
      redirect: community.network.panos_loadcfg
    panos_match_rule:
      redirect: community.network.panos_match_rule
    panos_mgtconfig:
      redirect: community.network.panos_mgtconfig
    panos_nat_rule:
      redirect: community.network.panos_nat_rule
    panos_object:
      redirect: community.network.panos_object
    panos_op:
      redirect: community.network.panos_op
    panos_pg:
      redirect: community.network.panos_pg
    panos_query_rules:
      redirect: community.network.panos_query_rules
    panos_restart:
      redirect: community.network.panos_restart
    panos_sag:
      redirect: community.network.panos_sag
    panos_security_rule:
      redirect: community.network.panos_security_rule
    panos_set:
      redirect: community.network.panos_set
    vdirect_commit:
      redirect: community.network.vdirect_commit
    vdirect_file:
      redirect: community.network.vdirect_file
    vdirect_runnable:
      redirect: community.network.vdirect_runnable
    routeros_command:
      redirect: community.routeros.command
    routeros_facts:
      redirect: community.routeros.facts
    slxos_command:
      redirect: community.network.slxos_command
    slxos_config:
      redirect: community.network.slxos_config
    slxos_facts:
      redirect: community.network.slxos_facts
    slxos_interface:
      redirect: community.network.slxos_interface
    slxos_l2_interface:
      redirect: community.network.slxos_l2_interface
    slxos_l3_interface:
      redirect: community.network.slxos_l3_interface
    slxos_linkagg:
      redirect: community.network.slxos_linkagg
    slxos_lldp:
      redirect: community.network.slxos_lldp
    slxos_vlan:
      redirect: community.network.slxos_vlan
    sros_command:
      redirect: community.network.sros_command
    sros_config:
      redirect: community.network.sros_config
    sros_rollback:
      redirect: community.network.sros_rollback
    voss_command:
      redirect: community.network.voss_command
    voss_config:
      redirect: community.network.voss_config
    voss_facts:
      redirect: community.network.voss_facts
    osx_say:
      redirect: community.general.say
    bearychat:
      redirect: community.general.bearychat
    campfire:
      redirect: community.general.campfire
    catapult:
      redirect: community.general.catapult
    cisco_spark:
      redirect: community.general.cisco_spark
    flowdock:
      redirect: community.general.flowdock
    grove:
      redirect: community.general.grove
    hipchat:
      redirect: community.general.hipchat
    irc:
      redirect: community.general.irc
    jabber:
      redirect: community.general.jabber
    logentries_msg:
      redirect: community.general.logentries_msg
    mail:
      redirect: community.general.mail
    matrix:
      redirect: community.general.matrix
    mattermost:
      redirect: community.general.mattermost
    mqtt:
      redirect: community.general.mqtt
    nexmo:
      redirect: community.general.nexmo
    office_365_connector_card:
      redirect: community.general.office_365_connector_card
    pushbullet:
      redirect: community.general.pushbullet
    pushover:
      redirect: community.general.pushover
    rabbitmq_publish:
      redirect: community.rabbitmq.rabbitmq_publish
    rocketchat:
      redirect: community.general.rocketchat
    say:
      redirect: community.general.say
    sendgrid:
      redirect: community.general.sendgrid
    slack:
      redirect: community.general.slack
    syslogger:
      redirect: community.general.syslogger
    telegram:
      redirect: community.general.telegram
    twilio:
      redirect: community.general.twilio
    typetalk:
      redirect: community.general.typetalk
    bower:
      redirect: community.general.bower
    bundler:
      redirect: community.general.bundler
    composer:
      redirect: community.general.composer
    cpanm:
      redirect: community.general.cpanm
    easy_install:
      redirect: community.general.easy_install
    gem:
      redirect: community.general.gem
    maven_artifact:
      redirect: community.general.maven_artifact
    npm:
      redirect: community.general.npm
    pear:
      redirect: community.general.pear
    pip_package_info:
      redirect: community.general.pip_package_info
    yarn:
      redirect: community.general.yarn
    apk:
      redirect: community.general.apk
    apt_rpm:
      redirect: community.general.apt_rpm
    flatpak:
      redirect: community.general.flatpak
    flatpak_remote:
      redirect: community.general.flatpak_remote
    homebrew:
      redirect: community.general.homebrew
    homebrew_cask:
      redirect: community.general.homebrew_cask
    homebrew_tap:
      redirect: community.general.homebrew_tap
    installp:
      redirect: community.general.installp
    layman:
      redirect: community.general.layman
    macports:
      redirect: community.general.macports
    mas:
      redirect: community.general.mas
    openbsd_pkg:
      redirect: community.general.openbsd_pkg
    opkg:
      redirect: community.general.opkg
    pacman:
      redirect: community.general.pacman
    pkg5:
      redirect: community.general.pkg5
    pkg5_publisher:
      redirect: community.general.pkg5_publisher
    pkgin:
      redirect: community.general.pkgin
    pkgng:
      redirect: community.general.pkgng
    pkgutil:
      redirect: community.general.pkgutil
    portage:
      redirect: community.general.portage
    portinstall:
      redirect: community.general.portinstall
    pulp_repo:
      redirect: community.general.pulp_repo
    redhat_subscription:
      redirect: community.general.redhat_subscription
    rhn_channel:
      redirect: community.general.rhn_channel
    rhn_register:
      redirect: community.general.rhn_register
    rhsm_release:
      redirect: community.general.rhsm_release
    rhsm_repository:
      redirect: community.general.rhsm_repository
    slackpkg:
      redirect: community.general.slackpkg
    snap:
      redirect: community.general.snap
    sorcery:
      redirect: community.general.sorcery
    svr4pkg:
      redirect: community.general.svr4pkg
    swdepot:
      redirect: community.general.swdepot
    swupd:
      redirect: community.general.swupd
    urpmi:
      redirect: community.general.urpmi
    xbps:
      redirect: community.general.xbps
    zypper:
      redirect: community.general.zypper
    zypper_repository:
      redirect: community.general.zypper_repository
    cobbler_sync:
      redirect: community.general.cobbler_sync
    cobbler_system:
      redirect: community.general.cobbler_system
    idrac_firmware:
      redirect: dellemc.openmanage.idrac_firmware
    idrac_server_config_profile:
      redirect: dellemc.openmanage.idrac_server_config_profile
    ome_device_info:
      redirect: dellemc.openmanage.ome_device_info
    foreman:
      redirect: community.general.foreman
    katello:
      redirect: community.general.katello
    hpilo_facts:
      redirect: community.general.hpilo_facts
    hpilo_boot:
      redirect: community.general.hpilo_boot
    hpilo_info:
      redirect: community.general.hpilo_info
    hponcfg:
      redirect: community.general.hponcfg
    imc_rest:
      redirect: community.general.imc_rest
    ipmi_boot:
      redirect: community.general.ipmi_boot
    ipmi_power:
      redirect: community.general.ipmi_power
    lxca_cmms:
      redirect: community.general.lxca_cmms
    lxca_nodes:
      redirect: community.general.lxca_nodes
    manageiq_alert_profiles:
      redirect: community.general.manageiq_alert_profiles
    manageiq_alerts:
      redirect: community.general.manageiq_alerts
    manageiq_group:
      redirect: community.general.manageiq_group
    manageiq_policies:
      redirect: community.general.manageiq_policies
    manageiq_provider:
      redirect: community.general.manageiq_provider
    manageiq_tags:
      redirect: community.general.manageiq_tags
    manageiq_tenant:
      redirect: community.general.manageiq_tenant
    manageiq_user:
      redirect: community.general.manageiq_user
    oneview_datacenter_facts:
      redirect: community.general.oneview_datacenter_facts
    oneview_enclosure_facts:
      redirect: community.general.oneview_enclosure_facts
    oneview_ethernet_network_facts:
      redirect: community.general.oneview_ethernet_network_facts
    oneview_fc_network_facts:
      redirect: community.general.oneview_fc_network_facts
    oneview_fcoe_network_facts:
      redirect: community.general.oneview_fcoe_network_facts
    oneview_logical_interconnect_group_facts:
      redirect: community.general.oneview_logical_interconnect_group_facts
    oneview_network_set_facts:
      redirect: community.general.oneview_network_set_facts
    oneview_san_manager_facts:
      redirect: community.general.oneview_san_manager_facts
    oneview_datacenter_info:
      redirect: community.general.oneview_datacenter_info
    oneview_enclosure_info:
      redirect: community.general.oneview_enclosure_info
    oneview_ethernet_network:
      redirect: community.general.oneview_ethernet_network
    oneview_ethernet_network_info:
      redirect: community.general.oneview_ethernet_network_info
    oneview_fc_network:
      redirect: community.general.oneview_fc_network
    oneview_fc_network_info:
      redirect: community.general.oneview_fc_network_info
    oneview_fcoe_network:
      redirect: community.general.oneview_fcoe_network
    oneview_fcoe_network_info:
      redirect: community.general.oneview_fcoe_network_info
    oneview_logical_interconnect_group:
      redirect: community.general.oneview_logical_interconnect_group
    oneview_logical_interconnect_group_info:
      redirect: community.general.oneview_logical_interconnect_group_info
    oneview_network_set:
      redirect: community.general.oneview_network_set
    oneview_network_set_info:
      redirect: community.general.oneview_network_set_info
    oneview_san_manager:
      redirect: community.general.oneview_san_manager
    oneview_san_manager_info:
      redirect: community.general.oneview_san_manager_info
    idrac_redfish_facts:
      redirect: community.general.idrac_redfish_facts
    redfish_facts:
      redirect: community.general.redfish_facts
    idrac_redfish_command:
      redirect: community.general.idrac_redfish_command
    idrac_redfish_config:
      redirect: community.general.idrac_redfish_config
    idrac_redfish_info:
      redirect: community.general.idrac_redfish_info
    redfish_command:
      redirect: community.general.redfish_command
    redfish_config:
      redirect: community.general.redfish_config
    redfish_info:
      redirect: community.general.redfish_info
    stacki_host:
      redirect: community.general.stacki_host
    wakeonlan:
      redirect: community.general.wakeonlan
    bitbucket_access_key:
      redirect: community.general.bitbucket_access_key
    bitbucket_pipeline_key_pair:
      redirect: community.general.bitbucket_pipeline_key_pair
    bitbucket_pipeline_known_host:
      redirect: community.general.bitbucket_pipeline_known_host
    bitbucket_pipeline_variable:
      redirect: community.general.bitbucket_pipeline_variable
    bzr:
      redirect: community.general.bzr
    git_config:
      redirect: community.general.git_config
    github_hooks:
      redirect: community.general.github_hooks
    github_webhook_facts:
      redirect: community.general.github_webhook_info
    github_deploy_key:
      redirect: community.general.github_deploy_key
    github_issue:
      redirect: community.general.github_issue
    github_key:
      redirect: community.general.github_key
    github_release:
      redirect: community.general.github_release
    github_webhook:
      redirect: community.general.github_webhook
    github_webhook_info:
      redirect: community.general.github_webhook_info
    gitlab_hooks:
      redirect: community.general.gitlab_hook
    gitlab_deploy_key:
      redirect: community.general.gitlab_deploy_key
    gitlab_group:
      redirect: community.general.gitlab_group
    gitlab_hook:
      redirect: community.general.gitlab_hook
    gitlab_project:
      redirect: community.general.gitlab_project
    gitlab_project_variable:
      redirect: community.general.gitlab_project_variable
    gitlab_runner:
      redirect: community.general.gitlab_runner
    gitlab_user:
      redirect: community.general.gitlab_user
    hg:
      redirect: community.general.hg
    emc_vnx_sg_member:
      redirect: community.general.emc_vnx_sg_member
    gluster_heal_facts:
      redirect: gluster.gluster.gluster_heal_info
    gluster_heal_info:
      redirect: gluster.gluster.gluster_heal_info
    gluster_peer:
      redirect: gluster.gluster.gluster_peer
    gluster_volume:
      redirect: gluster.gluster.gluster_volume
    ss_3par_cpg:
      redirect: community.general.ss_3par_cpg
    ibm_sa_domain:
      redirect: community.general.ibm_sa_domain
    ibm_sa_host:
      redirect: community.general.ibm_sa_host
    ibm_sa_host_ports:
      redirect: community.general.ibm_sa_host_ports
    ibm_sa_pool:
      redirect: community.general.ibm_sa_pool
    ibm_sa_vol:
      redirect: community.general.ibm_sa_vol
    ibm_sa_vol_map:
      redirect: community.general.ibm_sa_vol_map
    infini_export:
      redirect: infinidat.infinibox.infini_export
    infini_export_client:
      redirect: infinidat.infinibox.infini_export_client
    infini_fs:
      redirect: infinidat.infinibox.infini_fs
    infini_host:
      redirect: infinidat.infinibox.infini_host
    infini_pool:
      redirect: infinidat.infinibox.infini_pool
    infini_vol:
      redirect: infinidat.infinibox.infini_vol
    na_cdot_aggregate:
      redirect: community.general.na_cdot_aggregate
    na_cdot_license:
      redirect: community.general.na_cdot_license
    na_cdot_lun:
      redirect: community.general.na_cdot_lun
    na_cdot_qtree:
      redirect: community.general.na_cdot_qtree
    na_cdot_svm:
      redirect: community.general.na_cdot_svm
    na_cdot_user:
      redirect: community.general.na_cdot_user
    na_cdot_user_role:
      redirect: community.general.na_cdot_user_role
    na_cdot_volume:
      redirect: community.general.na_cdot_volume
    na_ontap_gather_facts:
      redirect: community.general.na_ontap_gather_facts
    sf_account_manager:
      redirect: community.general.sf_account_manager
    sf_check_connections:
      redirect: community.general.sf_check_connections
    sf_snapshot_schedule_manager:
      redirect: community.general.sf_snapshot_schedule_manager
    sf_volume_access_group_manager:
      redirect: community.general.sf_volume_access_group_manager
    sf_volume_manager:
      redirect: community.general.sf_volume_manager
    netapp_e_alerts:
      redirect: netapp_eseries.santricity.netapp_e_alerts
    netapp_e_amg:
      redirect: netapp_eseries.santricity.netapp_e_amg
    netapp_e_amg_role:
      redirect: netapp_eseries.santricity.netapp_e_amg_role
    netapp_e_amg_sync:
      redirect: netapp_eseries.santricity.netapp_e_amg_sync
    netapp_e_asup:
      redirect: netapp_eseries.santricity.netapp_e_asup
    netapp_e_auditlog:
      redirect: netapp_eseries.santricity.netapp_e_auditlog
    netapp_e_auth:
      redirect: netapp_eseries.santricity.netapp_e_auth
    netapp_e_drive_firmware:
      redirect: netapp_eseries.santricity.netapp_e_drive_firmware
    netapp_e_facts:
      redirect: netapp_eseries.santricity.netapp_e_facts
    netapp_e_firmware:
      redirect: netapp_eseries.santricity.netapp_e_firmware
    netapp_e_flashcache:
      redirect: netapp_eseries.santricity.netapp_e_flashcache
    netapp_e_global:
      redirect: netapp_eseries.santricity.netapp_e_global
    netapp_e_host:
      redirect: netapp_eseries.santricity.netapp_e_host
    netapp_e_hostgroup:
      redirect: netapp_eseries.santricity.netapp_e_hostgroup
    netapp_e_iscsi_interface:
      redirect: netapp_eseries.santricity.netapp_e_iscsi_interface
    netapp_e_iscsi_target:
      redirect: netapp_eseries.santricity.netapp_e_iscsi_target
    netapp_e_ldap:
      redirect: netapp_eseries.santricity.netapp_e_ldap
    netapp_e_lun_mapping:
      redirect: netapp_eseries.santricity.netapp_e_lun_mapping
    netapp_e_mgmt_interface:
      redirect: netapp_eseries.santricity.netapp_e_mgmt_interface
    netapp_e_snapshot_group:
      redirect: netapp_eseries.santricity.netapp_e_snapshot_group
    netapp_e_snapshot_images:
      redirect: netapp_eseries.santricity.netapp_e_snapshot_images
    netapp_e_snapshot_volume:
      redirect: netapp_eseries.santricity.netapp_e_snapshot_volume
    netapp_e_storage_system:
      redirect: netapp_eseries.santricity.netapp_e_storage_system
    netapp_e_storagepool:
      redirect: netapp_eseries.santricity.netapp_e_storagepool
    netapp_e_syslog:
      redirect: netapp_eseries.santricity.netapp_e_syslog
    netapp_e_volume:
      redirect: netapp_eseries.santricity.netapp_e_volume
    netapp_e_volume_copy:
      redirect: netapp_eseries.santricity.netapp_e_volume_copy
    purefa_facts:
      redirect: community.general.purefa_facts
    purefb_facts:
      redirect: community.general.purefb_facts
    vexata_eg:
      redirect: community.general.vexata_eg
    vexata_volume:
      redirect: community.general.vexata_volume
    zfs:
      redirect: community.general.zfs
    zfs_delegate_admin:
      redirect: community.general.zfs_delegate_admin
    zfs_facts:
      redirect: community.general.zfs_facts
    zpool_facts:
      redirect: community.general.zpool_facts
    python_requirements_facts:
      redirect: community.general.python_requirements_facts
    aix_devices:
      redirect: community.general.aix_devices
    aix_filesystem:
      redirect: community.general.aix_filesystem
    aix_inittab:
      redirect: community.general.aix_inittab
    aix_lvg:
      redirect: community.general.aix_lvg
    aix_lvol:
      redirect: community.general.aix_lvol
    alternatives:
      redirect: community.general.alternatives
    awall:
      redirect: community.general.awall
    beadm:
      redirect: community.general.beadm
    capabilities:
      redirect: community.general.capabilities
    cronvar:
      redirect: community.general.cronvar
    crypttab:
      redirect: community.general.crypttab
    dconf:
      redirect: community.general.dconf
    facter:
      redirect: community.general.facter
    filesystem:
      redirect: community.general.filesystem
    firewalld:
      redirect: ansible.posix.firewalld
    gconftool2:
      redirect: community.general.gconftool2
    interfaces_file:
      redirect: community.general.interfaces_file
    java_cert:
      redirect: community.general.java_cert
    java_keystore:
      redirect: community.general.java_keystore
    kernel_blacklist:
      redirect: community.general.kernel_blacklist
    lbu:
      redirect: community.general.lbu
    listen_ports_facts:
      redirect: community.general.listen_ports_facts
    locale_gen:
      redirect: community.general.locale_gen
    lvg:
      redirect: community.general.lvg
    lvol:
      redirect: community.general.lvol
    make:
      redirect: community.general.make
    mksysb:
      redirect: community.general.mksysb
    modprobe:
      redirect: community.general.modprobe
    nosh:
      redirect: community.general.nosh
    ohai:
      redirect: community.general.ohai
    open_iscsi:
      redirect: community.general.open_iscsi
    openwrt_init:
      redirect: community.general.openwrt_init
    osx_defaults:
      redirect: community.general.osx_defaults
    pam_limits:
      redirect: community.general.pam_limits
    pamd:
      redirect: community.general.pamd
    parted:
      redirect: community.general.parted
    pids:
      redirect: community.general.pids
    puppet:
      redirect: community.general.puppet
    python_requirements_info:
      redirect: community.general.python_requirements_info
    runit:
      redirect: community.general.runit
    sefcontext:
      redirect: community.general.sefcontext
    selinux_permissive:
      redirect: community.general.selinux_permissive
    selogin:
      redirect: community.general.selogin
    seport:
      redirect: community.general.seport
    solaris_zone:
      redirect: community.general.solaris_zone
    svc:
      redirect: community.general.svc
    syspatch:
      redirect: community.general.syspatch
    timezone:
      redirect: community.general.timezone
    ufw:
      redirect: community.general.ufw
    vdo:
      redirect: community.general.vdo
    xfconf:
      redirect: community.general.xfconf
    xfs_quota:
      redirect: community.general.xfs_quota
    jenkins_job_facts:
      redirect: community.general.jenkins_job_facts
    nginx_status_facts:
      redirect: community.general.nginx_status_facts
    apache2_mod_proxy:
      redirect: community.general.apache2_mod_proxy
    apache2_module:
      redirect: community.general.apache2_module
    deploy_helper:
      redirect: community.general.deploy_helper
    django_manage:
      redirect: community.general.django_manage
    ejabberd_user:
      redirect: community.general.ejabberd_user
    gunicorn:
      redirect: community.general.gunicorn
    htpasswd:
      redirect: community.general.htpasswd
    jboss:
      redirect: community.general.jboss
    jenkins_job:
      redirect: community.general.jenkins_job
    jenkins_job_info:
      redirect: community.general.jenkins_job_info
    jenkins_plugin:
      redirect: community.general.jenkins_plugin
    jenkins_script:
      redirect: community.general.jenkins_script
    jira:
      redirect: community.general.jira
    nginx_status_info:
      redirect: community.general.nginx_status_info
    rundeck_acl_policy:
      redirect: community.general.rundeck_acl_policy
    rundeck_project:
      redirect: community.general.rundeck_project
    utm_aaa_group:
      redirect: community.general.utm_aaa_group
    utm_aaa_group_info:
      redirect: community.general.utm_aaa_group_info
    utm_ca_host_key_cert:
      redirect: community.general.utm_ca_host_key_cert
    utm_ca_host_key_cert_info:
      redirect: community.general.utm_ca_host_key_cert_info
    utm_dns_host:
      redirect: community.general.utm_dns_host
    utm_network_interface_address:
      redirect: community.general.utm_network_interface_address
    utm_network_interface_address_info:
      redirect: community.general.utm_network_interface_address_info
    utm_proxy_auth_profile:
      redirect: community.general.utm_proxy_auth_profile
    utm_proxy_exception:
      redirect: community.general.utm_proxy_exception
    utm_proxy_frontend:
      redirect: community.general.utm_proxy_frontend
    utm_proxy_frontend_info:
      redirect: community.general.utm_proxy_frontend_info
    utm_proxy_location:
      redirect: community.general.utm_proxy_location
    utm_proxy_location_info:
      redirect: community.general.utm_proxy_location_info
    supervisorctl:
      redirect: community.general.supervisorctl
    taiga_issue:
      redirect: community.general.taiga_issue
    grafana_dashboard:
      redirect: community.grafana.grafana_dashboard
    grafana_datasource:
      redirect: community.grafana.grafana_datasource
    grafana_plugin:
      redirect: community.grafana.grafana_plugin
    k8s_facts:
      redirect: kubernetes.core.k8s_facts
    k8s_raw:
      redirect: kubernetes.core.k8s_raw
    k8s:
      redirect: kubernetes.core.k8s
    k8s_auth:
      redirect: kubernetes.core.k8s_auth
    k8s_info:
      redirect: kubernetes.core.k8s_info
    k8s_scale:
      redirect: kubernetes.core.k8s_scale
    k8s_service:
      redirect: kubernetes.core.k8s_service
    openshift_raw:
      redirect: kubernetes.core.openshift_raw
    openshift_scale:
      redirect: kubernetes.core.openshift_scale
    openssh_cert:
      redirect: community.crypto.openssh_cert
    openssl_pkcs12:
      redirect: community.crypto.openssl_pkcs12
    openssl_csr:
      redirect: community.crypto.openssl_csr
    openssl_certificate:
      redirect: community.crypto.x509_certificate
    openssl_certificate_info:
      redirect: community.crypto.x509_certificate_info
    x509_crl:
      redirect: community.crypto.x509_crl
    openssl_privatekey_info:
      redirect: community.crypto.openssl_privatekey_info
    x509_crl_info:
      redirect: community.crypto.x509_crl_info
    get_certificate:
      redirect: community.crypto.get_certificate
    openssh_keypair:
      redirect: community.crypto.openssh_keypair
    openssl_publickey:
      redirect: community.crypto.openssl_publickey
    openssl_csr_info:
      redirect: community.crypto.openssl_csr_info
    luks_device:
      redirect: community.crypto.luks_device
    openssl_dhparam:
      redirect: community.crypto.openssl_dhparam
    openssl_privatekey:
      redirect: community.crypto.openssl_privatekey
    certificate_complete_chain:
      redirect: community.crypto.certificate_complete_chain
    acme_inspect:
      redirect: community.crypto.acme_inspect
    acme_certificate_revoke:
      redirect: community.crypto.acme_certificate_revoke
    acme_certificate:
      redirect: community.crypto.acme_certificate
    acme_account:
      redirect: community.crypto.acme_account
    acme_account_facts:
      redirect: community.crypto.acme_account_facts
    acme_challenge_cert_helper:
      redirect: community.crypto.acme_challenge_cert_helper
    acme_account_info:
      redirect: community.crypto.acme_account_info
    ecs_domain:
      redirect: community.crypto.ecs_domain
    ecs_certificate:
      redirect: community.crypto.ecs_certificate
    mongodb_parameter:
      redirect: community.mongodb.mongodb_parameter
    mongodb_info:
      redirect: community.mongodb.mongodb_info
    mongodb_replicaset:
      redirect: community.mongodb.mongodb_replicaset
    mongodb_user:
      redirect: community.mongodb.mongodb_user
    mongodb_shard:
      redirect: community.mongodb.mongodb_shard
    vmware_appliance_access_info:
      redirect: vmware.vmware_rest.vmware_appliance_access_info
    vmware_appliance_health_info:
      redirect: vmware.vmware_rest.vmware_appliance_health_info
    vmware_cis_category_info:
      redirect: vmware.vmware_rest.vmware_cis_category_info
    vmware_core_info:
      redirect: vmware.vmware_rest.vmware_core_info
    vcenter_extension_facts:
      redirect: community.vmware.vcenter_extension_facts
    vmware_about_facts:
      redirect: community.vmware.vmware_about_facts
    vmware_category_facts:
      redirect: community.vmware.vmware_category_facts
    vmware_cluster_facts:
      redirect: community.vmware.vmware_cluster_facts
    vmware_datastore_facts:
      redirect: community.vmware.vmware_datastore_facts
    vmware_dns_config:
      redirect: community.vmware.vmware_dns_config
    vmware_drs_group_facts:
      redirect: community.vmware.vmware_drs_group_facts
    vmware_drs_rule_facts:
      redirect: community.vmware.vmware_drs_rule_facts
    vmware_dvs_portgroup_facts:
      redirect: community.vmware.vmware_dvs_portgroup_facts
    vmware_guest_boot_facts:
      redirect: community.vmware.vmware_guest_boot_facts
    vmware_guest_customization_facts:
      redirect: community.vmware.vmware_guest_customization_facts
    vmware_guest_disk_facts:
      redirect: community.vmware.vmware_guest_disk_facts
    vmware_guest_facts:
      redirect: community.vmware.vmware_guest_facts
    vmware_guest_snapshot_facts:
      redirect: community.vmware.vmware_guest_snapshot_facts
    vmware_host_capability_facts:
      redirect: community.vmware.vmware_host_capability_facts
    vmware_host_config_facts:
      redirect: community.vmware.vmware_host_config_facts
    vmware_host_dns_facts:
      redirect: community.vmware.vmware_host_dns_facts
    vmware_host_feature_facts:
      redirect: community.vmware.vmware_host_feature_facts
    vmware_host_firewall_facts:
      redirect: community.vmware.vmware_host_firewall_facts
    vmware_host_ntp_facts:
      redirect: community.vmware.vmware_host_ntp_facts
    vmware_host_package_facts:
      redirect: community.vmware.vmware_host_package_facts
    vmware_host_service_facts:
      redirect: community.vmware.vmware_host_service_facts
    vmware_host_ssl_facts:
      redirect: community.vmware.vmware_host_ssl_facts
    vmware_host_vmhba_facts:
      redirect: community.vmware.vmware_host_vmhba_facts
    vmware_host_vmnic_facts:
      redirect: community.vmware.vmware_host_vmnic_facts
    vmware_local_role_facts:
      redirect: community.vmware.vmware_local_role_facts
    vmware_local_user_facts:
      redirect: community.vmware.vmware_local_user_facts
    vmware_portgroup_facts:
      redirect: community.vmware.vmware_portgroup_facts
    vmware_resource_pool_facts:
      redirect: community.vmware.vmware_resource_pool_facts
    vmware_tag_facts:
      redirect: community.vmware.vmware_tag_facts
    vmware_target_canonical_facts:
      redirect: community.vmware.vmware_target_canonical_facts
    vmware_vm_facts:
      redirect: community.vmware.vmware_vm_facts
    vmware_vmkernel_facts:
      redirect: community.vmware.vmware_vmkernel_facts
    vmware_vswitch_facts:
      redirect: community.vmware.vmware_vswitch_facts
    vca_fw:
      redirect: community.vmware.vca_fw
    vca_nat:
      redirect: community.vmware.vca_nat
    vca_vapp:
      redirect: community.vmware.vca_vapp
    vcenter_extension:
      redirect: community.vmware.vcenter_extension
    vcenter_extension_info:
      redirect: community.vmware.vcenter_extension_info
    vcenter_folder:
      redirect: community.vmware.vcenter_folder
    vcenter_license:
      redirect: community.vmware.vcenter_license
    vmware_about_info:
      redirect: community.vmware.vmware_about_info
    vmware_category:
      redirect: community.vmware.vmware_category
    vmware_category_info:
      redirect: community.vmware.vmware_category_info
    vmware_cfg_backup:
      redirect: community.vmware.vmware_cfg_backup
    vmware_cluster:
      redirect: community.vmware.vmware_cluster
    vmware_cluster_drs:
      redirect: community.vmware.vmware_cluster_drs
    vmware_cluster_ha:
      redirect: community.vmware.vmware_cluster_ha
    vmware_cluster_info:
      redirect: community.vmware.vmware_cluster_info
    vmware_cluster_vsan:
      redirect: community.vmware.vmware_cluster_vsan
    vmware_content_deploy_template:
      redirect: community.vmware.vmware_content_deploy_template
    vmware_content_library_info:
      redirect: community.vmware.vmware_content_library_info
    vmware_content_library_manager:
      redirect: community.vmware.vmware_content_library_manager
    vmware_datacenter:
      redirect: community.vmware.vmware_datacenter
    vmware_datastore_cluster:
      redirect: community.vmware.vmware_datastore_cluster
    vmware_datastore_info:
      redirect: community.vmware.vmware_datastore_info
    vmware_datastore_maintenancemode:
      redirect: community.vmware.vmware_datastore_maintenancemode
    vmware_deploy_ovf:
      redirect: community.vmware.vmware_deploy_ovf
    vmware_drs_group:
      redirect: community.vmware.vmware_drs_group
    vmware_drs_group_info:
      redirect: community.vmware.vmware_drs_group_info
    vmware_drs_rule_info:
      redirect: community.vmware.vmware_drs_rule_info
    vmware_dvs_host:
      redirect: community.vmware.vmware_dvs_host
    vmware_dvs_portgroup:
      redirect: community.vmware.vmware_dvs_portgroup
    vmware_dvs_portgroup_find:
      redirect: community.vmware.vmware_dvs_portgroup_find
    vmware_dvs_portgroup_info:
      redirect: community.vmware.vmware_dvs_portgroup_info
    vmware_dvswitch:
      redirect: community.vmware.vmware_dvswitch
    vmware_dvswitch_lacp:
      redirect: community.vmware.vmware_dvswitch_lacp
    vmware_dvswitch_nioc:
      redirect: community.vmware.vmware_dvswitch_nioc
    vmware_dvswitch_pvlans:
      redirect: community.vmware.vmware_dvswitch_pvlans
    vmware_dvswitch_uplink_pg:
      redirect: community.vmware.vmware_dvswitch_uplink_pg
    vmware_evc_mode:
      redirect: community.vmware.vmware_evc_mode
    vmware_export_ovf:
      redirect: community.vmware.vmware_export_ovf
    vmware_folder_info:
      redirect: community.vmware.vmware_folder_info
    vmware_guest:
      redirect: community.vmware.vmware_guest
    vmware_guest_boot_info:
      redirect: community.vmware.vmware_guest_boot_info
    vmware_guest_boot_manager:
      redirect: community.vmware.vmware_guest_boot_manager
    vmware_guest_controller:
      redirect: community.vmware.vmware_guest_controller
    vmware_guest_cross_vc_clone:
      redirect: community.vmware.vmware_guest_cross_vc_clone
    vmware_guest_custom_attribute_defs:
      redirect: community.vmware.vmware_guest_custom_attribute_defs
    vmware_guest_custom_attributes:
      redirect: community.vmware.vmware_guest_custom_attributes
    vmware_guest_customization_info:
      redirect: community.vmware.vmware_guest_customization_info
    vmware_guest_disk:
      redirect: community.vmware.vmware_guest_disk
    vmware_guest_disk_info:
      redirect: community.vmware.vmware_guest_disk_info
    vmware_guest_file_operation:
      redirect: community.vmware.vmware_guest_file_operation
    vmware_guest_find:
      redirect: community.vmware.vmware_guest_find
    vmware_guest_info:
      redirect: community.vmware.vmware_guest_info
    vmware_guest_move:
      redirect: community.vmware.vmware_guest_move
    vmware_guest_network:
      redirect: community.vmware.vmware_guest_network
    vmware_guest_powerstate:
      redirect: community.vmware.vmware_guest_powerstate
    vmware_guest_register_operation:
      redirect: community.vmware.vmware_guest_register_operation
    vmware_guest_screenshot:
      redirect: community.vmware.vmware_guest_screenshot
    vmware_guest_sendkey:
      redirect: community.vmware.vmware_guest_sendkey
    vmware_guest_serial_port:
      redirect: community.vmware.vmware_guest_serial_port
    vmware_guest_snapshot:
      redirect: community.vmware.vmware_guest_snapshot
    vmware_guest_snapshot_info:
      redirect: community.vmware.vmware_guest_snapshot_info
    vmware_guest_tools_info:
      redirect: community.vmware.vmware_guest_tools_info
    vmware_guest_tools_upgrade:
      redirect: community.vmware.vmware_guest_tools_upgrade
    vmware_guest_tools_wait:
      redirect: community.vmware.vmware_guest_tools_wait
    vmware_guest_video:
      redirect: community.vmware.vmware_guest_video
    vmware_guest_vnc:
      redirect: community.vmware.vmware_guest_vnc
    vmware_host:
      redirect: community.vmware.vmware_host
    vmware_host_acceptance:
      redirect: community.vmware.vmware_host_acceptance
    vmware_host_active_directory:
      redirect: community.vmware.vmware_host_active_directory
    vmware_host_auto_start:
      redirect: community.vmware.vmware_host_auto_start
    vmware_host_capability_info:
      redirect: community.vmware.vmware_host_capability_info
    vmware_host_config_info:
      redirect: community.vmware.vmware_host_config_info
    vmware_host_config_manager:
      redirect: community.vmware.vmware_host_config_manager
    vmware_host_datastore:
      redirect: community.vmware.vmware_host_datastore
    vmware_host_dns:
      redirect: community.vmware.vmware_host_dns
    vmware_host_dns_info:
      redirect: community.vmware.vmware_host_dns_info
    vmware_host_facts:
      redirect: community.vmware.vmware_host_facts
    vmware_host_feature_info:
      redirect: community.vmware.vmware_host_feature_info
    vmware_host_firewall_info:
      redirect: community.vmware.vmware_host_firewall_info
    vmware_host_firewall_manager:
      redirect: community.vmware.vmware_host_firewall_manager
    vmware_host_hyperthreading:
      redirect: community.vmware.vmware_host_hyperthreading
    vmware_host_ipv6:
      redirect: community.vmware.vmware_host_ipv6
    vmware_host_kernel_manager:
      redirect: community.vmware.vmware_host_kernel_manager
    vmware_host_lockdown:
      redirect: community.vmware.vmware_host_lockdown
    vmware_host_ntp:
      redirect: community.vmware.vmware_host_ntp
    vmware_host_ntp_info:
      redirect: community.vmware.vmware_host_ntp_info
    vmware_host_package_info:
      redirect: community.vmware.vmware_host_package_info
    vmware_host_powermgmt_policy:
      redirect: community.vmware.vmware_host_powermgmt_policy
    vmware_host_powerstate:
      redirect: community.vmware.vmware_host_powerstate
    vmware_host_scanhba:
      redirect: community.vmware.vmware_host_scanhba
    vmware_host_service_info:
      redirect: community.vmware.vmware_host_service_info
    vmware_host_service_manager:
      redirect: community.vmware.vmware_host_service_manager
    vmware_host_snmp:
      redirect: community.vmware.vmware_host_snmp
    vmware_host_ssl_info:
      redirect: community.vmware.vmware_host_ssl_info
    vmware_host_vmhba_info:
      redirect: community.vmware.vmware_host_vmhba_info
    vmware_host_vmnic_info:
      redirect: community.vmware.vmware_host_vmnic_info
    vmware_local_role_info:
      redirect: community.vmware.vmware_local_role_info
    vmware_local_role_manager:
      redirect: community.vmware.vmware_local_role_manager
    vmware_local_user_info:
      redirect: community.vmware.vmware_local_user_info
    vmware_local_user_manager:
      redirect: community.vmware.vmware_local_user_manager
    vmware_maintenancemode:
      redirect: community.vmware.vmware_maintenancemode
    vmware_migrate_vmk:
      redirect: community.vmware.vmware_migrate_vmk
    vmware_object_role_permission:
      redirect: community.vmware.vmware_object_role_permission
    vmware_portgroup:
      redirect: community.vmware.vmware_portgroup
    vmware_portgroup_info:
      redirect: community.vmware.vmware_portgroup_info
    vmware_resource_pool:
      redirect: community.vmware.vmware_resource_pool
    vmware_resource_pool_info:
      redirect: community.vmware.vmware_resource_pool_info
    vmware_tag:
      redirect: community.vmware.vmware_tag
    vmware_tag_info:
      redirect: community.vmware.vmware_tag_info
    vmware_tag_manager:
      redirect: community.vmware.vmware_tag_manager
    vmware_target_canonical_info:
      redirect: community.vmware.vmware_target_canonical_info
    vmware_vcenter_settings:
      redirect: community.vmware.vmware_vcenter_settings
    vmware_vcenter_statistics:
      redirect: community.vmware.vmware_vcenter_statistics
    vmware_vm_host_drs_rule:
      redirect: community.vmware.vmware_vm_host_drs_rule
    vmware_vm_info:
      redirect: community.vmware.vmware_vm_info
    vmware_vm_shell:
      redirect: community.vmware.vmware_vm_shell
    vmware_vm_storage_policy_info:
      redirect: community.vmware.vmware_vm_storage_policy_info
    vmware_vm_vm_drs_rule:
      redirect: community.vmware.vmware_vm_vm_drs_rule
    vmware_vm_vss_dvs_migrate:
      redirect: community.vmware.vmware_vm_vss_dvs_migrate
    vmware_vmkernel:
      redirect: community.vmware.vmware_vmkernel
    vmware_vmkernel_info:
      redirect: community.vmware.vmware_vmkernel_info
    vmware_vmkernel_ip_config:
      redirect: community.vmware.vmware_vmkernel_ip_config
    vmware_vmotion:
      redirect: community.vmware.vmware_vmotion
    vmware_vsan_cluster:
      redirect: community.vmware.vmware_vsan_cluster
    vmware_vsan_health_info:
      redirect: community.vmware.vmware_vsan_health_info
    vmware_vspan_session:
      redirect: community.vmware.vmware_vspan_session
    vmware_vswitch:
      redirect: community.vmware.vmware_vswitch
    vmware_vswitch_info:
      redirect: community.vmware.vmware_vswitch_info
    vsphere_copy:
      redirect: community.vmware.vsphere_copy
    vsphere_file:
      redirect: community.vmware.vsphere_file
    psexec:
      redirect: community.windows.psexec
    win_audit_policy_system:
      redirect: community.windows.win_audit_policy_system
    win_audit_rule:
      redirect: community.windows.win_audit_rule
    win_chocolatey:
      redirect: chocolatey.chocolatey.win_chocolatey
    win_chocolatey_config:
      redirect: chocolatey.chocolatey.win_chocolatey_config
    win_chocolatey_facts:
      redirect: chocolatey.chocolatey.win_chocolatey_facts
    win_chocolatey_feature:
      redirect: chocolatey.chocolatey.win_chocolatey_feature
    win_chocolatey_source:
      redirect: chocolatey.chocolatey.win_chocolatey_source
    win_credential:
      redirect: community.windows.win_credential
    win_defrag:
      redirect: community.windows.win_defrag
    win_disk_facts:
      redirect: community.windows.win_disk_facts
    win_disk_image:
      redirect: community.windows.win_disk_image
    win_dns_record:
      redirect: community.windows.win_dns_record
    win_domain_computer:
      redirect: community.windows.win_domain_computer
    win_domain_group:
      redirect: community.windows.win_domain_group
    win_domain_group_membership:
      redirect: community.windows.win_domain_group_membership
    win_domain_user:
      redirect: community.windows.win_domain_user
    win_dotnet_ngen:
      redirect: community.windows.win_dotnet_ngen
    win_eventlog:
      redirect: community.windows.win_eventlog
    win_eventlog_entry:
      redirect: community.windows.win_eventlog_entry
    win_file_version:
      redirect: community.windows.win_file_version
    win_firewall:
      redirect: community.windows.win_firewall
    win_firewall_rule:
      redirect: community.windows.win_firewall_rule
    win_format:
      redirect: community.windows.win_format
    win_hosts:
      redirect: community.windows.win_hosts
    win_hotfix:
      redirect: community.windows.win_hotfix
    win_http_proxy:
      redirect: community.windows.win_http_proxy
    win_iis_virtualdirectory:
      redirect: community.windows.win_iis_virtualdirectory
    win_iis_webapplication:
      redirect: community.windows.win_iis_webapplication
    win_iis_webapppool:
      redirect: community.windows.win_iis_webapppool
    win_iis_webbinding:
      redirect: community.windows.win_iis_webbinding
    win_iis_website:
      redirect: community.windows.win_iis_website
    win_inet_proxy:
      redirect: community.windows.win_inet_proxy
    win_initialize_disk:
      redirect: community.windows.win_initialize_disk
    win_lineinfile:
      redirect: community.windows.win_lineinfile
    win_mapped_drive:
      redirect: community.windows.win_mapped_drive
    win_msg:
      redirect: community.windows.win_msg
    win_netbios:
      redirect: community.windows.win_netbios
    win_nssm:
      redirect: community.windows.win_nssm
    win_pagefile:
      redirect: community.windows.win_pagefile
    win_partition:
      redirect: community.windows.win_partition
    win_pester:
      redirect: community.windows.win_pester
    win_power_plan:
      redirect: community.windows.win_power_plan
    win_product_facts:
      redirect: community.windows.win_product_facts
    win_psexec:
      redirect: community.windows.win_psexec
    win_psmodule:
      redirect: community.windows.win_psmodule
    win_psrepository:
      redirect: community.windows.win_psrepository
    win_rabbitmq_plugin:
      redirect: community.windows.win_rabbitmq_plugin
    win_rds_cap:
      redirect: community.windows.win_rds_cap
    win_rds_rap:
      redirect: community.windows.win_rds_rap
    win_rds_settings:
      redirect: community.windows.win_rds_settings
    win_region:
      redirect: community.windows.win_region
    win_regmerge:
      redirect: community.windows.win_regmerge
    win_robocopy:
      redirect: community.windows.win_robocopy
    win_route:
      redirect: community.windows.win_route
    win_say:
      redirect: community.windows.win_say
    win_scheduled_task:
      redirect: community.windows.win_scheduled_task
    win_scheduled_task_stat:
      redirect: community.windows.win_scheduled_task_stat
    win_security_policy:
      redirect: community.windows.win_security_policy
    win_shortcut:
      redirect: community.windows.win_shortcut
    win_snmp:
      redirect: community.windows.win_snmp
    win_timezone:
      redirect: community.windows.win_timezone
    win_toast:
      redirect: community.windows.win_toast
    win_unzip:
      redirect: community.windows.win_unzip
    win_user_profile:
      redirect: community.windows.win_user_profile
    win_wait_for_process:
      redirect: community.windows.win_wait_for_process
    win_wakeonlan:
      redirect: community.windows.win_wakeonlan
    win_webpicmd:
      redirect: community.windows.win_webpicmd
    win_xml:
      redirect: community.windows.win_xml
    azure_rm_aks_facts:
      redirect: community.azure.azure_rm_aks_facts
    azure_rm_dnsrecordset_facts:
      redirect: community.azure.azure_rm_dnsrecordset_facts
    azure_rm_dnszone_facts:
      redirect: community.azure.azure_rm_dnszone_facts
    azure_rm_networkinterface_facts:
      redirect: community.azure.azure_rm_networkinterface_facts
    azure_rm_publicipaddress_facts:
      redirect: community.azure.azure_rm_publicipaddress_facts
    azure_rm_securitygroup_facts:
      redirect: community.azure.azure_rm_securitygroup_facts
    azure_rm_storageaccount_facts:
      redirect: community.azure.azure_rm_storageaccount_facts
    azure_rm_virtualmachine_facts:
      redirect: community.azure.azure_rm_virtualmachine_facts
    azure_rm_virtualnetwork_facts:
      redirect: community.azure.azure_rm_virtualnetwork_facts
    azure_rm_roledefinition_facts:
      redirect: community.azure.azure_rm_roledefinition_facts
    azure_rm_autoscale_facts:
      redirect: community.azure.azure_rm_autoscale_facts
    azure_rm_mysqldatabase_facts:
      redirect: community.azure.azure_rm_mysqldatabase_facts
    azure_rm_devtestlabschedule_facts:
      redirect: community.azure.azure_rm_devtestlabschedule_facts
    azure_rm_virtualmachinescaleset_facts:
      redirect: community.azure.azure_rm_virtualmachinescaleset_facts
    azure_rm_devtestlabcustomimage_facts:
      redirect: community.azure.azure_rm_devtestlabcustomimage_facts
    azure_rm_cosmosdbaccount_facts:
      redirect: community.azure.azure_rm_cosmosdbaccount_facts
    azure_rm_subnet_facts:
      redirect: community.azure.azure_rm_subnet_facts
    azure_rm_aksversion_facts:
      redirect: community.azure.azure_rm_aksversion_facts
    azure_rm_hdinsightcluster_facts:
      redirect: community.azure.azure_rm_hdinsightcluster_facts
    azure_rm_virtualmachinescalesetextension_facts:
      redirect: community.azure.azure_rm_virtualmachinescalesetextension_facts
    azure_rm_loadbalancer_facts:
      redirect: community.azure.azure_rm_loadbalancer_facts
    azure_rm_roleassignment_facts:
      redirect: community.azure.azure_rm_roleassignment_facts
    azure_rm_manageddisk_facts:
      redirect: community.azure.azure_rm_manageddisk_facts
    azure_rm_mysqlserver_facts:
      redirect: community.azure.azure_rm_mysqlserver_facts
    azure_rm_servicebus_facts:
      redirect: community.azure.azure_rm_servicebus_facts
    azure_rm_rediscache_facts:
      redirect: community.azure.azure_rm_rediscache_facts
    azure_rm_resource_facts:
      redirect: community.azure.azure_rm_resource_facts
    azure_rm_routetable_facts:
      redirect: community.azure.azure_rm_routetable_facts
    azure_rm_virtualmachine_extension:
      redirect: community.azure.azure_rm_virtualmachine_extension
    azure_rm_loganalyticsworkspace_facts:
      redirect: community.azure.azure_rm_loganalyticsworkspace_facts
    azure_rm_sqldatabase_facts:
      redirect: community.azure.azure_rm_sqldatabase_facts
    azure_rm_devtestlabartifactsource_facts:
      redirect: community.azure.azure_rm_devtestlabartifactsource_facts
    azure_rm_deployment_facts:
      redirect: community.azure.azure_rm_deployment_facts
    azure_rm_virtualmachineextension_facts:
      redirect: community.azure.azure_rm_virtualmachineextension_facts
    azure_rm_applicationsecuritygroup_facts:
      redirect: community.azure.azure_rm_applicationsecuritygroup_facts
    azure_rm_availabilityset_facts:
      redirect: community.azure.azure_rm_availabilityset_facts
    azure_rm_mariadbdatabase_facts:
      redirect: community.azure.azure_rm_mariadbdatabase_facts
    azure_rm_devtestlabenvironment_facts:
      redirect: community.azure.azure_rm_devtestlabenvironment_facts
    azure_rm_appserviceplan_facts:
      redirect: community.azure.azure_rm_appserviceplan_facts
    azure_rm_containerinstance_facts:
      redirect: community.azure.azure_rm_containerinstance_facts
    azure_rm_devtestlabarmtemplate_facts:
      redirect: community.azure.azure_rm_devtestlabarmtemplate_facts
    azure_rm_devtestlabartifact_facts:
      redirect: community.azure.azure_rm_devtestlabartifact_facts
    azure_rm_virtualmachinescalesetinstance_facts:
      redirect: community.azure.azure_rm_virtualmachinescalesetinstance_facts
    azure_rm_cdnendpoint_facts:
      redirect: community.azure.azure_rm_cdnendpoint_facts
    azure_rm_trafficmanagerprofile_facts:
      redirect: community.azure.azure_rm_trafficmanagerprofile_facts
    azure_rm_functionapp_facts:
      redirect: community.azure.azure_rm_functionapp_facts
    azure_rm_virtualmachineimage_facts:
      redirect: community.azure.azure_rm_virtualmachineimage_facts
    azure_rm_mariadbconfiguration_facts:
      redirect: community.azure.azure_rm_mariadbconfiguration_facts
    azure_rm_virtualnetworkpeering_facts:
      redirect: community.azure.azure_rm_virtualnetworkpeering_facts
    azure_rm_sqlserver_facts:
      redirect: community.azure.azure_rm_sqlserver_facts
    azure_rm_mariadbfirewallrule_facts:
      redirect: community.azure.azure_rm_mariadbfirewallrule_facts
    azure_rm_mysqlconfiguration_facts:
      redirect: community.azure.azure_rm_mysqlconfiguration_facts
    azure_rm_mysqlfirewallrule_facts:
      redirect: community.azure.azure_rm_mysqlfirewallrule_facts
    azure_rm_postgresqlfirewallrule_facts:
      redirect: community.azure.azure_rm_postgresqlfirewallrule_facts
    azure_rm_mariadbserver_facts:
      redirect: community.azure.azure_rm_mariadbserver_facts
    azure_rm_postgresqldatabase_facts:
      redirect: community.azure.azure_rm_postgresqldatabase_facts
    azure_rm_devtestlabvirtualnetwork_facts:
      redirect: community.azure.azure_rm_devtestlabvirtualnetwork_facts
    azure_rm_devtestlabpolicy_facts:
      redirect: community.azure.azure_rm_devtestlabpolicy_facts
    azure_rm_trafficmanagerendpoint_facts:
      redirect: community.azure.azure_rm_trafficmanagerendpoint_facts
    azure_rm_sqlfirewallrule_facts:
      redirect: community.azure.azure_rm_sqlfirewallrule_facts
    azure_rm_containerregistry_facts:
      redirect: community.azure.azure_rm_containerregistry_facts
    azure_rm_postgresqlconfiguration_facts:
      redirect: community.azure.azure_rm_postgresqlconfiguration_facts
    azure_rm_postgresqlserver_facts:
      redirect: community.azure.azure_rm_postgresqlserver_facts
    azure_rm_devtestlab_facts:
      redirect: community.azure.azure_rm_devtestlab_facts
    azure_rm_cdnprofile_facts:
      redirect: community.azure.azure_rm_cdnprofile_facts
    azure_rm_virtualmachine_scaleset:
      redirect: community.azure.azure_rm_virtualmachine_scaleset
    azure_rm_webapp_facts:
      redirect: community.azure.azure_rm_webapp_facts
    azure_rm_devtestlabvirtualmachine_facts:
      redirect: community.azure.azure_rm_devtestlabvirtualmachine_facts
    azure_rm_image_facts:
      redirect: community.azure.azure_rm_image_facts
    azure_rm_managed_disk:
      redirect: community.azure.azure_rm_managed_disk
    azure_rm_automationaccount_facts:
      redirect: community.azure.azure_rm_automationaccount_facts
    azure_rm_lock_facts:
      redirect: community.azure.azure_rm_lock_facts
    azure_rm_managed_disk_facts:
      redirect: community.azure.azure_rm_managed_disk_facts
    azure_rm_resourcegroup_facts:
      redirect: community.azure.azure_rm_resourcegroup_facts
    azure_rm_virtualmachine_scaleset_facts:
      redirect: community.azure.azure_rm_virtualmachine_scaleset_facts
    snow_record:
      redirect: servicenow.servicenow.snow_record
    snow_record_find:
      redirect: servicenow.servicenow.snow_record_find
    aws_az_facts:
      redirect: amazon.aws.aws_az_facts
    aws_caller_facts:
      redirect: amazon.aws.aws_caller_facts
    cloudformation_facts:
      redirect: amazon.aws.cloudformation_facts
    ec2_ami_facts:
      redirect: amazon.aws.ec2_ami_facts
    ec2_eni_facts:
      redirect: amazon.aws.ec2_eni_facts
    ec2_group_facts:
      redirect: amazon.aws.ec2_group_facts
    ec2_snapshot_facts:
      redirect: amazon.aws.ec2_snapshot_facts
    ec2_vol_facts:
      redirect: amazon.aws.ec2_vol_facts
    ec2_vpc_dhcp_option_facts:
      redirect: amazon.aws.ec2_vpc_dhcp_option_facts
    ec2_vpc_net_facts:
      redirect: amazon.aws.ec2_vpc_net_facts
    ec2_vpc_subnet_facts:
      redirect: amazon.aws.ec2_vpc_subnet_facts
    aws_az_info:
      redirect: amazon.aws.aws_az_info
    aws_caller_info:
      redirect: amazon.aws.aws_caller_info
    aws_s3:
      redirect: amazon.aws.aws_s3
    cloudformation:
      redirect: amazon.aws.cloudformation
    cloudformation_info:
      redirect: amazon.aws.cloudformation_info
    ec2:
      redirect: amazon.aws.ec2
    ec2_ami:
      redirect: amazon.aws.ec2_ami
    ec2_ami_info:
      redirect: amazon.aws.ec2_ami_info
    ec2_elb_lb:
      redirect: amazon.aws.ec2_elb_lb
    ec2_eni:
      redirect: amazon.aws.ec2_eni
    ec2_eni_info:
      redirect: amazon.aws.ec2_eni_info
    ec2_group:
      redirect: amazon.aws.ec2_group
    ec2_group_info:
      redirect: amazon.aws.ec2_group_info
    ec2_key:
      redirect: amazon.aws.ec2_key
    ec2_metadata_facts:
      redirect: amazon.aws.ec2_metadata_facts
    ec2_snapshot:
      redirect: amazon.aws.ec2_snapshot
    ec2_snapshot_info:
      redirect: amazon.aws.ec2_snapshot_info
    ec2_tag:
      redirect: amazon.aws.ec2_tag
    ec2_tag_info:
      redirect: amazon.aws.ec2_tag_info
    ec2_vol:
      redirect: amazon.aws.ec2_vol
    ec2_vol_info:
      redirect: amazon.aws.ec2_vol_info
    ec2_vpc_dhcp_option:
      redirect: amazon.aws.ec2_vpc_dhcp_option
    ec2_vpc_dhcp_option_info:
      redirect: amazon.aws.ec2_vpc_dhcp_option_info
    ec2_vpc_net:
      redirect: amazon.aws.ec2_vpc_net
    ec2_vpc_net_info:
      redirect: amazon.aws.ec2_vpc_net_info
    ec2_vpc_subnet:
      redirect: amazon.aws.ec2_vpc_subnet
    ec2_vpc_subnet_info:
      redirect: amazon.aws.ec2_vpc_subnet_info
    s3_bucket:
      redirect: amazon.aws.s3_bucket
    telnet:
      redirect: ansible.netcommon.telnet
    cli_command:
      redirect: ansible.netcommon.cli_command
    cli_config:
      redirect: ansible.netcommon.cli_config
    net_put:
      redirect: ansible.netcommon.net_put
    net_get:
      redirect: ansible.netcommon.net_get
    net_linkagg:
      redirect: ansible.netcommon.net_linkagg
    net_interface:
      redirect: ansible.netcommon.net_interface
    net_lldp_interface:
      redirect: ansible.netcommon.net_lldp_interface
    net_vlan:
      redirect: ansible.netcommon.net_vlan
    net_l2_interface:
      redirect: ansible.netcommon.net_l2_interface
    net_l3_interface:
      redirect: ansible.netcommon.net_l3_interface
    net_vrf:
      redirect: ansible.netcommon.net_vrf
    netconf_config:
      redirect: ansible.netcommon.netconf_config
    netconf_rpc:
      redirect: ansible.netcommon.netconf_rpc
    netconf_get:
      redirect: ansible.netcommon.netconf_get
    net_lldp:
      redirect: ansible.netcommon.net_lldp
    restconf_get:
      redirect: ansible.netcommon.restconf_get
    restconf_config:
      redirect: ansible.netcommon.restconf_config
    net_static_route:
      redirect: ansible.netcommon.net_static_route
    net_system:
      redirect: ansible.netcommon.net_system
    net_logging:
      redirect: ansible.netcommon.net_logging
    net_user:
      redirect: ansible.netcommon.net_user
    net_ping:
      redirect: ansible.netcommon.net_ping
    net_banner:
      redirect: ansible.netcommon.net_banner
    acl:
      redirect: ansible.posix.acl
    synchronize:
      redirect: ansible.posix.synchronize
    at:
      redirect: ansible.posix.at
    authorized_key:
      redirect: ansible.posix.authorized_key
    mount:
      redirect: ansible.posix.mount
    seboolean:
      redirect: ansible.posix.seboolean
    selinux:
      redirect: ansible.posix.selinux
    sysctl:
      redirect: ansible.posix.sysctl
    async_status.ps1:
      redirect: ansible.windows.async_status
    setup.ps1:
      redirect: ansible.windows.setup
    slurp.ps1:
      redirect: ansible.windows.slurp
    win_acl:
      redirect: ansible.windows.win_acl
    win_acl_inheritance:
      redirect: ansible.windows.win_acl_inheritance
    win_certificate_store:
      redirect: ansible.windows.win_certificate_store
    win_command:
      redirect: ansible.windows.win_command
    win_copy:
      redirect: ansible.windows.win_copy
    win_dns_client:
      redirect: ansible.windows.win_dns_client
    win_domain:
      redirect: ansible.windows.win_domain
    win_domain_controller:
      redirect: ansible.windows.win_domain_controller
    win_domain_membership:
      redirect: ansible.windows.win_domain_membership
    win_dsc:
      redirect: ansible.windows.win_dsc
    win_environment:
      redirect: ansible.windows.win_environment
    win_feature:
      redirect: ansible.windows.win_feature
    win_file:
      redirect: ansible.windows.win_file
    win_find:
      redirect: ansible.windows.win_find
    win_get_url:
      redirect: ansible.windows.win_get_url
    win_group:
      redirect: ansible.windows.win_group
    win_group_membership:
      redirect: ansible.windows.win_group_membership
    win_hostname:
      redirect: ansible.windows.win_hostname
    win_optional_feature:
      redirect: ansible.windows.win_optional_feature
    win_owner:
      redirect: ansible.windows.win_owner
    win_package:
      redirect: ansible.windows.win_package
    win_path:
      redirect: ansible.windows.win_path
    win_ping:
      redirect: ansible.windows.win_ping
    win_reboot:
      redirect: ansible.windows.win_reboot
    win_reg_stat:
      redirect: ansible.windows.win_reg_stat
    win_regedit:
      redirect: ansible.windows.win_regedit
    win_service:
      redirect: ansible.windows.win_service
    win_share:
      redirect: ansible.windows.win_share
    win_shell:
      redirect: ansible.windows.win_shell
    win_stat:
      redirect: ansible.windows.win_stat
    win_tempfile:
      redirect: ansible.windows.win_tempfile
    win_template:
      redirect: ansible.windows.win_template
    win_updates:
      redirect: ansible.windows.win_updates
    win_uri:
      redirect: ansible.windows.win_uri
    win_user:
      redirect: ansible.windows.win_user
    win_user_right:
      redirect: ansible.windows.win_user_right
    win_wait_for:
      redirect: ansible.windows.win_wait_for
    win_whoami:
      redirect: ansible.windows.win_whoami
    fortios_address:
      redirect: fortinet.fortios.fortios_address
    fortios_alertemail_setting:
      redirect: fortinet.fortios.fortios_alertemail_setting
    fortios_antivirus_heuristic:
      redirect: fortinet.fortios.fortios_antivirus_heuristic
    fortios_antivirus_profile:
      redirect: fortinet.fortios.fortios_antivirus_profile
    fortios_antivirus_quarantine:
      redirect: fortinet.fortios.fortios_antivirus_quarantine
    fortios_antivirus_settings:
      redirect: fortinet.fortios.fortios_antivirus_settings
    fortios_application_custom:
      redirect: fortinet.fortios.fortios_application_custom
    fortios_application_group:
      redirect: fortinet.fortios.fortios_application_group
    fortios_application_list:
      redirect: fortinet.fortios.fortios_application_list
    fortios_application_name:
      redirect: fortinet.fortios.fortios_application_name
    fortios_application_rule_settings:
      redirect: fortinet.fortios.fortios_application_rule_settings
    fortios_authentication_rule:
      redirect: fortinet.fortios.fortios_authentication_rule
    fortios_authentication_scheme:
      redirect: fortinet.fortios.fortios_authentication_scheme
    fortios_authentication_setting:
      redirect: fortinet.fortios.fortios_authentication_setting
    fortios_config:
      redirect: fortinet.fortios.fortios_config
    fortios_dlp_filepattern:
      redirect: fortinet.fortios.fortios_dlp_filepattern
    fortios_dlp_fp_doc_source:
      redirect: fortinet.fortios.fortios_dlp_fp_doc_source
    fortios_dlp_fp_sensitivity:
      redirect: fortinet.fortios.fortios_dlp_fp_sensitivity
    fortios_dlp_sensor:
      redirect: fortinet.fortios.fortios_dlp_sensor
    fortios_dlp_settings:
      redirect: fortinet.fortios.fortios_dlp_settings
    fortios_dnsfilter_domain_filter:
      redirect: fortinet.fortios.fortios_dnsfilter_domain_filter
    fortios_dnsfilter_profile:
      redirect: fortinet.fortios.fortios_dnsfilter_profile
    fortios_endpoint_control_client:
      redirect: fortinet.fortios.fortios_endpoint_control_client
    fortios_endpoint_control_forticlient_ems:
      redirect: fortinet.fortios.fortios_endpoint_control_forticlient_ems
    fortios_endpoint_control_forticlient_registration_sync:
      redirect: fortinet.fortios.fortios_endpoint_control_forticlient_registration_sync
    fortios_endpoint_control_profile:
      redirect: fortinet.fortios.fortios_endpoint_control_profile
    fortios_endpoint_control_settings:
      redirect: fortinet.fortios.fortios_endpoint_control_settings
    fortios_extender_controller_extender:
      redirect: fortinet.fortios.fortios_extender_controller_extender
    fortios_facts:
      redirect: fortinet.fortios.fortios_facts
    fortios_firewall_address:
      redirect: fortinet.fortios.fortios_firewall_address
    fortios_firewall_address6:
      redirect: fortinet.fortios.fortios_firewall_address6
    fortios_firewall_address6_template:
      redirect: fortinet.fortios.fortios_firewall_address6_template
    fortios_firewall_addrgrp:
      redirect: fortinet.fortios.fortios_firewall_addrgrp
    fortios_firewall_addrgrp6:
      redirect: fortinet.fortios.fortios_firewall_addrgrp6
    fortios_firewall_auth_portal:
      redirect: fortinet.fortios.fortios_firewall_auth_portal
    fortios_firewall_central_snat_map:
      redirect: fortinet.fortios.fortios_firewall_central_snat_map
    fortios_firewall_DoS_policy:
      redirect: fortinet.fortios.fortios_firewall_DoS_policy
    fortios_firewall_DoS_policy6:
      redirect: fortinet.fortios.fortios_firewall_DoS_policy6
    fortios_firewall_dnstranslation:
      redirect: fortinet.fortios.fortios_firewall_dnstranslation
    fortios_firewall_identity_based_route:
      redirect: fortinet.fortios.fortios_firewall_identity_based_route
    fortios_firewall_interface_policy:
      redirect: fortinet.fortios.fortios_firewall_interface_policy
    fortios_firewall_interface_policy6:
      redirect: fortinet.fortios.fortios_firewall_interface_policy6
    fortios_firewall_internet_service:
      redirect: fortinet.fortios.fortios_firewall_internet_service
    fortios_firewall_internet_service_custom:
      redirect: fortinet.fortios.fortios_firewall_internet_service_custom
    fortios_firewall_internet_service_group:
      redirect: fortinet.fortios.fortios_firewall_internet_service_group
    fortios_firewall_ip_translation:
      redirect: fortinet.fortios.fortios_firewall_ip_translation
    fortios_firewall_ipmacbinding_setting:
      redirect: fortinet.fortios.fortios_firewall_ipmacbinding_setting
    fortios_firewall_ipmacbinding_table:
      redirect: fortinet.fortios.fortios_firewall_ipmacbinding_table
    fortios_firewall_ippool:
      redirect: fortinet.fortios.fortios_firewall_ippool
    fortios_firewall_ippool6:
      redirect: fortinet.fortios.fortios_firewall_ippool6
    fortios_firewall_ipv6_eh_filter:
      redirect: fortinet.fortios.fortios_firewall_ipv6_eh_filter
    fortios_firewall_ldb_monitor:
      redirect: fortinet.fortios.fortios_firewall_ldb_monitor
    fortios_firewall_local_in_policy:
      redirect: fortinet.fortios.fortios_firewall_local_in_policy
    fortios_firewall_local_in_policy6:
      redirect: fortinet.fortios.fortios_firewall_local_in_policy6
    fortios_firewall_multicast_address:
      redirect: fortinet.fortios.fortios_firewall_multicast_address
    fortios_firewall_multicast_address6:
      redirect: fortinet.fortios.fortios_firewall_multicast_address6
    fortios_firewall_multicast_policy:
      redirect: fortinet.fortios.fortios_firewall_multicast_policy
    fortios_firewall_multicast_policy6:
      redirect: fortinet.fortios.fortios_firewall_multicast_policy6
    fortios_firewall_policy:
      redirect: fortinet.fortios.fortios_firewall_policy
fortinet.fortios.fortios_firewall_policy fortios_firewall_policy46: redirect: fortinet.fortios.fortios_firewall_policy46 fortios_firewall_policy6: redirect: fortinet.fortios.fortios_firewall_policy6 fortios_firewall_policy64: redirect: fortinet.fortios.fortios_firewall_policy64 fortios_firewall_profile_group: redirect: fortinet.fortios.fortios_firewall_profile_group fortios_firewall_profile_protocol_options: redirect: fortinet.fortios.fortios_firewall_profile_protocol_options fortios_firewall_proxy_address: redirect: fortinet.fortios.fortios_firewall_proxy_address fortios_firewall_proxy_addrgrp: redirect: fortinet.fortios.fortios_firewall_proxy_addrgrp fortios_firewall_proxy_policy: redirect: fortinet.fortios.fortios_firewall_proxy_policy fortios_firewall_schedule_group: redirect: fortinet.fortios.fortios_firewall_schedule_group fortios_firewall_schedule_onetime: redirect: fortinet.fortios.fortios_firewall_schedule_onetime fortios_firewall_schedule_recurring: redirect: fortinet.fortios.fortios_firewall_schedule_recurring fortios_firewall_service_category: redirect: fortinet.fortios.fortios_firewall_service_category fortios_firewall_service_custom: redirect: fortinet.fortios.fortios_firewall_service_custom fortios_firewall_service_group: redirect: fortinet.fortios.fortios_firewall_service_group fortios_firewall_shaper_per_ip_shaper: redirect: fortinet.fortios.fortios_firewall_shaper_per_ip_shaper fortios_firewall_shaper_traffic_shaper: redirect: fortinet.fortios.fortios_firewall_shaper_traffic_shaper fortios_firewall_shaping_policy: redirect: fortinet.fortios.fortios_firewall_shaping_policy fortios_firewall_shaping_profile: redirect: fortinet.fortios.fortios_firewall_shaping_profile fortios_firewall_sniffer: redirect: fortinet.fortios.fortios_firewall_sniffer fortios_firewall_ssh_host_key: redirect: fortinet.fortios.fortios_firewall_ssh_host_key fortios_firewall_ssh_local_ca: redirect: fortinet.fortios.fortios_firewall_ssh_local_ca fortios_firewall_ssh_local_key: redirect: fortinet.fortios.fortios_firewall_ssh_local_key fortios_firewall_ssh_setting: redirect: fortinet.fortios.fortios_firewall_ssh_setting fortios_firewall_ssl_server: redirect: fortinet.fortios.fortios_firewall_ssl_server fortios_firewall_ssl_setting: redirect: fortinet.fortios.fortios_firewall_ssl_setting fortios_firewall_ssl_ssh_profile: redirect: fortinet.fortios.fortios_firewall_ssl_ssh_profile fortios_firewall_ttl_policy: redirect: fortinet.fortios.fortios_firewall_ttl_policy fortios_firewall_vip: redirect: fortinet.fortios.fortios_firewall_vip fortios_firewall_vip46: redirect: fortinet.fortios.fortios_firewall_vip46 fortios_firewall_vip6: redirect: fortinet.fortios.fortios_firewall_vip6 fortios_firewall_vip64: redirect: fortinet.fortios.fortios_firewall_vip64 fortios_firewall_vipgrp: redirect: fortinet.fortios.fortios_firewall_vipgrp fortios_firewall_vipgrp46: redirect: fortinet.fortios.fortios_firewall_vipgrp46 fortios_firewall_vipgrp6: redirect: fortinet.fortios.fortios_firewall_vipgrp6 fortios_firewall_vipgrp64: redirect: fortinet.fortios.fortios_firewall_vipgrp64 fortios_firewall_wildcard_fqdn_custom: redirect: fortinet.fortios.fortios_firewall_wildcard_fqdn_custom fortios_firewall_wildcard_fqdn_group: redirect: fortinet.fortios.fortios_firewall_wildcard_fqdn_group fortios_ftp_proxy_explicit: redirect: fortinet.fortios.fortios_ftp_proxy_explicit fortios_icap_profile: redirect: fortinet.fortios.fortios_icap_profile fortios_icap_server: redirect: fortinet.fortios.fortios_icap_server fortios_ips_custom: redirect: 
fortinet.fortios.fortios_ips_custom fortios_ips_decoder: redirect: fortinet.fortios.fortios_ips_decoder fortios_ips_global: redirect: fortinet.fortios.fortios_ips_global fortios_ips_rule: redirect: fortinet.fortios.fortios_ips_rule fortios_ips_rule_settings: redirect: fortinet.fortios.fortios_ips_rule_settings fortios_ips_sensor: redirect: fortinet.fortios.fortios_ips_sensor fortios_ips_settings: redirect: fortinet.fortios.fortios_ips_settings fortios_ipv4_policy: redirect: fortinet.fortios.fortios_ipv4_policy fortios_log_custom_field: redirect: fortinet.fortios.fortios_log_custom_field fortios_log_disk_filter: redirect: fortinet.fortios.fortios_log_disk_filter fortios_log_disk_setting: redirect: fortinet.fortios.fortios_log_disk_setting fortios_log_eventfilter: redirect: fortinet.fortios.fortios_log_eventfilter fortios_log_fortianalyzer2_filter: redirect: fortinet.fortios.fortios_log_fortianalyzer2_filter fortios_log_fortianalyzer2_setting: redirect: fortinet.fortios.fortios_log_fortianalyzer2_setting fortios_log_fortianalyzer3_filter: redirect: fortinet.fortios.fortios_log_fortianalyzer3_filter fortios_log_fortianalyzer3_setting: redirect: fortinet.fortios.fortios_log_fortianalyzer3_setting fortios_log_fortianalyzer_filter: redirect: fortinet.fortios.fortios_log_fortianalyzer_filter fortios_log_fortianalyzer_override_filter: redirect: fortinet.fortios.fortios_log_fortianalyzer_override_filter fortios_log_fortianalyzer_override_setting: redirect: fortinet.fortios.fortios_log_fortianalyzer_override_setting fortios_log_fortianalyzer_setting: redirect: fortinet.fortios.fortios_log_fortianalyzer_setting fortios_log_fortiguard_filter: redirect: fortinet.fortios.fortios_log_fortiguard_filter fortios_log_fortiguard_override_filter: redirect: fortinet.fortios.fortios_log_fortiguard_override_filter fortios_log_fortiguard_override_setting: redirect: fortinet.fortios.fortios_log_fortiguard_override_setting fortios_log_fortiguard_setting: redirect: fortinet.fortios.fortios_log_fortiguard_setting fortios_log_gui_display: redirect: fortinet.fortios.fortios_log_gui_display fortios_log_memory_filter: redirect: fortinet.fortios.fortios_log_memory_filter fortios_log_memory_global_setting: redirect: fortinet.fortios.fortios_log_memory_global_setting fortios_log_memory_setting: redirect: fortinet.fortios.fortios_log_memory_setting fortios_log_null_device_filter: redirect: fortinet.fortios.fortios_log_null_device_filter fortios_log_null_device_setting: redirect: fortinet.fortios.fortios_log_null_device_setting fortios_log_setting: redirect: fortinet.fortios.fortios_log_setting fortios_log_syslogd2_filter: redirect: fortinet.fortios.fortios_log_syslogd2_filter fortios_log_syslogd2_setting: redirect: fortinet.fortios.fortios_log_syslogd2_setting fortios_log_syslogd3_filter: redirect: fortinet.fortios.fortios_log_syslogd3_filter fortios_log_syslogd3_setting: redirect: fortinet.fortios.fortios_log_syslogd3_setting fortios_log_syslogd4_filter: redirect: fortinet.fortios.fortios_log_syslogd4_filter fortios_log_syslogd4_setting: redirect: fortinet.fortios.fortios_log_syslogd4_setting fortios_log_syslogd_filter: redirect: fortinet.fortios.fortios_log_syslogd_filter fortios_log_syslogd_override_filter: redirect: fortinet.fortios.fortios_log_syslogd_override_filter fortios_log_syslogd_override_setting: redirect: fortinet.fortios.fortios_log_syslogd_override_setting fortios_log_syslogd_setting: redirect: fortinet.fortios.fortios_log_syslogd_setting fortios_log_threat_weight: redirect: 
fortinet.fortios.fortios_log_threat_weight fortios_log_webtrends_filter: redirect: fortinet.fortios.fortios_log_webtrends_filter fortios_log_webtrends_setting: redirect: fortinet.fortios.fortios_log_webtrends_setting fortios_report_chart: redirect: fortinet.fortios.fortios_report_chart fortios_report_dataset: redirect: fortinet.fortios.fortios_report_dataset fortios_report_layout: redirect: fortinet.fortios.fortios_report_layout fortios_report_setting: redirect: fortinet.fortios.fortios_report_setting fortios_report_style: redirect: fortinet.fortios.fortios_report_style fortios_report_theme: redirect: fortinet.fortios.fortios_report_theme fortios_router_access_list: redirect: fortinet.fortios.fortios_router_access_list fortios_router_access_list6: redirect: fortinet.fortios.fortios_router_access_list6 fortios_router_aspath_list: redirect: fortinet.fortios.fortios_router_aspath_list fortios_router_auth_path: redirect: fortinet.fortios.fortios_router_auth_path fortios_router_bfd: redirect: fortinet.fortios.fortios_router_bfd fortios_router_bfd6: redirect: fortinet.fortios.fortios_router_bfd6 fortios_router_bgp: redirect: fortinet.fortios.fortios_router_bgp fortios_router_community_list: redirect: fortinet.fortios.fortios_router_community_list fortios_router_isis: redirect: fortinet.fortios.fortios_router_isis fortios_router_key_chain: redirect: fortinet.fortios.fortios_router_key_chain fortios_router_multicast: redirect: fortinet.fortios.fortios_router_multicast fortios_router_multicast6: redirect: fortinet.fortios.fortios_router_multicast6 fortios_router_multicast_flow: redirect: fortinet.fortios.fortios_router_multicast_flow fortios_router_ospf: redirect: fortinet.fortios.fortios_router_ospf fortios_router_ospf6: redirect: fortinet.fortios.fortios_router_ospf6 fortios_router_policy: redirect: fortinet.fortios.fortios_router_policy fortios_router_policy6: redirect: fortinet.fortios.fortios_router_policy6 fortios_router_prefix_list: redirect: fortinet.fortios.fortios_router_prefix_list fortios_router_prefix_list6: redirect: fortinet.fortios.fortios_router_prefix_list6 fortios_router_rip: redirect: fortinet.fortios.fortios_router_rip fortios_router_ripng: redirect: fortinet.fortios.fortios_router_ripng fortios_router_route_map: redirect: fortinet.fortios.fortios_router_route_map fortios_router_setting: redirect: fortinet.fortios.fortios_router_setting fortios_router_static: redirect: fortinet.fortios.fortios_router_static fortios_router_static6: redirect: fortinet.fortios.fortios_router_static6 fortios_spamfilter_bwl: redirect: fortinet.fortios.fortios_spamfilter_bwl fortios_spamfilter_bword: redirect: fortinet.fortios.fortios_spamfilter_bword fortios_spamfilter_dnsbl: redirect: fortinet.fortios.fortios_spamfilter_dnsbl fortios_spamfilter_fortishield: redirect: fortinet.fortios.fortios_spamfilter_fortishield fortios_spamfilter_iptrust: redirect: fortinet.fortios.fortios_spamfilter_iptrust fortios_spamfilter_mheader: redirect: fortinet.fortios.fortios_spamfilter_mheader fortios_spamfilter_options: redirect: fortinet.fortios.fortios_spamfilter_options fortios_spamfilter_profile: redirect: fortinet.fortios.fortios_spamfilter_profile fortios_ssh_filter_profile: redirect: fortinet.fortios.fortios_ssh_filter_profile fortios_switch_controller_802_1X_settings: redirect: fortinet.fortios.fortios_switch_controller_802_1X_settings fortios_switch_controller_custom_command: redirect: fortinet.fortios.fortios_switch_controller_custom_command fortios_switch_controller_global: redirect: 
fortinet.fortios.fortios_switch_controller_global fortios_switch_controller_igmp_snooping: redirect: fortinet.fortios.fortios_switch_controller_igmp_snooping fortios_switch_controller_lldp_profile: redirect: fortinet.fortios.fortios_switch_controller_lldp_profile fortios_switch_controller_lldp_settings: redirect: fortinet.fortios.fortios_switch_controller_lldp_settings fortios_switch_controller_mac_sync_settings: redirect: fortinet.fortios.fortios_switch_controller_mac_sync_settings fortios_switch_controller_managed_switch: redirect: fortinet.fortios.fortios_switch_controller_managed_switch fortios_switch_controller_network_monitor_settings: redirect: fortinet.fortios.fortios_switch_controller_network_monitor_settings fortios_switch_controller_qos_dot1p_map: redirect: fortinet.fortios.fortios_switch_controller_qos_dot1p_map fortios_switch_controller_qos_ip_dscp_map: redirect: fortinet.fortios.fortios_switch_controller_qos_ip_dscp_map fortios_switch_controller_qos_qos_policy: redirect: fortinet.fortios.fortios_switch_controller_qos_qos_policy fortios_switch_controller_qos_queue_policy: redirect: fortinet.fortios.fortios_switch_controller_qos_queue_policy fortios_switch_controller_quarantine: redirect: fortinet.fortios.fortios_switch_controller_quarantine fortios_switch_controller_security_policy_802_1X: redirect: fortinet.fortios.fortios_switch_controller_security_policy_802_1X fortios_switch_controller_security_policy_captive_portal: redirect: fortinet.fortios.fortios_switch_controller_security_policy_captive_portal fortios_switch_controller_sflow: redirect: fortinet.fortios.fortios_switch_controller_sflow fortios_switch_controller_storm_control: redirect: fortinet.fortios.fortios_switch_controller_storm_control fortios_switch_controller_stp_settings: redirect: fortinet.fortios.fortios_switch_controller_stp_settings fortios_switch_controller_switch_group: redirect: fortinet.fortios.fortios_switch_controller_switch_group fortios_switch_controller_switch_interface_tag: redirect: fortinet.fortios.fortios_switch_controller_switch_interface_tag fortios_switch_controller_switch_log: redirect: fortinet.fortios.fortios_switch_controller_switch_log fortios_switch_controller_switch_profile: redirect: fortinet.fortios.fortios_switch_controller_switch_profile fortios_switch_controller_system: redirect: fortinet.fortios.fortios_switch_controller_system fortios_switch_controller_virtual_port_pool: redirect: fortinet.fortios.fortios_switch_controller_virtual_port_pool fortios_switch_controller_vlan: redirect: fortinet.fortios.fortios_switch_controller_vlan fortios_system_accprofile: redirect: fortinet.fortios.fortios_system_accprofile fortios_system_admin: redirect: fortinet.fortios.fortios_system_admin fortios_system_affinity_interrupt: redirect: fortinet.fortios.fortios_system_affinity_interrupt fortios_system_affinity_packet_redistribution: redirect: fortinet.fortios.fortios_system_affinity_packet_redistribution fortios_system_alarm: redirect: fortinet.fortios.fortios_system_alarm fortios_system_alias: redirect: fortinet.fortios.fortios_system_alias fortios_system_api_user: redirect: fortinet.fortios.fortios_system_api_user fortios_system_arp_table: redirect: fortinet.fortios.fortios_system_arp_table fortios_system_auto_install: redirect: fortinet.fortios.fortios_system_auto_install fortios_system_auto_script: redirect: fortinet.fortios.fortios_system_auto_script fortios_system_automation_action: redirect: fortinet.fortios.fortios_system_automation_action fortios_system_automation_destination: 
redirect: fortinet.fortios.fortios_system_automation_destination fortios_system_automation_stitch: redirect: fortinet.fortios.fortios_system_automation_stitch fortios_system_automation_trigger: redirect: fortinet.fortios.fortios_system_automation_trigger fortios_system_autoupdate_push_update: redirect: fortinet.fortios.fortios_system_autoupdate_push_update fortios_system_autoupdate_schedule: redirect: fortinet.fortios.fortios_system_autoupdate_schedule fortios_system_autoupdate_tunneling: redirect: fortinet.fortios.fortios_system_autoupdate_tunneling fortios_system_central_management: redirect: fortinet.fortios.fortios_system_central_management fortios_system_cluster_sync: redirect: fortinet.fortios.fortios_system_cluster_sync fortios_system_console: redirect: fortinet.fortios.fortios_system_console fortios_system_csf: redirect: fortinet.fortios.fortios_system_csf fortios_system_custom_language: redirect: fortinet.fortios.fortios_system_custom_language fortios_system_ddns: redirect: fortinet.fortios.fortios_system_ddns fortios_system_dedicated_mgmt: redirect: fortinet.fortios.fortios_system_dedicated_mgmt fortios_system_dhcp6_server: redirect: fortinet.fortios.fortios_system_dhcp6_server fortios_system_dhcp_server: redirect: fortinet.fortios.fortios_system_dhcp_server fortios_system_dns: redirect: fortinet.fortios.fortios_system_dns fortios_system_dns_database: redirect: fortinet.fortios.fortios_system_dns_database fortios_system_dns_server: redirect: fortinet.fortios.fortios_system_dns_server fortios_system_dscp_based_priority: redirect: fortinet.fortios.fortios_system_dscp_based_priority fortios_system_email_server: redirect: fortinet.fortios.fortios_system_email_server fortios_system_external_resource: redirect: fortinet.fortios.fortios_system_external_resource fortios_system_fips_cc: redirect: fortinet.fortios.fortios_system_fips_cc fortios_system_firmware_upgrade: redirect: fortinet.fortios.fortios_system_firmware_upgrade fortios_system_fm: redirect: fortinet.fortios.fortios_system_fm fortios_system_fortiguard: redirect: fortinet.fortios.fortios_system_fortiguard fortios_system_fortimanager: redirect: fortinet.fortios.fortios_system_fortimanager fortios_system_fortisandbox: redirect: fortinet.fortios.fortios_system_fortisandbox fortios_system_fsso_polling: redirect: fortinet.fortios.fortios_system_fsso_polling fortios_system_ftm_push: redirect: fortinet.fortios.fortios_system_ftm_push fortios_system_geoip_override: redirect: fortinet.fortios.fortios_system_geoip_override fortios_system_global: redirect: fortinet.fortios.fortios_system_global fortios_system_gre_tunnel: redirect: fortinet.fortios.fortios_system_gre_tunnel fortios_system_ha: redirect: fortinet.fortios.fortios_system_ha fortios_system_ha_monitor: redirect: fortinet.fortios.fortios_system_ha_monitor fortios_system_interface: redirect: fortinet.fortios.fortios_system_interface fortios_system_ipip_tunnel: redirect: fortinet.fortios.fortios_system_ipip_tunnel fortios_system_ips_urlfilter_dns: redirect: fortinet.fortios.fortios_system_ips_urlfilter_dns fortios_system_ips_urlfilter_dns6: redirect: fortinet.fortios.fortios_system_ips_urlfilter_dns6 fortios_system_ipv6_neighbor_cache: redirect: fortinet.fortios.fortios_system_ipv6_neighbor_cache fortios_system_ipv6_tunnel: redirect: fortinet.fortios.fortios_system_ipv6_tunnel fortios_system_link_monitor: redirect: fortinet.fortios.fortios_system_link_monitor fortios_system_mac_address_table: redirect: fortinet.fortios.fortios_system_mac_address_table 
fortios_system_management_tunnel: redirect: fortinet.fortios.fortios_system_management_tunnel fortios_system_mobile_tunnel: redirect: fortinet.fortios.fortios_system_mobile_tunnel fortios_system_nat64: redirect: fortinet.fortios.fortios_system_nat64 fortios_system_nd_proxy: redirect: fortinet.fortios.fortios_system_nd_proxy fortios_system_netflow: redirect: fortinet.fortios.fortios_system_netflow fortios_system_network_visibility: redirect: fortinet.fortios.fortios_system_network_visibility fortios_system_ntp: redirect: fortinet.fortios.fortios_system_ntp fortios_system_object_tagging: redirect: fortinet.fortios.fortios_system_object_tagging fortios_system_password_policy: redirect: fortinet.fortios.fortios_system_password_policy fortios_system_password_policy_guest_admin: redirect: fortinet.fortios.fortios_system_password_policy_guest_admin fortios_system_pppoe_interface: redirect: fortinet.fortios.fortios_system_pppoe_interface fortios_system_probe_response: redirect: fortinet.fortios.fortios_system_probe_response fortios_system_proxy_arp: redirect: fortinet.fortios.fortios_system_proxy_arp fortios_system_replacemsg_admin: redirect: fortinet.fortios.fortios_system_replacemsg_admin fortios_system_replacemsg_alertmail: redirect: fortinet.fortios.fortios_system_replacemsg_alertmail fortios_system_replacemsg_auth: redirect: fortinet.fortios.fortios_system_replacemsg_auth fortios_system_replacemsg_device_detection_portal: redirect: fortinet.fortios.fortios_system_replacemsg_device_detection_portal fortios_system_replacemsg_ec: redirect: fortinet.fortios.fortios_system_replacemsg_ec fortios_system_replacemsg_fortiguard_wf: redirect: fortinet.fortios.fortios_system_replacemsg_fortiguard_wf fortios_system_replacemsg_ftp: redirect: fortinet.fortios.fortios_system_replacemsg_ftp fortios_system_replacemsg_group: redirect: fortinet.fortios.fortios_system_replacemsg_group fortios_system_replacemsg_http: redirect: fortinet.fortios.fortios_system_replacemsg_http fortios_system_replacemsg_icap: redirect: fortinet.fortios.fortios_system_replacemsg_icap fortios_system_replacemsg_image: redirect: fortinet.fortios.fortios_system_replacemsg_image fortios_system_replacemsg_mail: redirect: fortinet.fortios.fortios_system_replacemsg_mail fortios_system_replacemsg_nac_quar: redirect: fortinet.fortios.fortios_system_replacemsg_nac_quar fortios_system_replacemsg_nntp: redirect: fortinet.fortios.fortios_system_replacemsg_nntp fortios_system_replacemsg_spam: redirect: fortinet.fortios.fortios_system_replacemsg_spam fortios_system_replacemsg_sslvpn: redirect: fortinet.fortios.fortios_system_replacemsg_sslvpn fortios_system_replacemsg_traffic_quota: redirect: fortinet.fortios.fortios_system_replacemsg_traffic_quota fortios_system_replacemsg_utm: redirect: fortinet.fortios.fortios_system_replacemsg_utm fortios_system_replacemsg_webproxy: redirect: fortinet.fortios.fortios_system_replacemsg_webproxy fortios_system_resource_limits: redirect: fortinet.fortios.fortios_system_resource_limits fortios_system_sdn_connector: redirect: fortinet.fortios.fortios_system_sdn_connector fortios_system_session_helper: redirect: fortinet.fortios.fortios_system_session_helper fortios_system_session_ttl: redirect: fortinet.fortios.fortios_system_session_ttl fortios_system_settings: redirect: fortinet.fortios.fortios_system_settings fortios_system_sflow: redirect: fortinet.fortios.fortios_system_sflow fortios_system_sit_tunnel: redirect: fortinet.fortios.fortios_system_sit_tunnel fortios_system_sms_server: redirect: 
fortinet.fortios.fortios_system_sms_server fortios_system_snmp_community: redirect: fortinet.fortios.fortios_system_snmp_community fortios_system_snmp_sysinfo: redirect: fortinet.fortios.fortios_system_snmp_sysinfo fortios_system_snmp_user: redirect: fortinet.fortios.fortios_system_snmp_user fortios_system_storage: redirect: fortinet.fortios.fortios_system_storage fortios_system_switch_interface: redirect: fortinet.fortios.fortios_system_switch_interface fortios_system_tos_based_priority: redirect: fortinet.fortios.fortios_system_tos_based_priority fortios_system_vdom: redirect: fortinet.fortios.fortios_system_vdom fortios_system_vdom_dns: redirect: fortinet.fortios.fortios_system_vdom_dns fortios_system_vdom_exception: redirect: fortinet.fortios.fortios_system_vdom_exception fortios_system_vdom_link: redirect: fortinet.fortios.fortios_system_vdom_link fortios_system_vdom_netflow: redirect: fortinet.fortios.fortios_system_vdom_netflow fortios_system_vdom_property: redirect: fortinet.fortios.fortios_system_vdom_property fortios_system_vdom_radius_server: redirect: fortinet.fortios.fortios_system_vdom_radius_server fortios_system_vdom_sflow: redirect: fortinet.fortios.fortios_system_vdom_sflow fortios_system_virtual_wan_link: redirect: fortinet.fortios.fortios_system_virtual_wan_link fortios_system_virtual_wire_pair: redirect: fortinet.fortios.fortios_system_virtual_wire_pair fortios_system_vxlan: redirect: fortinet.fortios.fortios_system_vxlan fortios_system_wccp: redirect: fortinet.fortios.fortios_system_wccp fortios_system_zone: redirect: fortinet.fortios.fortios_system_zone fortios_user_adgrp: redirect: fortinet.fortios.fortios_user_adgrp fortios_user_device: redirect: fortinet.fortios.fortios_user_device fortios_user_device_access_list: redirect: fortinet.fortios.fortios_user_device_access_list fortios_user_device_category: redirect: fortinet.fortios.fortios_user_device_category fortios_user_device_group: redirect: fortinet.fortios.fortios_user_device_group fortios_user_domain_controller: redirect: fortinet.fortios.fortios_user_domain_controller fortios_user_fortitoken: redirect: fortinet.fortios.fortios_user_fortitoken fortios_user_fsso: redirect: fortinet.fortios.fortios_user_fsso fortios_user_fsso_polling: redirect: fortinet.fortios.fortios_user_fsso_polling fortios_user_group: redirect: fortinet.fortios.fortios_user_group fortios_user_krb_keytab: redirect: fortinet.fortios.fortios_user_krb_keytab fortios_user_ldap: redirect: fortinet.fortios.fortios_user_ldap fortios_user_local: redirect: fortinet.fortios.fortios_user_local fortios_user_password_policy: redirect: fortinet.fortios.fortios_user_password_policy fortios_user_peer: redirect: fortinet.fortios.fortios_user_peer fortios_user_peergrp: redirect: fortinet.fortios.fortios_user_peergrp fortios_user_pop3: redirect: fortinet.fortios.fortios_user_pop3 fortios_user_quarantine: redirect: fortinet.fortios.fortios_user_quarantine fortios_user_radius: redirect: fortinet.fortios.fortios_user_radius fortios_user_security_exempt_list: redirect: fortinet.fortios.fortios_user_security_exempt_list fortios_user_setting: redirect: fortinet.fortios.fortios_user_setting fortios_user_tacacsplus: redirect: fortinet.fortios.fortios_user_tacacsplus fortios_voip_profile: redirect: fortinet.fortios.fortios_voip_profile fortios_vpn_certificate_ca: redirect: fortinet.fortios.fortios_vpn_certificate_ca fortios_vpn_certificate_crl: redirect: fortinet.fortios.fortios_vpn_certificate_crl fortios_vpn_certificate_local: redirect: 
fortinet.fortios.fortios_vpn_certificate_local fortios_vpn_certificate_ocsp_server: redirect: fortinet.fortios.fortios_vpn_certificate_ocsp_server fortios_vpn_certificate_remote: redirect: fortinet.fortios.fortios_vpn_certificate_remote fortios_vpn_certificate_setting: redirect: fortinet.fortios.fortios_vpn_certificate_setting fortios_vpn_ipsec_concentrator: redirect: fortinet.fortios.fortios_vpn_ipsec_concentrator fortios_vpn_ipsec_forticlient: redirect: fortinet.fortios.fortios_vpn_ipsec_forticlient fortios_vpn_ipsec_manualkey: redirect: fortinet.fortios.fortios_vpn_ipsec_manualkey fortios_vpn_ipsec_manualkey_interface: redirect: fortinet.fortios.fortios_vpn_ipsec_manualkey_interface fortios_vpn_ipsec_phase1: redirect: fortinet.fortios.fortios_vpn_ipsec_phase1 fortios_vpn_ipsec_phase1_interface: redirect: fortinet.fortios.fortios_vpn_ipsec_phase1_interface fortios_vpn_ipsec_phase2: redirect: fortinet.fortios.fortios_vpn_ipsec_phase2 fortios_vpn_ipsec_phase2_interface: redirect: fortinet.fortios.fortios_vpn_ipsec_phase2_interface fortios_vpn_l2tp: redirect: fortinet.fortios.fortios_vpn_l2tp fortios_vpn_pptp: redirect: fortinet.fortios.fortios_vpn_pptp fortios_vpn_ssl_settings: redirect: fortinet.fortios.fortios_vpn_ssl_settings fortios_vpn_ssl_web_host_check_software: redirect: fortinet.fortios.fortios_vpn_ssl_web_host_check_software fortios_vpn_ssl_web_portal: redirect: fortinet.fortios.fortios_vpn_ssl_web_portal fortios_vpn_ssl_web_realm: redirect: fortinet.fortios.fortios_vpn_ssl_web_realm fortios_vpn_ssl_web_user_bookmark: redirect: fortinet.fortios.fortios_vpn_ssl_web_user_bookmark fortios_vpn_ssl_web_user_group_bookmark: redirect: fortinet.fortios.fortios_vpn_ssl_web_user_group_bookmark fortios_waf_main_class: redirect: fortinet.fortios.fortios_waf_main_class fortios_waf_profile: redirect: fortinet.fortios.fortios_waf_profile fortios_waf_signature: redirect: fortinet.fortios.fortios_waf_signature fortios_waf_sub_class: redirect: fortinet.fortios.fortios_waf_sub_class fortios_wanopt_auth_group: redirect: fortinet.fortios.fortios_wanopt_auth_group fortios_wanopt_cache_service: redirect: fortinet.fortios.fortios_wanopt_cache_service fortios_wanopt_content_delivery_network_rule: redirect: fortinet.fortios.fortios_wanopt_content_delivery_network_rule fortios_wanopt_peer: redirect: fortinet.fortios.fortios_wanopt_peer fortios_wanopt_profile: redirect: fortinet.fortios.fortios_wanopt_profile fortios_wanopt_remote_storage: redirect: fortinet.fortios.fortios_wanopt_remote_storage fortios_wanopt_settings: redirect: fortinet.fortios.fortios_wanopt_settings fortios_wanopt_webcache: redirect: fortinet.fortios.fortios_wanopt_webcache fortios_web_proxy_debug_url: redirect: fortinet.fortios.fortios_web_proxy_debug_url fortios_web_proxy_explicit: redirect: fortinet.fortios.fortios_web_proxy_explicit fortios_web_proxy_forward_server: redirect: fortinet.fortios.fortios_web_proxy_forward_server fortios_web_proxy_forward_server_group: redirect: fortinet.fortios.fortios_web_proxy_forward_server_group fortios_web_proxy_global: redirect: fortinet.fortios.fortios_web_proxy_global fortios_web_proxy_profile: redirect: fortinet.fortios.fortios_web_proxy_profile fortios_web_proxy_url_match: redirect: fortinet.fortios.fortios_web_proxy_url_match fortios_web_proxy_wisp: redirect: fortinet.fortios.fortios_web_proxy_wisp fortios_webfilter: redirect: fortinet.fortios.fortios_webfilter fortios_webfilter_content: redirect: fortinet.fortios.fortios_webfilter_content fortios_webfilter_content_header: redirect: 
fortinet.fortios.fortios_webfilter_content_header fortios_webfilter_fortiguard: redirect: fortinet.fortios.fortios_webfilter_fortiguard fortios_webfilter_ftgd_local_cat: redirect: fortinet.fortios.fortios_webfilter_ftgd_local_cat fortios_webfilter_ftgd_local_rating: redirect: fortinet.fortios.fortios_webfilter_ftgd_local_rating fortios_webfilter_ips_urlfilter_cache_setting: redirect: fortinet.fortios.fortios_webfilter_ips_urlfilter_cache_setting fortios_webfilter_ips_urlfilter_setting: redirect: fortinet.fortios.fortios_webfilter_ips_urlfilter_setting fortios_webfilter_ips_urlfilter_setting6: redirect: fortinet.fortios.fortios_webfilter_ips_urlfilter_setting6 fortios_webfilter_override: redirect: fortinet.fortios.fortios_webfilter_override fortios_webfilter_profile: redirect: fortinet.fortios.fortios_webfilter_profile fortios_webfilter_search_engine: redirect: fortinet.fortios.fortios_webfilter_search_engine fortios_webfilter_urlfilter: redirect: fortinet.fortios.fortios_webfilter_urlfilter fortios_wireless_controller_ap_status: redirect: fortinet.fortios.fortios_wireless_controller_ap_status fortios_wireless_controller_ble_profile: redirect: fortinet.fortios.fortios_wireless_controller_ble_profile fortios_wireless_controller_bonjour_profile: redirect: fortinet.fortios.fortios_wireless_controller_bonjour_profile fortios_wireless_controller_global: redirect: fortinet.fortios.fortios_wireless_controller_global fortios_wireless_controller_hotspot20_anqp_3gpp_cellular: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_3gpp_cellular fortios_wireless_controller_hotspot20_anqp_ip_address_type: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_ip_address_type fortios_wireless_controller_hotspot20_anqp_nai_realm: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_nai_realm fortios_wireless_controller_hotspot20_anqp_network_auth_type: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_network_auth_type fortios_wireless_controller_hotspot20_anqp_roaming_consortium: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_roaming_consortium fortios_wireless_controller_hotspot20_anqp_venue_name: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_anqp_venue_name fortios_wireless_controller_hotspot20_h2qp_conn_capability: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_h2qp_conn_capability fortios_wireless_controller_hotspot20_h2qp_operator_name: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_h2qp_operator_name fortios_wireless_controller_hotspot20_h2qp_osu_provider: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_h2qp_osu_provider fortios_wireless_controller_hotspot20_h2qp_wan_metric: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_h2qp_wan_metric fortios_wireless_controller_hotspot20_hs_profile: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_hs_profile fortios_wireless_controller_hotspot20_icon: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_icon fortios_wireless_controller_hotspot20_qos_map: redirect: fortinet.fortios.fortios_wireless_controller_hotspot20_qos_map fortios_wireless_controller_inter_controller: redirect: fortinet.fortios.fortios_wireless_controller_inter_controller fortios_wireless_controller_qos_profile: redirect: fortinet.fortios.fortios_wireless_controller_qos_profile fortios_wireless_controller_setting: redirect: fortinet.fortios.fortios_wireless_controller_setting 
fortios_wireless_controller_timers: redirect: fortinet.fortios.fortios_wireless_controller_timers fortios_wireless_controller_utm_profile: redirect: fortinet.fortios.fortios_wireless_controller_utm_profile fortios_wireless_controller_vap: redirect: fortinet.fortios.fortios_wireless_controller_vap fortios_wireless_controller_vap_group: redirect: fortinet.fortios.fortios_wireless_controller_vap_group fortios_wireless_controller_wids_profile: redirect: fortinet.fortios.fortios_wireless_controller_wids_profile fortios_wireless_controller_wtp: redirect: fortinet.fortios.fortios_wireless_controller_wtp fortios_wireless_controller_wtp_group: redirect: fortinet.fortios.fortios_wireless_controller_wtp_group fortios_wireless_controller_wtp_profile: redirect: fortinet.fortios.fortios_wireless_controller_wtp_profile netbox_device: redirect: netbox.netbox.netbox_device netbox_ip_address: redirect: netbox.netbox.netbox_ip_address netbox_interface: redirect: netbox.netbox.netbox_interface netbox_prefix: redirect: netbox.netbox.netbox_prefix netbox_site: redirect: netbox.netbox.netbox_site aws_netapp_cvs_FileSystems: redirect: netapp.aws.aws_netapp_cvs_filesystems aws_netapp_cvs_active_directory: redirect: netapp.aws.aws_netapp_cvs_active_directory aws_netapp_cvs_pool: redirect: netapp.aws.aws_netapp_cvs_pool aws_netapp_cvs_snapshots: redirect: netapp.aws.aws_netapp_cvs_snapshots na_elementsw_access_group: redirect: netapp.elementsw.na_elementsw_access_group na_elementsw_account: redirect: netapp.elementsw.na_elementsw_account na_elementsw_admin_users: redirect: netapp.elementsw.na_elementsw_admin_users na_elementsw_backup: redirect: netapp.elementsw.na_elementsw_backup na_elementsw_check_connections: redirect: netapp.elementsw.na_elementsw_check_connections na_elementsw_cluster: redirect: netapp.elementsw.na_elementsw_cluster na_elementsw_cluster_config: redirect: netapp.elementsw.na_elementsw_cluster_config na_elementsw_cluster_pair: redirect: netapp.elementsw.na_elementsw_cluster_pair na_elementsw_cluster_snmp: redirect: netapp.elementsw.na_elementsw_cluster_snmp na_elementsw_drive: redirect: netapp.elementsw.na_elementsw_drive na_elementsw_initiators: redirect: netapp.elementsw.na_elementsw_initiators na_elementsw_ldap: redirect: netapp.elementsw.na_elementsw_ldap na_elementsw_network_interfaces: redirect: netapp.elementsw.na_elementsw_network_interfaces na_elementsw_node: redirect: netapp.elementsw.na_elementsw_node na_elementsw_snapshot: redirect: netapp.elementsw.na_elementsw_snapshot na_elementsw_snapshot_restore: redirect: netapp.elementsw.na_elementsw_snapshot_restore na_elementsw_snapshot_schedule: redirect: netapp.elementsw.na_elementsw_snapshot_schedule na_elementsw_vlan: redirect: netapp.elementsw.na_elementsw_vlan na_elementsw_volume: redirect: netapp.elementsw.na_elementsw_volume na_elementsw_volume_clone: redirect: netapp.elementsw.na_elementsw_volume_clone na_elementsw_volume_pair: redirect: netapp.elementsw.na_elementsw_volume_pair na_ontap_aggregate: redirect: netapp.ontap.na_ontap_aggregate na_ontap_autosupport: redirect: netapp.ontap.na_ontap_autosupport na_ontap_broadcast_domain: redirect: netapp.ontap.na_ontap_broadcast_domain na_ontap_broadcast_domain_ports: redirect: netapp.ontap.na_ontap_broadcast_domain_ports na_ontap_cg_snapshot: redirect: netapp.ontap.na_ontap_cg_snapshot na_ontap_cifs: redirect: netapp.ontap.na_ontap_cifs na_ontap_cifs_acl: redirect: netapp.ontap.na_ontap_cifs_acl na_ontap_cifs_server: redirect: netapp.ontap.na_ontap_cifs_server na_ontap_cluster: redirect: 
netapp.ontap.na_ontap_cluster na_ontap_cluster_ha: redirect: netapp.ontap.na_ontap_cluster_ha na_ontap_cluster_peer: redirect: netapp.ontap.na_ontap_cluster_peer na_ontap_command: redirect: netapp.ontap.na_ontap_command na_ontap_disks: redirect: netapp.ontap.na_ontap_disks na_ontap_dns: redirect: netapp.ontap.na_ontap_dns na_ontap_export_policy: redirect: netapp.ontap.na_ontap_export_policy na_ontap_export_policy_rule: redirect: netapp.ontap.na_ontap_export_policy_rule na_ontap_fcp: redirect: netapp.ontap.na_ontap_fcp na_ontap_firewall_policy: redirect: netapp.ontap.na_ontap_firewall_policy na_ontap_firmware_upgrade: redirect: netapp.ontap.na_ontap_firmware_upgrade na_ontap_flexcache: redirect: netapp.ontap.na_ontap_flexcache na_ontap_igroup: redirect: netapp.ontap.na_ontap_igroup na_ontap_igroup_initiator: redirect: netapp.ontap.na_ontap_igroup_initiator na_ontap_info: redirect: netapp.ontap.na_ontap_info na_ontap_interface: redirect: netapp.ontap.na_ontap_interface na_ontap_ipspace: redirect: netapp.ontap.na_ontap_ipspace na_ontap_iscsi: redirect: netapp.ontap.na_ontap_iscsi na_ontap_job_schedule: redirect: netapp.ontap.na_ontap_job_schedule na_ontap_kerberos_realm: redirect: netapp.ontap.na_ontap_kerberos_realm na_ontap_ldap: redirect: netapp.ontap.na_ontap_ldap na_ontap_ldap_client: redirect: netapp.ontap.na_ontap_ldap_client na_ontap_license: redirect: netapp.ontap.na_ontap_license na_ontap_lun: redirect: netapp.ontap.na_ontap_lun na_ontap_lun_copy: redirect: netapp.ontap.na_ontap_lun_copy na_ontap_lun_map: redirect: netapp.ontap.na_ontap_lun_map na_ontap_motd: redirect: netapp.ontap.na_ontap_motd na_ontap_ndmp: redirect: netapp.ontap.na_ontap_ndmp na_ontap_net_ifgrp: redirect: netapp.ontap.na_ontap_net_ifgrp na_ontap_net_port: redirect: netapp.ontap.na_ontap_net_port na_ontap_net_routes: redirect: netapp.ontap.na_ontap_net_routes na_ontap_net_subnet: redirect: netapp.ontap.na_ontap_net_subnet na_ontap_net_vlan: redirect: netapp.ontap.na_ontap_net_vlan na_ontap_nfs: redirect: netapp.ontap.na_ontap_nfs na_ontap_node: redirect: netapp.ontap.na_ontap_node na_ontap_ntp: redirect: netapp.ontap.na_ontap_ntp na_ontap_nvme: redirect: netapp.ontap.na_ontap_nvme na_ontap_nvme_namespace: redirect: netapp.ontap.na_ontap_nvme_namespace na_ontap_nvme_subsystem: redirect: netapp.ontap.na_ontap_nvme_subsystem na_ontap_object_store: redirect: netapp.ontap.na_ontap_object_store na_ontap_ports: redirect: netapp.ontap.na_ontap_ports na_ontap_portset: redirect: netapp.ontap.na_ontap_portset na_ontap_qos_adaptive_policy_group: redirect: netapp.ontap.na_ontap_qos_adaptive_policy_group na_ontap_qos_policy_group: redirect: netapp.ontap.na_ontap_qos_policy_group na_ontap_qtree: redirect: netapp.ontap.na_ontap_qtree na_ontap_quotas: redirect: netapp.ontap.na_ontap_quotas na_ontap_security_key_manager: redirect: netapp.ontap.na_ontap_security_key_manager na_ontap_service_processor_network: redirect: netapp.ontap.na_ontap_service_processor_network na_ontap_snapmirror: redirect: netapp.ontap.na_ontap_snapmirror na_ontap_snapshot: redirect: netapp.ontap.na_ontap_snapshot na_ontap_snapshot_policy: redirect: netapp.ontap.na_ontap_snapshot_policy na_ontap_snmp: redirect: netapp.ontap.na_ontap_snmp na_ontap_software_update: redirect: netapp.ontap.na_ontap_software_update na_ontap_svm: redirect: netapp.ontap.na_ontap_svm na_ontap_svm_options: redirect: netapp.ontap.na_ontap_svm_options na_ontap_ucadapter: redirect: netapp.ontap.na_ontap_ucadapter na_ontap_unix_group: redirect: netapp.ontap.na_ontap_unix_group 
na_ontap_unix_user: redirect: netapp.ontap.na_ontap_unix_user na_ontap_user: redirect: netapp.ontap.na_ontap_user na_ontap_user_role: redirect: netapp.ontap.na_ontap_user_role na_ontap_volume: redirect: netapp.ontap.na_ontap_volume na_ontap_volume_autosize: redirect: netapp.ontap.na_ontap_volume_autosize na_ontap_volume_clone: redirect: netapp.ontap.na_ontap_volume_clone na_ontap_vscan: redirect: netapp.ontap.na_ontap_vscan na_ontap_vscan_on_access_policy: redirect: netapp.ontap.na_ontap_vscan_on_access_policy na_ontap_vscan_on_demand_task: redirect: netapp.ontap.na_ontap_vscan_on_demand_task na_ontap_vscan_scanner_pool: redirect: netapp.ontap.na_ontap_vscan_scanner_pool na_ontap_vserver_cifs_security: redirect: netapp.ontap.na_ontap_vserver_cifs_security na_ontap_vserver_peer: redirect: netapp.ontap.na_ontap_vserver_peer cp_mgmt_access_layer: redirect: check_point.mgmt.cp_mgmt_access_layer cp_mgmt_access_layer_facts: redirect: check_point.mgmt.cp_mgmt_access_layer_facts cp_mgmt_access_role: redirect: check_point.mgmt.cp_mgmt_access_role cp_mgmt_access_role_facts: redirect: check_point.mgmt.cp_mgmt_access_role_facts cp_mgmt_access_rule: redirect: check_point.mgmt.cp_mgmt_access_rule cp_mgmt_access_rule_facts: redirect: check_point.mgmt.cp_mgmt_access_rule_facts cp_mgmt_address_range: redirect: check_point.mgmt.cp_mgmt_address_range cp_mgmt_address_range_facts: redirect: check_point.mgmt.cp_mgmt_address_range_facts cp_mgmt_administrator: redirect: check_point.mgmt.cp_mgmt_administrator cp_mgmt_administrator_facts: redirect: check_point.mgmt.cp_mgmt_administrator_facts cp_mgmt_application_site: redirect: check_point.mgmt.cp_mgmt_application_site cp_mgmt_application_site_category: redirect: check_point.mgmt.cp_mgmt_application_site_category cp_mgmt_application_site_category_facts: redirect: check_point.mgmt.cp_mgmt_application_site_category_facts cp_mgmt_application_site_facts: redirect: check_point.mgmt.cp_mgmt_application_site_facts cp_mgmt_application_site_group: redirect: check_point.mgmt.cp_mgmt_application_site_group cp_mgmt_application_site_group_facts: redirect: check_point.mgmt.cp_mgmt_application_site_group_facts cp_mgmt_assign_global_assignment: redirect: check_point.mgmt.cp_mgmt_assign_global_assignment cp_mgmt_discard: redirect: check_point.mgmt.cp_mgmt_discard cp_mgmt_dns_domain: redirect: check_point.mgmt.cp_mgmt_dns_domain cp_mgmt_dns_domain_facts: redirect: check_point.mgmt.cp_mgmt_dns_domain_facts cp_mgmt_dynamic_object: redirect: check_point.mgmt.cp_mgmt_dynamic_object cp_mgmt_dynamic_object_facts: redirect: check_point.mgmt.cp_mgmt_dynamic_object_facts cp_mgmt_exception_group: redirect: check_point.mgmt.cp_mgmt_exception_group cp_mgmt_exception_group_facts: redirect: check_point.mgmt.cp_mgmt_exception_group_facts cp_mgmt_global_assignment: redirect: check_point.mgmt.cp_mgmt_global_assignment cp_mgmt_global_assignment_facts: redirect: check_point.mgmt.cp_mgmt_global_assignment_facts cp_mgmt_group: redirect: check_point.mgmt.cp_mgmt_group cp_mgmt_group_facts: redirect: check_point.mgmt.cp_mgmt_group_facts cp_mgmt_group_with_exclusion: redirect: check_point.mgmt.cp_mgmt_group_with_exclusion cp_mgmt_group_with_exclusion_facts: redirect: check_point.mgmt.cp_mgmt_group_with_exclusion_facts cp_mgmt_host: redirect: check_point.mgmt.cp_mgmt_host cp_mgmt_host_facts: redirect: check_point.mgmt.cp_mgmt_host_facts cp_mgmt_install_policy: redirect: check_point.mgmt.cp_mgmt_install_policy cp_mgmt_mds_facts: redirect: check_point.mgmt.cp_mgmt_mds_facts cp_mgmt_multicast_address_range: 
redirect: check_point.mgmt.cp_mgmt_multicast_address_range cp_mgmt_multicast_address_range_facts: redirect: check_point.mgmt.cp_mgmt_multicast_address_range_facts cp_mgmt_network: redirect: check_point.mgmt.cp_mgmt_network cp_mgmt_network_facts: redirect: check_point.mgmt.cp_mgmt_network_facts cp_mgmt_package: redirect: check_point.mgmt.cp_mgmt_package cp_mgmt_package_facts: redirect: check_point.mgmt.cp_mgmt_package_facts cp_mgmt_publish: redirect: check_point.mgmt.cp_mgmt_publish cp_mgmt_put_file: redirect: check_point.mgmt.cp_mgmt_put_file cp_mgmt_run_ips_update: redirect: check_point.mgmt.cp_mgmt_run_ips_update cp_mgmt_run_script: redirect: check_point.mgmt.cp_mgmt_run_script cp_mgmt_security_zone: redirect: check_point.mgmt.cp_mgmt_security_zone cp_mgmt_security_zone_facts: redirect: check_point.mgmt.cp_mgmt_security_zone_facts cp_mgmt_service_dce_rpc: redirect: check_point.mgmt.cp_mgmt_service_dce_rpc cp_mgmt_service_dce_rpc_facts: redirect: check_point.mgmt.cp_mgmt_service_dce_rpc_facts cp_mgmt_service_group: redirect: check_point.mgmt.cp_mgmt_service_group cp_mgmt_service_group_facts: redirect: check_point.mgmt.cp_mgmt_service_group_facts cp_mgmt_service_icmp: redirect: check_point.mgmt.cp_mgmt_service_icmp cp_mgmt_service_icmp6: redirect: check_point.mgmt.cp_mgmt_service_icmp6 cp_mgmt_service_icmp6_facts: redirect: check_point.mgmt.cp_mgmt_service_icmp6_facts cp_mgmt_service_icmp_facts: redirect: check_point.mgmt.cp_mgmt_service_icmp_facts cp_mgmt_service_other: redirect: check_point.mgmt.cp_mgmt_service_other cp_mgmt_service_other_facts: redirect: check_point.mgmt.cp_mgmt_service_other_facts cp_mgmt_service_rpc: redirect: check_point.mgmt.cp_mgmt_service_rpc cp_mgmt_service_rpc_facts: redirect: check_point.mgmt.cp_mgmt_service_rpc_facts cp_mgmt_service_sctp: redirect: check_point.mgmt.cp_mgmt_service_sctp cp_mgmt_service_sctp_facts: redirect: check_point.mgmt.cp_mgmt_service_sctp_facts cp_mgmt_service_tcp: redirect: check_point.mgmt.cp_mgmt_service_tcp cp_mgmt_service_tcp_facts: redirect: check_point.mgmt.cp_mgmt_service_tcp_facts cp_mgmt_service_udp: redirect: check_point.mgmt.cp_mgmt_service_udp cp_mgmt_service_udp_facts: redirect: check_point.mgmt.cp_mgmt_service_udp_facts cp_mgmt_session_facts: redirect: check_point.mgmt.cp_mgmt_session_facts cp_mgmt_simple_gateway: redirect: check_point.mgmt.cp_mgmt_simple_gateway cp_mgmt_simple_gateway_facts: redirect: check_point.mgmt.cp_mgmt_simple_gateway_facts cp_mgmt_tag: redirect: check_point.mgmt.cp_mgmt_tag cp_mgmt_tag_facts: redirect: check_point.mgmt.cp_mgmt_tag_facts cp_mgmt_threat_exception: redirect: check_point.mgmt.cp_mgmt_threat_exception cp_mgmt_threat_exception_facts: redirect: check_point.mgmt.cp_mgmt_threat_exception_facts cp_mgmt_threat_indicator: redirect: check_point.mgmt.cp_mgmt_threat_indicator cp_mgmt_threat_indicator_facts: redirect: check_point.mgmt.cp_mgmt_threat_indicator_facts cp_mgmt_threat_layer: redirect: check_point.mgmt.cp_mgmt_threat_layer cp_mgmt_threat_layer_facts: redirect: check_point.mgmt.cp_mgmt_threat_layer_facts cp_mgmt_threat_profile: redirect: check_point.mgmt.cp_mgmt_threat_profile cp_mgmt_threat_profile_facts: redirect: check_point.mgmt.cp_mgmt_threat_profile_facts cp_mgmt_threat_protection_override: redirect: check_point.mgmt.cp_mgmt_threat_protection_override cp_mgmt_threat_rule: redirect: check_point.mgmt.cp_mgmt_threat_rule cp_mgmt_threat_rule_facts: redirect: check_point.mgmt.cp_mgmt_threat_rule_facts cp_mgmt_time: redirect: check_point.mgmt.cp_mgmt_time cp_mgmt_time_facts: redirect: 
check_point.mgmt.cp_mgmt_time_facts cp_mgmt_verify_policy: redirect: check_point.mgmt.cp_mgmt_verify_policy cp_mgmt_vpn_community_meshed: redirect: check_point.mgmt.cp_mgmt_vpn_community_meshed cp_mgmt_vpn_community_meshed_facts: redirect: check_point.mgmt.cp_mgmt_vpn_community_meshed_facts cp_mgmt_vpn_community_star: redirect: check_point.mgmt.cp_mgmt_vpn_community_star cp_mgmt_vpn_community_star_facts: redirect: check_point.mgmt.cp_mgmt_vpn_community_star_facts cp_mgmt_wildcard: redirect: check_point.mgmt.cp_mgmt_wildcard cp_mgmt_wildcard_facts: redirect: check_point.mgmt.cp_mgmt_wildcard_facts eos_ospfv2: redirect: arista.eos.eos_ospfv2 eos_static_route: redirect: arista.eos.eos_static_route eos_acls: redirect: arista.eos.eos_acls eos_interfaces: redirect: arista.eos.eos_interfaces eos_facts: redirect: arista.eos.eos_facts eos_logging: redirect: arista.eos.eos_logging eos_lag_interfaces: redirect: arista.eos.eos_lag_interfaces eos_l2_interfaces: redirect: arista.eos.eos_l2_interfaces eos_l3_interface: redirect: arista.eos.eos_l3_interface eos_lacp: redirect: arista.eos.eos_lacp eos_lldp_global: redirect: arista.eos.eos_lldp_global eos_static_routes: redirect: arista.eos.eos_static_routes eos_lacp_interfaces: redirect: arista.eos.eos_lacp_interfaces eos_system: redirect: arista.eos.eos_system eos_vlan: redirect: arista.eos.eos_vlan eos_eapi: redirect: arista.eos.eos_eapi eos_acl_interfaces: redirect: arista.eos.eos_acl_interfaces eos_l2_interface: redirect: arista.eos.eos_l2_interface eos_lldp_interfaces: redirect: arista.eos.eos_lldp_interfaces eos_command: redirect: arista.eos.eos_command eos_linkagg: redirect: arista.eos.eos_linkagg eos_l3_interfaces: redirect: arista.eos.eos_l3_interfaces eos_vlans: redirect: arista.eos.eos_vlans eos_user: redirect: arista.eos.eos_user eos_banner: redirect: arista.eos.eos_banner eos_lldp: redirect: arista.eos.eos_lldp eos_interface: redirect: arista.eos.eos_interface eos_config: redirect: arista.eos.eos_config eos_bgp: redirect: arista.eos.eos_bgp eos_vrf: redirect: arista.eos.eos_vrf aci_aaa_user: redirect: cisco.aci.aci_aaa_user aci_aaa_user_certificate: redirect: cisco.aci.aci_aaa_user_certificate aci_access_port_block_to_access_port: redirect: cisco.aci.aci_access_port_block_to_access_port aci_access_port_to_interface_policy_leaf_profile: redirect: cisco.aci.aci_access_port_to_interface_policy_leaf_profile aci_access_sub_port_block_to_access_port: redirect: cisco.aci.aci_access_sub_port_block_to_access_port aci_aep: redirect: cisco.aci.aci_aep aci_aep_to_domain: redirect: cisco.aci.aci_aep_to_domain aci_ap: redirect: cisco.aci.aci_ap aci_bd: redirect: cisco.aci.aci_bd aci_bd_subnet: redirect: cisco.aci.aci_bd_subnet aci_bd_to_l3out: redirect: cisco.aci.aci_bd_to_l3out aci_config_rollback: redirect: cisco.aci.aci_config_rollback aci_config_snapshot: redirect: cisco.aci.aci_config_snapshot aci_contract: redirect: cisco.aci.aci_contract aci_contract_subject: redirect: cisco.aci.aci_contract_subject aci_contract_subject_to_filter: redirect: cisco.aci.aci_contract_subject_to_filter aci_domain: redirect: cisco.aci.aci_domain aci_domain_to_encap_pool: redirect: cisco.aci.aci_domain_to_encap_pool aci_domain_to_vlan_pool: redirect: cisco.aci.aci_domain_to_vlan_pool aci_encap_pool: redirect: cisco.aci.aci_encap_pool aci_encap_pool_range: redirect: cisco.aci.aci_encap_pool_range aci_epg: redirect: cisco.aci.aci_epg aci_epg_monitoring_policy: redirect: cisco.aci.aci_epg_monitoring_policy aci_epg_to_contract: redirect: cisco.aci.aci_epg_to_contract 
aci_epg_to_domain: redirect: cisco.aci.aci_epg_to_domain aci_fabric_node: redirect: cisco.aci.aci_fabric_node aci_fabric_scheduler: redirect: cisco.aci.aci_fabric_scheduler aci_filter: redirect: cisco.aci.aci_filter aci_filter_entry: redirect: cisco.aci.aci_filter_entry aci_firmware_group: redirect: cisco.aci.aci_firmware_group aci_firmware_group_node: redirect: cisco.aci.aci_firmware_group_node aci_firmware_policy: redirect: cisco.aci.aci_firmware_policy aci_firmware_source: redirect: cisco.aci.aci_firmware_source aci_interface_policy_cdp: redirect: cisco.aci.aci_interface_policy_cdp aci_interface_policy_fc: redirect: cisco.aci.aci_interface_policy_fc aci_interface_policy_l2: redirect: cisco.aci.aci_interface_policy_l2 aci_interface_policy_leaf_policy_group: redirect: cisco.aci.aci_interface_policy_leaf_policy_group aci_interface_policy_leaf_profile: redirect: cisco.aci.aci_interface_policy_leaf_profile aci_interface_policy_lldp: redirect: cisco.aci.aci_interface_policy_lldp aci_interface_policy_mcp: redirect: cisco.aci.aci_interface_policy_mcp aci_interface_policy_ospf: redirect: cisco.aci.aci_interface_policy_ospf aci_interface_policy_port_channel: redirect: cisco.aci.aci_interface_policy_port_channel aci_interface_policy_port_security: redirect: cisco.aci.aci_interface_policy_port_security aci_interface_selector_to_switch_policy_leaf_profile: redirect: cisco.aci.aci_interface_selector_to_switch_policy_leaf_profile aci_l3out: redirect: cisco.aci.aci_l3out aci_l3out_extepg: redirect: cisco.aci.aci_l3out_extepg aci_l3out_extsubnet: redirect: cisco.aci.aci_l3out_extsubnet aci_l3out_route_tag_policy: redirect: cisco.aci.aci_l3out_route_tag_policy aci_maintenance_group: redirect: cisco.aci.aci_maintenance_group aci_maintenance_group_node: redirect: cisco.aci.aci_maintenance_group_node aci_maintenance_policy: redirect: cisco.aci.aci_maintenance_policy aci_rest: redirect: cisco.aci.aci_rest aci_static_binding_to_epg: redirect: cisco.aci.aci_static_binding_to_epg aci_switch_leaf_selector: redirect: cisco.aci.aci_switch_leaf_selector aci_switch_policy_leaf_profile: redirect: cisco.aci.aci_switch_policy_leaf_profile aci_switch_policy_vpc_protection_group: redirect: cisco.aci.aci_switch_policy_vpc_protection_group aci_taboo_contract: redirect: cisco.aci.aci_taboo_contract aci_tenant: redirect: cisco.aci.aci_tenant aci_tenant_action_rule_profile: redirect: cisco.aci.aci_tenant_action_rule_profile aci_tenant_ep_retention_policy: redirect: cisco.aci.aci_tenant_ep_retention_policy aci_tenant_span_dst_group: redirect: cisco.aci.aci_tenant_span_dst_group aci_tenant_span_src_group: redirect: cisco.aci.aci_tenant_span_src_group aci_tenant_span_src_group_to_dst_group: redirect: cisco.aci.aci_tenant_span_src_group_to_dst_group aci_vlan_pool: redirect: cisco.aci.aci_vlan_pool aci_vlan_pool_encap_block: redirect: cisco.aci.aci_vlan_pool_encap_block aci_vmm_credential: redirect: cisco.aci.aci_vmm_credential aci_vrf: redirect: cisco.aci.aci_vrf asa_acl: redirect: cisco.asa.asa_acl asa_config: redirect: cisco.asa.asa_config asa_og: redirect: cisco.asa.asa_og asa_command: redirect: cisco.asa.asa_command intersight_facts: redirect: cisco.intersight.intersight_info intersight_info: redirect: cisco.intersight.intersight_info intersight_rest_api: redirect: cisco.intersight.intersight_rest_api ios_ospfv2: redirect: cisco.ios.ios_ospfv2 ios_l3_interfaces: redirect: cisco.ios.ios_l3_interfaces ios_lldp: redirect: cisco.ios.ios_lldp ios_interface: redirect: cisco.ios.ios_interface ios_lldp_interfaces: redirect: 
cisco.ios.ios_lldp_interfaces ios_l3_interface: redirect: cisco.ios.ios_l3_interface ios_acl_interfaces: redirect: cisco.ios.ios_acl_interfaces ios_static_routes: redirect: cisco.ios.ios_static_routes ios_l2_interfaces: redirect: cisco.ios.ios_l2_interfaces ios_logging: redirect: cisco.ios.ios_logging ios_vlan: redirect: cisco.ios.ios_vlan ios_command: redirect: cisco.ios.ios_command ios_static_route: redirect: cisco.ios.ios_static_route ios_lldp_global: redirect: cisco.ios.ios_lldp_global ios_banner: redirect: cisco.ios.ios_banner ios_lag_interfaces: redirect: cisco.ios.ios_lag_interfaces ios_linkagg: redirect: cisco.ios.ios_linkagg ios_user: redirect: cisco.ios.ios_user ios_system: redirect: cisco.ios.ios_system ios_facts: redirect: cisco.ios.ios_facts ios_ping: redirect: cisco.ios.ios_ping ios_vlans: redirect: cisco.ios.ios_vlans ios_vrf: redirect: cisco.ios.ios_vrf ios_bgp: redirect: cisco.ios.ios_bgp ios_ntp: redirect: cisco.ios.ios_ntp ios_lacp_interfaces: redirect: cisco.ios.ios_lacp_interfaces ios_lacp: redirect: cisco.ios.ios_lacp ios_config: redirect: cisco.ios.ios_config ios_l2_interface: redirect: cisco.ios.ios_l2_interface ios_acls: redirect: cisco.ios.ios_acls ios_interfaces: redirect: cisco.ios.ios_interfaces iosxr_ospfv2: redirect: cisco.iosxr.iosxr_ospfv2 iosxr_bgp: redirect: cisco.iosxr.iosxr_bgp iosxr_lldp_interfaces: redirect: cisco.iosxr.iosxr_lldp_interfaces iosxr_l3_interfaces: redirect: cisco.iosxr.iosxr_l3_interfaces iosxr_netconf: redirect: cisco.iosxr.iosxr_netconf iosxr_static_routes: redirect: cisco.iosxr.iosxr_static_routes iosxr_lldp_global: redirect: cisco.iosxr.iosxr_lldp_global iosxr_config: redirect: cisco.iosxr.iosxr_config iosxr_lag_interfaces: redirect: cisco.iosxr.iosxr_lag_interfaces iosxr_interface: redirect: cisco.iosxr.iosxr_interface iosxr_user: redirect: cisco.iosxr.iosxr_user iosxr_facts: redirect: cisco.iosxr.iosxr_facts iosxr_interfaces: redirect: cisco.iosxr.iosxr_interfaces iosxr_acl_interfaces: redirect: cisco.iosxr.iosxr_acl_interfaces iosxr_l2_interfaces: redirect: cisco.iosxr.iosxr_l2_interfaces iosxr_logging: redirect: cisco.iosxr.iosxr_logging iosxr_lacp: redirect: cisco.iosxr.iosxr_lacp iosxr_acls: redirect: cisco.iosxr.iosxr_acls iosxr_system: redirect: cisco.iosxr.iosxr_system iosxr_command: redirect: cisco.iosxr.iosxr_command iosxr_lacp_interfaces: redirect: cisco.iosxr.iosxr_lacp_interfaces iosxr_banner: redirect: cisco.iosxr.iosxr_banner meraki_admin: redirect: cisco.meraki.meraki_admin meraki_config_template: redirect: cisco.meraki.meraki_config_template meraki_content_filtering: redirect: cisco.meraki.meraki_content_filtering meraki_device: redirect: cisco.meraki.meraki_device meraki_firewalled_services: redirect: cisco.meraki.meraki_firewalled_services meraki_malware: redirect: cisco.meraki.meraki_malware meraki_mr_l3_firewall: redirect: cisco.meraki.meraki_mr_l3_firewall meraki_mx_l3_firewall: redirect: cisco.meraki.meraki_mx_l3_firewall meraki_mx_l7_firewall: redirect: cisco.meraki.meraki_mx_l7_firewall meraki_nat: redirect: cisco.meraki.meraki_nat meraki_network: redirect: cisco.meraki.meraki_network meraki_organization: redirect: cisco.meraki.meraki_organization meraki_snmp: redirect: cisco.meraki.meraki_snmp meraki_ssid: redirect: cisco.meraki.meraki_ssid meraki_static_route: redirect: cisco.meraki.meraki_static_route meraki_switchport: redirect: cisco.meraki.meraki_switchport meraki_syslog: redirect: cisco.meraki.meraki_syslog meraki_vlan: redirect: cisco.meraki.meraki_vlan meraki_webhook: redirect: 
        cisco.meraki.meraki_webhook
    mso_label:
      redirect: cisco.mso.mso_label
    mso_role:
      redirect: cisco.mso.mso_role
    mso_schema:
      redirect: cisco.mso.mso_schema
    mso_schema_site:
      redirect: cisco.mso.mso_schema_site
    mso_schema_site_anp:
      redirect: cisco.mso.mso_schema_site_anp
    mso_schema_site_anp_epg:
      redirect: cisco.mso.mso_schema_site_anp_epg
    mso_schema_site_anp_epg_domain:
      redirect: cisco.mso.mso_schema_site_anp_epg_domain
    mso_schema_site_anp_epg_staticleaf:
      redirect: cisco.mso.mso_schema_site_anp_epg_staticleaf
    mso_schema_site_anp_epg_staticport:
      redirect: cisco.mso.mso_schema_site_anp_epg_staticport
    mso_schema_site_anp_epg_subnet:
      redirect: cisco.mso.mso_schema_site_anp_epg_subnet
    mso_schema_site_bd:
      redirect: cisco.mso.mso_schema_site_bd
    mso_schema_site_bd_l3out:
      redirect: cisco.mso.mso_schema_site_bd_l3out
    mso_schema_site_bd_subnet:
      redirect: cisco.mso.mso_schema_site_bd_subnet
    mso_schema_site_vrf:
      redirect: cisco.mso.mso_schema_site_vrf
    mso_schema_site_vrf_region:
      redirect: cisco.mso.mso_schema_site_vrf_region
    mso_schema_site_vrf_region_cidr:
      redirect: cisco.mso.mso_schema_site_vrf_region_cidr
    mso_schema_site_vrf_region_cidr_subnet:
      redirect: cisco.mso.mso_schema_site_vrf_region_cidr_subnet
    mso_schema_template:
      redirect: cisco.mso.mso_schema_template
    mso_schema_template_anp:
      redirect: cisco.mso.mso_schema_template_anp
    mso_schema_template_anp_epg:
      redirect: cisco.mso.mso_schema_template_anp_epg
    mso_schema_template_anp_epg_contract:
      redirect: cisco.mso.mso_schema_template_anp_epg_contract
    mso_schema_template_anp_epg_subnet:
      redirect: cisco.mso.mso_schema_template_anp_epg_subnet
    mso_schema_template_bd:
      redirect: cisco.mso.mso_schema_template_bd
    mso_schema_template_bd_subnet:
      redirect: cisco.mso.mso_schema_template_bd_subnet
    mso_schema_template_contract_filter:
      redirect: cisco.mso.mso_schema_template_contract_filter
    mso_schema_template_deploy:
      redirect: cisco.mso.mso_schema_template_deploy
    mso_schema_template_externalepg:
      redirect: cisco.mso.mso_schema_template_externalepg
    mso_schema_template_filter_entry:
      redirect: cisco.mso.mso_schema_template_filter_entry
    mso_schema_template_l3out:
      redirect: cisco.mso.mso_schema_template_l3out
    mso_schema_template_vrf:
      redirect: cisco.mso.mso_schema_template_vrf
    mso_site:
      redirect: cisco.mso.mso_site
    mso_tenant:
      redirect: cisco.mso.mso_tenant
    mso_user:
      redirect: cisco.mso.mso_user
    nxos_telemetry:
      redirect: cisco.nxos.nxos_telemetry
    nxos_user:
      redirect: cisco.nxos.nxos_user
    nxos_bfd_interfaces:
      redirect: cisco.nxos.nxos_bfd_interfaces
    nxos_ospf:
      redirect: cisco.nxos.nxos_ospf
    nxos_ospfv2:
      redirect: cisco.nxos.nxos_ospfv2
    nxos_system:
      redirect: cisco.nxos.nxos_system
    nxos_l3_interface:
      redirect: cisco.nxos.nxos_l3_interface
    nxos_smu:
      redirect: cisco.nxos.nxos_smu
    nxos_reboot:
      redirect: cisco.nxos.nxos_reboot
    nxos_static_routes:
      redirect: cisco.nxos.nxos_static_routes
    nxos_static_route:
      redirect: cisco.nxos.nxos_static_route
    nxos_acl_interfaces:
      redirect: cisco.nxos.nxos_acl_interfaces
    nxos_vpc:
      redirect: cisco.nxos.nxos_vpc
    nxos_linkagg:
      redirect: cisco.nxos.nxos_linkagg
    nxos_vxlan_vtep_vni:
      redirect: cisco.nxos.nxos_vxlan_vtep_vni
    nxos_vrrp:
      redirect: cisco.nxos.nxos_vrrp
    nxos_lldp:
      redirect: cisco.nxos.nxos_lldp
    nxos_interface:
      redirect: cisco.nxos.nxos_interface
    nxos_lacp_interfaces:
      redirect: cisco.nxos.nxos_lacp_interfaces
    nxos_gir_profile_management:
      redirect: cisco.nxos.nxos_gir_profile_management
    nxos_snmp_community:
      redirect: cisco.nxos.nxos_snmp_community
    nxos_lag_interfaces:
      redirect: cisco.nxos.nxos_lag_interfaces
    nxos_acl:
      redirect: cisco.nxos.nxos_acl
    nxos_hsrp_interfaces:
      redirect: cisco.nxos.nxos_hsrp_interfaces
    nxos_lldp_global:
      redirect: cisco.nxos.nxos_lldp_global
    nxos_snmp_contact:
      redirect: cisco.nxos.nxos_snmp_contact
    nxos_vrf_interface:
      redirect: cisco.nxos.nxos_vrf_interface
    nxos_rpm:
      redirect: cisco.nxos.nxos_rpm
    nxos_ntp_options:
      redirect: cisco.nxos.nxos_ntp_options
    nxos_ospf_vrf:
      redirect: cisco.nxos.nxos_ospf_vrf
    nxos_vtp_version:
      redirect: cisco.nxos.nxos_vtp_version
    nxos_igmp_interface:
      redirect: cisco.nxos.nxos_igmp_interface
    nxos_bgp_neighbor:
      redirect: cisco.nxos.nxos_bgp_neighbor
    nxos_bgp:
      redirect: cisco.nxos.nxos_bgp
    nxos_rollback:
      redirect: cisco.nxos.nxos_rollback
    nxos_aaa_server:
      redirect: cisco.nxos.nxos_aaa_server
    nxos_udld_interface:
      redirect: cisco.nxos.nxos_udld_interface
    nxos_bgp_af:
      redirect: cisco.nxos.nxos_bgp_af
    nxos_feature:
      redirect: cisco.nxos.nxos_feature
    nxos_snmp_traps:
      redirect: cisco.nxos.nxos_snmp_traps
    nxos_evpn_global:
      redirect: cisco.nxos.nxos_evpn_global
    nxos_igmp:
      redirect: cisco.nxos.nxos_igmp
    nxos_aaa_server_host:
      redirect: cisco.nxos.nxos_aaa_server_host
    nxos_vrf_af:
      redirect: cisco.nxos.nxos_vrf_af
    nxos_snapshot:
      redirect: cisco.nxos.nxos_snapshot
    nxos_gir:
      redirect: cisco.nxos.nxos_gir
    nxos_command:
      redirect: cisco.nxos.nxos_command
    nxos_vxlan_vtep:
      redirect: cisco.nxos.nxos_vxlan_vtep
    nxos_snmp_location:
      redirect: cisco.nxos.nxos_snmp_location
    nxos_evpn_vni:
      redirect: cisco.nxos.nxos_evpn_vni
    nxos_vpc_interface:
      redirect: cisco.nxos.nxos_vpc_interface
    nxos_logging:
      redirect: cisco.nxos.nxos_logging
    nxos_pim:
      redirect: cisco.nxos.nxos_pim
    nxos_ping:
      redirect: cisco.nxos.nxos_ping
    nxos_pim_rp_address:
      redirect: cisco.nxos.nxos_pim_rp_address
    nxos_pim_interface:
      redirect: cisco.nxos.nxos_pim_interface
    nxos_install_os:
      redirect: cisco.nxos.nxos_install_os
    nxos_nxapi:
      redirect: cisco.nxos.nxos_nxapi
    nxos_l2_interface:
      redirect: cisco.nxos.nxos_l2_interface
    nxos_bgp_neighbor_af:
      redirect: cisco.nxos.nxos_bgp_neighbor_af
    nxos_lacp:
      redirect: cisco.nxos.nxos_lacp
    nxos_lldp_interfaces:
      redirect: cisco.nxos.nxos_lldp_interfaces
    nxos_acl_interface:
      redirect: cisco.nxos.nxos_acl_interface
    nxos_vrf:
      redirect: cisco.nxos.nxos_vrf
    nxos_interface_ospf:
      redirect: cisco.nxos.nxos_interface_ospf
    nxos_acls:
      redirect: cisco.nxos.nxos_acls
    nxos_vtp_password:
      redirect: cisco.nxos.nxos_vtp_password
    nxos_l3_interfaces:
      redirect: cisco.nxos.nxos_l3_interfaces
    nxos_igmp_snooping:
      redirect: cisco.nxos.nxos_igmp_snooping
    nxos_banner:
      redirect: cisco.nxos.nxos_banner
    nxos_bfd_global:
      redirect: cisco.nxos.nxos_bfd_global
    nxos_udld:
      redirect: cisco.nxos.nxos_udld
    nxos_vtp_domain:
      redirect: cisco.nxos.nxos_vtp_domain
    nxos_snmp_host:
      redirect: cisco.nxos.nxos_snmp_host
    nxos_l2_interfaces:
      redirect: cisco.nxos.nxos_l2_interfaces
    nxos_hsrp:
      redirect: cisco.nxos.nxos_hsrp
    nxos_interfaces:
      redirect: cisco.nxos.nxos_interfaces
    nxos_overlay_global:
      redirect: cisco.nxos.nxos_overlay_global
    nxos_snmp_user:
      redirect: cisco.nxos.nxos_snmp_user
    nxos_vlans:
      redirect: cisco.nxos.nxos_vlans
    nxos_ntp:
      redirect: cisco.nxos.nxos_ntp
    nxos_file_copy:
      redirect: cisco.nxos.nxos_file_copy
    nxos_ntp_auth:
      redirect: cisco.nxos.nxos_ntp_auth
    nxos_config:
      redirect: cisco.nxos.nxos_config
    nxos_vlan:
      redirect: cisco.nxos.nxos_vlan
    nxos_facts:
      redirect: cisco.nxos.nxos_facts
    nxos_zone_zoneset:
      redirect: cisco.nxos.nxos_zone_zoneset
    nxos_vsan:
      redirect: cisco.nxos.nxos_vsan
    nxos_devicealias:
      redirect: cisco.nxos.nxos_devicealias
    ucs_managed_objects:
      redirect: cisco.ucs.ucs_managed_objects
    ucs_vnic_template:
      redirect: cisco.ucs.ucs_vnic_template
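    # Each entry in this table resolves a legacy (pre-collection) short module
    # name to its fully qualified collection name. For example (an illustrative
    # sketch; the VLAN values are made up), a task written against the old
    # short name:
    #
    #   - name: Ensure VLAN 100 exists
    #     nxos_vlans:
    #       config:
    #         - vlan_id: 100
    #       state: merged
    #
    # is routed through the "nxos_vlans -> cisco.nxos.nxos_vlans" entry above
    # and runs as cisco.nxos.nxos_vlans, provided that collection is installed.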
    ucs_query:
      redirect: cisco.ucs.ucs_query
    ucs_dns_server:
      redirect: cisco.ucs.ucs_dns_server
    ucs_lan_connectivity:
      redirect: cisco.ucs.ucs_lan_connectivity
    ucs_vhba_template:
      redirect: cisco.ucs.ucs_vhba_template
    ucs_san_connectivity:
      redirect: cisco.ucs.ucs_san_connectivity
    ucs_disk_group_policy:
      redirect: cisco.ucs.ucs_disk_group_policy
    ucs_uuid_pool:
      redirect: cisco.ucs.ucs_uuid_pool
    ucs_vlan_find:
      redirect: cisco.ucs.ucs_vlan_find
    ucs_vlans:
      redirect: cisco.ucs.ucs_vlans
    ucs_service_profile_template:
      redirect: cisco.ucs.ucs_service_profile_template
    ucs_ip_pool:
      redirect: cisco.ucs.ucs_ip_pool
    ucs_timezone:
      redirect: cisco.ucs.ucs_timezone
    ucs_ntp_server:
      redirect: cisco.ucs.ucs_ntp_server
    ucs_mac_pool:
      redirect: cisco.ucs.ucs_mac_pool
    ucs_storage_profile:
      redirect: cisco.ucs.ucs_storage_profile
    ucs_org:
      redirect: cisco.ucs.ucs_org
    ucs_vsans:
      redirect: cisco.ucs.ucs_vsans
    ucs_wwn_pool:
      redirect: cisco.ucs.ucs_wwn_pool
    bigip_apm_acl:
      redirect: f5networks.f5_modules.bigip_apm_acl
    bigip_apm_network_access:
      redirect: f5networks.f5_modules.bigip_apm_network_access
    bigip_apm_policy_fetch:
      redirect: f5networks.f5_modules.bigip_apm_policy_fetch
    bigip_apm_policy_import:
      redirect: f5networks.f5_modules.bigip_apm_policy_import
    bigip_appsvcs_extension:
      redirect: f5networks.f5_modules.bigip_appsvcs_extension
    bigip_asm_dos_application:
      redirect: f5networks.f5_modules.bigip_asm_dos_application
    bigip_asm_policy_fetch:
      redirect: f5networks.f5_modules.bigip_asm_policy_fetch
    bigip_asm_policy_import:
      redirect: f5networks.f5_modules.bigip_asm_policy_import
    bigip_asm_policy_manage:
      redirect: f5networks.f5_modules.bigip_asm_policy_manage
    bigip_asm_policy_server_technology:
      redirect: f5networks.f5_modules.bigip_asm_policy_server_technology
    bigip_asm_policy_signature_set:
      redirect: f5networks.f5_modules.bigip_asm_policy_signature_set
    bigip_cli_alias:
      redirect: f5networks.f5_modules.bigip_cli_alias
    bigip_cli_script:
      redirect: f5networks.f5_modules.bigip_cli_script
    bigip_command:
      redirect: f5networks.f5_modules.bigip_command
    bigip_config:
      redirect: f5networks.f5_modules.bigip_config
    bigip_configsync_action:
      redirect: f5networks.f5_modules.bigip_configsync_action
    bigip_data_group:
      redirect: f5networks.f5_modules.bigip_data_group
    bigip_device_auth:
      redirect: f5networks.f5_modules.bigip_device_auth
    bigip_device_auth_ldap:
      redirect: f5networks.f5_modules.bigip_device_auth_ldap
    bigip_device_certificate:
      redirect: f5networks.f5_modules.bigip_device_certificate
    bigip_device_connectivity:
      redirect: f5networks.f5_modules.bigip_device_connectivity
    bigip_device_dns:
      redirect: f5networks.f5_modules.bigip_device_dns
    bigip_device_group:
      redirect: f5networks.f5_modules.bigip_device_group
    bigip_device_group_member:
      redirect: f5networks.f5_modules.bigip_device_group_member
    bigip_device_ha_group:
      redirect: f5networks.f5_modules.bigip_device_ha_group
    bigip_device_httpd:
      redirect: f5networks.f5_modules.bigip_device_httpd
    bigip_device_info:
      redirect: f5networks.f5_modules.bigip_device_info
    bigip_device_license:
      redirect: f5networks.f5_modules.bigip_device_license
    bigip_device_ntp:
      redirect: f5networks.f5_modules.bigip_device_ntp
    bigip_device_sshd:
      redirect: f5networks.f5_modules.bigip_device_sshd
    bigip_device_syslog:
      redirect: f5networks.f5_modules.bigip_device_syslog
    bigip_device_traffic_group:
      redirect: f5networks.f5_modules.bigip_device_traffic_group
    bigip_device_trust:
      redirect: f5networks.f5_modules.bigip_device_trust
    bigip_dns_cache_resolver:
      redirect: f5networks.f5_modules.bigip_dns_cache_resolver
    bigip_dns_nameserver:
      redirect: f5networks.f5_modules.bigip_dns_nameserver
    bigip_dns_resolver:
      redirect: f5networks.f5_modules.bigip_dns_resolver
    bigip_dns_zone:
      redirect: f5networks.f5_modules.bigip_dns_zone
    bigip_file_copy:
      redirect: f5networks.f5_modules.bigip_file_copy
    bigip_firewall_address_list:
      redirect: f5networks.f5_modules.bigip_firewall_address_list
    bigip_firewall_dos_profile:
      redirect: f5networks.f5_modules.bigip_firewall_dos_profile
    bigip_firewall_dos_vector:
      redirect: f5networks.f5_modules.bigip_firewall_dos_vector
    bigip_firewall_global_rules:
      redirect: f5networks.f5_modules.bigip_firewall_global_rules
    bigip_firewall_log_profile:
      redirect: f5networks.f5_modules.bigip_firewall_log_profile
    bigip_firewall_log_profile_network:
      redirect: f5networks.f5_modules.bigip_firewall_log_profile_network
    bigip_firewall_policy:
      redirect: f5networks.f5_modules.bigip_firewall_policy
    bigip_firewall_port_list:
      redirect: f5networks.f5_modules.bigip_firewall_port_list
    bigip_firewall_rule:
      redirect: f5networks.f5_modules.bigip_firewall_rule
    bigip_firewall_rule_list:
      redirect: f5networks.f5_modules.bigip_firewall_rule_list
    bigip_firewall_schedule:
      redirect: f5networks.f5_modules.bigip_firewall_schedule
    bigip_gtm_datacenter:
      redirect: f5networks.f5_modules.bigip_gtm_datacenter
    bigip_gtm_global:
      redirect: f5networks.f5_modules.bigip_gtm_global
    bigip_gtm_monitor_bigip:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_bigip
    bigip_gtm_monitor_external:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_external
    bigip_gtm_monitor_firepass:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_firepass
    bigip_gtm_monitor_http:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_http
    bigip_gtm_monitor_https:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_https
    bigip_gtm_monitor_tcp:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_tcp
    bigip_gtm_monitor_tcp_half_open:
      redirect: f5networks.f5_modules.bigip_gtm_monitor_tcp_half_open
    bigip_gtm_pool:
      redirect: f5networks.f5_modules.bigip_gtm_pool
    bigip_gtm_pool_member:
      redirect: f5networks.f5_modules.bigip_gtm_pool_member
    bigip_gtm_server:
      redirect: f5networks.f5_modules.bigip_gtm_server
    bigip_gtm_topology_record:
      redirect: f5networks.f5_modules.bigip_gtm_topology_record
    bigip_gtm_topology_region:
      redirect: f5networks.f5_modules.bigip_gtm_topology_region
    bigip_gtm_virtual_server:
      redirect: f5networks.f5_modules.bigip_gtm_virtual_server
    bigip_gtm_wide_ip:
      redirect: f5networks.f5_modules.bigip_gtm_wide_ip
    bigip_hostname:
      redirect: f5networks.f5_modules.bigip_hostname
    bigip_iapp_service:
      redirect: f5networks.f5_modules.bigip_iapp_service
    bigip_iapp_template:
      redirect: f5networks.f5_modules.bigip_iapp_template
    bigip_ike_peer:
      redirect: f5networks.f5_modules.bigip_ike_peer
    bigip_imish_config:
      redirect: f5networks.f5_modules.bigip_imish_config
    bigip_ipsec_policy:
      redirect: f5networks.f5_modules.bigip_ipsec_policy
    bigip_irule:
      redirect: f5networks.f5_modules.bigip_irule
    bigip_log_destination:
      redirect: f5networks.f5_modules.bigip_log_destination
    bigip_log_publisher:
      redirect: f5networks.f5_modules.bigip_log_publisher
    bigip_lx_package:
      redirect: f5networks.f5_modules.bigip_lx_package
    bigip_management_route:
      redirect: f5networks.f5_modules.bigip_management_route
    bigip_message_routing_peer:
      redirect: f5networks.f5_modules.bigip_message_routing_peer
    bigip_message_routing_protocol:
      redirect: f5networks.f5_modules.bigip_message_routing_protocol
    bigip_message_routing_route:
      redirect: f5networks.f5_modules.bigip_message_routing_route
    bigip_message_routing_router:
      redirect: f5networks.f5_modules.bigip_message_routing_router
    bigip_message_routing_transport_config:
      redirect: f5networks.f5_modules.bigip_message_routing_transport_config
    bigip_monitor_dns:
      redirect: f5networks.f5_modules.bigip_monitor_dns
    bigip_monitor_external:
      redirect: f5networks.f5_modules.bigip_monitor_external
    bigip_monitor_gateway_icmp:
      redirect: f5networks.f5_modules.bigip_monitor_gateway_icmp
    bigip_monitor_http:
      redirect: f5networks.f5_modules.bigip_monitor_http
    bigip_monitor_https:
      redirect: f5networks.f5_modules.bigip_monitor_https
    bigip_monitor_ldap:
      redirect: f5networks.f5_modules.bigip_monitor_ldap
    bigip_monitor_snmp_dca:
      redirect: f5networks.f5_modules.bigip_monitor_snmp_dca
    bigip_monitor_tcp:
      redirect: f5networks.f5_modules.bigip_monitor_tcp
    bigip_monitor_tcp_echo:
      redirect: f5networks.f5_modules.bigip_monitor_tcp_echo
    bigip_monitor_tcp_half_open:
      redirect: f5networks.f5_modules.bigip_monitor_tcp_half_open
    bigip_monitor_udp:
      redirect: f5networks.f5_modules.bigip_monitor_udp
    bigip_node:
      redirect: f5networks.f5_modules.bigip_node
    bigip_partition:
      redirect: f5networks.f5_modules.bigip_partition
    bigip_password_policy:
      redirect: f5networks.f5_modules.bigip_password_policy
    bigip_policy:
      redirect: f5networks.f5_modules.bigip_policy
    bigip_policy_rule:
      redirect: f5networks.f5_modules.bigip_policy_rule
    bigip_pool:
      redirect: f5networks.f5_modules.bigip_pool
    bigip_pool_member:
      redirect: f5networks.f5_modules.bigip_pool_member
    bigip_profile_analytics:
      redirect: f5networks.f5_modules.bigip_profile_analytics
    bigip_profile_client_ssl:
      redirect: f5networks.f5_modules.bigip_profile_client_ssl
    bigip_profile_dns:
      redirect: f5networks.f5_modules.bigip_profile_dns
    bigip_profile_fastl4:
      redirect: f5networks.f5_modules.bigip_profile_fastl4
    bigip_profile_http:
      redirect: f5networks.f5_modules.bigip_profile_http
    bigip_profile_http2:
      redirect: f5networks.f5_modules.bigip_profile_http2
    bigip_profile_http_compression:
      redirect: f5networks.f5_modules.bigip_profile_http_compression
    bigip_profile_oneconnect:
      redirect: f5networks.f5_modules.bigip_profile_oneconnect
    bigip_profile_persistence_cookie:
      redirect: f5networks.f5_modules.bigip_profile_persistence_cookie
    bigip_profile_persistence_src_addr:
      redirect: f5networks.f5_modules.bigip_profile_persistence_src_addr
    bigip_profile_server_ssl:
      redirect: f5networks.f5_modules.bigip_profile_server_ssl
    bigip_profile_tcp:
      redirect: f5networks.f5_modules.bigip_profile_tcp
    bigip_profile_udp:
      redirect: f5networks.f5_modules.bigip_profile_udp
    bigip_provision:
      redirect: f5networks.f5_modules.bigip_provision
    bigip_qkview:
      redirect: f5networks.f5_modules.bigip_qkview
    bigip_remote_role:
      redirect: f5networks.f5_modules.bigip_remote_role
    bigip_remote_syslog:
      redirect: f5networks.f5_modules.bigip_remote_syslog
    bigip_remote_user:
      redirect: f5networks.f5_modules.bigip_remote_user
    bigip_routedomain:
      redirect: f5networks.f5_modules.bigip_routedomain
    bigip_selfip:
      redirect: f5networks.f5_modules.bigip_selfip
    bigip_service_policy:
      redirect: f5networks.f5_modules.bigip_service_policy
    bigip_smtp:
      redirect: f5networks.f5_modules.bigip_smtp
    bigip_snat_pool:
      redirect: f5networks.f5_modules.bigip_snat_pool
    bigip_snat_translation:
      redirect: f5networks.f5_modules.bigip_snat_translation
    bigip_snmp:
      redirect: f5networks.f5_modules.bigip_snmp
    bigip_snmp_community:
      redirect: f5networks.f5_modules.bigip_snmp_community
    bigip_snmp_trap:
      redirect: f5networks.f5_modules.bigip_snmp_trap
    bigip_software_image:
      redirect: f5networks.f5_modules.bigip_software_image
    bigip_software_install:
      redirect: f5networks.f5_modules.bigip_software_install
    bigip_software_update:
      redirect: f5networks.f5_modules.bigip_software_update
    bigip_ssl_certificate:
      redirect: f5networks.f5_modules.bigip_ssl_certificate
    bigip_ssl_key:
      redirect: f5networks.f5_modules.bigip_ssl_key
    bigip_ssl_ocsp:
      redirect: f5networks.f5_modules.bigip_ssl_ocsp
    bigip_static_route:
      redirect: f5networks.f5_modules.bigip_static_route
    bigip_sys_daemon_log_tmm:
      redirect: f5networks.f5_modules.bigip_sys_daemon_log_tmm
    bigip_sys_db:
      redirect: f5networks.f5_modules.bigip_sys_db
    bigip_sys_global:
      redirect: f5networks.f5_modules.bigip_sys_global
    bigip_timer_policy:
      redirect: f5networks.f5_modules.bigip_timer_policy
    bigip_traffic_selector:
      redirect: f5networks.f5_modules.bigip_traffic_selector
    bigip_trunk:
      redirect: f5networks.f5_modules.bigip_trunk
    bigip_tunnel:
      redirect: f5networks.f5_modules.bigip_tunnel
    bigip_ucs:
      redirect: f5networks.f5_modules.bigip_ucs
    bigip_ucs_fetch:
      redirect: f5networks.f5_modules.bigip_ucs_fetch
    bigip_user:
      redirect: f5networks.f5_modules.bigip_user
    bigip_vcmp_guest:
      redirect: f5networks.f5_modules.bigip_vcmp_guest
    bigip_virtual_address:
      redirect: f5networks.f5_modules.bigip_virtual_address
    bigip_virtual_server:
      redirect: f5networks.f5_modules.bigip_virtual_server
    bigip_vlan:
      redirect: f5networks.f5_modules.bigip_vlan
    bigip_wait:
      redirect: f5networks.f5_modules.bigip_wait
    bigiq_application_fasthttp:
      redirect: f5networks.f5_modules.bigiq_application_fasthttp
    bigiq_application_fastl4_tcp:
      redirect: f5networks.f5_modules.bigiq_application_fastl4_tcp
    bigiq_application_fastl4_udp:
      redirect: f5networks.f5_modules.bigiq_application_fastl4_udp
    bigiq_application_http:
      redirect: f5networks.f5_modules.bigiq_application_http
    bigiq_application_https_offload:
      redirect: f5networks.f5_modules.bigiq_application_https_offload
    bigiq_application_https_waf:
      redirect: f5networks.f5_modules.bigiq_application_https_waf
    bigiq_device_discovery:
      redirect: f5networks.f5_modules.bigiq_device_discovery
    bigiq_device_info:
      redirect: f5networks.f5_modules.bigiq_device_info
    bigiq_regkey_license:
      redirect: f5networks.f5_modules.bigiq_regkey_license
    bigiq_regkey_license_assignment:
      redirect: f5networks.f5_modules.bigiq_regkey_license_assignment
    bigiq_regkey_pool:
      redirect: f5networks.f5_modules.bigiq_regkey_pool
    bigiq_utility_license:
      redirect: f5networks.f5_modules.bigiq_utility_license
    bigiq_utility_license_assignment:
      redirect: f5networks.f5_modules.bigiq_utility_license_assignment
    os_auth:
      redirect: openstack.cloud.auth
    os_client_config:
      redirect: openstack.cloud.config
    os_coe_cluster:
      redirect: openstack.cloud.coe_cluster
    os_coe_cluster_template:
      redirect: openstack.cloud.coe_cluster_template
    os_flavor_info:
      redirect: openstack.cloud.compute_flavor_info
    os_floating_ip:
      redirect: openstack.cloud.floating_ip
    os_group:
      redirect: openstack.cloud.identity_group
    os_group_info:
      redirect: openstack.cloud.identity_group_info
    os_image:
      redirect: openstack.cloud.image
    os_image_info:
      redirect: openstack.cloud.image_info
    os_ironic:
      redirect: openstack.cloud.baremetal_node
    os_ironic_inspect:
      redirect: openstack.cloud.baremetal_inspect
    os_ironic_node:
      redirect: openstack.cloud.baremetal_node_action
    os_keypair:
      redirect: openstack.cloud.keypair
    os_keystone_domain:
      redirect: openstack.cloud.identity_domain
    os_keystone_domain_info:
      redirect: openstack.cloud.identity_domain_info
    os_keystone_endpoint:
      redirect: openstack.cloud.endpoint
    os_keystone_role:
      redirect: openstack.cloud.identity_role
    os_keystone_service:
      redirect: openstack.cloud.catalog_service
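    # A redirect may rename as well as relocate: the os_* entries above map,
    # for instance, os_ironic to openstack.cloud.baremetal_node and
    # os_keystone_service to openstack.cloud.catalog_service, so playbooks
    # using the old OpenStack short names pick up both the new namespace and
    # the new module names.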
    os_listener:
      redirect: openstack.cloud.lb_listener
    os_loadbalancer:
      redirect: openstack.cloud.loadbalancer
    os_member:
      redirect: openstack.cloud.lb_member
    os_network:
      redirect: openstack.cloud.network
    os_networks_info:
      redirect: openstack.cloud.networks_info
    os_nova_flavor:
      redirect: openstack.cloud.compute_flavor
    os_nova_host_aggregate:
      redirect: openstack.cloud.host_aggregate
    os_object:
      redirect: openstack.cloud.object
    os_pool:
      redirect: openstack.cloud.lb_pool
    os_port:
      redirect: openstack.cloud.port
    os_port_info:
      redirect: openstack.cloud.port_info
    os_project:
      redirect: openstack.cloud.project
    os_project_access:
      redirect: openstack.cloud.project_access
    os_project_info:
      redirect: openstack.cloud.project_info
    os_quota:
      redirect: openstack.cloud.quota
    os_recordset:
      redirect: openstack.cloud.recordset
    os_router:
      redirect: openstack.cloud.router
    os_security_group:
      redirect: openstack.cloud.security_group
    os_security_group_rule:
      redirect: openstack.cloud.security_group_rule
    os_server:
      redirect: openstack.cloud.server
    os_server_action:
      redirect: openstack.cloud.server_action
    os_server_group:
      redirect: openstack.cloud.server_group
    os_server_info:
      redirect: openstack.cloud.server_info
    os_server_metadata:
      redirect: openstack.cloud.server_metadata
    os_server_volume:
      redirect: openstack.cloud.server_volume
    os_stack:
      redirect: openstack.cloud.stack
    os_subnet:
      redirect: openstack.cloud.subnet
    os_subnets_info:
      redirect: openstack.cloud.subnets_info
    os_user:
      redirect: openstack.cloud.identity_user
    os_user_group:
      redirect: openstack.cloud.group_assignment
    os_user_info:
      redirect: openstack.cloud.identity_user_info
    os_user_role:
      redirect: openstack.cloud.role_assignment
    os_volume:
      redirect: openstack.cloud.volume
    os_volume_snapshot:
      redirect: openstack.cloud.volume_snapshot
    os_zone:
      redirect: openstack.cloud.dns_zone
    junos_acls:
      redirect: junipernetworks.junos.junos_acls
    junos_acl_interfaces:
      redirect: junipernetworks.junos.junos_acl_interfaces
    junos_ospfv2:
      redirect: junipernetworks.junos.junos_ospfv2
    junos_user:
      redirect: junipernetworks.junos.junos_user
    junos_l2_interface:
      redirect: junipernetworks.junos.junos_l2_interface
    junos_lldp:
      redirect: junipernetworks.junos.junos_lldp
    junos_rpc:
      redirect: junipernetworks.junos.junos_rpc
    junos_l2_interfaces:
      redirect: junipernetworks.junos.junos_l2_interfaces
    junos_lldp_interface:
      redirect: junipernetworks.junos.junos_lldp_interface
    junos_static_route:
      redirect: junipernetworks.junos.junos_static_route
    junos_lacp:
      redirect: junipernetworks.junos.junos_lacp
    junos_lacp_interfaces:
      redirect: junipernetworks.junos.junos_lacp_interfaces
    junos_vlans:
      redirect: junipernetworks.junos.junos_vlans
    junos_linkagg:
      redirect: junipernetworks.junos.junos_linkagg
    junos_scp:
      redirect: junipernetworks.junos.junos_scp
    junos_banner:
      redirect: junipernetworks.junos.junos_banner
    junos_l3_interface:
      redirect: junipernetworks.junos.junos_l3_interface
    junos_logging:
      redirect: junipernetworks.junos.junos_logging
    junos_package:
      redirect: junipernetworks.junos.junos_package
    junos_netconf:
      redirect: junipernetworks.junos.junos_netconf
    junos_facts:
      redirect: junipernetworks.junos.junos_facts
    junos_ping:
      redirect: junipernetworks.junos.junos_ping
    junos_interface:
      redirect: junipernetworks.junos.junos_interface
    junos_lldp_global:
      redirect: junipernetworks.junos.junos_lldp_global
    junos_config:
      redirect: junipernetworks.junos.junos_config
    junos_static_routes:
      redirect: junipernetworks.junos.junos_static_routes
    junos_command:
      redirect: junipernetworks.junos.junos_command
    junos_lag_interfaces:
      redirect: junipernetworks.junos.junos_lag_interfaces
    junos_l3_interfaces:
      redirect: junipernetworks.junos.junos_l3_interfaces
    junos_lldp_interfaces:
      redirect: junipernetworks.junos.junos_lldp_interfaces
    junos_vlan:
      redirect: junipernetworks.junos.junos_vlan
    junos_system:
      redirect: junipernetworks.junos.junos_system
    junos_interfaces:
      redirect: junipernetworks.junos.junos_interfaces
    junos_vrf:
      redirect: junipernetworks.junos.junos_vrf
    tower_credential:
      redirect: awx.awx.tower_credential
    tower_credential_type:
      redirect: awx.awx.tower_credential_type
    tower_group:
      redirect: awx.awx.tower_group
    tower_host:
      redirect: awx.awx.tower_host
    tower_inventory:
      redirect: awx.awx.tower_inventory
    tower_inventory_source:
      redirect: awx.awx.tower_inventory_source
    tower_job_cancel:
      redirect: awx.awx.tower_job_cancel
    tower_job_launch:
      redirect: awx.awx.tower_job_launch
    tower_job_list:
      redirect: awx.awx.tower_job_list
    tower_job_template:
      redirect: awx.awx.tower_job_template
    tower_job_wait:
      redirect: awx.awx.tower_job_wait
    tower_label:
      redirect: awx.awx.tower_label
    tower_notification:
      redirect: awx.awx.tower_notification
    tower_organization:
      redirect: awx.awx.tower_organization
    tower_project:
      redirect: awx.awx.tower_project
    tower_receive:
      redirect: awx.awx.tower_receive
    tower_role:
      redirect: awx.awx.tower_role
    tower_send:
      redirect: awx.awx.tower_send
    tower_settings:
      redirect: awx.awx.tower_settings
    tower_team:
      redirect: awx.awx.tower_team
    tower_user:
      redirect: awx.awx.tower_user
    tower_workflow_launch:
      redirect: awx.awx.tower_workflow_launch
    tower_workflow_template:
      redirect: awx.awx.tower_workflow_template
    ovirt_affinity_group:
      redirect: ovirt.ovirt.ovirt_affinity_group
    ovirt_affinity_label:
      redirect: ovirt.ovirt.ovirt_affinity_label
    ovirt_affinity_label_info:
      redirect: ovirt.ovirt.ovirt_affinity_label_info
    ovirt_api_info:
      redirect: ovirt.ovirt.ovirt_api_info
    ovirt_auth:
      redirect: ovirt.ovirt.ovirt_auth
    ovirt_cluster:
      redirect: ovirt.ovirt.ovirt_cluster
    ovirt_cluster_info:
      redirect: ovirt.ovirt.ovirt_cluster_info
    ovirt_datacenter:
      redirect: ovirt.ovirt.ovirt_datacenter
    ovirt_datacenter_info:
      redirect: ovirt.ovirt.ovirt_datacenter_info
    ovirt_disk:
      redirect: ovirt.ovirt.ovirt_disk
    ovirt_disk_info:
      redirect: ovirt.ovirt.ovirt_disk_info
    ovirt_event:
      redirect: ovirt.ovirt.ovirt_event
    ovirt_event_info:
      redirect: ovirt.ovirt.ovirt_event_info
    ovirt_external_provider:
      redirect: ovirt.ovirt.ovirt_external_provider
    ovirt_external_provider_info:
      redirect: ovirt.ovirt.ovirt_external_provider_info
    ovirt_group:
      redirect: ovirt.ovirt.ovirt_group
    ovirt_group_info:
      redirect: ovirt.ovirt.ovirt_group_info
    ovirt_host:
      redirect: ovirt.ovirt.ovirt_host
    ovirt_host_info:
      redirect: ovirt.ovirt.ovirt_host_info
    ovirt_host_network:
      redirect: ovirt.ovirt.ovirt_host_network
    ovirt_host_pm:
      redirect: ovirt.ovirt.ovirt_host_pm
    ovirt_host_storage_info:
      redirect: ovirt.ovirt.ovirt_host_storage_info
    ovirt_instance_type:
      redirect: ovirt.ovirt.ovirt_instance_type
    ovirt_job:
      redirect: ovirt.ovirt.ovirt_job
    ovirt_mac_pool:
      redirect: ovirt.ovirt.ovirt_mac_pool
    ovirt_network:
      redirect: ovirt.ovirt.ovirt_network
    ovirt_network_info:
      redirect: ovirt.ovirt.ovirt_network_info
    ovirt_nic:
      redirect: ovirt.ovirt.ovirt_nic
    ovirt_nic_info:
      redirect: ovirt.ovirt.ovirt_nic_info
    ovirt_permission:
      redirect: ovirt.ovirt.ovirt_permission
    ovirt_permission_info:
      redirect: ovirt.ovirt.ovirt_permission_info
    ovirt_quota:
      redirect: ovirt.ovirt.ovirt_quota
    ovirt_quota_info:
      redirect: ovirt.ovirt.ovirt_quota_info
    ovirt_role:
      redirect: ovirt.ovirt.ovirt_role
ovirt_scheduling_policy_info: redirect: ovirt.ovirt.ovirt_scheduling_policy_info ovirt_snapshot: redirect: ovirt.ovirt.ovirt_snapshot ovirt_snapshot_info: redirect: ovirt.ovirt.ovirt_snapshot_info ovirt_storage_connection: redirect: ovirt.ovirt.ovirt_storage_connection ovirt_storage_domain: redirect: ovirt.ovirt.ovirt_storage_domain ovirt_storage_domain_info: redirect: ovirt.ovirt.ovirt_storage_domain_info ovirt_storage_template_info: redirect: ovirt.ovirt.ovirt_storage_template_info ovirt_storage_vm_info: redirect: ovirt.ovirt.ovirt_storage_vm_info ovirt_tag: redirect: ovirt.ovirt.ovirt_tag ovirt_tag_info: redirect: ovirt.ovirt.ovirt_tag_info ovirt_template: redirect: ovirt.ovirt.ovirt_template ovirt_template_info: redirect: ovirt.ovirt.ovirt_template_info ovirt_user: redirect: ovirt.ovirt.ovirt_user ovirt_user_info: redirect: ovirt.ovirt.ovirt_user_info ovirt_vm: redirect: ovirt.ovirt.ovirt_vm ovirt_vm_info: redirect: ovirt.ovirt.ovirt_vm_info ovirt_vmpool: redirect: ovirt.ovirt.ovirt_vmpool ovirt_vmpool_info: redirect: ovirt.ovirt.ovirt_vmpool_info ovirt_vnic_profile: redirect: ovirt.ovirt.ovirt_vnic_profile ovirt_vnic_profile_info: redirect: ovirt.ovirt.ovirt_vnic_profile_info dellos10_command: redirect: dellemc.os10.os10_command dellos10_facts: redirect: dellemc.os10.os10_facts dellos10_config: redirect: dellemc.os10.os10_config dellos9_facts: redirect: dellemc.os9.os9_facts dellos9_command: redirect: dellemc.os9.os9_command dellos9_config: redirect: dellemc.os9.os9_config dellos6_facts: redirect: dellemc.os6.os6_facts dellos6_config: redirect: dellemc.os6.os6_config dellos6_command: redirect: dellemc.os6.os6_command hcloud_location_facts: redirect: hetzner.hcloud.hcloud_location_facts hcloud_server_info: redirect: hetzner.hcloud.hcloud_server_info hcloud_server_network: redirect: hetzner.hcloud.hcloud_server_network hcloud_server_type_info: redirect: hetzner.hcloud.hcloud_server_type_info hcloud_route: redirect: hetzner.hcloud.hcloud_route hcloud_server: redirect: hetzner.hcloud.hcloud_server hcloud_volume_info: redirect: hetzner.hcloud.hcloud_volume_info hcloud_server_type_facts: redirect: hetzner.hcloud.hcloud_server_type_facts hcloud_ssh_key_info: redirect: hetzner.hcloud.hcloud_ssh_key_info hcloud_network_info: redirect: hetzner.hcloud.hcloud_network_info hcloud_datacenter_info: redirect: hetzner.hcloud.hcloud_datacenter_info hcloud_image_facts: redirect: hetzner.hcloud.hcloud_image_facts hcloud_volume_facts: redirect: hetzner.hcloud.hcloud_volume_facts hcloud_floating_ip_info: redirect: hetzner.hcloud.hcloud_floating_ip_info hcloud_floating_ip_facts: redirect: hetzner.hcloud.hcloud_floating_ip_facts hcloud_image_info: redirect: hetzner.hcloud.hcloud_image_info hcloud_ssh_key_facts: redirect: hetzner.hcloud.hcloud_ssh_key_facts hcloud_location_info: redirect: hetzner.hcloud.hcloud_location_info hcloud_network: redirect: hetzner.hcloud.hcloud_network hcloud_volume: redirect: hetzner.hcloud.hcloud_volume hcloud_ssh_key: redirect: hetzner.hcloud.hcloud_ssh_key hcloud_datacenter_facts: redirect: hetzner.hcloud.hcloud_datacenter_facts hcloud_rdns: redirect: hetzner.hcloud.hcloud_rdns hcloud_floating_ip: redirect: hetzner.hcloud.hcloud_floating_ip hcloud_server_facts: redirect: hetzner.hcloud.hcloud_server_facts hcloud_subnetwork: redirect: hetzner.hcloud.hcloud_subnetwork skydive_capture: redirect: community.skydive.skydive_capture skydive_edge: redirect: community.skydive.skydive_edge skydive_node: redirect: community.skydive.skydive_node cyberark_authentication: redirect: 
cyberark.pas.cyberark_authentication cyberark_user: redirect: cyberark.pas.cyberark_user gcp_appengine_firewall_rule: redirect: google.cloud.gcp_appengine_firewall_rule gcp_appengine_firewall_rule_info: redirect: google.cloud.gcp_appengine_firewall_rule_info gcp_bigquery_dataset: redirect: google.cloud.gcp_bigquery_dataset gcp_bigquery_dataset_info: redirect: google.cloud.gcp_bigquery_dataset_info gcp_bigquery_table: redirect: google.cloud.gcp_bigquery_table gcp_bigquery_table_info: redirect: google.cloud.gcp_bigquery_table_info gcp_cloudbuild_trigger: redirect: google.cloud.gcp_cloudbuild_trigger gcp_cloudbuild_trigger_info: redirect: google.cloud.gcp_cloudbuild_trigger_info gcp_cloudfunctions_cloud_function: redirect: google.cloud.gcp_cloudfunctions_cloud_function gcp_cloudfunctions_cloud_function_info: redirect: google.cloud.gcp_cloudfunctions_cloud_function_info gcp_cloudscheduler_job: redirect: google.cloud.gcp_cloudscheduler_job gcp_cloudscheduler_job_info: redirect: google.cloud.gcp_cloudscheduler_job_info gcp_cloudtasks_queue: redirect: google.cloud.gcp_cloudtasks_queue gcp_cloudtasks_queue_info: redirect: google.cloud.gcp_cloudtasks_queue_info gcp_compute_address: redirect: google.cloud.gcp_compute_address gcp_compute_address_info: redirect: google.cloud.gcp_compute_address_info gcp_compute_autoscaler: redirect: google.cloud.gcp_compute_autoscaler gcp_compute_autoscaler_info: redirect: google.cloud.gcp_compute_autoscaler_info gcp_compute_backend_bucket: redirect: google.cloud.gcp_compute_backend_bucket gcp_compute_backend_bucket_info: redirect: google.cloud.gcp_compute_backend_bucket_info gcp_compute_backend_service: redirect: google.cloud.gcp_compute_backend_service gcp_compute_backend_service_info: redirect: google.cloud.gcp_compute_backend_service_info gcp_compute_disk: redirect: google.cloud.gcp_compute_disk gcp_compute_disk_info: redirect: google.cloud.gcp_compute_disk_info gcp_compute_firewall: redirect: google.cloud.gcp_compute_firewall gcp_compute_firewall_info: redirect: google.cloud.gcp_compute_firewall_info gcp_compute_forwarding_rule: redirect: google.cloud.gcp_compute_forwarding_rule gcp_compute_forwarding_rule_info: redirect: google.cloud.gcp_compute_forwarding_rule_info gcp_compute_global_address: redirect: google.cloud.gcp_compute_global_address gcp_compute_global_address_info: redirect: google.cloud.gcp_compute_global_address_info gcp_compute_global_forwarding_rule: redirect: google.cloud.gcp_compute_global_forwarding_rule gcp_compute_global_forwarding_rule_info: redirect: google.cloud.gcp_compute_global_forwarding_rule_info gcp_compute_health_check: redirect: google.cloud.gcp_compute_health_check gcp_compute_health_check_info: redirect: google.cloud.gcp_compute_health_check_info gcp_compute_http_health_check: redirect: google.cloud.gcp_compute_http_health_check gcp_compute_http_health_check_info: redirect: google.cloud.gcp_compute_http_health_check_info gcp_compute_https_health_check: redirect: google.cloud.gcp_compute_https_health_check gcp_compute_https_health_check_info: redirect: google.cloud.gcp_compute_https_health_check_info gcp_compute_image: redirect: google.cloud.gcp_compute_image gcp_compute_image_info: redirect: google.cloud.gcp_compute_image_info gcp_compute_instance: redirect: google.cloud.gcp_compute_instance gcp_compute_instance_group: redirect: google.cloud.gcp_compute_instance_group gcp_compute_instance_group_info: redirect: google.cloud.gcp_compute_instance_group_info gcp_compute_instance_group_manager: redirect: 
google.cloud.gcp_compute_instance_group_manager gcp_compute_instance_group_manager_info: redirect: google.cloud.gcp_compute_instance_group_manager_info gcp_compute_instance_info: redirect: google.cloud.gcp_compute_instance_info gcp_compute_instance_template: redirect: google.cloud.gcp_compute_instance_template gcp_compute_instance_template_info: redirect: google.cloud.gcp_compute_instance_template_info gcp_compute_interconnect_attachment: redirect: google.cloud.gcp_compute_interconnect_attachment gcp_compute_interconnect_attachment_info: redirect: google.cloud.gcp_compute_interconnect_attachment_info gcp_compute_network: redirect: google.cloud.gcp_compute_network gcp_compute_network_endpoint_group: redirect: google.cloud.gcp_compute_network_endpoint_group gcp_compute_network_endpoint_group_info: redirect: google.cloud.gcp_compute_network_endpoint_group_info gcp_compute_network_info: redirect: google.cloud.gcp_compute_network_info gcp_compute_node_group: redirect: google.cloud.gcp_compute_node_group gcp_compute_node_group_info: redirect: google.cloud.gcp_compute_node_group_info gcp_compute_node_template: redirect: google.cloud.gcp_compute_node_template gcp_compute_node_template_info: redirect: google.cloud.gcp_compute_node_template_info gcp_compute_region_backend_service: redirect: google.cloud.gcp_compute_region_backend_service gcp_compute_region_backend_service_info: redirect: google.cloud.gcp_compute_region_backend_service_info gcp_compute_region_disk: redirect: google.cloud.gcp_compute_region_disk gcp_compute_region_disk_info: redirect: google.cloud.gcp_compute_region_disk_info gcp_compute_reservation: redirect: google.cloud.gcp_compute_reservation gcp_compute_reservation_info: redirect: google.cloud.gcp_compute_reservation_info gcp_compute_route: redirect: google.cloud.gcp_compute_route gcp_compute_route_info: redirect: google.cloud.gcp_compute_route_info gcp_compute_router: redirect: google.cloud.gcp_compute_router gcp_compute_router_info: redirect: google.cloud.gcp_compute_router_info gcp_compute_snapshot: redirect: google.cloud.gcp_compute_snapshot gcp_compute_snapshot_info: redirect: google.cloud.gcp_compute_snapshot_info gcp_compute_ssl_certificate: redirect: google.cloud.gcp_compute_ssl_certificate gcp_compute_ssl_certificate_info: redirect: google.cloud.gcp_compute_ssl_certificate_info gcp_compute_ssl_policy: redirect: google.cloud.gcp_compute_ssl_policy gcp_compute_ssl_policy_info: redirect: google.cloud.gcp_compute_ssl_policy_info gcp_compute_subnetwork: redirect: google.cloud.gcp_compute_subnetwork gcp_compute_subnetwork_info: redirect: google.cloud.gcp_compute_subnetwork_info gcp_compute_target_http_proxy: redirect: google.cloud.gcp_compute_target_http_proxy gcp_compute_target_http_proxy_info: redirect: google.cloud.gcp_compute_target_http_proxy_info gcp_compute_target_https_proxy: redirect: google.cloud.gcp_compute_target_https_proxy gcp_compute_target_https_proxy_info: redirect: google.cloud.gcp_compute_target_https_proxy_info gcp_compute_target_instance: redirect: google.cloud.gcp_compute_target_instance gcp_compute_target_instance_info: redirect: google.cloud.gcp_compute_target_instance_info gcp_compute_target_pool: redirect: google.cloud.gcp_compute_target_pool gcp_compute_target_pool_info: redirect: google.cloud.gcp_compute_target_pool_info gcp_compute_target_ssl_proxy: redirect: google.cloud.gcp_compute_target_ssl_proxy gcp_compute_target_ssl_proxy_info: redirect: google.cloud.gcp_compute_target_ssl_proxy_info gcp_compute_target_tcp_proxy: redirect: 
google.cloud.gcp_compute_target_tcp_proxy gcp_compute_target_tcp_proxy_info: redirect: google.cloud.gcp_compute_target_tcp_proxy_info gcp_compute_target_vpn_gateway: redirect: google.cloud.gcp_compute_target_vpn_gateway gcp_compute_target_vpn_gateway_info: redirect: google.cloud.gcp_compute_target_vpn_gateway_info gcp_compute_url_map: redirect: google.cloud.gcp_compute_url_map gcp_compute_url_map_info: redirect: google.cloud.gcp_compute_url_map_info gcp_compute_vpn_tunnel: redirect: google.cloud.gcp_compute_vpn_tunnel gcp_compute_vpn_tunnel_info: redirect: google.cloud.gcp_compute_vpn_tunnel_info gcp_container_cluster: redirect: google.cloud.gcp_container_cluster gcp_container_cluster_info: redirect: google.cloud.gcp_container_cluster_info gcp_container_node_pool: redirect: google.cloud.gcp_container_node_pool gcp_container_node_pool_info: redirect: google.cloud.gcp_container_node_pool_info gcp_dns_managed_zone: redirect: google.cloud.gcp_dns_managed_zone gcp_dns_managed_zone_info: redirect: google.cloud.gcp_dns_managed_zone_info gcp_dns_resource_record_set: redirect: google.cloud.gcp_dns_resource_record_set gcp_dns_resource_record_set_info: redirect: google.cloud.gcp_dns_resource_record_set_info gcp_filestore_instance: redirect: google.cloud.gcp_filestore_instance gcp_filestore_instance_info: redirect: google.cloud.gcp_filestore_instance_info gcp_iam_role: redirect: google.cloud.gcp_iam_role gcp_iam_role_info: redirect: google.cloud.gcp_iam_role_info gcp_iam_service_account: redirect: google.cloud.gcp_iam_service_account gcp_iam_service_account_info: redirect: google.cloud.gcp_iam_service_account_info gcp_iam_service_account_key: redirect: google.cloud.gcp_iam_service_account_key gcp_kms_crypto_key: redirect: google.cloud.gcp_kms_crypto_key gcp_kms_crypto_key_info: redirect: google.cloud.gcp_kms_crypto_key_info gcp_kms_key_ring: redirect: google.cloud.gcp_kms_key_ring gcp_kms_key_ring_info: redirect: google.cloud.gcp_kms_key_ring_info gcp_logging_metric: redirect: google.cloud.gcp_logging_metric gcp_logging_metric_info: redirect: google.cloud.gcp_logging_metric_info gcp_mlengine_model: redirect: google.cloud.gcp_mlengine_model gcp_mlengine_model_info: redirect: google.cloud.gcp_mlengine_model_info gcp_mlengine_version: redirect: google.cloud.gcp_mlengine_version gcp_mlengine_version_info: redirect: google.cloud.gcp_mlengine_version_info gcp_pubsub_subscription: redirect: google.cloud.gcp_pubsub_subscription gcp_pubsub_subscription_info: redirect: google.cloud.gcp_pubsub_subscription_info gcp_pubsub_topic: redirect: google.cloud.gcp_pubsub_topic gcp_pubsub_topic_info: redirect: google.cloud.gcp_pubsub_topic_info gcp_redis_instance: redirect: google.cloud.gcp_redis_instance gcp_redis_instance_info: redirect: google.cloud.gcp_redis_instance_info gcp_resourcemanager_project: redirect: google.cloud.gcp_resourcemanager_project gcp_resourcemanager_project_info: redirect: google.cloud.gcp_resourcemanager_project_info gcp_runtimeconfig_config: redirect: google.cloud.gcp_runtimeconfig_config gcp_runtimeconfig_config_info: redirect: google.cloud.gcp_runtimeconfig_config_info gcp_runtimeconfig_variable: redirect: google.cloud.gcp_runtimeconfig_variable gcp_runtimeconfig_variable_info: redirect: google.cloud.gcp_runtimeconfig_variable_info gcp_serviceusage_service: redirect: google.cloud.gcp_serviceusage_service gcp_serviceusage_service_info: redirect: google.cloud.gcp_serviceusage_service_info gcp_sourcerepo_repository: redirect: google.cloud.gcp_sourcerepo_repository gcp_sourcerepo_repository_info: 
redirect: google.cloud.gcp_sourcerepo_repository_info gcp_spanner_database: redirect: google.cloud.gcp_spanner_database gcp_spanner_database_info: redirect: google.cloud.gcp_spanner_database_info gcp_spanner_instance: redirect: google.cloud.gcp_spanner_instance gcp_spanner_instance_info: redirect: google.cloud.gcp_spanner_instance_info gcp_sql_database: redirect: google.cloud.gcp_sql_database gcp_sql_database_info: redirect: google.cloud.gcp_sql_database_info gcp_sql_instance: redirect: google.cloud.gcp_sql_instance gcp_sql_instance_info: redirect: google.cloud.gcp_sql_instance_info gcp_sql_user: redirect: google.cloud.gcp_sql_user gcp_sql_user_info: redirect: google.cloud.gcp_sql_user_info gcp_storage_bucket: redirect: google.cloud.gcp_storage_bucket gcp_storage_bucket_access_control: redirect: google.cloud.gcp_storage_bucket_access_control gcp_storage_object: redirect: google.cloud.gcp_storage_object gcp_tpu_node: redirect: google.cloud.gcp_tpu_node gcp_tpu_node_info: redirect: google.cloud.gcp_tpu_node_info purefa_alert: redirect: purestorage.flasharray.purefa_alert purefa_arrayname: redirect: purestorage.flasharray.purefa_arrayname purefa_banner: redirect: purestorage.flasharray.purefa_banner purefa_connect: redirect: purestorage.flasharray.purefa_connect purefa_dns: redirect: purestorage.flasharray.purefa_dns purefa_ds: redirect: purestorage.flasharray.purefa_ds purefa_dsrole: redirect: purestorage.flasharray.purefa_dsrole purefa_hg: redirect: purestorage.flasharray.purefa_hg purefa_host: redirect: purestorage.flasharray.purefa_host purefa_info: redirect: purestorage.flasharray.purefa_info purefa_ntp: redirect: purestorage.flasharray.purefa_ntp purefa_offload: redirect: purestorage.flasharray.purefa_offload purefa_pg: redirect: purestorage.flasharray.purefa_pg purefa_pgsnap: redirect: purestorage.flasharray.purefa_pgsnap purefa_phonehome: redirect: purestorage.flasharray.purefa_phonehome purefa_ra: redirect: purestorage.flasharray.purefa_ra purefa_smtp: redirect: purestorage.flasharray.purefa_smtp purefa_snap: redirect: purestorage.flasharray.purefa_snap purefa_snmp: redirect: purestorage.flasharray.purefa_snmp purefa_syslog: redirect: purestorage.flasharray.purefa_syslog purefa_user: redirect: purestorage.flasharray.purefa_user purefa_vg: redirect: purestorage.flasharray.purefa_vg purefa_volume: redirect: purestorage.flasharray.purefa_volume purefb_bucket: redirect: purestorage.flashblade.purefb_bucket purefb_ds: redirect: purestorage.flashblade.purefb_ds purefb_dsrole: redirect: purestorage.flashblade.purefb_dsrole purefb_fs: redirect: purestorage.flashblade.purefb_fs purefb_info: redirect: purestorage.flashblade.purefb_info purefb_network: redirect: purestorage.flashblade.purefb_network purefb_ra: redirect: purestorage.flashblade.purefb_ra purefb_s3acc: redirect: purestorage.flashblade.purefb_s3acc purefb_s3user: redirect: purestorage.flashblade.purefb_s3user purefb_smtp: redirect: purestorage.flashblade.purefb_smtp purefb_snap: redirect: purestorage.flashblade.purefb_snap purefb_subnet: redirect: purestorage.flashblade.purefb_subnet azure_rm_acs: redirect: azure.azcollection.azure_rm_acs azure_rm_virtualmachine_info: redirect: azure.azcollection.azure_rm_virtualmachine_info azure_rm_dnsrecordset_info: redirect: azure.azcollection.azure_rm_dnsrecordset_info azure_rm_dnszone_info: redirect: azure.azcollection.azure_rm_dnszone_info azure_rm_networkinterface_info: redirect: azure.azcollection.azure_rm_networkinterface_info azure_rm_publicipaddress_info: redirect: 
azure.azcollection.azure_rm_publicipaddress_info azure_rm_securitygroup_info: redirect: azure.azcollection.azure_rm_securitygroup_info azure_rm_storageaccount_info: redirect: azure.azcollection.azure_rm_storageaccount_info azure_rm_virtualnetwork_info: redirect: azure.azcollection.azure_rm_virtualnetwork_info azure_rm_deployment: redirect: azure.azcollection.azure_rm_deployment azure_rm_dnsrecordset: redirect: azure.azcollection.azure_rm_dnsrecordset azure_rm_dnszone: redirect: azure.azcollection.azure_rm_dnszone azure_rm_networkinterface: redirect: azure.azcollection.azure_rm_networkinterface azure_rm_publicipaddress: redirect: azure.azcollection.azure_rm_publicipaddress azure_rm_securitygroup: redirect: azure.azcollection.azure_rm_securitygroup azure_rm_storageaccount: redirect: azure.azcollection.azure_rm_storageaccount azure_rm_subnet: redirect: azure.azcollection.azure_rm_subnet azure_rm_virtualmachine: redirect: azure.azcollection.azure_rm_virtualmachine azure_rm_virtualnetwork: redirect: azure.azcollection.azure_rm_virtualnetwork azure_rm_aks: redirect: azure.azcollection.azure_rm_aks azure_rm_aks_info: redirect: azure.azcollection.azure_rm_aks_info azure_rm_aksversion_info: redirect: azure.azcollection.azure_rm_aksversion_info azure_rm_appgateway: redirect: azure.azcollection.azure_rm_appgateway azure_rm_applicationsecuritygroup: redirect: azure.azcollection.azure_rm_applicationsecuritygroup azure_rm_applicationsecuritygroup_info: redirect: azure.azcollection.azure_rm_applicationsecuritygroup_info azure_rm_appserviceplan: redirect: azure.azcollection.azure_rm_appserviceplan azure_rm_appserviceplan_info: redirect: azure.azcollection.azure_rm_appserviceplan_info azure_rm_availabilityset: redirect: azure.azcollection.azure_rm_availabilityset azure_rm_availabilityset_info: redirect: azure.azcollection.azure_rm_availabilityset_info azure_rm_containerinstance: redirect: azure.azcollection.azure_rm_containerinstance azure_rm_containerinstance_info: redirect: azure.azcollection.azure_rm_containerinstance_info azure_rm_containerregistry: redirect: azure.azcollection.azure_rm_containerregistry azure_rm_containerregistry_info: redirect: azure.azcollection.azure_rm_containerregistry_info azure_rm_deployment_info: redirect: azure.azcollection.azure_rm_deployment_info azure_rm_functionapp: redirect: azure.azcollection.azure_rm_functionapp azure_rm_functionapp_info: redirect: azure.azcollection.azure_rm_functionapp_info azure_rm_gallery: redirect: azure.azcollection.azure_rm_gallery azure_rm_gallery_info: redirect: azure.azcollection.azure_rm_gallery_info azure_rm_galleryimage: redirect: azure.azcollection.azure_rm_galleryimage azure_rm_galleryimage_info: redirect: azure.azcollection.azure_rm_galleryimage_info azure_rm_galleryimageversion: redirect: azure.azcollection.azure_rm_galleryimageversion azure_rm_galleryimageversion_info: redirect: azure.azcollection.azure_rm_galleryimageversion_info azure_rm_image: redirect: azure.azcollection.azure_rm_image azure_rm_image_info: redirect: azure.azcollection.azure_rm_image_info azure_rm_keyvault: redirect: azure.azcollection.azure_rm_keyvault azure_rm_keyvault_info: redirect: azure.azcollection.azure_rm_keyvault_info azure_rm_keyvaultkey: redirect: azure.azcollection.azure_rm_keyvaultkey azure_rm_keyvaultkey_info: redirect: azure.azcollection.azure_rm_keyvaultkey_info azure_rm_keyvaultsecret: redirect: azure.azcollection.azure_rm_keyvaultsecret azure_rm_manageddisk: redirect: azure.azcollection.azure_rm_manageddisk azure_rm_manageddisk_info: redirect: 
azure.azcollection.azure_rm_manageddisk_info azure_rm_resource: redirect: azure.azcollection.azure_rm_resource azure_rm_resource_info: redirect: azure.azcollection.azure_rm_resource_info azure_rm_resourcegroup: redirect: azure.azcollection.azure_rm_resourcegroup azure_rm_resourcegroup_info: redirect: azure.azcollection.azure_rm_resourcegroup_info azure_rm_snapshot: redirect: azure.azcollection.azure_rm_snapshot azure_rm_storageblob: redirect: azure.azcollection.azure_rm_storageblob azure_rm_subnet_info: redirect: azure.azcollection.azure_rm_subnet_info azure_rm_virtualmachineextension: redirect: azure.azcollection.azure_rm_virtualmachineextension azure_rm_virtualmachineextension_info: redirect: azure.azcollection.azure_rm_virtualmachineextension_info azure_rm_virtualmachineimage_info: redirect: azure.azcollection.azure_rm_virtualmachineimage_info azure_rm_virtualmachinescaleset: redirect: azure.azcollection.azure_rm_virtualmachinescaleset azure_rm_virtualmachinescaleset_info: redirect: azure.azcollection.azure_rm_virtualmachinescaleset_info azure_rm_virtualmachinescalesetextension: redirect: azure.azcollection.azure_rm_virtualmachinescalesetextension azure_rm_virtualmachinescalesetextension_info: redirect: azure.azcollection.azure_rm_virtualmachinescalesetextension_info azure_rm_virtualmachinescalesetinstance: redirect: azure.azcollection.azure_rm_virtualmachinescalesetinstance azure_rm_virtualmachinescalesetinstance_info: redirect: azure.azcollection.azure_rm_virtualmachinescalesetinstance_info azure_rm_webapp: redirect: azure.azcollection.azure_rm_webapp azure_rm_webapp_info: redirect: azure.azcollection.azure_rm_webapp_info azure_rm_webappslot: redirect: azure.azcollection.azure_rm_webappslot azure_rm_automationaccount: redirect: azure.azcollection.azure_rm_automationaccount azure_rm_automationaccount_info: redirect: azure.azcollection.azure_rm_automationaccount_info azure_rm_autoscale: redirect: azure.azcollection.azure_rm_autoscale azure_rm_autoscale_info: redirect: azure.azcollection.azure_rm_autoscale_info azure_rm_azurefirewall: redirect: azure.azcollection.azure_rm_azurefirewall azure_rm_azurefirewall_info: redirect: azure.azcollection.azure_rm_azurefirewall_info azure_rm_batchaccount: redirect: azure.azcollection.azure_rm_batchaccount azure_rm_cdnendpoint: redirect: azure.azcollection.azure_rm_cdnendpoint azure_rm_cdnendpoint_info: redirect: azure.azcollection.azure_rm_cdnendpoint_info azure_rm_cdnprofile: redirect: azure.azcollection.azure_rm_cdnprofile azure_rm_cdnprofile_info: redirect: azure.azcollection.azure_rm_cdnprofile_info azure_rm_iotdevice: redirect: azure.azcollection.azure_rm_iotdevice azure_rm_iotdevice_info: redirect: azure.azcollection.azure_rm_iotdevice_info azure_rm_iotdevicemodule: redirect: azure.azcollection.azure_rm_iotdevicemodule azure_rm_iothub: redirect: azure.azcollection.azure_rm_iothub azure_rm_iothub_info: redirect: azure.azcollection.azure_rm_iothub_info azure_rm_iothubconsumergroup: redirect: azure.azcollection.azure_rm_iothubconsumergroup azure_rm_loadbalancer: redirect: azure.azcollection.azure_rm_loadbalancer azure_rm_loadbalancer_info: redirect: azure.azcollection.azure_rm_loadbalancer_info azure_rm_lock: redirect: azure.azcollection.azure_rm_lock azure_rm_lock_info: redirect: azure.azcollection.azure_rm_lock_info azure_rm_loganalyticsworkspace: redirect: azure.azcollection.azure_rm_loganalyticsworkspace azure_rm_loganalyticsworkspace_info: redirect: azure.azcollection.azure_rm_loganalyticsworkspace_info azure_rm_monitorlogprofile: redirect: 
azure.azcollection.azure_rm_monitorlogprofile azure_rm_rediscache: redirect: azure.azcollection.azure_rm_rediscache azure_rm_rediscache_info: redirect: azure.azcollection.azure_rm_rediscache_info azure_rm_rediscachefirewallrule: redirect: azure.azcollection.azure_rm_rediscachefirewallrule azure_rm_roleassignment: redirect: azure.azcollection.azure_rm_roleassignment azure_rm_roleassignment_info: redirect: azure.azcollection.azure_rm_roleassignment_info azure_rm_roledefinition: redirect: azure.azcollection.azure_rm_roledefinition azure_rm_roledefinition_info: redirect: azure.azcollection.azure_rm_roledefinition_info azure_rm_route: redirect: azure.azcollection.azure_rm_route azure_rm_routetable: redirect: azure.azcollection.azure_rm_routetable azure_rm_routetable_info: redirect: azure.azcollection.azure_rm_routetable_info azure_rm_servicebus: redirect: azure.azcollection.azure_rm_servicebus azure_rm_servicebus_info: redirect: azure.azcollection.azure_rm_servicebus_info azure_rm_servicebusqueue: redirect: azure.azcollection.azure_rm_servicebusqueue azure_rm_servicebussaspolicy: redirect: azure.azcollection.azure_rm_servicebussaspolicy azure_rm_servicebustopic: redirect: azure.azcollection.azure_rm_servicebustopic azure_rm_servicebustopicsubscription: redirect: azure.azcollection.azure_rm_servicebustopicsubscription azure_rm_trafficmanagerendpoint: redirect: azure.azcollection.azure_rm_trafficmanagerendpoint azure_rm_trafficmanagerendpoint_info: redirect: azure.azcollection.azure_rm_trafficmanagerendpoint_info azure_rm_trafficmanagerprofile: redirect: azure.azcollection.azure_rm_trafficmanagerprofile azure_rm_trafficmanagerprofile_info: redirect: azure.azcollection.azure_rm_trafficmanagerprofile_info azure_rm_virtualnetworkgateway: redirect: azure.azcollection.azure_rm_virtualnetworkgateway azure_rm_virtualnetworkpeering: redirect: azure.azcollection.azure_rm_virtualnetworkpeering azure_rm_virtualnetworkpeering_info: redirect: azure.azcollection.azure_rm_virtualnetworkpeering_info azure_rm_cosmosdbaccount: redirect: azure.azcollection.azure_rm_cosmosdbaccount azure_rm_cosmosdbaccount_info: redirect: azure.azcollection.azure_rm_cosmosdbaccount_info azure_rm_devtestlab: redirect: azure.azcollection.azure_rm_devtestlab azure_rm_devtestlab_info: redirect: azure.azcollection.azure_rm_devtestlab_info azure_rm_devtestlabarmtemplate_info: redirect: azure.azcollection.azure_rm_devtestlabarmtemplate_info azure_rm_devtestlabartifact_info: redirect: azure.azcollection.azure_rm_devtestlabartifact_info azure_rm_devtestlabartifactsource: redirect: azure.azcollection.azure_rm_devtestlabartifactsource azure_rm_devtestlabartifactsource_info: redirect: azure.azcollection.azure_rm_devtestlabartifactsource_info azure_rm_devtestlabcustomimage: redirect: azure.azcollection.azure_rm_devtestlabcustomimage azure_rm_devtestlabcustomimage_info: redirect: azure.azcollection.azure_rm_devtestlabcustomimage_info azure_rm_devtestlabenvironment: redirect: azure.azcollection.azure_rm_devtestlabenvironment azure_rm_devtestlabenvironment_info: redirect: azure.azcollection.azure_rm_devtestlabenvironment_info azure_rm_devtestlabpolicy: redirect: azure.azcollection.azure_rm_devtestlabpolicy azure_rm_devtestlabpolicy_info: redirect: azure.azcollection.azure_rm_devtestlabpolicy_info azure_rm_devtestlabschedule: redirect: azure.azcollection.azure_rm_devtestlabschedule azure_rm_devtestlabschedule_info: redirect: azure.azcollection.azure_rm_devtestlabschedule_info azure_rm_devtestlabvirtualmachine: redirect: 
azure.azcollection.azure_rm_devtestlabvirtualmachine azure_rm_devtestlabvirtualmachine_info: redirect: azure.azcollection.azure_rm_devtestlabvirtualmachine_info azure_rm_devtestlabvirtualnetwork: redirect: azure.azcollection.azure_rm_devtestlabvirtualnetwork azure_rm_devtestlabvirtualnetwork_info: redirect: azure.azcollection.azure_rm_devtestlabvirtualnetwork_info azure_rm_hdinsightcluster: redirect: azure.azcollection.azure_rm_hdinsightcluster azure_rm_hdinsightcluster_info: redirect: azure.azcollection.azure_rm_hdinsightcluster_info azure_rm_mariadbconfiguration: redirect: azure.azcollection.azure_rm_mariadbconfiguration azure_rm_mariadbconfiguration_info: redirect: azure.azcollection.azure_rm_mariadbconfiguration_info azure_rm_mariadbdatabase: redirect: azure.azcollection.azure_rm_mariadbdatabase azure_rm_mariadbdatabase_info: redirect: azure.azcollection.azure_rm_mariadbdatabase_info azure_rm_mariadbfirewallrule: redirect: azure.azcollection.azure_rm_mariadbfirewallrule azure_rm_mariadbfirewallrule_info: redirect: azure.azcollection.azure_rm_mariadbfirewallrule_info azure_rm_mariadbserver: redirect: azure.azcollection.azure_rm_mariadbserver azure_rm_mariadbserver_info: redirect: azure.azcollection.azure_rm_mariadbserver_info azure_rm_mysqlconfiguration: redirect: azure.azcollection.azure_rm_mysqlconfiguration azure_rm_mysqlconfiguration_info: redirect: azure.azcollection.azure_rm_mysqlconfiguration_info azure_rm_mysqldatabase: redirect: azure.azcollection.azure_rm_mysqldatabase azure_rm_mysqldatabase_info: redirect: azure.azcollection.azure_rm_mysqldatabase_info azure_rm_mysqlfirewallrule: redirect: azure.azcollection.azure_rm_mysqlfirewallrule azure_rm_mysqlfirewallrule_info: redirect: azure.azcollection.azure_rm_mysqlfirewallrule_info azure_rm_mysqlserver: redirect: azure.azcollection.azure_rm_mysqlserver azure_rm_mysqlserver_info: redirect: azure.azcollection.azure_rm_mysqlserver_info azure_rm_postgresqlconfiguration: redirect: azure.azcollection.azure_rm_postgresqlconfiguration azure_rm_postgresqlconfiguration_info: redirect: azure.azcollection.azure_rm_postgresqlconfiguration_info azure_rm_postgresqldatabase: redirect: azure.azcollection.azure_rm_postgresqldatabase azure_rm_postgresqldatabase_info: redirect: azure.azcollection.azure_rm_postgresqldatabase_info azure_rm_postgresqlfirewallrule: redirect: azure.azcollection.azure_rm_postgresqlfirewallrule azure_rm_postgresqlfirewallrule_info: redirect: azure.azcollection.azure_rm_postgresqlfirewallrule_info azure_rm_postgresqlserver: redirect: azure.azcollection.azure_rm_postgresqlserver azure_rm_postgresqlserver_info: redirect: azure.azcollection.azure_rm_postgresqlserver_info azure_rm_sqldatabase: redirect: azure.azcollection.azure_rm_sqldatabase azure_rm_sqldatabase_info: redirect: azure.azcollection.azure_rm_sqldatabase_info azure_rm_sqlfirewallrule: redirect: azure.azcollection.azure_rm_sqlfirewallrule azure_rm_sqlfirewallrule_info: redirect: azure.azcollection.azure_rm_sqlfirewallrule_info azure_rm_sqlserver: redirect: azure.azcollection.azure_rm_sqlserver azure_rm_sqlserver_info: redirect: azure.azcollection.azure_rm_sqlserver_info openvswitch_port: redirect: openvswitch.openvswitch.openvswitch_port openvswitch_db: redirect: openvswitch.openvswitch.openvswitch_db openvswitch_bridge: redirect: openvswitch.openvswitch.openvswitch_bridge vyos_ospfv2: redirect: vyos.vyos.vyos_ospfv2 vyos_l3_interface: redirect: vyos.vyos.vyos_l3_interface vyos_banner: redirect: vyos.vyos.vyos_banner vyos_firewall_rules: redirect: 
vyos.vyos.vyos_firewall_rules vyos_static_route: redirect: vyos.vyos.vyos_static_route vyos_lldp_interface: redirect: vyos.vyos.vyos_lldp_interface vyos_vlan: redirect: vyos.vyos.vyos_vlan vyos_user: redirect: vyos.vyos.vyos_user vyos_firewall_interfaces: redirect: vyos.vyos.vyos_firewall_interfaces vyos_interface: redirect: vyos.vyos.vyos_interface vyos_firewall_global: redirect: vyos.vyos.vyos_firewall_global vyos_config: redirect: vyos.vyos.vyos_config vyos_facts: redirect: vyos.vyos.vyos_facts vyos_linkagg: redirect: vyos.vyos.vyos_linkagg vyos_ping: redirect: vyos.vyos.vyos_ping vyos_lag_interfaces: redirect: vyos.vyos.vyos_lag_interfaces vyos_lldp: redirect: vyos.vyos.vyos_lldp vyos_lldp_global: redirect: vyos.vyos.vyos_lldp_global vyos_l3_interfaces: redirect: vyos.vyos.vyos_l3_interfaces vyos_lldp_interfaces: redirect: vyos.vyos.vyos_lldp_interfaces vyos_interfaces: redirect: vyos.vyos.vyos_interfaces vyos_logging: redirect: vyos.vyos.vyos_logging vyos_static_routes: redirect: vyos.vyos.vyos_static_routes vyos_command: redirect: vyos.vyos.vyos_command vyos_system: redirect: vyos.vyos.vyos_system cpm_plugconfig: redirect: wti.remote.cpm_plugconfig cpm_plugcontrol: redirect: wti.remote.cpm_plugcontrol cpm_serial_port_config: redirect: wti.remote.cpm_serial_port_config cpm_serial_port_info: redirect: wti.remote.cpm_serial_port_info cpm_user: redirect: wti.remote.cpm_user module_utils: # test entries formerly_core: redirect: ansible_collections.testns.testcoll.plugins.module_utils.base sub1.sub2.formerly_core: redirect: ansible_collections.testns.testcoll.plugins.module_utils.base # real acme: redirect: community.crypto.acme alicloud_ecs: redirect: community.general.alicloud_ecs ansible_tower: redirect: awx.awx.ansible_tower aws.batch: redirect: amazon.aws.batch aws.cloudfront_facts: redirect: amazon.aws.cloudfront_facts aws.core: redirect: amazon.aws.core aws.direct_connect: redirect: amazon.aws.direct_connect aws.elb_utils: redirect: amazon.aws.elb_utils aws.elbv2: redirect: amazon.aws.elbv2 aws.iam: redirect: amazon.aws.iam aws.rds: redirect: amazon.aws.rds aws.s3: redirect: amazon.aws.s3 aws.urls: redirect: amazon.aws.urls aws.waf: redirect: amazon.aws.waf aws.waiters: redirect: amazon.aws.waiters azure_rm_common: redirect: azure.azcollection.azure_rm_common azure_rm_common_ext: redirect: azure.azcollection.azure_rm_common_ext azure_rm_common_rest: redirect: azure.azcollection.azure_rm_common_rest cloud: redirect: community.general.cloud cloudscale: redirect: cloudscale_ch.cloud.api cloudstack: redirect: ngine_io.cloudstack.cloudstack compat.ipaddress: redirect: ansible.netcommon.compat.ipaddress crypto: redirect: community.crypto.crypto database: redirect: community.general.database digital_ocean: redirect: community.digitalocean.digital_ocean dimensiondata: redirect: community.general.dimensiondata docker: redirect: community.docker.common docker.common: redirect: community.docker.common docker.swarm: redirect: community.docker.swarm ec2: redirect: amazon.aws.ec2 ecs: redirect: community.crypto.ecs ecs.api: redirect: community.crypto.ecs.api exoscale: redirect: ngine_io.exoscale.exoscale f5_utils: tombstone: removal_date: "2019-11-06" firewalld: redirect: ansible.posix.firewalld gcdns: redirect: community.google.gcdns gce: redirect: community.google.gce gcp: redirect: community.google.gcp gcp_utils: redirect: google.cloud.gcp_utils gitlab: redirect: community.general.gitlab hcloud: redirect: hetzner.hcloud.hcloud heroku: redirect: community.general.heroku hetzner: redirect: 
    hwc_utils:
      redirect: community.general.hwc_utils
    ibm_sa_utils:
      redirect: community.general.ibm_sa_utils
    identity:
      redirect: community.general.identity
    identity.keycloak:
      redirect: community.general.identity.keycloak
    identity.keycloak.keycloak:
      redirect: community.general.identity.keycloak.keycloak
    infinibox:
      redirect: infinidat.infinibox.infinibox
    influxdb:
      redirect: community.general.influxdb
    ipa:
      redirect: community.general.ipa
    ismount:
      redirect: ansible.posix.mount
    k8s.common:
      redirect: kubernetes.core.common
    k8s.raw:
      redirect: kubernetes.core.raw
    k8s.scale:
      redirect: kubernetes.core.scale
    known_hosts:
      redirect: community.general.known_hosts
    kubevirt:
      redirect: community.kubevirt.kubevirt
    ldap:
      redirect: community.general.ldap
    linode:
      redirect: community.general.linode
    lxd:
      redirect: community.general.lxd
    manageiq:
      redirect: community.general.manageiq
    memset:
      redirect: community.general.memset
    mysql:
      redirect: community.mysql.mysql
    net_tools.netbox.netbox_utils:
      redirect: netbox.netbox.netbox_utils
    net_tools.nios:
      redirect: community.general.net_tools.nios
    net_tools.nios.api:
      redirect: community.general.net_tools.nios.api
    netapp:
      redirect: netapp.ontap.netapp
    netapp_elementsw_module:
      redirect: netapp.ontap.netapp_elementsw_module
    netapp_module:
      redirect: netapp.ontap.netapp_module
    network.a10.a10:
      redirect: community.network.network.a10.a10
    network.aci.aci:
      redirect: cisco.aci.aci
    network.aci.mso:
      redirect: cisco.mso.mso
    network.aireos.aireos:
      redirect: community.network.network.aireos.aireos
    network.aos.aos:
      redirect: community.network.network.aos.aos
    network.aruba.aruba:
      redirect: community.network.network.aruba.aruba
    network.asa.asa:
      redirect: cisco.asa.network.asa.asa
    network.avi.ansible_utils:
      redirect: community.network.network.avi.ansible_utils
    network.avi.avi:
      redirect: community.network.network.avi.avi
    network.avi.avi_api:
      redirect: community.network.network.avi.avi_api
    network.bigswitch.bigswitch:
      redirect: community.network.network.bigswitch.bigswitch
    network.checkpoint.checkpoint:
      redirect: check_point.mgmt.checkpoint
    network.cloudengine.ce:
      redirect: community.network.network.cloudengine.ce
    network.cnos.cnos:
      redirect: community.network.network.cnos.cnos
    network.cnos.cnos_devicerules:
      redirect: community.network.network.cnos.cnos_devicerules
    network.cnos.cnos_errorcodes:
      redirect: community.network.network.cnos.cnos_errorcodes
    network.common.cfg.base:
      redirect: ansible.netcommon.network.common.cfg.base
    network.common.config:
      redirect: ansible.netcommon.network.common.config
    network.common.facts.facts:
      redirect: ansible.netcommon.network.common.facts.facts
    network.common.netconf:
      redirect: ansible.netcommon.network.common.netconf
    network.common.network:
      redirect: ansible.netcommon.network.common.network
    network.common.parsing:
      redirect: ansible.netcommon.network.common.parsing
    network.common.utils:
      redirect: ansible.netcommon.network.common.utils
    network.dellos10.dellos10:
      redirect: dellemc.os10.network.os10
    network.dellos9.dellos9:
      redirect: dellemc.os9.network.os9
    network.dellos6.dellos6:
      redirect: dellemc.os6.network.os6
    network.edgeos.edgeos:
      redirect: community.network.network.edgeos.edgeos
    network.edgeswitch.edgeswitch:
      redirect: community.network.network.edgeswitch.edgeswitch
    network.edgeswitch.edgeswitch_interface:
      redirect: community.network.network.edgeswitch.edgeswitch_interface
    network.enos.enos:
      redirect: community.network.network.enos.enos
    network.eos.argspec.facts:
      redirect: arista.eos.network.eos.argspec.facts
    network.eos.argspec.facts.facts:
      redirect: arista.eos.network.eos.argspec.facts.facts
    network.eos.argspec.interfaces:
      redirect: arista.eos.network.eos.argspec.interfaces
    network.eos.argspec.interfaces.interfaces:
      redirect: arista.eos.network.eos.argspec.interfaces.interfaces
    network.eos.argspec.l2_interfaces:
      redirect: arista.eos.network.eos.argspec.l2_interfaces
    network.eos.argspec.l2_interfaces.l2_interfaces:
      redirect: arista.eos.network.eos.argspec.l2_interfaces.l2_interfaces
    network.eos.argspec.l3_interfaces:
      redirect: arista.eos.network.eos.argspec.l3_interfaces
    network.eos.argspec.l3_interfaces.l3_interfaces:
      redirect: arista.eos.network.eos.argspec.l3_interfaces.l3_interfaces
    network.eos.argspec.lacp:
      redirect: arista.eos.network.eos.argspec.lacp
    network.eos.argspec.lacp.lacp:
      redirect: arista.eos.network.eos.argspec.lacp.lacp
    network.eos.argspec.lacp_interfaces:
      redirect: arista.eos.network.eos.argspec.lacp_interfaces
    network.eos.argspec.lacp_interfaces.lacp_interfaces:
      redirect: arista.eos.network.eos.argspec.lacp_interfaces.lacp_interfaces
    network.eos.argspec.lag_interfaces:
      redirect: arista.eos.network.eos.argspec.lag_interfaces
    network.eos.argspec.lag_interfaces.lag_interfaces:
      redirect: arista.eos.network.eos.argspec.lag_interfaces.lag_interfaces
    network.eos.argspec.lldp_global:
      redirect: arista.eos.network.eos.argspec.lldp_global
    network.eos.argspec.lldp_global.lldp_global:
      redirect: arista.eos.network.eos.argspec.lldp_global.lldp_global
    network.eos.argspec.lldp_interfaces:
      redirect: arista.eos.network.eos.argspec.lldp_interfaces
    network.eos.argspec.lldp_interfaces.lldp_interfaces:
      redirect: arista.eos.network.eos.argspec.lldp_interfaces.lldp_interfaces
    network.eos.argspec.vlans:
      redirect: arista.eos.network.eos.argspec.vlans
    network.eos.argspec.vlans.vlans:
      redirect: arista.eos.network.eos.argspec.vlans.vlans
    network.eos.config:
      redirect: arista.eos.network.eos.config
    network.eos.config.interfaces:
      redirect: arista.eos.network.eos.config.interfaces
    network.eos.config.interfaces.interfaces:
      redirect: arista.eos.network.eos.config.interfaces.interfaces
    network.eos.config.l2_interfaces:
      redirect: arista.eos.network.eos.config.l2_interfaces
    network.eos.config.l2_interfaces.l2_interfaces:
      redirect: arista.eos.network.eos.config.l2_interfaces.l2_interfaces
    network.eos.config.l3_interfaces:
      redirect: arista.eos.network.eos.config.l3_interfaces
    network.eos.config.l3_interfaces.l3_interfaces:
      redirect: arista.eos.network.eos.config.l3_interfaces.l3_interfaces
    network.eos.config.lacp:
      redirect: arista.eos.network.eos.config.lacp
    network.eos.config.lacp.lacp:
      redirect: arista.eos.network.eos.config.lacp.lacp
    network.eos.config.lacp_interfaces:
      redirect: arista.eos.network.eos.config.lacp_interfaces
    network.eos.config.lacp_interfaces.lacp_interfaces:
      redirect: arista.eos.network.eos.config.lacp_interfaces.lacp_interfaces
    network.eos.config.lag_interfaces:
      redirect: arista.eos.network.eos.config.lag_interfaces
    network.eos.config.lag_interfaces.lag_interfaces:
      redirect: arista.eos.network.eos.config.lag_interfaces.lag_interfaces
    network.eos.config.lldp_global:
      redirect: arista.eos.network.eos.config.lldp_global
    network.eos.config.lldp_global.lldp_global:
      redirect: arista.eos.network.eos.config.lldp_global.lldp_global
    network.eos.config.lldp_interfaces:
      redirect: arista.eos.network.eos.config.lldp_interfaces
    network.eos.config.lldp_interfaces.lldp_interfaces:
      redirect: arista.eos.network.eos.config.lldp_interfaces.lldp_interfaces
    network.eos.config.vlans:
      redirect: arista.eos.network.eos.config.vlans
    network.eos.config.vlans.vlans:
      redirect: arista.eos.network.eos.config.vlans.vlans
    network.eos.eos:
      redirect: arista.eos.network.eos.eos
    network.eos.facts:
      redirect: arista.eos.network.eos.facts
    network.eos.facts.facts:
      redirect: arista.eos.network.eos.facts.facts
    network.eos.facts.interfaces:
      redirect: arista.eos.network.eos.facts.interfaces
    network.eos.facts.interfaces.interfaces:
      redirect: arista.eos.network.eos.facts.interfaces.interfaces
    network.eos.facts.l2_interfaces:
      redirect: arista.eos.network.eos.facts.l2_interfaces
    network.eos.facts.l2_interfaces.l2_interfaces:
      redirect: arista.eos.network.eos.facts.l2_interfaces.l2_interfaces
    network.eos.facts.l3_interfaces:
      redirect: arista.eos.network.eos.facts.l3_interfaces
    network.eos.facts.l3_interfaces.l3_interfaces:
      redirect: arista.eos.network.eos.facts.l3_interfaces.l3_interfaces
    network.eos.facts.lacp:
      redirect: arista.eos.network.eos.facts.lacp
    network.eos.facts.lacp.lacp:
      redirect: arista.eos.network.eos.facts.lacp.lacp
    network.eos.facts.lacp_interfaces:
      redirect: arista.eos.network.eos.facts.lacp_interfaces
    network.eos.facts.lacp_interfaces.lacp_interfaces:
      redirect: arista.eos.network.eos.facts.lacp_interfaces.lacp_interfaces
    network.eos.facts.lag_interfaces:
      redirect: arista.eos.network.eos.facts.lag_interfaces
    network.eos.facts.lag_interfaces.lag_interfaces:
      redirect: arista.eos.network.eos.facts.lag_interfaces.lag_interfaces
    network.eos.facts.legacy:
      redirect: arista.eos.network.eos.facts.legacy
    network.eos.facts.legacy.base:
      redirect: arista.eos.network.eos.facts.legacy.base
    network.eos.facts.lldp_global:
      redirect: arista.eos.network.eos.facts.lldp_global
    network.eos.facts.lldp_global.lldp_global:
      redirect: arista.eos.network.eos.facts.lldp_global.lldp_global
    network.eos.facts.lldp_interfaces:
      redirect: arista.eos.network.eos.facts.lldp_interfaces
    network.eos.facts.lldp_interfaces.lldp_interfaces:
      redirect: arista.eos.network.eos.facts.lldp_interfaces.lldp_interfaces
    network.eos.facts.vlans:
      redirect: arista.eos.network.eos.facts.vlans
    network.eos.facts.vlans.vlans:
      redirect: arista.eos.network.eos.facts.vlans.vlans
    network.eos.providers:
      redirect: arista.eos.network.eos.providers
    network.eos.providers.cli:
      redirect: arista.eos.network.eos.providers.cli
    network.eos.providers.cli.config:
      redirect: arista.eos.network.eos.providers.cli.config
    network.eos.providers.cli.config.bgp:
      redirect: arista.eos.network.eos.providers.cli.config.bgp
    network.eos.providers.cli.config.bgp.address_family:
      redirect: arista.eos.network.eos.providers.cli.config.bgp.address_family
    network.eos.providers.cli.config.bgp.neighbors:
      redirect: arista.eos.network.eos.providers.cli.config.bgp.neighbors
    network.eos.providers.cli.config.bgp.process:
      redirect: arista.eos.network.eos.providers.cli.config.bgp.process
    network.eos.providers.module:
      redirect: arista.eos.network.eos.providers.module
    network.eos.providers.providers:
      redirect: arista.eos.network.eos.providers.providers
    network.eos.utils:
      redirect: arista.eos.network.eos.utils
    network.eos.utils.utils:
      redirect: arista.eos.network.eos.utils.utils
    network.eric_eccli.eric_eccli:
      redirect: community.network.network.eric_eccli.eric_eccli
    network.exos.argspec.facts.facts:
      redirect: community.network.network.exos.argspec.facts.facts
    network.exos.argspec.lldp_global:
      redirect: community.network.network.exos.argspec.lldp_global
    network.exos.argspec.lldp_global.lldp_global:
      redirect: community.network.network.exos.argspec.lldp_global.lldp_global
    network.exos.config.lldp_global:
      redirect: community.network.network.exos.config.lldp_global
    network.exos.config.lldp_global.lldp_global:
      redirect: community.network.network.exos.config.lldp_global.lldp_global
    network.exos.exos:
      redirect: community.network.network.exos.exos
    network.exos.facts.facts:
      redirect: community.network.network.exos.facts.facts
    network.exos.facts.legacy:
      redirect: community.network.network.exos.facts.legacy
    network.exos.facts.legacy.base:
      redirect: community.network.network.exos.facts.legacy.base
    network.exos.facts.lldp_global:
      redirect: community.network.network.exos.facts.lldp_global
    network.exos.facts.lldp_global.lldp_global:
      redirect: community.network.network.exos.facts.lldp_global.lldp_global
    network.exos.utils.utils:
      redirect: community.network.network.exos.utils.utils
    network.f5.bigip:
      redirect: f5networks.f5_modules.bigip
    network.f5.bigiq:
      redirect: f5networks.f5_modules.bigiq
    network.f5.common:
      redirect: f5networks.f5_modules.common
    network.f5.compare:
      redirect: f5networks.f5_modules.compare
    network.f5.icontrol:
      redirect: f5networks.f5_modules.icontrol
    network.f5.ipaddress:
      redirect: f5networks.f5_modules.ipaddress
    # FIXME: missing
    #network.f5.iworkflow:
    #  redirect: f5networks.f5_modules.iworkflow
    #network.f5.legacy:
    #  redirect: f5networks.f5_modules.legacy
    network.f5.urls:
      redirect: f5networks.f5_modules.urls
    network.fortianalyzer.common:
      redirect: community.fortios.fortianalyzer.common
    network.fortianalyzer.fortianalyzer:
      redirect: community.fortios.fortianalyzer.fortianalyzer
    network.fortimanager.common:
      redirect: fortinet.fortimanager.common
    network.fortimanager.fortimanager:
      redirect: fortinet.fortimanager.fortimanager
    network.fortios.argspec:
      redirect: fortinet.fortios.fortios.argspec
    network.fortios.argspec.facts:
      redirect: fortinet.fortios.fortios.argspec.facts
    network.fortios.argspec.facts.facts:
      redirect: fortinet.fortios.fortios.argspec.facts.facts
    network.fortios.argspec.system:
      redirect: fortinet.fortios.fortios.argspec.system
    network.fortios.argspec.system.system:
      redirect: fortinet.fortios.fortios.argspec.system.system
    network.fortios.facts:
      redirect: fortinet.fortios.fortios.facts
    network.fortios.facts.facts:
      redirect: fortinet.fortios.fortios.facts.facts
    network.fortios.facts.system:
      redirect: fortinet.fortios.fortios.facts.system
    network.fortios.facts.system.system:
      redirect: fortinet.fortios.fortios.facts.system.system
    network.fortios.fortios:
      redirect: fortinet.fortios.fortios.fortios
    network.frr:
      redirect: frr.frr.network.frr
    network.frr.frr:
      redirect: frr.frr.network.frr.frr
    network.frr.providers:
      redirect: frr.frr.network.frr.providers
    network.frr.providers.cli:
      redirect: frr.frr.network.frr.providers.cli
    network.frr.providers.cli.config:
      redirect: frr.frr.network.frr.providers.cli.config
    network.frr.providers.cli.config.base:
      redirect: frr.frr.network.frr.providers.cli.config.base
    network.frr.providers.cli.config.bgp:
      redirect: frr.frr.network.frr.providers.cli.config.bgp
    network.frr.providers.cli.config.bgp.address_family:
      redirect: frr.frr.network.frr.providers.cli.config.bgp.address_family
    network.frr.providers.cli.config.bgp.neighbors:
      redirect: frr.frr.network.frr.providers.cli.config.bgp.neighbors
    network.frr.providers.cli.config.bgp.process:
      redirect: frr.frr.network.frr.providers.cli.config.bgp.process
    network.frr.providers.module:
      redirect: frr.frr.network.frr.providers.module
    network.frr.providers.providers:
      redirect: frr.frr.network.frr.providers.providers
    network.ftd:
      redirect: community.network.network.ftd
    network.ftd.common:
      redirect: community.network.network.ftd.common
    network.ftd.configuration:
      redirect: community.network.network.ftd.configuration
    network.ftd.device:
      redirect: community.network.network.ftd.device
    network.ftd.fdm_swagger_client:
      redirect: community.network.network.ftd.fdm_swagger_client
    network.ftd.operation:
      redirect: community.network.network.ftd.operation
    network.icx:
      redirect: community.network.network.icx
    network.icx.icx:
      redirect: community.network.network.icx.icx
    network.ingate:
      redirect: community.network.network.ingate
    network.ingate.common:
      redirect: community.network.network.ingate.common
    network.ios:
      redirect: cisco.ios.network.ios
    network.ios.argspec:
      redirect: cisco.ios.network.ios.argspec
    network.ios.argspec.facts:
      redirect: cisco.ios.network.ios.argspec.facts
    network.ios.argspec.facts.facts:
      redirect: cisco.ios.network.ios.argspec.facts.facts
    network.ios.argspec.interfaces:
      redirect: cisco.ios.network.ios.argspec.interfaces
    network.ios.argspec.interfaces.interfaces:
      redirect: cisco.ios.network.ios.argspec.interfaces.interfaces
    network.ios.argspec.l2_interfaces:
      redirect: cisco.ios.network.ios.argspec.l2_interfaces
    network.ios.argspec.l2_interfaces.l2_interfaces:
      redirect: cisco.ios.network.ios.argspec.l2_interfaces.l2_interfaces
    network.ios.argspec.l3_interfaces:
      redirect: cisco.ios.network.ios.argspec.l3_interfaces
    network.ios.argspec.l3_interfaces.l3_interfaces:
      redirect: cisco.ios.network.ios.argspec.l3_interfaces.l3_interfaces
    network.ios.argspec.lacp:
      redirect: cisco.ios.network.ios.argspec.lacp
    network.ios.argspec.lacp.lacp:
      redirect: cisco.ios.network.ios.argspec.lacp.lacp
    network.ios.argspec.lacp_interfaces:
      redirect: cisco.ios.network.ios.argspec.lacp_interfaces
    network.ios.argspec.lacp_interfaces.lacp_interfaces:
      redirect: cisco.ios.network.ios.argspec.lacp_interfaces.lacp_interfaces
    network.ios.argspec.lag_interfaces:
      redirect: cisco.ios.network.ios.argspec.lag_interfaces
    network.ios.argspec.lag_interfaces.lag_interfaces:
      redirect: cisco.ios.network.ios.argspec.lag_interfaces.lag_interfaces
    network.ios.argspec.lldp_global:
      redirect: cisco.ios.network.ios.argspec.lldp_global
    network.ios.argspec.lldp_global.lldp_global:
      redirect: cisco.ios.network.ios.argspec.lldp_global.lldp_global
    network.ios.argspec.lldp_interfaces:
      redirect: cisco.ios.network.ios.argspec.lldp_interfaces
    network.ios.argspec.lldp_interfaces.lldp_interfaces:
      redirect: cisco.ios.network.ios.argspec.lldp_interfaces.lldp_interfaces
    network.ios.argspec.vlans:
      redirect: cisco.ios.network.ios.argspec.vlans
    network.ios.argspec.vlans.vlans:
      redirect: cisco.ios.network.ios.argspec.vlans.vlans
    network.ios.config:
      redirect: cisco.ios.network.ios.config
    network.ios.config.interfaces:
      redirect: cisco.ios.network.ios.config.interfaces
    network.ios.config.interfaces.interfaces:
      redirect: cisco.ios.network.ios.config.interfaces.interfaces
    network.ios.config.l2_interfaces:
      redirect: cisco.ios.network.ios.config.l2_interfaces
    network.ios.config.l2_interfaces.l2_interfaces:
      redirect: cisco.ios.network.ios.config.l2_interfaces.l2_interfaces
    network.ios.config.l3_interfaces:
      redirect: cisco.ios.network.ios.config.l3_interfaces
    network.ios.config.l3_interfaces.l3_interfaces:
      redirect: cisco.ios.network.ios.config.l3_interfaces.l3_interfaces
    network.ios.config.lacp:
      redirect: cisco.ios.network.ios.config.lacp
    network.ios.config.lacp.lacp:
      redirect: cisco.ios.network.ios.config.lacp.lacp
    network.ios.config.lacp_interfaces:
      redirect: cisco.ios.network.ios.config.lacp_interfaces
    network.ios.config.lacp_interfaces.lacp_interfaces:
      redirect: cisco.ios.network.ios.config.lacp_interfaces.lacp_interfaces
    network.ios.config.lag_interfaces:
      redirect: cisco.ios.network.ios.config.lag_interfaces
    network.ios.config.lag_interfaces.lag_interfaces:
      redirect: cisco.ios.network.ios.config.lag_interfaces.lag_interfaces
    network.ios.config.lldp_global:
      redirect: cisco.ios.network.ios.config.lldp_global
    network.ios.config.lldp_global.lldp_global:
      redirect: cisco.ios.network.ios.config.lldp_global.lldp_global
    network.ios.config.lldp_interfaces:
      redirect: cisco.ios.network.ios.config.lldp_interfaces
    network.ios.config.lldp_interfaces.lldp_interfaces:
      redirect: cisco.ios.network.ios.config.lldp_interfaces.lldp_interfaces
    network.ios.config.vlans:
      redirect: cisco.ios.network.ios.config.vlans
    network.ios.config.vlans.vlans:
      redirect: cisco.ios.network.ios.config.vlans.vlans
    network.ios.facts:
      redirect: cisco.ios.network.ios.facts
    network.ios.facts.facts:
      redirect: cisco.ios.network.ios.facts.facts
    network.ios.facts.interfaces:
      redirect: cisco.ios.network.ios.facts.interfaces
    network.ios.facts.interfaces.interfaces:
      redirect: cisco.ios.network.ios.facts.interfaces.interfaces
    network.ios.facts.l2_interfaces:
      redirect: cisco.ios.network.ios.facts.l2_interfaces
    network.ios.facts.l2_interfaces.l2_interfaces:
      redirect: cisco.ios.network.ios.facts.l2_interfaces.l2_interfaces
    network.ios.facts.l3_interfaces:
      redirect: cisco.ios.network.ios.facts.l3_interfaces
    network.ios.facts.l3_interfaces.l3_interfaces:
      redirect: cisco.ios.network.ios.facts.l3_interfaces.l3_interfaces
    network.ios.facts.lacp:
      redirect: cisco.ios.network.ios.facts.lacp
    network.ios.facts.lacp.lacp:
      redirect: cisco.ios.network.ios.facts.lacp.lacp
    network.ios.facts.lacp_interfaces:
      redirect: cisco.ios.network.ios.facts.lacp_interfaces
    network.ios.facts.lacp_interfaces.lacp_interfaces:
      redirect: cisco.ios.network.ios.facts.lacp_interfaces.lacp_interfaces
    network.ios.facts.lag_interfaces:
      redirect: cisco.ios.network.ios.facts.lag_interfaces
    network.ios.facts.lag_interfaces.lag_interfaces:
      redirect: cisco.ios.network.ios.facts.lag_interfaces.lag_interfaces
    network.ios.facts.legacy:
      redirect: cisco.ios.network.ios.facts.legacy
    network.ios.facts.legacy.base:
      redirect: cisco.ios.network.ios.facts.legacy.base
    network.ios.facts.lldp_global:
      redirect: cisco.ios.network.ios.facts.lldp_global
    network.ios.facts.lldp_global.lldp_global:
      redirect: cisco.ios.network.ios.facts.lldp_global.lldp_global
    network.ios.facts.lldp_interfaces:
      redirect: cisco.ios.network.ios.facts.lldp_interfaces
    network.ios.facts.lldp_interfaces.lldp_interfaces:
      redirect: cisco.ios.network.ios.facts.lldp_interfaces.lldp_interfaces
    network.ios.facts.vlans:
      redirect: cisco.ios.network.ios.facts.vlans
    network.ios.facts.vlans.vlans:
      redirect: cisco.ios.network.ios.facts.vlans.vlans
    network.ios.ios:
      redirect: cisco.ios.network.ios.ios
    network.ios.providers:
      redirect: cisco.ios.network.ios.providers
    network.ios.providers.cli:
      redirect: cisco.ios.network.ios.providers.cli
    network.ios.providers.cli.config:
      redirect: cisco.ios.network.ios.providers.cli.config
    network.ios.providers.cli.config.base:
      redirect: cisco.ios.network.ios.providers.cli.config.base
    network.ios.providers.cli.config.bgp:
      redirect: cisco.ios.network.ios.providers.cli.config.bgp
    network.ios.providers.cli.config.bgp.address_family:
      redirect: cisco.ios.network.ios.providers.cli.config.bgp.address_family
    network.ios.providers.cli.config.bgp.neighbors:
      redirect: cisco.ios.network.ios.providers.cli.config.bgp.neighbors
    network.ios.providers.cli.config.bgp.process:
      redirect: cisco.ios.network.ios.providers.cli.config.bgp.process
    network.ios.providers.module:
      redirect: cisco.ios.network.ios.providers.module
    network.ios.providers.providers:
      redirect: cisco.ios.network.ios.providers.providers
    network.ios.utils:
      redirect: cisco.ios.network.ios.utils
    network.ios.utils.utils:
      redirect: cisco.ios.network.ios.utils.utils
    network.iosxr:
      redirect: cisco.iosxr.network.iosxr
    network.iosxr.argspec:
      redirect: cisco.iosxr.network.iosxr.argspec
    network.iosxr.argspec.facts:
      redirect: cisco.iosxr.network.iosxr.argspec.facts
    network.iosxr.argspec.facts.facts:
      redirect: cisco.iosxr.network.iosxr.argspec.facts.facts
    network.iosxr.argspec.interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.interfaces
    network.iosxr.argspec.interfaces.interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.interfaces.interfaces
    network.iosxr.argspec.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.l2_interfaces
    network.iosxr.argspec.l2_interfaces.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.l2_interfaces.l2_interfaces
    network.iosxr.argspec.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.l3_interfaces
    network.iosxr.argspec.l3_interfaces.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.l3_interfaces.l3_interfaces
    network.iosxr.argspec.lacp:
      redirect: cisco.iosxr.network.iosxr.argspec.lacp
    network.iosxr.argspec.lacp.lacp:
      redirect: cisco.iosxr.network.iosxr.argspec.lacp.lacp
    network.iosxr.argspec.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lacp_interfaces
    network.iosxr.argspec.lacp_interfaces.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lacp_interfaces.lacp_interfaces
    network.iosxr.argspec.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lag_interfaces
    network.iosxr.argspec.lag_interfaces.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lag_interfaces.lag_interfaces
    network.iosxr.argspec.lldp_global:
      redirect: cisco.iosxr.network.iosxr.argspec.lldp_global
    network.iosxr.argspec.lldp_global.lldp_global:
      redirect: cisco.iosxr.network.iosxr.argspec.lldp_global.lldp_global
    network.iosxr.argspec.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lldp_interfaces
    network.iosxr.argspec.lldp_interfaces.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.argspec.lldp_interfaces.lldp_interfaces
    network.iosxr.config:
      redirect: cisco.iosxr.network.iosxr.config
    network.iosxr.config.interfaces:
      redirect: cisco.iosxr.network.iosxr.config.interfaces
    network.iosxr.config.interfaces.interfaces:
      redirect: cisco.iosxr.network.iosxr.config.interfaces.interfaces
    network.iosxr.config.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.l2_interfaces
    network.iosxr.config.l2_interfaces.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.l2_interfaces.l2_interfaces
    network.iosxr.config.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.l3_interfaces
    network.iosxr.config.l3_interfaces.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.l3_interfaces.l3_interfaces
    network.iosxr.config.lacp:
      redirect: cisco.iosxr.network.iosxr.config.lacp
    network.iosxr.config.lacp.lacp:
      redirect: cisco.iosxr.network.iosxr.config.lacp.lacp
    network.iosxr.config.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lacp_interfaces
    network.iosxr.config.lacp_interfaces.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lacp_interfaces.lacp_interfaces
    network.iosxr.config.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lag_interfaces
    network.iosxr.config.lag_interfaces.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lag_interfaces.lag_interfaces
    network.iosxr.config.lldp_global:
      redirect: cisco.iosxr.network.iosxr.config.lldp_global
    network.iosxr.config.lldp_global.lldp_global:
      redirect: cisco.iosxr.network.iosxr.config.lldp_global.lldp_global
    network.iosxr.config.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lldp_interfaces
    network.iosxr.config.lldp_interfaces.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.config.lldp_interfaces.lldp_interfaces
    network.iosxr.facts:
      redirect: cisco.iosxr.network.iosxr.facts
    network.iosxr.facts.facts:
      redirect: cisco.iosxr.network.iosxr.facts.facts
    network.iosxr.facts.interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.interfaces
    network.iosxr.facts.interfaces.interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.interfaces.interfaces
    network.iosxr.facts.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.l2_interfaces
    network.iosxr.facts.l2_interfaces.l2_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.l2_interfaces.l2_interfaces
    network.iosxr.facts.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.l3_interfaces
    network.iosxr.facts.l3_interfaces.l3_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.l3_interfaces.l3_interfaces
    network.iosxr.facts.lacp:
      redirect: cisco.iosxr.network.iosxr.facts.lacp
    network.iosxr.facts.lacp.lacp:
      redirect: cisco.iosxr.network.iosxr.facts.lacp.lacp
    network.iosxr.facts.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lacp_interfaces
    network.iosxr.facts.lacp_interfaces.lacp_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lacp_interfaces.lacp_interfaces
    network.iosxr.facts.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lag_interfaces
    network.iosxr.facts.lag_interfaces.lag_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lag_interfaces.lag_interfaces
    network.iosxr.facts.legacy:
      redirect: cisco.iosxr.network.iosxr.facts.legacy
    network.iosxr.facts.legacy.base:
      redirect: cisco.iosxr.network.iosxr.facts.legacy.base
    network.iosxr.facts.lldp_global:
      redirect: cisco.iosxr.network.iosxr.facts.lldp_global
    network.iosxr.facts.lldp_global.lldp_global:
      redirect: cisco.iosxr.network.iosxr.facts.lldp_global.lldp_global
    network.iosxr.facts.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lldp_interfaces
    network.iosxr.facts.lldp_interfaces.lldp_interfaces:
      redirect: cisco.iosxr.network.iosxr.facts.lldp_interfaces.lldp_interfaces
    network.iosxr.iosxr:
      redirect: cisco.iosxr.network.iosxr.iosxr
    network.iosxr.providers:
      redirect: cisco.iosxr.network.iosxr.providers
    network.iosxr.providers.cli:
      redirect: cisco.iosxr.network.iosxr.providers.cli
    network.iosxr.providers.cli.config:
      redirect: cisco.iosxr.network.iosxr.providers.cli.config
    network.iosxr.providers.cli.config.bgp:
      redirect: cisco.iosxr.network.iosxr.providers.cli.config.bgp
    network.iosxr.providers.cli.config.bgp.address_family:
      redirect: cisco.iosxr.network.iosxr.providers.cli.config.bgp.address_family
    network.iosxr.providers.cli.config.bgp.neighbors:
      redirect: cisco.iosxr.network.iosxr.providers.cli.config.bgp.neighbors
    network.iosxr.providers.cli.config.bgp.process:
      redirect: cisco.iosxr.network.iosxr.providers.cli.config.bgp.process
    network.iosxr.providers.module:
      redirect: cisco.iosxr.network.iosxr.providers.module
    network.iosxr.providers.providers:
      redirect: cisco.iosxr.network.iosxr.providers.providers
    network.iosxr.utils:
      redirect: cisco.iosxr.network.iosxr.utils
    network.iosxr.utils.utils:
      redirect: cisco.iosxr.network.iosxr.utils.utils
    network.ironware:
      redirect: community.network.network.ironware
    network.ironware.ironware:
      redirect: community.network.network.ironware.ironware
    network.junos:
      redirect: junipernetworks.junos.network.junos
    network.junos.argspec:
      redirect: junipernetworks.junos.network.junos.argspec
    network.junos.argspec.facts:
      redirect: junipernetworks.junos.network.junos.argspec.facts
    network.junos.argspec.facts.facts:
      redirect: junipernetworks.junos.network.junos.argspec.facts.facts
    network.junos.argspec.interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.interfaces
    network.junos.argspec.interfaces.interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.interfaces.interfaces
    network.junos.argspec.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.l2_interfaces
    network.junos.argspec.l2_interfaces.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.l2_interfaces.l2_interfaces
    network.junos.argspec.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.l3_interfaces
    network.junos.argspec.l3_interfaces.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.l3_interfaces.l3_interfaces
    network.junos.argspec.lacp:
      redirect: junipernetworks.junos.network.junos.argspec.lacp
    network.junos.argspec.lacp.lacp:
      redirect: junipernetworks.junos.network.junos.argspec.lacp.lacp
    network.junos.argspec.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lacp_interfaces
    network.junos.argspec.lacp_interfaces.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lacp_interfaces.lacp_interfaces
    network.junos.argspec.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lag_interfaces
    network.junos.argspec.lag_interfaces.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lag_interfaces.lag_interfaces
    network.junos.argspec.lldp_global:
      redirect: junipernetworks.junos.network.junos.argspec.lldp_global
    network.junos.argspec.lldp_global.lldp_global:
      redirect: junipernetworks.junos.network.junos.argspec.lldp_global.lldp_global
    network.junos.argspec.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lldp_interfaces
    network.junos.argspec.lldp_interfaces.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.argspec.lldp_interfaces.lldp_interfaces
    network.junos.argspec.vlans:
      redirect: junipernetworks.junos.network.junos.argspec.vlans
    network.junos.argspec.vlans.vlans:
      redirect: junipernetworks.junos.network.junos.argspec.vlans.vlans
    network.junos.config:
      redirect: junipernetworks.junos.network.junos.config
    network.junos.config.interfaces:
      redirect: junipernetworks.junos.network.junos.config.interfaces
    network.junos.config.interfaces.interfaces:
      redirect: junipernetworks.junos.network.junos.config.interfaces.interfaces
    network.junos.config.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.config.l2_interfaces
    network.junos.config.l2_interfaces.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.config.l2_interfaces.l2_interfaces
    network.junos.config.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.config.l3_interfaces
    network.junos.config.l3_interfaces.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.config.l3_interfaces.l3_interfaces
    network.junos.config.lacp:
      redirect: junipernetworks.junos.network.junos.config.lacp
    network.junos.config.lacp.lacp:
      redirect: junipernetworks.junos.network.junos.config.lacp.lacp
    network.junos.config.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lacp_interfaces
    network.junos.config.lacp_interfaces.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lacp_interfaces.lacp_interfaces
    network.junos.config.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lag_interfaces
    network.junos.config.lag_interfaces.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lag_interfaces.lag_interfaces
    network.junos.config.lldp_global:
      redirect: junipernetworks.junos.network.junos.config.lldp_global
    network.junos.config.lldp_global.lldp_global:
      redirect: junipernetworks.junos.network.junos.config.lldp_global.lldp_global
    network.junos.config.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lldp_interfaces
    network.junos.config.lldp_interfaces.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.config.lldp_interfaces.lldp_interfaces
    network.junos.config.vlans:
      redirect: junipernetworks.junos.network.junos.config.vlans
    network.junos.config.vlans.vlans:
      redirect: junipernetworks.junos.network.junos.config.vlans.vlans
    network.junos.facts:
      redirect: junipernetworks.junos.network.junos.facts
    network.junos.facts.facts:
      redirect: junipernetworks.junos.network.junos.facts.facts
    network.junos.facts.interfaces:
      redirect: junipernetworks.junos.network.junos.facts.interfaces
    network.junos.facts.interfaces.interfaces:
      redirect: junipernetworks.junos.network.junos.facts.interfaces.interfaces
    network.junos.facts.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.l2_interfaces
    network.junos.facts.l2_interfaces.l2_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.l2_interfaces.l2_interfaces
    network.junos.facts.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.l3_interfaces
    network.junos.facts.l3_interfaces.l3_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.l3_interfaces.l3_interfaces
    network.junos.facts.lacp:
      redirect: junipernetworks.junos.network.junos.facts.lacp
    network.junos.facts.lacp.lacp:
      redirect: junipernetworks.junos.network.junos.facts.lacp.lacp
    network.junos.facts.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lacp_interfaces
    network.junos.facts.lacp_interfaces.lacp_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lacp_interfaces.lacp_interfaces
    network.junos.facts.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lag_interfaces
    network.junos.facts.lag_interfaces.lag_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lag_interfaces.lag_interfaces
    network.junos.facts.legacy:
      redirect: junipernetworks.junos.network.junos.facts.legacy
    network.junos.facts.legacy.base:
      redirect: junipernetworks.junos.network.junos.facts.legacy.base
    network.junos.facts.lldp_global:
      redirect: junipernetworks.junos.network.junos.facts.lldp_global
    network.junos.facts.lldp_global.lldp_global:
      redirect: junipernetworks.junos.network.junos.facts.lldp_global.lldp_global
    network.junos.facts.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lldp_interfaces
    network.junos.facts.lldp_interfaces.lldp_interfaces:
      redirect: junipernetworks.junos.network.junos.facts.lldp_interfaces.lldp_interfaces
    network.junos.facts.vlans:
      redirect: junipernetworks.junos.network.junos.facts.vlans
    network.junos.facts.vlans.vlans:
      redirect: junipernetworks.junos.network.junos.facts.vlans.vlans
    network.junos.junos:
      redirect: junipernetworks.junos.network.junos.junos
    network.junos.utils:
      redirect: junipernetworks.junos.network.junos.utils
    network.junos.utils.utils:
      redirect: junipernetworks.junos.network.junos.utils.utils
    network.meraki:
      redirect: cisco.meraki.network.meraki
    network.meraki.meraki:
      redirect: cisco.meraki.network.meraki.meraki
    network.netconf:
      redirect: ansible.netcommon.network.netconf
    network.netconf.netconf:
      redirect: ansible.netcommon.network.netconf.netconf
    network.netscaler:
      redirect: community.network.network.netscaler
    network.netscaler.netscaler:
      redirect: community.network.network.netscaler.netscaler
    network.netvisor:
      redirect: community.network.network.netvisor
    network.netvisor.netvisor:
      redirect: community.network.network.netvisor.netvisor
    network.netvisor.pn_nvos:
      redirect: community.network.network.netvisor.pn_nvos
    network.nos:
      redirect: community.network.network.nos
    network.nos.nos:
      redirect: community.network.network.nos.nos
    network.nso:
      redirect: cisco.nso.nso
    network.nso.nso:
      redirect: cisco.nso.nso
    network.nxos:
      redirect: cisco.nxos.network.nxos
    network.nxos.argspec:
      redirect: cisco.nxos.network.nxos.argspec
    network.nxos.argspec.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.bfd_interfaces
    network.nxos.argspec.bfd_interfaces.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.bfd_interfaces.bfd_interfaces
    network.nxos.argspec.facts:
      redirect: cisco.nxos.network.nxos.argspec.facts
    network.nxos.argspec.facts.facts:
      redirect: cisco.nxos.network.nxos.argspec.facts.facts
    network.nxos.argspec.interfaces:
      redirect: cisco.nxos.network.nxos.argspec.interfaces
    network.nxos.argspec.interfaces.interfaces:
      redirect: cisco.nxos.network.nxos.argspec.interfaces.interfaces
    network.nxos.argspec.l2_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.l2_interfaces
    network.nxos.argspec.l2_interfaces.l2_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.l2_interfaces.l2_interfaces
    network.nxos.argspec.l3_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.l3_interfaces
    network.nxos.argspec.l3_interfaces.l3_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.l3_interfaces.l3_interfaces
    network.nxos.argspec.lacp:
      redirect: cisco.nxos.network.nxos.argspec.lacp
    network.nxos.argspec.lacp.lacp:
      redirect: cisco.nxos.network.nxos.argspec.lacp.lacp
    network.nxos.argspec.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.lacp_interfaces
    network.nxos.argspec.lacp_interfaces.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.lacp_interfaces.lacp_interfaces
    network.nxos.argspec.lag_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.lag_interfaces
    network.nxos.argspec.lag_interfaces.lag_interfaces:
      redirect: cisco.nxos.network.nxos.argspec.lag_interfaces.lag_interfaces
    network.nxos.argspec.lldp_global:
      redirect: cisco.nxos.network.nxos.argspec.lldp_global
    network.nxos.argspec.lldp_global.lldp_global:
      redirect: cisco.nxos.network.nxos.argspec.lldp_global.lldp_global
    network.nxos.argspec.telemetry:
      redirect: cisco.nxos.network.nxos.argspec.telemetry
    network.nxos.argspec.telemetry.telemetry:
      redirect: cisco.nxos.network.nxos.argspec.telemetry.telemetry
    network.nxos.argspec.vlans:
      redirect: cisco.nxos.network.nxos.argspec.vlans
    network.nxos.argspec.vlans.vlans:
      redirect: cisco.nxos.network.nxos.argspec.vlans.vlans
    network.nxos.cmdref:
      redirect: cisco.nxos.network.nxos.cmdref
    network.nxos.cmdref.telemetry:
      redirect: cisco.nxos.network.nxos.cmdref.telemetry
    network.nxos.cmdref.telemetry.telemetry:
      redirect: cisco.nxos.network.nxos.cmdref.telemetry.telemetry
    network.nxos.config:
      redirect: cisco.nxos.network.nxos.config
    network.nxos.config.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.config.bfd_interfaces
    network.nxos.config.bfd_interfaces.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.config.bfd_interfaces.bfd_interfaces
    network.nxos.config.interfaces:
      redirect: cisco.nxos.network.nxos.config.interfaces
    network.nxos.config.interfaces.interfaces:
      redirect: cisco.nxos.network.nxos.config.interfaces.interfaces
    network.nxos.config.l2_interfaces:
      redirect: cisco.nxos.network.nxos.config.l2_interfaces
    network.nxos.config.l2_interfaces.l2_interfaces:
      redirect: cisco.nxos.network.nxos.config.l2_interfaces.l2_interfaces
    network.nxos.config.l3_interfaces:
      redirect: cisco.nxos.network.nxos.config.l3_interfaces
    network.nxos.config.l3_interfaces.l3_interfaces:
      redirect: cisco.nxos.network.nxos.config.l3_interfaces.l3_interfaces
    network.nxos.config.lacp:
      redirect: cisco.nxos.network.nxos.config.lacp
    network.nxos.config.lacp.lacp:
      redirect: cisco.nxos.network.nxos.config.lacp.lacp
    network.nxos.config.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.config.lacp_interfaces
    network.nxos.config.lacp_interfaces.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.config.lacp_interfaces.lacp_interfaces
    network.nxos.config.lag_interfaces:
      redirect: cisco.nxos.network.nxos.config.lag_interfaces
    network.nxos.config.lag_interfaces.lag_interfaces:
      redirect: cisco.nxos.network.nxos.config.lag_interfaces.lag_interfaces
    network.nxos.config.lldp_global:
      redirect: cisco.nxos.network.nxos.config.lldp_global
    network.nxos.config.lldp_global.lldp_global:
      redirect: cisco.nxos.network.nxos.config.lldp_global.lldp_global
    network.nxos.config.telemetry:
      redirect: cisco.nxos.network.nxos.config.telemetry
    network.nxos.config.telemetry.telemetry:
      redirect: cisco.nxos.network.nxos.config.telemetry.telemetry
    network.nxos.config.vlans:
      redirect: cisco.nxos.network.nxos.config.vlans
    network.nxos.config.vlans.vlans:
      redirect: cisco.nxos.network.nxos.config.vlans.vlans
    network.nxos.facts:
      redirect: cisco.nxos.network.nxos.facts
    network.nxos.facts.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.facts.bfd_interfaces
    network.nxos.facts.bfd_interfaces.bfd_interfaces:
      redirect: cisco.nxos.network.nxos.facts.bfd_interfaces.bfd_interfaces
    network.nxos.facts.facts:
      redirect: cisco.nxos.network.nxos.facts.facts
    network.nxos.facts.interfaces:
      redirect: cisco.nxos.network.nxos.facts.interfaces
    network.nxos.facts.interfaces.interfaces:
      redirect: cisco.nxos.network.nxos.facts.interfaces.interfaces
    network.nxos.facts.l2_interfaces:
      redirect: cisco.nxos.network.nxos.facts.l2_interfaces
    network.nxos.facts.l2_interfaces.l2_interfaces:
      redirect: cisco.nxos.network.nxos.facts.l2_interfaces.l2_interfaces
    network.nxos.facts.l3_interfaces:
      redirect: cisco.nxos.network.nxos.facts.l3_interfaces
    network.nxos.facts.l3_interfaces.l3_interfaces:
      redirect: cisco.nxos.network.nxos.facts.l3_interfaces.l3_interfaces
    network.nxos.facts.lacp:
      redirect: cisco.nxos.network.nxos.facts.lacp
    network.nxos.facts.lacp.lacp:
      redirect: cisco.nxos.network.nxos.facts.lacp.lacp
    network.nxos.facts.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.facts.lacp_interfaces
    network.nxos.facts.lacp_interfaces.lacp_interfaces:
      redirect: cisco.nxos.network.nxos.facts.lacp_interfaces.lacp_interfaces
    network.nxos.facts.lag_interfaces:
      redirect: cisco.nxos.network.nxos.facts.lag_interfaces
    network.nxos.facts.lag_interfaces.lag_interfaces:
      redirect: cisco.nxos.network.nxos.facts.lag_interfaces.lag_interfaces
    network.nxos.facts.legacy:
      redirect: cisco.nxos.network.nxos.facts.legacy
    network.nxos.facts.legacy.base:
      redirect: cisco.nxos.network.nxos.facts.legacy.base
    network.nxos.facts.lldp_global:
      redirect: cisco.nxos.network.nxos.facts.lldp_global
    network.nxos.facts.lldp_global.lldp_global:
      redirect: cisco.nxos.network.nxos.facts.lldp_global.lldp_global
    network.nxos.facts.telemetry:
      redirect: cisco.nxos.network.nxos.facts.telemetry
    network.nxos.facts.telemetry.telemetry:
      redirect: cisco.nxos.network.nxos.facts.telemetry.telemetry
    network.nxos.facts.vlans:
      redirect: cisco.nxos.network.nxos.facts.vlans
    network.nxos.facts.vlans.vlans:
      redirect: cisco.nxos.network.nxos.facts.vlans.vlans
    network.nxos.nxos:
      redirect: cisco.nxos.network.nxos.nxos
    network.nxos.utils:
      redirect: cisco.nxos.network.nxos.utils
    network.nxos.utils.telemetry:
      redirect: cisco.nxos.network.nxos.utils.telemetry
    network.nxos.utils.telemetry.telemetry:
      redirect: cisco.nxos.network.nxos.utils.telemetry.telemetry
    network.nxos.utils.utils:
      redirect: cisco.nxos.network.nxos.utils.utils
    network.onyx:
      redirect: mellanox.onyx.network.onyx
    network.onyx.onyx:
      redirect: mellanox.onyx.network.onyx.onyx
    network.ordnance:
      redirect: community.network.network.ordnance
    network.ordnance.ordnance:
      redirect: community.network.network.ordnance.ordnance
    network.panos:
      redirect: community.network.network.panos
    network.panos.panos:
      redirect: community.network.network.panos.panos
    network.restconf:
      redirect: ansible.netcommon.network.restconf
    network.restconf.restconf:
      redirect: ansible.netcommon.network.restconf.restconf
    network.routeros:
      redirect: community.routeros.routeros
    network.routeros.routeros:
      redirect: community.routeros.routeros
    network.skydive:
      redirect: community.skydive.network.skydive
    network.skydive.api:
      redirect: community.skydive.network.skydive.api
    network.slxos:
      redirect: community.network.network.slxos
    network.slxos.slxos:
      redirect: community.network.network.slxos.slxos
    network.sros:
      redirect: community.network.network.sros
    network.sros.sros:
      redirect: community.network.network.sros.sros
    network.voss:
      redirect: community.network.network.voss
    network.voss.voss:
      redirect: community.network.network.voss.voss
    network.vyos:
      redirect: vyos.vyos.network.vyos
    network.vyos.argspec:
      redirect: vyos.vyos.network.vyos.argspec
    network.vyos.argspec.facts:
      redirect: vyos.vyos.network.vyos.argspec.facts
    network.vyos.argspec.facts.facts:
      redirect: vyos.vyos.network.vyos.argspec.facts.facts
    network.vyos.argspec.interfaces:
      redirect: vyos.vyos.network.vyos.argspec.interfaces
    network.vyos.argspec.interfaces.interfaces:
      redirect: vyos.vyos.network.vyos.argspec.interfaces.interfaces
    network.vyos.argspec.l3_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.l3_interfaces
    network.vyos.argspec.l3_interfaces.l3_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.l3_interfaces.l3_interfaces
    network.vyos.argspec.lag_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.lag_interfaces
    network.vyos.argspec.lag_interfaces.lag_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.lag_interfaces.lag_interfaces
    network.vyos.argspec.lldp_global:
      redirect: vyos.vyos.network.vyos.argspec.lldp_global
    network.vyos.argspec.lldp_global.lldp_global:
      redirect: vyos.vyos.network.vyos.argspec.lldp_global.lldp_global
    network.vyos.argspec.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.lldp_interfaces
    network.vyos.argspec.lldp_interfaces.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.argspec.lldp_interfaces.lldp_interfaces
    network.vyos.config:
      redirect: vyos.vyos.network.vyos.config
    network.vyos.config.interfaces:
      redirect: vyos.vyos.network.vyos.config.interfaces
    network.vyos.config.interfaces.interfaces:
      redirect: vyos.vyos.network.vyos.config.interfaces.interfaces
    network.vyos.config.l3_interfaces:
      redirect: vyos.vyos.network.vyos.config.l3_interfaces
    network.vyos.config.l3_interfaces.l3_interfaces:
      redirect: vyos.vyos.network.vyos.config.l3_interfaces.l3_interfaces
    network.vyos.config.lag_interfaces:
      redirect: vyos.vyos.network.vyos.config.lag_interfaces
    network.vyos.config.lag_interfaces.lag_interfaces:
      redirect: vyos.vyos.network.vyos.config.lag_interfaces.lag_interfaces
    network.vyos.config.lldp_global:
      redirect: vyos.vyos.network.vyos.config.lldp_global
    network.vyos.config.lldp_global.lldp_global:
      redirect: vyos.vyos.network.vyos.config.lldp_global.lldp_global
    network.vyos.config.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.config.lldp_interfaces
    network.vyos.config.lldp_interfaces.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.config.lldp_interfaces.lldp_interfaces
    network.vyos.facts:
      redirect: vyos.vyos.network.vyos.facts
    network.vyos.facts.facts:
      redirect: vyos.vyos.network.vyos.facts.facts
    network.vyos.facts.interfaces:
      redirect: vyos.vyos.network.vyos.facts.interfaces
    network.vyos.facts.interfaces.interfaces:
      redirect: vyos.vyos.network.vyos.facts.interfaces.interfaces
    network.vyos.facts.l3_interfaces:
      redirect: vyos.vyos.network.vyos.facts.l3_interfaces
    network.vyos.facts.l3_interfaces.l3_interfaces:
      redirect: vyos.vyos.network.vyos.facts.l3_interfaces.l3_interfaces
    network.vyos.facts.lag_interfaces:
      redirect: vyos.vyos.network.vyos.facts.lag_interfaces
    network.vyos.facts.lag_interfaces.lag_interfaces:
      redirect: vyos.vyos.network.vyos.facts.lag_interfaces.lag_interfaces
    network.vyos.facts.legacy:
      redirect: vyos.vyos.network.vyos.facts.legacy
    network.vyos.facts.legacy.base:
      redirect: vyos.vyos.network.vyos.facts.legacy.base
    network.vyos.facts.lldp_global:
      redirect: vyos.vyos.network.vyos.facts.lldp_global
    network.vyos.facts.lldp_global.lldp_global:
      redirect: vyos.vyos.network.vyos.facts.lldp_global.lldp_global
    network.vyos.facts.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.facts.lldp_interfaces
    network.vyos.facts.lldp_interfaces.lldp_interfaces:
      redirect: vyos.vyos.network.vyos.facts.lldp_interfaces.lldp_interfaces
    network.vyos.utils:
      redirect: vyos.vyos.network.vyos.utils
    network.vyos.utils.utils:
      redirect: vyos.vyos.network.vyos.utils.utils
    network.vyos.vyos:
      redirect: vyos.vyos.network.vyos.vyos
    oneandone:
      redirect: community.general.oneandone
    oneview:
      redirect: community.general.oneview
    online:
      redirect: community.general.online
    opennebula:
      redirect: community.general.opennebula
    openstack:
      redirect: openstack.cloud.openstack
    oracle:
      redirect: community.general.oracle
    oracle.oci_utils:
      redirect: community.general.oracle.oci_utils
    ovirt:
      redirect: community.general._ovirt
    podman:
      redirect: containers.podman.podman
    podman.common:
      redirect: containers.podman.podman.common
    postgres:
      redirect: community.postgresql.postgres
    pure:
      redirect: community.general.pure
    rabbitmq:
      redirect: community.rabbitmq.rabbitmq
    rax:
      redirect: community.general.rax
    redfish_utils:
      redirect: community.general.redfish_utils
    redhat:
      redirect: community.general.redhat
    remote_management.dellemc:
      redirect: dellemc.openmanage
    remote_management.dellemc.dellemc_idrac:
      redirect: dellemc.openmanage.dellemc_idrac
    remote_management.dellemc.ome:
      redirect: dellemc.openmanage.ome
    remote_management.intersight:
      redirect: cisco.intersight.intersight
    remote_management.lxca:
      redirect: community.general.remote_management.lxca
    remote_management.lxca.common:
      redirect: community.general.remote_management.lxca.common
    remote_management.ucs:
      redirect: cisco.ucs.ucs
    scaleway:
      redirect: community.general.scaleway
    service_now:
      redirect: servicenow.servicenow.service_now
    source_control:
      redirect: community.general.source_control
    source_control.bitbucket:
      redirect: community.general.source_control.bitbucket
    storage:
      redirect: community.general.storage
    storage.emc:
      redirect: community.general.storage.emc
    storage.emc.emc_vnx:
      redirect: community.general.storage.emc.emc_vnx
    storage.hpe3par:
      redirect: community.general.storage.hpe3par
    storage.hpe3par.hpe3par:
      redirect: community.general.storage.hpe3par.hpe3par
    univention_umc:
      redirect: community.general.univention_umc
    utm_utils:
      redirect: community.general.utm_utils
    vca:
      redirect: community.vmware.vca
    vexata:
      redirect: community.general.vexata
    vmware:
      redirect: community.vmware.vmware
    vmware_rest_client:
      redirect: community.vmware.vmware_rest_client
    vmware_spbm:
      redirect: community.vmware.vmware_spbm
    vultr:
      redirect: ngine_io.vultr.vultr
    xenserver:
      redirect: community.general.xenserver
  # end module_utils
  cliconf:
    frr:
      redirect: frr.frr.frr
    aireos:
      redirect: community.network.aireos
    apconos:
      redirect: community.network.apconos
    aruba:
      redirect: community.network.aruba
    ce:
      redirect: community.network.ce
    cnos:
      redirect: community.network.cnos
    edgeos:
      redirect: community.network.edgeos
    edgeswitch:
      redirect: community.network.edgeswitch
    enos:
      redirect: community.network.enos
    eric_eccli:
      redirect: community.network.eric_eccli
    exos:
      redirect: community.network.exos
    icx:
      redirect: community.network.icx
    ironware:
      redirect: community.network.ironware
    netvisor:
      redirect: community.network.netvisor
    nos:
      redirect: community.network.nos
    onyx:
      redirect: mellanox.onyx.onyx
    routeros:
      redirect: community.routeros.routeros
    slxos:
      redirect: community.network.slxos
    voss:
      redirect: community.network.voss
    eos:
      redirect: arista.eos.eos
    asa:
      redirect: cisco.asa.asa
    ios:
      redirect: cisco.ios.ios
    iosxr:
      redirect: cisco.iosxr.iosxr
    nxos:
      redirect: cisco.nxos.nxos
    junos:
      redirect: junipernetworks.junos.junos
    dellos10:
      redirect: dellemc.os10.os10
    dellos9:
      redirect: dellemc.os9.os9
    dellos6:
      redirect: dellemc.os6.os6
    vyos:
      redirect: vyos.vyos.vyos
  terminal:
    frr:
      redirect: frr.frr.frr
    aireos:
      redirect: community.network.aireos
    apconos:
      redirect: community.network.apconos
    aruba:
      redirect: community.network.aruba
    ce:
      redirect: community.network.ce
    cnos:
      redirect: community.network.cnos
    edgeos:
      redirect: community.network.edgeos
    edgeswitch:
      redirect: community.network.edgeswitch
    enos:
      redirect: community.network.enos
    eric_eccli:
      redirect: community.network.eric_eccli
    exos:
      redirect: community.network.exos
    icx:
      redirect: community.network.icx
    ironware:
      redirect: community.network.ironware
    netvisor:
      redirect: community.network.netvisor
    nos:
      redirect: community.network.nos
    onyx:
      redirect: mellanox.onyx.onyx
    routeros:
      redirect: community.routeros.routeros
    slxos:
      redirect: community.network.slxos
    sros:
      redirect: community.network.sros
    voss:
      redirect: community.network.voss
    eos:
      redirect: arista.eos.eos
    asa:
      redirect: cisco.asa.asa
    ios:
      redirect: cisco.ios.ios
    iosxr:
      redirect: cisco.iosxr.iosxr
    nxos:
      redirect: cisco.nxos.nxos
    bigip:
      redirect: f5networks.f5_modules.bigip
    junos:
      redirect: junipernetworks.junos.junos
    dellos10:
      redirect: dellemc.os10.os10
    dellos9:
      redirect: dellemc.os9.os9
    dellos6:
      redirect: dellemc.os6.os6
    vyos:
      redirect: vyos.vyos.vyos
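  # A minimal usage sketch for the cliconf/terminal tables above (hypothetical
  # play, assuming the vyos.vyos collection is installed): with
  #
  #   - hosts: routers
  #     connection: ansible.netcommon.network_cli
  #     vars:
  #       ansible_network_os: vyos
  #
  # the bare platform name "vyos" is resolved through these entries to the
  # vyos.vyos.vyos cliconf and terminal plugins.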
  action:
    # test entry, overloaded with module of same name to use a different base action (ie not "normal.py")
    uses_redirected_action:
      redirect: testns.testcoll.subclassed_norm
    aireos:
      redirect: community.network.aireos
    aruba:
      redirect: community.network.aruba
    ce:
      redirect: community.network.ce
    ce_template:
      redirect: community.network.ce_template
    cnos:
      redirect: community.network.cnos
    edgeos_config:
      redirect: community.network.edgeos_config
    enos:
      redirect: community.network.enos
    exos:
      redirect: community.network.exos
    ironware:
      redirect: community.network.ironware
    nos_config:
      redirect: community.network.nos_config
    onyx_config:
      redirect: mellanox.onyx.onyx_config
    slxos:
      redirect: community.network.slxos
    sros:
      redirect: community.network.sros
    voss:
      redirect: community.network.voss
    aws_s3:
      redirect: amazon.aws.aws_s3
    cli_command:
      redirect: ansible.netcommon.cli_command
    cli_config:
      redirect: ansible.netcommon.cli_config
    net_base:
      redirect: ansible.netcommon.net_base
    net_user:
      redirect: ansible.netcommon.net_user
    net_vlan:
      redirect: ansible.netcommon.net_vlan
    net_static_route:
      redirect: ansible.netcommon.net_static_route
    net_lldp:
      redirect: ansible.netcommon.net_lldp
    net_vrf:
      redirect: ansible.netcommon.net_vrf
    net_ping:
      redirect: ansible.netcommon.net_ping
    net_l3_interface:
      redirect: ansible.netcommon.net_l3_interface
    net_l2_interface:
      redirect: ansible.netcommon.net_l2_interface
    net_interface:
      redirect: ansible.netcommon.net_interface
    net_system:
      redirect: ansible.netcommon.net_system
    net_lldp_interface:
      redirect: ansible.netcommon.net_lldp_interface
    net_put:
      redirect: ansible.netcommon.net_put
    net_get:
      redirect: ansible.netcommon.net_get
    net_logging:
      redirect: ansible.netcommon.net_logging
    net_banner:
      redirect: ansible.netcommon.net_banner
    net_linkagg:
      redirect: ansible.netcommon.net_linkagg
    netconf:
      redirect: ansible.netcommon.netconf
    network:
      redirect: ansible.netcommon.network
    telnet:
      redirect: ansible.netcommon.telnet
    patch:
      redirect: ansible.posix.patch
    synchronize:
      redirect: ansible.posix.synchronize
    win_copy:
      redirect: ansible.windows.win_copy
    win_reboot:
      redirect: ansible.windows.win_reboot
    win_template:
      redirect: ansible.windows.win_template
    win_updates:
      redirect: ansible.windows.win_updates
    fortios_config:
      redirect: fortinet.fortios.fortios_config
    eos:
      redirect: arista.eos.eos
    asa:
      redirect: cisco.asa.asa
    ios:
      redirect: cisco.ios.ios
    iosxr:
      redirect: cisco.iosxr.iosxr
    nxos:
      redirect: cisco.nxos.nxos
    nxos_file_copy:
      redirect: cisco.nxos.nxos_file_copy
    bigip:
      redirect: f5networks.f5_modules.bigip
    bigiq:
      redirect: f5networks.f5_modules.bigiq
    junos:
      redirect: junipernetworks.junos.junos
    dellos10:
      redirect: dellemc.os10.os10
    dellos9:
      redirect: dellemc.os9.os9
    dellos6:
      redirect: dellemc.os6.os6
    vyos:
      redirect: vyos.vyos.vyos
    include:
      tombstone:
        removal_date: "2023-05-16"
        warning_text: Use include_tasks or import_tasks instead.
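  # Unlike a redirect, a tombstone is terminal. A minimal sketch of the
  # behavior: a task that still uses the removed plugin, for example
  #
  #   - include: tasks.yml
  #
  # fails with an error carrying the removal_date and warning_text recorded
  # above instead of being routed to a new location.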
  become:
    doas:
      redirect: community.general.doas
    dzdo:
      redirect: community.general.dzdo
    ksu:
      redirect: community.general.ksu
    machinectl:
      redirect: community.general.machinectl
    pbrun:
      redirect: community.general.pbrun
    pfexec:
      redirect: community.general.pfexec
    pmrun:
      redirect: community.general.pmrun
    sesu:
      redirect: community.general.sesu
    enable:
      redirect: ansible.netcommon.enable
  cache:
    memcached:
      redirect: community.general.memcached
    pickle:
      redirect: community.general.pickle
    redis:
      redirect: community.general.redis
    yaml:
      redirect: community.general.yaml
    mongodb:
      redirect: community.mongodb.mongodb
  callback:
    actionable:
      redirect: community.general.actionable
    cgroup_memory_recap:
      redirect: community.general.cgroup_memory_recap
    context_demo:
      redirect: community.general.context_demo
    counter_enabled:
      redirect: community.general.counter_enabled
    dense:
      redirect: community.general.dense
    full_skip:
      redirect: community.general.full_skip
    hipchat:
      redirect: community.general.hipchat
    jabber:
      redirect: community.general.jabber
    log_plays:
      redirect: community.general.log_plays
    logdna:
      redirect: community.general.logdna
    logentries:
      redirect: community.general.logentries
    logstash:
      redirect: community.general.logstash
    mail:
      redirect: community.general.mail
    nrdp:
      redirect: community.general.nrdp
    'null':
      redirect: community.general.null
    osx_say:
      redirect: community.general.osx_say
    say:
      redirect: community.general.say
    selective:
      redirect: community.general.selective
    slack:
      redirect: community.general.slack
    splunk:
      redirect: community.general.splunk
    stderr:
      redirect: community.general.stderr
    sumologic:
      redirect: community.general.sumologic
    syslog_json:
      redirect: community.general.syslog_json
    unixy:
      redirect: community.general.unixy
    yaml:
      redirect: community.general.yaml
    grafana_annotations:
      redirect: community.grafana.grafana_annotations
    aws_resource_actions:
      redirect: amazon.aws.aws_resource_actions
    cgroup_perf_recap:
      redirect: ansible.posix.cgroup_perf_recap
    debug:
      redirect: ansible.posix.debug
    json:
      redirect: ansible.posix.json
    profile_roles:
      redirect: ansible.posix.profile_roles
    profile_tasks:
      redirect: ansible.posix.profile_tasks
    skippy:
      redirect: ansible.posix.skippy
    timer:
      redirect: ansible.posix.timer
    foreman:
      redirect: theforeman.foreman.foreman
    # 'collections' integration test entries, do not remove
    formerly_core_callback:
      redirect: testns.testcoll.usercallback
    formerly_core_removed_callback:
      redirect: testns.testcoll.removedcallback
    formerly_core_missing_callback:
      redirect: bogusns.boguscoll.boguscallback
  doc_fragments:
    a10:
      redirect: community.network.a10
    aireos:
      redirect: community.network.aireos
    alicloud:
      redirect: community.general.alicloud
    aruba:
      redirect: community.network.aruba
    auth_basic:
      redirect: community.general.auth_basic
    avi:
      redirect: community.network.avi
    ce:
      redirect: community.network.ce
    cloudscale:
      redirect: cloudscale_ch.cloud.api_parameters
    cloudstack:
      redirect: ngine_io.cloudstack.cloudstack
    cnos:
      redirect: community.network.cnos
    digital_ocean:
      redirect: community.digitalocean.digital_ocean
    dimensiondata:
      redirect: community.general.dimensiondata
    dimensiondata_wait:
      redirect: community.general.dimensiondata_wait
    docker:
      redirect: community.docker.docker
    emc:
      redirect: community.general.emc
    enos:
      redirect: community.network.enos
    exoscale:
      redirect: ngine_io.exoscale.exoscale
    gcp:
      redirect: google.cloud.gcp
    hetzner:
      redirect: community.hrobot.robot
    hpe3par:
      redirect: community.general.hpe3par
    hwc:
      redirect: community.general.hwc
    ibm_storage:
      redirect: community.general.ibm_storage
infinibox: redirect: infinidat.infinibox.infinibox influxdb: redirect: community.general.influxdb ingate: redirect: community.network.ingate ipa: redirect: community.general.ipa ironware: redirect: community.network.ironware keycloak: redirect: community.general.keycloak kubevirt_common_options: redirect: community.kubevirt.kubevirt_common_options kubevirt_vm_options: redirect: community.kubevirt.kubevirt_vm_options ldap: redirect: community.general.ldap lxca_common: redirect: community.general.lxca_common manageiq: redirect: community.general.manageiq mysql: redirect: community.mysql.mysql netscaler: redirect: community.network.netscaler nios: redirect: community.general.nios nso: redirect: cisco.nso.nso oneview: redirect: community.general.oneview online: redirect: community.general.online onyx: redirect: mellanox.onyx.onyx opennebula: redirect: community.general.opennebula openswitch: redirect: community.general.openswitch oracle: redirect: community.general.oracle oracle_creatable_resource: redirect: community.general.oracle_creatable_resource oracle_display_name_option: redirect: community.general.oracle_display_name_option oracle_name_option: redirect: community.general.oracle_name_option oracle_tags: redirect: community.general.oracle_tags oracle_wait_options: redirect: community.general.oracle_wait_options ovirt_facts: redirect: community.general.ovirt_facts panos: redirect: community.network.panos postgres: redirect: community.postgresql.postgres proxysql: redirect: community.proxysql.proxysql purestorage: redirect: community.general.purestorage rabbitmq: redirect: community.rabbitmq.rabbitmq rackspace: redirect: community.general.rackspace scaleway: redirect: community.general.scaleway sros: redirect: community.network.sros utm: redirect: community.general.utm vexata: redirect: community.general.vexata vultr: redirect: ngine_io.vultr.vultr xenserver: redirect: community.general.xenserver zabbix: redirect: community.zabbix.zabbix k8s_auth_options: redirect: kubernetes.core.k8s_auth_options k8s_name_options: redirect: kubernetes.core.k8s_name_options k8s_resource_options: redirect: kubernetes.core.k8s_resource_options k8s_scale_options: redirect: kubernetes.core.k8s_scale_options k8s_state_options: redirect: kubernetes.core.k8s_state_options acme: redirect: community.crypto.acme ecs_credential: redirect: community.crypto.ecs_credential VmwareRestModule: redirect: vmware.vmware_rest.VmwareRestModule VmwareRestModule_filters: redirect: vmware.vmware_rest.VmwareRestModule_filters VmwareRestModule_full: redirect: vmware.vmware_rest.VmwareRestModule_full VmwareRestModule_state: redirect: vmware.vmware_rest.VmwareRestModule_state vca: redirect: community.vmware.vca vmware: redirect: community.vmware.vmware vmware_rest_client: redirect: community.vmware.vmware_rest_client service_now: redirect: servicenow.servicenow.service_now aws: redirect: amazon.aws.aws aws_credentials: redirect: amazon.aws.aws_credentials aws_region: redirect: amazon.aws.aws_region ec2: redirect: amazon.aws.ec2 netconf: redirect: ansible.netcommon.netconf network_agnostic: redirect: ansible.netcommon.network_agnostic fortios: redirect: fortinet.fortios.fortios netapp: redirect: netapp.ontap.netapp checkpoint_commands: redirect: check_point.mgmt.checkpoint_commands checkpoint_facts: redirect: check_point.mgmt.checkpoint_facts checkpoint_objects: redirect: check_point.mgmt.checkpoint_objects eos: redirect: arista.eos.eos aci: redirect: cisco.aci.aci asa: redirect: cisco.asa.asa intersight: redirect: 
cisco.intersight.intersight ios: redirect: cisco.ios.ios iosxr: redirect: cisco.iosxr.iosxr meraki: redirect: cisco.meraki.meraki mso: redirect: cisco.mso.modules nxos: redirect: cisco.nxos.nxos ucs: redirect: cisco.ucs.ucs f5: redirect: f5networks.f5_modules.f5 openstack: redirect: openstack.cloud.openstack junos: redirect: junipernetworks.junos.junos tower: redirect: awx.awx.auth ovirt: redirect: ovirt.ovirt.ovirt ovirt_info: redirect: ovirt.ovirt.ovirt_info dellos10: redirect: dellemc.os10.os10 dellos9: redirect: dellemc.os9.os9 dellos6: redirect: dellemc.os6.os6 hcloud: redirect: hetzner.hcloud.hcloud skydive: redirect: community.skydive.skydive azure: redirect: azure.azcollection.azure azure_tags: redirect: azure.azcollection.azure_tags vyos: redirect: vyos.vyos.vyos filter: # test entries formerly_core_filter: redirect: ansible.builtin.bool formerly_core_masked_filter: redirect: ansible.builtin.bool gcp_kms_encrypt: redirect: google.cloud.gcp_kms_encrypt gcp_kms_decrypt: redirect: google.cloud.gcp_kms_decrypt json_query: redirect: community.general.json_query random_mac: redirect: community.general.random_mac k8s_config_resource_name: redirect: kubernetes.core.k8s_config_resource_name cidr_merge: redirect: ansible.netcommon.cidr_merge ipaddr: redirect: ansible.netcommon.ipaddr ipmath: redirect: ansible.netcommon.ipmath ipwrap: redirect: ansible.netcommon.ipwrap ip4_hex: redirect: ansible.netcommon.ip4_hex ipv4: redirect: ansible.netcommon.ipv4 ipv6: redirect: ansible.netcommon.ipv6 ipsubnet: redirect: ansible.netcommon.ipsubnet next_nth_usable: redirect: ansible.netcommon.next_nth_usable network_in_network: redirect: ansible.netcommon.network_in_network network_in_usable: redirect: ansible.netcommon.network_in_usable reduce_on_network: redirect: ansible.netcommon.reduce_on_network nthhost: redirect: ansible.netcommon.nthhost previous_nth_usable: redirect: ansible.netcommon.previous_nth_usable slaac: redirect: ansible.netcommon.slaac hwaddr: redirect: ansible.netcommon.hwaddr parse_cli: redirect: ansible.netcommon.parse_cli parse_cli_textfsm: redirect: ansible.netcommon.parse_cli_textfsm parse_xml: redirect: ansible.netcommon.parse_xml type5_pw: redirect: ansible.netcommon.type5_pw hash_salt: redirect: ansible.netcommon.hash_salt comp_type5: redirect: ansible.netcommon.comp_type5 vlan_parser: redirect: ansible.netcommon.vlan_parser httpapi: exos: redirect: community.network.exos fortianalyzer: redirect: community.fortios.fortianalyzer fortimanager: redirect: fortinet.fortimanager.fortimanager ftd: redirect: community.network.ftd vmware: redirect: community.vmware.vmware restconf: redirect: ansible.netcommon.restconf fortios: redirect: fortinet.fortios.fortios checkpoint: redirect: check_point.mgmt.checkpoint eos: redirect: arista.eos.eos nxos: redirect: cisco.nxos.nxos splunk: redirect: splunk.es.splunk qradar: redirect: ibm.qradar.qradar inventory: # test entry formerly_core_inventory: redirect: testns.content_adj.statichost cloudscale: redirect: cloudscale_ch.cloud.inventory docker_machine: redirect: community.docker.docker_machine docker_swarm: redirect: community.docker.docker_swarm gitlab_runners: redirect: community.general.gitlab_runners kubevirt: redirect: community.kubevirt.kubevirt linode: redirect: community.general.linode nmap: redirect: community.general.nmap online: redirect: community.general.online scaleway: redirect: community.general.scaleway virtualbox: redirect: community.general.virtualbox vultr: redirect: ngine_io.vultr.vultr k8s: redirect: kubernetes.core.k8s 
openshift: redirect: kubernetes.core.openshift vmware_vm_inventory: redirect: community.vmware.vmware_vm_inventory aws_ec2: redirect: amazon.aws.aws_ec2 aws_rds: redirect: amazon.aws.aws_rds foreman: redirect: theforeman.foreman.foreman netbox: redirect: netbox.netbox.nb_inventory openstack: redirect: openstack.cloud.openstack tower: redirect: awx.awx.tower hcloud: redirect: hetzner.hcloud.hcloud gcp_compute: redirect: google.cloud.gcp_compute azure_rm: redirect: azure.azcollection.azure_rm lookup: # test entry formerly_core_lookup: redirect: testns.testcoll.mylookup avi: redirect: community.network.avi cartesian: redirect: community.general.cartesian chef_databag: redirect: community.general.chef_databag conjur_variable: redirect: cyberark.conjur.conjur_variable consul_kv: redirect: community.general.consul_kv credstash: redirect: community.general.credstash cyberarkpassword: redirect: community.general.cyberarkpassword dig: redirect: community.general.dig dnstxt: redirect: community.general.dnstxt etcd: redirect: community.general.etcd filetree: redirect: community.general.filetree flattened: redirect: community.general.flattened gcp_storage_file: redirect: community.google.gcp_storage_file hashi_vault: redirect: community.hashi_vault.hashi_vault hiera: redirect: community.general.hiera keyring: redirect: community.general.keyring lastpass: redirect: community.general.lastpass lmdb_kv: redirect: community.general.lmdb_kv manifold: redirect: community.general.manifold nios: redirect: community.general.nios nios_next_ip: redirect: community.general.nios_next_ip nios_next_network: redirect: community.general.nios_next_network onepassword: redirect: community.general.onepassword onepassword_raw: redirect: community.general.onepassword_raw passwordstore: redirect: community.general.passwordstore rabbitmq: redirect: community.rabbitmq.rabbitmq redis: redirect: community.general.redis shelvefile: redirect: community.general.shelvefile grafana_dashboard: redirect: community.grafana.grafana_dashboard openshift: redirect: kubernetes.core.openshift k8s: redirect: kubernetes.core.k8s mongodb: redirect: community.mongodb.mongodb laps_password: redirect: community.windows.laps_password aws_account_attribute: redirect: amazon.aws.aws_account_attribute aws_secret: redirect: amazon.aws.aws_secret aws_service_ip_ranges: redirect: amazon.aws.aws_service_ip_ranges aws_ssm: redirect: amazon.aws.aws_ssm skydive: redirect: community.skydive.skydive cpm_metering: redirect: wti.remote.cpm_metering cpm_status: redirect: wti.remote.cpm_status netconf: ce: redirect: community.network.ce sros: redirect: community.network.sros default: redirect: ansible.netcommon.default iosxr: redirect: cisco.iosxr.iosxr junos: redirect: junipernetworks.junos.junos shell: # test entry formerly_core_powershell: redirect: ansible.builtin.powershell csh: redirect: ansible.posix.csh fish: redirect: ansible.posix.fish test: # test entries formerly_core_test: redirect: ansible.builtin.search formerly_core_masked_test: redirect: ansible.builtin.search import_redirection: # test entry ansible.module_utils.formerly_core: redirect: ansible_collections.testns.testcoll.plugins.module_utils.base ansible.module_utils.known_hosts: redirect: ansible_collections.community.general.plugins.module_utils.known_hosts # ansible.builtin synthetic collection redirection hackery ansible_collections.ansible.builtin.plugins.modules: redirect: ansible.modules ansible_collections.ansible.builtin.plugins.module_utils: redirect: ansible.module_utils 
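# The routing tables above are ansible-core's own built-in data. Collections publish
# the same kind of routing in their meta/runtime.yml; a minimal sketch of what that
# looks like (the namespace, collection, and plugin names here are hypothetical):
#
#   plugin_routing:
#     modules:
#       old_module:
#         redirect: myns.mycoll.new_module
#         deprecation:
#           removal_version: 3.0.0
#           warning_text: Use myns.mycoll.new_module instead.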
  ansible_collections.ansible.builtin.plugins: {redirect: ansible.plugins}
action_groups:
  testgroup:
    # The list items under a group should always be action/module name strings except
    # for a special 'metadata' dictionary.
    # The only valid key currently for the metadata dictionary is 'extend_group', which is a
    # list of other groups, the actions of which will be included in this group.
    # (Note: it's still possible to also have a module/action named 'metadata' in the list)
    - metadata:
        extend_group:
          - testns.testcoll.testgroup
          - testns.testcoll.anothergroup
          - testns.boguscoll.testgroup
    - ping
    - legacy_ping  # Includes ansible.builtin.legacy_ping, not ansible.legacy.legacy_ping
    - formerly_core_ping
  testlegacy:
    - ansible.legacy.legacy_ping
  aws:
    - metadata: {extend_group: [amazon.aws.aws, community.aws.aws]}
  acme:
    - metadata: {extend_group: [community.crypto.acme]}
  azure:
    - metadata: {extend_group: [azure.azcollection.azure]}
  cpm:
    - metadata: {extend_group: [wti.remote.cpm]}
  docker:
    - metadata: {extend_group: [community.general.docker, community.docker.docker]}
  gcp:
    - metadata: {extend_group: [google.cloud.gcp]}
  k8s:
    - metadata: {extend_group: [community.kubernetes.k8s, community.general.k8s, community.kubevirt.k8s, community.okd.k8s, kubernetes.core.k8s]}
  os:
    - metadata: {extend_group: [openstack.cloud.os]}
  ovirt:
    - metadata: {extend_group: [ovirt.ovirt.ovirt, community.general.ovirt]}
  vmware:
    - metadata: {extend_group: [community.vmware.vmware]}
ansible-core-2.16.3/lib/ansible/config/base.yml0000644000000000000000000024600714556006441020007 0ustar00rootroot# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
  name: The Ansible home path
  description:
    - The default root path for Ansible config files on the controller.
  default: ~/.ansible
  env:
  - name: ANSIBLE_HOME
  ini:
  - key: home
    section: defaults
  type: path
  version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
  name: Path of ansible-connection script
  default: null
  description:
    - Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
    - If null, ansible will start with the same directory as the ansible script.
  type: path
  env: [{name: ANSIBLE_CONNECTION_PATH}]
  ini:
  - {key: ansible_connection_path, section: persistent_connection}
  yaml: {key: persistent_connection.ansible_connection_path}
  version_added: "2.8"
ANSIBLE_COW_SELECTION:
  name: Cowsay filter selection
  default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
  env: [{name: ANSIBLE_COW_SELECTION}]
  ini:
  - {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
  name: Cowsay filter acceptance list
  default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
  env:
  - name: ANSIBLE_COW_ACCEPTLIST
    version_added: '2.11'
  ini:
  - key: cowsay_enabled_stencils
    section: defaults
    version_added: '2.11'
  type: list
ANSIBLE_FORCE_COLOR:
  name: Force color output
  default: False
  description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
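# Referring back to the 'action_groups' table above: a group defined there (or in a
# collection's meta/runtime.yml) can be targeted with the 'group/' prefix under
# module_defaults. A minimal playbook sketch (the profile value is illustrative):
#
#   - hosts: localhost
#     module_defaults:
#       group/aws:
#         profile: staging
#     tasks:
#       - amazon.aws.ec2_instance_info: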
  env: [{name: ANSIBLE_FORCE_COLOR}]
  ini:
  - {key: force_color, section: defaults}
  type: boolean
  yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
  name: Suppress color output
  default: False
  description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
  env:
  - name: ANSIBLE_NOCOLOR
  # this is generic convention for CLI programs
  - name: NO_COLOR
    version_added: '2.11'
  ini:
  - {key: nocolor, section: defaults}
  type: boolean
  yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
  name: Suppress cowsay output
  default: False
  description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
  env: [{name: ANSIBLE_NOCOWS}]
  ini:
  - {key: nocows, section: defaults}
  type: boolean
  yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
  name: Set path to cowsay command
  default: null
  description: Specify a custom cowsay path or swap in your cowsay implementation of choice.
  env: [{name: ANSIBLE_COW_PATH}]
  ini:
  - {key: cowpath, section: defaults}
  type: string
  yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
  name: Connection pipelining
  default: False
  description:
    - This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
    - Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer.
    - It can result in a very significant performance improvement when enabled.
    - "However, this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
    - This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
  env:
  - name: ANSIBLE_PIPELINING
  ini:
  - section: defaults
    key: pipelining
  - section: connection
    key: pipelining
  type: boolean
ANY_ERRORS_FATAL:
  name: Make Task failures fatal
  default: False
  description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
  env:
  - name: ANSIBLE_ANY_ERRORS_FATAL
  ini:
  - section: defaults
    key: any_errors_fatal
  type: boolean
  yaml: {key: errors.any_task_errors_fatal}
  version_added: "2.4"
BECOME_ALLOW_SAME_USER:
  name: Allow becoming the same user
  default: False
  description:
    - This setting controls if become is skipped when the remote user and become user are the same, i.e. root sudo to root.
  env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
  ini:
  - {key: become_allow_same_user, section: privilege_escalation}
  type: boolean
  yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
  name: Become password file
  default: ~
  description:
    - 'The password file to use for the become plugin; equivalent to ``--become-password-file``.'
    - If executable, it will be run and the resulting stdout will be used as the password.
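# The ANY_ERRORS_FATAL default defined above maps to the per-play 'any_errors_fatal'
# keyword. A minimal sketch (host pattern and command are illustrative):
#
#   - hosts: webservers
#     any_errors_fatal: true
#     tasks:
#       - name: A failure on any host aborts the play for all hosts
#         ansible.builtin.command: /usr/local/bin/prepare-release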
  env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
  ini:
  - {key: become_password_file, section: defaults}
  type: path
  version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
  name: Display an agnostic become prompt
  default: True
  type: boolean
  description: Display an agnostic become prompt instead of a prompt containing the command-line-supplied become method.
  env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
  ini:
  - {key: agnostic_become_prompt, section: privilege_escalation}
  yaml: {key: privilege_escalation.agnostic_become_prompt}
  version_added: "2.5"
CACHE_PLUGIN:
  name: Persistent Cache plugin
  default: memory
  description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
  env: [{name: ANSIBLE_CACHE_PLUGIN}]
  ini:
  - {key: fact_caching, section: defaults}
  yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
  name: Cache Plugin URI
  default: ~
  description: Defines connection or path information for the cache plugin.
  env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
  ini:
  - {key: fact_caching_connection, section: defaults}
  yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
  name: Cache Plugin table prefix
  default: ansible_facts
  description: Prefix to use for cache plugin files/tables.
  env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
  ini:
  - {key: fact_caching_prefix, section: defaults}
  yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
  name: Cache Plugin expiration timeout
  default: 86400
  description: Expiration timeout for the cache plugin data.
  env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
  ini:
  - {key: fact_caching_timeout, section: defaults}
  type: integer
  yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
  name: Scan PYTHONPATH for installed collections
  description: A boolean to enable or disable scanning the sys.path for installed collections.
  default: true
  type: boolean
  env:
  - {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
  ini:
  - {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
  name: ordered list of root paths for loading installed Ansible collections content
  description: >
    Colon separated paths in which Ansible will search for collections content.
    Collections must be in nested *subdirectories*, not directly in these directories.
    For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
    and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
  default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
  type: pathspec
  env:
  - name: ANSIBLE_COLLECTIONS_PATHS
    deprecated:
      why: does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead
      version: "2.19"
  - name: ANSIBLE_COLLECTIONS_PATH
    version_added: '2.10'
  ini:
  - key: collections_paths
    section: defaults
    deprecated:
      why: does not fit var naming standard, use the singular form collections_path instead
      version: "2.19"
  - key: collections_path
    section: defaults
    version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
  name: Defines behavior when loading a collection that does not support the current Ansible version
  description:
    - Controls how Ansible responds when loading a collection that does not support the running Ansible version (as declared by the collection metadata key `requires_ansible`).
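# To illustrate the nested-subdirectory requirement of COLLECTIONS_PATHS above, an
# installed collection 'my.collection' (a hypothetical name) under the default path
# would be laid out as:
#
#   ~/.ansible/collections/
#     ansible_collections/
#       my/
#         collection/
#           galaxy.yml
#           plugins/
#           roles/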
  env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
  ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
  choices: &basic_error
    error: issue a 'fatal' error and stop the play
    warning: issue a warning but continue
    ignore: just continue silently
  default: warning
COLOR_CHANGED:
  name: Color for 'changed' task status
  default: yellow
  description: Defines the color to use on 'Changed' task status.
  env: [{name: ANSIBLE_COLOR_CHANGED}]
  ini:
  - {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
  name: "Color for ansible-console's prompt task status"
  default: white
  description: Defines the default color to use for ansible-console.
  env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
  ini:
  - {key: console_prompt, section: colors}
  version_added: "2.7"
COLOR_DEBUG:
  name: Color for debug statements
  default: dark gray
  description: Defines the color to use when emitting debug messages.
  env: [{name: ANSIBLE_COLOR_DEBUG}]
  ini:
  - {key: debug, section: colors}
COLOR_DEPRECATE:
  name: Color for deprecation messages
  default: purple
  description: Defines the color to use when emitting deprecation messages.
  env: [{name: ANSIBLE_COLOR_DEPRECATE}]
  ini:
  - {key: deprecate, section: colors}
COLOR_DIFF_ADD:
  name: Color for diff added display
  default: green
  description: Defines the color to use when showing added lines in diffs.
  env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
  ini:
  - {key: diff_add, section: colors}
  yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
  name: Color for diff lines display
  default: cyan
  description: Defines the color to use when showing diffs.
  env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
  ini:
  - {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
  name: Color for diff removed display
  default: red
  description: Defines the color to use when showing removed lines in diffs.
  env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
  ini:
  - {key: diff_remove, section: colors}
COLOR_ERROR:
  name: Color for error messages
  default: red
  description: Defines the color to use when emitting error messages.
  env: [{name: ANSIBLE_COLOR_ERROR}]
  ini:
  - {key: error, section: colors}
  yaml: {key: colors.error}
COLOR_HIGHLIGHT:
  name: Color for highlighting
  default: white
  description: Defines the color to use for highlighting.
  env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
  ini:
  - {key: highlight, section: colors}
COLOR_OK:
  name: Color for 'ok' task status
  default: green
  description: Defines the color to use when showing 'OK' task status.
  env: [{name: ANSIBLE_COLOR_OK}]
  ini:
  - {key: ok, section: colors}
COLOR_SKIP:
  name: Color for 'skip' task status
  default: cyan
  description: Defines the color to use when showing 'Skipped' task status.
  env: [{name: ANSIBLE_COLOR_SKIP}]
  ini:
  - {key: skip, section: colors}
COLOR_UNREACHABLE:
  name: Color for 'unreachable' host state
  default: bright red
  description: Defines the color to use on 'Unreachable' status.
  env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
  ini:
  - {key: unreachable, section: colors}
COLOR_VERBOSE:
  name: Color for verbose messages
  default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
  env: [{name: ANSIBLE_COLOR_VERBOSE}]
  ini:
  - {key: verbose, section: colors}
COLOR_WARN:
  name: Color for warning messages
  default: bright purple
  description: Defines the color to use when emitting warning messages.
  env: [{name: ANSIBLE_COLOR_WARN}]
  ini:
  - {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
  name: Connection password file
  default: ~
  description: 'The password file to use for the connection plugin; equivalent to ``--connection-password-file``.'
  env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
  ini:
  - {key: connection_password_file, section: defaults}
  type: path
  version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
  name: Sets the output directory and filename prefix to generate coverage run info.
  description:
    - Sets the output directory on the remote host to generate coverage reports to.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  env:
  - {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
  vars:
  - {name: _ansible_coverage_remote_output}
  type: str
  version_added: '2.9'
COVERAGE_REMOTE_PATHS:
  name: Sets the list of paths to run coverage for.
  description:
    - A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
    - Multiple path globs can be specified and are separated by ``:``.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  default: '*'
  env:
  - {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
  type: str
  version_added: '2.9'
ACTION_WARNINGS:
  name: Toggle action warnings
  default: True
  description:
    - By default Ansible will issue a warning when one is received from a task action (module or action plugin).
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_ACTION_WARNINGS}]
  ini:
  - {key: action_warnings, section: defaults}
  type: boolean
  version_added: "2.5"
LOCALHOST_WARNING:
  name: Warning when using implicit inventory with only localhost
  default: True
  description:
    - By default Ansible will issue a warning when there are no hosts in the inventory.
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_LOCALHOST_WARNING}]
  ini:
  - {key: localhost_warning, section: defaults}
  type: boolean
  version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
  name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
  default: True
  description:
    - By default Ansible will issue a warning when no inventory was loaded and notes that it will use an implicit localhost-only inventory.
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
  ini:
  - {key: inventory_unparsed_warning, section: inventory}
  type: boolean
  version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
  name: documentation fragment plugins path
  default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
  description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
  env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
  ini:
  - {key: doc_fragment_plugins, section: defaults}
  type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
  name: Action plugins path
  default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
  description: Colon separated paths in which Ansible will search for Action Plugins.
  env: [{name: ANSIBLE_ACTION_PLUGINS}]
  ini:
  - {key: action_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
  name: Allow unsafe lookups
  default: False
  description:
    - "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo) to return data that is not marked 'unsafe'."
    - By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language, as this could represent a security risk.
      This option is provided to allow for backward compatibility, however users should first consider adding allow_unsafe=True
      to any lookups which may be expected to contain data which may be run through the templating engine late.
  env: []
  ini:
  - {key: allow_unsafe_lookups, section: defaults}
  type: boolean
  version_added: "2.2.3"
DEFAULT_ASK_PASS:
  name: Ask for the login password
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a login password. If using SSH keys for authentication, you probably do not need to change this setting.
  env: [{name: ANSIBLE_ASK_PASS}]
  ini:
  - {key: ask_pass, section: defaults}
  type: boolean
  yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
  name: Ask for the vault password(s)
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a vault password.
  env: [{name: ANSIBLE_ASK_VAULT_PASS}]
  ini:
  - {key: ask_vault_pass, section: defaults}
  type: boolean
DEFAULT_BECOME:
  name: Enable privilege escalation (become)
  default: False
  description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
  env: [{name: ANSIBLE_BECOME}]
  ini:
  - {key: become, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_ASK_PASS:
  name: Ask for the privilege escalation (become) password
  default: False
  description: Toggle to prompt for privilege escalation password.
  env: [{name: ANSIBLE_BECOME_ASK_PASS}]
  ini:
  - {key: become_ask_pass, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_METHOD:
  name: Choose privilege escalation method
  default: 'sudo'
  description: Privilege escalation method to use when `become` is enabled.
  env: [{name: ANSIBLE_BECOME_METHOD}]
  ini:
  - {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
  name: Choose 'become' executable
  default: ~
  description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
  env: [{name: ANSIBLE_BECOME_EXE}]
  ini:
  - {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
  name: Set 'become' executable options
  default: ''
  description: Flags to pass to the privilege escalation executable.
  env: [{name: ANSIBLE_BECOME_FLAGS}]
  ini:
  - {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
  name: Become plugins path
  default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
  description: Colon separated paths in which Ansible will search for Become Plugins.
  env: [{name: ANSIBLE_BECOME_PLUGINS}]
  ini:
  - {key: become_plugins, section: defaults}
  type: pathspec
  version_added: "2.8"
DEFAULT_BECOME_USER:
  # FIXME: should really be blank and make -u passing optional depending on it
  name: Set the user you 'become' via privilege escalation
  default: root
  description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
  env: [{name: ANSIBLE_BECOME_USER}]
  ini:
  - {key: become_user, section: privilege_escalation}
  yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
  name: Cache Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
  description: Colon separated paths in which Ansible will search for Cache Plugins.
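# The become-related defaults above correspond one-to-one to play/task keywords.
# A minimal playbook sketch (host pattern, user, and command are illustrative):
#
#   - hosts: dbservers
#     become: true
#     become_method: sudo
#     become_user: postgres
#     tasks:
#       - name: Run a command as the postgres user
#         ansible.builtin.command: whoami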
  env: [{name: ANSIBLE_CACHE_PLUGINS}]
  ini:
  - {key: cache_plugins, section: defaults}
  type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
  name: Callback Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
  description: Colon separated paths in which Ansible will search for Callback Plugins.
  env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
  ini:
  - {key: callback_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
  name: Enable callback plugins that require it.
  default: []
  description:
    - "List of enabled callbacks, not all callbacks need enabling, but many of those shipped with Ansible do as we don't want them activated by default."
  env:
  - name: ANSIBLE_CALLBACKS_ENABLED
    version_added: '2.11'
  ini:
  - key: callbacks_enabled
    section: defaults
    version_added: '2.11'
  type: list
DEFAULT_CLICONF_PLUGIN_PATH:
  name: Cliconf Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
  description: Colon separated paths in which Ansible will search for Cliconf Plugins.
  env: [{name: ANSIBLE_CLICONF_PLUGINS}]
  ini:
  - {key: cliconf_plugins, section: defaults}
  type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
  name: Connection Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
  description: Colon separated paths in which Ansible will search for Connection Plugins.
  env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
  ini:
  - {key: connection_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
  name: Debug mode
  default: False
  description:
    - "Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. Debug output can also include secret information despite no_log settings being enabled, which means debug mode should not be used in production."
  env: [{name: ANSIBLE_DEBUG}]
  ini:
  - {key: debug, section: defaults}
  type: boolean
DEFAULT_EXECUTABLE:
  name: Target shell executable
  default: /bin/sh
  description:
    - "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target. Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
  env: [{name: ANSIBLE_EXECUTABLE}]
  ini:
  - {key: executable, section: defaults}
DEFAULT_FACT_PATH:
  name: local fact path
  description:
    - "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
    - "If not set, it will fall back to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
    - "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
    - The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules, by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
  env: [{name: ANSIBLE_FACT_PATH}]
  ini:
  - {key: fact_path, section: defaults}
  type: string
  deprecated:
    # TODO: when removing set playbook/play.py to default=None
    why: the module_defaults keyword is a more generic version and can apply to all calls to the M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
    version: "2.18"
    alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
  name: Jinja2 Filter Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
  description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
  env: [{name: ANSIBLE_FILTER_PLUGINS}]
  ini:
  - {key: filter_plugins, section: defaults}
  type: pathspec
DEFAULT_FORCE_HANDLERS:
  name: Force handlers to run after failure
  default: False
  description:
    - This option controls if notified handlers run on a host even if a failure occurs on that host.
    - When false, the handlers will not run if a failure has occurred on a host.
    - This can also be set per play or on the command line. See Handlers and Failure for more details.
  env: [{name: ANSIBLE_FORCE_HANDLERS}]
  ini:
  - {key: force_handlers, section: defaults}
  type: boolean
  version_added: "1.9.1"
DEFAULT_FORKS:
  name: Number of task forks
  default: 5
  description: Maximum number of forks Ansible will use to execute tasks on target hosts.
  env: [{name: ANSIBLE_FORKS}]
  ini:
  - {key: forks, section: defaults}
  type: integer
DEFAULT_GATHERING:
  name: Gathering behaviour
  default: 'implicit'
  description:
    - This setting controls the default policy of fact gathering (facts discovered about remote systems).
    - "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
  env: [{name: ANSIBLE_GATHERING}]
  ini:
  - key: gathering
    section: defaults
    version_added: "1.6"
  choices:
    implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
    explicit: facts will not be gathered unless directly requested in the play.
    smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
  name: Gather facts subset
  description:
    - Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering. See the module documentation for specifics.
    - "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
  env: [{name: ANSIBLE_GATHER_SUBSET}]
  ini:
  - key: gather_subset
    section: defaults
    version_added: "2.1"
  type: list
  deprecated:
    # TODO: when removing set playbook/play.py to default=None
    why: the module_defaults keyword is a more generic version and can apply to all calls to the M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
    version: "2.18"
    alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
  name: Gather facts timeout
  description:
    - Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
    - "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
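# As the deprecation notes above indicate, module_defaults can express the same thing
# as DEFAULT_GATHER_SUBSET/DEFAULT_GATHER_TIMEOUT on a per-play basis. A minimal
# sketch (subset and timeout values are illustrative):
#
#   - hosts: all
#     gather_facts: true
#     module_defaults:
#       ansible.builtin.setup:
#         gather_subset: ['!all', 'min']
#         gather_timeout: 30
#     tasks: []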
  env: [{name: ANSIBLE_GATHER_TIMEOUT}]
  ini:
  - {key: gather_timeout, section: defaults}
  type: integer
  deprecated:
    # TODO: when removing set playbook/play.py to default=None
    why: the module_defaults keyword is a more generic version and can apply to all calls to the M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
    version: "2.18"
    alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
  name: Hash merge behaviour
  default: replace
  type: string
  choices:
    replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
    merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
  description:
    - This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
    - This does not affect variables whose values are scalars (integers, strings) or arrays.
    - "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable, leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
    - We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much complexity has been introduced into the data structures and plays.
    - For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars`` that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope, but the setting itself affects all sources and makes debugging even harder.
    - All playbooks and roles in the official examples repos assume the default for this setting.
    - Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables. For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it. New projects should **avoid 'merge'**.
  env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
  ini:
  - {key: hash_behaviour, section: defaults}
DEFAULT_HOST_LIST:
  name: Inventory Source
  default: /etc/ansible/hosts
  description: Comma separated list of Ansible inventory sources
  env:
  - name: ANSIBLE_INVENTORY
    expand_relative_paths: True
  ini:
  - key: inventory
    section: defaults
  type: pathlist
  yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
  name: HttpApi Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
  description: Colon separated paths in which Ansible will search for HttpApi Plugins.
  env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
  ini:
  - {key: httpapi_plugins, section: defaults}
  type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
  name: Internal poll interval
  default: 0.001
  env: []
  ini:
  - {key: internal_poll_interval, section: defaults}
  type: float
  version_added: "2.2"
  description:
    - This sets the interval (in seconds) of Ansible internal processes polling each other.
      Lower values improve performance with large playbooks at the expense of extra CPU load.
      Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern.
    - "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
  name: Inventory Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
  description: Colon separated paths in which Ansible will search for Inventory Plugins.
  env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
  ini:
  - {key: inventory_plugins, section: defaults}
  type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
  name: Enabled Jinja2 extensions
  default: []
  description:
    - This is a developer-specific feature that allows enabling additional Jinja2 extensions.
    - "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
  env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
  ini:
  - {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
  name: Use Jinja2's NativeEnvironment for templating
  default: False
  description: This option preserves variable types during template operations.
  env: [{name: ANSIBLE_JINJA2_NATIVE}]
  ini:
  - {key: jinja2_native, section: defaults}
  type: boolean
  yaml: {key: jinja2_native}
  version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
  name: Keep remote files
  default: False
  description:
    - Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
    - If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
  env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
  ini:
  - {key: keep_remote_files, section: defaults}
  type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
  # TODO: move to plugin
  name: No security label on Lxc
  default: False
  description:
    - "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux."
  env:
  - name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
  ini:
  - {key: libvirt_lxc_noseclabel, section: selinux}
  type: boolean
  version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
  name: Load callbacks for adhoc
  default: False
  description:
    - Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for ``ansible-playbook``.
  env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
  ini:
  - {key: bin_ansible_callbacks, section: defaults}
  type: boolean
  version_added: "1.8"
DEFAULT_LOCAL_TMP:
  name: Controller temporary directory
  default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
  description: Temporary directory for Ansible to use on the controller.
  env: [{name: ANSIBLE_LOCAL_TEMP}]
  ini:
  - {key: local_tmp, section: defaults}
  type: tmppath
DEFAULT_LOG_PATH:
  name: Ansible log file path
  default: ~
  description: File to which Ansible will log on the controller. When empty logging is disabled.
  env: [{name: ANSIBLE_LOG_PATH}]
  ini:
  - {key: log_path, section: defaults}
  type: path
DEFAULT_LOG_FILTER:
  name: Name filters for python logger
  default: []
  description: List of logger names to filter out of the log file.
  env: [{name: ANSIBLE_LOG_FILTER}]
  ini:
  - {key: log_filter, section: defaults}
  type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
  name: Lookup Plugins Path
  description: Colon separated paths in which Ansible will search for Lookup Plugins.
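# To see what DEFAULT_JINJA2_NATIVE (above) changes, inspect the type a template
# returns: with the native environment enabled, non-string results can keep their
# native type instead of being rendered to text. A minimal sketch (the variable
# name is illustrative):
#
#   - hosts: localhost
#     vars:
#       port_list: "{{ [8080, 8443] }}"
#     tasks:
#       - ansible.builtin.debug:
#           msg: "{{ port_list | type_debug }}"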
  default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
  env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
  ini:
  - {key: lookup_plugins, section: defaults}
  type: pathspec
  yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
  name: Ansible managed
  default: 'Ansible managed'
  description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
  env: []
  ini:
  - {key: ansible_managed, section: defaults}
  yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
  name: Adhoc default arguments
  default: ~
  description:
    - This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
  env: [{name: ANSIBLE_MODULE_ARGS}]
  ini:
  - {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
  name: Python module compression
  default: ZIP_DEFLATED
  description: Compression scheme to use when transferring Python modules to the target.
  env: []
  ini:
  - {key: module_compression, section: defaults}
  # vars:
  # - name: ansible_module_compression
DEFAULT_MODULE_NAME:
  name: Default adhoc module
  default: command
  description: "Module to use with the ``ansible`` adhoc command, if none is specified via ``-m``."
  env: []
  ini:
  - {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
  name: Modules Path
  description: Colon separated paths in which Ansible will search for Modules.
  default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
  env: [{name: ANSIBLE_LIBRARY}]
  ini:
  - {key: library, section: defaults}
  type: pathspec
DEFAULT_MODULE_UTILS_PATH:
  name: Module Utils Path
  description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
  default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
  env: [{name: ANSIBLE_MODULE_UTILS}]
  ini:
  - {key: module_utils, section: defaults}
  type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
  name: Netconf Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
  description: Colon separated paths in which Ansible will search for Netconf Plugins.
  env: [{name: ANSIBLE_NETCONF_PLUGINS}]
  ini:
  - {key: netconf_plugins, section: defaults}
  type: pathspec
DEFAULT_NO_LOG:
  name: No log
  default: False
  description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
  env: [{name: ANSIBLE_NO_LOG}]
  ini:
  - {key: no_log, section: defaults}
  type: boolean
DEFAULT_NO_TARGET_SYSLOG:
  name: No syslog on target
  default: False
  description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable newer-style PowerShell modules from writing to the event log.
  env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
  ini:
  - {key: no_target_syslog, section: defaults}
  vars:
  - name: ansible_no_target_syslog
    version_added: '2.10'
  type: boolean
  yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
  name: Represent a null
  default: ~
  description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
  env: [{name: ANSIBLE_NULL_REPRESENTATION}]
  ini:
  - {key: null_representation, section: defaults}
  type: raw
DEFAULT_POLL_INTERVAL:
  name: Async poll interval
  default: 15
  description:
    - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
      The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed.
  env: [{name: ANSIBLE_POLL_INTERVAL}]
  ini:
  - {key: poll_interval, section: defaults}
  type: integer
DEFAULT_PRIVATE_KEY_FILE:
  name: Private key file
  default: ~
  description:
    - For connections that use a certificate or key file to authenticate, rather than an agent or passwords, you can set the default value here to avoid re-specifying ``--private-key`` with every invocation.
  env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
  ini:
  - {key: private_key_file, section: defaults}
  type: path
DEFAULT_PRIVATE_ROLE_VARS:
  name: Private role variables
  default: False
  description:
    - By default, imported roles publish their variables to the play and other roles, this setting can avoid that.
    - This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook.
    - Included roles only make their variables public at execution time, unlike imported roles, which are processed at playbook compile time.
  env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
  ini:
  - {key: private_role_vars, section: defaults}
  type: boolean
  yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
  name: Remote port
  default: ~
  description: Port to use in remote connections, when blank it will use the connection plugin default.
  env: [{name: ANSIBLE_REMOTE_PORT}]
  ini:
  - {key: remote_port, section: defaults}
  type: integer
  yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
  name: Login/Remote User
  description:
    - Sets the login user for the target machines.
    - "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
  env: [{name: ANSIBLE_REMOTE_USER}]
  ini:
  - {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
  name: Roles path
  default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
  description: Colon separated paths in which Ansible will search for Roles.
  env: [{name: ANSIBLE_ROLES_PATH}]
  expand_relative_paths: True
  ini:
  - {key: roles_path, section: defaults}
  type: pathspec
  yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
  name: Problematic file systems
  default: fuse, nfs, vboxsf, ramfs, 9p, vfat
  description:
    - "Some filesystems do not support safe operations and/or return inconsistent errors, this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
    - Data corruption may occur and writes are not always verified when a filesystem is in the list.
  env:
  - name: ANSIBLE_SELINUX_SPECIAL_FS
    version_added: "2.9"
  ini:
  - {key: special_context_filesystems, section: selinux}
  type: list
DEFAULT_STDOUT_CALLBACK:
  name: Main display callback plugin
  default: default
  description:
    - "Set the main callback used to display Ansible output. You can only have one at a time."
    - You can have many other callbacks, but just one can be in charge of stdout.
    - See :ref:`callback_plugins` for a list of available options.
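# The remote user/port defaults above are commonly overridden per host or group in
# inventory. A minimal YAML inventory sketch (hostname and values are illustrative):
#
#   all:
#     hosts:
#       web1.example.com:
#         ansible_user: deploy
#         ansible_port: 2222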
  env: [{name: ANSIBLE_STDOUT_CALLBACK}]
  ini:
  - {key: stdout_callback, section: defaults}
EDITOR:
  name: editor application to use
  default: vi
  description:
    - For the cases in which Ansible needs to return a file within an editor, this chooses the application to use.
  ini:
  - section: defaults
    key: editor
    version_added: '2.15'
  env:
  - name: ANSIBLE_EDITOR
    version_added: '2.15'
  - name: EDITOR
ENABLE_TASK_DEBUGGER:
  name: Whether to enable the task debugger
  default: False
  description:
    - Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
  type: boolean
  env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
  ini:
  - {key: enable_task_debugger, section: defaults}
  version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
  name: Whether a failed task with ignore_errors=True will still invoke the debugger
  default: True
  description:
    - This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True is specified.
    - True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
  type: boolean
  env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
  ini:
  - {key: task_debugger_ignore_errors, section: defaults}
  version_added: "2.7"
DEFAULT_STRATEGY:
  name: Implied strategy
  default: 'linear'
  description: Set the default strategy used for plays.
  env: [{name: ANSIBLE_STRATEGY}]
  ini:
  - {key: strategy, section: defaults}
  version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
  name: Strategy Plugins Path
  description: Colon separated paths in which Ansible will search for Strategy Plugins.
  default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
  env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
  ini:
  - {key: strategy_plugins, section: defaults}
  type: pathspec
DEFAULT_SU:
  default: False
  description: 'Toggle the use of "su" for tasks.'
  env: [{name: ANSIBLE_SU}]
  ini:
  - {key: su, section: defaults}
  type: boolean
  yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
  name: syslog facility
  default: LOG_USER
  description: Syslog facility to use when Ansible logs to the remote target.
  env: [{name: ANSIBLE_SYSLOG_FACILITY}]
  ini:
  - {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
  name: Terminal Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
  description: Colon separated paths in which Ansible will search for Terminal Plugins.
  env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
  ini:
  - {key: terminal_plugins, section: defaults}
  type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
  name: Jinja2 Test Plugins Path
  description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
  default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
  env: [{name: ANSIBLE_TEST_PLUGINS}]
  ini:
  - {key: test_plugins, section: defaults}
  type: pathspec
DEFAULT_TIMEOUT:
  name: Connection timeout
  default: 10
  description: This is the default timeout for connection plugins to use.
  env: [{name: ANSIBLE_TIMEOUT}]
  ini:
  - {key: timeout, section: defaults}
  type: integer
DEFAULT_TRANSPORT:
  name: Connection plugin
  default: ssh
  description:
    - Can be any connection plugin available to your ansible installation.
    - There is also a (DEPRECATED) special 'smart' option, that will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions.
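# The task debugger governed by ENABLE_TASK_DEBUGGER above can also be requested
# selectively with the 'debugger' play/task keyword. A minimal sketch:
#
#   - hosts: all
#     debugger: on_failed
#     tasks:
#       - name: Drop into the debugger if this task fails
#         ansible.builtin.command: /bin/false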
  env: [{name: ANSIBLE_TRANSPORT}]
  ini:
  - {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
  name: Jinja2 fail on undefined
  default: True
  version_added: "1.3"
  description:
    - When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
    - "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
  env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
  ini:
  - {key: error_on_undefined_vars, section: defaults}
  type: boolean
DEFAULT_VARS_PLUGIN_PATH:
  name: Vars Plugins Path
  default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
  description: Colon separated paths in which Ansible will search for Vars Plugins.
  env: [{name: ANSIBLE_VARS_PLUGINS}]
  ini:
  - {key: vars_plugins, section: defaults}
  type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
#  default: 0
#  description: 'TODO: write it'
#  env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
#  ini:
#  - {key: var_compression_level, section: defaults}
#  type: integer
#  yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
  name: Force vault id match
  default: False
  description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
  env: [{name: ANSIBLE_VAULT_ID_MATCH}]
  ini:
  - {key: vault_id_match, section: defaults}
  yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
  name: Vault id label
  default: default
  description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
  env: [{name: ANSIBLE_VAULT_IDENTITY}]
  ini:
  - {key: vault_identity, section: defaults}
  yaml: {key: defaults.vault_identity}
VAULT_ENCRYPT_SALT:
  name: Vault salt to use for encryption
  default: ~
  description: 'The salt to use for the vault encryption. If it is not provided, a random salt will be used.'
  env: [{name: ANSIBLE_VAULT_ENCRYPT_SALT}]
  ini:
  - {key: vault_encrypt_salt, section: defaults}
  version_added: '2.15'
DEFAULT_VAULT_ENCRYPT_IDENTITY:
  name: Vault id to use for encryption
  description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
  env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
  ini:
  - {key: vault_encrypt_identity, section: defaults}
  yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
  name: Default vault ids
  default: []
  description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
  env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
  ini:
  - {key: vault_identity_list, section: defaults}
  type: list
  yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
  name: Vault password file
  default: ~
  description:
    - 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
    - If executable, it will be run and the resulting stdout will be used as the password.
  env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
  ini:
  - {key: vault_password_file, section: defaults}
  type: path
  yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
  name: Verbosity
  default: 0
  description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}] ini: - {key: verbosity, section: defaults} type: integer DEPRECATION_WARNINGS: name: Deprecation messages default: True description: "Toggle to control the showing of deprecation warnings" env: [{name: ANSIBLE_DEPRECATION_WARNINGS}] ini: - {key: deprecation_warnings, section: defaults} type: boolean DEVEL_WARNING: name: Running devel warning default: True description: Toggle to control showing warnings related to running devel env: [{name: ANSIBLE_DEVEL_WARNING}] ini: - {key: devel_warning, section: defaults} type: boolean DIFF_ALWAYS: name: Show differences default: False description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``. env: [{name: ANSIBLE_DIFF_ALWAYS}] ini: - {key: always, section: diff} type: bool DIFF_CONTEXT: name: Difference context default: 3 description: How many lines of context to show when displaying the differences between files. env: [{name: ANSIBLE_DIFF_CONTEXT}] ini: - {key: context, section: diff} type: integer DISPLAY_ARGS_TO_STDOUT: name: Show task arguments default: False description: - "Normally ``ansible-playbook`` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn't, then ``ansible-playbook`` uses the task's action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header." - "This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed." - "If you set this to True you should be sure that you have secured your environment's stdout (no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values. See 'How do I keep secret data in my playbook?' for more information." env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}] ini: - {key: display_args_to_stdout, section: defaults} type: boolean version_added: "2.1" DISPLAY_SKIPPED_HOSTS: name: Show skipped results default: True description: "Toggle to control displaying skipped task/host entries in a task in the default callback" env: - name: ANSIBLE_DISPLAY_SKIPPED_HOSTS ini: - {key: display_skipped_hosts, section: defaults} type: boolean DOCSITE_ROOT_URL: name: Root docsite URL default: https://docs.ansible.com/ansible-core/ description: Root docsite URL used to generate docs URLs in warning/error text; must be an absolute URL with valid scheme and trailing slash. ini: - {key: docsite_root_url, section: defaults} version_added: "2.8" DUPLICATE_YAML_DICT_KEY: name: Controls ansible behaviour when finding duplicate keys in YAML. default: warn description: - By default Ansible will issue a warning when a duplicate dict key is encountered in YAML. - These warnings can be silenced by adjusting this setting to 'ignore'.
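# --- illustrative note (not part of the upstream base.yml) ---
# Each definition above names every source a setting can come from. Taking
# DIFF_CONTEXT as an example, the following two are equivalent:
#
#   export ANSIBLE_DIFF_CONTEXT=5        # environment variable
#
#   # ansible.cfg
#   [diff]
#   context = 5
#
# Per the resolution order implemented in lib/ansible/config/manager.py later
# in this archive, precedence from lowest to highest is: built-in default,
# ini file, environment variable, CLI, playbook keyword, then variables.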
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}] ini: - {key: duplicate_dict_key, section: defaults} type: string choices: &basic_error error: issue a 'fatal' error and stop the play warn: issue a warning but continue ignore: just continue silently version_added: "2.9" ERROR_ON_MISSING_HANDLER: name: Missing handler error default: True description: "Toggle to allow missing handlers to become a warning instead of an error when notifying." env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}] ini: - {key: error_on_missing_handler, section: defaults} type: boolean CONNECTION_FACTS_MODULES: name: Map of connections to fact modules default: # use ansible.legacy names on unqualified facts modules to allow library/ overrides asa: ansible.legacy.asa_facts cisco.asa.asa: cisco.asa.asa_facts eos: ansible.legacy.eos_facts arista.eos.eos: arista.eos.eos_facts frr: ansible.legacy.frr_facts frr.frr.frr: frr.frr.frr_facts ios: ansible.legacy.ios_facts cisco.ios.ios: cisco.ios.ios_facts iosxr: ansible.legacy.iosxr_facts cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts junos: ansible.legacy.junos_facts junipernetworks.junos.junos: junipernetworks.junos.junos_facts nxos: ansible.legacy.nxos_facts cisco.nxos.nxos: cisco.nxos.nxos_facts vyos: ansible.legacy.vyos_facts vyos.vyos.vyos: vyos.vyos.vyos_facts exos: ansible.legacy.exos_facts extreme.exos.exos: extreme.exos.exos_facts slxos: ansible.legacy.slxos_facts extreme.slxos.slxos: extreme.slxos.slxos_facts voss: ansible.legacy.voss_facts extreme.voss.voss: extreme.voss.voss_facts ironware: ansible.legacy.ironware_facts community.network.ironware: community.network.ironware_facts description: "Which modules to run during a play's fact gathering stage based on connection" type: dict FACTS_MODULES: name: Gather Facts Modules default: - smart description: - "Which modules to run during a play's fact gathering stage; using the default of 'smart' will try to figure it out based on connection type." - "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup' or the corresponding network module in the list (if you add 'smart', Ansible will also figure it out)." - "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)." env: [{name: ANSIBLE_FACTS_MODULES}] ini: - {key: facts_modules, section: defaults} type: list vars: - name: ansible_facts_modules GALAXY_IGNORE_CERTS: name: Galaxy validate certs description: - If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate. env: [{name: ANSIBLE_GALAXY_IGNORE}] ini: - {key: ignore_certs, section: galaxy} type: boolean GALAXY_SERVER_TIMEOUT: name: Default timeout to use for API calls description: - The default timeout for Galaxy API calls. Galaxy servers that don't configure a specific timeout will fall back to this value. env: [{name: ANSIBLE_GALAXY_SERVER_TIMEOUT}] default: 60 ini: - {key: server_timeout, section: galaxy} type: int GALAXY_ROLE_SKELETON: name: Galaxy role skeleton directory description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
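# --- illustrative note (not part of the upstream base.yml) ---
# FACTS_MODULES sketch: keep the default 'smart' gathering while adding a
# custom fact module (the collection/module name below is hypothetical):
#
#   # ansible.cfg
#   [defaults]
#   facts_modules = smart, my_namespace.my_collection.extra_facts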
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}] ini: - {key: role_skeleton, section: galaxy} type: path GALAXY_ROLE_SKELETON_IGNORE: name: Galaxy role skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: patterns of files to ignore inside a Galaxy role or collection skeleton directory env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}] ini: - {key: role_skeleton_ignore, section: galaxy} type: list GALAXY_COLLECTION_SKELETON: name: Galaxy collection skeleton directory description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``. env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}] ini: - {key: collection_skeleton, section: galaxy} type: path GALAXY_COLLECTION_SKELETON_IGNORE: name: Galaxy collection skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: patterns of files to ignore inside a Galaxy collection skeleton directory env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}] ini: - {key: collection_skeleton_ignore, section: galaxy} type: list GALAXY_COLLECTIONS_PATH_WARNING: name: "ansible-galaxy collection install collections path warnings" description: "whether ``ansible-galaxy collection install`` should warn about ``--collections-path`` missing from configured :ref:`collections_paths`" default: true type: bool env: [{name: ANSIBLE_GALAXY_COLLECTIONS_PATH_WARNING}] ini: - {key: collections_path_warning, section: galaxy} version_added: "2.16" # TODO: unused? #GALAXY_SCMS: # name: Galaxy SCMS # default: git, hg # description: Available galaxy source control management systems. # env: [{name: ANSIBLE_GALAXY_SCMS}] # ini: # - {key: scms, section: galaxy} # type: list GALAXY_SERVER: default: https://galaxy.ansible.com description: "URL to prepend when roles don't specify the full URI; assume they are referencing this server as the source." env: [{name: ANSIBLE_GALAXY_SERVER}] ini: - {key: server, section: galaxy} yaml: {key: galaxy.server} GALAXY_SERVER_LIST: description: - A list of Galaxy servers to use when installing a collection. - The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details. - 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.' - The order of servers in this list is used as the order in which a collection is resolved. - Setting this config option will ignore the :ref:`galaxy_server` config option. env: [{name: ANSIBLE_GALAXY_SERVER_LIST}] ini: - {key: server_list, section: galaxy} type: list version_added: "2.9" GALAXY_TOKEN_PATH: default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}' description: "Local path to galaxy access token file" env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}] ini: - {key: token_path, section: galaxy} type: path version_added: "2.9" GALAXY_DISPLAY_PROGRESS: default: ~ description: - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. - This config option controls whether the display wheel is shown or not. - The default is to show the display wheel if stdout has a tty. env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}] ini: - {key: display_progress, section: galaxy} type: bool version_added: "2.10" GALAXY_CACHE_DIR: default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}' description: - The directory that stores cached responses from a Galaxy server. - This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
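# --- illustrative note (not part of the upstream base.yml) ---
# GALAXY_SERVER_LIST sketch: each listed name maps to its own ini section, as
# the description above says (server names, URLs, and token are hypothetical):
#
#   # ansible.cfg
#   [galaxy]
#   server_list = release_galaxy, my_org_hub
#
#   [galaxy_server.release_galaxy]
#   url = https://galaxy.ansible.com/
#
#   [galaxy_server.my_org_hub]
#   url = https://hub.example.com/api/galaxy/
#   token = <redacted>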
- Cache files inside this dir will be ignored if they are world writable. env: - name: ANSIBLE_GALAXY_CACHE_DIR ini: - section: galaxy key: cache_dir type: path version_added: '2.11' GALAXY_DISABLE_GPG_VERIFY: default: false type: bool env: - name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY ini: - section: galaxy key: disable_gpg_verify description: - Disable GPG signature verification during collection installation. version_added: '2.13' GALAXY_GPG_KEYRING: type: path env: - name: ANSIBLE_GALAXY_GPG_KEYRING ini: - section: galaxy key: gpg_keyring description: - Configure the keyring used for GPG signature verification during collection installation and verification. version_added: '2.13' GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES: type: list env: - name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES ini: - section: galaxy key: ignore_signature_status_codes description: - A list of GPG status codes to ignore during GPG signature verification. See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions. - If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`, signature verification will fail even if all error codes are ignored. choices: - EXPSIG - EXPKEYSIG - REVKEYSIG - BADSIG - ERRSIG - NO_PUBKEY - MISSING_PASSPHRASE - BAD_PASSPHRASE - NODATA - UNEXPECTED - ERROR - FAILURE - BADARMOR - KEYEXPIRED - KEYREVOKED - NO_SECKEY GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: type: str default: 1 env: - name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT ini: - section: galaxy key: required_valid_signature_count description: - The number of signatures that must be successful during GPG signature verification while installing or verifying collections. - This should be a positive integer or 'all' to indicate that all signatures must successfully validate the collection. - Prepend '+' to the value to fail if no valid signatures are found for the collection. HOST_KEY_CHECKING: # note: constant not in use by ssh plugin anymore # TODO: check non ssh connection plugins for use/migration name: Check host keys default: True description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host' env: [{name: ANSIBLE_HOST_KEY_CHECKING}] ini: - {key: host_key_checking, section: defaults} type: boolean HOST_PATTERN_MISMATCH: name: Control host pattern mismatch behaviour default: 'warning' description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or just ignore it. env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}] ini: - {key: host_pattern_mismatch, section: inventory} choices: <<: *basic_error version_added: "2.8" INTERPRETER_PYTHON: name: Python interpreter path (or automatic discovery behavior) used for module execution default: auto env: [{name: ANSIBLE_PYTHON_INTERPRETER}] ini: - {key: interpreter_python, section: defaults} vars: - {name: ansible_python_interpreter} version_added: "2.8" description: - Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``. All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available.
The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or ``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present. _INTERPRETER_PYTHON_DISTRO_MAP: name: Mapping of known included platform pythons for various Linux distros default: redhat: '6': /usr/bin/python '8': /usr/libexec/platform-python '9': /usr/bin/python3 debian: '8': /usr/bin/python '10': /usr/bin/python3 fedora: '23': /usr/bin/python3 ubuntu: '14': /usr/bin/python '16': /usr/bin/python3 version_added: "2.8" # FUTURE: add inventory override once we're sure it can't be abused by a rogue target # FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc? INTERPRETER_PYTHON_FALLBACK: name: Ordered list of Python interpreters to check for in discovery default: - python3.12 - python3.11 - python3.10 - python3.9 - python3.8 - python3.7 - python3.6 - /usr/bin/python3 - /usr/libexec/platform-python - python2.7 - /usr/bin/python - python vars: - name: ansible_interpreter_python_fallback type: list version_added: "2.8" TRANSFORM_INVALID_GROUP_CHARS: name: Transform invalid characters in group names default: 'never' description: - Make ansible transform invalid characters in group names supplied by inventory sources. env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}] ini: - {key: force_valid_group_names, section: defaults} type: string choices: always: it will replace any invalid characters with '_' (underscore) and warn the user never: it will allow for the group name but warn about the issue ignore: it does the same as 'never', without issuing a warning silently: it does the same as 'always', without issuing a warning version_added: '2.8' INVALID_TASK_ATTRIBUTE_FAILED: name: Controls whether invalid attributes for a task result in errors instead of warnings default: True description: If 'false', invalid attributes for a task will result in warnings instead of errors type: boolean env: - name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED ini: - key: invalid_task_attribute_failed section: defaults version_added: "2.7" INVENTORY_ANY_UNPARSED_IS_FAILED: name: Controls whether any unparsable inventory source is a fatal error default: False description: > If 'true', it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning. type: boolean env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}] ini: - {key: any_unparsed_is_failed, section: inventory} version_added: "2.7" INVENTORY_CACHE_ENABLED: name: Inventory caching enabled default: False description: - Toggle to turn on inventory caching. - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`. - The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration. - This message will be removed in 2.16. env: [{name: ANSIBLE_INVENTORY_CACHE}] ini: - {key: cache, section: inventory} type: bool INVENTORY_CACHE_PLUGIN: name: Inventory cache plugin description: - The plugin for caching inventory. - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`. 
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration. - This message will be removed in 2.16. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}] ini: - {key: cache_plugin, section: inventory} INVENTORY_CACHE_PLUGIN_CONNECTION: name: Inventory cache plugin URI to override the defaults section description: - The inventory cache connection. - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`. - The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration. - This message will be removed in 2.16. env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}] ini: - {key: cache_connection, section: inventory} INVENTORY_CACHE_PLUGIN_PREFIX: name: Inventory cache plugin table prefix description: - The table prefix for the cache plugin. - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`. - The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration. - This message will be removed in 2.16. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}] default: ansible_inventory_ ini: - {key: cache_prefix, section: inventory} INVENTORY_CACHE_TIMEOUT: name: Inventory cache plugin expiration timeout description: - Expiration timeout for the inventory cache plugin data. - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`. - The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration. - This message will be removed in 2.16. default: 3600 env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}] ini: - {key: cache_timeout, section: inventory} INVENTORY_ENABLED: name: Active Inventory plugins default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml'] description: List of enabled inventory plugins; it also determines the order in which they are used. env: [{name: ANSIBLE_INVENTORY_ENABLED}] ini: - {key: enable_plugins, section: inventory} type: list INVENTORY_EXPORT: name: Set ansible-inventory into export mode default: False description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory, or if it is optimized for exporting. env: [{name: ANSIBLE_INVENTORY_EXPORT}] ini: - {key: export, section: inventory} type: bool INVENTORY_IGNORE_EXTS: name: Inventory ignore extensions default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}" description: List of extensions to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE}] ini: - {key: inventory_ignore_extensions, section: defaults} - {key: ignore_extensions, section: inventory} type: list INVENTORY_IGNORE_PATTERNS: name: Inventory ignore patterns default: [] description: List of patterns to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}] ini: - {key: inventory_ignore_patterns, section: defaults} - {key: ignore_patterns, section: inventory} type: list INVENTORY_UNPARSED_IS_FAILED: name: Unparsed Inventory failure default: False description: > If 'true', it is a fatal error if every single potential inventory source fails to parse; otherwise this situation will only attract a warning.
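# --- illustrative note (not part of the upstream base.yml) ---
# INVENTORY_ENABLED controls both which inventory plugins may parse a source
# and the order in which they are tried; a sketch restricting to two plugins:
#
#   # ansible.cfg
#   [inventory]
#   enable_plugins = yaml, ini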
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}] ini: - {key: unparsed_is_failed, section: inventory} type: bool JINJA2_NATIVE_WARNING: name: Running older than required Jinja version for jinja2_native warning default: True description: Toggle to control showing warnings related to running a Jinja version older than required for jinja2_native env: - name: ANSIBLE_JINJA2_NATIVE_WARNING deprecated: why: This option is no longer used in the Ansible Core code base. version: "2.17" ini: - {key: jinja2_native_warning, section: defaults} type: boolean MAX_FILE_SIZE_FOR_DIFF: name: Diff maximum file size default: 104448 description: Maximum size of files to be considered for diff display env: [{name: ANSIBLE_MAX_DIFF_SIZE}] ini: - {key: max_diff_size, section: defaults} type: int NETWORK_GROUP_MODULES: name: Network module families default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos] description: 'TODO: write it' env: - name: ANSIBLE_NETWORK_GROUP_MODULES ini: - {key: network_group_modules, section: defaults} type: list yaml: {key: defaults.network_group_modules} INJECT_FACTS_AS_VARS: default: True description: - Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace. - Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix. env: [{name: ANSIBLE_INJECT_FACT_VARS}] ini: - {key: inject_facts_as_vars, section: defaults} type: boolean version_added: "2.5" MODULE_IGNORE_EXTS: name: Module ignore extensions default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}" description: - List of extensions to ignore when looking for modules to load - This is for rejecting script and binary module fallback extensions env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}] ini: - {key: module_ignore_exts, section: defaults} type: list MODULE_STRICT_UTF8_RESPONSE: name: Module strict UTF-8 response description: - Controls whether module responses are evaluated for containing non-UTF-8 data - Disabling this may result in unexpected behavior - Only ansible-core should evaluate this configuration env: [{name: ANSIBLE_MODULE_STRICT_UTF8_RESPONSE}] ini: - {key: module_strict_utf8_response, section: defaults} type: bool default: True OLD_PLUGIN_CACHE_CLEARING: description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
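# --- illustrative note (not part of the upstream base.yml) ---
# INJECT_FACTS_AS_VARS decides whether a fact is reachable both ways below,
# or only through the ansible_facts dictionary:
#
#   - debug: var=ansible_facts.distribution   # always available
#   - debug: var=ansible_distribution         # only when injection is enabled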
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}] ini: - {key: old_plugin_cache_clear, section: defaults} type: boolean default: False version_added: "2.8" PAGER: name: pager application to use default: less description: - for the cases in which Ansible needs to return output in a pageable fashion, this chooses the application to use ini: - section: defaults key: pager version_added: '2.15' env: - name: ANSIBLE_PAGER version_added: '2.15' - name: PAGER PARAMIKO_HOST_KEY_AUTO_ADD: # TODO: move to plugin default: False description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}] ini: - {key: host_key_auto_add, section: paramiko_connection} type: boolean PARAMIKO_LOOK_FOR_KEYS: name: look for keys default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}] ini: - {key: look_for_keys, section: paramiko_connection} type: boolean PERSISTENT_CONTROL_PATH_DIR: name: Persistence socket path default: '{{ ANSIBLE_HOME ~ "/pc" }}' description: Path to socket to be used by the connection persistence system. env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: persistent_connection} type: path PERSISTENT_CONNECT_TIMEOUT: name: Persistence timeout default: 30 description: This controls how long the persistent connection will remain idle before it is destroyed. env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}] ini: - {key: connect_timeout, section: persistent_connection} type: integer PERSISTENT_CONNECT_RETRY_TIMEOUT: name: Persistence connection retry timeout default: 15 description: This controls the retry timeout for persistent connection to connect to the local domain socket. env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}] ini: - {key: connect_retry_timeout, section: persistent_connection} type: integer PERSISTENT_COMMAND_TIMEOUT: name: Persistence command timeout default: 30 description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection. env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}] ini: - {key: command_timeout, section: persistent_connection} type: int PLAYBOOK_DIR: name: playbook dir override for non-playbook CLIs (ala --playbook-dir) version_added: "2.9" description: - A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it. env: [{name: ANSIBLE_PLAYBOOK_DIR}] ini: [{key: playbook_dir, section: defaults}] type: path PLAYBOOK_VARS_ROOT: name: playbook vars files root default: top version_added: "2.4.1" description: - This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}] ini: - {key: playbook_vars_root, section: defaults} choices: top: follows the traditional behavior of using the top playbook in the chain to find the root directory. bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory. all: examines from the first parent to the current playbook. PLUGIN_FILTERS_CFG: name: Config file for limiting valid plugins default: null version_added: "2.5.0" description: - "A path to configuration for filtering which plugins installed on the system are allowed to be used." - "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml" ini: - key: plugin_filters_cfg section: defaults type: path PYTHON_MODULE_RLIMIT_NOFILE: name: Adjust maximum file descriptor soft limit during Python module execution description: - Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits. default: 0 env: - {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE} ini: - {key: python_module_rlimit_nofile, section: defaults} vars: - {name: ansible_python_module_rlimit_nofile} version_added: '2.8' RETRY_FILES_ENABLED: name: Retry files default: False description: This controls whether a failed Ansible playbook should create a .retry file. env: [{name: ANSIBLE_RETRY_FILES_ENABLED}] ini: - {key: retry_files_enabled, section: defaults} type: bool RETRY_FILES_SAVE_PATH: name: Retry files path default: ~ description: - This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. - This file will be overwritten after each run with the list of failed hosts from all plays. env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}] ini: - {key: retry_files_save_path, section: defaults} type: path RUN_VARS_PLUGINS: name: When should vars plugins run relative to inventory default: demand description: - This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection. env: [{name: ANSIBLE_RUN_VARS_PLUGINS}] ini: - {key: run_vars_plugins, section: defaults} type: str choices: demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks. start: will run vars_plugins relative to inventory sources after importing that inventory source. version_added: "2.10" SHOW_CUSTOM_STATS: name: Display custom stats default: False description: 'This adds the custom stats set via the set_stats plugin to the default output' env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}] ini: - {key: show_custom_stats, section: defaults} type: bool STRING_TYPE_FILTERS: name: Filters to preserve strings default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json] description: - "This list of filters avoids 'type conversion' when templating variables" - Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example. env: [{name: ANSIBLE_STRING_TYPE_FILTERS}] ini: - {key: dont_type_filters, section: jinja2} type: list SYSTEM_WARNINGS: name: System warnings default: True description: - Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts) - These may include warnings about 3rd party packages or other conditions that should be resolved if possible. env: [{name: ANSIBLE_SYSTEM_WARNINGS}] ini: - {key: system_warnings, section: defaults} type: boolean TAGS_RUN: name: Run Tags default: [] type: list description: default list of tags to run in your plays, Skip Tags has precedence. 
env: [{name: ANSIBLE_RUN_TAGS}] ini: - {key: run, section: tags} version_added: "2.5" TAGS_SKIP: name: Skip Tags default: [] type: list description: default list of tags to skip in your plays; it has precedence over Run Tags env: [{name: ANSIBLE_SKIP_TAGS}] ini: - {key: skip, section: tags} version_added: "2.5" TASK_TIMEOUT: name: Task Timeout default: 0 description: - Set the maximum time (in seconds) that a task can run for. - If set to 0 (the default) there is no timeout. env: [{name: ANSIBLE_TASK_TIMEOUT}] ini: - {key: task_timeout, section: defaults} type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_COUNT: name: Worker Shutdown Poll Count default: 0 description: - The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. - After this limit is reached, any worker processes still running will be terminated. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}] type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_DELAY: name: Worker Shutdown Poll Delay default: 0.1 description: - The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}] type: float version_added: '2.10' USE_PERSISTENT_CONNECTIONS: name: Persistence default: False description: Toggles the use of persistence for connections. env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}] ini: - {key: use_persistent_connections, section: defaults} type: boolean VARIABLE_PLUGINS_ENABLED: name: Vars plugin enabled list default: ['host_group_vars'] description: Accept list for variable plugins that require it. env: [{name: ANSIBLE_VARS_ENABLED}] ini: - {key: vars_plugins_enabled, section: defaults} type: list version_added: "2.10" VARIABLE_PRECEDENCE: name: Group variable precedence default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play'] description: Allows changing the group variable precedence merge order. env: [{name: ANSIBLE_PRECEDENCE}] ini: - {key: precedence, section: defaults} type: list version_added: "2.4" WIN_ASYNC_STARTUP_TIMEOUT: name: Windows Async Startup Timeout default: 5 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. - This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here. env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}] ini: - {key: win_async_startup_timeout, section: defaults} type: integer vars: - {name: ansible_win_async_startup_timeout} version_added: '2.10' YAML_FILENAME_EXTENSIONS: name: Valid YAML extensions default: [".yml", ".yaml", ".json"] description: - "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these." - 'This affects vars_files, include_vars, inventory and vars plugins among others.'
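# --- illustrative note (not part of the upstream base.yml) ---
# TASK_TIMEOUT sketch: abort any task still running after five minutes
# (0, the default, disables the timeout entirely):
#
#   export ANSIBLE_TASK_TIMEOUT=300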
env: - name: ANSIBLE_YAML_FILENAME_EXT ini: - section: defaults key: yaml_valid_extensions type: list NETCONF_SSH_CONFIG: description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set to a custom ssh configuration file path from which to read the bastion/jump host settings. env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}] ini: - {key: ssh_config, section: netconf_connection} yaml: {key: netconf_connection.ssh_config} default: null STRING_CONVERSION_ACTION: version_added: '2.8' description: - Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc. will be converted by the YAML parser unless fully quoted. - Valid options are 'error', 'warn', and 'ignore'. - Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12. default: 'warn' env: - name: ANSIBLE_STRING_CONVERSION_ACTION ini: - section: defaults key: string_conversion_action type: string deprecated: why: This option is no longer used in the Ansible Core code base. version: "2.19" alternatives: There is no alternative at the moment. A different mechanism would have to be implemented in the current code base. VALIDATE_ACTION_GROUP_METADATA: version_added: '2.12' description: - A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group. Metadata containing unexpected fields or value types will produce a warning when this is True. default: True env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}] ini: - section: defaults key: validate_action_group_metadata type: bool VERBOSE_TO_STDERR: version_added: '2.8' description: - Force 'verbose' option to use stderr instead of stdout default: False env: - name: ANSIBLE_VERBOSE_TO_STDERR ini: - section: defaults key: verbose_to_stderr type: bool ...
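A minimal sketch of resolving one of the settings defined above through the ConfigManager implemented in the next file; the printed origin is illustrative and depends on the local environment:

    from ansible.config.manager import ConfigManager

    mgr = ConfigManager()  # locates ansible.cfg via find_ini_config_file()
    value, origin = mgr.get_config_value_and_origin('DEFAULT_TIMEOUT')
    print(value, origin)   # e.g. "10 default" when nothing overrides it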
ansible-core-2.16.3/lib/ansible/config/manager.py0000644000000000000000000006144314556006441020335 0ustar00rootroot# Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import atexit import configparser import os import os.path import sys import stat import tempfile from collections import namedtuple from collections.abc import Mapping, Sequence from jinja2.nativetypes import NativeEnvironment from ansible.errors import AnsibleOptionsError, AnsibleError from ansible.module_utils.common.text.converters import to_text, to_bytes, to_native from ansible.module_utils.common.yaml import yaml_load from ansible.module_utils.six import string_types from ansible.module_utils.parsing.convert_bool import boolean from ansible.parsing.quoting import unquote from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode from ansible.utils import py3compat from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath Plugin = namedtuple('Plugin', 'name type') Setting = namedtuple('Setting', 'name value origin type') INTERNAL_DEFS = {'lookup': ('_terms',)} def _get_entry(plugin_type, plugin_name, config): ''' construct entry for requested config ''' entry = '' if plugin_type: entry += 'plugin_type: %s ' % plugin_type if plugin_name: entry += 'plugin: %s ' % plugin_name entry += 'setting: %s ' % config return entry # FIXME: see if we can unify in module_utils with similar function used by argspec def ensure_type(value, value_type, origin=None): ''' return a configuration variable with casting :arg value: The value to ensure correct typing of :kwarg value_type: The type of the value. This can be any of the following strings: :boolean: sets the value to a True or False value :bool: Same as 'boolean' :integer: Sets the value to an integer or raises a ValueError :int: Same as 'integer' :float: Sets the value to a float or raises a ValueError :list: Treats the value as a comma separated list. Split the value and return it as a python list. :none: Sets the value to None :path: Expands any environment variables and tildes in the value. :tmppath: Create a unique temporary directory inside of the directory specified by value and return its path. :temppath: Same as 'tmppath' :tmp: Same as 'tmppath' :pathlist: Treat the value as a typical PATH string. (On POSIX, this means comma separated strings.) Split the value and then expand each part for environment variables and tildes. :pathspec: Treat the value as a PATH string. Expands any environment variables and tildes in the value. :str: Sets the value to string types.
:string: Same as 'str' ''' errmsg = '' basedir = None if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)): basedir = origin if value_type: value_type = value_type.lower() if value is not None: if value_type in ('boolean', 'bool'): value = boolean(value, strict=False) elif value_type in ('integer', 'int'): value = int(value) elif value_type == 'float': value = float(value) elif value_type == 'list': if isinstance(value, string_types): value = [unquote(x.strip()) for x in value.split(',')] elif not isinstance(value, Sequence): errmsg = 'list' elif value_type == 'none': if value == "None": value = None if value is not None: errmsg = 'None' elif value_type == 'path': if isinstance(value, string_types): value = resolve_path(value, basedir=basedir) else: errmsg = 'path' elif value_type in ('tmp', 'temppath', 'tmppath'): if isinstance(value, string_types): value = resolve_path(value, basedir=basedir) if not os.path.exists(value): makedirs_safe(value, 0o700) prefix = 'ansible-local-%s' % os.getpid() value = tempfile.mkdtemp(prefix=prefix, dir=value) atexit.register(cleanup_tmp_file, value, warn=True) else: errmsg = 'temppath' elif value_type == 'pathspec': if isinstance(value, string_types): value = value.split(os.pathsep) if isinstance(value, Sequence): value = [resolve_path(x, basedir=basedir) for x in value] else: errmsg = 'pathspec' elif value_type == 'pathlist': if isinstance(value, string_types): value = [x.strip() for x in value.split(',')] if isinstance(value, Sequence): value = [resolve_path(x, basedir=basedir) for x in value] else: errmsg = 'pathlist' elif value_type in ('dict', 'dictionary'): if not isinstance(value, Mapping): errmsg = 'dictionary' elif value_type in ('str', 'string'): if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)): value = to_text(value, errors='surrogate_or_strict') if origin == 'ini': value = unquote(value) else: errmsg = 'string' # defaults to string type elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)): value = to_text(value, errors='surrogate_or_strict') if origin == 'ini': value = unquote(value) if errmsg: raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value))) return to_text(value, errors='surrogate_or_strict', nonstring='passthru') # FIXME: see if this can live in utils/path def resolve_path(path, basedir=None): ''' resolve relative or 'variable' paths ''' if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}} path = path.replace('{{CWD}}', os.getcwd()) return unfrackpath(path, follow=False, basedir=basedir) # FIXME: generic file type? def get_config_type(cfile): ftype = None if cfile is not None: ext = os.path.splitext(cfile)[-1] if ext in ('.ini', '.cfg'): ftype = 'ini' elif ext in ('.yaml', '.yml'): ftype = 'yaml' else: raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext))) return ftype # FIXME: can move to module_utils for use for ini plugins also? 
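# --- illustrative sketch (not part of the original module) ---
# ensure_type() defined above casts raw configuration strings to their
# declared types; a hypothetical interactive session:
#
#   >>> ensure_type('yes', 'bool')
#   True
#   >>> ensure_type('a, b, c', 'list')
#   ['a', 'b', 'c']
#   >>> ensure_type('~/plugins', 'pathspec')   # one entry per os.pathsep part
#   ['/home/user/plugins']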
def get_ini_config_value(p, entry): ''' returns the value of last ini entry found ''' value = None if p is not None: try: value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True) except Exception: # FIXME: actually report issues here pass return value def find_ini_config_file(warnings=None): ''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible ''' # FIXME: eventually deprecate ini configs if warnings is None: # Note: In this case, warnings does nothing warnings = set() # A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later # We can't use None because we could set path to None. SENTINEL = object potential_paths = [] # Environment setting path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL) if path_from_env is not SENTINEL: path_from_env = unfrackpath(path_from_env, follow=False) if os.path.isdir(to_bytes(path_from_env)): path_from_env = os.path.join(path_from_env, "ansible.cfg") potential_paths.append(path_from_env) # Current working directory warn_cmd_public = False try: cwd = os.getcwd() perms = os.stat(cwd) cwd_cfg = os.path.join(cwd, "ansible.cfg") if perms.st_mode & stat.S_IWOTH: # Working directory is world writable so we'll skip it. # Still have to look for a file here, though, so that we know if we have to warn if os.path.exists(cwd_cfg): warn_cmd_public = True else: potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict')) except OSError: # If we can't access cwd, we'll simply skip it as a possible config source pass # Per user location potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False)) # System location potential_paths.append("/etc/ansible/ansible.cfg") for path in potential_paths: b_path = to_bytes(path) if os.path.exists(b_path) and os.access(b_path, os.R_OK): break else: path = None # Emit a warning if all the following are true: # * We did not use a config from ANSIBLE_CONFIG # * There's an ansible.cfg in the current working directory that we skipped if path_from_env != path and warn_cmd_public: warnings.add(u"Ansible is being run in a world writable directory (%s)," u" ignoring it as an ansible.cfg source." 
u" For more information see" u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir" % to_text(cwd)) return path def _add_base_defs_deprecations(base_defs): '''Add deprecation source 'ansible.builtin' to deprecations in base.yml''' def process(entry): if 'deprecated' in entry: entry['deprecated']['collection_name'] = 'ansible.builtin' for dummy, data in base_defs.items(): process(data) for section in ('ini', 'env', 'vars'): if section in data: for entry in data[section]: process(entry) class ConfigManager(object): DEPRECATED = [] # type: list[tuple[str, dict[str, str]]] WARNINGS = set() # type: set[str] def __init__(self, conf_file=None, defs_file=None): self._base_defs = {} self._plugins = {} self._parsers = {} self._config_file = conf_file self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__))) _add_base_defs_deprecations(self._base_defs) if self._config_file is None: # set config using ini self._config_file = find_ini_config_file(self.WARNINGS) # consume configuration if self._config_file: # initialize parser and read config self._parse_config_file() # ensure we always have config def entry self._base_defs['CONFIG_FILE'] = {'default': None, 'type': 'path'} def _read_config_yaml_file(self, yml_file): # TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD # Currently this is only used with absolute paths to the `ansible/config` directory yml_file = to_bytes(yml_file) if os.path.exists(yml_file): with open(yml_file, 'rb') as config_def: return yaml_load(config_def) or {} raise AnsibleError( "Missing base YAML definition file (bad install?): %s" % to_native(yml_file)) def _parse_config_file(self, cfile=None): ''' return flat configuration settings from file(s) ''' # TODO: take list of files with merge/nomerge if cfile is None: cfile = self._config_file ftype = get_config_type(cfile) if cfile is not None: if ftype == 'ini': self._parsers[cfile] = configparser.ConfigParser(inline_comment_prefixes=(';',)) with open(to_bytes(cfile), 'rb') as f: try: cfg_text = to_text(f.read(), errors='surrogate_or_strict') except UnicodeError as e: raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e))) try: self._parsers[cfile].read_string(cfg_text) except configparser.Error as e: raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e))) # FIXME: this should eventually handle yaml config files # elif ftype == 'yaml': # with open(cfile, 'rb') as config_stream: # self._parsers[cfile] = yaml_load(config_stream) else: raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype)) def _find_yaml_config_files(self): ''' Load YAML Config Files in order, check merge flags, keep origin of settings''' pass def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None): options = {} defs = self.get_configuration_definitions(plugin_type, name) for option in defs: options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct) return options def get_plugin_vars(self, plugin_type, name): pvars = [] for pdef in self.get_configuration_definitions(plugin_type, name).values(): if 'vars' in pdef and pdef['vars']: for var_entry in pdef['vars']: pvars.append(var_entry['name']) return pvars def get_plugin_options_from_var(self, plugin_type, name, variable): 
options = [] for option_name, pdef in self.get_configuration_definitions(plugin_type, name).items(): if 'vars' in pdef and pdef['vars']: for var_entry in pdef['vars']: if variable == var_entry['name']: options.append(option_name) return options def get_configuration_definition(self, name, plugin_type=None, plugin_name=None): ret = {} if plugin_type is None: ret = self._base_defs.get(name, None) elif plugin_name is None: ret = self._plugins.get(plugin_type, {}).get(name, None) else: ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None) return ret def has_configuration_definition(self, plugin_type, name): has = False if plugin_type in self._plugins: has = (name in self._plugins[plugin_type]) return has def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False): ''' just list the possible settings, either base or for specific plugins or plugin ''' ret = {} if plugin_type is None: ret = self._base_defs elif name is None: ret = self._plugins.get(plugin_type, {}) else: ret = self._plugins.get(plugin_type, {}).get(name, {}) if ignore_private: for cdef in list(ret.keys()): if cdef.startswith('_'): del ret[cdef] return ret def _loop_entries(self, container, entry_list): ''' repeat code for value entry assignment ''' value = None origin = None for entry in entry_list: name = entry.get('name') try: temp_value = container.get(name, None) except UnicodeEncodeError: self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name))) continue if temp_value is not None: # only set if entry is defined in container # inline vault variables should be converted to a text string if isinstance(temp_value, AnsibleVaultEncryptedUnicode): temp_value = to_text(temp_value, errors='surrogate_or_strict') value = temp_value origin = name # deal with deprecation of setting source, if used if 'deprecated' in entry: self.DEPRECATED.append((entry['name'], entry['deprecated'])) return value, origin def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None): ''' wrapper ''' try: value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name, keys=keys, variables=variables, direct=direct) except AnsibleError: raise except Exception as e: raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e) return value def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None): ''' Given a config key figure out the actual value and report on the origin of the settings ''' if cfile is None: # use default config cfile = self._config_file if config == 'CONFIG_FILE': return cfile, '' # Note: sources that are lists listed in low to high precedence (last one wins) value = None origin = None defs = self.get_configuration_definitions(plugin_type, plugin_name) if config in defs: aliases = defs[config].get('aliases', []) # direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults if direct: if config in direct: value = direct[config] origin = 'Direct' else: direct_aliases = [direct[alias] for alias in aliases if alias in direct] if direct_aliases: value = direct_aliases[0] origin = 'Direct' if value is None and variables and defs[config].get('vars'): # Use 'variable overrides' if present, highest precedence, but only present when querying running play value, origin = 
self._loop_entries(variables, defs[config]['vars']) origin = 'var: %s' % origin # use playbook keywords if you have em if value is None and defs[config].get('keyword') and keys: value, origin = self._loop_entries(keys, defs[config]['keyword']) origin = 'keyword: %s' % origin # automap to keywords # TODO: deprecate these in favor of explicit keyword above if value is None and keys: if config in keys: value = keys[config] keyword = config elif aliases: for alias in aliases: if alias in keys: value = keys[alias] keyword = alias break if value is not None: origin = 'keyword: %s' % keyword if value is None and 'cli' in defs[config]: # avoid circular import .. until valid from ansible import context value, origin = self._loop_entries(context.CLIARGS, defs[config]['cli']) origin = 'cli: %s' % origin # env vars are next precedence if value is None and defs[config].get('env'): value, origin = self._loop_entries(py3compat.environ, defs[config]['env']) origin = 'env: %s' % origin # try config file entries next, if we have one if self._parsers.get(cfile, None) is None: self._parse_config_file(cfile) if value is None and cfile is not None: ftype = get_config_type(cfile) if ftype and defs[config].get(ftype): if ftype == 'ini': # load from ini config try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe for ini_entry in defs[config]['ini']: temp_value = get_ini_config_value(self._parsers[cfile], ini_entry) if temp_value is not None: value = temp_value origin = cfile if 'deprecated' in ini_entry: self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated'])) except Exception as e: sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e))) elif ftype == 'yaml': # FIXME: implement, also , break down key from defs (. notation???) origin = cfile # set default if we got here w/o a value if value is None: if defs[config].get('required', False): if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}): raise AnsibleError("No setting was provided for required configuration %s" % to_native(_get_entry(plugin_type, plugin_name, config))) else: origin = 'default' value = defs[config].get('default') if isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')) and variables is not None: # template default values if possible # NOTE: cannot use is_template due to circular dep try: t = NativeEnvironment().from_string(value) value = t.render(variables) except Exception: pass # not templatable # ensure correct type, can raise exceptions on mismatched types try: value = ensure_type(value, defs[config].get('type'), origin=origin) except ValueError as e: if origin.startswith('env:') and value == '': # this is empty env var for non string so we can set to default origin = 'default' value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin) else: raise AnsibleOptionsError('Invalid type for configuration option %s (from %s): %s' % (to_native(_get_entry(plugin_type, plugin_name, config)).strip(), origin, to_native(e))) # deal with restricted values if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None: invalid_choices = True # assume the worst! if defs[config].get('type') == 'list': # for a list type, compare all values in type are allowed invalid_choices = not all(choice in defs[config]['choices'] for choice in value) else: # these should be only the simple data types (string, int, bool, float, etc) .. 
ignore dicts for now invalid_choices = value not in defs[config]['choices'] if invalid_choices: if isinstance(defs[config]['choices'], Mapping): valid = ', '.join([to_text(k) for k in defs[config]['choices'].keys()]) elif isinstance(defs[config]['choices'], string_types): valid = defs[config]['choices'] elif isinstance(defs[config]['choices'], Sequence): valid = ', '.join([to_text(c) for c in defs[config]['choices']]) else: valid = defs[config]['choices'] raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' % (value, to_native(_get_entry(plugin_type, plugin_name, config)), valid)) # deal with deprecation of the setting if 'deprecated' in defs[config] and origin != 'default': self.DEPRECATED.append((config, defs[config].get('deprecated'))) else: raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config))) return value, origin def initialize_plugin_configuration_definitions(self, plugin_type, name, defs): if plugin_type not in self._plugins: self._plugins[plugin_type] = {} self._plugins[plugin_type][name] = defs ansible-core-2.16.3/lib/ansible/constants.py0000644000000000000000000001773214556006441017474 0ustar00rootroot# Copyright: (c) 2012-2014, Michael DeHaan # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import re from string import ascii_letters, digits from ansible.config.manager import ConfigManager from ansible.module_utils.common.text.converters import to_text from ansible.module_utils.common.collections import Sequence from ansible.module_utils.parsing.convert_bool import BOOLEANS_TRUE from ansible.release import __version__ from ansible.utils.fqcn import add_internal_fqcns def _warning(msg): ''' display is not guaranteed here, nor it being the full class, but try anyways, fallback to sys.stderr.write ''' try: from ansible.utils.display import Display Display().warning(msg) except Exception: import sys sys.stderr.write(' [WARNING] %s\n' % (msg)) def _deprecated(msg, version): ''' display is not guaranteed here, nor it being the full class, but try anyways, fallback to sys.stderr.write ''' try: from ansible.utils.display import Display Display().deprecated(msg, version=version) except Exception: import sys sys.stderr.write(' [DEPRECATED] %s, to be removed in %s\n' % (msg, version)) def set_constant(name, value, export=vars()): ''' sets constants and returns resolved options dict ''' export[name] = value class _DeprecatedSequenceConstant(Sequence): def __init__(self, value, msg, version): self._value = value self._msg = msg self._version = version def __len__(self): _deprecated(self._msg, self._version) return len(self._value) def __getitem__(self, y): _deprecated(self._msg, self._version) return self._value[y] # CONSTANTS ### yes, actual ones # The following are hard-coded action names _ACTION_DEBUG = add_internal_fqcns(('debug', )) _ACTION_IMPORT_PLAYBOOK = add_internal_fqcns(('import_playbook', )) _ACTION_IMPORT_ROLE = add_internal_fqcns(('import_role', )) _ACTION_IMPORT_TASKS = add_internal_fqcns(('import_tasks', )) _ACTION_INCLUDE_ROLE = add_internal_fqcns(('include_role', )) _ACTION_INCLUDE_TASKS = add_internal_fqcns(('include_tasks', )) _ACTION_INCLUDE_VARS = add_internal_fqcns(('include_vars', )) _ACTION_INVENTORY_TASKS = add_internal_fqcns(('add_host', 'group_by')) _ACTION_META = 
add_internal_fqcns(('meta', )) _ACTION_SET_FACT = add_internal_fqcns(('set_fact', )) _ACTION_SETUP = add_internal_fqcns(('setup', )) _ACTION_HAS_CMD = add_internal_fqcns(('command', 'shell', 'script')) _ACTION_ALLOWS_RAW_ARGS = _ACTION_HAS_CMD + add_internal_fqcns(('raw', )) _ACTION_ALL_INCLUDES = _ACTION_INCLUDE_TASKS + _ACTION_INCLUDE_ROLE _ACTION_ALL_INCLUDE_IMPORT_TASKS = _ACTION_INCLUDE_TASKS + _ACTION_IMPORT_TASKS _ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES = _ACTION_INCLUDE_ROLE + _ACTION_IMPORT_ROLE _ACTION_ALL_PROPER_INCLUDE_IMPORT_TASKS = _ACTION_INCLUDE_TASKS + _ACTION_IMPORT_TASKS _ACTION_ALL_INCLUDE_ROLE_TASKS = _ACTION_INCLUDE_ROLE + _ACTION_INCLUDE_TASKS _ACTION_FACT_GATHERING = _ACTION_SETUP + add_internal_fqcns(('gather_facts', )) _ACTION_WITH_CLEAN_FACTS = _ACTION_SET_FACT + _ACTION_INCLUDE_VARS # http://nezzen.net/2008/06/23/colored-text-in-python-using-ansi-escape-sequences/ COLOR_CODES = { 'black': u'0;30', 'bright gray': u'0;37', 'blue': u'0;34', 'white': u'1;37', 'green': u'0;32', 'bright blue': u'1;34', 'cyan': u'0;36', 'bright green': u'1;32', 'red': u'0;31', 'bright cyan': u'1;36', 'purple': u'0;35', 'bright red': u'1;31', 'yellow': u'0;33', 'bright purple': u'1;35', 'dark gray': u'1;30', 'bright yellow': u'1;33', 'magenta': u'0;35', 'bright magenta': u'1;35', 'normal': u'0', } REJECT_EXTS = ('.pyc', '.pyo', '.swp', '.bak', '~', '.rpm', '.md', '.txt', '.rst') BOOL_TRUE = BOOLEANS_TRUE COLLECTION_PTYPE_COMPAT = {'module': 'modules'} PYTHON_DOC_EXTENSIONS = ('.py',) YAML_DOC_EXTENSIONS = ('.yml', '.yaml') DOC_EXTENSIONS = PYTHON_DOC_EXTENSIONS + YAML_DOC_EXTENSIONS DEFAULT_BECOME_PASS = None DEFAULT_PASSWORD_CHARS = to_text(ascii_letters + digits + ".,:-_", errors='strict') # characters included in auto-generated passwords DEFAULT_REMOTE_PASS = None DEFAULT_SUBSET = None # FIXME: expand to other plugins, but never doc fragments CONFIGURABLE_PLUGINS = ('become', 'cache', 'callback', 'cliconf', 'connection', 'httpapi', 'inventory', 'lookup', 'netconf', 'shell', 'vars') # NOTE: always update the docs/docsite/Makefile to match DOCUMENTABLE_PLUGINS = CONFIGURABLE_PLUGINS + ('module', 'strategy', 'test', 'filter') IGNORE_FILES = ("COPYING", "CONTRIBUTING", "LICENSE", "README", "VERSION", "GUIDELINES", "MANIFEST", "Makefile") # ignore during module search INTERNAL_RESULT_KEYS = ('add_host', 'add_group') LOCALHOST = ('127.0.0.1', 'localhost', '::1') MODULE_REQUIRE_ARGS = tuple(add_internal_fqcns(('command', 'win_command', 'ansible.windows.win_command', 'shell', 'win_shell', 'ansible.windows.win_shell', 'raw', 'script'))) MODULE_NO_JSON = tuple(add_internal_fqcns(('command', 'win_command', 'ansible.windows.win_command', 'shell', 'win_shell', 'ansible.windows.win_shell', 'raw'))) RESTRICTED_RESULT_KEYS = ('ansible_rsync_path', 'ansible_playbook_python', 'ansible_facts') SYNTHETIC_COLLECTIONS = ('ansible.builtin', 'ansible.legacy') TREE_DIR = None VAULT_VERSION_MIN = 1.0 VAULT_VERSION_MAX = 1.0 # This matches a string that cannot be used as a valid python variable name i.e 'not-valid', 'not!valid@either' '1_nor_This' INVALID_VARIABLE_NAMES = re.compile(r'^[\d\W]|[^\w]') # FIXME: remove once play_context mangling is removed # the magic variable mapping dictionary below is used to translate # host/inventory variables to fields in the PlayContext # object. The dictionary values are tuples, to account for aliases # in variable names. 
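# Editor's sketch (not shipped code): one plausible way a tuple-valued alias
# mapping such as MAGIC_VARIABLE_MAPPING below can be resolved against host
# vars. The helper name _first_match is hypothetical; the real precedence
# handling lives in PlayContext and may differ in ordering details.
#
#     def _first_match(aliases, hostvars):
#         for alias in aliases:
#             if alias in hostvars:
#                 return hostvars[alias]
#         return None
#
#     _first_match(('ansible_ssh_user', 'ansible_user'), {'ansible_user': 'deploy'})
#     # -> 'deploy'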
COMMON_CONNECTION_VARS = frozenset(('ansible_connection', 'ansible_host', 'ansible_user', 'ansible_shell_executable', 'ansible_port', 'ansible_pipelining', 'ansible_password', 'ansible_timeout', 'ansible_shell_type', 'ansible_module_compression', 'ansible_private_key_file')) MAGIC_VARIABLE_MAPPING = dict( # base connection=('ansible_connection', ), module_compression=('ansible_module_compression', ), shell=('ansible_shell_type', ), executable=('ansible_shell_executable', ), # connection common remote_addr=('ansible_ssh_host', 'ansible_host'), remote_user=('ansible_ssh_user', 'ansible_user'), password=('ansible_ssh_pass', 'ansible_password'), port=('ansible_ssh_port', 'ansible_port'), pipelining=('ansible_ssh_pipelining', 'ansible_pipelining'), timeout=('ansible_ssh_timeout', 'ansible_timeout'), private_key_file=('ansible_ssh_private_key_file', 'ansible_private_key_file'), # networking modules network_os=('ansible_network_os', ), connection_user=('ansible_connection_user',), # ssh TODO: remove ssh_executable=('ansible_ssh_executable', ), ssh_common_args=('ansible_ssh_common_args', ), sftp_extra_args=('ansible_sftp_extra_args', ), scp_extra_args=('ansible_scp_extra_args', ), ssh_extra_args=('ansible_ssh_extra_args', ), ssh_transfer_method=('ansible_ssh_transfer_method', ), # docker TODO: remove docker_extra_args=('ansible_docker_extra_args', ), # become become=('ansible_become', ), become_method=('ansible_become_method', ), become_user=('ansible_become_user', ), become_pass=('ansible_become_password', 'ansible_become_pass'), become_exe=('ansible_become_exe', ), become_flags=('ansible_become_flags', ), ) # POPULATE SETTINGS FROM CONFIG ### config = ConfigManager() # Generate constants from config for setting in config.get_configuration_definitions(): set_constant(setting, config.get_config_value(setting, variables=vars())) for warn in config.WARNINGS: _warning(warn) ansible-core-2.16.3/lib/ansible/context.py0000644000000000000000000000374214556006441017140 0ustar00rootroot# Copyright: (c) 2018, Toshio Kuratomi # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type """ Context of the running Ansible. In the future we *may* create Context objects to allow running multiple Ansible plays in parallel with different contexts but that is currently out of scope as the Ansible library is just for running the ansible command line tools. These APIs are still in flux so do not use them unless you are willing to update them with every Ansible release """ from collections.abc import Mapping, Set from ansible.module_utils.common.collections import is_sequence from ansible.utils.context_objects import CLIArgs, GlobalCLIArgs __all__ = ('CLIARGS',) # Note: this is not the singleton version. The Singleton is only created once the program has # actually parsed the args CLIARGS = CLIArgs({}) # This should be called immediately after cli_args are processed (parsed, validated, and any # normalization performed on them). 
No other code should call it def _init_global_context(cli_args): """Initialize the global context objects""" global CLIARGS CLIARGS = GlobalCLIArgs.from_options(cli_args) def cliargs_deferred_get(key, default=None, shallowcopy=False): """Closure over getting a key from CLIARGS with shallow copy functionality Primarily used in ``FieldAttribute`` where we need to defer setting the default until after the CLI arguments have been parsed This function is not directly bound to ``CliArgs`` so that it works with ``CLIARGS`` being replaced """ def inner(): value = CLIARGS.get(key, default=default) if not shallowcopy: return value elif is_sequence(value): return value[:] elif isinstance(value, (Mapping, Set)): return value.copy() return value return inner ansible-core-2.16.3/lib/ansible/errors/0000755000000000000000000000000014556006441016410 5ustar00rootrootansible-core-2.16.3/lib/ansible/errors/__init__.py0000644000000000000000000003474714556006441020540 0ustar00rootroot# (c) 2012-2014, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import re import traceback from collections.abc import Sequence from ansible.errors.yaml_strings import ( YAML_COMMON_DICT_ERROR, YAML_COMMON_LEADING_TAB_ERROR, YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR, YAML_COMMON_UNBALANCED_QUOTES_ERROR, YAML_COMMON_UNQUOTED_COLON_ERROR, YAML_COMMON_UNQUOTED_VARIABLE_ERROR, YAML_POSITION_DETAILS, YAML_AND_SHORTHAND_ERROR, ) from ansible.module_utils.common.text.converters import to_native, to_text class AnsibleError(Exception): ''' This is the base class for all errors raised from Ansible code, and can be instantiated with two optional parameters beyond the error message to control whether detailed information is displayed when the error occurred while parsing a data file of some kind. Usage: raise AnsibleError('some message here', obj=obj, show_content=True) Where "obj" is some subclass of ansible.parsing.yaml.objects.AnsibleBaseYAMLObject, which should be returned by the DataLoader() class. ''' def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None): super(AnsibleError, self).__init__(message) self._show_content = show_content self._suppress_extended_error = suppress_extended_error self._message = to_native(message) self.obj = obj self.orig_exc = orig_exc @property def message(self): # we import this here to prevent an import loop problem, # since the objects code also imports ansible.errors from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject message = [self._message] if isinstance(self.obj, AnsibleBaseYAMLObject): extended_error = self._get_extended_error() if extended_error and not self._suppress_extended_error: message.append( '\n\n%s' % to_native(extended_error) ) elif self.orig_exc: message.append('. 
%s' % to_native(self.orig_exc)) return ''.join(message) @message.setter def message(self, val): self._message = val def __str__(self): return self.message def __repr__(self): return self.message def _get_error_lines_from_file(self, file_name, line_number): ''' Returns the line in the file which corresponds to the reported error location, as well as the line preceding it (if the error did not occur on the first line), to provide context to the error. ''' target_line = '' prev_line = '' with open(file_name, 'r') as f: lines = f.readlines() # In case of a YAML loading error, PyYAML will report the very last line # as the location of the error. Avoid an index error here in order to # return a helpful message. file_length = len(lines) if line_number >= file_length: line_number = file_length - 1 # If target_line contains only whitespace, move backwards until # actual code is found. If there are several empty lines after target_line, # the error lines would just be blank, which is not very helpful. target_line = lines[line_number] while not target_line.strip(): line_number -= 1 target_line = lines[line_number] if line_number > 0: prev_line = lines[line_number - 1] return (target_line, prev_line) def _get_extended_error(self): ''' Given an object reporting the location of the exception in a file, return detailed information regarding it including: * the line which caused the error as well as the one preceding it * causes and suggested remedies for common syntax errors If this error was created with show_content=False, the reporting of content is suppressed, as the file contents may be sensitive (ie. vault data). ''' error_message = '' try: (src_file, line_number, col_number) = self.obj.ansible_pos error_message += YAML_POSITION_DETAILS % (src_file, line_number, col_number) if src_file not in ('', '') and self._show_content: (target_line, prev_line) = self._get_error_lines_from_file(src_file, line_number - 1) target_line = to_text(target_line) prev_line = to_text(prev_line) if target_line: stripped_line = target_line.replace(" ", "") # Check for k=v syntax in addition to YAML syntax and set the appropriate error position, # arrow index if re.search(r'\w+(\s+)?=(\s+)?[\w/-]+', prev_line): error_position = prev_line.rstrip().find('=') arrow_line = (" " * error_position) + "^ here" error_message = YAML_POSITION_DETAILS % (src_file, line_number - 1, error_position + 1) error_message += "\nThe offending line appears to be:\n\n%s\n%s\n\n" % (prev_line.rstrip(), arrow_line) error_message += YAML_AND_SHORTHAND_ERROR else: arrow_line = (" " * (col_number - 1)) + "^ here" error_message += "\nThe offending line appears to be:\n\n%s\n%s\n%s\n" % (prev_line.rstrip(), target_line.rstrip(), arrow_line) # TODO: There may be cases where there is a valid tab in a line that has other errors. 
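# (Editor's elaboration on the TODO above: the check below fires on any tab
# anywhere in target_line, even one legitimately embedded inside a quoted
# value, not only on tabs used as leading indentation.)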
if '\t' in target_line: error_message += YAML_COMMON_LEADING_TAB_ERROR # common error/remediation checking here: # check for unquoted vars starting lines if ('{{' in target_line and '}}' in target_line) and ('"{{' not in target_line or "'{{" not in target_line): error_message += YAML_COMMON_UNQUOTED_VARIABLE_ERROR # check for common dictionary mistakes elif ":{{" in stripped_line and "}}" in stripped_line: error_message += YAML_COMMON_DICT_ERROR # check for common unquoted colon mistakes elif (len(target_line) and len(target_line) > 1 and len(target_line) > col_number and target_line[col_number] == ":" and target_line.count(':') > 1): error_message += YAML_COMMON_UNQUOTED_COLON_ERROR # otherwise, check for some common quoting mistakes else: # FIXME: This needs to split on the first ':' to account for modules like lineinfile # that may have lines that contain legitimate colons, e.g., line: 'i ALL= (ALL) NOPASSWD: ALL' # and throw off the quote matching logic. parts = target_line.split(":") if len(parts) > 1: middle = parts[1].strip() match = False unbalanced = False if middle.startswith("'") and not middle.endswith("'"): match = True elif middle.startswith('"') and not middle.endswith('"'): match = True if (len(middle) > 0 and middle[0] in ['"', "'"] and middle[-1] in ['"', "'"] and target_line.count("'") > 2 or target_line.count('"') > 2): unbalanced = True if match: error_message += YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR if unbalanced: error_message += YAML_COMMON_UNBALANCED_QUOTES_ERROR except (IOError, TypeError): error_message += '\n(could not open file to display line)' except IndexError: error_message += '\n(specified line no longer in file, maybe it changed?)' return error_message class AnsiblePromptInterrupt(AnsibleError): '''User interrupt''' class AnsiblePromptNoninteractive(AnsibleError): '''Unable to get user input''' class AnsibleAssertionError(AnsibleError, AssertionError): '''Invalid assertion''' pass class AnsibleOptionsError(AnsibleError): ''' bad or incomplete options passed ''' pass class AnsibleParserError(AnsibleError): ''' something was detected early that is wrong about a playbook or data file ''' pass class AnsibleInternalError(AnsibleError): ''' internal safeguards tripped, something happened in the code that should never happen ''' pass class AnsibleRuntimeError(AnsibleError): ''' ansible had a problem while running a playbook ''' pass class AnsibleModuleError(AnsibleRuntimeError): ''' a module failed somehow ''' pass class AnsibleConnectionFailure(AnsibleRuntimeError): ''' the transport / connection_plugin had a fatal error ''' pass class AnsibleAuthenticationFailure(AnsibleConnectionFailure): '''invalid username/password/key''' pass class AnsibleCallbackError(AnsibleRuntimeError): ''' a callback failure ''' pass class AnsibleTemplateError(AnsibleRuntimeError): '''A template related error''' pass class AnsibleFilterError(AnsibleTemplateError): ''' a templating failure ''' pass class AnsibleLookupError(AnsibleTemplateError): ''' a lookup failure ''' pass class AnsibleUndefinedVariable(AnsibleTemplateError): ''' a templating failure ''' pass class AnsibleFileNotFound(AnsibleRuntimeError): ''' a file missing failure ''' def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, paths=None, file_name=None): self.file_name = file_name self.paths = paths if message: message += "\n" if self.file_name: message += "Could not find or access '%s'" % to_text(self.file_name) else: message += "Could not find file" if self.paths 
and isinstance(self.paths, Sequence): searched = to_text('\n\t'.join(self.paths)) if message: message += "\n" message += "Searched in:\n\t%s" % searched message += " on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option" super(AnsibleFileNotFound, self).__init__(message=message, obj=obj, show_content=show_content, suppress_extended_error=suppress_extended_error, orig_exc=orig_exc) # These Exceptions are temporary, using them as flow control until we can get a better solution. # DO NOT USE as they will probably be removed soon. # We will port the action modules in our tree to use a context manager instead. class AnsibleAction(AnsibleRuntimeError): ''' Base Exception for Action plugin flow control ''' def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None): super(AnsibleAction, self).__init__(message=message, obj=obj, show_content=show_content, suppress_extended_error=suppress_extended_error, orig_exc=orig_exc) if result is None: self.result = {} else: self.result = result class AnsibleActionSkip(AnsibleAction): ''' an action runtime skip''' def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None): super(AnsibleActionSkip, self).__init__(message=message, obj=obj, show_content=show_content, suppress_extended_error=suppress_extended_error, orig_exc=orig_exc, result=result) self.result.update({'skipped': True, 'msg': message}) class AnsibleActionFail(AnsibleAction): ''' an action runtime failure''' def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None): super(AnsibleActionFail, self).__init__(message=message, obj=obj, show_content=show_content, suppress_extended_error=suppress_extended_error, orig_exc=orig_exc, result=result) self.result.update({'failed': True, 'msg': message, 'exception': traceback.format_exc()}) class _AnsibleActionDone(AnsibleAction): ''' an action runtime early exit''' pass class AnsiblePluginError(AnsibleError): ''' base class for Ansible plugin-related errors that do not need AnsibleError contextual data ''' def __init__(self, message=None, plugin_load_context=None): super(AnsiblePluginError, self).__init__(message) self.plugin_load_context = plugin_load_context class AnsiblePluginRemovedError(AnsiblePluginError): ''' a requested plugin has been removed ''' pass class AnsiblePluginCircularRedirect(AnsiblePluginError): '''a cycle was detected in plugin redirection''' pass class AnsibleCollectionUnsupportedVersionError(AnsiblePluginError): '''a collection is not supported by this version of Ansible''' pass class AnsibleFilterTypeError(AnsibleTemplateError, TypeError): ''' a Jinja filter templating failure due to bad type''' pass class AnsiblePluginNotFound(AnsiblePluginError): ''' Indicates we did not find an Ansible plugin ''' pass ansible-core-2.16.3/lib/ansible/errors/yaml_strings.py0000644000000000000000000000754614556006441021511 0ustar00rootroot# (c) 2012-2014, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type __all__ = [ 'YAML_SYNTAX_ERROR', 'YAML_POSITION_DETAILS', 'YAML_COMMON_DICT_ERROR', 'YAML_COMMON_UNQUOTED_VARIABLE_ERROR', 'YAML_COMMON_UNQUOTED_COLON_ERROR', 'YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR', 'YAML_COMMON_UNBALANCED_QUOTES_ERROR', ] YAML_SYNTAX_ERROR = """\ Syntax Error while loading YAML. %s""" YAML_POSITION_DETAILS = """\ The error appears to be in '%s': line %s, column %s, but may be elsewhere in the file depending on the exact syntax problem. """ YAML_COMMON_DICT_ERROR = """\ This one looks easy to fix. YAML thought it was looking for the start of a hash/dictionary and was confused to see a second "{". Most likely this was meant to be an ansible template evaluation instead, so we have to give the parser a small hint that we wanted a string instead. The solution here is to just quote the entire value. For instance, if the original line was: app_path: {{ base_path }}/foo It should be written as: app_path: "{{ base_path }}/foo" """ YAML_COMMON_UNQUOTED_VARIABLE_ERROR = """\ We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance: with_items: - {{ foo }} Should be written as: with_items: - "{{ foo }}" """ YAML_COMMON_UNQUOTED_COLON_ERROR = """\ This one looks easy to fix. There seems to be an extra unquoted colon in the line and this is confusing the parser. It was only expecting to find one free colon. The solution is just add some quotes around the colon, or quote the entire line after the first colon. For instance, if the original line was: copy: src=file.txt dest=/path/filename:with_colon.txt It can be written as: copy: src=file.txt dest='/path/filename:with_colon.txt' Or: copy: 'src=file.txt dest=/path/filename:with_colon.txt' """ YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR = """\ This one looks easy to fix. It seems that there is a value started with a quote, and the YAML parser is expecting to see the line ended with the same kind of quote. For instance: when: "ok" in result.stdout Could be written as: when: '"ok" in result.stdout' Or equivalently: when: "'ok' in result.stdout" """ YAML_COMMON_UNBALANCED_QUOTES_ERROR = """\ We could be wrong, but this one looks like it might be an issue with unbalanced quotes. If starting a value with a quote, make sure the line ends with the same set of quotes. For instance this arbitrary example: foo: "bad" "wolf" Could be written as: foo: '"bad" "wolf"' """ YAML_COMMON_LEADING_TAB_ERROR = """\ There appears to be a tab character at the start of the line. YAML does not use tabs for formatting. Tabs should be replaced with spaces. For example: - name: update tooling vars: version: 1.2.3 # ^--- there is a tab there. Should be written as: - name: update tooling vars: version: 1.2.3 # ^--- all spaces here. """ YAML_AND_SHORTHAND_ERROR = """\ There appears to be both 'k=v' shorthand syntax and YAML in this task. \ Only one syntax may be used. 
""" ansible-core-2.16.3/lib/ansible/executor/0000755000000000000000000000000014556006441016732 5ustar00rootrootansible-core-2.16.3/lib/ansible/executor/__init__.py0000644000000000000000000000150114556006441021040 0ustar00rootroot# (c) 2012-2014, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type ansible-core-2.16.3/lib/ansible/executor/action_write_locks.py0000644000000000000000000000357314556006441023176 0ustar00rootroot# (c) 2016 - Red Hat, Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . from __future__ import annotations import multiprocessing.synchronize from multiprocessing import Lock from ansible.module_utils.facts.system.pkg_mgr import PKG_MGRS if 'action_write_locks' not in globals(): # Do not initialize this more than once because it seems to bash # the existing one. multiprocessing must be reloading the module # when it forks? action_write_locks: dict[str | None, multiprocessing.synchronize.Lock] = dict() # Below is a Lock for use when we weren't expecting a named module. It gets used when an action # plugin invokes a module whose name does not match with the action's name. Slightly less # efficient as all processes with unexpected module names will wait on this lock action_write_locks[None] = Lock() # These plugins are known to be called directly by action plugins with names differing from the # action plugin name. We precreate them here as an optimization. # If a list of service managers is created in the future we can do the same for them. 
mods = set(p['name'] for p in PKG_MGRS) mods.update(('copy', 'file', 'setup', 'slurp', 'stat')) for mod_name in mods: action_write_locks[mod_name] = Lock() ansible-core-2.16.3/lib/ansible/executor/discovery/0000755000000000000000000000000014556006441020741 5ustar00rootrootansible-core-2.16.3/lib/ansible/executor/discovery/__init__.py0000644000000000000000000000000014556006441023040 0ustar00rootrootansible-core-2.16.3/lib/ansible/executor/discovery/python_target.py0000644000000000000000000000232214556006441024201 0ustar00rootroot# Copyright: (c) 2018 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # FUTURE: this could be swapped out for our bundled version of distro to move more complete platform # logic to the targets, so long as we maintain Py2.6 compat and don't need to do any kind of script assembly from __future__ import (absolute_import, division, print_function) __metaclass__ = type import json import platform import io import os def read_utf8_file(path, encoding='utf-8'): if not os.access(path, os.R_OK): return None with io.open(path, 'r', encoding=encoding) as fd: content = fd.read() return content def get_platform_info(): result = dict(platform_dist_result=[]) if hasattr(platform, 'dist'): result['platform_dist_result'] = platform.dist() osrelease_content = read_utf8_file('/etc/os-release') # try to fall back to /usr/lib/os-release if not osrelease_content: osrelease_content = read_utf8_file('/usr/lib/os-release') result['osrelease_content'] = osrelease_content return result def main(): info = get_platform_info() print(json.dumps(info)) if __name__ == '__main__': main() ansible-core-2.16.3/lib/ansible/executor/interpreter_discovery.py0000644000000000000000000002332714556006441023745 0ustar00rootroot# Copyright: (c) 2018 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import bisect import json import pkgutil import re from ansible import constants as C from ansible.module_utils.common.text.converters import to_native, to_text from ansible.module_utils.distro import LinuxDistribution from ansible.utils.display import Display from ansible.utils.plugin_docs import get_versioned_doclink from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils.facts.system.distribution import Distribution from traceback import format_exc OS_FAMILY_LOWER = {k.lower(): v.lower() for k, v in Distribution.OS_FAMILY.items()} display = Display() foundre = re.compile(r'(?s)PLATFORM[\r\n]+(.*)FOUND(.*)ENDFOUND') class InterpreterDiscoveryRequiredError(Exception): def __init__(self, message, interpreter_name, discovery_mode): super(InterpreterDiscoveryRequiredError, self).__init__(message) self.interpreter_name = interpreter_name self.discovery_mode = discovery_mode def __str__(self): return self.message def __repr__(self): # TODO: proper repr impl return self.message def discover_interpreter(action, interpreter_name, discovery_mode, task_vars): # interpreter discovery is a 2-step process with the target. First, we use a simple shell-agnostic bootstrap to # get the system type from uname, and find any random Python that can get us the info we need. For supported # target OS types, we'll dispatch a Python script that calls platform.dist() (for older platforms, where available) # and brings back /etc/os-release (if present). 
The proper Python path is looked up in a table of known # distros/versions with included Pythons; if nothing is found, depending on the discovery mode, either the # default fallback of /usr/bin/python is used (if we know it's there), or discovery fails. # FUTURE: add logical equivalence for "python3" in the case of py3-only modules? if interpreter_name != 'python': raise ValueError('Interpreter discovery not supported for {0}'.format(interpreter_name)) host = task_vars.get('inventory_hostname', 'unknown') res = None platform_type = 'unknown' found_interpreters = [u'/usr/bin/python'] # fallback value is_auto_legacy = discovery_mode.startswith('auto_legacy') is_silent = discovery_mode.endswith('_silent') try: platform_python_map = C.config.get_config_value('_INTERPRETER_PYTHON_DISTRO_MAP', variables=task_vars) bootstrap_python_list = C.config.get_config_value('INTERPRETER_PYTHON_FALLBACK', variables=task_vars) display.vvv(msg=u"Attempting {0} interpreter discovery".format(interpreter_name), host=host) # not all command -v impls accept a list of commands, so we have to call it once per python command_list = ["command -v '%s'" % py for py in bootstrap_python_list] shell_bootstrap = "echo PLATFORM; uname; echo FOUND; {0}; echo ENDFOUND".format('; '.join(command_list)) # FUTURE: in most cases we probably don't want to use become, but maybe sometimes we do? res = action._low_level_execute_command(shell_bootstrap, sudoable=False) raw_stdout = res.get('stdout', u'') match = foundre.match(raw_stdout) if not match: display.debug(u'raw interpreter discovery output: {0}'.format(raw_stdout), host=host) raise ValueError('unexpected output from Python interpreter discovery') platform_type = match.groups()[0].lower().strip() found_interpreters = [interp.strip() for interp in match.groups()[1].splitlines() if interp.startswith('/')] display.debug(u"found interpreters: {0}".format(found_interpreters), host=host) if not found_interpreters: if not is_silent: action._discovery_warnings.append(u'No python interpreters found for ' u'host {0} (tried {1})'.format(host, bootstrap_python_list)) # this is lame, but returning None or throwing an exception is uglier return u'/usr/bin/python' if platform_type != 'linux': raise NotImplementedError('unsupported platform for extended discovery: {0}'.format(to_native(platform_type))) platform_script = pkgutil.get_data('ansible.executor.discovery', 'python_target.py') # FUTURE: respect pipelining setting instead of just if the connection supports it? if action._connection.has_pipelining: res = action._low_level_execute_command(found_interpreters[0], sudoable=False, in_data=platform_script) else: # FUTURE: implement on-disk case (via script action or ?) 
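# (Editor's note: without pipelining there is no way to stream
# python_target.py to the found interpreter over stdin, and writing it to a
# remote temp file first is exactly the unimplemented on-disk case noted
# above, hence the bail-out below.)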
raise NotImplementedError('pipelining support required for extended interpreter discovery') platform_info = json.loads(res.get('stdout')) distro, version = _get_linux_distro(platform_info) if not distro or not version: raise NotImplementedError('unable to get Linux distribution/version info') family = OS_FAMILY_LOWER.get(distro.lower().strip()) version_map = platform_python_map.get(distro.lower().strip()) or platform_python_map.get(family) if not version_map: raise NotImplementedError('unsupported Linux distribution: {0}'.format(distro)) platform_interpreter = to_text(_version_fuzzy_match(version, version_map), errors='surrogate_or_strict') # provide a transition period for hosts that were using /usr/bin/python previously (but shouldn't have been) if is_auto_legacy: if platform_interpreter != u'/usr/bin/python' and u'/usr/bin/python' in found_interpreters: if not is_silent: action._discovery_warnings.append( u"Distribution {0} {1} on host {2} should use {3}, but is using " u"/usr/bin/python for backward compatibility with prior Ansible releases. " u"See {4} for more information" .format(distro, version, host, platform_interpreter, get_versioned_doclink('reference_appendices/interpreter_discovery.html'))) return u'/usr/bin/python' if platform_interpreter not in found_interpreters: if platform_interpreter not in bootstrap_python_list: # sanity check to make sure we looked for it if not is_silent: action._discovery_warnings \ .append(u"Platform interpreter {0} on host {1} is missing from bootstrap list" .format(platform_interpreter, host)) if not is_silent: action._discovery_warnings \ .append(u"Distribution {0} {1} on host {2} should use {3}, but is using {4}, since the " u"discovered platform python interpreter was not present. See {5} " u"for more information." .format(distro, version, host, platform_interpreter, found_interpreters[0], get_versioned_doclink('reference_appendices/interpreter_discovery.html'))) return found_interpreters[0] return platform_interpreter except NotImplementedError as ex: display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host) except Exception as ex: if not is_silent: display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex))) display.debug(msg=u'Interpreter discovery traceback:\n{0}'.format(to_text(format_exc())), host=host) if res and res.get('stderr'): display.vvv(msg=u'Interpreter discovery remote stderr:\n{0}'.format(to_text(res.get('stderr'))), host=host) if not is_silent: action._discovery_warnings \ .append(u"Platform {0} on host {1} is using the discovered Python interpreter at {2}, but future installation of " u"another Python interpreter could change the meaning of that path. See {3} " u"for more information." 
.format(platform_type, host, found_interpreters[0], get_versioned_doclink('reference_appendices/interpreter_discovery.html'))) return found_interpreters[0] def _get_linux_distro(platform_info): dist_result = platform_info.get('platform_dist_result', []) if len(dist_result) == 3 and any(dist_result): return dist_result[0], dist_result[1] osrelease_content = platform_info.get('osrelease_content') if not osrelease_content: return u'', u'' osr = LinuxDistribution._parse_os_release_content(osrelease_content) return osr.get('id', u''), osr.get('version_id', u'') def _version_fuzzy_match(version, version_map): # try exact match first res = version_map.get(version) if res: return res sorted_looseversions = sorted([LooseVersion(v) for v in version_map.keys()]) find_looseversion = LooseVersion(version) # slot match; return nearest previous version we're newer than kpos = bisect.bisect(sorted_looseversions, find_looseversion) if kpos == 0: # older than everything in the list, return the oldest version # TODO: warning-worthy? return version_map.get(sorted_looseversions[0].vstring) # TODO: is "past the end of the list" warning-worthy too (at least if it's not a major version match)? # return the next-oldest entry that we're newer than... return version_map.get(sorted_looseversions[kpos - 1].vstring) ansible-core-2.16.3/lib/ansible/executor/module_common.py0000644000000000000000000020045214556006441022144 0ustar00rootroot# (c) 2013-2014, Michael DeHaan # (c) 2015 Toshio Kuratomi # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <https://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ast import base64 import datetime import json import os import shlex import time import zipfile import re import pkgutil from ast import AST, Import, ImportFrom from io import BytesIO from ansible.release import __version__, __author__ from ansible import constants as C from ansible.errors import AnsibleError from ansible.executor.interpreter_discovery import InterpreterDiscoveryRequiredError from ansible.executor.powershell import module_manifest as ps_manifest from ansible.module_utils.common.json import AnsibleJSONEncoder from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native from ansible.plugins.loader import module_utils_loader from ansible.utils.collection_loader._collection_finder import _get_collection_metadata, _nested_dict_get # Must import the action_write_locks module and use the dict from there # If we import the action_write_locks dict directly then we end up binding a # variable to the object and then it never gets updated. 
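# Editor's illustration of that binding pitfall (not shipped code; pkg.mod
# and flag are stand-ins):
#
#     from pkg.mod import flag   # copies the object bound at import time
#     import pkg.mod as mod
#     mod.flag                   # attribute access always sees the current binding
#
# which is why the module object itself is imported just below.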
from ansible.executor import action_write_locks from ansible.utils.display import Display from collections import namedtuple import importlib.util import importlib.machinery display = Display() ModuleUtilsProcessEntry = namedtuple('ModuleUtilsProcessEntry', ['name_parts', 'is_ambiguous', 'has_redirected_child', 'is_optional']) REPLACER = b"#<>" REPLACER_VERSION = b"\"<>\"" REPLACER_COMPLEX = b"\"<>\"" REPLACER_WINDOWS = b"# POWERSHELL_COMMON" REPLACER_JSONARGS = b"<>" REPLACER_SELINUX = b"<>" # We could end up writing out parameters with unicode characters so we need to # specify an encoding for the python source file ENCODING_STRING = u'# -*- coding: utf-8 -*-' b_ENCODING_STRING = b'# -*- coding: utf-8 -*-' # module_common is relative to module_utils, so fix the path _MODULE_UTILS_PATH = os.path.join(os.path.dirname(__file__), '..', 'module_utils') # ****************************************************************************** ANSIBALLZ_TEMPLATE = u'''%(shebang)s %(coding)s _ANSIBALLZ_WRAPPER = True # For test-module.py script to tell this is a ANSIBALLZ_WRAPPER # This code is part of Ansible, but is an independent component. # The code in this particular templatable string, and this templatable string # only, is BSD licensed. Modules which end up using this snippet, which is # dynamically combined together by Ansible still belong to the author of the # module, and they may assign their own license to the complete work. # # Copyright (c), James Cammarata, 2016 # Copyright (c), Toshio Kuratomi, 2016 # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. def _ansiballz_main(): import os import os.path # Access to the working directory is required by Python when using pipelining, as well as for the coverage module. # Some platforms, such as macOS, may not allow querying the working directory when using become to drop privileges. try: os.getcwd() except OSError: try: os.chdir(os.path.expanduser('~')) except OSError: os.chdir('/') %(rlimit)s import sys import __main__ # For some distros and python versions we pick up this script in the temporary # directory. This leads to problems when the ansible module masks a python # library that another import needs. We have not figured out what about the # specific distros and python versions causes this to behave differently. 
# # Tested distros: # Fedora23 with python3.4 Works # Ubuntu15.10 with python2.7 Works # Ubuntu15.10 with python3.4 Fails without this # Ubuntu16.04.1 with python3.5 Fails without this # To test on another platform: # * use the copy module (since this shadows the stdlib copy module) # * Turn off pipelining # * Make sure that the destination file does not exist # * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m' # This will traceback in shutil. Looking at the complete traceback will show # that shutil is importing copy which finds the ansible module instead of the # stdlib module scriptdir = None try: scriptdir = os.path.dirname(os.path.realpath(__main__.__file__)) except (AttributeError, OSError): # Some platforms don't set __file__ when reading from stdin # OSX raises OSError if using abspath() in a directory we don't have # permission to read (realpath calls abspath) pass # Strip cwd from sys.path to avoid potential permissions issues excludes = set(('', '.', scriptdir)) sys.path = [p for p in sys.path if p not in excludes] import base64 import runpy import shutil import tempfile import zipfile if sys.version_info < (3,): PY3 = False else: PY3 = True ZIPDATA = """%(zipdata)s""" # Note: temp_path isn't needed once we switch to zipimport def invoke_module(modlib_path, temp_path, json_params): # When installed via setuptools (including python setup.py install), # ansible may be installed with an easy-install.pth file. That file # may load the system-wide install of ansible rather than the one in # the module. sitecustomize is the only way to override that setting. z = zipfile.ZipFile(modlib_path, mode='a') # py3: modlib_path will be text, py2: it's bytes. Need bytes at the end sitecustomize = u'import sys\\nsys.path.insert(0,"%%s")\\n' %% modlib_path sitecustomize = sitecustomize.encode('utf-8') # Use a ZipInfo to work around zipfile limitation on hosts with # clocks set to a pre-1980 year (for instance, Raspberry Pi) zinfo = zipfile.ZipInfo() zinfo.filename = 'sitecustomize.py' zinfo.date_time = %(date_time)s z.writestr(zinfo, sitecustomize) z.close() # Put the zipped up module_utils we got from the controller first in the python path so that we # can monkeypatch the right basic sys.path.insert(0, modlib_path) # Monkeypatch the parameters into basic from ansible.module_utils import basic basic._ANSIBLE_ARGS = json_params %(coverage)s # Run the module! By importing it as '__main__', it thinks it is executing as a script runpy.run_module(mod_name='%(module_fqn)s', init_globals=dict(_module_fqn='%(module_fqn)s', _modlib_path=modlib_path), run_name='__main__', alter_sys=True) # Ansible modules must exit themselves print('{"msg": "New-style module did not handle its own exit", "failed": true}') sys.exit(1) def debug(command, zipped_mod, json_params): # The code here normally doesn't run. It's only used for debugging on the # remote machine. # # The subcommands in this function make it easier to debug ansiballz # modules. Here's the basic steps: # # Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv # to save the module file remotely:: # $ ANSIBLE_KEEP_REMOTE_FILES=1 ansible host1 -m ping -a 'data=october' -vvv # # Part of the verbose output will tell you where on the remote machine the # module was written to:: # [...] 
# SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o # PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o # ControlPath=/home/badger/.ansible/cp/ansible-ssh-%%h-%%p-%%r -tt rhel7 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 # LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping'"'"'' # [...] # # Login to the remote machine and run the module file via from the previous # step with the explode subcommand to extract the module payload into # source files:: # $ ssh host1 # $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping explode # Module expanded into: # /home/badger/.ansible/tmp/ansible-tmp-1461173408.08-279692652635227/ansible # # You can now edit the source files to instrument the code or experiment with # different parameter values. When you're ready to run the code you've modified # (instead of the code from the actual zipped module), use the execute subcommand like this:: # $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping execute # Okay to use __file__ here because we're running from a kept file basedir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'debug_dir') args_path = os.path.join(basedir, 'args') if command == 'explode': # transform the ZIPDATA into an exploded directory of code and then # print the path to the code. This is an easy way for people to look # at the code on the remote machine for debugging it in that # environment z = zipfile.ZipFile(zipped_mod) for filename in z.namelist(): if filename.startswith('/'): raise Exception('Something wrong with this module zip file: should not contain absolute paths') dest_filename = os.path.join(basedir, filename) if dest_filename.endswith(os.path.sep) and not os.path.exists(dest_filename): os.makedirs(dest_filename) else: directory = os.path.dirname(dest_filename) if not os.path.exists(directory): os.makedirs(directory) f = open(dest_filename, 'wb') f.write(z.read(filename)) f.close() # write the args file f = open(args_path, 'wb') f.write(json_params) f.close() print('Module expanded into:') print('%%s' %% basedir) exitcode = 0 elif command == 'execute': # Execute the exploded code instead of executing the module from the # embedded ZIPDATA. This allows people to easily run their modified # code on the remote machine to see how changes will affect it. # Set pythonpath to the debug dir sys.path.insert(0, basedir) # read in the args file which the user may have modified with open(args_path, 'rb') as f: json_params = f.read() # Monkeypatch the parameters into basic from ansible.module_utils import basic basic._ANSIBLE_ARGS = json_params # Run the module! By importing it as '__main__', it thinks it is executing as a script runpy.run_module(mod_name='%(module_fqn)s', init_globals=None, run_name='__main__', alter_sys=True) # Ansible modules must exit themselves print('{"msg": "New-style module did not handle its own exit", "failed": true}') sys.exit(1) else: print('WARNING: Unknown debug command. Doing nothing.') exitcode = 0 return exitcode # # See comments in the debug() method for information on debugging # ANSIBALLZ_PARAMS = %(params)s if PY3: ANSIBALLZ_PARAMS = ANSIBALLZ_PARAMS.encode('utf-8') try: # There's a race condition with the controller removing the # remote_tmpdir and this module executing under async. 
So we cannot # store this in remote_tmpdir (use system tempdir instead) # Only need to use [ansible_module]_payload_ in the temp_path until we move to zipimport # (this helps ansible-test produce coverage stats) temp_path = tempfile.mkdtemp(prefix='ansible_%(ansible_module)s_payload_') zipped_mod = os.path.join(temp_path, 'ansible_%(ansible_module)s_payload.zip') with open(zipped_mod, 'wb') as modlib: modlib.write(base64.b64decode(ZIPDATA)) if len(sys.argv) == 2: exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS) else: # Note: temp_path isn't needed once we switch to zipimport invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS) finally: try: shutil.rmtree(temp_path) except (NameError, OSError): # tempdir creation probably failed pass sys.exit(exitcode) if __name__ == '__main__': _ansiballz_main() ''' ANSIBALLZ_COVERAGE_TEMPLATE = ''' os.environ['COVERAGE_FILE'] = '%(coverage_output)s=python-%%s=coverage' %% '.'.join(str(v) for v in sys.version_info[:2]) import atexit try: import coverage except ImportError: print('{"msg": "Could not import `coverage` module.", "failed": true}') sys.exit(1) cov = coverage.Coverage(config_file='%(coverage_config)s') def atexit_coverage(): cov.stop() cov.save() atexit.register(atexit_coverage) cov.start() ''' ANSIBALLZ_COVERAGE_CHECK_TEMPLATE = ''' try: if PY3: import importlib.util if importlib.util.find_spec('coverage') is None: raise ImportError else: import imp imp.find_module('coverage') except ImportError: print('{"msg": "Could not find `coverage` module.", "failed": true}') sys.exit(1) ''' ANSIBALLZ_RLIMIT_TEMPLATE = ''' import resource existing_soft, existing_hard = resource.getrlimit(resource.RLIMIT_NOFILE) # adjust soft limit subject to existing hard limit requested_soft = min(existing_hard, %(rlimit_nofile)d) if requested_soft != existing_soft: try: resource.setrlimit(resource.RLIMIT_NOFILE, (requested_soft, existing_hard)) except ValueError: # some platforms (eg macOS) lie about their hard limit pass ''' def _strip_comments(source): # Strip comments and blank lines from the wrapper buf = [] for line in source.splitlines(): l = line.strip() if not l or l.startswith(u'#'): continue buf.append(line) return u'\n'.join(buf) if C.DEFAULT_KEEP_REMOTE_FILES: # Keep comments when KEEP_REMOTE_FILES is set. 
That way users will see # the comments with some nice usage instructions ACTIVE_ANSIBALLZ_TEMPLATE = ANSIBALLZ_TEMPLATE else: # ANSIBALLZ_TEMPLATE stripped of comments for smaller over the wire size ACTIVE_ANSIBALLZ_TEMPLATE = _strip_comments(ANSIBALLZ_TEMPLATE) # dirname(dirname(dirname(site-packages/ansible/executor/module_common.py) == site-packages # Do this instead of getting site-packages from distutils.sysconfig so we work when we # haven't been installed site_packages = os.path.dirname(os.path.dirname(os.path.dirname(__file__))) CORE_LIBRARY_PATH_RE = re.compile(r'%s/(?Pansible/modules/.*)\.(py|ps1)$' % re.escape(site_packages)) COLLECTION_PATH_RE = re.compile(r'/(?Pansible_collections/[^/]+/[^/]+/plugins/modules/.*)\.(py|ps1)$') # Detect new-style Python modules by looking for required imports: # import ansible_collections.[my_ns.my_col.plugins.module_utils.my_module_util] # from ansible_collections.[my_ns.my_col.plugins.module_utils import my_module_util] # import ansible.module_utils[.basic] # from ansible.module_utils[ import basic] # from ansible.module_utils[.basic import AnsibleModule] # from ..module_utils[ import basic] # from ..module_utils[.basic import AnsibleModule] NEW_STYLE_PYTHON_MODULE_RE = re.compile( # Relative imports br'(?:from +\.{2,} *module_utils.* +import |' # Collection absolute imports: br'from +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.* +import |' br'import +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.*|' # Core absolute imports br'from +ansible\.module_utils.* +import |' br'import +ansible\.module_utils\.)' ) class ModuleDepFinder(ast.NodeVisitor): def __init__(self, module_fqn, tree, is_pkg_init=False, *args, **kwargs): """ Walk the ast tree for the python module. :arg module_fqn: The fully qualified name to reach this module in dotted notation. example: ansible.module_utils.basic :arg is_pkg_init: Inform the finder it's looking at a package init (eg __init__.py) to allow relative import expansion to use the proper package level without having imported it locally first. Save submodule[.submoduleN][.identifier] into self.submodules when they are from ansible.module_utils or ansible_collections packages self.submodules will end up with tuples like: - ('ansible', 'module_utils', 'basic',) - ('ansible', 'module_utils', 'urls', 'fetch_url') - ('ansible', 'module_utils', 'database', 'postgres') - ('ansible', 'module_utils', 'database', 'postgres', 'quote') - ('ansible', 'module_utils', 'database', 'postgres', 'quote') - ('ansible_collections', 'my_ns', 'my_col', 'plugins', 'module_utils', 'foo') It's up to calling code to determine whether the final element of the tuple are module names or something else (function, class, or variable names) .. seealso:: :python3:class:`ast.NodeVisitor` """ super(ModuleDepFinder, self).__init__(*args, **kwargs) self._tree = tree # squirrel this away so we can compare node parents to it self.submodules = set() self.optional_imports = set() self.module_fqn = module_fqn self.is_pkg_init = is_pkg_init self._visit_map = { Import: self.visit_Import, ImportFrom: self.visit_ImportFrom, } self.visit(tree) def generic_visit(self, node): """Overridden ``generic_visit`` that makes some assumptions about our use case, and improves performance by calling visitors directly instead of calling ``visit`` to offload calling visitors. 
""" generic_visit = self.generic_visit visit_map = self._visit_map for field, value in ast.iter_fields(node): if isinstance(value, list): for item in value: if isinstance(item, (Import, ImportFrom)): item.parent = node visit_map[item.__class__](item) elif isinstance(item, AST): generic_visit(item) visit = generic_visit def visit_Import(self, node): """ Handle import ansible.module_utils.MODLIB[.MODLIBn] [as asname] We save these as interesting submodules when the imported library is in ansible.module_utils or ansible.collections """ for alias in node.names: if (alias.name.startswith('ansible.module_utils.') or alias.name.startswith('ansible_collections.')): py_mod = tuple(alias.name.split('.')) self.submodules.add(py_mod) # if the import's parent is the root document, it's a required import, otherwise it's optional if node.parent != self._tree: self.optional_imports.add(py_mod) self.generic_visit(node) def visit_ImportFrom(self, node): """ Handle from ansible.module_utils.MODLIB import [.MODLIBn] [as asname] Also has to handle relative imports We save these as interesting submodules when the imported library is in ansible.module_utils or ansible.collections """ # FIXME: These should all get skipped: # from ansible.executor import module_common # from ...executor import module_common # from ... import executor (Currently it gives a non-helpful error) if node.level > 0: # if we're in a package init, we have to add one to the node level (and make it none if 0 to preserve the right slicing behavior) level_slice_offset = -node.level + 1 or None if self.is_pkg_init else -node.level if self.module_fqn: parts = tuple(self.module_fqn.split('.')) if node.module: # relative import: from .module import x node_module = '.'.join(parts[:level_slice_offset] + (node.module,)) else: # relative import: from . import x node_module = '.'.join(parts[:level_slice_offset]) else: # fall back to an absolute import node_module = node.module else: # absolute import: from module import x node_module = node.module # Specialcase: six is a special case because of its # import logic py_mod = None if node.names[0].name == '_six': self.submodules.add(('_six',)) elif node_module.startswith('ansible.module_utils'): # from ansible.module_utils.MODULE1[.MODULEn] import IDENTIFIER [as asname] # from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [as asname] # from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [,IDENTIFIER] [as asname] # from ansible.module_utils import MODULE1 [,MODULEn] [as asname] py_mod = tuple(node_module.split('.')) elif node_module.startswith('ansible_collections.'): if node_module.endswith('plugins.module_utils') or '.plugins.module_utils.' in node_module: # from ansible_collections.ns.coll.plugins.module_utils import MODULE [as aname] [,MODULE2] [as aname] # from ansible_collections.ns.coll.plugins.module_utils.MODULE import IDENTIFIER [as aname] # FIXME: Unhandled cornercase (needs to be ignored): # from ansible_collections.ns.coll.plugins.[!module_utils].[FOO].plugins.module_utils import IDENTIFIER py_mod = tuple(node_module.split('.')) else: # Not from module_utils so ignore. 
For instance: # from ansible_collections.ns.coll.plugins.lookup import IDENTIFIER pass if py_mod: for alias in node.names: self.submodules.add(py_mod + (alias.name,)) # if the import's parent is the root document, it's a required import, otherwise it's optional if node.parent != self._tree: self.optional_imports.add(py_mod + (alias.name,)) self.generic_visit(node) def _slurp(path): if not os.path.exists(path): raise AnsibleError("imported module support code does not exist at %s" % os.path.abspath(path)) with open(path, 'rb') as fd: data = fd.read() return data def _get_shebang(interpreter, task_vars, templar, args=tuple(), remote_is_local=False): """ Handles the different ways ansible allows overriding the shebang target for a module. """ # FUTURE: add logical equivalence for python3 in the case of py3-only modules interpreter_name = os.path.basename(interpreter).strip() # name for interpreter var interpreter_config = u'ansible_%s_interpreter' % interpreter_name # key for config interpreter_config_key = "INTERPRETER_%s" % interpreter_name.upper() interpreter_out = None # looking for python, rest rely on matching vars if interpreter_name == 'python': # skip detection for network os execution, use playbook supplied one if possible if remote_is_local: interpreter_out = task_vars['ansible_playbook_python'] # a config def exists for this interpreter type; consult config for the value elif C.config.get_configuration_definition(interpreter_config_key): interpreter_from_config = C.config.get_config_value(interpreter_config_key, variables=task_vars) interpreter_out = templar.template(interpreter_from_config.strip()) # handle interpreter discovery if requested or an empty interpreter was provided if not interpreter_out or interpreter_out in ['auto', 'auto_legacy', 'auto_silent', 'auto_legacy_silent']: discovered_interpreter_config = u'discovered_interpreter_%s' % interpreter_name facts_from_task_vars = task_vars.get('ansible_facts', {}) if discovered_interpreter_config not in facts_from_task_vars: # interpreter discovery is desired, but has not been run for this host raise InterpreterDiscoveryRequiredError("interpreter discovery needed", interpreter_name=interpreter_name, discovery_mode=interpreter_out) else: interpreter_out = facts_from_task_vars[discovered_interpreter_config] else: raise InterpreterDiscoveryRequiredError("interpreter discovery required", interpreter_name=interpreter_name, discovery_mode='auto_legacy') elif interpreter_config in task_vars: # for non-python interpreters we consult vars for a possible direct override interpreter_out = templar.template(task_vars.get(interpreter_config).strip()) if not interpreter_out: # nothing matched (None), or someone configured an empty string or empty interpreter interpreter_out = interpreter # set shebang shebang = u'#!{0}'.format(interpreter_out) if args: shebang = shebang + u' ' + u' '.join(args) return shebang, interpreter_out class ModuleUtilLocatorBase: def __init__(self, fq_name_parts, is_ambiguous=False, child_is_redirected=False, is_optional=False): self._is_ambiguous = is_ambiguous # a child package redirection could cause intermediate package levels to be missing, eg # from ansible.module_utils.x.y.z import foo; if x.y.z.foo is redirected, we may not have packages on disk for # the intermediate packages x.y.z, so we'll need to supply empty packages for those self._child_is_redirected = child_is_redirected self._is_optional = is_optional self.found = False self.redirected = False self.fq_name_parts = fq_name_parts self.source_code = ''
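# output_path records where this util's source will land inside the payload zip; _locate fills it in once the util is found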
self.output_path = '' self.is_package = False self._collection_name = None # for ambiguous imports, we should only test for things more than one level below module_utils # this lets us detect erroneous imports and redirections earlier if is_ambiguous and len(self._get_module_utils_remainder_parts(fq_name_parts)) > 1: self.candidate_names = [fq_name_parts, fq_name_parts[:-1]] else: self.candidate_names = [fq_name_parts] @property def candidate_names_joined(self): return ['.'.join(n) for n in self.candidate_names] def _handle_redirect(self, name_parts): module_utils_relative_parts = self._get_module_utils_remainder_parts(name_parts) # only allow redirects from below module_utils; if above that, bail out (eg, parent package names) if not module_utils_relative_parts: return False try: collection_metadata = _get_collection_metadata(self._collection_name) except ValueError as ve: # collection not found or some other error related to collection load if self._is_optional: return False raise AnsibleError('error processing module_util {0} loading redirected collection {1}: {2}' .format('.'.join(name_parts), self._collection_name, to_native(ve))) routing_entry = _nested_dict_get(collection_metadata, ['plugin_routing', 'module_utils', '.'.join(module_utils_relative_parts)]) if not routing_entry: return False # FIXME: add deprecation warning support dep_or_ts = routing_entry.get('tombstone') removed = dep_or_ts is not None if not removed: dep_or_ts = routing_entry.get('deprecation') if dep_or_ts: removal_date = dep_or_ts.get('removal_date') removal_version = dep_or_ts.get('removal_version') warning_text = dep_or_ts.get('warning_text') msg = 'module_util {0} has been removed'.format('.'.join(name_parts)) if warning_text: msg += ' ({0})'.format(warning_text) else: msg += '.'
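# hand the message to the display layer; for tombstoned (removed) entries display.deprecated raises rather than merely warning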
display.deprecated(msg, removal_version, removed, removal_date, self._collection_name) if 'redirect' in routing_entry: self.redirected = True source_pkg = '.'.join(name_parts) self.is_package = True # treat all redirects as packages redirect_target_pkg = routing_entry['redirect'] # expand FQCN redirects if not redirect_target_pkg.startswith('ansible_collections'): split_fqcn = redirect_target_pkg.split('.') if len(split_fqcn) < 3: raise Exception('invalid redirect for {0}: {1}'.format(source_pkg, redirect_target_pkg)) # assume it's an FQCN, expand it redirect_target_pkg = 'ansible_collections.{0}.{1}.plugins.module_utils.{2}'.format( split_fqcn[0], # ns split_fqcn[1], # coll '.'.join(split_fqcn[2:]) # sub-module_utils remainder ) display.vvv('redirecting module_util {0} to {1}'.format(source_pkg, redirect_target_pkg)) self.source_code = self._generate_redirect_shim_source(source_pkg, redirect_target_pkg) return True return False def _get_module_utils_remainder_parts(self, name_parts): # subclasses should override to return the name parts after module_utils return [] def _get_module_utils_remainder(self, name_parts): # return the remainder parts as a package string return '.'.join(self._get_module_utils_remainder_parts(name_parts)) def _find_module(self, name_parts): return False def _locate(self, redirect_first=True): for candidate_name_parts in self.candidate_names: if redirect_first and self._handle_redirect(candidate_name_parts): break if self._find_module(candidate_name_parts): break if not redirect_first and self._handle_redirect(candidate_name_parts): break else: # didn't find what we were looking for- last chance for packages whose parents were redirected if self._child_is_redirected: # make fake packages self.is_package = True self.source_code = '' else: # nope, just bail return if self.is_package: path_parts = candidate_name_parts + ('__init__',) else: path_parts = candidate_name_parts self.found = True self.output_path = os.path.join(*path_parts) + '.py' self.fq_name_parts = candidate_name_parts def _generate_redirect_shim_source(self, fq_source_module, fq_target_module): return """ import sys import {1} as mod sys.modules['{0}'] = mod """.format(fq_source_module, fq_target_module) # FIXME: add __repr__ impl class LegacyModuleUtilLocator(ModuleUtilLocatorBase): def __init__(self, fq_name_parts, is_ambiguous=False, mu_paths=None, child_is_redirected=False): super(LegacyModuleUtilLocator, self).__init__(fq_name_parts, is_ambiguous, child_is_redirected) if fq_name_parts[0:2] != ('ansible', 'module_utils'): raise Exception('this class can only locate from ansible.module_utils, got {0}'.format(fq_name_parts)) if fq_name_parts[2] == 'six': # FIXME: handle the ansible.module_utils.six._six case with a redirect or an internal _six attr on six itself? 
# six creates its submodules at runtime; convert all these to just 'ansible.module_utils.six' fq_name_parts = ('ansible', 'module_utils', 'six') self.candidate_names = [fq_name_parts] self._mu_paths = mu_paths self._collection_name = 'ansible.builtin' # legacy module utils always look in ansible.builtin for redirects self._locate(redirect_first=False) # let local stuff override redirects for legacy def _get_module_utils_remainder_parts(self, name_parts): return name_parts[2:] # eg, foo.bar for ansible.module_utils.foo.bar def _find_module(self, name_parts): rel_name_parts = self._get_module_utils_remainder_parts(name_parts) # no redirection; try to find the module if len(rel_name_parts) == 1: # direct child of module_utils, just search the top-level dirs we were given paths = self._mu_paths else: # a nested submodule of module_utils, extend the paths given with the intermediate package names paths = [os.path.join(p, *rel_name_parts[:-1]) for p in self._mu_paths] # extend the MU paths with the relative bit # find_spec needs the full module name self._info = info = importlib.machinery.PathFinder.find_spec('.'.join(name_parts), paths) if info is not None and os.path.splitext(info.origin)[1] in importlib.machinery.SOURCE_SUFFIXES: self.is_package = info.origin.endswith('/__init__.py') path = info.origin else: return False self.source_code = _slurp(path) return True class CollectionModuleUtilLocator(ModuleUtilLocatorBase): def __init__(self, fq_name_parts, is_ambiguous=False, child_is_redirected=False, is_optional=False): super(CollectionModuleUtilLocator, self).__init__(fq_name_parts, is_ambiguous, child_is_redirected, is_optional) if fq_name_parts[0] != 'ansible_collections': raise Exception('CollectionModuleUtilLocator can only locate from ansible_collections, got {0}'.format(fq_name_parts)) elif len(fq_name_parts) >= 6 and fq_name_parts[3:5] != ('plugins', 'module_utils'): raise Exception('CollectionModuleUtilLocator can only locate below ansible_collections.(ns).(coll).plugins.module_utils, got {0}' .format(fq_name_parts)) self._collection_name = '.'.join(fq_name_parts[1:3]) self._locate() def _find_module(self, name_parts): # synthesize empty inits for packages down through module_utils- we don't want to allow those to be shipped over, but the # package hierarchy needs to exist if len(name_parts) < 6: self.source_code = '' self.is_package = True return True # NB: we can't use pkgutil.get_data safely here, since we don't want to import/execute package/module code on # the controller while analyzing/assembling the module, so we'll have to manually import the collection's # Python package to locate it (import root collection, reassemble resource path beneath, fetch source) collection_pkg_name = '.'.join(name_parts[0:3]) resource_base_path = os.path.join(*name_parts[3:]) src = None # look for package_dir first, then module try: src = pkgutil.get_data(collection_pkg_name, to_native(os.path.join(resource_base_path, '__init__.py'))) except ImportError: pass # TODO: we might want to synthesize fake inits for py3-style packages, for now they're required beneath module_utils if src is not None: # empty string is OK self.is_package = True else: try: src = pkgutil.get_data(collection_pkg_name, to_native(resource_base_path + '.py')) except ImportError: pass if src is None: # empty string is OK return False self.source_code = src return True def _get_module_utils_remainder_parts(self, name_parts): return name_parts[5:] # eg, foo.bar for ansible_collections.ns.coll.plugins.module_utils.foo.bar def 
_make_zinfo(filename, date_time, zf=None): zinfo = zipfile.ZipInfo( filename=filename, date_time=date_time ) if zf: zinfo.compress_type = zf.compression return zinfo def recursive_finder(name, module_fqn, module_data, zf, date_time=None): """ Using ModuleDepFinder, make sure we have all of the module_utils files that the module and its module_utils files need (no longer actually recursive). :arg name: Name of the python module we're examining :arg module_fqn: Fully qualified name of the python module we're scanning :arg module_data: string Python code of the module we're scanning :arg zf: An open :python:class:`zipfile.ZipFile` object that holds the Ansible module payload which we're assembling """ if date_time is None: date_time = time.gmtime()[:6] # py_module_cache maps python module names to a tuple of the code in the module # and the pathname to the module. # Here we pre-load it with modules which we create without bothering to # read from actual files (In some cases, these need to differ from what ansible # ships because they're namespace packages in the module) # FIXME: do we actually want ns pkg behavior for these? Seems like they should just be forced to emptyish pkg stubs py_module_cache = { ('ansible',): ( b'from pkgutil import extend_path\n' b'__path__=extend_path(__path__,__name__)\n' b'__version__="' + to_bytes(__version__) + b'"\n__author__="' + to_bytes(__author__) + b'"\n', 'ansible/__init__.py'), ('ansible', 'module_utils'): ( b'from pkgutil import extend_path\n' b'__path__=extend_path(__path__,__name__)\n', 'ansible/module_utils/__init__.py')} module_utils_paths = [p for p in module_utils_loader._get_paths(subdirs=False) if os.path.isdir(p)] module_utils_paths.append(_MODULE_UTILS_PATH) # Parse the module code and find the imports of ansible.module_utils try: tree = compile(module_data, '<unknown>', 'exec', ast.PyCF_ONLY_AST) except (SyntaxError, IndentationError) as e: raise AnsibleError("Unable to import %s due to %s" % (name, e.msg)) finder = ModuleDepFinder(module_fqn, tree) # the format of this set is a tuple of the module name and whether or not the import is ambiguous as a module name # or an attribute of a module (eg from x.y import z <-- is z a module or an attribute of x.y?)
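# Illustratively, 'from ansible.module_utils.urls import fetch_url' arrives here as # ('ansible', 'module_utils', 'urls', 'fetch_url') with the ambiguous flag set, since # fetch_url could name either a submodule or an attribute of urls.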
modules_to_process = [ModuleUtilsProcessEntry(m, True, False, is_optional=m in finder.optional_imports) for m in finder.submodules] # HACK: basic is currently always required since module global init is currently tied up with AnsiballZ arg input modules_to_process.append(ModuleUtilsProcessEntry(('ansible', 'module_utils', 'basic'), False, False, is_optional=False)) # we'll be adding new modules inline as we discover them, so just keep going until we've processed them all while modules_to_process: modules_to_process.sort() # not strictly necessary, but nice to process things in predictable and repeatable order py_module_name, is_ambiguous, child_is_redirected, is_optional = modules_to_process.pop(0) if py_module_name in py_module_cache: # this is normal; we'll often see the same module imported many times, but we only need to process it once continue if py_module_name[0:2] == ('ansible', 'module_utils'): module_info = LegacyModuleUtilLocator(py_module_name, is_ambiguous=is_ambiguous, mu_paths=module_utils_paths, child_is_redirected=child_is_redirected) elif py_module_name[0] == 'ansible_collections': module_info = CollectionModuleUtilLocator(py_module_name, is_ambiguous=is_ambiguous, child_is_redirected=child_is_redirected, is_optional=is_optional) else: # FIXME: dot-joined result display.warning('ModuleDepFinder improperly found a non-module_utils import %s' % [py_module_name]) continue # Could not find the module. Construct a helpful error message. if not module_info.found: if is_optional: # this was a best-effort optional import that we couldn't find, oh well, move along... continue # FIXME: use dot-joined candidate names msg = 'Could not find imported module support code for {0}. Looked for ({1})'.format(module_fqn, module_info.candidate_names_joined) raise AnsibleError(msg) # check the cache one more time with the module we actually found, since the name could be different than the input # eg, imported name vs module if module_info.fq_name_parts in py_module_cache: continue # compile the source, process all relevant imported modules try: tree = compile(module_info.source_code, '<unknown>', 'exec', ast.PyCF_ONLY_AST) except (SyntaxError, IndentationError) as e: raise AnsibleError("Unable to import %s due to %s" % (module_info.fq_name_parts, e.msg)) finder = ModuleDepFinder('.'.join(module_info.fq_name_parts), tree, module_info.is_package) modules_to_process.extend(ModuleUtilsProcessEntry(m, True, False, is_optional=m in finder.optional_imports) for m in finder.submodules if m not in py_module_cache) # we've processed this item, add it to the output list py_module_cache[module_info.fq_name_parts] = (module_info.source_code, module_info.output_path) # ensure we process all ancestor package inits accumulated_pkg_name = [] for pkg in module_info.fq_name_parts[:-1]: accumulated_pkg_name.append(pkg) # we're accumulating this across iterations normalized_name = tuple(accumulated_pkg_name) # extra machinations to get a hashable type (list is not) if normalized_name not in py_module_cache: modules_to_process.append(ModuleUtilsProcessEntry(normalized_name, False, module_info.redirected, is_optional=is_optional)) for py_module_name in py_module_cache: py_module_file_name = py_module_cache[py_module_name][1] zf.writestr( _make_zinfo(py_module_file_name, date_time, zf=zf), py_module_cache[py_module_name][0] ) mu_file = to_text(py_module_file_name, errors='surrogate_or_strict') display.vvvvv("Including module_utils file %s" % mu_file) def _is_binary(b_module_data): textchars = bytearray(set([7, 8, 9, 10, 12,
13, 27]) | set(range(0x20, 0x100)) - set([0x7f])) start = b_module_data[:1024] return bool(start.translate(None, textchars)) def _get_ansible_module_fqn(module_path): """ Get the fully qualified name for an ansible module based on its pathname. remote_module_fqn is the fully qualified name, like ansible.modules.system.ping or ansible_collections.Namespace.Collection_name.plugins.modules.ping .. warning:: This function is for ansible modules only. It won't work for other things (non-module plugins, etc) """ remote_module_fqn = None # Is this a core module? match = CORE_LIBRARY_PATH_RE.search(module_path) if not match: # Is this a module in a collection? match = COLLECTION_PATH_RE.search(module_path) # We can tell the FQN for core modules and collection modules if match: path = match.group('path') if '.' in path: # FQNs must be valid as python identifiers. This sanity check has failed. # we could check other things as well raise ValueError('Module name (or path) was not a valid python identifier') remote_module_fqn = '.'.join(path.split('/')) else: # Currently we do not handle modules in roles so we can end up here for that reason raise ValueError("Unable to determine module's fully qualified name") return remote_module_fqn def _add_module_to_zip(zf, date_time, remote_module_fqn, b_module_data): """Add a module from ansible or from an ansible collection into the module zip""" module_path_parts = remote_module_fqn.split('.') # Write the module module_path = '/'.join(module_path_parts) + '.py' zf.writestr( _make_zinfo(module_path, date_time, zf=zf), b_module_data ) # Write the __init__.py files necessary to get there if module_path_parts[0] == 'ansible': # The ansible namespace is set up as part of the module_utils setup... start = 2 existing_paths = frozenset() else: # ... but ansible_collections and other toplevels are not start = 1 existing_paths = frozenset(zf.namelist()) for idx in range(start, len(module_path_parts)): package_path = '/'.join(module_path_parts[:idx]) + '/__init__.py' # If a collections module uses module_utils from a collection then most packages will have already been added by recursive_finder. if package_path in existing_paths: continue # Note: We don't want to include more than one ansible module in a payload at this time # so no need to fill the __init__.py with namespace code zf.writestr( _make_zinfo(package_path, date_time, zf=zf), b'' ) def _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression, async_timeout, become, become_method, become_user, become_password, become_flags, environment, remote_is_local=False): """ Given the source of the module, convert it to a Jinja2 template to insert module code and return whether it's a new or old style module. """ module_substyle = module_style = 'old' # module_style is something important to calling code (ActionBase). It # determines how arguments are formatted (json vs k=v) and whether # a separate arguments file needs to be sent over the wire. # module_substyle is extra information that's useful internally. It tells # us what we have to look for to substitute in the module files and whether # we're using module replacer or ansiballz to format the module itself.
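# The checks below run in order: binary data -> binary; the REPLACER marker or new-style # imports -> new/python (module replacer vs ansiballz); REPLACER_WINDOWS or the # '#Requires'/'#AnsibleRequires' sentinels -> new/powershell; REPLACER_JSONARGS -> # new/jsonargs; a WANT_JSON marker -> non_native_want_json; anything else stays old-style.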
if _is_binary(b_module_data): module_substyle = module_style = 'binary' elif REPLACER in b_module_data: # Do REPLACER before from ansible.module_utils because we need to make sure # we substitute "from ansible.module_utils.basic import *" for REPLACER module_style = 'new' module_substyle = 'python' b_module_data = b_module_data.replace(REPLACER, b'from ansible.module_utils.basic import *') elif NEW_STYLE_PYTHON_MODULE_RE.search(b_module_data): module_style = 'new' module_substyle = 'python' elif REPLACER_WINDOWS in b_module_data: module_style = 'new' module_substyle = 'powershell' b_module_data = b_module_data.replace(REPLACER_WINDOWS, b'#Requires -Module Ansible.ModuleUtils.Legacy') elif re.search(b'#Requires -Module', b_module_data, re.IGNORECASE) \ or re.search(b'#Requires -Version', b_module_data, re.IGNORECASE)\ or re.search(b'#AnsibleRequires -OSVersion', b_module_data, re.IGNORECASE) \ or re.search(b'#AnsibleRequires -Powershell', b_module_data, re.IGNORECASE) \ or re.search(b'#AnsibleRequires -CSharpUtil', b_module_data, re.IGNORECASE): module_style = 'new' module_substyle = 'powershell' elif REPLACER_JSONARGS in b_module_data: module_style = 'new' module_substyle = 'jsonargs' elif b'WANT_JSON' in b_module_data: module_substyle = module_style = 'non_native_want_json' shebang = None # Neither old-style, non_native_want_json nor binary modules should be modified # except for the shebang line (Done by modify_module) if module_style in ('old', 'non_native_want_json', 'binary'): return b_module_data, module_style, shebang output = BytesIO() try: remote_module_fqn = _get_ansible_module_fqn(module_path) except ValueError: # Modules in roles currently are not found by the fqn heuristic so we # fall back to this. This means that relative imports inside a module from # a role may fail. Absolute imports should be used for future-proofing. # People should start writing collections instead of modules in roles so we # may never fix this display.debug('ANSIBALLZ: Could not determine module FQN') remote_module_fqn = 'ansible.modules.%s' % module_name if module_substyle == 'python': date_time = time.gmtime()[:6] if date_time[0] < 1980: date_string = datetime.datetime(*date_time, tzinfo=datetime.timezone.utc).strftime('%c') raise AnsibleError(f'Cannot create zipfile due to pre-1980 configured date: {date_string}') params = dict(ANSIBLE_MODULE_ARGS=module_args,) try: python_repred_params = repr(json.dumps(params, cls=AnsibleJSONEncoder, vault_to_text=True)) except TypeError as e: raise AnsibleError("Unable to pass options to module, they must be JSON serializable: %s" % to_native(e)) try: compression_method = getattr(zipfile, module_compression) except AttributeError: display.warning(u'Bad module compression string specified: %s.
Using ZIP_STORED (no compression)' % module_compression) compression_method = zipfile.ZIP_STORED lookup_path = os.path.join(C.DEFAULT_LOCAL_TMP, 'ansiballz_cache') cached_module_filename = os.path.join(lookup_path, "%s-%s" % (remote_module_fqn, module_compression)) zipdata = None # Optimization -- don't lock if the module has already been cached if os.path.exists(cached_module_filename): display.debug('ANSIBALLZ: using cached module: %s' % cached_module_filename) with open(cached_module_filename, 'rb') as module_data: zipdata = module_data.read() else: if module_name in action_write_locks.action_write_locks: display.debug('ANSIBALLZ: Using lock for %s' % module_name) lock = action_write_locks.action_write_locks[module_name] else: # If the action plugin directly invokes the module (instead of # going through a strategy) then we don't have a cross-process # Lock specifically for this module. Use the "unexpected # module" lock instead display.debug('ANSIBALLZ: Using generic lock for %s' % module_name) lock = action_write_locks.action_write_locks[None] display.debug('ANSIBALLZ: Acquiring lock') with lock: display.debug('ANSIBALLZ: Lock acquired: %s' % id(lock)) # Check that no other process has created this while we were # waiting for the lock if not os.path.exists(cached_module_filename): display.debug('ANSIBALLZ: Creating module') # Create the module zip data zipoutput = BytesIO() zf = zipfile.ZipFile(zipoutput, mode='w', compression=compression_method) # walk the module imports, looking for module_utils to send- they'll be added to the zipfile recursive_finder(module_name, remote_module_fqn, b_module_data, zf, date_time) display.debug('ANSIBALLZ: Writing module into payload') _add_module_to_zip(zf, date_time, remote_module_fqn, b_module_data) zf.close() zipdata = base64.b64encode(zipoutput.getvalue()) # Write the assembled module to a temp file (write to temp # so that no one looking for the file reads a partially # written file) # # FIXME: Once split controller/remote is merged, this can be simplified to # os.makedirs(lookup_path, exist_ok=True) if not os.path.exists(lookup_path): try: # Note -- if we have a global function to setup, that would # be a better place to run this os.makedirs(lookup_path) except OSError: # Multiple processes tried to create the directory. If it still does not # exist, raise the original exception. if not os.path.exists(lookup_path): raise display.debug('ANSIBALLZ: Writing module') with open(cached_module_filename + '-part', 'wb') as f: f.write(zipdata) # Rename the file into its final position in the cache so # future users of this module can read it off the # filesystem instead of constructing from scratch. display.debug('ANSIBALLZ: Renaming module') os.rename(cached_module_filename + '-part', cached_module_filename) display.debug('ANSIBALLZ: Done creating module') if zipdata is None: display.debug('ANSIBALLZ: Reading module after lock') # Another process wrote the file while we were waiting for # the write lock. Go ahead and read the data from disk # instead of re-creating it. try: with open(cached_module_filename, 'rb') as f: zipdata = f.read() except IOError: raise AnsibleError('A different worker process failed to create module file. 
' 'Look at traceback for that process for debugging information.') zipdata = to_text(zipdata, errors='surrogate_or_strict') o_interpreter, o_args = _extract_interpreter(b_module_data) if o_interpreter is None: o_interpreter = u'/usr/bin/python' shebang, interpreter = _get_shebang(o_interpreter, task_vars, templar, o_args, remote_is_local=remote_is_local) # FUTURE: the module cache entry should be invalidated if we got this value from a host-dependent source rlimit_nofile = C.config.get_config_value('PYTHON_MODULE_RLIMIT_NOFILE', variables=task_vars) if not isinstance(rlimit_nofile, int): rlimit_nofile = int(templar.template(rlimit_nofile)) if rlimit_nofile: rlimit = ANSIBALLZ_RLIMIT_TEMPLATE % dict( rlimit_nofile=rlimit_nofile, ) else: rlimit = '' coverage_config = os.environ.get('_ANSIBLE_COVERAGE_CONFIG') if coverage_config: coverage_output = os.environ['_ANSIBLE_COVERAGE_OUTPUT'] if coverage_output: # Enable code coverage analysis of the module. # This feature is for internal testing and may change without notice. coverage = ANSIBALLZ_COVERAGE_TEMPLATE % dict( coverage_config=coverage_config, coverage_output=coverage_output, ) else: # Verify coverage is available without importing it. # This will detect when a module would fail with coverage enabled with minimal overhead. coverage = ANSIBALLZ_COVERAGE_CHECK_TEMPLATE else: coverage = '' output.write(to_bytes(ACTIVE_ANSIBALLZ_TEMPLATE % dict( zipdata=zipdata, ansible_module=module_name, module_fqn=remote_module_fqn, params=python_repred_params, shebang=shebang, coding=ENCODING_STRING, date_time=date_time, coverage=coverage, rlimit=rlimit, ))) b_module_data = output.getvalue() elif module_substyle == 'powershell': # Powershell/winrm don't actually make use of shebang so we can # safely set this here. If we let the fallback code handle this # it can fail in the presence of the UTF8 BOM commonly added by # Windows text editors shebang = u'#!powershell' # create the common exec wrapper payload and set that as the module_data # bytes b_module_data = ps_manifest._create_powershell_wrapper( b_module_data, module_path, module_args, environment, async_timeout, become, become_method, become_user, become_password, become_flags, module_substyle, task_vars, remote_module_fqn ) elif module_substyle == 'jsonargs': module_args_json = to_bytes(json.dumps(module_args, cls=AnsibleJSONEncoder, vault_to_text=True)) # these strings could be included in a third-party module but # officially they were included in the 'basic' snippet for new-style # python modules (which has been replaced with something else in # ansiballz) If we remove them from jsonargs-style module replacer # then we can remove them everywhere. python_repred_args = to_bytes(repr(module_args_json)) b_module_data = b_module_data.replace(REPLACER_VERSION, to_bytes(repr(__version__))) b_module_data = b_module_data.replace(REPLACER_COMPLEX, python_repred_args) b_module_data = b_module_data.replace(REPLACER_SELINUX, to_bytes(','.join(C.DEFAULT_SELINUX_SPECIAL_FS))) # The main event -- substitute the JSON args string into the module b_module_data = b_module_data.replace(REPLACER_JSONARGS, module_args_json) facility = b'syslog.' 
+ to_bytes(task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY), errors='surrogate_or_strict') b_module_data = b_module_data.replace(b'syslog.LOG_USER', facility) return (b_module_data, module_style, shebang) def _extract_interpreter(b_module_data): """ Used to extract shebang expression from binary module data and return a text string with the shebang, or None if no shebang is detected. """ interpreter = None args = [] b_lines = b_module_data.split(b"\n", 1) if b_lines[0].startswith(b"#!"): b_shebang = b_lines[0].strip() # shlex.split needs text on Python 3 cli_split = shlex.split(to_text(b_shebang[2:], errors='surrogate_or_strict')) # convert args to text cli_split = [to_text(a, errors='surrogate_or_strict') for a in cli_split] interpreter = cli_split[0] args = cli_split[1:] return interpreter, args def modify_module(module_name, module_path, module_args, templar, task_vars=None, module_compression='ZIP_STORED', async_timeout=0, become=False, become_method=None, become_user=None, become_password=None, become_flags=None, environment=None, remote_is_local=False): """ Used to insert chunks of code into modules before transfer rather than doing regular python imports. This allows for more efficient transfer in a non-bootstrapping scenario by not moving extra files over the wire and also takes care of embedding arguments in the transferred modules. This version is done in such a way that local imports can still be used in the module code, so IDEs don't have to be aware of what is going on. Example: from ansible.module_utils.basic import * ... will result in the insertion of basic.py into the module from the module_utils/ directory in the source tree. For powershell, this code effectively no-ops, as the exec wrapper requires access to a number of properties not available here. """ task_vars = {} if task_vars is None else task_vars environment = {} if environment is None else environment with open(module_path, 'rb') as f: # read in the module source b_module_data = f.read() (b_module_data, module_style, shebang) = _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression, async_timeout=async_timeout, become=become, become_method=become_method, become_user=become_user, become_password=become_password, become_flags=become_flags, environment=environment, remote_is_local=remote_is_local) if module_style == 'binary': return (b_module_data, module_style, to_text(shebang, nonstring='passthru')) elif shebang is None: interpreter, args = _extract_interpreter(b_module_data) # No interpreter/shebang, assume a binary module? if interpreter is not None: shebang, new_interpreter = _get_shebang(interpreter, task_vars, templar, args, remote_is_local=remote_is_local) # update shebang b_lines = b_module_data.split(b"\n", 1) if interpreter != new_interpreter: b_lines[0] = to_bytes(shebang, errors='surrogate_or_strict', nonstring='passthru') if os.path.basename(interpreter).startswith(u'python'): b_lines.insert(1, b_ENCODING_STRING) b_module_data = b"\n".join(b_lines) return (b_module_data, module_style, shebang) def get_action_args_with_defaults(action, args, defaults, templar, action_groups=None): # Get the list of groups that contain this action if action_groups is None: msg = ( "Finding module_defaults for action %s. " "The caller has not passed the action_groups, so any " "that may include this action will be ignored." 
) display.warning(msg=msg) group_names = [] else: group_names = action_groups.get(action, []) tmp_args = {} module_defaults = {} # Merge latest defaults into dict, since they are a list of dicts if isinstance(defaults, list): for default in defaults: module_defaults.update(default) # module_defaults keys are static, but the values may be templated module_defaults = templar.template(module_defaults) for default in module_defaults: if default.startswith('group/'): group_name = default.split('group/')[-1] if group_name in group_names: tmp_args.update((module_defaults.get('group/%s' % group_name) or {}).copy()) # handle specific action defaults tmp_args.update(module_defaults.get(action, {}).copy()) # direct args override all tmp_args.update(args) return tmp_args ansible-core-2.16.3/lib/ansible/executor/play_iterator.py0000644000000000000000000007345314556006441022176 0ustar00rootroot# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import fnmatch from enum import IntEnum, IntFlag from ansible import constants as C from ansible.errors import AnsibleAssertionError from ansible.module_utils.parsing.convert_bool import boolean from ansible.playbook.block import Block from ansible.playbook.task import Task from ansible.utils.display import Display display = Display() __all__ = ['PlayIterator', 'IteratingStates', 'FailedStates'] class IteratingStates(IntEnum): SETUP = 0 TASKS = 1 RESCUE = 2 ALWAYS = 3 HANDLERS = 4 COMPLETE = 5 class FailedStates(IntFlag): NONE = 0 SETUP = 1 TASKS = 2 RESCUE = 4 ALWAYS = 8 HANDLERS = 16 # NOTE not in use anymore class HostState: def __init__(self, blocks): self._blocks = blocks[:] self.handlers = [] self.handler_notifications = [] self.cur_block = 0 self.cur_regular_task = 0 self.cur_rescue_task = 0 self.cur_always_task = 0 self.cur_handlers_task = 0 self.run_state = IteratingStates.SETUP self.fail_state = FailedStates.NONE self.pre_flushing_run_state = None self.update_handlers = True self.pending_setup = False self.tasks_child_state = None self.rescue_child_state = None self.always_child_state = None self.did_rescue = False self.did_start_at_task = False def __repr__(self): return "HostState(%r)" % self._blocks def __str__(self): return ("HOST STATE: block=%d, task=%d, rescue=%d, always=%d, handlers=%d, run_state=%s, fail_state=%s, " "pre_flushing_run_state=%s, update_handlers=%s, pending_setup=%s, " "tasks child state? (%s), rescue child state? (%s), always child state? (%s), " "did rescue? %s, did start at task?
%s" % ( self.cur_block, self.cur_regular_task, self.cur_rescue_task, self.cur_always_task, self.cur_handlers_task, self.run_state, self.fail_state, self.pre_flushing_run_state, self.update_handlers, self.pending_setup, self.tasks_child_state, self.rescue_child_state, self.always_child_state, self.did_rescue, self.did_start_at_task, )) def __eq__(self, other): if not isinstance(other, HostState): return False for attr in ('_blocks', 'cur_block', 'cur_regular_task', 'cur_rescue_task', 'cur_always_task', 'cur_handlers_task', 'run_state', 'fail_state', 'pre_flushing_run_state', 'update_handlers', 'pending_setup', 'tasks_child_state', 'rescue_child_state', 'always_child_state'): if getattr(self, attr) != getattr(other, attr): return False return True def get_current_block(self): return self._blocks[self.cur_block] def copy(self): new_state = HostState(self._blocks) new_state.handlers = self.handlers[:] new_state.handler_notifications = self.handler_notifications[:] new_state.cur_block = self.cur_block new_state.cur_regular_task = self.cur_regular_task new_state.cur_rescue_task = self.cur_rescue_task new_state.cur_always_task = self.cur_always_task new_state.cur_handlers_task = self.cur_handlers_task new_state.run_state = self.run_state new_state.fail_state = self.fail_state new_state.pre_flushing_run_state = self.pre_flushing_run_state new_state.update_handlers = self.update_handlers new_state.pending_setup = self.pending_setup new_state.did_rescue = self.did_rescue new_state.did_start_at_task = self.did_start_at_task if self.tasks_child_state is not None: new_state.tasks_child_state = self.tasks_child_state.copy() if self.rescue_child_state is not None: new_state.rescue_child_state = self.rescue_child_state.copy() if self.always_child_state is not None: new_state.always_child_state = self.always_child_state.copy() return new_state class PlayIterator: def __init__(self, inventory, play, play_context, variable_manager, all_vars, start_at_done=False): self._play = play self._blocks = [] self._variable_manager = variable_manager setup_block = Block(play=self._play) # Gathering facts with run_once would copy the facts from one host to # the others. 
setup_block.run_once = False setup_task = Task(block=setup_block) setup_task.action = 'gather_facts' # TODO: hardcoded resolution here, but should use actual resolution code in the end, # in case of 'legacy' mismatch setup_task.resolved_action = 'ansible.builtin.gather_facts' setup_task.name = 'Gathering Facts' setup_task.args = {} # Unless play is specifically tagged, gathering should 'always' run if not self._play.tags: setup_task.tags = ['always'] # Default options to gather for option in ('gather_subset', 'gather_timeout', 'fact_path'): value = getattr(self._play, option, None) if value is not None: setup_task.args[option] = value setup_task.set_loader(self._play._loader) # short circuit fact gathering if the entire playbook is conditional if self._play._included_conditional is not None: setup_task.when = self._play._included_conditional[:] setup_block.block = [setup_task] setup_block = setup_block.filter_tagged_tasks(all_vars) self._blocks.append(setup_block) # keep flatten (no blocks) list of all tasks from the play # used for the lockstep mechanism in the linear strategy self.all_tasks = setup_block.get_tasks() for block in self._play.compile(): new_block = block.filter_tagged_tasks(all_vars) if new_block.has_tasks(): self._blocks.append(new_block) self.all_tasks.extend(new_block.get_tasks()) # keep list of all handlers, it is copied into each HostState # at the beginning of IteratingStates.HANDLERS # the copy happens at each flush in order to restore the original # list and remove any included handlers that might not be notified # at the particular flush self.handlers = [h for b in self._play.handlers for h in b.block] self._host_states = {} start_at_matched = False batch = inventory.get_hosts(self._play.hosts, order=self._play.order) self.batch_size = len(batch) for host in batch: self.set_state_for_host(host.name, HostState(blocks=self._blocks)) # if we're looking to start at a specific task, iterate through # the tasks for this host until we find the specified task if play_context.start_at_task is not None and not start_at_done: while True: (s, task) = self.get_next_task_for_host(host, peek=True) if s.run_state == IteratingStates.COMPLETE: break if task.name == play_context.start_at_task or (task.name and fnmatch.fnmatch(task.name, play_context.start_at_task)) or \ task.get_name() == play_context.start_at_task or fnmatch.fnmatch(task.get_name(), play_context.start_at_task): start_at_matched = True break self.set_state_for_host(host.name, s) # finally, reset the host's state to IteratingStates.SETUP if start_at_matched: self._host_states[host.name].did_start_at_task = True self._host_states[host.name].run_state = IteratingStates.SETUP if start_at_matched: # we have our match, so clear the start_at_task field on the # play context to flag that we've started at a task (and future # plays won't try to advance) play_context.start_at_task = None self.end_play = False self.cur_task = 0 def get_host_state(self, host): # Since we're using the PlayIterator to carry forward failed hosts, # in the event that a previous host was not in the current inventory # we create a stub state for it now if host.name not in self._host_states: self.set_state_for_host(host.name, HostState(blocks=[])) return self._host_states[host.name].copy() def get_next_task_for_host(self, host, peek=False): display.debug("getting the next task for host %s" % host.name) s = self.get_host_state(host) task = None if s.run_state == IteratingStates.COMPLETE: display.debug("host %s is done iterating, returning" % host.name) 
return (s, None) (s, task) = self._get_next_task_from_state(s, host=host) if not peek: self.set_state_for_host(host.name, s) display.debug("done getting next task for host %s" % host.name) display.debug(" ^ task is: %s" % task) display.debug(" ^ state is: %s" % s) return (s, task) def _get_next_task_from_state(self, state, host): task = None # try and find the next task, given the current state. while True: # try to get the current block from the list of blocks, and # if we run past the end of the list we know we're done with # this block try: block = state._blocks[state.cur_block] except IndexError: state.run_state = IteratingStates.COMPLETE return (state, None) if state.run_state == IteratingStates.SETUP: # First, we check to see if we were pending setup. If not, this is # the first trip through IteratingStates.SETUP, so we set the pending_setup # flag and try to determine if we do in fact want to gather facts for # the specified host. if not state.pending_setup: state.pending_setup = True # Gather facts if the default is 'smart' and we have not yet # done it for this host; or if 'explicit' and the play sets # gather_facts to True; or if 'implicit' and the play does # NOT explicitly set gather_facts to False. gathering = C.DEFAULT_GATHERING implied = self._play.gather_facts is None or boolean(self._play.gather_facts, strict=False) if (gathering == 'implicit' and implied) or \ (gathering == 'explicit' and boolean(self._play.gather_facts, strict=False)) or \ (gathering == 'smart' and implied and not (self._variable_manager._fact_cache.get(host.name, {}).get('_ansible_facts_gathered', False))): # The setup block is always self._blocks[0], as we inject it # during the play compilation in __init__ above. setup_block = self._blocks[0] if setup_block.has_tasks() and len(setup_block.block) > 0: task = setup_block.block[0] else: # This is the second trip through IteratingStates.SETUP, so we clear # the flag and move onto the next block in the list while setting # the run state to IteratingStates.TASKS state.pending_setup = False state.run_state = IteratingStates.TASKS if not state.did_start_at_task: state.cur_block += 1 state.cur_regular_task = 0 state.cur_rescue_task = 0 state.cur_always_task = 0 state.tasks_child_state = None state.rescue_child_state = None state.always_child_state = None elif state.run_state == IteratingStates.TASKS: # clear the pending setup flag, since we're past that and it didn't fail if state.pending_setup: state.pending_setup = False # First, we check for a child task state that is not failed, and if we # have one recurse into it for the next task. If we're done with the child # state, we clear it and drop back to getting the next task from the list. if state.tasks_child_state: (state.tasks_child_state, task) = self._get_next_task_from_state(state.tasks_child_state, host=host) if self._check_failed_state(state.tasks_child_state): # failed child state, so clear it and move into the rescue portion state.tasks_child_state = None self._set_failed_state(state) else: # get the next task recursively if task is None or state.tasks_child_state.run_state == IteratingStates.COMPLETE: # we're done with the child state, so clear it and continue # back to the top of the loop to get the next task state.tasks_child_state = None continue else: # First here, we check to see if we've failed anywhere down the chain # of states we have, and if so we move onto the rescue portion. Otherwise, # we check to see if we've moved past the end of the list of tasks. 
If so, # we move into the always portion of the block, otherwise we get the next # task from the list. if self._check_failed_state(state): state.run_state = IteratingStates.RESCUE elif state.cur_regular_task >= len(block.block): state.run_state = IteratingStates.ALWAYS else: task = block.block[state.cur_regular_task] # if the current task is actually a child block, create a child # state for us to recurse into on the next pass if isinstance(task, Block): state.tasks_child_state = HostState(blocks=[task]) state.tasks_child_state.run_state = IteratingStates.TASKS # since we've created the child state, clear the task # so we can pick up the child state on the next pass task = None state.cur_regular_task += 1 elif state.run_state == IteratingStates.RESCUE: # The process here is identical to IteratingStates.TASKS, except instead # we move into the always portion of the block. if state.rescue_child_state: (state.rescue_child_state, task) = self._get_next_task_from_state(state.rescue_child_state, host=host) if self._check_failed_state(state.rescue_child_state): state.rescue_child_state = None self._set_failed_state(state) else: if task is None or state.rescue_child_state.run_state == IteratingStates.COMPLETE: state.rescue_child_state = None continue else: if state.fail_state & FailedStates.RESCUE == FailedStates.RESCUE: state.run_state = IteratingStates.ALWAYS elif state.cur_rescue_task >= len(block.rescue): if len(block.rescue) > 0: state.fail_state = FailedStates.NONE state.run_state = IteratingStates.ALWAYS state.did_rescue = True else: task = block.rescue[state.cur_rescue_task] if isinstance(task, Block): state.rescue_child_state = HostState(blocks=[task]) state.rescue_child_state.run_state = IteratingStates.TASKS task = None state.cur_rescue_task += 1 elif state.run_state == IteratingStates.ALWAYS: # And again, the process here is identical to IteratingStates.TASKS, except # instead we either move onto the next block in the list, or we set the # run state to IteratingStates.COMPLETE in the event of any errors, or when we # have hit the end of the list of blocks. 
if state.always_child_state: (state.always_child_state, task) = self._get_next_task_from_state(state.always_child_state, host=host) if self._check_failed_state(state.always_child_state): state.always_child_state = None self._set_failed_state(state) else: if task is None or state.always_child_state.run_state == IteratingStates.COMPLETE: state.always_child_state = None continue else: if state.cur_always_task >= len(block.always): if state.fail_state != FailedStates.NONE: state.run_state = IteratingStates.COMPLETE else: state.cur_block += 1 state.cur_regular_task = 0 state.cur_rescue_task = 0 state.cur_always_task = 0 state.run_state = IteratingStates.TASKS state.tasks_child_state = None state.rescue_child_state = None state.always_child_state = None state.did_rescue = False else: task = block.always[state.cur_always_task] if isinstance(task, Block): state.always_child_state = HostState(blocks=[task]) state.always_child_state.run_state = IteratingStates.TASKS task = None state.cur_always_task += 1 elif state.run_state == IteratingStates.HANDLERS: if state.update_handlers: # reset handlers for HostState since handlers from include_tasks # might be there from previous flush state.handlers = self.handlers[:] state.update_handlers = False state.cur_handlers_task = 0 while True: try: task = state.handlers[state.cur_handlers_task] except IndexError: task = None state.run_state = state.pre_flushing_run_state state.update_handlers = True break else: state.cur_handlers_task += 1 if task.is_host_notified(host): break elif state.run_state == IteratingStates.COMPLETE: return (state, None) # if something above set the task, break out of the loop now if task: break return (state, task) def _set_failed_state(self, state): if state.run_state == IteratingStates.SETUP: state.fail_state |= FailedStates.SETUP state.run_state = IteratingStates.COMPLETE elif state.run_state == IteratingStates.TASKS: if state.tasks_child_state is not None: state.tasks_child_state = self._set_failed_state(state.tasks_child_state) else: state.fail_state |= FailedStates.TASKS if state._blocks[state.cur_block].rescue: state.run_state = IteratingStates.RESCUE elif state._blocks[state.cur_block].always: state.run_state = IteratingStates.ALWAYS else: state.run_state = IteratingStates.COMPLETE elif state.run_state == IteratingStates.RESCUE: if state.rescue_child_state is not None: state.rescue_child_state = self._set_failed_state(state.rescue_child_state) else: state.fail_state |= FailedStates.RESCUE if state._blocks[state.cur_block].always: state.run_state = IteratingStates.ALWAYS else: state.run_state = IteratingStates.COMPLETE elif state.run_state == IteratingStates.ALWAYS: if state.always_child_state is not None: state.always_child_state = self._set_failed_state(state.always_child_state) else: state.fail_state |= FailedStates.ALWAYS state.run_state = IteratingStates.COMPLETE return state def mark_host_failed(self, host): s = self.get_host_state(host) display.debug("marking host %s failed, current state: %s" % (host, s)) if s.run_state == IteratingStates.HANDLERS: # we are failing `meta: flush_handlers`, so just reset the state to whatever # it was before and let `_set_failed_state` figure out the next state s.run_state = s.pre_flushing_run_state s.update_handlers = True s = self._set_failed_state(s) display.debug("^ failed state is now: %s" % s) self.set_state_for_host(host.name, s) self._play._removed_hosts.append(host.name) def get_failed_hosts(self): return dict((host, True) for (host, state) in self._host_states.items() if 
self._check_failed_state(state)) def _check_failed_state(self, state): if state is None: return False elif state.run_state == IteratingStates.RESCUE and self._check_failed_state(state.rescue_child_state): return True elif state.run_state == IteratingStates.ALWAYS and self._check_failed_state(state.always_child_state): return True elif state.fail_state != FailedStates.NONE: if state.run_state == IteratingStates.RESCUE and state.fail_state & FailedStates.RESCUE == 0: return False elif state.run_state == IteratingStates.ALWAYS and state.fail_state & FailedStates.ALWAYS == 0: return False else: return not (state.did_rescue and state.fail_state & FailedStates.ALWAYS == 0) elif state.run_state == IteratingStates.TASKS and self._check_failed_state(state.tasks_child_state): cur_block = state._blocks[state.cur_block] if len(cur_block.rescue) > 0 and state.fail_state & FailedStates.RESCUE == 0: return False else: return True return False def is_failed(self, host): s = self.get_host_state(host) return self._check_failed_state(s) def clear_host_errors(self, host): self._clear_state_errors(self.get_state_for_host(host.name)) def _clear_state_errors(self, state: HostState) -> None: state.fail_state = FailedStates.NONE if state.tasks_child_state is not None: self._clear_state_errors(state.tasks_child_state) elif state.rescue_child_state is not None: self._clear_state_errors(state.rescue_child_state) elif state.always_child_state is not None: self._clear_state_errors(state.always_child_state) def get_active_state(self, state): ''' Finds the active state, recursively if necessary when there are child states. ''' if state.run_state == IteratingStates.TASKS and state.tasks_child_state is not None: return self.get_active_state(state.tasks_child_state) elif state.run_state == IteratingStates.RESCUE and state.rescue_child_state is not None: return self.get_active_state(state.rescue_child_state) elif state.run_state == IteratingStates.ALWAYS and state.always_child_state is not None: return self.get_active_state(state.always_child_state) return state def is_any_block_rescuing(self, state): ''' Given the current HostState state, determines if the current block, or any child blocks, are in rescue mode. 
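The strategy layer uses this to decide whether a failed task will be handled by a rescue section (and so should be counted as rescued rather than failed).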
''' if state.run_state == IteratingStates.TASKS and state.get_current_block().rescue: return True if state.tasks_child_state is not None: return self.is_any_block_rescuing(state.tasks_child_state) if state.rescue_child_state is not None: return self.is_any_block_rescuing(state.rescue_child_state) if state.always_child_state is not None: return self.is_any_block_rescuing(state.always_child_state) return False def _insert_tasks_into_state(self, state, task_list): # if we've failed at all, or if the task list is empty, just return the current state if (state.fail_state != FailedStates.NONE and state.run_state == IteratingStates.TASKS) or not task_list: return state if state.run_state == IteratingStates.TASKS: if state.tasks_child_state: state.tasks_child_state = self._insert_tasks_into_state(state.tasks_child_state, task_list) else: target_block = state._blocks[state.cur_block].copy() before = target_block.block[:state.cur_regular_task] after = target_block.block[state.cur_regular_task:] target_block.block = before + task_list + after state._blocks[state.cur_block] = target_block elif state.run_state == IteratingStates.RESCUE: if state.rescue_child_state: state.rescue_child_state = self._insert_tasks_into_state(state.rescue_child_state, task_list) else: target_block = state._blocks[state.cur_block].copy() before = target_block.rescue[:state.cur_rescue_task] after = target_block.rescue[state.cur_rescue_task:] target_block.rescue = before + task_list + after state._blocks[state.cur_block] = target_block elif state.run_state == IteratingStates.ALWAYS: if state.always_child_state: state.always_child_state = self._insert_tasks_into_state(state.always_child_state, task_list) else: target_block = state._blocks[state.cur_block].copy() before = target_block.always[:state.cur_always_task] after = target_block.always[state.cur_always_task:] target_block.always = before + task_list + after state._blocks[state.cur_block] = target_block elif state.run_state == IteratingStates.HANDLERS: state.handlers[state.cur_handlers_task:state.cur_handlers_task] = [h for b in task_list for h in b.block] return state def add_tasks(self, host, task_list): self.set_state_for_host(host.name, self._insert_tasks_into_state(self.get_host_state(host), task_list)) @property def host_states(self): return self._host_states def get_state_for_host(self, hostname: str) -> HostState: return self._host_states[hostname] def set_state_for_host(self, hostname: str, state: HostState) -> None: if not isinstance(state, HostState): raise AnsibleAssertionError('Expected state to be a HostState but was a %s' % type(state)) self._host_states[hostname] = state def set_run_state_for_host(self, hostname: str, run_state: IteratingStates) -> None: if not isinstance(run_state, IteratingStates): raise AnsibleAssertionError('Expected run_state to be a IteratingStates but was %s' % (type(run_state))) self._host_states[hostname].run_state = run_state def set_fail_state_for_host(self, hostname: str, fail_state: FailedStates) -> None: if not isinstance(fail_state, FailedStates): raise AnsibleAssertionError('Expected fail_state to be a FailedStates but was %s' % (type(fail_state))) self._host_states[hostname].fail_state = fail_state def add_notification(self, hostname: str, notification: str) -> None: # preserve order host_state = self._host_states[hostname] if notification not in host_state.handler_notifications: host_state.handler_notifications.append(notification) def clear_notification(self, hostname: str, notification: str) -> None: 
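"""Remove a single handler notification for the given host; the inverse of add_notification."""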
    def clear_notification(self, hostname: str, notification: str) -> None:
        self._host_states[hostname].handler_notifications.remove(notification)
ansible-core-2.16.3/lib/ansible/executor/playbook_executor.py0000644000000000000000000003537214556006441023054 0ustar00rootroot# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os

from ansible import constants as C
from ansible import context
from ansible.executor.task_queue_manager import TaskQueueManager, AnsibleEndPlay
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.loader import become_loader, connection_loader, shell_loader
from ansible.playbook import Playbook
from ansible.template import Templar
from ansible.utils.helpers import pct_to_int
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.path import makedirs_safe
from ansible.utils.ssh_functions import set_default_transport
from ansible.utils.display import Display

display = Display()


class PlaybookExecutor:

    '''
    This is the primary class for executing playbooks, and thus the
    basis for bin/ansible-playbook operation.
    '''

    def __init__(self, playbooks, inventory, variable_manager, loader, passwords):
        self._playbooks = playbooks
        self._inventory = inventory
        self._variable_manager = variable_manager
        self._loader = loader
        self.passwords = passwords
        self._unreachable_hosts = dict()

        if context.CLIARGS.get('listhosts') or context.CLIARGS.get('listtasks') or \
                context.CLIARGS.get('listtags') or context.CLIARGS.get('syntax'):
            self._tqm = None
        else:
            self._tqm = TaskQueueManager(
                inventory=inventory,
                variable_manager=variable_manager,
                loader=loader,
                passwords=self.passwords,
                forks=context.CLIARGS.get('forks'),
            )

        # Note: We run this here to cache whether the default ansible ssh
        # executable supports control persist. Sometime in the future we may
        # need to enhance this to check that ansible_ssh_executable specified
        # in inventory is also cached. We can't do this caching at the point
        # where it is used (in task_executor) because that is post-fork and
        # therefore would be discarded after every task.
        set_default_transport()
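    # Illustrative sketch, not part of the upstream source: minimal wiring of a
    # PlaybookExecutor, roughly what bin/ansible-playbook sets up. The inventory
    # source 'hosts.ini', the playbook 'site.yml', and prior initialisation of
    # context.CLIARGS (as the CLI performs) are assumptions for the example.
    #
    #   from ansible.parsing.dataloader import DataLoader
    #   from ansible.inventory.manager import InventoryManager
    #   from ansible.vars.manager import VariableManager
    #
    #   loader = DataLoader()
    #   inventory = InventoryManager(loader=loader, sources=['hosts.ini'])
    #   variable_manager = VariableManager(loader=loader, inventory=inventory)
    #   pbex = PlaybookExecutor(playbooks=['site.yml'], inventory=inventory,
    #                           variable_manager=variable_manager, loader=loader,
    #                           passwords={})
    #   rc = pbex.run()  # 0 on success, non-zero on failure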
    def run(self):
        '''
        Run the given playbook, based on the settings in the play which
        may limit the runs to serialized groups, etc.
        '''
        result = 0
        entrylist = []
        entry = {}
        try:
            # preload become/connection/shell to set config defs cached
            list(connection_loader.all(class_only=True))
            list(shell_loader.all(class_only=True))
            list(become_loader.all(class_only=True))

            for playbook in self._playbooks:

                # deal with FQCN
                resource = _get_collection_playbook_path(playbook)
                if resource is not None:
                    playbook_path = resource[1]
                    playbook_collection = resource[2]
                else:
                    playbook_path = playbook
                    # not fqcn, but might still be collection playbook
                    playbook_collection = _get_collection_name_from_path(playbook)

                if playbook_collection:
                    display.v("running playbook inside collection {0}".format(playbook_collection))
                    AnsibleCollectionConfig.default_collection = playbook_collection
                else:
                    AnsibleCollectionConfig.default_collection = None

                pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
                # FIXME: move out of inventory
                self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))

                if self._tqm is None:  # we are doing a listing
                    entry = {'playbook': playbook_path}
                    entry['plays'] = []
                else:
                    # make sure the tqm has callbacks loaded
                    self._tqm.load_callbacks()
                    self._tqm.send_callback('v2_playbook_on_start', pb)

                i = 1
                plays = pb.get_plays()
                display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))

                for play in plays:
                    if play._included_path is not None:
                        self._loader.set_basedir(play._included_path)
                    else:
                        self._loader.set_basedir(pb._basedir)

                    # clear any filters which may have been applied to the inventory
                    self._inventory.remove_restriction()

                    # Allow variables to be used in vars_prompt fields.
                    all_vars = self._variable_manager.get_vars(play=play)
                    templar = Templar(loader=self._loader, variables=all_vars)
                    setattr(play, 'vars_prompt', templar.template(play.vars_prompt))

                    # FIXME: this should be a play 'sub object' like loop_control
                    if play.vars_prompt:
                        for var in play.vars_prompt:
                            vname = var['name']
                            prompt = var.get("prompt", vname)
                            default = var.get("default", None)
                            private = boolean(var.get("private", True))
                            confirm = boolean(var.get("confirm", False))
                            encrypt = var.get("encrypt", None)
                            salt_size = var.get("salt_size", None)
                            salt = var.get("salt", None)
                            unsafe = boolean(var.get("unsafe", False))

                            if vname not in self._variable_manager.extra_vars:
                                if self._tqm:
                                    self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt,
                                                            default, unsafe)
                                    play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
                                else:  # we are either in --list-